[JUJU-3894] Forward port some fixes from 2.9 to master (#870)
* Add kubernetes as supported series as per juju/core/series/supported.go
Fixes #865
* Add example local charm to test and reproduce #865
Unfortunately we can't write an integration test that uses this charm,
because our tests run on lxd: the charm would deploy but never actually
reach 'active'. We could test the deployment itself (that it doesn't
error), but then on 3.0 the deploy would actually fail on lxd, because
the series is kubernetes, so the test would be invalid anyway.
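For reference, a minimal sketch of what such a charm's metadata.yaml
might declare (the charm name and fields here are illustrative, not
the actual test charm):

    # Hypothetical metadata.yaml, assuming the legacy series-based format:
    name: example-k8s-charm
    summary: Reproduces #865
    series:
      - kubernetes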
* Fix bug in Type.from_json() parsing simple entries
Should fix #850 and #851
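A minimal sketch of the kind of parsing involved (the names here are
illustrative, not the actual Type.from_json() implementation):

    # Hypothetical sketch of parsing an "assumes" tree, where simple
    # entries are plain strings and composite entries are dicts such as
    # {"any-of": [...]} or {"all-of": [...]}.
    def parse_assumes(entry):
        if isinstance(entry, str):
            # Simple entry, e.g. "juju" -- the case the fixed
            # Type.from_json() was mishandling.
            return ("feature", entry)
        if isinstance(entry, dict):
            (op, children), = entry.items()  # "any-of" or "all-of"
            return (op, [parse_assumes(child) for child in children])
        raise ValueError(f"unrecognized assumes entry: {entry!r}")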
* Add integration test for simple assumes expression
This is not the ideal test because it depends on the upstream charm
having the simple
assumes:
- juju
expression. However, simulating the bug in facade.py for parsing such
expressions is non-trivial: we'd need a test that calls something like
CharmsFacade.CharmInfo() to trigger parsing the metadata (which is
where the reported bug actually fails). A local charm wouldn't work
because we handle the metadata locally (without going through anything
in facade.py, where the bug is located). Maybe we could manually call
AddCharm in the test for a local charm and then manually call
CharmInfo with the returned url, as sketched below.
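A rough sketch of that idea, assuming python-libjuju's generated
facade wrappers (the exact calls and arguments are assumptions, not a
tested recipe):

    # Hypothetical test sketch: upload a local charm without deploying
    # it, then request CharmInfo so the server-side metadata (and
    # assumes) parsing is exercised.
    from juju.client import client
    from juju.model import Model

    async def check_assumes_parsing(charm_dir):
        model = Model()
        await model.connect()
        try:
            # Assumed to return the uploaded charm's URL.
            charm_url = await model.add_local_charm_dir(charm_dir, "focal")
            facade = client.CharmsFacade.from_connection(model.connection())
            # CharmInfo triggers the parsing that failed in the bug.
            info = await facade.CharmInfo(url=charm_url)
            print(info)
        finally:
            await model.disconnect()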
* Fix wait_for_units flag to not block when enough units are ready
wait_for_idle will keep waiting if there are fewer units available
than requested (via the wait_for_units flag). However, if enough units
(more than or equal to wait_for_units) are already in the desired
status, it shouldn't block waiting for the remaining not-yet-available
units to reach that state as well. See the sketch after this note.
Fixes #837
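A minimal sketch of the intended check (names are illustrative, not
the actual wait_for_idle internals):

    # Hypothetical readiness check: stop waiting as soon as at least
    # wait_for_units units have reached the desired status.
    def enough_units_ready(units, desired_status, wait_for_units):
        ready = [u for u in units if u.workload_status == desired_status]
        return len(ready) >= wait_for_units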
* Add integration test for wait_for_units in wait_for_idle
* Fix failing wait_for_idle test
As per discussion in
#841 (comment)
Should fix #837
* Remove accidental printf for debugging
* Small patch for wait_for_idle
* Fix wait_for_exact_units=0 case
* Fix logical typo
* Fix merge resolve error for parsing assumes
* Fix base channel discovery for local charms
`utils.get_local_charm_base()` was incorrectly using the `--channel`
argument (the charm's channel) to discover the channel part of the
base. (We should stop using the word 'channel' for two different
things.)
This fixes that by removing the incorrect part of the code; see the
sketch below for the intended distinction.
Should fix #839
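A minimal sketch of that distinction, with illustrative names (not the
actual utils.get_local_charm_base implementation):

    # Hypothetical sketch: the base channel must come from the charm's
    # series (e.g. "focal" -> "20.04/stable"), never from the --channel
    # argument, which names the charm's own channel (e.g. "edge").
    SERIES_TO_BASE_CHANNEL = {"focal": "20.04/stable", "jammy": "22.04/stable"}

    def base_channel_for(series, charm_channel=None):
        # charm_channel (the --channel value) is deliberately ignored here.
        return SERIES_TO_BASE_CHANNEL[series]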
* Fixes for the CI problems regarding the missing postgresql charm. (#847)
* Add test for deploying local charm with channel argument
* Add application.get_status to get the most up to date status from API
Introduces an internal self._status, initially set to 'unknown', which
has the lowest severity. The regular property self.status uses both
self._status and the unit statuses to derive the most severe status as
the application status.
* Use application.get_status in wait_for_idle to use the most
up-to-date application status
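A minimal sketch of the severity-based derivation described above (the
exact severity ordering here is an assumption, modeled on Juju's usual
status ranking):

    # Hypothetical severity ranking: lower index = more severe.
    # "unknown" is last, so it never wins over a real status.
    SEVERITY = ["error", "blocked", "maintenance", "waiting", "active", "unknown"]

    def derive_application_status(app_status, unit_statuses):
        candidates = [app_status] + list(unit_statuses)
        return min(candidates, key=SEVERITY.index)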
* Fix unit test TestModelWaitForIdle::test_wait_for_active_status
* Fix linter
* Expect and handle exceptions from the AllWatcher task
Fixes #829
The `_all_watcher` task is a coroutine that runs the AllWatcher in the
background forever; it contains a while loop that's controlled
manually through some flags (asyncio events), e.g. `_watch_stopping`
and `_watch_stopped`.
The problem is that when `_all_watcher` raises an exception (or
receives one from things like `get_config()`, as in the reported bug),
the exception disappears into the ether in the event loop, never
handled or re-raised. This is because the coroutine is not `await`ed,
for good reason: it never produces a result, since it's supposed to
work in the background forever, fetching the deltas for us. As a
result, if `_all_watcher` fails, external flags like `_watch_received`
are never set, and whoever calls `await self._watch_received.wait()`
blocks forever (in this case `_after_connect()`). Similarly,
`disconnect()` waits for the `_watch_stopped` flag, which won't be set
either, so calling disconnect after the all_watcher failed also hangs
forever.
This change fixes the problem by waiting (at the wait-for-flag spots)
for two things: 1) whichever flag we're waiting for, and 2) the
`_all_watcher` task being `done()`. In the latter case we should
expect to see an exception, because that task is not supposed to
finish. More importantly, if we do see `_watcher_task.done()`, we stop
waiting for the `_all_watcher` event flags to be set, so we won't
hang. See the sketch below.
A nice side effect is that we should also see fewer of the extra
exception outputs saying that the "Task exception is never handled",
since we now call `.exception()` on the `_all_watcher` task. We'll
probably still get those from tasks like `_pinger` and `_debug_log`,
but this is a good first example of how to handle them as well.
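A minimal sketch of the wait-for-flag-or-task pattern, using plain
asyncio (the names are illustrative, not the actual Model internals):

    import asyncio

    async def wait_for_flag_or_watcher(flag: asyncio.Event,
                                       watcher_task: asyncio.Task):
        # Race the flag against the background watcher task finishing.
        flag_task = asyncio.ensure_future(flag.wait())
        done, pending = await asyncio.wait(
            {flag_task, watcher_task},
            return_when=asyncio.FIRST_COMPLETED,
        )
        if watcher_task in done:
            flag_task.cancel()
            # The watcher is never supposed to finish, so surface its
            # exception instead of blocking on a flag that will never
            # be set.
            exc = watcher_task.exception()
            if exc is not None:
                raise exc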
* Assume ubuntu focal base for legacy k8s charms
Updates get_local_charm_base to use Base(20.04/stable, ubuntu) for
legacy k8s charms, as per
juju/cmd/juju/application/utils.DeduceOrigin()
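A minimal sketch of the assumption that commit encodes (illustrative
code; see the Juju source referenced above for the authoritative
logic):

    # Hypothetical sketch: legacy charms declaring the "kubernetes"
    # series carry no usable base info, so assume ubuntu 20.04 (focal).
    from juju.client import client

    def base_for_legacy_k8s_charm():
        return client.Base(channel="20.04/stable", name="ubuntu")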
* Fix get_local_charm_base call.
---------
Co-authored-by: Juan M. Tirado <juanmanuel-tirado@users.noreply.github.com>
Co-authored-by: Juan Tirado <juan.tirado@canonical.com>