* Fix restoring unencrypted backup in corner case
If a backup has an encrypted and an unencrypted location, and the encrypted
location is restored first, the encryption key is still cached.
When the user then restores from the unencrypted location, the restore fails
because the Supervisor still tries to use the cached encryption key.
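A minimal sketch of the fix idea (names like `set_password` and `_key` are illustrative, not the actual Supervisor code): drop any cached key when the location being restored is unencrypted.
```python
# Illustrative sketch only; real names and key handling differ.
class Backup:
    def __init__(self) -> None:
        self._key: bytes | None = None  # key cached by a previous restore

    def set_password(self, password: str | None, encrypted: bool) -> None:
        """Cache a key for encrypted locations, clear it for plain ones."""
        if not encrypted or password is None:
            # Previously the key from the encrypted location lingered here,
            # so restoring the unencrypted copy failed. Reset it explicitly.
            self._key = None
            return
        self._key = password.encode()  # real code derives a proper key
```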
* Add integration test for restoring backups with and without encryption
* Rename _validate_location_password to _set_location_password
* Reload backup metadata from restore location
* Revert "Reload backup metadata from restore location"
This reverts commit 9b47a1cfe9.
* Make pytest work/punt the ball on docker config restore issue
* Address pylint error
* Handle non-existing file in Backup password check too
Make sure we also handle a non-existing backup file when validating
the password.
* Update supervisor/backups/manager.py
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
* Add test case and fix password check when multiple locations
* Mock default backup unprotected by default
Instead of setting the protected property which we might not use
everywhere, simply mock the default backup to be unprotected.
* Fix mock of protected backup
* Introduce test for validate_password
Testing showed that validate_password doesn't return anything. Extend
tests to cover this case and fix the actual code.
---------
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
* Avoid reordering add-on repositories on Backup load
The `ensure_builtin_repositories` function uses a set to deduplicate
items, which sometimes led to a change in the order of the elements. This is
problematic when deduplicating Backups.
Simply avoid mangling the list of add-on repositories on load. Instead
rely on `update_repositories` which uses the same function to ensure
built-in repositories when loading the store configuration and restoring
a backup file.
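A small illustration of the underlying issue: deduplicating via `set()` loses the original order, while `dict.fromkeys()` keeps it (the repository URLs here are just examples).
```python
repos = [
    "https://github.com/hassio-addons/repository",
    "core",
    "https://github.com/esphome/home-assistant-addon",
    "core",
]

unstable = list(set(repos))          # order depends on hashing
stable = list(dict.fromkeys(repos))  # first-seen order, duplicates dropped
assert stable == [
    "https://github.com/hassio-addons/repository",
    "core",
    "https://github.com/esphome/home-assistant-addon",
]
```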
* Update tests
* ruff format
* ruff check
* ruff check fixes
* ruff format
* Update tests/store/test_validate.py
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Simplify test
---------
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Make the API return 404 for non-existing backup files
* Introduce BackupFileNotFoundError exception
* Return 404 on full restore as well
* Fix remaining API tests
* Improve error handling in delete
* Fix pytest
* Fix tests and change error handling to agreed logic
---------
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
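A rough sketch of the agreed error handling, with an assumed handler shape (only `BackupFileNotFoundError` is named in the commits above; the rest is illustrative):
```python
from aiohttp import web

class BackupFileNotFoundError(Exception):
    """Raised when a backup's file is missing on disk."""

async def do_restore(slug: str) -> None:
    # Stand-in for the real restore logic.
    raise BackupFileNotFoundError(f"Backup file for {slug} not found")

async def restore_full(request: web.Request) -> web.Response:
    try:
        await do_restore(request.match_info["slug"])
    except BackupFileNotFoundError as err:
        raise web.HTTPNotFound(text=str(err)) from err
    return web.json_response({"result": "ok"})
```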
* Avoid test failure by not checking exact size of backup
This is a workaround for the fact that the backup size is not exactly
the same every time: the inner gzipped tar file can vary in size due to
differences in the JSON file (key order) and potentially also different
field values (UUID, backup slug).
It seems that sorting the keys makes the actual difference today, but
sorting has runtime overhead and might not catch all cases.
Simply check that the size property is present and is a number greater
than 0 instead.
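The relaxed assertion, roughly (assuming a `backup` fixture; not the verbatim test):
```python
def test_backup_size(backup) -> None:
    # Exact byte sizes vary between runs (JSON key order, UUIDs, slug),
    # so only require the property to exist and be a positive number.
    assert isinstance(backup.size, (int, float))
    assert backup.size > 0
```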
* Fix pytest
* Extend backup upload API with file name parameter
Add a query parameter which allows specifying the file name on upload.
All locations will store the backup under the same file name.
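Sketch of how the upload handler could read the parameter (the parameter name follows the commit; the pattern and handler shape are assumptions):
```python
import re

from aiohttp import web

RE_BACKUP_FILENAME = re.compile(r"^[^\\/]+\.tar$")  # no path separators

async def upload(request: web.Request) -> web.Response:
    filename = request.query.get("filename")
    if filename is not None and not RE_BACKUP_FILENAME.match(filename):
        raise web.HTTPBadRequest(text=f"Invalid filename: {filename}")
    # ... store the uploaded backup under `filename` in every location
    return web.json_response({"result": "ok"})
```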
* ruff format
* Update tests to cover bad filename
* Fix ruff check error
* Drop unnecessary logging
* Use version which is treated CalVer by AwesomeVersion
The current dev version `99.9.9dev` is treated as an unknown version type
by AwesomeVersion. This prevents it from being compared with
actual Supervisor versions, e.g. from an existing backup file.
Make the development version a valid CalVer version so development
versions can handle non-development backups.
* Bump to year 9999
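Illustrative comparison with the awesomeversion library (the exact dev version string is an assumption):
```python
from awesomeversion import AwesomeVersion

dev = AwesomeVersion("9999.09.9.dev9999")  # assumed CalVer-shaped dev version
from_backup = AwesomeVersion("2024.11.2")  # version found in a backup file
assert dev > from_backup  # comparable now that both parse as CalVer
```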
* Backup protected status can vary per location
* Fix test_backup_remove_error test
* Update supervisor/backups/backup.py
* Add Docker registry configuration to backup metadata
* Make use of backup location fixture
* Address pylint
---------
Co-authored-by: Stefan Agner <stefan@agner.ch>
* Extend backup API with file name field
Allow specifying a backup file name when creating a backup. This allows
for user-friendly backup file names. If none is specified, the current
behavior remains (the backup file name is the backup slug).
* Check passed file name using regex
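Possible shape of the check with voluptuous, which the Supervisor API uses for validation (the exact pattern is an assumption):
```python
import voluptuous as vol

# Assumed rule: a plain file name ending in .tar, no path separators.
SCHEMA_BACKUP_FULL = vol.Schema(
    {vol.Optional("filename"): vol.Match(r"^[^\\/]+\.tar$")},
    extra=vol.ALLOW_EXTRA,
)

SCHEMA_BACKUP_FULL({"filename": "nightly-backup.tar"})  # passes
# SCHEMA_BACKUP_FULL({"filename": "../escape.tar"})     # raises vol.Invalid
```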
* Use custom filename on download only if backup file name is backup slug
* ruff format
* Remove path from location for download file name
* Bump Supervisor to Python 3.13
* Update ruff configuration to 0.9.1
Adjust pyproject.toml for ruff 0.9.1. Also make sure that latest version
of ruff is used in pre-commit.
* Set default configuration for pytest-asyncio
* Run ruff check
* Drop deprecated decorator no_type_check_decorator
The upstream issue (https://github.com/python/cpython/issues/106309) says
this never really got implemented by type checkers.
* Bump devcontainer to latest release
Introduce a validate password method which only peeks into the archive
to validate the password before starting the actual restore process.
This makes sure that a wrong password returns an error even when the
backup is restored in the background.
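Conceptually (a sketch, not the Supervisor implementation; `open_encrypted_tar` is a hypothetical helper):
```python
from pathlib import Path

def validate_password(tar_path: Path, password: str) -> bool:
    """Peek into the archive; a wrong key fails fast on the first read."""
    try:
        # open_encrypted_tar is hypothetical, standing in for the secure
        # tar handling the Supervisor uses.
        with open_encrypted_tar(tar_path, key=password) as tar:
            tar.getmembers()  # forces a decrypt/read without extracting
        return True
    except Exception:
        return False
```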
* Fix and extend cloud backup support
* Clean up task for cloud backup and remove by location
* Args to kwargs on backup methods
* Fix backup remove error test and typing clean up
When an error occurs while streaming Supervisor logs, the fallback method
receives the follow kwarg as well, which is invalid for the Docker log
handler:
TypeError: APISupervisor.logs() got an unexpected keyword argument 'follow'
The exception is still printed to the logs but with all the extra noise
caused by this error. Removing the argument makes the stack trace more
comprehensible and the fallback actually works as desired.
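A minimal sketch of the failure mode and fix (all names illustrative):
```python
async def journal_logs(*, follow: bool) -> str:
    raise OSError("systemd-journal-gatewayd unavailable")

async def docker_logs() -> str:  # fallback handler: no `follow` parameter
    return "container logs"

async def logs(follow: bool = False) -> str:
    try:
        return await journal_logs(follow=follow)
    except OSError:
        # Forwarding follow=... here is what raised the TypeError above;
        # the fallback has no notion of following a stream.
        return await docker_logs()
```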
* Throttle connectivity check on connectivity issue
If Supervisor detects a connectivity issue, currently every function
which requires internet access gets delayed by 10s due to the connectivity
check. This especially slows down initial startup when there are
connectivity issues. The issue is unlikely to resolve immediately, so
throttle the connectivity check to run every 30s.
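The gist of the throttle, sketched with a plain timestamp guard (the real code uses the Supervisor job framework; only the 30s window comes from the commit):
```python
import time

_last_probe = 0.0
_connected = False

def _probe() -> bool:
    return False  # stand-in for the real network request

def check_connectivity(force: bool = False) -> bool:
    """While offline, re-probe at most every 30s instead of on each call."""
    global _last_probe, _connected
    now = time.monotonic()
    if not force and not _connected and now - _last_probe < 30:
        return _connected  # skip the probe, avoid the 10s delay everywhere
    _last_probe = now
    _connected = _probe()
    return _connected
```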
* Fix pytest
* Reset throttle in test and refactor helper
* CodeRabbit suggestion
---------
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
* Return cursor of the first host logs entry via headers
Return the first entry's cursor via a custom `X-First-Cursor` header that
can be consumed by the client and used for continued requests for the
historic logs. Once the first fetch returns data, the cursor can be
supplied as the first argument to the Range header in another call,
fetching an accurate slice of the journal with the previous log entries
using the `Range: entries=cursor[[:num_skip]:num_entries]` syntax.
Let's say we fetch logs with the Range header `entries=:-19:20` (to
fetch the very last 20 lines of the logs; see below for why not
`entries=:-20:20`) and we get `cursor50` as the reply (the actual value
will be much more complex and with no guaranteed format). To fetch the
previous slice of the logs, we use `entries=cursor50:-20:20`, which
would return the 20 lines preceding `cursor50`, with `cursor30` in the
cursor header. This way we can go all the way back through the history.
One problem with the cursor is that it's not possible to determine when
the negative num_skip points beyond the first log entry. In that case
the client either needs to know what the first entry is (via
`entries=:0:1`) or can iterate naively and stop once two subsequent
requests return the same first cursor.
Another caveat, even though it's unlikely to be hit in real usage,
is that it's not possible to fetch only the last line: if no cursor is
provided, a negative num_skip argument is needed, and in that case we're
pointing one record back from the current cursor, which is the previous
record. The least we can return without knowing any cursor is thus
`entries=:-1:2` (where the `2` can be omitted; however, with
`entries=:-1:1` we would lose the last line). This also explains why
different `num_skip` and `num_entries` must be used for the first fetch.
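A hedged client-side sketch of this paging scheme (header names follow the text above; the endpoint path and the naive stop condition are assumptions):
```python
import requests

def fetch_all_host_logs(base_url: str) -> list[str]:
    lines: list[str] = []
    headers = {"Range": "entries=:-19:20"}  # last 20 lines, see note above
    cursor: str | None = None
    while True:
        resp = requests.get(f"{base_url}/host/logs", headers=headers)
        first = resp.headers.get("X-First-Cursor")
        lines = resp.text.splitlines() + lines
        if first is None or first == cursor:
            break  # same first cursor twice: start of the journal reached
        cursor = first
        headers = {"Range": f"entries={cursor}:-20:20"}
    return lines
```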
* Fix typo (fallback->callback)
* Refactor journal_logs_reader to always return the cursor
* Update tests for new cursor handling
If no cursor is specified and a negative num_skip is used, we're pointing
one record back from the last one, so host logs always returned 101
lines as the default. This was also the case for the lines query argument,
which used the number directly as num_skip. Instead of doing that, point
N-1 records back and then get N records. Handle 1 record and
invalid numbers silently to avoid the need for error handling in
impractical edge cases.
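The corrected arithmetic, roughly (the clamping behavior is an assumption):
```python
def range_for_lines(lines: int) -> str:
    """Build a Range header returning `lines` entries from the tail."""
    # Silently clamp 1/invalid values: without a cursor the last single
    # line is unreachable, so entries=:-1:2 is the minimum (see above).
    lines = max(2, lines)
    return f"entries=:-{lines - 1}:{lines}"
```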
* Update DNS plug-in on network change
Restart the DNS plug-in when the primary network connection changes.
This makes sure any potential host OS DNS configuration changes get
picked up by the DNS plug-in as well.
* Add a test case
---------
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
* Improve WiFi settings error handling
Currently, the frontend potentially provides no WiFi settings dictionary
but still tries to update other (IP address) settings on the interface.
This leads to a stack trace since NetworkManager is not able to fetch
the WiFi settings from the settings dictionary. Simply fill out what
we can and let NetworkManager provide an error.
Also allow disabling a network interface which has no configuration.
This avoids an error when switching to auto and back to disabled, then
pressing save, on a new wireless network interface.
* Add debug message when already disabled
* Add pytest for incomplete WiFi settings as posted by frontend
Simulate the frontend posting no WiFi settings. Make sure the Supervisor
handles this gracefully.
* Allow to set user DNS through API with auto mode
Currently it is only possible to set DNS servers when in static mode.
However, there are use cases for setting DNS servers in auto mode as
well, e.g. if no local DNS server is provided via DHCP, or the provided
DNS turns out to be non-working.
* Fix fallout from using a separate data structure for IP configuration
Make sure gateway is correctly converted to the internal IP
representation. Fix type info.
* Overwrite WiFi settings completely too
* Add test for DNS configuration
* Run ruff format
* ruff format
* Use schema validation as source for API defaults
Instead of using replace() simply set the API defaults in the API
schema.
* Revert "Use schema validation as source for API defaults"
This reverts commit 885506fd37.
* Use explicit dataclass initialization
This avoids the unnecessary replaces from before. It also makes it more
obvious that this part of the API doesn't patch existing settings.
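Roughly the difference (field names are illustrative):
```python
from dataclasses import dataclass, replace

@dataclass
class IpSetting:
    method: str = "auto"
    nameservers: list[str] | None = None

current = IpSetting(method="static", nameservers=["192.168.1.1"])

# Before: patching the existing setting hides that it gets overwritten
# (and silently keeps fields like nameservers around).
patched = replace(current, method="auto")

# After: build the new setting explicitly from the API payload.
explicit = IpSetting(method="auto", nameservers=None)
```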
Since headers are clumsy considering the Core proxy between the frontend
and Supervisor, add a way to adjust the number of lines and the verbose
log format using query parameters as well. If both query parameters and
headers are supplied, prefer the former, as it's more prominent when
reading through the request logs.
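Sketch of the precedence logic (names and defaults are assumptions):
```python
from aiohttp import web

DEFAULT_LINES = 100

def lines_requested(request: web.Request) -> int:
    """Prefer the ?lines= query parameter over the Range header."""
    if "lines" in request.query:
        return int(request.query["lines"])
    if rng := request.headers.get("Range"):
        # Crude stand-in for real Range parsing.
        return int(rng.rsplit(":", 1)[-1] or DEFAULT_LINES)
    return DEFAULT_LINES
```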
* Make IPv4 and IPv6 parse errors raise an API error
Currently, IP address parsing errors lead to an exception which is not
handled by the `api_validate()` call. By using concrete IPv4 and IPv6
types and `vol.Coerce()`, parsing errors are properly handled.
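The schema change, roughly (field names are illustrative; `vol.Coerce` turning the `ValueError` from `ipaddress` into `vol.Invalid` is standard voluptuous behavior):
```python
from ipaddress import IPv4Address, IPv6Address

import voluptuous as vol

SCHEMA_IP_CONFIG = vol.Schema(
    {
        vol.Optional("ipv4_gateway"): vol.Coerce(IPv4Address),
        vol.Optional("ipv6_gateway"): vol.Coerce(IPv6Address),
    }
)

SCHEMA_IP_CONFIG({"ipv4_gateway": "192.168.1.1"})  # coerced fine
try:
    SCHEMA_IP_CONFIG({"ipv4_gateway": "not-an-ip"})
except vol.Invalid:
    pass  # api_validate() can now report a proper API error
```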
* ruff format
* ruff check
* Improve connection settings fixture
Make the connection settings fixture behave more closely to the actual
NetworkManager. The behavior has been tested with NetworkManager 1.42.4
(Debian 12) and 1.44.2 (HAOS 13.1). This likely behaves similarly in older
versions too.
* Introduce separate skeleton and settings for wireless
Instead of having a combined network settings object which has
Ethernet and Wireless settings, create a separate settings object for
wireless.
* Handle addresses/address-data property like NetworkManager
* Address ruff check
* Improve network API test
Add a test which changes from "static" to "auto". Validate that settings
are updated accordingly. Specifically, today this does clear the DNS
setting (by not providing the property).
* ruff format
* ruff check
* Complete TEST_INTERFACE rename
* Add partial network update as test case
* Test stub for keeping shared images after update
* Keep shared images on addon update
* ImageNotFound should only skip the one image, not all
* Fix tests and nonetype error
* Normalize logic between two cleanup methods
* Add manual_forced option to addon boot config
* Include client library in pull request template
* Add boot_config to api output so frontend can use it
* `manual_forced` to `manual_only`
This PR minimizes the D-Bus requirements for tests. It does this by
using dbus-daemon directly instead of dbus-launch. The latter is meant
for graphical applications and therefore has X11 dependencies. It also
leaves the D-Bus daemon running after the tests are done. This will
accumulate dbus-daemon processes over time which is not ideal.
I've also considered using dbus-run-session since it is meant to launch
processes with a private D-Bus session. For Python tests one could
launch it like so:
dbus-run-session -- python3 -m pytest ...
Then `DBUS_SESSION_BUS_ADDRESS` would be used automatically by the
`MessageBus` class. However, to keep the current behavior of the tests,
launching the D-Bus daemon manually is the better option.
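A sketch of the dbus-daemon approach as a pytest fixture (the flags are standard dbus-daemon options; the fixture shape is an assumption):
```python
import subprocess

import pytest

@pytest.fixture
def dbus_session_bus():
    proc = subprocess.Popen(
        ["dbus-daemon", "--session", "--print-address", "--nofork"],
        stdout=subprocess.PIPE,
        text=True,
    )
    address = proc.stdout.readline().strip()
    yield address  # e.g. passed to MessageBus(bus_address=address)
    proc.terminate()  # no stale dbus-daemon processes after the tests
    proc.wait()
```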
* Use separate data structure for IP configuration
So far we used the same IpConfig data structure to represent the user's
IP setting and the currently applied IP configuration.
This commit separates the two into IpConfig (for the currently applied
IP configuration) and IpSetting (representing the user-provided IP
setting).
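Illustrative shape of the split (field sets are assumptions; the point is the separation):
```python
from dataclasses import dataclass
from ipaddress import IPv4Address, IPv4Interface

@dataclass(slots=True)
class IpSetting:
    """What the user asked for."""
    method: str  # "auto" | "static" | "disabled"
    address: IPv4Interface | None = None
    gateway: IPv4Address | None = None

@dataclass(slots=True)
class IpConfig:
    """What is currently applied on the interface."""
    address: list[IPv4Interface]
    gateway: IPv4Address | None
    nameservers: list[IPv4Address]
```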
* Use custom string constants for connection settings
Use separate string constants for all connection settings. This makes
it easier to search where a particular NetworkManager connection
setting is used.
* Use Python typing for IpAddress in IpProperties
* Address pytest issue