NMDeviceType 13 (NM_DEVICE_TYPE_BRIDGE) was not listed in the
DeviceType enum, causing a warning when NetworkManager reported
a bridge interface.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* Add periodic progress logging during initial Core installation
Log installation progress every 15 seconds while downloading the
Home Assistant Core image during initial setup (landing page to core
transition). Uses asyncio.Event with wait_for timeout to produce
time-based logs independent of Docker pull events, ensuring visibility
even when the network stalls.
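A minimal sketch of the timeout loop, assuming a hypothetical install_done event that is set once the pull finishes:

```python
import asyncio
import logging

_LOGGER = logging.getLogger(__name__)
PROGRESS_LOG_INTERVAL = 15  # seconds between progress log lines


async def _log_install_progress(install_done: asyncio.Event) -> None:
    """Emit a log line every interval until the install event is set."""
    while not install_done.is_set():
        try:
            # Returns as soon as the event is set, otherwise times out
            await asyncio.wait_for(install_done.wait(), PROGRESS_LOG_INTERVAL)
        except asyncio.TimeoutError:
            _LOGGER.info("Still downloading the Home Assistant Core image...")
```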
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Add test coverage
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Jan Čermák <sairon@users.noreply.github.com>
* Fix getting Supervisor IP address in testing
Newer Docker versions (probably newer than 29.x) no longer have a global
IPAddress attribute under .NetworkSettings. There is a network-specific
map under Networks; in our case, the hassio network holds the relevant
IP address. These network-specific maps already existed before, so the
new inspect format works for old as well as new Docker versions.
While at it, also adjust the test fixture.
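Roughly the lookup this implies on the inspect data (function name is illustrative):

```python
def supervisor_ip(inspect_data: dict) -> str:
    """Read the Supervisor IP from the hassio entry of docker inspect output."""
    # The per-network map has always existed; the top-level
    # NetworkSettings.IPAddress is gone in newer Docker versions.
    return inspect_data["NetworkSettings"]["Networks"]["hassio"]["IPAddress"]
```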
* Actively wait for hassio IPAddress to become valid
* Remove blocking I/O added to import_image
* Add scanned modules to extra blockbuster functions
* Use same cast avoidance approach in export_image
* Remove unnecessary local image_writer variable
* Remove unnecessary local image_tar_stream variable
---------
Co-authored-by: Stefan Agner <stefan@agner.ch>
* Raise HomeAssistantWSError when Core WebSocket is unreachable
Previously, async_send_command silently returned None when Home Assistant
Core was not reachable, leading to misleading error messages downstream
(e.g. "returned invalid response of None instead of a list of users").
Refactor _can_send to _ensure_connected which now raises
HomeAssistantWSError on connection failures while still returning False
for silent-skip cases (shutdown, unsupported version). async_send_message
catches the exception to preserve fire-and-forget behavior.
Update callers that don't handle HomeAssistantWSError: _hardware_events
and addon auto-update in tasks.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Simplify HomeAssistantWebSocket command/message distinction
The WebSocket layer had a confusing split between "messages" (fire-and-forget)
and "commands" (request/response) that didn't reflect Home Assistant Core's
architecture where everything is just a WS command.
- Remove dead WSClient.async_send_message (never called)
- Rename async_send_message → _async_send_command (private, fire-and-forget)
- Rename send_message → send_command (sync wrapper)
- Simplify _ensure_connected: drop message param, always raise on failure
- Simplify async_send_command: always raise on connection errors
- Remove MIN_VERSION gating (minimum supported Core is now 2024.2+)
- Remove begin_backup/end_backup version guards for Core < 2022.1.0
- Add debug logging for silently ignored connection errors
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Wait for Core to come up before backup
This is crucial since the WebSocket command to Core now fails with the
new error handling if Core is not running yet.
* Wait for Core install job instead
* Use CLI to fetch jobs instead of Supervisor API
The Supervisor API needs authentication token, which we have not
available at this point in the workflow. Instead of fetching the token,
we can use the CLI, which is available in the container.
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
When an addon updated from having no ingress to having ingress, the
ingress token map was never rebuilt. Both update() and rebuild() called
_check_ingress_port() to assign a dynamic port but skipped the
sys_ingress.reload() call that registers the token. This caused
Ingress.get() to return None, resulting in a 503 error.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The CLI calls in the tests are still using deprecated add-ons terminology,
causing deprecation warnings. Change the commands and flags to the new ones.
* Add D-Bus tolerant enum base classes to prevent crashes on unknown values
D-Bus services (systemd, NetworkManager, RAUC, UDisks2) can introduce
new enum values at any time via OS updates. Standard Python enum
construction raises ValueError for unknown values, which would crash
the Supervisor.
Introduce DBusStrEnum and DBusIntEnum base classes that use Python's
_missing_ hook to create pseudo-members for unknown values. These
pseudo-members pass isinstance checks (satisfying typeguard), preserve
the original value, don't pollute __members__, and report unknown
values to Sentry (deduplicated per class+value) for observability.
Migrate 17 D-Bus enums in dbus/const.py and udisks2/const.py to the
new base classes. Enums only sent TO D-Bus (StopUnitMode, StartUnitMode,
etc.) are left unchanged. Remove the manual try/except workaround in
NetworkInterface.type now that DBusIntEnum handles it automatically.
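A minimal sketch of the tolerant string variant, with the Sentry reporting part elided:

```python
from enum import StrEnum


class DBusStrEnum(StrEnum):
    """String enum that tolerates unknown values received from D-Bus."""

    @classmethod
    def _missing_(cls, value: object):
        if not isinstance(value, str):
            return None
        # Build a pseudo-member: passes isinstance checks and keeps the
        # original value, but is not registered in __members__.
        member = str.__new__(cls, value)
        member._name_ = f"UNKNOWN_{value.upper()}"
        member._value_ = value
        # The real classes additionally report the unknown value to Sentry,
        # deduplicated per class+value.
        return member
```

The DBusIntEnum counterpart does the same via int.__new__, so an unknown NMDeviceType no longer raises ValueError.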
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Add explicit enum conversions for systemd-resolved D-Bus properties
The resolved properties (dns_over_tls, dns_stub_listener, dnssec, llmnr,
multicast_dns, resolv_conf_mode) were returning raw string values from
D-Bus without converting to their declared enum types. This would fail
runtime type checking with typeguard.
Now safe to add explicit conversions since these enums use DBusStrEnum,
which tolerates unknown values from D-Bus without crashing.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Avoid blocking I/O in D-Bus enum Sentry reporting
Move sentry_sdk.capture_message out of the event loop by adding a
fire_and_forget_capture_message helper that offloads the call to the
executor when a running loop is detected.
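A sketch of the helper, assuming sentry_sdk is already initialized elsewhere:

```python
import asyncio

import sentry_sdk


def fire_and_forget_capture_message(message: str) -> None:
    """Capture a Sentry message without blocking a running event loop."""
    try:
        loop = asyncio.get_running_loop()
    except RuntimeError:
        # No loop running, safe to call synchronously
        sentry_sdk.capture_message(message)
        return
    # Offload the (potentially blocking) capture to the default executor
    loop.run_in_executor(None, sentry_sdk.capture_message, message)
```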
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Handle exceptions when reporting message to Sentry
* Narrow typing of reported values
Use str/int explicitly since that is what the two existing Enum classes
can actually report.
* Adjust test style
* Apply suggestions from code review
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* Fix RestartPolicy type annotation for runtime type checking
The restart_policy property returned a plain str from the Docker API
instead of a RestartPolicy instance, causing TypeCheckError with
typeguard. Use explicit mapping via _restart_policy_from_model(),
consistent with the existing _container_state_from_model() pattern,
to always return a proper RestartPolicy enum member. Unknown values
from Docker are logged and default to RestartPolicy.NO.
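A sketch of the conversion helper; the enum import path is an assumption:

```python
import logging

from supervisor.docker.const import RestartPolicy  # assumed import path

_LOGGER = logging.getLogger(__name__)


def _restart_policy_from_model(value: str) -> RestartPolicy:
    """Map the raw Docker restart policy string onto the enum."""
    try:
        return RestartPolicy(value)
    except ValueError:
        _LOGGER.warning("Unknown Docker restart policy: %s", value)
        return RestartPolicy.NO
```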
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Drop unnecessary _RESTART_POLICY_MAP
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Passing an explicit event loop has been deprecated in asyncio since
Python 3.8, so WSClient no longer needs one. Replace
self._loop.create_future() with asyncio.get_running_loop().create_future()
and remove the loop parameter from __init__, connect_with_auth, and its
call site.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add a type check for device options in AddonOptions._single_validate
to ensure the value is a string before passing it to Path(). When a
non-string value (e.g. a dict) is provided for a device option, this
now raises a proper vol.Invalid error instead of an unhandled TypeError.
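A sketch of the added guard (error text and helper name are illustrative):

```python
from pathlib import Path

import voluptuous as vol


def _validate_device_option(value: object, key: str) -> str:
    """Ensure a device option is a string before treating it as a path."""
    if not isinstance(value, str):
        raise vol.Invalid(
            f"Device option '{key}' must be a string, got {type(value).__name__}"
        )
    # Safe now; a dict here previously raised an unhandled TypeError in Path()
    return str(Path(value))
```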
Fixes SUPERVISOR-175H
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Replace the dynamic `getattr(self.sys_websession, method)(...)` pattern
with the explicit `self.sys_websession.request(method, ...)` call. This
is type-safe and avoids runtime failures from typos in method names.
Also wrap the timeout parameter in `aiohttp.ClientTimeout` for
consistency with the typed `request()` signature.
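Roughly the shape of the new call (names illustrative):

```python
import aiohttp


async def call_core_api(
    session: aiohttp.ClientSession, method: str, url: str, timeout: float
) -> aiohttp.ClientResponse:
    """Explicit, type-safe request instead of getattr(session, method)(url, ...)."""
    return await session.request(
        method, url, timeout=aiohttp.ClientTimeout(total=timeout)
    )
```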
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The retrieve-changed-files action only supports pull_request and push
events. Restrict the "Get changed files" step to those event types so
manual workflow_dispatch runs no longer fail. Also always build wheels
on manual dispatches since there are no changed files to compare against.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Remove the automated frontend update workflow and version tracking file
as the frontend repository no longer builds supervisor-specific assets.
Frontend updates will now follow a different distribution mechanism.
Related to home-assistant/frontend#29132
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Replace ctypes integer types (c_uint32, c_uint64) with standard Python int
in SlotStatusDataType to satisfy typeguard runtime type checking. D-Bus
returns standard Python integers, not ctypes objects.
Also fix the mark() method return type from tuple[str, str] to list[str] to
match the actual D-Bus return value, and add missing optional fields
"bundle.hash" and "installed.transaction" to SlotStatusDataType.
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
* Fix environment variable type errors by converting IP addresses to strings
Environment variables must be strings, but IPv4Address and IPv4Network
objects were being passed directly to container environment dictionaries,
causing typeguard validation errors.
Changes:
- Convert IPv4Address objects to strings in homeassistant.py for
SUPERVISOR and HASSIO environment variables
- Convert IPv4Network object to string in observer.py for
NETWORK_MASK environment variable
- Update tests to expect string values instead of IP objects in
environment dictionaries
- Remove unused ip_network import from test_observer.py
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* Use explicit string conversion for extra_hosts IP addresses
Use the !s format specifier in the f-string to explicitly convert
IPv4Address objects to strings when building the ExtraHosts list.
While f-strings implicitly convert objects to strings, using !s makes
the conversion explicit and consistent with the environment variable
fixes in the previous commit.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
---------
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Add the Docker storage driver (e.g., overlay2, vfs) to the context
information sent with Sentry error reports. This helps correlate
issues with specific storage backends and improves debugging of
Docker-related problems.
The storage driver is now included in both SETUP and RUNNING state
error reports under contexts.docker.storage_driver.
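One possible way to attach it with the Sentry SDK (the Supervisor's actual wiring may differ):

```python
import sentry_sdk


def attach_docker_context(storage_driver: str) -> None:
    """Expose the Docker storage driver on Sentry events."""
    # Appears under contexts.docker.storage_driver in the event payload
    sentry_sdk.set_context("docker", {"storage_driver": storage_driver})
```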
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
* Fix MCP API proxy support for streaming and headers
This commit fixes two issues with using the Core MCP API (core/api/mcp) through
the API proxy:
1. **Streaming support**: The proxy now detects text/event-stream responses
and properly streams them instead of buffering all data. This is required
for MCP's Server-Sent Events (SSE) transport.
2. **Header forwarding**: Added MCP-required headers to the forwarded headers:
- Accept: Required for content negotiation
- Last-Event-ID: Required for resuming broken SSE connections
- Mcp-Session-Id: Required for session management across requests
The proxy now also preserves MCP-related response headers (Mcp-Session-Id)
and sets X-Accel-Buffering to "no" for streaming responses to prevent
buffering by intermediate proxies.
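A rough sketch of the streaming branch in an aiohttp proxy handler (surrounding plumbing is assumed):

```python
import aiohttp
from aiohttp import web


async def _proxy_response(
    request: web.Request, upstream: aiohttp.ClientResponse
) -> web.StreamResponse:
    """Stream SSE responses through instead of buffering them."""
    if upstream.content_type == "text/event-stream":
        response = web.StreamResponse(status=upstream.status)
        response.content_type = "text/event-stream"
        # Ask intermediate proxies not to buffer the event stream
        response.headers["X-Accel-Buffering"] = "no"
        if session_id := upstream.headers.get("Mcp-Session-Id"):
            response.headers["Mcp-Session-Id"] = session_id
        await response.prepare(request)
        async for chunk in upstream.content.iter_chunked(4096):
            await response.write(chunk)
        await response.write_eof()
        return response
    # Non-SSE responses are buffered as before
    return web.Response(
        status=upstream.status,
        body=await upstream.read(),
        content_type=upstream.content_type,
    )
```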
Tests added to verify:
- MCP headers are properly forwarded to Home Assistant
- Streaming responses (text/event-stream) are handled correctly
- Response headers are preserved
* Refactor: reuse stream logic for SSE responses (#3)
* Fix ruff format + cover streaming payload error
* Fix merge error
* Address review comments (headers / streaming proxy) (#4)
* Address review: header handling for streaming/non-streaming
* Forward MCP-Protocol-Version and Origin headers
* Do not forward Origin header through API proxy (#5)
---------
Co-authored-by: Stefan Agner <stefan@agner.ch>
The CpuArch enum was being used inconsistently throughout the codebase,
with some code expecting enum values and other code expecting strings.
This caused type checking issues and potential runtime errors.
Changes:
- Fix match_base() to return CpuArch enum instead of str
- Add explicit string conversions using !s formatting where arch values
are used in f-strings (build.py, model.py)
- Convert CpuArch to str explicitly in contexts requiring strings
(docker/addon.py, misc/filter.py)
- Update all tests to use CpuArch enum values instead of strings
- Update test mocks to return CpuArch enum values
This ensures type consistency and improves MyPy type checking accuracy
across the architecture detection and management code.
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Update type hints throughout backup/restore code to match actual types:
- Change tarfile.TarFile to SecureTarFile for backup/restore methods
- Add None union types for Backup properties that can return None
- Fix exclude_database parameter to accept None in restore method
- Update API backup methods to handle None return from backup tasks
- Fix condition check for exclude_database to explicitly compare with True
- Add assertion to help type checker with indexed assignment
These changes improve type safety and resolve type checking issues
discovered by runtime type validation.
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
The manifest fetcher was using docker.io as the registry API endpoint,
but Docker Hub's actual registry API is at registry-1.docker.io. When
trying to access https://docker.io/v2/..., requests were being redirected
to https://www.docker.com/ (the marketing site), which returned HTML
instead of JSON, causing manifest fetching to fail.
This matches exactly what Docker itself does internally - see
daemon/pkg/registry/config.go:49 where Docker hardcodes
DefaultRegistryHost = "registry-1.docker.io" for registry operations.
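A minimal sketch of the translation; DOCKER_HUB_API mirrors the constant described in the changes below:

```python
DOCKER_HUB = "docker.io"
DOCKER_HUB_API = "registry-1.docker.io"


def _get_api_endpoint(registry: str) -> str:
    """Return the host to use for registry HTTP API calls.

    docker.io stays the identifier for naming and credentials; the actual
    registry API lives at registry-1.docker.io.
    """
    return DOCKER_HUB_API if registry == DOCKER_HUB else registry
```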
Changes:
- Add DOCKER_HUB_API constant for the actual API endpoint
- Add _get_api_endpoint() helper to translate docker.io to
registry-1.docker.io for HTTP API calls
- Update _get_auth_token() and _fetch_manifest() to use the API endpoint
- Keep docker.io as the registry identifier for naming and credentials
- Add tests to verify the API endpoint translation
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
* Migrate info and events to aiodocker
* Migrate container logs to aiodocker
* Fix dns plugin loop test
* Fix mocking for docker info
* Fixes from feedback
* Harden monitor error handling
* Deleted failing tests because they were not useful
* Add frontend_development_pr to backup exclusion list
* update frontend dev pr folder name to frontend_development_artifacts
* Update backup exclusion list to replace frontend_development_artifacts with .cache
* Fix Docker exec exit code handling by using detach=False
When executing commands inside containers using `container_run_inside()`,
the exec metadata did not contain a valid exit code because `detach=True`
starts the exec in the background and returns immediately before completion.
Root cause: With `detach=True`, Docker's exec start() returns an awaitable
that yields output bytes. However, the await only waits for the HTTP/REST
call to complete, NOT for the actual exec command to finish. The command
continues running in the background after the HTTP response is received.
Calling `inspect()` immediately after returns `ExitCode: None` because
the exec hasn't completed yet.
Solution: Use `detach=False` which returns a Stream object that:
- Automatically waits for exec completion by reading from the stream
- Provides actual command output (not just empty bytes)
- Makes exit code immediately available after stream closes
- No polling needed
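A sketch of that flow with aiodocker's exec/stream interface (exact call shapes in the real helper may differ):

```python
from aiodocker.containers import DockerContainer


async def run_inside(container: DockerContainer, command: str) -> tuple[int, bytes]:
    """Run a command in a container and return (exit_code, output)."""
    execute = await container.exec(command)
    output = b""
    # detach=False yields a stream; draining it also means waiting until
    # the exec has actually finished, so the exit code is available.
    async with execute.start(detach=False) as stream:
        while message := await stream.read_out():
            output += message.data
    inspection = await execute.inspect()
    exit_code = inspection.get("ExitCode")
    if exit_code is None:
        raise RuntimeError("Exec finished without an exit code")
    return exit_code, output
```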
Changes:
- Switch from `detach=True` to `detach=False` in container_run_inside()
- Read output from stream using async context manager
- Add defensive validation to ensure ExitCode is never None
- Update tests to mock the Stream interface using AsyncMock
- Add debug log showing exit code after command execution
Fixes #6518
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* Address review feedback
---------
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
It seems that agents like Claude tend to use absolute imports even though
it's a pattern we're trying to avoid. Adjust the instructions to steer them
away from doing that.
The Supervisor's /core/api proxy previously only supported GET and POST
methods, returning 405 Method Not Allowed for DELETE requests. This
prevented addons from calling Home Assistant Core REST API endpoints
that require DELETE methods, such as deleting automations, scripts,
or scenes.
The underlying proxy implementation already supported passing through
any HTTP method via request.method.lower(), so only the route
registration was needed.
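Roughly what the route registration amounts to with aiohttp (handler and path are illustrative):

```python
from aiohttp import web


async def api_proxy_handler(request: web.Request) -> web.StreamResponse:
    """Stand-in for the existing proxy handler, which forwards request.method."""
    ...


app = web.Application()
app.add_routes([
    web.get("/core/api/{path:.+}", api_proxy_handler),
    web.post("/core/api/{path:.+}", api_proxy_handler),
    # New: DELETE passes straight through to the same handler
    web.delete("/core/api/{path:.+}", api_proxy_handler),
])
```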
Fixes #6509
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
* Add exception handling for pull progress tracking errors
Wrap progress event processing in try-except blocks to prevent image
pulls from failing due to progress tracking issues. This ensures that
progress updates, which are purely informational, never abort the
actual Docker pull operation.
Catches two categories of exceptions:
- ValueError: Includes "Cannot update a job that is done" errors that
can occur under rare event combinations (similar to #6513)
- All other exceptions: Defensive catch-all for any unexpected errors
in the progress tracking logic
All exceptions are logged with full context (layer ID, status, progress)
and sent to Sentry for tracking and debugging. The pull continues
successfully in all cases.
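A condensed sketch of the defensive wrapper (helper names are hypothetical; the real code also reports to Sentry):

```python
import logging
from collections.abc import Callable
from typing import Any

_LOGGER = logging.getLogger(__name__)


def safe_process_event(
    update_progress: Callable[[dict[str, Any]], None], event: dict[str, Any]
) -> None:
    """Apply a Docker pull progress event without ever failing the pull."""
    try:
        update_progress(event)
    except ValueError as err:
        # Covers "Cannot update a job that is done" under rare event orderings
        _LOGGER.warning(
            "Progress update failed for layer %s (%s): %s",
            event.get("id"), event.get("status"), err,
        )
    except Exception as err:  # pylint: disable=broad-except
        # Progress is purely informational; never let it abort the pull
        _LOGGER.warning("Unexpected error in pull progress tracking: %s", err)
```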
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* Apply suggestions from code review
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
* Apply suggestions from code review
---------
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
The aiodocker 0.25.0 upgrade (PR #6448) changed how DockerError handles
the message parameter. The library now extracts the message string from
Docker API JSON responses before passing it to DockerError, rather than
passing the entire dict.
The port conflict detection tests were written before this change and
incorrectly passed dicts to DockerError. This caused TypeErrors when
the port conflict detection code tried to match err.message with a
regex, expecting a string but receiving a dict.
Update both test_addon_start_port_conflict_error and
test_observer_start_port_conflict to pass message strings directly,
matching the real aiodocker 0.25.0 behavior.
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
* Map port conflict on start error into a known error
* Apply suggestions from code review
* Run ruff format
---------
Co-authored-by: Stefan Agner <stefan@agner.ch>
* Use count-based progress for Docker image pulls
Refactor Docker image pull progress to use a simpler count-based approach
where each layer contributes equally (100% / total_layers) regardless of
size. This replaces the previous size-weighted calculation that was
susceptible to progress regression.
The core issue was that Docker rate-limits concurrent downloads (~3 at a
time) and reports layer sizes only when downloading starts. With size-
weighted progress, large layers appearing late would cause progress to
drop dramatically (e.g., 59% -> 29%) as the total size increased.
The new approach:
- Each layer contributes equally to overall progress
- Per-layer progress: 70% download weight, 30% extraction weight
- Progress only starts after first "Downloading" event (when layer
count is known)
- Always caps at 99% - job completion handles final 100%
This simplifies the code by moving progress tracking to a dedicated
module (pull_progress.py) and removing complex size-based scaling logic
that tried to account for unknown layer sizes.
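A condensed sketch of the count-based math (names simplified relative to pull_progress.py):

```python
DOWNLOAD_WEIGHT = 0.7  # share of a layer's progress spent downloading
EXTRACT_WEIGHT = 0.3   # share spent extracting


def overall_progress(layers: list[tuple[float, float]]) -> float:
    """Each layer counts equally; per layer: 70% download + 30% extract.

    `layers` holds (download_fraction, extract_fraction) per layer, each 0..1.
    """
    if not layers:
        return 0.0
    per_layer = (
        DOWNLOAD_WEIGHT * download + EXTRACT_WEIGHT * extract
        for download, extract in layers
    )
    # Cap at 99: the job framework reports 100 only once the pull completes
    return min(99.0, 100.0 * sum(per_layer) / len(layers))
```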
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Exclude already-existing layers from pull progress calculation
Layers that already exist locally should not count towards download
progress since there's nothing to download for them. Only layers that
need pulling are included in the progress calculation.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Add registry manifest fetcher for size-based pull progress
Fetch image manifests directly from container registries before pulling
to get accurate layer sizes upfront. This enables size-weighted progress
tracking where each layer contributes proportionally to its byte size,
rather than equal weight per layer.
Key changes:
- Add RegistryManifestFetcher that handles auth discovery via
WWW-Authenticate headers, token fetching with optional credentials,
and multi-arch manifest list resolution
- Update ImagePullProgress to accept manifest layer sizes via
set_manifest() and calculate size-weighted progress
- Fall back to count-based progress when manifest fetch fails
- Pre-populate layer sizes from manifest when creating layer trackers
The manifest fetcher supports ghcr.io, Docker Hub, and private
registries by using credentials from Docker config when available.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Clamp progress to 100 to prevent floating point precision issues
Floating point arithmetic in weighted progress calculations can produce
values slightly above 100 (e.g., 100.00000000000001). This causes
validation errors when the progress value is checked.
Add min(100, ...) clamping to both size-weighted and count-based
progress calculations to ensure the result never exceeds 100.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Use sys_websession for manifest fetcher instead of creating new session
Reuse the existing CoreSys websession for registry manifest requests
instead of creating a new aiohttp session. This improves performance
and follows the established pattern used throughout the codebase.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Make platform parameter required and warn on missing platform
- Make platform a required parameter in get_manifest() and _fetch_manifest()
since it's always provided by the calling code
- Return None and log warning when requested platform is not found in
multi-arch manifest list, instead of falling back to first manifest
which could be the wrong architecture
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Log manifest fetch failures at warning level
Users will notice degraded progress tracking when manifest fetch fails,
so log at warning level to help diagnose issues.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Add pylint disable comments for protected access in manifest tests
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Separate download_current and total_size updates in pull progress
Update download_current and total_size independently in the DOWNLOADING
handler. This ensures download_current is updated even when total is
not yet available.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Reject invalid platform format in manifest selection
---------
Co-authored-by: Claude <noreply@anthropic.com>
During system shutdown (reboot/poweroff), the watchdog was incorrectly
detecting the Home Assistant Core container as failed and attempting to
restart it. This occurred because Docker was stopping all containers in
parallel with Supervisor's own shutdown sequence, causing the watchdog
to trigger while add-ons were still being stopped.
This led to an abrupt termination of Core before it could cleanly shut
down its SQLite database, resulting in a warning on the next startup:
"The system could not validate that the sqlite3 database was shutdown
cleanly".
The fix registers a supervisor state change listener that unregisters
the watchdog when entering any shutdown state (SHUTDOWN, STOPPING, or
CLOSE). This prevents restart attempts during both user-initiated
reboots (via API) and external shutdown signals (Docker SIGTERM,
console reboot commands).
Since SHUTDOWN, STOPPING, and CLOSE are terminal states with no reverse
transition back to RUNNING, no re-registration logic is needed.
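A sketch of the listener pattern (state names are from the commit; the wiring is simplified):

```python
from collections.abc import Callable

SHUTDOWN_STATES = {"SHUTDOWN", "STOPPING", "CLOSE"}


def make_shutdown_listener(unregister_watchdog: Callable[[], None]):
    """Build a state-change listener that disables the Core watchdog on shutdown."""

    async def on_state_change(new_state: str) -> None:
        # These states are terminal, so no re-registration logic is needed
        if new_state in SHUTDOWN_STATES:
            unregister_watchdog()

    return on_state_change
```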
Fixes #6511
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Catch ValueError exceptions with "Cannot update a job that is done"
during image pull progress updates. This error occurs intermittently
when progress events arrive after a job has completed. It is not clear
why this happens; perhaps the job gets prematurely marked as done, or
the pull events arrive in a different order than expected.
Rather than failing the entire pull operation, we now:
- Log a warning with context (layer ID, status, progress)
- Send the error to Sentry for tracking and investigation
- Continue with the pull operation
This prevents pull failures while gathering information to help
identify and fix the root cause of the race condition.
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
The test was failing intermittently in CI because concurrent async
operations in asyncio.gather() were getting slightly different
timestamps (microseconds apart) despite being inside a time_machine
context.
When test2.execute() calls were timestamped at start+2ms due to async
scheduling delays, they weren't cleaned up in the final test block
(cutoff = start+1ms), causing a false rate limit error.
Fix by using tick=False to completely freeze time during the gather,
ensuring all 4 calls get the exact same timestamp.
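The fix boils down to freezing the clock around the gather, roughly (execute stands in for the test's job call):

```python
import asyncio
from datetime import UTC, datetime

import time_machine


async def run_concurrent_calls(execute) -> None:
    start = datetime(2024, 1, 1, tzinfo=UTC)
    # tick=False freezes the clock completely, so every gathered call
    # records exactly the same timestamp
    with time_machine.travel(start, tick=False):
        await asyncio.gather(execute(), execute(), execute(), execute())
```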
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>