* Fix environment variable type errors by converting IP addresses to strings
Environment variables must be strings, but IPv4Address and IPv4Network
objects were being passed directly to container environment dictionaries,
causing typeguard validation errors.
Changes:
- Convert IPv4Address objects to strings in homeassistant.py for
SUPERVISOR and HASSIO environment variables
- Convert IPv4Network object to string in observer.py for
NETWORK_MASK environment variable
- Update tests to expect string values instead of IP objects in
environment dictionaries
- Remove unused ip_network import from test_observer.py
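For illustration, the conversion pattern looks roughly like this (the values and dictionary below are placeholders, not the actual Supervisor attributes):

```python
from ipaddress import IPv4Address, IPv4Network

# Hypothetical values standing in for the Supervisor/observer addresses.
supervisor_ip = IPv4Address("172.30.32.2")
network = IPv4Network("172.30.32.0/23")

# Environment values must be plain strings, so convert explicitly.
environment = {
    "SUPERVISOR": str(supervisor_ip),   # not the IPv4Address object
    "NETWORK_MASK": str(network),       # not the IPv4Network object
}

assert all(isinstance(value, str) for value in environment.values())
```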
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* Use explicit string conversion for extra_hosts IP addresses
Use the !s format specifier in the f-string to explicitly convert
IPv4Address objects to strings when building the ExtraHosts list.
While f-strings implicitly convert objects to strings, using !s makes
the conversion explicit and consistent with the environment variable
fixes in the previous commit.
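For IPv4Address the two spellings produce the same string; !s just makes the intent visible. A small example (the host name is illustrative):

```python
from ipaddress import IPv4Address

ip = IPv4Address("172.30.32.1")

# f-strings call str() implicitly, but !s makes the conversion explicit.
implicit = f"homeassistant:{ip}"
explicit = f"homeassistant:{ip!s}"
assert implicit == explicit == "homeassistant:172.30.32.1"
```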
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
---------
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Add the Docker storage driver (e.g., overlay2, vfs) to the context
information sent with Sentry error reports. This helps correlate
issues with specific storage backends and improves debugging of
Docker-related problems.
The storage driver is now included in both SETUP and RUNNING state
error reports under contexts.docker.storage_driver.
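A minimal sketch of attaching such context to a Sentry event, assuming the driver name comes from the Docker info payload's "Driver" field (the actual Supervisor hook may differ):

```python
# Sketch only: attach the Docker storage driver to a Sentry event's contexts.
# `docker_info` stands in for the data returned by the Docker info API.
def attach_docker_context(event: dict, docker_info: dict) -> dict:
    contexts = event.setdefault("contexts", {})
    contexts.setdefault("docker", {})["storage_driver"] = docker_info.get("Driver")
    return event

event = attach_docker_context({}, {"Driver": "overlay2"})
assert event["contexts"]["docker"]["storage_driver"] == "overlay2"
```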
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
* Fix MCP API proxy support for streaming and headers
This commit fixes two issues with using the Core MCP API (core/api/mcp)
through the API proxy:
1. **Streaming support**: The proxy now detects text/event-stream responses
and properly streams them instead of buffering all data. This is required
for MCP's Server-Sent Events (SSE) transport.
2. **Header forwarding**: Added MCP-required headers to the forwarded headers:
- Accept: Required for content negotiation
- Last-Event-ID: Required for resuming broken SSE connections
- Mcp-Session-Id: Required for session management across requests
The proxy now also preserves MCP-related response headers (Mcp-Session-Id)
and sets X-Accel-Buffering to "no" for streaming responses to prevent
buffering by intermediate proxies.
Tests added to verify:
- MCP headers are properly forwarded to Home Assistant
- Streaming responses (text/event-stream) are handled correctly
- Response headers are preserved
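A simplified sketch of the streaming branch in an aiohttp proxy handler; handler and variable names are illustrative, not the actual Supervisor proxy code:

```python
from aiohttp import web

async def proxy_response(request: web.Request, upstream) -> web.StreamResponse:
    """Proxy an upstream aiohttp ClientResponse, streaming SSE bodies."""
    if upstream.content_type == "text/event-stream":
        response = web.StreamResponse(status=upstream.status)
        response.content_type = "text/event-stream"
        # Tell intermediate proxies not to buffer the event stream.
        response.headers["X-Accel-Buffering"] = "no"
        if session_id := upstream.headers.get("Mcp-Session-Id"):
            response.headers["Mcp-Session-Id"] = session_id
        await response.prepare(request)
        async for chunk in upstream.content.iter_any():
            await response.write(chunk)
        await response.write_eof()
        return response
    # Non-streaming responses can still be buffered as before.
    return web.Response(
        status=upstream.status,
        body=await upstream.read(),
        content_type=upstream.content_type,
    )
```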
* Refactor: reuse stream logic for SSE responses (#3)
* Fix ruff format + cover streaming payload error
* Fix merge error
* Address review comments (headers / streaming proxy) (#4)
* Address review: header handling for streaming/non-streaming
* Forward MCP-Protocol-Version and Origin headers
* Do not forward Origin header through API proxy (#5)
---------
Co-authored-by: Stefan Agner <stefan@agner.ch>
The CpuArch enum was being used inconsistently throughout the codebase,
with some code expecting enum values and other code expecting strings.
This caused type checking issues and potential runtime errors.
Changes:
- Fix match_base() to return CpuArch enum instead of str
- Add explicit string conversions using !s formatting where arch values
are used in f-strings (build.py, model.py)
- Convert CpuArch to str explicitly in contexts requiring strings
(docker/addon.py, misc/filter.py)
- Update all tests to use CpuArch enum values instead of strings
- Update test mocks to return CpuArch enum values
This ensures type consistency and improves MyPy type checking accuracy
across the architecture detection and management code.
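A small sketch of the conversion pattern; the CpuArch enum and image name below are stand-ins for the real Supervisor definitions:

```python
from enum import StrEnum

class CpuArch(StrEnum):
    """Stand-in for Supervisor's CPU architecture enum."""
    AARCH64 = "aarch64"
    AMD64 = "amd64"

arch = CpuArch.AARCH64

# Where a plain string is required, convert explicitly instead of letting
# the enum object leak through.
image_tag = f"example/{arch!s}-base:latest"
labels = {"io.hass.arch": str(arch)}
assert image_tag == "example/aarch64-base:latest"
```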
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Update type hints throughout backup/restore code to match actual types:
- Change tarfile.TarFile to SecureTarFile for backup/restore methods
- Add None union types for Backup properties that can return None
- Fix exclude_database parameter to accept None in restore method
- Update API backup methods to handle None return from backup tasks
- Fix condition check for exclude_database to explicitly compare with True
- Add assertion to help type checker with indexed assignment
These changes improve type safety and resolve type checking issues
discovered by runtime type validation.
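Illustrative only; the method and attribute names below are hypothetical, but they show the annotation pattern described above:

```python
from securetar import SecureTarFile  # tar handling class actually used for backups

class Backup:
    """Sketch of the annotation pattern, not the real Backup class."""

    homeassistant_version: str | None  # properties may legitimately return None

    async def restore_homeassistant(
        self,
        tar_file: SecureTarFile,               # previously annotated as tarfile.TarFile
        exclude_database: bool | None = None,  # None must be accepted here
    ) -> None:
        ...
```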
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
The manifest fetcher was using docker.io as the registry API endpoint,
but Docker Hub's actual registry API is at registry-1.docker.io. When
trying to access https://docker.io/v2/..., requests were being redirected
to https://www.docker.com/ (the marketing site), which returned HTML
instead of JSON, causing manifest fetching to fail.
This matches exactly what Docker itself does internally - see
daemon/pkg/registry/config.go:49 where Docker hardcodes
DefaultRegistryHost = "registry-1.docker.io" for registry operations.
Changes:
- Add DOCKER_HUB_API constant for the actual API endpoint
- Add _get_api_endpoint() helper to translate docker.io to
registry-1.docker.io for HTTP API calls
- Update _get_auth_token() and _fetch_manifest() to use the API endpoint
- Keep docker.io as the registry identifier for naming and credentials
- Add tests to verify the API endpoint translation
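A sketch of the translation helper; the constant and function names follow the commit message, while the surrounding module is assumed:

```python
DOCKER_HUB = "docker.io"
DOCKER_HUB_API = "registry-1.docker.io"

def _get_api_endpoint(registry: str) -> str:
    """Return the host to use for registry HTTP API calls.

    Docker Hub is addressed as docker.io for naming and credentials, but its
    actual registry API lives at registry-1.docker.io.
    """
    return DOCKER_HUB_API if registry == DOCKER_HUB else registry

assert _get_api_endpoint("docker.io") == "registry-1.docker.io"
assert _get_api_endpoint("ghcr.io") == "ghcr.io"
```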
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
* Migrate info and events to aiodocker
* Migrate container logs to aiodocker
* Fix dns plugin loop test
* Fix mocking for docker info
* Fixes from feedback
* Harden monitor error handling
* Deleted failing tests because they were not useful
* Add frontend_development_pr to backup exclusion list
* update frontend dev pr folder name to frontend_development_artifacts
* Update backup exclusion list to replace frontend_development_artifacts with .cache
* Fix Docker exec exit code handling by using detach=False
When executing commands inside containers using `container_run_inside()`,
the exec metadata did not contain a valid exit code because `detach=True`
starts the exec in the background and returns immediately before completion.
Root cause: With `detach=True`, Docker's exec start() returns an awaitable
that yields output bytes. However, the await only waits for the HTTP/REST
call to complete, NOT for the actual exec command to finish. The command
continues running in the background after the HTTP response is received.
Calling `inspect()` immediately after returns `ExitCode: None` because
the exec hasn't completed yet.
Solution: Use `detach=False` which returns a Stream object that:
- Automatically waits for exec completion by reading from the stream
- Provides actual command output (not just empty bytes)
- Makes exit code immediately available after stream closes
- No polling needed
Changes:
- Switch from `detach=True` to `detach=False` in container_run_inside()
- Read output from stream using async context manager
- Add defensive validation to ensure ExitCode is never None
- Update tests to mock the Stream interface using AsyncMock
- Add debug log showing exit code after command execution
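A rough sketch of the flow, assuming aiodocker's Exec/Stream interface; names and error handling are simplified compared to the real container_run_inside():

```python
import aiodocker

async def run_inside(container_name: str, command: str) -> tuple[int, bytes]:
    """Run a command in a container and return (exit_code, output)."""
    docker = aiodocker.Docker()
    try:
        container = await docker.containers.get(container_name)
        execution = await container.exec(command, stdout=True, stderr=True)

        output = b""
        # detach=False returns a Stream; reading it to EOF also means the
        # exec has finished, so the exit code is available right after.
        async with execution.start(detach=False) as stream:
            while message := await stream.read_out():
                output += message.data

        inspection = await execution.inspect()
        exit_code = inspection.get("ExitCode")
        if exit_code is None:  # defensive: should not happen after EOF
            raise RuntimeError("Exec finished without an exit code")
        return exit_code, output
    finally:
        await docker.close()

# e.g. asyncio.run(run_inside("homeassistant", "ls /config"))
```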
Fixes #6518
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* Address review feedback
---------
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
It seems that agents like Claude tend to use absolute imports even though it's
a pattern we're trying to avoid. Adjust the instructions so they stop doing it.
The Supervisor's /core/api proxy previously only supported GET and POST
methods, returning 405 Method Not Allowed for DELETE requests. This
prevented addons from calling Home Assistant Core REST API endpoints
that require DELETE methods, such as deleting automations, scripts,
or scenes.
The underlying proxy implementation already supported passing through
any HTTP method via request.method.lower(), so only the route
registration was needed.
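With aiohttp routing this amounts to registering one more route; the paths and handler below are illustrative, not the actual proxy module:

```python
from aiohttp import web

async def api_proxy(request: web.Request) -> web.StreamResponse:
    """Stand-in for the Supervisor proxy handler, which already forwards
    request.method to Home Assistant Core."""
    return web.Response(text=f"proxied {request.method} {request.path}")

app = web.Application()
app.add_routes(
    [
        web.get("/core/api/{path:.+}", api_proxy),
        web.post("/core/api/{path:.+}", api_proxy),
        # Newly registered so DELETE no longer returns 405:
        web.delete("/core/api/{path:.+}", api_proxy),
    ]
)
```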
Fixes #6509
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
* Add exception handling for pull progress tracking errors
Wrap progress event processing in try-except blocks to prevent image
pulls from failing due to progress tracking issues. This ensures that
progress updates, which are purely informational, never abort the
actual Docker pull operation.
Catches two categories of exceptions:
- ValueError: Includes "Cannot update a job that is done" errors that
can occur under rare event combinations (similar to #6513)
- All other exceptions: Defensive catch-all for any unexpected errors
in the progress tracking logic
All exceptions are logged with full context (layer ID, status, progress)
and sent to Sentry for tracking and debugging. The pull continues
successfully in all cases.
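The guard has roughly this shape; the job object and Sentry helper below are placeholders for the real Supervisor pieces:

```python
import logging

_LOGGER = logging.getLogger(__name__)

def handle_pull_event(job, event: dict, capture_exception) -> None:
    """Apply a Docker pull progress event without ever aborting the pull.

    `job` and `capture_exception` stand in for the Supervisor job object
    and Sentry helper.
    """
    layer_id = event.get("id")
    try:
        job.update_progress(event)  # hypothetical progress update call
    except ValueError as err:
        # e.g. "Cannot update a job that is done" under rare event orderings
        _LOGGER.warning(
            "Ignoring pull progress error for layer %s (%s): %s",
            layer_id, event.get("status"), err,
        )
        capture_exception(err)
    except Exception as err:  # defensive catch-all, progress is informational
        _LOGGER.warning("Unexpected pull progress error for layer %s: %s", layer_id, err)
        capture_exception(err)
```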
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* Apply suggestions from code review
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
* Apply suggestions from code review
---------
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
The aiodocker 0.25.0 upgrade (PR #6448) changed how DockerError handles
the message parameter. The library now extracts the message string from
Docker API JSON responses before passing it to DockerError, rather than
passing the entire dict.
The port conflict detection tests were written before this change and
incorrectly passed dicts to DockerError. This caused TypeErrors when
the port conflict detection code tried to match err.message with a
regex, expecting a string but receiving a dict.
Update both test_addon_start_port_conflict_error and
test_observer_start_port_conflict to pass message strings directly,
matching the real aiodocker 0.25.0 behavior.
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
* Map port conflict on start error into a known error
* Apply suggestions from code review
* Run ruff format
---------
Co-authored-by: Stefan Agner <stefan@agner.ch>
* Use count-based progress for Docker image pulls
Refactor Docker image pull progress to use a simpler count-based approach
where each layer contributes equally (100% / total_layers) regardless of
size. This replaces the previous size-weighted calculation that was
susceptible to progress regression.
The core issue was that Docker rate-limits concurrent downloads (~3 at a
time) and reports layer sizes only when downloading starts. With size-
weighted progress, large layers appearing late would cause progress to
drop dramatically (e.g., 59% -> 29%) as the total size increased.
The new approach:
- Each layer contributes equally to overall progress
- Per-layer progress: 70% download weight, 30% extraction weight
- Progress only starts after first "Downloading" event (when layer
count is known)
- Always caps at 99% - job completion handles final 100%
This simplifies the code by moving progress tracking to a dedicated
module (pull_progress.py) and removing complex size-based scaling logic
that tried to account for unknown layer sizes.
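A minimal sketch of the count-based calculation, using the weights from this commit message; the tracker structure is assumed:

```python
DOWNLOAD_WEIGHT = 0.7   # 70% of a layer's share for downloading
EXTRACT_WEIGHT = 0.3    # 30% for extraction

def overall_progress(layers: dict[str, dict[str, float]]) -> float:
    """Each layer contributes equally; cap at 99% until the job completes.

    `layers` maps layer id -> {"download": 0..1, "extract": 0..1}.
    """
    if not layers:
        return 0.0
    share = 100.0 / len(layers)
    total = sum(
        share * (DOWNLOAD_WEIGHT * state["download"] + EXTRACT_WEIGHT * state["extract"])
        for state in layers.values()
    )
    return min(total, 99.0)

# Two layers: one fully done, one still downloading.
print(overall_progress({
    "aaa": {"download": 1.0, "extract": 1.0},
    "bbb": {"download": 0.5, "extract": 0.0},
}))  # -> 67.5
```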
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Exclude already-existing layers from pull progress calculation
Layers that already exist locally should not count towards download
progress since there's nothing to download for them. Only layers that
need pulling are included in the progress calculation.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Add registry manifest fetcher for size-based pull progress
Fetch image manifests directly from container registries before pulling
to get accurate layer sizes upfront. This enables size-weighted progress
tracking where each layer contributes proportionally to its byte size,
rather than equal weight per layer.
Key changes:
- Add RegistryManifestFetcher that handles auth discovery via
WWW-Authenticate headers, token fetching with optional credentials,
and multi-arch manifest list resolution
- Update ImagePullProgress to accept manifest layer sizes via
set_manifest() and calculate size-weighted progress
- Fall back to count-based progress when manifest fetch fails
- Pre-populate layer sizes from manifest when creating layer trackers
The manifest fetcher supports ghcr.io, Docker Hub, and private
registries by using credentials from Docker config when available.
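A condensed sketch of the auth-then-manifest flow, with header parsing simplified; the real fetcher also handles credentials, multi-arch manifest lists, and error cases:

```python
import aiohttp

MANIFEST_TYPES = ", ".join([
    "application/vnd.oci.image.index.v1+json",
    "application/vnd.oci.image.manifest.v1+json",
    "application/vnd.docker.distribution.manifest.list.v2+json",
    "application/vnd.docker.distribution.manifest.v2+json",
])

async def fetch_manifest(session: aiohttp.ClientSession, registry: str,
                         image: str, tag: str) -> dict:
    manifest_url = f"https://{registry}/v2/{image}/manifests/{tag}"

    # 1. Probe the registry; a 401 carries a WWW-Authenticate challenge.
    async with session.get(manifest_url, headers={"Accept": MANIFEST_TYPES}) as resp:
        if resp.status != 401:
            return await resp.json(content_type=None)
        challenge = resp.headers["WWW-Authenticate"]  # Bearer realm=...,service=...

    params = dict(
        part.split("=", 1) for part in challenge.removeprefix("Bearer ").split(",")
    )

    # 2. Fetch an (anonymous) token, then retry the manifest request with it.
    async with session.get(params["realm"].strip('"'), params={
        "service": params["service"].strip('"'),
        "scope": f"repository:{image}:pull",
    }) as resp:
        token = (await resp.json())["token"]

    async with session.get(
        manifest_url,
        headers={"Authorization": f"Bearer {token}", "Accept": MANIFEST_TYPES},
    ) as resp:
        return await resp.json(content_type=None)
```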
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Clamp progress to 100 to prevent floating point precision issues
Floating point arithmetic in weighted progress calculations can produce
values slightly above 100 (e.g., 100.00000000000001). This causes
validation errors when the progress value is checked.
Add min(100, ...) clamping to both size-weighted and count-based
progress calculations to ensure the result never exceeds 100.
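For example:

```python
# Weighted sums of floats can come out a hair above 100
# (e.g. 100.00000000000001), so clamp before reporting.
def clamp_progress(value: float) -> float:
    return min(100.0, value)

assert clamp_progress(100.00000000000001) == 100.0
```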
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Use sys_websession for manifest fetcher instead of creating new session
Reuse the existing CoreSys websession for registry manifest requests
instead of creating a new aiohttp session. This improves performance
and follows the established pattern used throughout the codebase.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Make platform parameter required and warn on missing platform
- Make platform a required parameter in get_manifest() and _fetch_manifest()
since it's always provided by the calling code
- Return None and log warning when requested platform is not found in
multi-arch manifest list, instead of falling back to first manifest
which could be the wrong architecture
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Log manifest fetch failures at warning level
Users will notice degraded progress tracking when manifest fetch fails,
so log at warning level to help diagnose issues.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Add pylint disable comments for protected access in manifest tests
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Separate download_current and total_size updates in pull progress
Update download_current and total_size independently in the DOWNLOADING
handler. This ensures download_current is updated even when total is
not yet available.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Reject invalid platform format in manifest selection
---------
Co-authored-by: Claude <noreply@anthropic.com>
During system shutdown (reboot/poweroff), the watchdog was incorrectly
detecting the Home Assistant Core container as failed and attempting to
restart it. This occurred because Docker was stopping all containers in
parallel with Supervisor's own shutdown sequence, causing the watchdog
to trigger while add-ons were still being stopped.
This led to an abrupt termination of Core before it could cleanly shut
down its SQLite database, resulting in a warning on the next startup:
"The system could not validate that the sqlite3 database was shutdown
cleanly".
The fix registers a supervisor state change listener that unregisters
the watchdog when entering any shutdown state (SHUTDOWN, STOPPING, or
CLOSE). This prevents restart attempts during both user-initiated
reboots (via API) and external shutdown signals (Docker SIGTERM,
console reboot commands).
Since SHUTDOWN, STOPPING, and CLOSE are terminal states with no reverse
transition back to RUNNING, no re-registration logic is needed.
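The logic reduces to roughly this, with placeholder class and state names:

```python
from enum import Enum

class CoreState(str, Enum):
    RUNNING = "running"
    SHUTDOWN = "shutdown"
    STOPPING = "stopping"
    CLOSE = "close"

SHUTDOWN_STATES = {CoreState.SHUTDOWN, CoreState.STOPPING, CoreState.CLOSE}

class CoreWatchdog:
    """Stand-in for the Core container watchdog registration."""

    def __init__(self) -> None:
        self.registered = True  # container-event listener is active

    def on_supervisor_state_change(self, state: CoreState) -> None:
        # Shutdown states are terminal, so once unregistered the watchdog
        # never needs to be re-registered.
        if state in SHUTDOWN_STATES and self.registered:
            self.registered = False

watchdog = CoreWatchdog()
watchdog.on_supervisor_state_change(CoreState.STOPPING)
assert not watchdog.registered
```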
Fixes #6511
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Catch ValueError exceptions with "Cannot update a job that is done"
during image pull progress updates. This error occurs intermittently
when progress events arrive after a job has completed. It is not clear
why this happens; perhaps the job gets prematurely marked as done, or
the pull events arrive in a different order than expected.
Rather than failing the entire pull operation, we now:
- Log a warning with context (layer ID, status, progress)
- Send the error to Sentry for tracking and investigation
- Continue with the pull operation
This prevents pull failures while gathering information to help
identify and fix the root cause of the race condition.
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
The test was failing intermittently in CI because concurrent async
operations in asyncio.gather() were getting slightly different
timestamps (microseconds apart) despite being inside a time_machine
context.
When test2.execute() calls were timestamped at start+2ms due to async
scheduling delays, they weren't cleaned up in the final test block
(cutoff = start+1ms), causing a false rate limit error.
Fix by using tick=False to completely freeze time during the gather,
ensuring all 4 calls get the exact same timestamp.
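A minimal reproduction of the pattern with time_machine, assuming its travel(..., tick=False) context manager:

```python
import asyncio
from datetime import datetime, timezone

import time_machine

async def do_call() -> datetime:
    await asyncio.sleep(0)  # yield to the event loop like a real job call
    return datetime.now(timezone.utc)

async def main() -> None:
    start = datetime(2024, 1, 1, tzinfo=timezone.utc)
    # tick=False freezes the clock, so concurrently scheduled calls all
    # observe exactly the same timestamp instead of drifting by microseconds.
    with time_machine.travel(start, tick=False):
        stamps = await asyncio.gather(*(do_call() for _ in range(4)))
    assert all(stamp == start for stamp in stamps)

asyncio.run(main())
```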
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
* Check frontend availability after Home Assistant Core updates
Add verification that the frontend is actually accessible at "/" after core
updates to ensure the web interface is serving properly, not just that the
API endpoints respond.
Previously, the update verification only checked API endpoints and whether
the frontend component was loaded. This could miss cases where the API is
responsive but the frontend fails to serve the UI.
Changes:
- Add check_frontend_available() method to HomeAssistantAPI that fetches
the root path and verifies it returns HTML content
- Integrate frontend check into core update verification flow after
confirming the frontend component is loaded
- Trigger automatic rollback if frontend is inaccessible after update
- Fix blocking I/O calls in rollback log file handling to use async
executor
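A rough sketch of such a check with aiohttp; the method name follows the commit message, while the URL and error handling are simplified:

```python
import aiohttp

async def check_frontend_available(session: aiohttp.ClientSession, base_url: str) -> bool:
    """Return True if "/" serves HTML, i.e. the frontend is actually up."""
    try:
        async with session.get(f"{base_url}/") as resp:
            if resp.status != 200:
                return False
            return resp.content_type.startswith("text/html")
    except aiohttp.ClientError:
        return False
```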
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Avoid checking frontend if config data is None
* Improve pytest tests
* Make sure Core returns a valid config
* Remove Core version check in frontend availability test
The call site already makes sure that an actual Home Assistant Core
instance is running before calling the frontend availability test.
So this is rather redundant. Simplify the code by removing the version
check and update tests accordingly.
* Add test coverage for get_config
---------
Co-authored-by: Claude <noreply@anthropic.com>
* Add route_metric attribute to IpProperties class
Signed-off-by: David Rapan <david@rapan.cz>
* Refactor dbus setting IP constants
Signed-off-by: David Rapan <david@rapan.cz>
* Add route metric
Signed-off-by: David Rapan <david@rapan.cz>
* Merge test_api_network_interface_info
Signed-off-by: David Rapan <david@rapan.cz>
* Add test case for route metric update
Signed-off-by: David Rapan <david@rapan.cz>
---------
Signed-off-by: David Rapan <david@rapan.cz>
* Migrate all docker container interactions to aiodocker
* Remove containers_legacy since it's no longer used
* Add back remove color logic
* Revert accidental invert of conditional in setup_network
* Fix typos found by copilot
* Apply suggestions from code review
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Revert "Apply suggestions from code review"
This reverts commit 0a475433ea.
---------
Co-authored-by: Stefan Agner <stefan@agner.ch>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Improve Supervisor startup wait logic in CI workflow
The 'Wait for Supervisor to come up' step was failing intermittently when
the Supervisor API wasn't immediately available. The original script relied
on bash's lenient error handling in command substitution, which could fail
unpredictably.
Changes:
- Use curl -f flag to properly handle HTTP errors
- Use jq -e for robust JSON validation and exit code handling
- Add explicit 5-minute timeout with elapsed time tracking
- Reduce log noise by only reporting progress every 15 seconds
- Add comprehensive error diagnostics on timeout:
* Show last API response received
* Dump last 50 lines of Supervisor logs
- Show startup time on success for performance monitoring
This makes the CI workflow more reliable and easier to debug when the
Supervisor fails to start.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* Use YAML anchor to deduplicate wait step in CI workflow
The 'Wait for Supervisor to come up' step appears twice in the
run_supervisor job - once after starting and once after restarting.
Use a YAML anchor to define the step once and reference it on the
second occurrence.
This reduces duplication by 28 lines and makes future maintenance
easier by ensuring both wait steps remain identical.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
---------
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Add more syslog identifiers (most importantly containerd), extracted from real
systems, that were missing from the list. This should make the host logs contain
the same events as journalctl output, minus audit logs and Docker container logs.