mirror of https://github.com/home-assistant/operating-system.git synced 2026-02-15 07:29:08 +00:00
operating-system/buildroot-external/rootfs-overlay/usr/lib/sysctl.d/20-network.conf
@RubenKelevra d59053301e sysctl: disable TCP slow start after idle (#4239)
This knob controls whether Linux throws away its congestion
window (cwnd) after a connection has been idle for at least one
retransmission timeout (RTO). With a value of 0, Linux keeps the
cwnd it had before the idle period and can send that amount
immediately when the application resumes writing (still bounded
by the receiver's advertised window and by pacing).
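
The current value can be checked from procfs; a minimal sketch in C
(the path is the standard sysctl-to-procfs mapping of
net.ipv4.tcp_slow_start_after_idle):

    /* Read the current value of the knob via procfs. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/net/ipv4/tcp_slow_start_after_idle", "r");
        int val = -1;

        if (f) {
            if (fscanf(f, "%d", &val) != 1)
                val = -1;
            fclose(f);
        }
        printf("tcp_slow_start_after_idle = %d\n", val);
        return 0;
    }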

With slow start after idle enabled (the default), Linux allows
only about 10 MSS (~14 KiB) in the first burst after idle. So even
when a connection to a web client stays open, a short idle period
forces multiple round trips to ramp the window back up.
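
To make the cost concrete, here is a back-of-the-envelope sketch
(it assumes classic slow-start doubling and a made-up pre-idle
window; real stacks use CUBIC and pacing, so treat the result as a
rough lower bound):

    /* RTTs needed to grow cwnd from the ~10 MSS restart window back
     * to a hypothetical pre-idle window, assuming cwnd doubles once
     * per RTT in slow start. Illustrative only. */
    #include <stdio.h>

    int main(void)
    {
        int cwnd = 10;      /* restart window after idle, in MSS */
        int target = 80;    /* hypothetical pre-idle cwnd, in MSS */
        int rtts = 0;

        while (cwnd < target) {
            cwnd *= 2;      /* one doubling per RTT */
            rtts++;
        }
        printf("%d extra RTTs to regain the window\n", rtts); /* prints 3 */
        return 0;
    }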

On Wi-Fi, local connections often have very low RTTs, which drives
the RTO down. Between page navigations the connection is considered
idle by Linux. If the next request happens during a transient
latency spike on the Wi-Fi link, the sender starts with a tiny
cwnd and must grow it over many RTTs, so the spike causes outsized
and visible loading delays.

For devices behind typical Internet uplinks, the higher RTT makes
the initial ramp-up feel even slower until the window regains its
size. However, the higher RTT also raises the RTO, so by Linux's
standards the connection takes longer to drop to idle and is less
likely to be considered idle between navigations.

This change does not affect flows with very small receive windows
(e.g. many microcontrollers), which are limited by the peer's
advertised window rather than the sender's cwnd.
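
In other words, the sender can only keep min(cwnd, rwnd) bytes in
flight; a tiny illustration with made-up window sizes:

    /* In-flight data is bounded by min(cwnd, rwnd). With a tiny
     * advertised receive window, resetting cwnd after idle changes
     * nothing. All values below are hypothetical. */
    #include <stdio.h>

    static unsigned int in_flight_limit(unsigned int cwnd, unsigned int rwnd)
    {
        return cwnd < rwnd ? cwnd : rwnd;
    }

    int main(void)
    {
        unsigned int rwnd = 2048;   /* e.g. a microcontroller's window */

        printf("before idle: %u bytes\n", in_flight_limit(80 * 1448, rwnd));
        printf("after reset: %u bytes\n", in_flight_limit(10 * 1448, rwnd));
        return 0;   /* both print 2048: the peer's window is the limit */
    }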

Example RTOs on low-jitter, low-loss connections:

Defaults:
TCP_RTO_MIN = 200 ms
TCP_RTO_MAX = 120 s
low-jitter path, so RTTVAR sits at its floor: rttvar_us = 200 ms
HZ = 1000, 250 or 100 (depending on kernel configuration)

*31 ms average RTT*

- SRTT ≈ 31 ms; RTTVAR ≈ 200 ms -> sum = 231 ms
- 'usecs_to_jiffies(231000)' = 231 jiffies (HZ 1000) -> RTO ≈ 231 ms
- If 'HZ = 250' (4 ms tick), ceil(231/4) = 58 jiffies -> 232 ms RTO
- If 'HZ = 100' (10 ms tick), ceil(231/10) = 24 jiffies -> 240 ms RTO

*178 ms average RTT*

- HZ=1000 (1 ms tick): 378 ms RTO
- HZ=250 (4 ms tick): ceil(378/4)=95 -> 380 ms RTO
- HZ=100 (10 ms tick): ceil(378/10)=38 -> 380 ms RTO

*292 ms average RTT*

- HZ=1000 (1 ms tick): 492 ms RTO
- HZ=250 (4 ms tick): ceil(492/4)=123 -> 492 ms RTO
- HZ=100 (10 ms tick): ceil(492/10)=50 -> 500 ms RTO

Any loss or jitter will increase those RTO values.
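
The arithmetic above can be reproduced with a short sketch (the
constants and the ceiling-to-jiffies rounding mirror the examples;
the kernel's real bookkeeping lives in tcp_set_rto() and
usecs_to_jiffies()):

    /* Reproduce the example RTOs: RTO = SRTT + RTTVAR term (floored
     * at TCP_RTO_MIN), rounded up to whole jiffies for a given HZ. */
    #include <stdio.h>

    static unsigned int rto_ms(unsigned int srtt_ms, unsigned int hz)
    {
        const unsigned int rto_min_ms = 200;        /* TCP_RTO_MIN */
        unsigned int sum_ms = srtt_ms + rto_min_ms; /* low jitter: RTTVAR at floor */
        unsigned int tick_ms = 1000 / hz;
        unsigned int jiffies = (sum_ms + tick_ms - 1) / tick_ms; /* ceil */

        return jiffies * tick_ms;
    }

    int main(void)
    {
        const unsigned int rtts[] = { 31, 178, 292 };
        const unsigned int hzs[] = { 1000, 250, 100 };

        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                printf("RTT %3u ms, HZ %4u -> RTO %u ms\n",
                       rtts[i], hzs[j], rto_ms(rtts[i], hzs[j]));
        return 0;
    }
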
2025-08-26 19:37:48 +02:00


# Since multicast is rather popular and we have many integrations running,
# more than the default of 20 memberships might be required.
net.ipv4.igmp_max_memberships = 1024
# Increase maximum receive and send buffer size
net.core.rmem_max = 4194304
net.core.wmem_max = 4194304
# Disable TCP slow start after idle to keep cwnd across idle
net.ipv4.tcp_slow_start_after_idle = 0
# Enable linear RTO backoff for "thin" TCP streams.
# "Thin" per kernel: tp->packets_out < 4 and not in initial slow start.
# Ref: tcp_stream_is_thin(tp) in include/net/tcp.h
# Rationale: reduce tail latency for API-style connections after sporadic loss.
net.ipv4.tcp_thin_linear_timeouts = 1
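
For reference, the "thin stream" helper mentioned in the comment is
defined in include/net/tcp.h; in recent kernels it reads roughly:

    /* include/net/tcp.h (recent kernels): a stream is "thin" when
     * fewer than 4 packets are in flight and it has left initial
     * slow start. */
    static inline bool tcp_stream_is_thin(struct tcp_sock *tp)
    {
        return tp->packets_out < 4 && !tcp_in_initial_slowstart(tp);
    }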