At any time, dnsmasq will have a set of sockets open, bound to
random ports, on which it sends queries to upstream nameservers.
This patch fixes the existing problem that a reply for ANY in-flight
query would be accepted via ANY open port, which increases the
chances of an attacker flooding answers "in the blind" in an
attempt to poison the DNS cache. CERT VU#434904 refers.
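A minimal sketch of the idea (the struct and function are hypothetical, not dnsmasq's actual code): each in-flight query remembers the socket it was sent from, and a reply is accepted only if it arrives on that same socket, in addition to the usual transaction-ID and question checks.

/* Hypothetical illustration: tie each in-flight query to the socket it was
 * sent on, and reject replies that arrive on any other open port. */
struct inflight_query {
  unsigned short id;   /* transaction ID used for the upstream query */
  int fd;              /* socket/port the query was sent from */
};

static int reply_acceptable(const struct inflight_query *q, int recv_fd,
                            unsigned short reply_id)
{
  return q->fd == recv_fd && q->id == reply_id;
}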
A call to get_new_frec() for a DNSSEC query could manage to
free the original frec that we're doing the DNSSEC query to validate.
Bad things then happen.
This requires that the original frec is old, so it doesn't happen
in practice. I found it when running under gdb, and there have been
reports of SEGV associated with large system-clock warps which are
probably the same thing.
Hello,
My home network has a DNS search domain of home.arpa and my machine's dnsmasq
instance is configured with:
server=/home.arpa/192.168.0.1
server=//192.168.0.1
stop-dns-rebind
rebind-domain-ok=home.arpa
rebind-domain-ok=// # Match unqualified domains
Querying my router's FQDN works as expected:
dnsmasq: query[A] gateway.home.arpa from 127.0.0.1
dnsmasq: forwarded gateway.home.arpa to 192.168.0.1
dnsmasq: reply gateway.home.arpa is 192.168.0.1
But using an unqualified domain name does not:
dnsmasq: query[A] gateway from 127.0.0.1
dnsmasq: forwarded gateway to 192.168.0.1
dnsmasq: possible DNS-rebind attack detected: gateway
The attached patch addresses this issue by checking for SERV_NO_REBIND when
handling dotless domains.
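For illustration, the shape of the check is roughly as follows. SERV_FOR_NODOTS and SERV_NO_REBIND are dnsmasq's flag names, but the helper, the flag values, and the surrounding logic here are simplified stand-ins, not the attached patch verbatim.

#include <string.h>

#define SERV_FOR_NODOTS  0x01   /* illustrative values only */
#define SERV_NO_REBIND   0x02

struct server { int flags; /* ... */ };

/* When a dotless (unqualified) name matches a "server=//..." entry, honour
 * that entry's no-rebind flag, just as the domain-matching path already
 * does for rebind-domain-ok=<domain>. */
static void check_nodots_rebind(const char *qname, const struct server *serv,
                                int *norebind)
{
  if (!strchr(qname, '.') && (serv->flags & SERV_FOR_NODOTS) &&
      (serv->flags & SERV_NO_REBIND))
    *norebind = 1;
}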
From 0460b07108b009cff06e29eac54910ec2e7fafce Mon Sep 17 00:00:00 2001
From: guns <self@sungpae.com>
Date: Mon, 30 Dec 2019 16:34:23 -0600
Subject: [PATCH] Check for SERV_NO_REBIND on unqualified domains
Some REFUSED answers to DNSSEC-originated queries would
bypass the DNSSEC code entirely, and be returned as answers
to the original query. In the process, they'd mess up data structures
so that a retry of the original query would crash dnsmasq.
In a reply proving that a DS doesn't exist, it doesn't matter if RRs
in the auth section _other_ than NSEC/NSEC3 are not signed. We can't
set the AD flag when answering the query, but it still proves
that the DS doesn't exist for internal use.
As one of the RRs which may not be signed is the SOA record, use the
TTL of the NSEC record to cache the negative result, not one
derived from the SOA.
Thanks to Tore Anderson for spotting and diagnosing the bug.
msg_controllen should be set using CMSG_SPACE() to account for padding.
RFC3542 provides more details:
While sending an application may or may not include padding at the end
of last ancillary data in msg_controllen and implementations must
accept both as valid.
At least OpenBSD rejects control messages if msg_controllen doesn't
account for padding, so use CMSG_SPACE() for maximal portability. This
is consistent with the example provided in the Linux cmsg(3) manpage.
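A minimal sketch of the pattern, following the SCM_RIGHTS example from cmsg(3) rather than dnsmasq's exact code: msg_controllen is set with CMSG_SPACE(), which accounts for the trailing padding, while each cmsg_len still uses CMSG_LEN().

#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>

static ssize_t send_fd(int sock, int fd)
{
  union {
    struct cmsghdr align;                        /* forces correct alignment */
    char buf[CMSG_SPACE(sizeof(int))];
  } control;
  struct msghdr msg = { 0 };
  struct cmsghdr *cmsg;
  char dummy = 'x';
  struct iovec iov = { &dummy, 1 };

  msg.msg_iov = &iov;
  msg.msg_iovlen = 1;
  msg.msg_control = control.buf;
  msg.msg_controllen = CMSG_SPACE(sizeof(int));  /* padding included */

  cmsg = CMSG_FIRSTHDR(&msg);
  cmsg->cmsg_level = SOL_SOCKET;
  cmsg->cmsg_type = SCM_RIGHTS;
  cmsg->cmsg_len = CMSG_LEN(sizeof(int));        /* payload length, no padding */
  memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

  return sendmsg(sock, &msg, 0);
}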
The above is intended to increase robustness, but actually does the
opposite. The problem is that by ignoring SERVFAIL messages and hoping
for a better answer from another of the servers we've forwarded to,
we become vulnerable in the case that one or more of the configured
servers is down or not responding.
Consider the case that a domain is indeed BOGUS, and we've sent the
query to n servers. With 68f6312d4b
we ignore the first n-1 SERVFAIL replies, and only return the
final n'th answer to the client. Now, if one of the servers we are
forwarding to is down, then we won't get all n replies, and the
client will never get an answer! This is a far more likely scenario
than a temporary SERVFAIL from only one of a set of notionally identical
servers, so, on the grounds of robustness, we have to believe
any SERVFAIL answers we get, and return them to the client.
The client could be using the same recursive servers we are,
so it should, in theory, retry on SERVFAIL anyway.
This was the source of a large number of #ifdefs, originally
included for use with old embedded libc versions. I'm
sure no-one wants or needs IPv6-free code these days, so this
is a move towards more maintainable code.
If all configured DNS servers return REFUSED in
response to a query, dnsmasq ends up in an infinite loop
retransmitting the DNS query, resulting in high CPU load.
The problem is caused by the REFUSED retransmission logic, which does
not check for the end of the DNS server list iteration in strict-order mode.
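A rough sketch of the missing termination check (the data structure and function are hypothetical, not dnsmasq's):

#include <stddef.h>

struct server { struct server *next; /* ... address, flags ... */ };

/* On a REFUSED reply in strict-order mode, pick the next server to try,
 * wrapping around the list, but stop once we are back at the server we
 * started with: at that point every server has refused and the query
 * should be answered rather than retransmitted forever. */
static struct server *next_server_strict(struct server *head,
                                         struct server *first_tried,
                                         struct server *current)
{
  struct server *next = current->next ? current->next : head;

  return (next == first_tried) ? NULL : next;
}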
A single configured DNS server returning a REFUSED reply easily
triggers this problem in strict-order mode. This was introduced in
9396752c11
Thanks to Hans Dedecker <dedeckeh@gmail.com> for spotting this
and the initial patch.
The current logic is naive in the case that there is more than
one RRset in an answer (typically, when a non-CNAME query is answered
by one or more CNAME RRs, and then an answer RRset).
If all the RRsets validate, then they are cached and marked as validated,
but if any RRset doesn't validate, then the AD flag is not set (good) and
ALL the RRsets are cached and marked as not validated.
This breaks when, eg, the answer contains a validated CNAME pointing
to a non-validated answer. A subsequent query for the CNAME without the DO bit
will get an answer with the AD flag wrongly cleared, and worse, the same
query with DO set will get a cached answer without RRSIGs, rather than
being forwarded.
The code now records the validation of individual RRsets and that
is used to correctly set the "validated" bits in the cache entries.
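As a rough illustration of the approach (types and loop are hypothetical, not dnsmasq's cache code), the validation result is carried per RRset into the corresponding cache entries instead of one flag covering the whole answer:

#include <stddef.h>

struct rrset_status { int validated; /* DNSSEC result for this RRset */ };
struct cache_entry  { int validated; /* per-entry "validated" bit */ };

/* Copy the per-RRset validation result into each cache entry, so a
 * validated CNAME stays secure even when the RRset it points to failed. */
static void mark_cache_entries(const struct rrset_status *sets,
                               struct cache_entry *entries, size_t n)
{
  for (size_t i = 0; i < n; i++)
    entries[i].validated = sets[i].validated;
}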
The logic to determine whether an EDNS0 header was added was wrong. It compared
the packet length before and after the operations on the EDNS0 header,
but these can include adding options to an existing EDNS0 header. So
a query may have an existing EDNS0 header, which is extended, and the logic
thinks that it had a header added de novo.
Replace this with a simpler system. Check if the packet has an EDNS0 header,
do the updates/additions, and then check again. If it didn't have one
initially, but has one afterwards, that's the correct condition
to strip the header from a reply, and to assume that the client
cannot handle packets larger than 512 bytes.
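A sketch of the simpler check; packet_has_edns0() and add_edns0_options() are hypothetical stand-ins for the real helpers.

#include <stddef.h>

/* Hypothetical helpers standing in for dnsmasq's real functions. */
int packet_has_edns0(const unsigned char *pkt, size_t plen);
size_t add_edns0_options(unsigned char *pkt, size_t plen);

/* Returns non-zero only when the OPT record was created here, i.e. it was
 * absent before the updates and present afterwards.  Merely extending an
 * existing EDNS0 header no longer counts as "added", so the reply is not
 * stripped and the client is not assumed to be limited to 512 bytes. */
static int edns0_added_here(unsigned char *pkt, size_t *plen)
{
  int had_opt = packet_has_edns0(pkt, *plen);

  *plen = add_edns0_options(pkt, *plen);

  return !had_opt && packet_has_edns0(pkt, *plen);
}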
dnsmasq allows an interface to be specified for each name server passed with
the -S option or pushed through D-Bus; when an interface is set,
queries to the server are forced via that interface.
Currently dnsmasq uses SO_BINDTODEVICE to enforce that traffic goes
through the given interface; SO_BINDTODEVICE also guarantees that any
response coming from other interfaces is ignored.
This can cause problems in some scenarios: consider the case where
eth0 and eth1 are in the same subnet and eth0 has a name server ns0
associated. There is no guarantee that the response to a query sent
via eth0 to ns0 will be received on eth0 because the local router may
have in the ARP table the MAC address of eth1 for the IP of eth0. This
can happen because Linux sends ARP responses for all the IPs of the
machine through all interfaces. The response packet on the wrong
interface will be dropped because of SO_BINDTODEVICE and the
resolution will fail.
To avoid this situation, dnsmasq should only restrict queries, but not
responses, to the given interface. A way to do this on Linux is with
the IP_UNICAST_IF and IPV6_UNICAST_IF socket options which were added
in kernel 3.4 and, respectively, glibc versions 2.16 and 2.26.
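A minimal sketch of the replacement, assuming a Linux kernel that provides these options; the function is illustrative, not the actual patch.

#include <stdint.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Restrict outgoing queries (but not incoming replies) to one interface.
 * ifindex comes from if_nametoindex(); both options take the index as a
 * 32-bit value in network byte order. */
static int set_outgoing_iface(int fd, int family, unsigned int ifindex)
{
  uint32_t idx = htonl(ifindex);

  if (family == AF_INET)
    return setsockopt(fd, IPPROTO_IP, IP_UNICAST_IF, &idx, sizeof(idx));

  return setsockopt(fd, IPPROTO_IPV6, IPV6_UNICAST_IF, &idx, sizeof(idx));
}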
Reported-by: Hector Martin <marcan@marcan.st>
Signed-off-by: Beniamino Galvani <bgalvani@redhat.com>
If a DNS server replies REFUSED to a given DNS query in strict-order mode,
no failover to the next DNS server is triggered, as the failover logic only
covers non-strict mode.
As a result, the REFUSED reply is returned to the client without first
falling back to the secondary DNS server(s).
Make failover also work for strict-order configurations when REFUSED is
returned, by deleting the strict-order check and relying only on forwardall
being equal to 0, which is the case in non-strict mode when a single server
has been contacted, or when strict-order mode has been configured.
This was causing confusion: DNSSEC queries would be sent to
servers for domains that don't do DNSSEC, but because of that status
the answers would be treated as answers to ordinary queries,
sometimes resulting in a crash.
The man page says that we don't do DNSSEC on forwarded domains, but if
you turn on dnssec_check_signatures this turns out to be untrue,
because we try to build up a DS chain to them. Since forwarded domains
are usually used for split DNS to hidden domains, they're unlikely to
verify to the DNS root anyway, so the way to do DNSSEC for them (as the
manual says) is to provide a trust anchor for each forwarder.
The problem I've run into is a split DNS setup where I want DNSSEC to
work mostly, but one of the forwarding domains doesn't have an internal
DNSSEC capable resolver. Without this patch the entire domain goes
unresolvable because the DS record query to the internal resolver
returns a failure which is interpreted as the domain being BOGUS.
The fix is not to do the DS record chase for forwarded domains.
This effectively reverts most of 51967f9807 ("SERVFAIL is an expected
error return, don't try all servers.") and 4ace25c5d6 ("Treat REFUSED (not
SERVFAIL) as an unsuccessful upstream response").
With the current behaviour, as soon as dnsmasq receives a SERVFAIL from an
upstream server, it stops trying to resolve the query and simply returns
SERVFAIL to the client. With this commit, dnsmasq will instead try to
query other upstream servers upon receiving a SERVFAIL response.
According to RFC 1034 and 1035, the semantics of SERVFAIL are those of a
temporary error condition. Recursive resolvers are expected to encounter
network or resource issues from time to time, and will respond with
SERVFAIL in this case. Similarly, if a validating DNSSEC resolver [RFC
4033] encounters issues when checking signatures (unknown signing
algorithm, missing signatures, expired signatures because of a wrong
system clock, etc), it will respond with SERVFAIL.
Note that all those behaviours are entirely different from a negative
response, which would provide a definite indication that the requested
name does not exist. In our case, if an upstream server responds with
SERVFAIL, another upstream server may well provide a positive answer for
the same query.
Thus, this commit will increase robustness whenever some upstream servers
encounter temporary issues or are misconfigured.
Quoting RFC 1034, Section 4.3.1. "Queries and responses":
If recursive service is requested and available, the recursive response
to a query will be one of the following:
- The answer to the query, possibly preface by one or more CNAME
RRs that specify aliases encountered on the way to an answer.
- A name error indicating that the name does not exist. This
may include CNAME RRs that indicate that the original query
name was an alias for a name which does not exist.
- A temporary error indication.
Here is Section 5.2.3. of RFC 1034, "Temporary failures":
In a less than perfect world, all resolvers will occasionally be unable
to resolve a particular request. This condition can be caused by a
resolver which becomes separated from the rest of the network due to a
link failure or gateway problem, or less often by coincident failure or
unavailability of all servers for a particular domain.
And finally, RFC 1035 specifies RCODE 2 for this usage, which is now more
widely known as SERVFAIL (RFC 1035, Section 4.1.1. "Header section format"):
RCODE Response code - this 4 bit field is set as part of
responses. The values have the following
interpretation:
(...)
2 Server failure - The name server was
unable to process this query due to a
problem with the name server.
For the DNSSEC-related usage of SERVFAIL, here is RFC 4033
Section 5. "Scope of the DNSSEC Document Set and Last Hop Issues":
A validating resolver can determine the following 4 states:
(...)
Insecure: The validating resolver has a trust anchor, a chain of
trust, and, at some delegation point, signed proof of the
non-existence of a DS record. This indicates that subsequent
branches in the tree are provably insecure. A validating resolver
may have a local policy to mark parts of the domain space as
insecure.
Bogus: The validating resolver has a trust anchor and a secure
delegation indicating that subsidiary data is signed, but the
response fails to validate for some reason: missing signatures,
expired signatures, signatures with unsupported algorithms, data
missing that the relevant NSEC RR says should be present, and so
forth.
(...)
This specification only defines how security-aware name servers can
signal non-validating stub resolvers that data was found to be bogus
(using RCODE=2, "Server Failure"; see [RFC4035]).
Notice the difference between a definite negative answer ("Insecure"
state), and an indefinite error condition ("Bogus" state). The second
type of error may be specific to a recursive resolver, for instance
because its system clock has been incorrectly set, or because it does not
implement newer cryptographic primitives. Another recursive resolver may
succeed for the same query.
There are other situations in which the specified behaviour is
similar to the one implemented by this commit.
For instance, RFC 2136 specifies the behaviour of a "requestor" that wants
to update a zone using the DNS UPDATE mechanism. The requestor tries to
contact all authoritative name servers for the zone, with the following
behaviour specified in RFC 2136, Section 4:
4.6. If a response is received whose RCODE is SERVFAIL or NOTIMP, or
if no response is received within an implementation dependent timeout
period, or if an ICMP error is received indicating that the server's
port is unreachable, then the requestor will delete the unusable
server from its internal name server list and try the next one,
repeating until the name server list is empty. If the requestor runs
out of servers to try, an appropriate error will be returned to the
requestor's caller.