Age | Commit message | Author |
|
Let's increase a number of timeouts as they apparently are too short for
some real-world lookups.
See:
https://github.com/systemd/systemd/issues/4003#issuecomment-279842616
In particular we change the following timeouts:
1) The first UDP retry timeout is increased from 500ms to 750ms. This is a good
idea, since some servers need a relatively long time to respond even to trivial
lookups, and giving up on our first attempt means trying a different server for
the next attempt, which has the side effect that we'll run two downgrade
iterations in parallel, on both servers. Hence, let's give servers a bit more
time in the first iteration.
2) Permit 24 retries instead of just 16 per transaction. If we end up
downgrading all the way down to UDP for a lookup we already need 5
iterations for that. If we also want to permit a couple of lost packets per
iteration (let's say 4), then we already need 20 iterations.
3) Increase the overall query timeout on the service side to 60s (from
45s), simply because very long and slow DNSSEC + CNAME chains (such as
us.ynuf.alipay.com) hit this boundary too easily. The client-side timeout
for the bus method call is increased to 90s, in order to leave room for
the D-Bus reply to go through.
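As a rough illustration of where such values end up, the new numbers could be
expressed as constants like these (the names below are made up for
illustration, not the actual resolved identifiers):
#define USEC_PER_MSEC                 1000ULL
#define USEC_PER_SEC                  1000000ULL
/* Hypothetical constants mirroring the values discussed above; the real
 * definitions in the resolved sources use different names. */
#define DNS_UDP_RETRY_TIMEOUT_USEC    (750 * USEC_PER_MSEC)  /* 1) was 500ms */
#define DNS_TRANSACTION_ATTEMPTS_MAX  24                     /* 2) was 16 */
#define DNS_QUERY_TIMEOUT_USEC        (60 * USEC_PER_SEC)    /* 3) service side, was 45s */
#define DNS_QUERY_BUS_TIMEOUT_USEC    (90 * USEC_PER_SEC)    /* 3) client-side bus call */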
|
|
There's no point in talking to a server in DNSSEC mode when we don't
actually want to verify anything.
See: #5352
|
|
The cache might contain all kinds of unauthenticated data that we really
shouldn't be using if we upgrade our feature level and suddenly are able
to get authenticated data again.
Might fix: #4866
|
|
Embedding sd_id128_t's in constant strings was rather cumbersome. We had
SD_ID128_CONST_STR which returned a const char[], but it had two problems:
- it wasn't possible to statically concatenate this array with a normal string
- gcc wasn't really able to optimize this, and generated code to perform the
"conversion" at runtime.
Because of this, even our own code in coredumpctl wasn't using
SD_ID128_CONST_STR.
Add a new macro to generate a constant string: SD_ID128_MAKE_STR.
It is not as elegant as SD_ID128_CONST_STR, because it requires a repetition
of the numbers, but in practice it is more convenient to use, and allows gcc
to generate smarter code:
$ size .libs/systemd{,-logind,-journald}{.old,}
text data bss dec hex filename
1265204 149564 4808 1419576 15a938 .libs/systemd.old
1260268 149564 4808 1414640 1595f0 .libs/systemd
246805 13852 209 260866 3fb02 .libs/systemd-logind.old
240973 13852 209 255034 3e43a .libs/systemd-logind
146839 4984 34 151857 25131 .libs/systemd-journald.old
146391 4984 34 151409 24f71 .libs/systemd-journald
It is also much easier to check if a certain binary uses a certain MESSAGE_ID:
$ strings .libs/systemd.old|grep MESSAGE_ID
MESSAGE_ID=%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x
MESSAGE_ID=%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x
MESSAGE_ID=%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x
MESSAGE_ID=%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x
$ strings .libs/systemd|grep MESSAGE_ID
MESSAGE_ID=c7a787079b354eaaa9e77b371893cd27
MESSAGE_ID=b07a249cd024414a82dd00cd181378ff
MESSAGE_ID=641257651c1b4ec9a8624d7a40a9e1e7
MESSAGE_ID=de5b426a63be47a7b6ac3eaac82e2f6f
MESSAGE_ID=d34d037fff1847e6ae669a370e694725
MESSAGE_ID=7d4958e842da4a758f6c1cdc7b36dcc5
MESSAGE_ID=1dee0369c7fc4736b7099b38ecb46ee7
MESSAGE_ID=39f53479d3a045ac8e11786248231fbf
MESSAGE_ID=be02cf6855d2428ba40df7e9d022f03d
MESSAGE_ID=7b05ebc668384222baa8881179cfda54
MESSAGE_ID=9d1aaa27d60140bd96365438aad20286
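For reference, the stringification trick behind the new macro can be sketched
roughly like this (the authoritative definition lives in sd-id128.h and may
differ in detail; SD_MESSAGE_EXAMPLE_STR is a made-up name):
/* Sketch of the new macro: each byte is written as a separate argument and
 * stringified, yielding a compile-time string literal. */
#define SD_ID128_MAKE_STR(v0, v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15) \
        #v0 #v1 #v2 #v3 #v4 #v5 #v6 #v7 #v8 #v9 #v10 #v11 #v12 #v13 #v14 #v15
/* Because the result is a plain string literal, it can be concatenated
 * statically with other literals, e.g. for journal fields: */
#define SD_MESSAGE_EXAMPLE_STR \
        SD_ID128_MAKE_STR(c7,a7,87,07,9b,35,4e,aa,a9,e7,7b,37,18,93,cd,27)
static const char field[] = "MESSAGE_ID=" SD_MESSAGE_EXAMPLE_STR;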
|
|
server feature level due to packet loss
Fixes: #4315
|
|
Fix-up for #4164
|
|
|
|
DNS servers which have route-only domains should only be used for
the specified domains. Routing queries about other domains there is a privacy
violation, prone to fail (as that DNS server was not meant to be used for other
domains), and puts unnecessary load onto that server.
Introduce a new helper function dns_server_limited_domains() that checks whether
the DNS server should only be used for some selected domains, i.e. has some
route-only domains without "~.". Use that when determining whether to query it
in the scope, and when writing resolv.conf.
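The idea behind the new helper can be sketched with a small, self-contained
example (types and names here are simplified illustrations, not the actual
resolved API):
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct domain {
        const char *name;    /* e.g. "company.example", or "." for "~." */
        bool route_only;     /* configured with a leading "~" */
};

/* A server is "limited" if it has route-only domains but none of them is "~.",
 * which would turn it back into a global server. */
static bool server_limited_domains(const struct domain *domains, size_t n) {
        bool have_route_only = false;

        for (size_t i = 0; i < n; i++) {
                if (!domains[i].route_only)
                        continue;
                have_route_only = true;
                if (strcmp(domains[i].name, ".") == 0)
                        return false;
        }

        return have_route_only;
}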
Extend the test_route_only_dns() case to ensure that the DNS server limited to
~company does not appear in resolv.conf. Add test_route_only_dns_all_domains()
to ensure that a server that also has ~. does appear in resolv.conf as global
name server. These reproduce #3420.
Add a new test_resolved_domain_restricted_dns() test case that verifies that
domain-limited DNS servers are only being used for those domains. This
reproduces #3421.
Clarify what a "routing domain" is in the manpage.
Fixes #3420
Fixes #3421
|
|
resolved fixes for handling SERVFAIL errors from servers
|
|
There might be two reasons why we get a SERVFAIL response from our selected DNS
server: because this DNS server itself is bad, or because the DNS server
actually serving the zone upstream is bad. So far we immediately downgraded our
server feature level when getting SERVFAIL, under the assumption that the first
case is the only possible case. However, this meant we'd downgrade immediately
even if we encountered the second case described above.
With this commit handling of SERVFAIL is reworked. As soon as we get a SERVFAIL
on a transaction we retry the transaction with a lower feature level, without
changing the feature level tracked for the DNS server itself. If that fails
too, we downgrade further, and so on. If during this downgrading the SERVFAIL
goes away, we assume that the DNS server we are talking to is bad but the zone
is fine, and we propagate the detected feature level to the information we
track about the DNS server. Should the SERVFAIL not go away this way, we let
the transaction fail and accept the SERVFAIL.
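A self-contained sketch of that retry policy might look as follows (every name,
type and callback here is made up for illustration; this is not the actual
resolved code):
enum { LEVEL_TCP, LEVEL_UDP, LEVEL_EDNS0, LEVEL_DO };
#define RCODE_SERVFAIL 2

typedef int (*query_fn)(int feature_level, void *userdata);

static int resolve_with_downgrade(int initial_level, int *server_level,
                                  query_fn query, void *userdata) {
        for (int level = initial_level; ; level--) {
                int rcode = query(level, userdata);

                if (rcode != RCODE_SERVFAIL) {
                        /* Lowering the level made the SERVFAIL go away: the
                         * server, not the zone, was at fault, so remember the
                         * working level for this server. */
                        if (level < initial_level)
                                *server_level = level;
                        return rcode;
                }

                if (level == LEVEL_TCP)
                        return RCODE_SERVFAIL;  /* zone itself seems broken, accept the failure */
        }
}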
|
|
In order to improve compatibility with local clients that speak DNS directly
(and do not use NSS or our bus API) listen locally on 127.0.0.53:53 and process
any queries made that way.
Note that resolved does not implement a full DNS server on this port, but
simply enough to allow normal, local clients to resolve RRs through resolved.
Specifically, it does not implement queries without the RD bit set (these are
requests where recursive lookups are explicitly disabled), nor queries with the
DNSSEC DO bit set in combination with DNSSEC CD (i.e. DNSSEC lookups with
validation turned off). It also refuses zone transfers and obsolete RR types.
All lookups done this way will be rejected with a clean error code, so that the
client side can repeat the query with a reduced feature set.
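The kind of filtering described above could be sketched like this (a simplified
illustration, not the actual stub implementation; the flag and RR type
constants follow the respective RFC values):
#include <stdbool.h>
#include <stdint.h>

#define DNS_FLAG_RD     (1 << 8)   /* Recursion Desired bit in the header flags */
#define DNS_TYPE_IXFR   251
#define DNS_TYPE_AXFR   252

static bool stub_query_acceptable(uint16_t flags, uint16_t qtype,
                                  bool edns_do, bool checking_disabled) {
        if (!(flags & DNS_FLAG_RD))
                return false;      /* iterative (non-recursive) queries are not implemented */
        if (edns_do && checking_disabled)
                return false;      /* DNSSEC lookups with validation turned off are refused */
        if (qtype == DNS_TYPE_AXFR || qtype == DNS_TYPE_IXFR)
                return false;      /* zone transfers are refused */
        return true;
}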
The code will set the DNSSEC AD flag however, depending on whether the data
resolved has been validated (or comes from a local, trusted source).
Lookups made via this mechanism are propagated to LLMNR and mDNS as necessary,
but this is only partially useful as DNS packets cannot carry IP scope data
(i.e. the ifindex), and hence link-local addresses returned cannot be used
properly (and given that LLMNR/mDNS are mostly about link-local communication
this is quite a limitation). Also, given that DNS tends to use IDNA for
non-ASCII names, while LLMNR/mDNS use UTF-8, lookups cannot be mapped 1:1.
In general this should improve compatibility with clients bypassing NSS but
it is highly recommended for clients to instead use NSS or our native bus API.
This patch also beefs up the DnsStream logic, as it reuses the code for local
TCP listening. DnsStream now provides proper reference counting for its
objects.
In order to avoid feedback loops resolved will now silently ignore 127.0.0.53
if it is specified as DNS server when reading configuration.
resolved listens on 127.0.0.53:53 instead of 127.0.0.1:53 in order to leave
the latter free for local, external DNS servers or forwarders.
This also changes the "etc.conf" tmpfiles snippet to create a symlink from
/etc/resolv.conf to /usr/lib/systemd/resolv.conf by default, thus making this
stub the default mode of operation if /etc is not populated.
|
|
We don't make use of this yet, but later work will.
|
|
Make sure we can parse DNS server addresses that use the "zone id" syntax for
link-local addresses, i.e. "fe80::c256:27ff:febb:12f%wlp3s0", when reading
/etc/resolv.conf.
Also make sure we spit this out correctly again when writing /etc/resolv.conf
and via the bus.
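A small, self-contained illustration of parsing such an address (not the actual
resolved parser):
#include <arpa/inet.h>
#include <net/if.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>

int main(void) {
        char buf[] = "fe80::c256:27ff:febb:12f%wlp3s0";
        struct in6_addr a;
        unsigned ifindex = 0;

        /* Split off the "zone id" (interface name) and resolve it to an index. */
        char *percent = strchr(buf, '%');
        if (percent) {
                *percent = '\0';
                ifindex = if_nametoindex(percent + 1);  /* 0 if the interface is unknown */
        }

        if (inet_pton(AF_INET6, buf, &a) != 1)
                return 1;

        printf("parsed link-local DNS server, ifindex=%u\n", ifindex);
        return 0;
}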
Fixes: #3359
|
|
After a few link up/down events I got this warning:
May 17 22:05:10 laptop systemd-resolved[2983]: Failed to read DNS servers for interface wlp3s0, ignoring: Argument list too long
|
|
Throughout the tree there's spurious use of spaces separating ++ and --
operators from their respective operands. Make the ++ and -- operators
consistent with the majority of existing uses; discard the spaces.
|
|
This should be handled fine now by .dir-locals.el, so there's no need to carry
that stuff in every file.
|
|
I'm not defining _DNS_SERVER_TYPE_MAX/INVALID as usual in the enum,
because it wouldn't be used, and then gcc would complain that
various switch statements don't test for _DNS_SERVER_TYPE_MAX. It seems better
to define the macro rather than add assert_not_reached() in multiple
places.
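For context, the usual pattern referred to above, and the alternative chosen
here, look roughly like this (an illustration only; the actual macro name and
enum contents may differ):
/* The usual systemd enum convention: */
typedef enum DnsServerType {
        DNS_SERVER_SYSTEM,
        DNS_SERVER_FALLBACK,
        DNS_SERVER_LINK,
        _DNS_SERVER_TYPE_MAX,
        _DNS_SERVER_TYPE_INVALID = -1,
} DnsServerType;

/* Putting _DNS_SERVER_TYPE_MAX into the enum forces every switch over the type
 * to handle it (or call assert_not_reached()). Defining an equivalent macro
 * outside the enum avoids that:
 *
 *     #define _DNS_SERVER_TYPE_MAX (DNS_SERVER_LINK + 1)
 */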
|
|
If we downgrade from DNSSEC to non-DNSSEC mode, let's log about this in a recognizable way (i.e. with a message ID);
after all, this is of major importance.
|
|
Certain Belkin routers appear to implement a broken DNS cache for A RRs and some others, but implement a pass-thru for
AAAA RRs. This has the effect that we quickly recognize the broken logic of the router when we do an A lookup, but for
AAAA everything works fine until we actually try to validate the response. Given that the validation will then
necessarily fail, let's make sure we remember, even when downgrading a feature level, that OPT or RRSIG was missing.
|
|
downgrade what we verified
If we receive a reply that lacks the OPT RR, then this is reason to downgrade what was verified before, as it's
apparently no longer true, and the previous OPT RR we saw was only superficially OK.
Similarly, if we realize that RRSIGs are not augmented, then also downgrade the feature level that was verified, as
DNSSEC is after all not supported. This check is in particular necessary, as we might notice the fact that RRSIG is not
augmented only very late, when verifying the root domain.
Also, when verifying a successful response, actually take into consideration that it might already have been reported
that RRSIG or OPT are missing in the response.
|
|
reason to
This adds logic to downgrade the feature level more aggressively when we have reason to. Specifically:
- When we get a response packet that lacks an OPT RR for a query that had it. If so, downgrade immediately to UDP mode,
i.e. don't generate EDNS0 packets anymore.
- When we get a response which we are sure should be signed, but lacks RRSIG RRs, we downgrade to EDNS0 mode, i.e.
below DO mode, since DO is apparently not really supported.
This should increase compatibility with servers that generate nonsensical responses when they receive messages with
OPT RRs and suchlike, for example the situation described here:
https://open.nlnetlabs.nl/pipermail/dnssec-trigger/2014-November/000376.html
This also changes the downgrade code to explain in a debug log message why a specific downgrade happened.
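A minimal, self-contained sketch of those two downgrade rules (names and the feature-level enum are illustrative, not
the actual resolved code):
typedef enum {
        LEVEL_TCP,     /* plain DNS over TCP */
        LEVEL_UDP,     /* plain DNS over UDP, no OPT RR */
        LEVEL_EDNS0,   /* EDNS0, but no DO bit */
        LEVEL_DO,      /* EDNS0 + DO (DNSSEC) */
} FeatureLevel;

static FeatureLevel downgrade_on_reply(FeatureLevel current,
                                       int sent_opt, int got_opt,
                                       int expected_rrsig, int got_rrsig) {
        if (sent_opt && !got_opt)
                return LEVEL_UDP;    /* EDNS0 apparently unsupported: drop OPT entirely */
        if (expected_rrsig && !got_rrsig && current > LEVEL_EDNS0)
                return LEVEL_EDNS0;  /* DNSSEC apparently unsupported: keep EDNS0, drop DO */
        return current;
}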
|
|
its own
As suggested by Vito Caputo:
https://github.com/systemd/systemd/pull/2289#discussion-diff-49276220
|
|
DNSSEC
Move detection into a set of new functions, that check whether one specific server can do DNSSEC, whether a server and
a specific transaction can do DNSSEC, or whether a transaction and all its auxiliary transactions could do so.
Also, do these checks both before we acquire additional RRs for the validation (so that we can skip them if the server
doesn't do DNSSEC anyway), and after we acquired them all (to see if any of the lookups changed our opinion about the
servers).
This also tightens the checks a bit: a server that lacks TCP support is considered incompatible with DNSSEC too.
|
|
This makes it easier to log information about a specific DnsServer object.
|
|
This changes the DnsServer logic to count failed UDP and failed TCP transactions separately. This is useful so that we
don't end up downgrading the feature level from one UDP level to a lower UDP level just because a TCP connection, made
in response to a TC reply, failed.
This also adds accounting of truncated packets. If we detect incoming truncated packets and count too many failed TCP
connections (which are the normal fallback if we get a truncated UDP packet), we downgrade the feature level, given
that the responses at the current levels don't get through and we somehow need to make sure they become smaller, which
they will do if we don't request DNSSEC or EDNS support.
This makes resolved work much better with crappy DNS servers that do not implement TCP and support only limited UDP
packet sizes, but otherwise support DNSSEC RRs. They end up choking on the generally larger DNSSEC RRs, and there's no
way to retrieve the full data.
|
|
TCP or vice versa
Under the assumption that packet failures (i.e. FORMERR, SERVFAIL, NOTIMP) are caused by the packet contents, not the
transport used, we shouldn't switch between UDP and TCP when we get them, but only downgrade the higher levels down to
UDP.
|
|
If we failed to contact a DNS server via TCP, bump the feature level back up to UDP. This way we'll switch back and
forth between UDP and TCP if we fail to contact a host.
Generally, we prefer UDP over TCP, which is why UDP is a higher feature level. But some servers support only UDP and
not TCP, hence when we reach the lowest feature level of TCP and want to downgrade from there, we pick UDP again. This
way we keep downgrading until we reach TCP, and then we cycle between UDP and TCP.
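That bottom-of-the-ladder behaviour can be sketched like this (the enum and names are illustrative only):
typedef enum {
        LEVEL_TCP,    /* lowest level */
        LEVEL_UDP,
        LEVEL_EDNS0,
        LEVEL_DO,     /* highest level */
} FeatureLevel;

static FeatureLevel downgrade(FeatureLevel current) {
        if (current == LEVEL_TCP)
                return LEVEL_UDP;   /* from the bottom, bounce back up to UDP */
        return current - 1;         /* otherwise step one level down */
}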
|
|
|
|
The name "features" suggests an orthogonal bitmap or suchlike, but the
variables really encode only a linear set of feature levels. The type
used is already called DnsServerFeatureLevel, hence fix up the variables
accordingly, too.
|
|
This moves management of the OPT RR out of the scope management and into
the server and packet management. There are now explicit calls for
appending and truncating the OPT RR from a packet
(dns_packet_append_opt() and dns_packet_truncate_opt()) as well as a
call to do the right thing depending on a DnsServer's feature level
(dns_server_adjust_opt()).
This also unifies the code to pick a server between the TCP and UDP code
paths, and makes sure the feature level used for the transaction is
selected at the time the server is picked, and not changed until the
next time we pick a server. The server selection code is now unified in
dns_transaction_pick_server().
This all fixes problems when changing between UDP and TCP communication
for the same server, and makes sure the UDP and TCP codepaths are more
alike. It also makes sure we never keep the UDP port open when switching
to TCP, so that we don't have to handle unexpected incoming datagrams on
it.
As the new code picks the DNS server at the time we make a connection,
we don't need to invalidate the DNS server anymore when changing to the
next one, thus dns_transaction_next_dns_server() has been removed.
|
|
This adds a mode that makes resolved automatically downgrade from DNSSEC
support to classic non-DNSSEC resolving if the configured DNS server is
not capable of DNSSEC. Enabling this mode increases compatibility with
crappy network equipment, but of course opens up the system to
downgrading attacks.
The new mode can be enabled by setting DNSSEC=downgrade-ok in
resolved.conf. DNSSEC=yes, on the other hand, remains a "strict" mode, where
DNS resolution fails rather than allowing a downgrade.
Downgrading is done:
- when the server does not support EDNS0+DO
- or when the server supports it but does not augment returned RRs with
RRSIGs. The latter is detected when requesting DS or SOA RRs for the
root domain (which is necessary to do proofs for unsigned data)
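For illustration, enabling the new mode as described above would look like this
in resolved.conf (the option value shown here simply follows this commit
message):
# /etc/systemd/resolved.conf
[Resolve]
DNSSEC=downgrade-ok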
|
|
This is often needed for proper DNSSEC support, and even to handle AAAA records
without falling back to TCP.
If the path between the client and server is fully compliant, this should always
work; however, in practice that is not always the case, and overlarge packets
will then get mysteriously lost.
For that reason, we use a similar fallback mechanism as we do for plain EDNS0,
EDNS0+DO, etc.:
The large UDP size feature is different from the other supported features, as we
cannot simply verify that it works based on receiving a reply (as the server
will usually send us much smaller packets than what we claim to support, so
simply receiving a reply does not mean much).
For that reason, we keep track of the largest UDP packet we ever received, as this
is the smallest known good size (defaulting to the standard 512 bytes). If
announcing the default large size of 4096 fails (in the same way as the other
features), we fall back to the known good size. The same logic of retrying after a
grace-period applies.
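The bookkeeping described above can be sketched as follows (the 512/4096 byte
values follow the text; the struct and names are illustrative):
#define UDP_SIZE_DEFAULT    512u    /* classic DNS maximum, the initial "known good" size */
#define UDP_SIZE_ADVERTISED 4096u   /* what we announce via EDNS0 by default */

struct server_udp_state {
        unsigned largest_received;  /* largest UDP reply seen from this server */
        int large_udp_failed;       /* advertising the large size seems not to work */
};

static unsigned udp_size_to_advertise(const struct server_udp_state *s) {
        unsigned good = s->largest_received > UDP_SIZE_DEFAULT ?
                        s->largest_received : UDP_SIZE_DEFAULT;

        /* Announce the large default until it fails, then fall back to the
         * known good size (retried again after the grace period). */
        return s->large_udp_failed ? good : UDP_SIZE_ADVERTISED;
}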
|
|
This indicates that we can handle DNSSEC records (per RFC3225), even if
all we do is silently drop them. This feature requires EDNS0 support.
As we do not yet support larger UDP packets, this feature increases the
risk of getting truncated packets.
Similarly to how we fall back to plain UDP if EDNS0 fails, we will fall
back to plain EDNS0 if EDNS0+DO fails (with the same logic of remembering
success and retrying after a grace period after failure).
|
|
This is a minimal implementation of RFC6891. Only default values
are used, so in reality this will be a noop.
EDNS0 support is dependent on the current server's feature level,
so appending the OPT pseudo RR is done when the packet is emitted,
rather than when it is assembled. To handle different feature
levels on retransmission, we strip off the OPT RR again after
sending the packet.
Similarly to how we fall back to TCP if UDP fails, we fall back
to plain UDP if EDNS0 fails (but if EDNS0 ever succeeded we never
fall back again, and after a timeout we will retry EDNS0).
|
|
Previously, we would only degrade on packet loss, but when adding EDNS0 support,
we also have to handle the case where the server replies with an explicit error.
|
|
This is inspired by the logic in BIND [0]; follow-up patches
will implement the rest of that scheme.
If we get a server error back, or if after several attempts we don't
get a reply at all, we switch from UDP to TCP for the given
server for the current and all subsequent requests. However, if
we ever successfully received a reply over UDP, we never fall
back to TCP, and once a grace-period has passed, we try to upgrade
again to using UDP. The grace-period starts off at five minutes
after the current feature level was verified and then grows
exponentially to six hours. This is to mitigate problems due
to temporary lack of network connectivity, but at the same time
avoid flooding the network with retries when the attempted
feature level genuinely does not work.
Note that UDP is likely much more commonly supported than TCP,
but depending on the path between the client and the server, we
may have more luck with TCP in case something is wrong. We really
do prefer UDP though, as that is much more lightweight; that is
why TCP is only the last resort.
[0]: <https://kb.isc.org/article/AA-01219/0/Refinements-to-EDNS-fallback-behavior-can-cause-different-outcomes-in-Recursive-Servers.html>
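The grace-period growth described above (five minutes, growing exponentially up
to six hours) can be sketched like this (illustrative names, not the actual
resolved code):
#include <stdint.h>

#define USEC_PER_SEC           1000000ULL
#define GRACE_PERIOD_MIN_USEC  (5 * 60 * USEC_PER_SEC)       /* 5 minutes */
#define GRACE_PERIOD_MAX_USEC  (6 * 60 * 60 * USEC_PER_SEC)  /* 6 hours */

static uint64_t next_grace_period(uint64_t current) {
        if (current == 0)
                return GRACE_PERIOD_MIN_USEC;
        if (current * 2 > GRACE_PERIOD_MAX_USEC)
                return GRACE_PERIOD_MAX_USEC;
        return current * 2;
}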
|
|
|
|
This copies concepts we introduced for the DnsSearchDomain stuff, and
reworks the operations on lists of dns servers to be reusable and
generic for use both with the Link and the Manager object.
|
|
Previously, we'd keep adding new dns servers we discover to the end of
our linked list of servers. When we encountered a pre-existing server,
we'd just leave it where it was. In essence that meant that old servers
ended up at the front, and new servers at the end, but not in an order
that would reflect the configuration.
With this change we ensure that every pre-existing server we want to add
again is moved to the back of the linked list, so that the order is
stable and in sync with the requested configuration.
|
|
Previously, there was a chance of memory corruption, because when
switching to the next DNS server we didn't care whether the linked list
of DNS servers was still valid.
Clean up lifecycle of the dns server logic:
- When a DnsServer object is still in the linked list of DnsServers for
a link or the manager, indicate so with a "linked" boolean field, and
never follow the linked list if that boolean is not set.
- When picking a DnsServer to use for a link or manager, always
explicitly take a reference.
This also rearranges some logic, to make the tracking of dns servers by
link and globally more alike.
|
|
resolved-dns-server.c
|
|
|
|
Let's use the same parser when parsing dns server information from
/etc/resolv.conf and our native configuration file.
Also, move all code that manages lists of dns servers to a single place,
resolved-dns-server.c.
|
|
|
|
All our hash functions are based on siphash24(). Factor out
siphash_init() and siphash24_finalize() and pass the siphash
state to the hash functions rather than the hash key.
This simplifies the hash functions, and in particular makes
composition simpler, as calling siphash24_compress() repeatedly
on separate chunks of input has the same effect as first
concatenating the input and then calling siphash24_compress()
on the result.
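Usage of the factored-out API then looks roughly like this (a sketch against
the in-tree helpers; the exact names and signatures are internal and may differ
from what is shown here):
#include <stdint.h>
#include "siphash24.h"   /* in-tree header providing struct siphash and friends */

static uint64_t hash_two_chunks(const uint8_t key[16]) {
        struct siphash state;

        siphash24_init(&state, key);
        siphash24_compress("foo", 3, &state);
        siphash24_compress("bar", 3, &state);
        /* Same result as compressing "foobar" in a single call. */
        return siphash24_finalize(&state);
}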
|
|
Rather than fixing this to 5s for unicast DNS and 1s for LLMNR, start
at a tenth of those values and increase exponentially until the old
values are reached. For LLMNR the recommended timeout for IEEE802
networks (which basically means all of the ones we care about) is 100ms,
so that should be uncontroversial. For unicast DNS I have found no
recommended value. However, it seems vastly more likely that hitting a
500ms timeout is caused by packet loss rather than by the RTT genuinely
being greater than 500ms, so taking this as a starting value seems
reasonable to me.
In the common case this greatly reduces the latency due to normal packet
loss. Moreover, once we get support for probing for features, this means
that we can send more packets before degrading the feature level whilst
still allowing us to settle on the correct feature level in a reasonable
timeframe.
The timeouts are tracked per server (or per scope for the multicast
protocols), and once a server (or scope) receives a successful packet
the timeout is reset. We also track the largest RTT for the given
server/scope, and always start our timeouts at twice the largest
observed RTT.
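The resulting policy can be sketched as follows (constants and names are
illustrative; the real code keeps this state in its server/scope objects):
#include <stdint.h>

static uint64_t next_timeout_usec(uint64_t base_usec,     /* a tenth of the old fixed value */
                                  uint64_t max_usec,      /* the old fixed value, e.g. 5s / 1s */
                                  unsigned attempt,       /* 0 for the first try */
                                  uint64_t max_rtt_usec)  /* largest RTT observed so far */
{
        uint64_t t = base_usec;

        /* Grow exponentially with each retransmission, capped at the old value. */
        for (unsigned i = 0; i < attempt && t < max_usec; i++)
                t *= 2;
        if (t > max_usec)
                t = max_usec;

        /* Never start below twice the largest observed RTT. */
        if (t < 2 * max_rtt_usec)
                t = 2 * max_rtt_usec;

        return t;
}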
|
|
We want to reference the servers from their active transactions, so make sure
they stay around as long as the transaction does.
|
|
Reported by Cristian Rodríguez
http://lists.freedesktop.org/archives/systemd-devel/2015-May/031626.html
|
|
It is redundant to store 'hash' and 'compare' function pointers in
struct Hashmap separately. The functions always comprise a pair.
Store a single pointer to struct hash_ops instead.
systemd keeps hundreds of hashmaps, so this saves a little bit of
memory.
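The consolidation can be sketched like this (simplified; the actual struct
lives in the hashmap sources and the real function-pointer signatures differ in
detail):
typedef unsigned long (*hash_func_t)(const void *p);
typedef int (*compare_func_t)(const void *a, const void *b);

struct hash_ops {
        hash_func_t hash;
        compare_func_t compare;
};

/* A Hashmap then stores a single 'const struct hash_ops *' instead of two
 * separate function pointers, and callers pass e.g. &string_hash_ops. */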
|
|
|