The setting controls which kind of DNSSEC validation is done: none at
all, trusting the AD bit, or client-side validation.
For now, no validation is implemented, hence the setting doesn't do much
yet, except for toggling the CD bit in the generated messages if full
client-side validation is requested.
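As a rough sketch (the enum and helper below are illustrative, not
necessarily resolved's actual names), the setting could map to the
header bit like this; CD is bit 4 (0x0010) of the 16-bit DNS header
flags word:

  #include <stdint.h>

  typedef enum DnssecMode {
          DNSSEC_NO,      /* no validation at all */
          DNSSEC_TRUST,   /* trust the AD bit from an upstream validator */
          DNSSEC_YES,     /* full client-side validation */
  } DnssecMode;

  static uint16_t dns_flags_apply_cd(uint16_t flags, DnssecMode mode) {
          if (mode == DNSSEC_YES)
                  flags |= 0x0010;  /* CD: we validate ourselves */
          return flags;
  }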
|
|
When doing DNSSEC lookups we need to know one or more DS or DNSKEY RRs
as trust anchors to validate lookups. With this change we add a
compiled-in trust anchor database, serving the root DS key as of today,
retrieved from:
https://data.iana.org/root-anchors/root-anchors.xml
The interface is kept generic, so that additional DS or DNSKEY RRs may
be served via the same interface, for example by provisioning them
locally in external files to support "islands" of security.
The trust anchor database becomes the fourth source of RRs we maintain,
besides the network, the local cache, and the local zone.
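For reference, the anchor served from that file at the time is the DS
record of the 2010 root KSK (key tag 19036, algorithm 8/RSASHA256,
digest type 2/SHA-256), which in zone-file form reads:

  . IN DS 19036 8 2 49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5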
|
|
This is often needed for proper DNSSEC support, and even to handle AAAA records
without falling back to TCP.
If the path between the client and the server is fully compliant, this should
always work; in practice, however, it is not, and overlarge packets will get
mysteriously lost in some cases.
For that reason, we use a fallback mechanism similar to the one we use for
plain EDNS0, EDNS0+DO, etc.:
The large UDP size feature is different from the other supported features, as
we cannot simply verify that it works based on receiving a reply (the server
will usually send us much smaller packets than what we claim to support, so
simply receiving a reply does not mean much).
For that reason, we keep track of the largest UDP packet we ever received, as this
is the smallest known good size (defaulting to the standard 512 bytes). If
announcing the default large size of 4096 fails (in the same way as the other
features), we fall back to the known good size. The same logic of retrying after a
grace-period applies.
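In sketch form (struct and constant names are illustrative, not
resolved's actual code):

  #include <stdbool.h>
  #include <stddef.h>

  #define DNS_UDP_SIZE_SAFE  512u   /* classic RFC 1035 maximum */
  #define DNS_UDP_SIZE_LARGE 4096u  /* what we announce via EDNS0 */

  typedef struct ServerInfo {
          size_t largest_udp_received;  /* largest reply seen so far */
          bool large_size_failed;       /* announcing 4096 did not work out */
  } ServerInfo;

  static void on_udp_reply(ServerInfo *s, size_t bytes) {
          if (bytes > s->largest_udp_received)
                  s->largest_udp_received = bytes;  /* new known good size */
  }

  static size_t announced_udp_size(const ServerInfo *s) {
          if (!s->large_size_failed)
                  return DNS_UDP_SIZE_LARGE;
          /* fall back to the largest size that demonstrably worked */
          return s->largest_udp_received > DNS_UDP_SIZE_SAFE ?
                  s->largest_udp_received : DNS_UDP_SIZE_SAFE;
  }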
|
|
This is a minimal implementation of RFC6891. Only default values
are used, so in reality this will be a noop.
EDNS0 support is dependent on the current server's feature level,
so appending the OPT pseudo RR is done when the packet is emitted,
rather than when it is assembled. To handle different feature
levels on retransmission, we strip off the OPT RR again after
sending the packet.
Similarly to how we fall back to TCP if UDP fails, we fall back
to plain UDP if EDNS0 fails (but if EDNS0 ever succeeded we never
fall back again, and after a timeout we will retry EDNS0).
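For illustration, the OPT pseudo RR from RFC 6891 is just eleven bytes
when it carries no options; a hedged sketch of appending it (buffer
handling simplified, not the actual packet API):

  #include <arpa/inet.h>
  #include <stdint.h>
  #include <string.h>

  static size_t append_opt_rr(uint8_t *buf, uint16_t udp_size, int do_bit) {
          size_t n = 0;
          uint16_t u16;
          uint32_t ttl;

          buf[n++] = 0;                      /* NAME: root */
          u16 = htons(41);                   /* TYPE: OPT */
          memcpy(buf + n, &u16, 2); n += 2;
          u16 = htons(udp_size);             /* CLASS: advertised UDP payload size */
          memcpy(buf + n, &u16, 2); n += 2;
          /* TTL: extended RCODE=0, version=0, DO bit at the top of the
           * lower 16 bits, remaining Z bits zero */
          ttl = htonl(do_bit ? 0x8000u : 0);
          memcpy(buf + n, &ttl, 4); n += 4;
          u16 = htons(0);                    /* RDLENGTH: no options */
          memcpy(buf + n, &u16, 2); n += 2;

          return n;                          /* 11 bytes total */
  }

With the defaults (512-byte payload size, DO unset) this is indeed a
noop as far as the server is concerned.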
|
|
Previously, we would only degrade on packet loss, but when adding EDNS0 support,
we also have to handle the case where the server replies with an explicit error.
|
|
This is inspired by the logic in BIND [0]; follow-up patches
will implement the rest of that scheme.
If we get a server error back, or if after several attempts we don't
get a reply at all, we switch from UDP to TCP for the given
server for the current and all subsequent requests. However, if
we ever successfully received a reply over UDP, we never fall
back to TCP, and once a grace-period has passed, we try to upgrade
again to using UDP. The grace-period starts off at five minutes
after the current feature level was verified and then grows
exponentially to six hours. This is to mitigate problems due
to temporary lack of network connectivity, but at the same time
avoid flooding the network with retries when the attempted
feature level genuinely does not work.
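A sketch of that growth curve, with the constants from the text (names
illustrative):

  #include <stdint.h>

  #define GRACE_MIN_SEC (5u * 60u)        /* 5 minutes */
  #define GRACE_MAX_SEC (6u * 60u * 60u)  /* 6 hours */

  static uint32_t next_grace_period(uint32_t current_sec) {
          if (current_sec == 0)
                  return GRACE_MIN_SEC;      /* just verified */
          if (current_sec >= GRACE_MAX_SEC / 2)
                  return GRACE_MAX_SEC;      /* cap at 6h */
          return current_sec * 2;            /* grow exponentially */
  }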
Note that UDP is likely much more commonly supported than TCP,
but depending on the path between the client and the server, we
may have more luck with TCP in case something is wrong. We really
do prefer UDP though, as that is much more lightweight, that is
why TCP is only the last resort.
[0]: <https://kb.isc.org/article/AA-01219/0/Refinements-to-EDNS-fallback-behavior-can-cause-different-outcomes-in-Recursive-Servers.html>
|
|
After all, this is likely a local DNS forwarder that caches anyway,
hence there's no point in caching twice.
Fixes #2038.
|
|
When the zone probing code looks for a transaction to reuse it will
refuse to look at transactions that have been answered from cache or the
zone itself, but insist on the network. This has the effect that there
might be multiple transactions around for the same key on the same
scope. Previously we'd track all transactions in a hashmap, indexed by
the key, which implied that there would be only one transaction per key,
per scope. With this change the hashmap will only store the most recent
transaction per key, and a linked list will be used to track all
transactions per scope, allowing multiple per-key per-scope.
Note that the linked list fields for this actually already existed in
the DnsTransaction structure, but were previously unused.
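Roughly, the reuse logic now looks like this (types and helpers are
illustrative stand-ins for resolved's DnsScope/DnsTransaction, with a
linear search standing in for the hashmap):

  #include <stdbool.h>
  #include <stdlib.h>

  typedef struct Transaction Transaction;
  struct Transaction {
          int key;                      /* stand-in for a DnsResourceKey */
          bool answered_from_network;
          Transaction *next;            /* per-scope list of ALL transactions */
  };

  typedef struct Scope {
          Transaction *transactions;    /* list head; newest first */
  } Scope;

  static Transaction *scope_get_or_create(Scope *s, int key, bool want_network) {
          /* the first match is the most recent transaction for the key,
           * i.e. what the hashmap would return */
          for (Transaction *t = s->transactions; t; t = t->next)
                  if (t->key == key) {
                          if (!want_network || t->answered_from_network)
                                  return t;  /* good enough, reuse */
                          break;             /* probing insists on the network */
                  }

          Transaction *t = calloc(1, sizeof *t);
          if (!t)
                  return NULL;
          t->key = key;
          t->next = s->transactions;         /* several per key may now coexist */
          s->transactions = t;
          return t;
  }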
|
|
Let's track where the data came from: from the network, the cache or the
local zone. This is not only useful for debugging purposes, but is also
useful when the zone probing wants to ensure it's not reusing
transactions that were answered from the cache or the zone itself.
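Sketched as an enum (the name is illustrative):

  typedef enum AnswerSource {
          ANSWER_SOURCE_NETWORK,
          ANSWER_SOURCE_CACHE,
          ANSWER_SOURCE_ZONE,
  } AnswerSource;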
|
|
Previously we'd only store the DnsPacket in the DnsTransaction, and the
DnsQuery would then take the DnsPacket's DnsAnswer and return it. With
this change we already pull the DnsAnswer out inside the transaction.
We still store the DnsPacket in the transaction, if we have it, since we
still need to determine from which peer a response originates, to
implement caching properly. However, the DnsQuery logic no longer cares
about the packet; it now only looks at answers and rcodes from the
successful candidate.
This also has the benefit of unifying how we propagate incoming packets,
data from the local zone or the local cache.
|
|
This adds support for searching single-label hostnames in a set of
configured search domains.
A new object DnsQueryCandidate is added that links queries to scopes.
It keeps track of the search domain last used for a query on a specific
link. Whenever a host name was unsuccessfully resolved on a scope, all its
transactions are flushed out and replaced by a new set, with the next
search domain appended.
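The domain iteration can be pictured with a hypothetical helper like
this (not the actual DnsQueryCandidate API):

  #include <stdio.h>

  /* produce "host.domain" for the next search attempt; returns -1 once
   * the domain list is exhausted */
  static int next_search_name(const char *host, const char *const *domains,
                              unsigned *idx, char *buf, size_t len) {
          if (!domains[*idx])
                  return -1;
          snprintf(buf, len, "%s.%s", host, domains[*idx]);
          (*idx)++;
          return 0;
  }

For "foo" with search domains "example.com" and "corp.example.com" this
yields "foo.example.com", then "foo.corp.example.com".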
This also adds a new flag SD_RESOLVED_NO_SEARCH to disable search domain
behaviour. The "systemd-resolve-host" tool is updated to make this
configurable via --search=.
Fixes #1697
|
|
Previously, we'd always generate a packet on the wire, even for names
that are within our local zone. Shortcut this, and always check the
local zone first. This should minimize generated traffic and improve
security.
|
|
There are more than enough to deserve their own .c file, hence move them
over.
|
|
Only one key is allowed per transaction now, so let's simplify things and only allow putting
one question key into the cache at a time.
|
|
This hopefully makes this a bit more expressive and clarifies that the
fd is not used for the DNS TCP socket. This also mimics how the LLMNR
UDP fd is named in the manager object.
|
|
Make a scope with invalid protocol state fail as soon as possible.
|
|
With more protocols to come, replace repetitive if-else blocks with
switch statements.
|
|
If we try to resolve an LLMNR PTR RR we shall connect via TCP directly
to the specified IP address. We already refuse to do this if the address
to resolve is of a different address family as the transaction's scope.
The error returned was EAFNOSUPPORT. Let's change this to ESRCH, which is
how we indicate "no server available" when connecting for LLMNR or DNS,
since that's what this really is: we have no server we could connect to
in this address family.
This ensures that no-server errors are always handled the same way.
|
|
Right now we keep track of ongoing transactions in a linked list for
each scope. Replace this by a hashmap that is indexed by the RR key.
Given that all ongoing transactions are usually placed in the same few
scopes, this should speed up lookups.
We used to require a list here, since we wanted to do "superset" query
checks, but this became obsolete since transactions are now single-key
instead of multi-key.
|
|
Let's simplify things and only maintain a single RR key per transaction
object, instead of a full DnsQuestion. Unicast DNS and LLMNR don't
support multiple questions per packet anyway, and Multicast DNS suggests
coalescing questions beyond a single DNS query, across the whole system.
|
|
It shouldn't happen that we try to resolve IPv4 addresses via LLMNR on
IPv6 and vice versa, but let's explicitly verify that we don't turn an
IPv4 LLMNR lookup into an IPv6 TCP connection.
|
|
Previously, sd_event_now() would fail if the event loop had never run.
With this change it will instead fall back to invoking now(). This
way, the function cannot fail anymore, except for programming errors
such as invoking it with the wrong parameters.
This takes into account the fact that many callers did not handle the
error condition correctly, and that those which did simply invoked
now() as a fallback on their own. Hence, let's shorten the code using
this call and make things more robust by just falling back to now()
internally.
Whether now() was used or the cached timestamp may still be detected
via the return value of sd_event_now(): if > 0 is returned, the
fallback to now() was used; if == 0, the cached value was returned.
This patch also simplifies many of the invocations of sd_event_now():
the manual fall back to now() can be removed. Also, in cases where the
call is invoked within void functions we can now protect the invocation
via assert_se(), acknowledging the fact that the call cannot fail
anymore except for programming errors with the parameters.
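For example (sd_event_now() is the real sd-event call; the surrounding
helper is made up):

  #include <systemd/sd-event.h>
  #include <assert.h>
  #include <stdint.h>
  #include <time.h>

  static void update_timestamp(sd_event *e, uint64_t *ts) {
          /* systemd internally uses assert_se() here, which survives
           * NDEBUG; plain assert() stands in for it in this sketch */
          int r = sd_event_now(e, CLOCK_MONOTONIC, ts);
          assert(r >= 0);
          /* r > 0: the loop never ran, now() was used;
           * r == 0: the cached loop timestamp was returned */
  }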
This change is inspired by #841.
|
|
Rather than fixing this to 5s for unicast DNS and 1s for LLMNR, start
at a tenth of those values and increase exponentially until the old
values are reached. For LLMNR the recommended timeout for IEEE802
networks (which basically means all of the ones we care about) is 100ms,
so that should be uncontroversial. For unicast DNS I have found no
recommended value. However, it seems vastly more likely that hitting a
500ms timeout is caused by packet loss, rather than by the RTT genuinely
being greater than 500ms, so taking this as a starting value seems
reasonable to me.
In the common case this greatly reduces the latency due to normal packet
loss. Moreover, once we get support for probing for features, this means
that we can send more packets before degrading the feature level whilst
still allowing us to settle on the correct feature level in a reasonable
timeframe.
The timeouts are tracked per server (or per scope for the multicast
protocols), and once a server (or scope) receives a successful packet
the timeout is reset. We also track the largest RTT for the given
server/scope, and always start our timeouts at twice the largest
observed RTT.
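In sketch form (constants and names illustrative; resolved's actual
code differs):

  #include <stdint.h>

  /* max_timeout = the old fixed timeout, now the cap: 5s DNS, 1s LLMNR */
  static uint64_t next_timeout(uint64_t max_timeout, unsigned attempt,
                               uint64_t max_observed_rtt) {
          uint64_t t = max_timeout / 10;     /* i.e. 500ms resp. 100ms */

          while (attempt-- > 0 && t < max_timeout)
                  t *= 2;                    /* grow exponentially */
          if (t > max_timeout)
                  t = max_timeout;

          if (t < 2 * max_observed_rtt)      /* never undercut the path RTT */
                  t = 2 * max_observed_rtt;
          return t;
  }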
|
|
Let's optimize things a bit and properly compare DNS question arrays,
instead of checking if they are mutual supersets. This also makes ANY
query handling more accurate.
|
|
This is handled by the kernel now that the socket is connect()ed.
|
|
This was a bug.
|
|
This function emits the UDP packet via the scope, but first it will
determine the current server (and connect to it) and store the
server in the transaction.
This should not change the behavior, but simplifies the code.
|
|
No functional change, but makes follow-up patch clearer.
|
|
With access to the server when creating the socket, we can connect()
to the server and hence simplify message sending and receiving in
follow-up patches.
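A minimal sketch of such a connect()ed UDP socket (standard socket API,
error handling trimmed):

  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>

  static int open_dns_udp_socket(const struct sockaddr_in *server) {
          int fd = socket(AF_INET, SOCK_DGRAM | SOCK_CLOEXEC, 0);
          if (fd < 0)
                  return -1;
          if (connect(fd, (const struct sockaddr *) server, sizeof *server) < 0) {
                  close(fd);
                  return -1;
          }
          /* send()/recv() now talk to this server only; the kernel
           * drops datagrams arriving from any other address */
          return fd;
  }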
|
|
Close the socket when changing the server in a transaction, in
order for it to be reopened with the right server when we send
the next packet.
This fixes a regression where we could get stuck with a failing
server.
|
|
This was only ever used by LLMNR, so don't request this for unicast DNS packets.
|
|
A transaction can only have one socket at a time, so no need to distinguish these.
|
|
We were stopping the transaction, but we need to stop processing the packet altogether.
|
|
We used to have one global socket, use one per transaction instead. This
has the side-effect of giving us a random UDP port per transaction, and
hence increasing the entropy and making cache poisoning significantly
harder to achieve.
We still reuse the same port number for packets belonging to the same
transaction (resent packets).
|
|
This improves the resilience against cache poisoning by being stricter
about only accepting responses that precisely match the request they
are in reply to.
It should be noted that we still only use one port (which is picked
at random), rather than one port for each transaction. Port
randomization would improve things further, but is not required by
the RFC.
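Illustratively, the stricter check boils down to something like this
(types and names made up):

  #include <stdbool.h>
  #include <stdint.h>
  #include <strings.h>

  typedef struct Request {
          uint16_t id;
          const char *qname;
          uint16_t qtype, qclass;
  } Request;

  static bool reply_matches(const Request *req, uint16_t id,
                            const char *qname, uint16_t qtype, uint16_t qclass) {
          return id == req->id &&
                 qtype == req->qtype &&
                 qclass == req->qclass &&
                 strcasecmp(qname, req->qname) == 0;  /* names compare
                                                         case-insensitively */
  }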
|
|
We want to discover information about the server and use that when
crafting packets to be resent.
|
|
The C and T bits in the DNS packet header definitions are specific to LLMNR.
In regular DNS, they are called AA and RD instead. Reflect that by
naming the macros accordingly, and keep the LLMNR-specific names as
aliases.
While at it, define RA, AD and CD getters as well.
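For illustration, such getters over the 16-bit flags word could look
like this (bit positions per RFC 1035 and RFC 4795; not resolved's
exact macro definitions):

  /* flags = 16-bit DNS header flags word, in host byte order */
  #define DNS_FLAG_AA(flags) (((flags) >> 10) & 1)  /* "C" (conflict) in LLMNR */
  #define DNS_FLAG_RD(flags) (((flags) >> 8) & 1)   /* "T" (tentative) in LLMNR */
  #define DNS_FLAG_RA(flags) (((flags) >> 7) & 1)
  #define DNS_FLAG_AD(flags) (((flags) >> 5) & 1)
  #define DNS_FLAG_CD(flags) (((flags) >> 4) & 1)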
|
|
De-duplicate some magic numbers.
|
|
like:
src/shared/install.c: In function ‘unit_file_lookup_state’:
src/shared/install.c:1861:16: warning: ‘r’ may be used uninitialized in
this function [-Wmaybe-uninitialized]
return r < 0 ? r : state;
^
src/shared/install.c:1796:13: note: ‘r’ was declared here
int r;
^
|
|
It is redundant to store 'hash' and 'compare' function pointers in
struct Hashmap separately. The functions always comprise a pair.
Store a single pointer to struct hash_ops instead.
systemd keeps hundreds of hashmaps, so this saves a little bit of
memory.
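The shape of the change, sketched (signatures simplified; systemd's
real hash functions also thread a hash key through):

  typedef unsigned long (*hash_func_t)(const void *p);
  typedef int (*compare_func_t)(const void *a, const void *b);

  struct hash_ops {
          hash_func_t hash;        /* the two always come as a pair... */
          compare_func_t compare;
  };

  struct Hashmap {
          const struct hash_ops *hash_ops;  /* ...so one pointer replaces two */
          /* buckets, iterators, ... */
  };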
|
|