|
OK to be unsigned
This large patch adds a couple of mechanisms to ensure we get NSEC3 and
proof-of-unsigned support into place. Specifically:
- Each item in a DnsAnswer now gets two bit flags:
DNS_ANSWER_AUTHENTICATED and DNS_ANSWER_CACHEABLE. The former is
necessary since a single DNS response might contain both signed and
unsigned RRsets, and we need to remember which ones are signed and
which ones aren't. The latter is necessary since we now need to keep
track of which RRsets may be cached and which ones may not be, even
while manipulating DnsAnswer objects.
- The .n_answer_cacheable field of DnsTransaction is dropped (it used to
store how many of the first DnsAnswer entries are cacheable), and is
replaced by the DNS_ANSWER_CACHEABLE flag instead.
- NSEC3 proofs are implemented now (lacking support for the wildcard
part, to be added in a later commit).
- Support for the "AD" bit has been dropped. It's unsafe, and now that
we have end-to-end authentication we don't need it anymore.
- An auxiliary DnsTransaction of a DnsTransaction is now kept around at
least as long as the latter stays around. We no longer remove the
auxiliary DnsTransaction as soon as it has completed. This is necessary,
as we are now interested not only in the RRsets it acquired but also
in its authentication status.
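A minimal sketch of such per-item flags, using hypothetical simplified
types (the real DnsAnswer items carry full RR objects):

    #include <stddef.h>
    #include <stdio.h>

    typedef enum DnsAnswerFlags {
            DNS_ANSWER_AUTHENTICATED = 1 << 0, /* RRset was validated */
            DNS_ANSWER_CACHEABLE     = 1 << 1, /* RRset may be cached */
    } DnsAnswerFlags;

    typedef struct DnsAnswerItem {
            const char *rr;       /* stand-in for the real RR object */
            DnsAnswerFlags flags;
    } DnsAnswerItem;

    int main(void) {
            DnsAnswerItem items[] = {
                    { "signed.example IN A",
                      DNS_ANSWER_AUTHENTICATED | DNS_ANSWER_CACHEABLE },
                    { "unsigned.example IN A", 0 },
            };

            for (size_t i = 0; i < sizeof(items) / sizeof(items[0]); i++)
                    printf("%s: authenticated=%s cacheable=%s\n",
                           items[i].rr,
                           items[i].flags & DNS_ANSWER_AUTHENTICATED ? "yes" : "no",
                           items[i].flags & DNS_ANSWER_CACHEABLE ? "yes" : "no");
            return 0;
    }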
|
|
Let's be safe and explicitly avoid adding an auxiliary transaction
dependency on ourselves.
|
|
We don't need a separate timeout anymore once we have received a reply,
as the auxiliary transactions have their own timeouts.
|
|
they are destroyed
A failing transaction might cause other transactions to fail too, and
thus the set of transactions to notify for a transaction might change
while we are notifying them. Protect against that.
|
|
We end up needing the stringified transaction key in many log messages,
hence let's simplify the logic and cache it inside of the transaction:
generate it the first time we need it, and reuse it afterwards. Free it
when the transaction goes away.
This also updates a couple of log messages to make use of this.
|
|
Note that this is not complete yet, as we neither handle wildcard
domains correctly, nor domains that use empty non-terminals.
|
|
It's not OK to drop these for our proof of non-existence checks.
|
|
|
|
This changes answer validation to be more accepting of unordered RRs in
responses. The algorithm we now implement goes something like this:
1. populate validated keys list for this transaction from DS RRs
2. as long as the following changes the unvalidated answer list:
2a. try to validate the first RRset we find in unvalidated answer
list
2b. if that worked: add to validated answer; if DNSKEY also add to
validated keys list; remove from unvalidated answer.
2c. continue at 2a, with the next RRset, or restart from the
beginning when we hit the end
3. as long as the following changes the unvalidated answer list:
3a. try to validate the first RRset again. This will necessarily
fail, but we learn the precise error
3b. If this was a "primary" response to the question, fail the
entire transaction. "Primary" in this context means that it is
directly a response to the query, or a CNAME/DNAME for it.
3c. Otherwise, remove the RRset from the unvalidated answer list.
Note that the two loops in 2 + 3 are actually coded as a single one,
with the dnskeys_finalized bool indicating which loop we are currently
processing.
Note that loop 2 does not drop any invalidated RRsets yet, that's
something only loop 3 does. This is because loop 2 might still encounter
additional DNSKEYs which might validate more RRsets, and if we had
already dropped those RRsets we couldn't validate them anymore. The
first loop is hence a "constructive" loop, the second loop a
"destructive" one: the first one validates whatever is possible, the
second one then deletes whatever still isn't.
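A compilable sketch of that combined loop, with hypothetical helper
prototypes standing in for the real validation machinery (none of these
names are the actual resolved functions):

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct RRset RRset;

    /* Hypothetical helpers, assumed for this sketch. */
    bool validate_rrset(RRset *rrs);   /* verify against validated keys */
    bool rrset_is_dnskey(RRset *rrs);
    bool rrset_is_primary(RRset *rrs); /* direct answer, or CNAME/DNAME */
    void add_to_validated(RRset *rrs);
    void add_to_validated_keys(RRset *rrs);
    int fail_transaction(void);

    /* unvalidated: n RRset pointers; entries are removed as they are
     * validated (loop 2) or dropped/failed (loop 3). */
    int validate_answer(RRset **unvalidated, size_t *n) {
            bool dnskeys_finalized = false;

            for (;;) {
                    bool changed = false;

                    for (size_t i = 0; i < *n; i++) {
                            RRset *rrs = unvalidated[i];

                            if (validate_rrset(rrs)) {
                                    /* loop 2: constructive */
                                    add_to_validated(rrs);
                                    if (rrset_is_dnskey(rrs))
                                            add_to_validated_keys(rrs);
                            } else if (dnskeys_finalized) {
                                    /* loop 3: destructive */
                                    if (rrset_is_primary(rrs))
                                            return fail_transaction();
                                    /* non-primary: drop it below */
                            } else
                                    continue; /* keep it, retry next round */

                            /* remove entry i, rescan from the start */
                            unvalidated[i] = unvalidated[--*n];
                            changed = true;
                            break;
                    }

                    if (changed)
                            continue;
                    if (dnskeys_finalized)
                            return 0; /* nothing left we can act on */
                    dnskeys_finalized = true; /* enter destructive phase */
            }
    }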
|
|
Instead of figuring out how many RRs to cache right before we do so,
determine this at the time we install the answer RRs, so that we can
still alter this as we manipulate the answer during validation.
The primary purpose of this is to pave the way so that we can drop
unsigned RRsets from the answer and invalidate the number of RRs to
cache at the same time.
|
|
Check the validity of RR types as we parse or receive data from IPC
clients, and use the same code for all of them.
|
|
Misc resolved cache fixes
|
|
Apart from dropping redundant information, this fixes an issue
where, due to broken DNS servers, we can only be certain of whether
an apparent NODATA response is in fact an NXDOMAIN response after
explicitly resolving the canonical name. This issue is outlined in
RFC2308. Moreover, by caching NXDOMAIN for an existing name, we
would mistakenly return NXDOMAIN for types which should not be
redirected. I.e., a query for AAAA on test-nx-1.jklm.no correctly
returns NXDOMAIN, but a query for CNAME should return the record
and a query for DNAME should return NODATA.
Note that this means we will not cache an NXDOMAIN response in the
presence of redirection, meaning one redundant roundtrip in case the
name is queried again.
|
|
resolved: more mDNS specific bits (3)
|
|
RFC6762, 18.1:
In multicast query messages, the Query Identifier SHOULD be set to
zero on transmission.
|
|
Let's simply call it dns_transaction_prepare(), so that we have the nice
cycle for prepare() → go() → emit() → process().
After all it's pretty clear what we prepare there, and we don't call
the others go_next_attempt(), emit_next_attempt() or
process_next_attempt().
|
|
|
|
This adds initial support for validating RRSIG/DNSKEY/DS chains when
doing lookups. Proof-of-non-existence, or proof-of-unsigned-zones, is not
implemented yet.
With this change DnsTransaction objects will generate additional
DnsTransaction objects when looking for DNSKEY or DS RRs to validate an
RRSIG on a response. DnsTransaction objects are thus created for three
reasons now:
1) Because a user asked for something to be resolved, i.e. requested by
a DnsQuery/DnsQueryCandidate object.
2) As a result of LLMNR RR probing, requested by a DnsZoneItem.
3) Because another DnsTransaction requires the requested RRs for
validation of its own response.
DnsTransactions are shared between all these users, and are GC'ed
automatically as soon as none of these users needs a specific
transaction anymore.
To unify the handling of these three reasons for existence for a
DnsTransaction, a new common naming is introduced: each DnsTransaction
now tracks its "owners" via a Set* object named "notify_xyz", containing
all owners to notify on completion.
A new DnsTransaction state is introduced called "VALIDATING" that is
entered after a response has been received which needs to be validated,
as long as we are still waiting for the DNSKEY/DS RRs from other
DnsTransactions.
This patch will request the DNSKEY/DS RRs bottom-up, and then validate
them top-down.
Caching of RRs is now only done after verification, so that the cache is
not poisoned with known invalid data.
The "DnsAnswer" object gained a substantial number of new calls, since
we need to add/remove RRs to it dynamically now.
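A rough sketch of the ownership idea, with counters as a hypothetical
simplification of the real "notify_xyz" Set* objects:

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct DnsTransaction {
            /* each "owner" registers to be notified on completion */
            size_t n_notify_query_candidates; /* 1) user lookups */
            size_t n_notify_zone_items;       /* 2) LLMNR probing */
            size_t n_notify_transactions;     /* 3) DNSSEC validation */
    } DnsTransaction;

    /* collect the transaction once nobody needs it anymore */
    bool dns_transaction_gc(DnsTransaction *t) {
            return t->n_notify_query_candidates +
                   t->n_notify_zone_items +
                   t->n_notify_transactions == 0;
    }

    int main(void) {
            DnsTransaction t = { .n_notify_transactions = 1 };
            return dns_transaction_gc(&t); /* 0: still owned */
    }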
|
|
This got borked in 547493c5ad5c82032e247609970f96be76c2d661.
|
|
It's complicated enough, it deserves its own call.
(Also contains some unrelated whitespace, comment and assertion changes)
|
|
This new function exports cached records of type PTR, SRV and TXT into
an existing DnsPacket. This is used in order to fill in known records
into mDNS queries, for known answer suppression.
|
|
Implement dns_transaction_make_packet_mdns(), a special version of
dns_transaction_make_packet() for mDNS which differs in many ways:
a) We coalesce queries of currently active transactions on the scope.
This is possible because mDNS actually allows many questions to be
sent in a single packet, and it takes some burden off the network.
b) Both A and AAAA query keys are broadcast on both IPv4 and IPv6
scopes, because other hosts might only respond on one of their
addresses but resolve both types.
c) We discard previously sent packets (t->sent) so we can start over
and coalesce pending transactions again.
|
|
For each transaction, record the earliest point in time when the query
packet may hit the wire. This is the same time stamp for which
the timer is scheduled in retries, except for the initial query packets
which are delayed by a random jitter. In this case, we denote that the
packet may actually be sent at the nominal time, without the jitter.
Transactions that share the same timestamp will also have identical
values in this field. It is used to coalesce pending queries in a later
patch.
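In sketch form (names hypothetical): only the initial attempt gets a
jitter, and the recorded stamp stays nominal so that transactions
scheduled together compare equal and can be coalesced:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef uint64_t usec_t;

    typedef struct DnsTransaction {
            usec_t next_attempt_after; /* earliest wire time */
    } DnsTransaction;

    void arm_timer(usec_t when); /* hypothetical event-loop helper */

    void schedule_attempt(DnsTransaction *t, usec_t now,
                          usec_t max_jitter, bool initial) {
            usec_t jitter = initial ? (usec_t) rand() % max_jitter : 0;

            /* the timer fires at now + jitter, but we record the
             * nominal time, so peers scheduled together match */
            t->next_attempt_after = now;
            arm_timer(now + jitter);
    }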
|
|
Split some code out of dns_transaction_go() so we can re-use it later
from a different context. The new function dns_transaction_prepare_next_attempt()
takes care of preparing everything so that a new packet can conditionally
be formulated for a transaction.
This patch shouldn't cause any functional change.
|
|
|
|
|
|
mDNS packet timeouts need to be handled per transaction, not per link.
Re-use the n_attempts field for this purpose, as packet timeouts should
be determined by starting at 1 second, and doubling the value on each try.
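For instance (constant and function names hypothetical), the timeout
series can be derived from n_attempts like this:

    #include <stdint.h>
    #include <stdio.h>

    #define USEC_PER_SEC 1000000ULL

    /* 1s, 2s, 4s, 8s, ... for attempts 1, 2, 3, 4, ... */
    static uint64_t mdns_timeout_usec(unsigned n_attempts) {
            return (1ULL << (n_attempts - 1)) * USEC_PER_SEC;
    }

    int main(void) {
            for (unsigned a = 1; a <= 4; a++)
                    printf("attempt %u: %llu s\n", a,
                           (unsigned long long)
                           (mdns_timeout_usec(a) / USEC_PER_SEC));
            return 0;
    }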
|
|
When a jitter callback is issued instead of sending a DNS packet directly,
on_transaction_timeout() is invoked to 'retry' the transaction. However,
this function has side effects. For one, it increases the packet loss
counter on the scope, and it also unrefs/refs the server instances.
Fix this by tracking the jitter with two bool variables: one saying that
the initial jitter has been scheduled in the first place, and one that
tells us the delayed packet has been sent.
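In sketch form the two flags might look like this (field names
hypothetical):

    #include <stdbool.h>

    typedef struct DnsTransaction {
            bool initial_jitter_scheduled; /* jitter callback armed */
            bool initial_jitter_elapsed;   /* delayed packet went out */
    } DnsTransaction;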
|
|
The logic to kick off mDNS packets in a delayed way is mostly identical
to what LLMNR needs, except that the constants are different.
|
|
Validate mDNS queries and responses by looking at some header fields,
add mDNS flags.
|
|
This adds a new SD_RESOLVED_AUTHENTICATED flag for responses we return
on the bus. When set, the data has been authenticated. For now this
mostly reflects the DNSSEC AD bit, if DNSSEC=trust is set. As soon as
the client-side validation is complete it will be hooked up to this flag
too.
We also set this bit whenever we generated the data ourselves, for
example, because it originates in our local LLMNR zone, or from the
built-in trust anchor database.
The "systemd-resolve-host" tool has been updated to show the flag state
for the data it shows.
|
|
The setting controls which kind of DNSSEC validation is done: none at
all, trusting the AD bit, or client-side validation.
For now, no validation is implemented, hence the setting doesn't do much
yet, except for toggling the CD bit in the generated messages if full
client-side validation is requested.
|
|
When doing DNSSEC lookups we need to know one or more DS or DNSKEY RRs
as trust anchors to validate lookups. With this change we add a
compiled-in trust anchor database, serving the root DS key as of today,
retrieved from:
https://data.iana.org/root-anchors/root-anchors.xml
The interface is kept generic, so that additional DS or DNSKEY RRs may
be served via the same interface, for example by provisioning them
locally in external files to support "islands" of security.
The trust anchor database becomes the fourth source of RRs we maintain,
besides the network, the local cache, and the local zone.
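As an illustration, the root DS parameters published in
root-anchors.xml at the time (the KSK-2010 key) could be compiled in
roughly like this; the struct layout here is a hypothetical
simplification:

    #include <stdint.h>
    #include <stdio.h>

    struct ds_anchor {
            const char *domain;
            uint16_t key_tag;
            uint8_t algorithm;   /* 8 = RSA/SHA-256 */
            uint8_t digest_type; /* 2 = SHA-256 */
            const char *digest_hex;
    };

    static const struct ds_anchor root_anchor = {
            .domain = ".",
            .key_tag = 19036,
            .algorithm = 8,
            .digest_type = 2,
            .digest_hex = "49AAC11D7B6F6446702E54A1607371607A"
                          "1A41855200FD2CE1CDDE32F24E8FB5",
    };

    int main(void) {
            printf("%s IN DS %u %u %u %s\n",
                   root_anchor.domain,
                   (unsigned) root_anchor.key_tag,
                   (unsigned) root_anchor.algorithm,
                   (unsigned) root_anchor.digest_type,
                   root_anchor.digest_hex);
            return 0;
    }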
|
|
This is often needed for proper DNSSEC support, and even to handle AAAA records
without falling back to TCP.
If the path between the client and the server is fully compliant, this
should always work. However, in reality that is not always the case, and
overlarge packets will get mysteriously lost in some cases.
For that reason, we use a similar fallback mechanism as we do for plain
EDNS0, EDNS0+DO, etc.:
The large UDP size feature is different from the other supported features, as we
cannot simply verify that it works based on receiving a reply (as the server
will usually send us much smaller packets than what we claim to support, so
simply receiving a reply does not mean much).
For that reason, we keep track of the largest UDP packet we ever received, as this
is the smallest known good size (defaulting to the standard 512 bytes). If
announcing the default large size of 4096 fails (in the same way as the other
features), we fall back to the known good size. The same logic of retrying after a
grace-period applies.
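A sketch of that bookkeeping (names hypothetical):

    #include <stdbool.h>
    #include <stddef.h>

    #define DNS_PACKET_UNICAST_SIZE_MAX  512u /* classic DNS payload */
    #define DNS_PACKET_SIZE_MAX         4096u /* announced via EDNS0 */

    typedef struct DnsServer {
            size_t received_udp_packet_max; /* largest reply so far */
            bool large_udp_degraded;        /* announcing 4096 failed */
    } DnsServer;

    void udp_reply_received(DnsServer *s, size_t size) {
            if (size > s->received_udp_packet_max)
                    s->received_udp_packet_max = size; /* known good */
    }

    size_t advertised_udp_size(const DnsServer *s) {
            if (!s->large_udp_degraded)
                    return DNS_PACKET_SIZE_MAX;
            /* fall back to the smallest known good size */
            return s->received_udp_packet_max > DNS_PACKET_UNICAST_SIZE_MAX ?
                    s->received_udp_packet_max : DNS_PACKET_UNICAST_SIZE_MAX;
    }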
|
|
This is a minimal implementation of RFC6891. Only default values
are used, so in reality this will be a noop.
EDNS0 support is dependent on the current server's feature level,
so appending the OPT pseudo RR is done when the packet is emitted,
rather than when it is assembled. To handle different feature
levels on retransmission, we strip off the OPT RR again after
sending the packet.
Similarly to how we fall back to TCP if UDP fails, we fall back
to plain UDP if EDNS0 fails (but if EDNS0 ever succeeded we never
fall back again, and after a timeout we will retry EDNS0).
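In sketch form (helpers hypothetical), the emission path could look
like this:

    typedef struct DnsServer DnsServer;
    typedef struct DnsPacket DnsPacket;

    /* Hypothetical helpers, assumed for this sketch. */
    int server_supports_edns0(DnsServer *s); /* current feature level */
    int packet_append_opt(DnsPacket *p);     /* add the OPT pseudo RR */
    void packet_truncate_opt(DnsPacket *p);  /* strip it again */
    int send_udp(DnsServer *s, DnsPacket *p);

    int transaction_emit(DnsServer *s, DnsPacket *p) {
            int added = 0, r;

            if (server_supports_edns0(s)) {
                    r = packet_append_opt(p);
                    if (r < 0)
                            return r;
                    added = 1;
            }

            r = send_udp(s, p);

            if (added)
                    packet_truncate_opt(p); /* regardless of result */

            return r;
    }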
|
|
Previously, we would only degrade on packet loss, but when adding EDNS0 support,
we also have to handle the case where the server replies with an explicit error.
|
|
This is inspired by the logic in BIND [0]; follow-up patches
will implement the rest of that scheme.
If we get a server error back, or if after several attempts we don't
get a reply at all, we switch from UDP to TCP for the given
server for the current and all subsequent requests. However, if
we ever successfully received a reply over UDP, we never fall
back to TCP, and once a grace-period has passed, we try to upgrade
again to using UDP. The grace-period starts off at five minutes
after the current feature level was verified and then grows
exponentially to six hours. This is to mitigate problems due
to temporary lack of network connectivity, but at the same time
avoid flooding the network with retries when the attempted feature
level genuinely does not work.
Note that UDP is likely much more commonly supported than TCP,
but depending on the path between the client and the server, we
may have more luck with TCP in case something is wrong. We really
do prefer UDP though, as that is much more lightweight, that is
why TCP is only the last resort.
[0]: <https://kb.isc.org/article/AA-01219/0/Refinements-to-EDNS-fallback-behavior-can-cause-different-outcomes-in-Recursive-Servers.html>
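The resulting grace periods (constant names hypothetical) are easy to
enumerate:

    #include <stdint.h>
    #include <stdio.h>

    #define USEC_PER_MINUTE (60ULL * 1000000ULL)
    #define GRACE_PERIOD_MIN (5ULL * USEC_PER_MINUTE)         /* 5 min */
    #define GRACE_PERIOD_MAX (6ULL * 60ULL * USEC_PER_MINUTE) /* 6 h */

    int main(void) {
            /* prints 5, 10, 20, 40, 80, 160, 320, 360 minutes */
            for (uint64_t g = GRACE_PERIOD_MIN;; g *= 2) {
                    if (g > GRACE_PERIOD_MAX)
                            g = GRACE_PERIOD_MAX;
                    printf("%llu min\n",
                           (unsigned long long) (g / USEC_PER_MINUTE));
                    if (g == GRACE_PERIOD_MAX)
                            break;
            }
            return 0;
    }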
|
|
After all, this is likely a local DNS forwarder that caches anyway,
hence there's no point in caching twice.
Fixes #2038.
|
|
key per scope
When the zone probing code looks for a transaction to reuse it will
refuse to look at transactions that have been answered from cache or the
zone itself, but insist on the network. This has the effect that there
might be multiple transactions around for the same key on the same
scope. Previously we'd track all transactions in a hashmap, indexed by
the key, which implied that there would be only one transaction per key,
per scope. With this change the hashmap will only store the most recent
transaction per key, and a linked list will be used to track all
transactions per scope, allowing multiple transactions per key, per scope.
Note that the linked list fields for this actually already existed in
the DnsTransaction structure, but were previously unused.
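Schematically (types hypothetical, Hashmap standing in for the real
hashmap implementation), the dual bookkeeping is:

    typedef struct DnsTransaction DnsTransaction;
    typedef struct Hashmap Hashmap;

    struct DnsTransaction {
            DnsTransaction *prev, *next;  /* all transactions on scope */
            /* ... */
    };

    typedef struct DnsScope {
            Hashmap *by_key;              /* key -> most recent only */
            DnsTransaction *transactions; /* list head: all of them */
    } DnsScope;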
|
|
Let's track where the data came from: from the network, the cache or the
local zone. This is not only useful for debugging purposes, but is also
useful when the zone probing wants to ensure it's not reusing
transactions that were answered from the cache or the zone itself.
|
|
DnsTransaction objects
Previously we'd only store the DnsPacket in the DnsTransaction, and the
DnsQuery would then take the DnsPacket's DnsAnswer and return it. With
this change we already pull the DnsAnswer out inside the transaction.
We still store the DnsPacket in the transaction, if we have it, since we
still need to determine from which peer a response originates, to
implement caching properly. However, the DnsQuery logic doesn't care
about the packet anymore; it now only looks at answers and rcodes from
the successful candidate.
This also has the benefit of unifying how we propagate incoming packets,
data from the local zone or the local cache.
|
|
This adds support for searching single-label hostnames in a set of
configured search domains.
A new object DnsQueryCandidate is added that links queries to scopes.
It keeps track of the search domain last used for a query on a specific
link. Whenever a host name was unsuccessfully resolved on a scope all its
transactions are flushed out and replaced by a new set, with the next
search domain appended.
This also adds a new flag SD_RESOLVED_NO_SEARCH to disable search domain
behaviour. The "systemd-resolve-host" tool is updated to make this
configurable via --search=.
Fixes #1697
|
|
Previously, we'd always generate a packet on the wire, even for names
that are within our local zone. Shortcut this, and always check the
local zone first. This should minimize generated traffic and improve
security.
|
|
|
|
|
|
|
|
There are more than enough to deserve their own .c file, hence move them
over.
|
|
Only one key is allowed per transaction now, so let's simplify things and only allow putting
one question key into the cache at a time.
|
|
This hopefully makes this a bit more expressive and clarifies that the
fd is not used for the DNS TCP socket. This also mimics how the LLMNR
UDP fd is named in the manager object.
|
|
Make a scope with invalid protocol state fail as soon as possible.
|