Age | Commit message | Author |
|
Previously, after checking the local zone for a reply and finding one, we would
not initialize the answer ifindex from it. Let's fix that.
|
|
Make sure we can parse DNS server addresses that use the "zone id" syntax for
link-local addresses, i.e. "fe80::c256:27ff:febb:12f%wlp3s0", when reading
/etc/resolv.conf.
Also make sure we spit this out correctly again when writing /etc/resolv.conf
and via the bus.
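As a rough illustration of the accepted syntax, here is a standalone sketch (not
the parser resolved actually uses), built only on inet_pton() and
if_nametoindex(): split at '%' and turn the zone id into an ifindex.
    #include <arpa/inet.h>
    #include <net/if.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
            char buf[64] = "fe80::c256:27ff:febb:12f%wlp3s0";
            char *percent = strchr(buf, '%');
            struct in6_addr a;
            unsigned ifindex = 0;

            if (percent) {
                    *percent = 0;
                    ifindex = if_nametoindex(percent + 1); /* 0 if the name is unknown */
            }
            if (inet_pton(AF_INET6, buf, &a) != 1)
                    return 1;

            printf("address %s on ifindex %u\n", buf, ifindex);
            return 0;
    }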
Fixes: #3359
|
|
This fixes the formatting of the root domain in debug messages:
Old:
systemd-resolved[10049]: Requesting DS to validate transaction 19313 (., DNSKEY with key tag: 19036).
New:
systemd-resolved[10049]: Requesting DS to validate transaction 19313 (, DNSKEY with key tag: 19036).
|
|
This should be handled fine now by .dir-locals.el, so there's no need to carry
that stuff in every file.
|
|
|
|
Previously, if a hostname was resolved with AF_UNSPEC specified, this would be used as an indication to resolve both an
AF_INET and an AF_INET6 address. With this change this logic is altered: an AF_INET address is only resolved if there's
actually a routable IPv4 address on the specific interface, and similarly an AF_INET6 address is only resolved if there's
a routable IPv6 address. With this in place, it's ensured that the returned data is actually connectable by
applications. This logic mimics glibc's resolver behaviour.
Note that if the client asks explicitly for AF_INET or AF_INET6 it will get what it asked for.
This also simplifies the logic for determining whether a specific lookup shall take place on a scope.
Specifically, the checks with dns_scope_good_key() are now moved out of the transaction code and into the query code,
so that we don't even create a transaction object on a specific scope if we cannot execute the resolution on it anyway.
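A standalone sketch of the underlying idea (not resolved's implementation; it
merely scans getifaddrs() output for non-loopback, non-link-local addresses to
decide which families are worth querying):
    #include <arpa/inet.h>
    #include <ifaddrs.h>
    #include <netinet/in.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <sys/socket.h>

    static void routable_families(bool *ipv4, bool *ipv6) {
            struct ifaddrs *list, *i;

            *ipv4 = *ipv6 = false;
            if (getifaddrs(&list) < 0)
                    return;

            for (i = list; i; i = i->ifa_next) {
                    if (!i->ifa_addr)
                            continue;

                    if (i->ifa_addr->sa_family == AF_INET) {
                            struct sockaddr_in *a = (struct sockaddr_in*) i->ifa_addr;
                            if (a->sin_addr.s_addr != htonl(INADDR_LOOPBACK))
                                    *ipv4 = true; /* at least one routable IPv4 address */
                    } else if (i->ifa_addr->sa_family == AF_INET6) {
                            struct sockaddr_in6 *a = (struct sockaddr_in6*) i->ifa_addr;
                            if (!IN6_IS_ADDR_LOOPBACK(&a->sin6_addr) &&
                                !IN6_IS_ADDR_LINKLOCAL(&a->sin6_addr))
                                    *ipv6 = true; /* at least one routable IPv6 address */
                    }
            }

            freeifaddrs(list);
    }

    int main(void) {
            bool v4, v6;

            routable_families(&v4, &v6);
            printf("query A: %s, query AAAA: %s\n", v4 ? "yes" : "no", v6 ? "yes" : "no");
            return 0;
    }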
|
|
the network is down
|
|
Not only report whether the server actually supports DNSSEC, but also first check whether DNSSEC is actually enabled
for it in our local configuration.
Also, export a per-link DNSSECSupported property in addition to the existing manager-wide property.
|
|
|
|
This adds a DNSSEC= setting to .network files, and makes resolved honour
it.
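A hypothetical .network fragment using the new setting (the file name is
illustrative; "allow-downgrade" validates when the upstream server supports
DNSSEC but downgrades otherwise):
    # /etc/systemd/network/wifi.network (hypothetical file name)
    [Network]
    DNSSEC=allow-downgrade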
|
|
This moves management of the OPT RR out of the scope management and into
the server and packet management. There are now explicit calls for
appending and truncating the OPT RR from a packet
(dns_packet_append_opt() and dns_packet_truncate_opt()) as well as a
call to do the right thing depending on a DnsServer's feature level
(dns_server_adjust_opt()).
This also unifies the code to pick a server between the TCP and UDP code
paths, and makes sure the feature level used for the transaction is
selected at the time the server is picked, and not changed until the
next time we pick a server. The server selection code is now unified in
dns_transaction_pick_server().
This all fixes problems when changing between UDP and TCP communication
for the same server, and makes sure the UDP and TCP code paths are more
alike. It also makes sure we never keep the UDP port open when switching
to TCP, so that we don't have to handle incoming datagrams on the latter
that we don't expect.
As the new code picks the DNS server at the time we make a connection,
we don't need to invalidate the DNS server anymore when changing to the
next one, thus dns_transaction_next_dns_server() has been removed.
|
|
Previously the calls for emitting DNS UDP packets were just called
dns_{transaction|scope}_emit(), but the one to establish a DNS TCP
connection was called dns_transaction_open_tcp(). Clean this up, and
rename them dns_{transaction|scope}_emit_udp() and
dns_transaction_open_tcp().
|
|
The call already updates possible_features, it's pointless doing this in
the caller a second time.
|
|
We have many types of failure for a transaction, and
DNS_TRANSACTION_FAILURE was just one specific one of them: the case where
the server responded with a non-zero RCODE. Hence let's rename it, to
indicate which kind of failure it actually refers to.
|
|
|
|
|
|
resolved: more mDNS specific bits (3)
|
|
RFC6762, 18.1:
In multicast query messages, the Query Identifier SHOULD be set to
zero on transmission.
|
|
|
|
For mDNS, if we're unable to stuff all known answers into the given packet,
allocate a new one, push the RR into that one and link it to the current
one.
|
|
In dns_scope_emit(), walk the list of additional packets and emit all of
them. Set the TC bit in all but the last of them.
This is specific to mDNS, so an assertion is triggered if used with other
protocols.
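A rough sketch of that emit loop, with simplified illustrative types (the real
code works on DnsPacket objects; here the TC bit is just flipped directly in the
wire-format header):
    #include <stddef.h>
    #include <stdint.h>

    struct packet {
            uint8_t *data;          /* wire-format packet, 12-byte header first */
            size_t size;
            struct packet *more;    /* next known-answer continuation packet, or NULL */
    };

    static void emit_chain(struct packet *p, void (*send_one)(struct packet *)) {
            for (; p; p = p->more) {
                    if (p->more)
                            p->data[2] |= 0x02;   /* set TC in the flags byte: more to come */
                    else
                            p->data[2] &= ~0x02;  /* last packet in the chain: TC cleared */
                    send_one(p);
            }
    }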
|
|
DNS names ending with .local are specific to mDNS, so don't use them
on DNS scopes.
|
|
Follow what LLMNR does, and create per-link DnsScope objects.
|
|
Per link, join the mDNS multicast groups when the scope is created, and
leave them again when the scope goes away.
|
|
This adds a new SD_RESOLVED_AUTHENTICATED flag for responses we return
on the bus. When set, the data has been authenticated. For now this
mostly reflects the DNSSEC AD bit, if DNSSEC=trust is set. As soon as
the client-side validation is complete it will be hooked up to this flag
too.
We also set this bit whenever we generated the data ourselves, for
example, because it originates in our local LLMNR zone, or from the
built-in trust anchor database.
The "systemd-resolve-host" tool has been updated to show the flag state
for the data it shows.
|
|
Previously, we'd never do any single-label or root domain lookups via
DNS, thus leaving single-label lookups to LLMNR and the search path
logic in order that single-label names don't leak too easily onto the
internet. With this change we open things up a bit, and only prohibit
A/AAAA lookups of single-label/root domains, but allow all other
lookups. This should provide similar protection, but allow us to resolve
DNSKEY+DS RRs for the top-level and root domains.
(This also simplifies handling of the search domain detection, and gets
rid of dns_scope_has_search_domains() in favour of
dns_scope_get_search_domains()).
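A standalone sketch of the resulting lookup rule (illustrative only; the real
check lives in resolved's scope routing and uses its own name helpers):
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    static bool dns_allows_lookup(const char *name, uint16_t type) {
            /* Root or single-label name? */
            bool short_name = strcmp(name, ".") == 0 || !strchr(name, '.');

            if (!short_name)
                    return true;

            /* Only A/AAAA lookups are prohibited; DNSKEY, DS, ... are fine. */
            return type != 1 /* A */ && type != 28 /* AAAA */;
    }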
|
|
We already blacklisted a few domains; add more.
|
|
This adds dns_resource_record_to_wire_format() that generates the raw
wire-format of a single DnsResourceRecord object, and caches it in the
object, optionally in DNSSEC canonical form. This call is used later to
generate the RR serialization of RRs to verify.
This adds four new fields to DnsResourceRecord objects:
- wire_format points to the buffer with the wire-format version of the
RR
- wire_format_size stores the size of that buffer
- wire_format_rdata_offset specifies the index into the buffer where the
RDATA of the RR begins (i.e. the size of the key part of the RR).
- wire_format_canonical is a boolean that stores whether the cached wire
format is in DNSSEC canonical form or not.
Note that this patch adds a mode where a DnsPacket is allocated on the
stack (instead of on the heap), so that it is cheaper to reuse the
DnsPacket object for generating this wire format. After all we reuse the
DnsPacket object for this, since it comes with all the dynamic memory
management, and serialization calls we need anyway.
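Schematically, the new per-RR cache looks like this (a simplified sketch; the
field names are as described above, everything else is illustrative):
    #include <stdbool.h>
    #include <stddef.h>

    struct wire_format_cache {
            void *wire_format;                /* raw wire-format serialization of the RR */
            size_t wire_format_size;          /* size of that buffer */
            size_t wire_format_rdata_offset;  /* where RDATA starts, i.e. size of the key part */
            bool wire_format_canonical;       /* is the cached copy in DNSSEC canonical form? */
    };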
|
|
This is often needed for proper DNSSEC support, and even to handle AAAA records
without falling back to TCP.
If the path between the client and server is fully compliant, this should always
work, however, that is not the case, and overlarge packets will get mysteriously
lost in some cases.
For that reason, we use a similar fallback mechanism as we do for plain EDNS0,
EDNS0+DO, etc.:
The large UDP size feature is different from the other supported features, as we
cannot simply verify that it works based on receiving a reply (as the server
will usually send us much smaller packets than what we claim to support, so
simply receiving a reply does not mean much).
For that reason, we keep track of the largest UDP packet we ever received, as this
is the smallest known good size (defaulting to the standard 512 bytes). If
announcing the default large size of 4096 fails (in the same way as the other
features), we fall back to the known good size. The same logic of retrying after a
grace-period applies.
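A sketch of the size-selection logic described above (names are illustrative;
512 and 4096 are the values stated in this message):
    #include <stdbool.h>
    #include <stdint.h>

    #define UDP_SIZE_DEFAULT  512u   /* standard DNS payload size, always assumed good */
    #define UDP_SIZE_LARGE   4096u   /* what we try to announce first */

    struct server_udp_state {
            uint16_t largest_reply_seen;  /* largest UDP reply ever received from this server */
            bool large_size_failed;       /* announcing 4096 recently failed */
    };

    static uint16_t advertised_udp_size(const struct server_udp_state *s) {
            if (!s->large_size_failed)
                    return UDP_SIZE_LARGE;

            /* Fall back to the smallest known-good size. */
            return s->largest_reply_seen > UDP_SIZE_DEFAULT ?
                    s->largest_reply_seen : UDP_SIZE_DEFAULT;
    }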
|
|
This indicates that we can handle DNSSEC records (per RFC3225), even if
all we do is silently drop them. This feature requires EDNS0 support.
As we do not yet support larger UDP packets, this feature increases the
risk of getting truncated packets.
Similarly to how we fall back to plain UDP if EDNS0 fails, we will fall
back to plain EDNS0 if EDNS0+DO fails (with the same logic of remembering
success and retrying after a grace period after failure).
|
|
This is a minimal implementation of RFC6891. Only default values
are used, so in reality this will be a noop.
EDNS0 support is dependent on the current server's feature level,
so appending the OPT pseudo RR is done when the packet is emitted,
rather than when it is assembled. To handle different feature
levels on retransmission, we strip off the OPT RR again after
sending the packet.
Similarly to how we fall back to TCP if UDP fails, we fall back
to plain UDP if EDNS0 fails (but if EDNS0 ever succeeded we never
fall back again, and after a timeout we will retry EDNS0).
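For reference, with RFC 6891 defaults the appended OPT pseudo-RR is just eleven
bytes; a minimal sketch of how it could be laid out on the wire (an illustrative
helper, not resolved's actual code):
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Append a minimal OPT pseudo-RR (no options, default flags) at offset
     * 'off' in 'buf'; returns the new end of the packet. */
    static size_t append_opt_rr(uint8_t *buf, size_t off, uint16_t udp_size) {
            uint8_t opt[11] = {
                    0,                              /* NAME: root */
                    0, 41,                          /* TYPE: OPT (41) */
                    (uint8_t)(udp_size >> 8),       /* CLASS: requestor's UDP payload size */
                    (uint8_t)(udp_size & 0xff),
                    0, 0, 0, 0,                     /* TTL: ext-RCODE 0, EDNS version 0, flags 0 */
                    0, 0,                           /* RDLENGTH: 0, no options */
            };

            memcpy(buf + off, opt, sizeof(opt));
            return off + sizeof(opt);
    }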
|
|
This is inspired by the logic in BIND [0], follow-up patches
will implement the rest of that scheme.
If we get a server error back, or if after several attempts we don't
get a reply at all, we switch from UDP to TCP for the given
server for the current and all subsequent requests. However, if
we ever successfully received a reply over UDP, we never fall
back to TCP, and once a grace-period has passed, we try to upgrade
again to using UDP. The grace-period starts off at five minutes
after the current feature level was verified and then grows
exponentially to six hours. This is to mitigate problems due
to temporary lack of network connectivity, but at the same time
avoid flooding the network with retries when the attempted feature
level genuinely does not work.
Note that UDP is likely much more commonly supported than TCP,
but depending on the path between the client and the server, we
may have more luck with TCP in case something is wrong. We really
do prefer UDP though, as it is much more lightweight, which is
why TCP is only the last resort.
[0]: <https://kb.isc.org/article/AA-01219/0/Refinements-to-EDNS-fallback-behavior-can-cause-different-outcomes-in-Recursive-Servers.html>
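A sketch of the grace-period growth (constants as stated above: start at five
minutes, cap at six hours; the function name is made up):
    #include <stdint.h>

    #define GRACE_PERIOD_MIN_USEC (5ULL * 60 * 1000 * 1000)        /* 5 min */
    #define GRACE_PERIOD_MAX_USEC (6ULL * 60 * 60 * 1000 * 1000)   /* 6 h */

    static uint64_t next_grace_period(uint64_t current) {
            if (current == 0)
                    return GRACE_PERIOD_MIN_USEC;

            current *= 2;                              /* grow exponentially... */
            if (current > GRACE_PERIOD_MAX_USEC)
                    current = GRACE_PERIOD_MAX_USEC;   /* ...up to six hours */

            return current;
    }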
|
|
key per scope
When the zone probing code looks for a transaction to reuse it will
refuse to look at transactions that have been answered from cache or the
zone itself, but insist on the network. This has the effect that there
might be multiple transactions around for the same key on the same
scope. Previously we'd track all transactions in a hashmap, indexed by
the key, which implied that there would be only one transaction per key,
per scope. With this change the hashmap will only store the most recent
transaction per key, and a linked list will be used to track all
transactions per scope, allowing multiple transactions per key, per scope.
Note that the linked list fields for this actually already existed in
the DnsTransaction structure, but were previously unused.
|
|
Let's track where the data came from: from the network, the cache or the
local zone. This is not only useful for debugging purposes, but is also
useful when the zone probing wants to ensure it's not reusing
transactions that were answered from the cache or the zone itself.
|
|
Let's change the return value to bool. If we encounter an error while
parsing, return "false" instead of the actual parsing error; after all,
the specified hostname does not qualify for what the function is
supposed to test.
Dealing with the additional error codes was always cumbersome, and
easily misused, like for example in the DHCP code.
Let's also rename the functions from dns_name_root() to
dns_name_is_root(), to indicate that this function checks something and
returns a bool. Similarly for dns_name_is_single_label().
|
|
This adds support for searching single-label hostnames in a set of
configured search domains.
A new object DnsQueryCandidate is added that links queries to scopes.
It keeps track of the search domain last used for a query on a specific
link. Whenever a host name was unsuccessfully resolved on a scope all its
transactions are flushed out and replaced by a new set, with the next
search domain appended.
This also adds a new flag SD_RESOLVED_NO_SEARCH to disable search domain
behaviour. The "systemd-resolve-host" tool is updated to make this
configurable via --search=.
Fixes #1697
|
|
With this change, we add a new object to resolved, "DnsSearchDomain",
which wraps a search domain. This is then used to introduce a global
search domain list, in addition to the existing per-link search domain
list, which is reworked to make use of this new object too.
This is preparation for implementing proper unicast DNS search domain
support.
|
|
|
|
Instead of taking a DnsQuestion object (i.e. an array of keys) only take
a single key. This simplifies things a bit, and as DNS/LLMNR require a
single question per query message it was unnecessary anyway.
This mimics a similar change that was done a while ago for the dns cache
logic.
|
|
|
|
There are more than enough to deserve their own .c file, hence move them
over.
|
|
|
|
Bring back a return statement that 106784eb erroneously removed.
Thanks to @phomes for reporting.
|
|
With more protocols to come, replace repetitive if-else blocks with
switch-case statements.
|
|
Right now we keep track of ongoing transactions in a linked list for
each scope. Replace this by a hashmap that is indexed by the RR key.
Given that all ongoing transactions will usually be placed in pretty much
the same scopes, this should optimize behaviour.
We used to require a list here, since we wanted to do "superset" query
checks, but this became obsolete since transactions are now single-key
instead of multi-key.
|
|
Let's simplify things and only maintain a single RR key per transaction
object, instead of a full DnsQuestion. Unicast DNS and LLMNR don't
support multiple questions per packet anyway, and Multicast DNS suggests
coalescing questions beyond a single DNS query, across the whole system.
|
|
With this change we'll now also generate synthesized RRs for the local
LLMNR hostname (first label of system hostname), the local mDNS hostname
(first label of system hostname suffixed with .local), the "gateway"
hostname and all the reverse PTRs. This hence takes over part of what
nss-myhostname already implemented.
Local hostnames resolve to the set of local IP addresses. Since the
addresses are possibly on different interfaces it is necessary to change
the internal DnsAnswer object to track per-RR interface indexes, and to
change the bus API to always return the interface per-address rather than
per-reply. This change also patches the existing clients for resolved
accordingly (nss-resolve + systemd-resolve-host).
This also changes the routing logic for queries slightly: we now ensure
that the local hostname is never resolved via LLMNR, thus making it
trustable on the local system.
|
|
We should never allow leaking of "localhost" queries onto the network,
even if there's an explicit domain route set for this.
|
|
Rather than fixing this to 5s for unicast DNS and 1s for LLMNR, start
at a tenth of those values and increase exponentially until the old
values are reached. For LLMNR the recommended timeout for IEEE802
networks (which basically means all of the ones we care about) is 100ms,
so that should be uncontroversial. For unicast DNS I have found no
recommended value. However, it seems vastly more likely that hitting a
500ms timeout is caused by packet loss, rather than the RTT genuinely
being greater than 500ms, so taking this as a starting value seems
reasonable to me.
In the common case this greatly reduces the latency due to normal packet
loss. Moreover, once we get support for probing for features, this means
that we can send more packets before degrading the feature level whilst
still allowing us to settle on the correct feature level in a reasonable
timeframe.
The timeouts are tracked per server (or per scope for the multicast
protocols), and once a server (or scope) receives a successful packet
the timeout is reset. We also track the largest RTT for the given
server/scope, and always start our timeouts at twice the largest
observed RTT.
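A sketch of the resulting retry-timeout policy for unicast DNS (names are
illustrative; the bounds follow the text above, and the result is clamped to
never fall below twice the largest observed RTT):
    #include <stdint.h>

    #define USEC_PER_MSEC        1000ULL
    #define DNS_TIMEOUT_MIN_USEC ( 500 * USEC_PER_MSEC)   /* a tenth of the old 5s value */
    #define DNS_TIMEOUT_MAX_USEC (5000 * USEC_PER_MSEC)   /* the old fixed timeout */

    static uint64_t next_retry_timeout(uint64_t previous, uint64_t max_observed_rtt) {
            uint64_t t = previous == 0 ? DNS_TIMEOUT_MIN_USEC : previous * 2;

            if (t < 2 * max_observed_rtt)
                    t = 2 * max_observed_rtt;     /* never retry faster than twice the worst RTT */
            if (t > DNS_TIMEOUT_MAX_USEC)
                    t = DNS_TIMEOUT_MAX_USEC;

            return t;
    }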
|
|
We already refuse to resolve "localhost", hence we should also refuse
resolving "127.0.0.1" and friends.
|