Age | Commit message | Author |
|
We should never allow leaking of "localhost" queries onto the network,
even if there's an explicit domain route set for this.
|
|
Let's make sure that clients querying resolved via the bus for A, AAAA
or PTR records for "localhost" get a synthesized, local reply, so that
we do not hit the network.
This makes part of nss-myhostname redundant, if used in conjunction.
However, given that nss-resolve shall be optional we need to keep this
code in both places for now.
|
|
Previously, sd_event_now() would fail if the event loop had never run
before. With this change it will instead fall back to invoking now().
This way, the function cannot fail anymore, except for programming
errors when invoking it with wrong parameters.
This takes into account the fact that many callers did not handle the
error condition correctly, and those that did simply kept invoking now()
as a fallback on their own. Hence let's shorten the code using this
call, make things more robust, and just fall back to now() internally.
Whether now() was used or the cached timestamp was returned may still
be detected via the return value of sd_event_now(): if > 0 is returned,
the fallback to now() was used; if == 0 is returned, the cached value
was returned.
This patch also simplifies many of the invocations of sd_event_now():
the manual fallback to now() can be removed. Also, in cases where the
call is invoked within void functions we can now protect the invocation
via assert_se(), acknowledging the fact that the call cannot fail
anymore except for programming errors with the parameters.
This change is inspired by #841.
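A minimal sketch of the simplified calling pattern against the public
sd-event API (the helper name is made up for illustration):

  #include <assert.h>
  #include <stdint.h>
  #include <time.h>
  #include <systemd/sd-event.h>

  /* With the change above, sd_event_now() only fails on parameter misuse.
   * A return value > 0 means it fell back to querying the clock directly,
   * == 0 means the cached loop timestamp was returned. */
  static uint64_t scope_now(sd_event *event) {
          uint64_t usec;
          int r;

          r = sd_event_now(event, CLOCK_MONOTONIC, &usec);
          assert(r >= 0); /* cannot fail for valid parameters anymore */
          return usec;
  }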
|
|
With the exponential backoff, we can perform more requests in the same amount of time,
so bump this a bit.
In case of large RTT this may be necessary in order not to regress, and in case
of large packet-loss it will make us more robust. The latter is particularly
relevant once we start probing for features (and hence may see packet-loss
until we settle on the right feature level).
|
|
Rather than fixing this to 5s for unicast DNS and 1s for LLMNR, start
at a tenth of those values and increase exponentially until the old
values are reached. For LLMNR the recommended timeout for IEEE802
networks (which basically means all of the ones we care about) is 100ms,
so that should be uncontroversial. For unicast DNS I have found no
recommended value. However, it seems vastly more likely that hitting a
500ms timeout is caused by packet loss, rather than by the RTT genuinely
being greater than 500ms, so taking this as a starting value seems
reasonable to me.
In the common case this greatly reduces the latency due to normal packet
loss. Moreover, once we get support for probing for features, this means
that we can send more packets before degrading the feature level whilst
still allowing us to settle on the correct feature level in a reasonable
timeframe.
The timeouts are tracked per server (or per scope for the multicast
protocols), and once a server (or scope) receives a successful packet
the timeout is reset. We also track the largest RTT for the given
server/scope, and always start our timeouts at twice the largest
observed RTT.
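A rough sketch of the backoff logic described above; the constants and
helper name are illustrative rather than resolved's exact internals:

  #include <stdint.h>

  #define UDNS_TIMEOUT_START_USEC   (500 * 1000ULL)       /* 500 ms, a tenth of the old 5 s */
  #define UDNS_TIMEOUT_MAX_USEC     (5 * 1000 * 1000ULL)  /* the old fixed 5 s value */

  /* Pick the next retry timeout: start at the initial value, double on
   * every failed attempt, never go below twice the largest RTT observed
   * for this server, and cap at the old fixed timeout. */
  static uint64_t next_timeout(uint64_t previous_usec, uint64_t max_rtt_usec) {
          uint64_t t;

          if (previous_usec == 0)
                  t = UDNS_TIMEOUT_START_USEC;
          else
                  t = previous_usec * 2;

          if (t < max_rtt_usec * 2)
                  t = max_rtt_usec * 2;

          return t < UDNS_TIMEOUT_MAX_USEC ? t : UDNS_TIMEOUT_MAX_USEC;
  }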
|
|
We cannot rely on CLOCK_BOOTTIME being supported by the kernel, so fall
back to CLOCK_MONOTONIC if the former is not supported.
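A minimal sketch of such a fallback; the probe method shown here is just
illustrative, and systemd's actual helper additionally caches the result:

  #include <time.h>

  /* Return CLOCK_BOOTTIME if the running kernel supports it, otherwise
   * fall back to CLOCK_MONOTONIC. */
  static clockid_t boottime_or_monotonic(void) {
          struct timespec ts;

          if (clock_gettime(CLOCK_BOOTTIME, &ts) < 0)
                  return CLOCK_MONOTONIC;

          return CLOCK_BOOTTIME;
  }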
|
|
resolved: never attempt to resolve loopback addresses via DNS/LLMNR/mDNS
|
|
We already refuse to resolve "localhost", hence we should also refuse
to resolve "127.0.0.1" and friends.
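An illustrative check for such addresses (the helper name is made up,
resolved uses its own address utilities):

  #include <stdbool.h>
  #include <stdint.h>
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>

  /* Anything in 127.0.0.0/8, as well as the IPv6 ::1 address, is loopback
   * and must never be sent to DNS/LLMNR/mDNS servers. */
  static bool address_is_loopback(int family, const void *addr) {
          if (family == AF_INET) {
                  uint32_t a = ntohl(((const struct in_addr *) addr)->s_addr);
                  return (a >> 24) == 127;
          }
          if (family == AF_INET6)
                  return IN6_IS_ADDR_LOOPBACK((const struct in6_addr *) addr);
          return false;
  }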
|
|
|
|
The NSEC type itself must at least be in the bitmap, so NSEC records with empty
bitmaps must be bogus.
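Expressed as a sketch against resolved's internal Bitmap helpers (assumed
here rather than quoted from the patch):

  /* An NSEC RR whose type bitmap does not even contain the NSEC type
   * itself cannot be valid, so refuse to accept it. */
  if (!bitmap_isset(rr->nsec.types, DNS_TYPE_NSEC))
          return -EBADMSG;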
|
|
We were tracking the bit offset inside each byte, rather than inside the whole bitmap.
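For reference, decoding one NSEC type-bitmap window per RFC 4034 looks
roughly like this standalone sketch:

  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Each window covers 256 types; within the window the bit position is
   * counted from the start of the whole bitmap, i.e. i * 8 + bit, not
   * restarted for every byte. */
  static void dump_window(uint8_t window, const uint8_t *bitmap, size_t len) {
          for (size_t i = 0; i < len; i++)
                  for (unsigned bit = 0; bit < 8; bit++) {
                          if (!(bitmap[i] & (1 << (7 - bit))))
                                  continue;

                          printf("type %u present\n",
                                 (unsigned) window * 256 + (unsigned) (i * 8 + bit));
                  }
  }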
|
|
We were counting the number of bits set rather than the number of bytes they occupied.
|
|
Let's optimize things a bit and properly compare DNS question arrays,
instead of checking if they are mutual supersets. This also makes ANY
query handling more accurate.
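For illustration, exact comparison can be done with one size check plus a
single containment pass; the type and helper names below are made up, and
keys are assumed unique within a question:

  /* Two question arrays are equal if they have the same number of keys
   * and every key of the first is contained in the second; this replaces
   * running full superset checks in both directions. */
  static bool question_is_equal(DnsQuestion *a, DnsQuestion *b) {
          if (a->n_keys != b->n_keys)
                  return false;

          for (size_t i = 0; i < a->n_keys; i++)
                  if (!question_contains(b, a->keys[i])) /* illustrative helper */
                          return false;

          return true;
  }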
|
|
This is handled by the kernel now that the socket is connect()ed.
|
|
This was a bug.
|
|
As we have connect()ed to the desired DNS server, we no longer need to pass
control messages manually when sending packets. Simplify the logic accordingly.
|
|
This function emits the UDP packet via the scope, but first it will
determine the current server (and connect to it) and store the
server in the transaction.
This should not change the behavior, but simplifies the code.
|
|
No functional change, but makes follow-up patch clearer.
|
|
With access to the server when creating the socket, we can connect()
to the server and hence simplify message sending and receiving in
follow-up patches.
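The general pattern behind this, sketched independently of resolved's
internal types:

  #include <errno.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>

  /* Create a fresh UDP socket and connect() it to the DNS server. After
   * this, plain send()/recv() suffice: the kernel fills in the destination
   * and filters out datagrams arriving from other addresses. */
  static int open_dns_udp_socket(const struct sockaddr_in *server) {
          int fd, r;

          fd = socket(AF_INET, SOCK_DGRAM | SOCK_CLOEXEC | SOCK_NONBLOCK, 0);
          if (fd < 0)
                  return -errno;

          if (connect(fd, (const struct sockaddr *) server, sizeof *server) < 0) {
                  r = -errno;
                  close(fd);
                  return r;
          }

          return fd;
  }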
|
|
Close the socket when changing the server in a transaction, in
order for it to be reopened with the right server when we send
the next packet.
This fixes a regression where we could get stuck with a failing
server.
|
|
This was only ever used by LLMNR, so don't request this for unicast DNS packets.
|
|
A transaction can only have one socket at a time, so no need to distinguish these.
|
|
Assigning a TYPE enum value to a class variable is certainly wrong.
However, they both have the same value, so the result was correct
nevertheless.
|
|
We were stopping the transaction, but we need to stop processing the packet altogether.
|
|
A size_t was being accessed as a char* due to the order of arguments being inverted.
|
|
|
|
We were appending rather than reading the bitmap.
|
|
We can never read past the end of the packet, so this seems impossible
to exploit, but let's error out early as reading past the end of the
current RR is clearly an error.
Found by Lennart, based on patch by Daniel.
|
|
Rename the field to make this clearer.
|
|
Most blobs (keys, signatures, ...) should have a specific size given by
the relevant algorithm. However, as we don't use/verify the algorithms
yet, let's just ensure that we don't read out zero-length data in cases
where this does not make sense.
The only exceptions where zero-length data is allowed are the NSEC3
salt field and the generic data (which we don't know anything about,
so better not make any assumptions).
|
|
resolve: unify memdup() code when parsing RRs
|
|
resolved: make sure we always initialize *start in dns_packet_append_type_window()
|
|
Let's make dns_packet_read_public_key() more generic by renaming it to
dns_packet_read_memdup() (which more accurately describes what it
does...). Then, patch all cases where we memdup() RR data to use this
new call.
This specifically checks for zero-length objects, and handles them
gracefully, resulting in zero-length payload fields. Special care should
be taken to ensure that any code using this call can handle the returned
allocated field being NULL if the size is specified as 0!
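A usage sketch of the caution above, assuming a signature along the lines
of dns_packet_read_memdup(packet, size, &data, &data_size, &start), and
using the NSEC3 salt purely as an example:

  /* Callers must treat a zero-length result as valid: the returned
   * pointer may be NULL when the on-wire size was 0 (e.g. an empty
   * NSEC3 salt). */
  r = dns_packet_read_memdup(p, size, &rr->nsec3.salt, &rr->nsec3.salt_size, NULL);
  if (r < 0)
          return r;

  /* do NOT assume rr->nsec3.salt != NULL here; check salt_size first */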
|
|
resolve: fix two minor memory leaks
|
|
strv_extend() already strdup()s internally, no need to do this twice.
(Also, the OOM check was missing...)
Use strv_consume() when we already have a string allocated whose
ownership we want to pass to the strv.
This fixes 50f1e641a93cacfc693b0c3d300bee5df0c8c460.
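Sketched with systemd's strv helpers (variable names are illustrative,
error handling abbreviated):

  /* strv_extend() copies the string internally, so pass the borrowed
   * name and check for OOM: */
  r = strv_extend(&names, name);
  if (r < 0)
          return r;

  /* strv_consume() takes ownership of a string we already allocated,
   * avoiding a second copy (it frees the string even if appending
   * fails): */
  r = strv_consume(&names, owned_name);
  if (r < 0)
          return r;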
|
|
It's not used anymore since 29815b6c608b836cada5e349d06a96b63eaa65f3,
hence let's remove it from the sources.
|
|
Reuse the Iterator object from hashmap.h and expose a similar API.
This allows us to do

  {
          Iterator i;
          unsigned n;

          BITMAP_FOREACH(n, b, i) {
                  Iterator j;
                  unsigned m;

                  BITMAP_FOREACH(m, b, j) {
                          ...
                  }
          }
  }

without getting confused. Requested by David.
|
|
Needed for DNSSEC.
|
|
Needed for DNSSEC.
|
|
resolved: minor improvements to RR handling
|
|
This implements the recommendations from RFC3597.
|
|
Needed for DNSSEC.
|
|
|
|
We used to have one global socket, use one per transaction instead. This
has the side-effect of giving us a random UDP port per transaction, and
hence increasing the entropy and making cache poisoning significantly
harder to achieve.
We still reuse the same port number for packets belonging to the same
transaction (resent packets).
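As a standalone illustration, the source-port randomization simply falls
out of never bind()ing the per-transaction socket: the kernel picks an
ephemeral port at connect() time, which can be observed with getsockname():

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <stdio.h>
  #include <sys/socket.h>

  /* After connect() on an unbound UDP socket, the kernel has chosen an
   * ephemeral local port; a new socket per transaction therefore means a
   * different, hard-to-predict source port per transaction. */
  static void print_source_port(int fd) {
          struct sockaddr_in local;
          socklen_t len = sizeof local;

          if (getsockname(fd, (struct sockaddr *) &local, &len) >= 0)
                  printf("transaction source port: %u\n", ntohs(local.sin_port));
  }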
|
|
This improves the resilience against cache poisoning by being stricter
about only accepting responses that match precisely the request they
are in reply to.
It should be noted that we still only use one port (which is picked
at random), rather than one port for each transaction. Port
randomization would improve things further, but is not required by
the RFC.
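Conceptually the acceptance check amounts to something like the following
sketch (field and helper names are illustrative, not resolved's actual
ones):

  /* Accept a reply only if it carries the transaction's DNS ID and
   * repeats the exact question we asked; anything else is dropped as a
   * potential spoof. */
  static bool reply_matches_transaction(DnsTransaction *t, DnsPacket *reply) {
          if (reply->id != t->id)
                  return false;

          if (!question_is_equal(reply->question, t->question))
                  return false;

          return true;
  }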
|
|
We want to discover information about the server and use that when crafting
packets to be resent.
|
|
We want to reference the servers from their active transactions, so make sure
they stay around as long as the transaction does.
|
|
Currently we only make sure our links can handle the size of the payload without
taking the headers into account.
|
|
As mandated by RFC4034.
|