|
It shouldn't happen that we try to resolve IPv4 addresses via LLMNR on
IPv6 and vice versa, but let's explicitly verify that we don't turn an
IPv4 LLMNR lookup into an IPv6 TCP connection.
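A minimal sketch of the kind of guard this adds (the check placement
and field access are hypothetical, shown before falling back to TCP):

    /* hypothetical sketch: refuse to continue on a server whose
     * address family differs from the scope we are querying on */
    if (t->scope->family != AF_UNSPEC &&
        t->scope->family != t->server->family)
            return -ESRCH;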
|
|
Previously, sd_event_now() would fail if the event loop had never run
before. With this change it will instead fall back to invoking now().
This way, the function cannot fail anymore, except for programming
errors such as invoking it with the wrong parameters.
This takes into account the fact that many callers did not handle the
error condition correctly, and those that did simply kept invoking
now() as fallback on their own. Hence, let's shorten the code using
this call, make things more robust, and just fall back to now()
internally.
Whether now() or the cached timestamp was used can still be detected
via the return value of sd_event_now(): if > 0 is returned, the
fallback to now() was used; if == 0 is returned, the cached value was
returned.
This patch also simplifies many of the invocations of sd_event_now():
the manual fallback to now() can be removed. Also, where the call is
made from within void functions we can now protect the invocation via
assert_se(), acknowledging that the call cannot fail anymore except
for programming errors with the parameters.
This change is inspired by #841.
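For example, a call site in a void function can now look like this
(hypothetical variables, real sd_event_now() signature):

    usec_t now_usec;

    /* cannot fail anymore, except for programming errors,
     * hence assert_se() is appropriate here */
    assert_se(sd_event_now(e, CLOCK_MONOTONIC, &now_usec) >= 0);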
|
|
Rather than fixing this to 5s for unicast DNS and 1s for LLMNR, start
at a tenth of those values and increase exponentially until the old
values are reached. For LLMNR the recommended timeout for IEEE 802
networks (which basically means all of the ones we care about) is
100ms, so that should be uncontroversial. For unicast DNS I have found
no recommended value. However, it seems vastly more likely that
hitting a 500ms timeout is caused by packet loss rather than by the
RTT genuinely being greater than 500ms, so taking this as a starting
value seems reasonable to me.
In the common case this greatly reduces the latency due to normal
packet loss. Moreover, once we get support for probing for features,
this means that we can send more packets before degrading the feature
level, whilst still allowing us to settle on the correct feature level
in a reasonable timeframe.
The timeouts are tracked per server (or per scope for the multicast
protocols), and once a server (or scope) receives a successful packet
the timeout is reset. We also track the largest RTT for the given
server/scope, and always start our timeouts at twice the largest
observed RTT.
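A sketch of the intended backoff step (hypothetical helper, not the
actual code):

    /* hypothetical sketch: double the timeout up to the old fixed
     * maximum, but never go below twice the largest observed RTT */
    static usec_t next_timeout(usec_t current, usec_t max, usec_t max_rtt) {
            return MAX(MIN(current * 2, max), 2 * max_rtt);
    }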
|
|
Let's optimize things a bit and properly compare DNS question arrays,
instead of checking if they are mutual supersets. This also makes ANY
query handling more accurate.
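Since the key counts must match, containment in one direction then
suffices; roughly (hypothetical sketch, with dns_question_contains()
standing in for the containment helper):

    /* hypothetical sketch: exact comparison instead of checking
     * for mutual supersets */
    if (a->n_keys != b->n_keys)
            return false;
    for (i = 0; i < a->n_keys; i++)
            if (!dns_question_contains(b, a->keys[i]))
                    return false;
    return true;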
|
|
This is handled by the kernel now that the socket is connect()ed.
|
|
This was a bug.
|
|
This function emits the UDP packet via the scope, but first it will
determine the current server (and connect to it) and store the
server in the transaction.
This should not change the behavior, but simplifies the code.
|
|
No functional change, but makes follow-up patch clearer.
|
|
With access to the server when creating the socket, we can connect()
to the server and hence simplify message sending and receiving in
follow-up patches.
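As a sketch of why that helps (plain BSD socket API; the error
handling and server_addr variable are hypothetical):

    /* once connect()ed, plain send()/recv() suffice, and the kernel
     * discards datagrams arriving from any other source address */
    int fd = socket(AF_INET, SOCK_DGRAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0);
    if (fd < 0)
            return -errno;
    if (connect(fd, (struct sockaddr*) &server_addr, sizeof(server_addr)) < 0)
            return -errno;
    if (send(fd, packet, size, 0) < 0)
            return -errno;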
|
|
Close the socket when changing the server in a transaction, in
order for it to be reopened with the right server when we send
the next packet.
This fixes a regression where we could get stuck with a failing
server.
|
|
This was only ever used by LLMNR, so don't request this for unicast DNS packets.
|
|
A transaction can only have one socket at a time, so no need to distinguish these.
|
|
We were stopping the transaction, but we need to stop processing the packet altogether.
|
|
We used to have one global socket, use one per transaction instead. This
has the side-effect of giving us a random UDP port per transaction, and
hence increasing the entropy and making cache poisoning significantly
harder to achieve.
We still reuse the same port number for packets belonging to the same
transaction (resent packets).
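The randomization itself falls out of the socket setup (sketch,
hypothetical surroundings): binding to port 0 makes the kernel pick a
fresh ephemeral source port for each per-transaction socket.

    union sockaddr_union local = {
            .in.sin_family = AF_INET,
            .in.sin_port = 0, /* let the kernel choose the port */
    };
    if (bind(fd, &local.sa, sizeof(local.in)) < 0)
            return -errno;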
|
|
This improves the resilience against cache poisoning by being stricter
about only accepting responses that precisely match the request they
are in reply to.
It should be noted that we still only use one port (which is picked
at random), rather than one port for each transaction. Port
randomization would improve things further, but is not required by
the RFC.
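Conceptually, the stricter acceptance check looks like this
(hypothetical sketch of the matching logic):

    /* drop replies that don't carry our transaction ID or don't
     * repeat exactly the question we asked */
    if (DNS_PACKET_ID(p) != t->id)
            return;
    if (!dns_question_is_equal(p->question, t->question))
            return;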
|
|
We want to discover information about the server and use that when crafting
packets to be resent.
|
|
The C and T bits in the DNS packet header definitions are specific to LLMNR.
In regular DNS, they are called AA and RD instead. Reflect that by naming
the macros accordingly, and alias the LLMNR-specific macros to them.
While at it, define RA, AD and CD getters as well.
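For reference, the resulting getters can be sketched as follows; the
bit positions are those of RFC 1035, with RFC 4795 reusing two of them
under different names:

    #define DNS_PACKET_AA(p) ((be16toh(DNS_PACKET_HEADER(p)->flags) >> 10) & 1)
    #define DNS_PACKET_RD(p) ((be16toh(DNS_PACKET_HEADER(p)->flags) >> 8) & 1)
    #define DNS_PACKET_RA(p) ((be16toh(DNS_PACKET_HEADER(p)->flags) >> 7) & 1)
    #define DNS_PACKET_AD(p) ((be16toh(DNS_PACKET_HEADER(p)->flags) >> 5) & 1)
    #define DNS_PACKET_CD(p) ((be16toh(DNS_PACKET_HEADER(p)->flags) >> 4) & 1)

    /* LLMNR's C and T bits live in the AA and RD positions */
    #define DNS_PACKET_LLMNR_C(p) DNS_PACKET_AA(p)
    #define DNS_PACKET_LLMNR_T(p) DNS_PACKET_RD(p)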
|
|
De-duplicate some magic numbers.
|
|
|
|
like:
src/shared/install.c: In function ‘unit_file_lookup_state’:
src/shared/install.c:1861:16: warning: ‘r’ may be used uninitialized in
this function [-Wmaybe-uninitialized]
return r < 0 ? r : state;
^
src/shared/install.c:1796:13: note: ‘r’ was declared here
int r;
^
|
|
It is redundant to store 'hash' and 'compare' function pointers in
struct Hashmap separately. The functions always comprise a pair.
Store a single pointer to struct hash_ops instead.
systemd keeps hundreds of hashmaps, so this saves a little bit of
memory.
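The consolidated structure is roughly (sketch):

    typedef unsigned long (*hash_func_t)(const void *p, const uint8_t hash_key[HASH_KEY_SIZE]);
    typedef int (*compare_func_t)(const void *a, const void *b);

    /* one shared, static vtable per key type, referenced by pointer
     * from each Hashmap instead of two per-instance function pointers */
    struct hash_ops {
            hash_func_t hash;
            compare_func_t compare;
    };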
|
|
we actually use LLMNR as the protocol
|
|
purposes
|
|
and timing out transactions.
That way the cache doesn't get confused when the system is suspended.
|