|
When we receive a TTL=0 RR, let's only flush that specific RR and
not the whole RRset.
On mDNS, with RRsets that have a shared owner, this is how specific RRs are
removed from the set, hence support this. On non-mDNS the whole
RRset will already have been removed much earlier in dns_cache_put(), hence
there's no reason to remove it again.
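As a rough illustration of the rule (the types and the cache_item_free() helper below are simplified stand-ins, not the actual resolved cache code; dns_resource_record_equal() mirrors the real comparison helper), a TTL=0 "goodbye" record removes only the matching RR, never its whole RRset:

    /* Sketch only: given the chain of cache items sharing the RR's key,
     * unlink just the one item equal to the TTL=0 record. */
    typedef struct CacheItem {
            DnsResourceRecord *rr;
            struct CacheItem *next;
    } CacheItem;

    static void cache_flush_goodbye(CacheItem **head, DnsResourceRecord *goodbye) {
            for (CacheItem **p = head; *p; p = &(*p)->next)
                    if (dns_resource_record_equal((*p)->rr, goodbye) > 0) {
                            CacheItem *dead = *p;
                            *p = dead->next;        /* drop this RR only */
                            cache_item_free(dead);  /* hypothetical helper */
                            break;
                    }
    }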
|
|
|
|
|
|
We should never use the TTL of an unauthenticated SOA to cache an
authenticated RR.
|
|
We call it anyway as one of the first calls in dns_cache_put(), hence
there's no reason to do this multiple times.
|
|
Let's keep entries for longer, and keep more of them. After all, due to the
DNSSEC hookup the number of RRs we need to store is much higher now.
|
|
|
|
|
|
Let's make DNS class helpers more like DNS type helpers, and move them
from resolved-dns-rr.[ch] into dns-type.[ch].
This also adds two new calls, dns_class_is_pseudo() and
dns_class_is_valid_rr(), which operate similarly to dns_type_is_pseudo()
and dns_type_is_valid_rr(), but for classes instead of types.
This should hopefully make handling of DNS classes and DNS types more
alike.
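As a sketch of what such helpers might look like (class values per RFC 1035/2136; the actual classification in dns-type.[ch] may differ):

    #include <stdbool.h>
    #include <stdint.h>

    enum {
            DNS_CLASS_IN   = 0x01,   /* Internet */
            DNS_CLASS_NONE = 0xFE,   /* RFC 2136 update prerequisites */
            DNS_CLASS_ANY  = 0xFF,   /* QCLASS wildcard */
    };

    /* Pseudo classes only make sense in queries/updates, never in a stored RR. */
    static bool dns_class_is_pseudo(uint16_t class) {
            return class == DNS_CLASS_ANY || class == DNS_CLASS_NONE;
    }

    /* Only non-pseudo classes are acceptable in an actual resource record. */
    static bool dns_class_is_valid_rr(uint16_t class) {
            return !dns_class_is_pseudo(class);
    }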
|
|
|
|
OK to be unsigned
This large patch adds a couple of mechanisms to ensure we get NSEC3 and
proof-of-unsigned support into place. Specifically:
- Each item in a DnsAnswer gets two bit flags now:
  DNS_ANSWER_AUTHENTICATED and DNS_ANSWER_CACHEABLE. The former is
  necessary since DNS responses might contain signed as well as unsigned
  RRsets in one response, and we need to remember which ones are signed and
  which ones aren't. The latter is necessary, since we now need to keep
  track of which RRsets may be cached and which ones may not be, even while
  manipulating DnsAnswer objects.
- The .n_answer_cacheable field of DnsTransaction is dropped now (it used to
  store how many of the first DnsAnswer entries are cacheable), and is
  replaced by the DNS_ANSWER_CACHEABLE flag instead.
- NSEC3 proofs are implemented now (lacking support for the wildcard
  part, to be added in a later commit).
- Support for the "AD" bit has been dropped. It's unsafe, and now that
  we have end-to-end authentication we don't need it anymore.
- An auxiliary DnsTransaction of a DnsTransaction is now kept around at
  least as long as the latter stays around. We no longer remove the
  auxiliary DnsTransaction as soon as it has completed. This is necessary,
  as we are now interested not only in the RRsets it acquired but also
  in its authentication status.
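A sketch of the per-item flags described above (the flag names come from this message, the struct layout is illustrative only):

    typedef struct DnsResourceRecord DnsResourceRecord;

    typedef enum DnsAnswerFlags {
            DNS_ANSWER_AUTHENTICATED = 1 << 0,  /* RRset arrived with a valid DNSSEC proof */
            DNS_ANSWER_CACHEABLE     = 1 << 1,  /* RRset may be entered into the cache */
    } DnsAnswerFlags;

    typedef struct DnsAnswerItem {
            DnsResourceRecord *rr;
            int ifindex;
            DnsAnswerFlags flags;   /* replaces the old per-transaction cacheable counter */
    } DnsAnswerItem;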
|
|
Let's be safe and explicitly avoid adding an auxiliary transaction
dependency on ourselves.
|
|
The DNS_ANSWER_FOREACH macros do this internally anyway, no need to
duplicate this.
|
|
We don't need a separate timeout anymore once we have received a reply, as
the auxiliary transactions have their own timeouts.
|
|
|
|
they are destroyed
A failing transaction might cause other transactions to fail too, and
thus the set of transactions to notify for a transaction might change
while we are notifying them. Protect against that.
|
|
|
|
|
|
We end up needing the stringified transaction key in many log messages,
hence let's simplify the logic and cache it inside of the transaction:
generate it the first time we need it, and reuse it afterwards. Free it
when the transaction goes away.
This also updates a couple of log messages to make use of this.
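The pattern boils down to lazy initialization, roughly like this (the field and helper names are assumptions, not the actual code):

    static const char *dns_transaction_key_string(DnsTransaction *t) {
            /* Generate the stringified key on first use and keep it around;
             * it is freed together with the transaction. */
            if (!t->key_string)
                    t->key_string = dns_resource_key_to_string(t->key);  /* assumed helper */

            return t->key_string ?: "n/a";
    }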
|
|
|
|
|
|
Resolve: misc cleanups
|
|
|
|
|
|
Manager status
|
|
man: fix typo in journal-remote.conf(5)
|
|
|
|
Fifth batch of DNSSEC support patches
|
|
journal-remote: add documentation in the unit files
|
|
basic: ENABLE_DEBUG_HASHMAP needs <pthread.h>
|
|
This is a follow-up to commit 11c3a36649e5e5e77db499c92f3
|
|
|
|
|
|
Note that this is not complete yet, as we don't handle wildcard domains
correctly, nor do we correctly handle domains that use empty non-terminals.
|
|
necessary
|
|
|
|
It's not OK to drop these for our proof of non-existence checks.
|
|
candidate state
|
|
|
|
digest ids
Let's move this into a function digest_to_gcrypt() that we can reuse
later on when implementing NSEC3 validation.
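A plausible shape for such a helper, mapping DNSSEC digest algorithm numbers (RFC 4034/4509) to gcrypt message digest IDs; the real function may cover more algorithms:

    #include <errno.h>
    #include <gcrypt.h>

    enum {
            DNSSEC_DIGEST_SHA1   = 1,
            DNSSEC_DIGEST_SHA256 = 2,
    };

    static int digest_to_gcrypt(uint8_t algorithm) {
            switch (algorithm) {
            case DNSSEC_DIGEST_SHA1:
                    return GCRY_MD_SHA1;
            case DNSSEC_DIGEST_SHA256:
                    return GCRY_MD_SHA256;
            default:
                    return -EOPNOTSUPP;
            }
    }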
|
|
validation
Specifically, it appears as if the NSEC next domain name should be in
the original casing rather than canonical form, when validating.
|
|
treewide: fix typos and indentation
|
|
|
|
When a client connects with follow=1 and then disconnects, we can get
stuck in sd_journal_wait() indefinitely if no journal messages are logged.
Every time a client does this, another thread is allocated, and these
continue to stack up until either a journal message is logged or we run out
of mappings to put a stack in.
By adding a timeout, if we don't see any journal messages within that timeout
we simply pop back out to microhttpd, which will sanity-check the
connection for us and, if it is still connected, pop us back into the wait
for more journal messages.
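In terms of the public API, this amounts to passing a finite timeout to sd_journal_wait() instead of (uint64_t) -1; a sketch (the 10 s value is an assumption, not necessarily what the patch uses):

    #include <systemd/sd-journal.h>

    #define WAIT_TIMEOUT_USEC (10ULL * 1000 * 1000)   /* assumed value */

    /* Returns SD_JOURNAL_NOP on timeout, letting the caller drop back into
     * microhttpd, which verifies the client is still connected before the
     * next wait. */
    static int wait_for_journal(sd_journal *j) {
            return sd_journal_wait(j, WAIT_TIMEOUT_USEC);
    }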
|
|
Fixes:
$ systemd-analyze verify a@.service
Failed to load a@.service: Invalid argument
|
|
This was the case that caused various problems that were fixed in
preceding patches, so it is good to add a test that uses it directly.
In "may_fail" test cases try again with a bigger buffer.
Instead of allocating various buffers on the stack, malloc them.
This is more reliable in case of big buffers, and allows tools like
valgrind and address sanitizer to find overflows more easily.
|
|
Add a test that LZ4_decompress_safe_partial does (not) work as
expected, so that if it starts to work at some point, we'll catch
this and adjust our code.
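Roughly, such a test could look like the sketch below (buffer sizes arbitrary); with the lz4 versions current at the time, partial decompression of a long repeating run into a small buffer fails, which is exactly the behaviour being pinned down:

    #include <assert.h>
    #include <string.h>
    #include <lz4.h>

    static void test_lz4_decompress_partial(void) {
            static char buf[20000];
            char compressed[LZ4_COMPRESSBOUND(sizeof(buf))], out[100];
            int csize, r;

            /* A long run of identical bytes compresses into one large "sequence". */
            memset(buf, 'x', sizeof(buf));
            csize = LZ4_compress_default(buf, compressed, sizeof(buf), sizeof(compressed));
            assert(csize > 0);

            /* Ask for only the first 12 bytes, with an output buffer far too small
             * for the whole sequence; this currently fails, and the assertion will
             * trip once lz4 learns to do this, so we notice and adjust our code. */
            r = LZ4_decompress_safe_partial(compressed, out, csize, 12, sizeof(out));
            assert(r < 0);
    }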
|
|
The header is 7 bytes, and this size was not accounted for in
total_out. This means that we could create a file that was 7 bytes
longer than requested, and the debug output was also inconsistent.
|
|
compress_blob took src, src_size, dst and *dst_size, but dst_size
wasn't used as an input parameter with the size of dst, but only as an
output parameter. dst was implicitly assumed to be at least src_size-1
bytes long.
This code wasn't *wrong*, because the only real caller in
journal-file.c got it right. But it was misleading, and the tests in
test-compress.c got it wrong, and worked only because the output
buffer happened to be the same size as the input buffer. So add a separate
dst_allocated_size parameter to make it explicit what the size of the
buffer is, and to allow tests to proceed with different output buffer
sizes.
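In other words, the prototype gains an explicit capacity argument; schematically (the exact signature in the journal code may differ slightly):

    /* before: dst assumed to hold at least src_size-1 bytes, *dst_size out-only */
    int compress_blob(const void *src, uint64_t src_size,
                      void *dst, size_t *dst_size);

    /* after: dst_allocated_size states the real capacity of dst,
     * *dst_size still returns the number of bytes actually written */
    int compress_blob(const void *src, uint64_t src_size,
                      void *dst, size_t dst_allocated_size, size_t *dst_size);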
|
|
lz4 has to decompress a whole "sequence" at a time. When the compressed
data is composed of a repeating pattern, the whole set of repeats has
to be decompressed, and the output buffer has to be big enough.
This is unfortunate, because potentially the slowdown is very big. We
are only interested in the field name, but we might have to decompress
the whole thing. But the full cost will be incurred only when the
full entry is a repeating pattern. In practice this shouldn't happen
(apart from tests and the like). Hopefully lz4 will be fixed to avoid
this problem, or it will grow a new function which we can use [1], so
this fix should be temporary.
[1] https://groups.google.com/d/msg/lz4c/_3kkz5N6n00/oTahzqErCgAJ
|