Age | Commit message | Author |
|
gcc complains that dirs might be uninitialized. It cannot be, since
we just checked above that name has one of three values, so there is
no need to check again.
|
|
|
|
sd_event_now() is a public function, so we must check all
arguments for validity. Update man page and add tests.
Sample debug message:
Assertion 'IN_SET(clock, CLOCK_REALTIME, CLOCK_REALTIME_ALARM, CLOCK_MONOTONIC, CLOCK_BOOTTIME, CLOCK_BOOTTIME_ALARM)' failed at src/libsystemd/sd-event/sd-event.c:2719, function sd_event_now(). Ignoring.
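A minimal caller sketch of the validation, using only the public libsystemd API (the specific error code returned for an invalid clock is not claimed here):
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <systemd/sd-event.h>

int main(void) {
        sd_event *e = NULL;
        uint64_t t;
        int r;

        if (sd_event_default(&e) < 0)
                return 1;

        /* An invalid clock id is now caught by the argument checks and
         * reported as a negative error (with a debug message like the one
         * above), instead of being passed on unchecked. */
        r = sd_event_now(e, (clockid_t) -1, &t);
        printf("sd_event_now() with an invalid clock returned %d\n", r);

        sd_event_unref(e);
        return 0;
}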
|
|
Go over the entries in the map and check that they make sense.
Tests are added. In the future we might want to do additional
checks, e.g. verifying that the error names are in the expected
format.
|
|
errno_from_name used an unusual return convention where 0 meant
"not found". This tripped up config_parse_syscall_errno(),
which would treat that as success. Return -EINVAL instead,
and adjust bus_error_name_to_errno() for the new convention.
Also remove a goto which was used as a simple if, and clean
up the surrounding code a bit.
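A minimal sketch of the new convention (the name table here is shortened and purely illustrative):
#include <errno.h>
#include <string.h>

static const struct { const char *name; int id; } errno_names[] = {
        { "EPERM",  EPERM  },
        { "ENOENT", ENOENT },
        { "EINVAL", EINVAL },
};

/* "Not found" is now a real negative error instead of 0, so callers such as
 * config_parse_syscall_errno() can no longer mistake it for success. */
static int errno_from_name(const char *name) {
        for (size_t i = 0; i < sizeof(errno_names) / sizeof(errno_names[0]); i++)
                if (strcmp(errno_names[i].name, name) == 0)
                        return errno_names[i].id;

        return -EINVAL; /* previously: 0 */
}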
|
|
This is not particularly intrusive because it happens in simple
utility functions. It helps gcc understand that error codes
are negative.
This gets rid of most of the remaining warnings.
|
|
Compare errno with zero in a way that tells gcc that
(if the condition is true) errno is positive.
|
|
gcc is confused by the common idiom of
return errno ? -errno : -ESOMETHING
and thinks a positive value may be returned. Replace this condition
with errno > 0 to help gcc and avoid many spurious warnings. I filed
a gcc RFE a long time ago, but it is hard to say if it will ever be
implemented [1].
Both conventions were used in the codebase; this change makes things
more consistent. This is a follow-up to bcb161b0230f.
[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61846
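A self-contained example of the idiom change (the syscall and the -EIO fallback are arbitrary):
#include <errno.h>
#include <unistd.h>

static int close_checked(int fd) {
        if (close(fd) < 0)
                /* Writing "errno > 0" instead of plain "errno" lets gcc see
                 * that the returned value is always negative. */
                return errno > 0 ? -errno : -EIO;

        return 0;
}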
|
|
Add machine-id setting
|
|
Allow for overriding all other machine-ids which may be present on
the system using a kernel command line systemd.machine_id or
--machine-id= option.
This is especially useful for network booted systems where the
machine-id needs to be static, or for containers where a specific
machine-id is wanted.
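For example, the override might be passed on the kernel command line like this (the ID below is only a placeholder; a machine ID is a 32-character lowercase hexadecimal string):
systemd.machine_id=0123456789abcdef0123456789abcdef
or equivalently via the --machine-id= option.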
|
|
|
|
sd-event: instrument sd_event_run() for profiling delays
|
|
Set SD_EVENT_PROFILE_DELAYS to activate accounting and periodic logging
of the distribution of delays between sd_event_run() calls.
Time spent in dispatching as well as time spent outside of
sd_event_run() is measured and accounted for. Every 5 seconds a
logarithmic histogram of the loop iteration delays from the previous 5
seconds is logged.
This is useful in identifying the frequency and magnitude of latencies
affecting the event loop, which should be kept to a minimum.
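The idea of a logarithmic delay histogram, as a standalone sketch rather than the actual sd-event instrumentation:
#include <stdint.h>
#include <stdio.h>

#define DELAY_BUCKETS 24

static unsigned delays[DELAY_BUCKETS];

/* Bucket n counts delays in [2^n, 2^(n+1)) microseconds; the last bucket
 * absorbs everything larger. */
static void account_delay(uint64_t usec) {
        unsigned bucket = 0;

        while (usec > 1 && bucket < DELAY_BUCKETS - 1) {
                usec >>= 1;
                bucket++;
        }
        delays[bucket]++;
}

int main(void) {
        /* A few fake loop iteration delays, in microseconds. */
        account_delay(3);
        account_delay(250);
        account_delay(250);
        account_delay(70000);

        for (unsigned i = 0; i < DELAY_BUCKETS; i++)
                if (delays[i] > 0)
                        printf("~2^%u usec: %u\n", i, delays[i]);
        return 0;
}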
|
|
Also add a coccinelle recipe to help with such transitions.
|
|
capabilities: added support for ambient capabilities.
|
|
Fix miscalculated buffer size and uses of size-unlimited sprintf()
|
|
The ambient capability tests are only run if the kernel has support for
ambient capabilities.
|
|
This patch adds support for ambient capabilities in service files. The
idea with ambient capabilities is that the executed processes can run as a
non-root user and still get some inherited capabilities, without any need
to add the capabilities to the executable file.
You need at least Linux 4.3 to use ambient capabilities. SecureBit
keep-caps is automatically added when you use ambient capabilities and
wish to change the user.
An example system service file might look like this:
[Unit]
Description=Service for testing caps
[Service]
ExecStart=/usr/bin/sleep 10000
User=nobody
AmbientCapabilities=CAP_NET_ADMIN CAP_NET_RAW
After starting the service it has these capabilities:
CapInh: 0000000000003000
CapPrm: 0000000000003000
CapEff: 0000000000003000
CapBnd: 0000003fffffffff
CapAmb: 0000000000003000
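Under the hood this builds on the Linux 4.3 prctl() interface; a minimal sketch of raising an ambient capability directly, not the systemd implementation:
#include <stdio.h>
#include <sys/prctl.h>
#include <linux/capability.h>

#ifndef PR_CAP_AMBIENT
#define PR_CAP_AMBIENT       47
#define PR_CAP_AMBIENT_RAISE  2
#endif

int main(void) {
        /* The capability must already be in the permitted and inheritable
         * sets, otherwise the kernel refuses to raise it in the ambient set. */
        if (prctl(PR_CAP_AMBIENT, PR_CAP_AMBIENT_RAISE, CAP_NET_RAW, 0, 0) < 0) {
                perror("PR_CAP_AMBIENT_RAISE");
                return 1;
        }

        /* Ambient capabilities survive execve() of non-root processes. */
        printf("CAP_NET_RAW raised in the ambient set\n");
        return 0;
}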
|
|
Change the capability bounding set parser and logic so that the bounding
set is kept as a positive set internally. This means that the set
reflects those capabilities that we want to keep instead of drop.
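In practice a positive set is applied by dropping everything that is not in it; a rough sketch, not the actual parser or apply code:
#include <sys/prctl.h>
#include <linux/capability.h>

/* keep is a bitmask of the capabilities we want to retain; every other
 * capability is removed from the bounding set. */
static int apply_bounding_set(unsigned long long keep) {
        for (int cap = 0; cap <= CAP_LAST_CAP; cap++)
                if (!(keep & (1ULL << cap)))
                        if (prctl(PR_CAPBSET_DROP, cap, 0, 0, 0) < 0)
                                return -1;

        return 0;
}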
|
|
journal: normalize priority of logging sources
|
|
function.
Not sure if this results in an exploitable buffer overflow; probably not,
since the int value is likely sanitized somewhere earlier and is put
through a bit mask shortly before being used.
|
|
|
|
We wouldn't know how to validate them, since they are the signatures, and hence have no signatures.
|
|
Given how fragile DNS servers are with some DNS types, and given that we really should avoid confusing them with
known-weird lookups, refuse to do lookups for known-obsolete RR types.
|
|
current_feature_level
This is a follow-up for f4461e5641d53f27d6e76e0607bdaa9c0c58c1f6.
|
|
its own
As suggested by Vito Caputo:
https://github.com/systemd/systemd/pull/2289#discussion-diff-49276220
|
|
|
|
DNSSEC
Move detection into a set of new functions that check whether one specific server can do DNSSEC, whether a server and
a specific transaction can do DNSSEC, or whether a transaction and all its auxiliary transactions could do so.
Also, do these checks both before we acquire additional RRs for the validation (so that we can skip them if the server
doesn't do DNSSEC anyway), and after we acquired them all (to see if any of the lookups changed our opinion about the
servers).
This also tightens the checks a bit: a server that lacks TCP support is considered incompatible with DNSSEC too.
|
|
This makes it easier to log information about a specific DnsServer object.
|
|
This changes the DnsServer logic to count UDP and TCP failures separately. This is useful so that we don't end
up downgrading the feature level from one UDP level to a lower UDP level just because a TCP connection we did because
of a TC response failed.
This also adds accounting of truncated packets. If we detect incoming truncated packets, and count too many failed TCP
connections (which is the normal fallback if we get a truncated UDP packet), we downgrade the feature level, given that
the responses at the current levels don't get through, and we somehow need to make sure they become smaller, which they
will do if we don't request DNSSEC or EDNS support.
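Conceptually the per-server state grows along these lines (field names and the threshold logic are illustrative, not necessarily what the code uses):
#include <stdbool.h>

typedef struct ServerCounters {
        unsigned n_failed_udp;     /* failed or timed-out UDP transactions */
        unsigned n_failed_tcp;     /* failed TCP connections */
        unsigned packet_truncated; /* TC responses seen at the current level */
} ServerCounters;

/* Downgrade only when the evidence fits: repeated TCP failures after
 * truncated UDP replies suggest the replies are simply too large for this
 * server, so asking for less (no DNSSEC/EDNS) is the way out. */
static bool should_downgrade(const ServerCounters *c, unsigned threshold) {
        return c->packet_truncated > 0 && c->n_failed_tcp >= threshold;
}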
This makes resolved work much better with crappy DNS servers that do not implement TCP and only limited UDP packet
sizes, but otherwise support DNSSEC RRs. They end up choking on the generally larger DNSSEC RRs and there's no way to
retrieve the full data.
|
|
|
|
|
|
|
|
supporting them
If we already degraded the feature level below DO don't bother with sending requests for DS, DNSKEY, RRSIG, NSEC, NSEC3
or NSEC3PARAM RRs. After all, we cannot do DNSSEC validation then anyway, and we better not press a legacy server like
this with such modern concepts.
This also has the benefit that when we try to validate a response we received using DNSSEC, and we detect a limited
server support level while doing so, all further auxiliary DNSSEC queries will fail right away.
|
|
|
|
TCP or vice versa
Under the assumption that packet failures (i.e. FORMERR, SERVFAIL, NOTIMP) are caused by the packet contents, not the
transport used, we shouldn't switch between UDP and TCP when we get them, but only downgrade the higher levels down to UDP.
|
|
UDP ICMP errors are reported to us via recvmsg() when we read a reply. Handle this properly, consider it a lost
packet, and retry the connection.
This also adds some additional logging for invalid incoming packets.
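On a connected UDP socket such ICMP errors show up as errors from the read itself; roughly (simplified, not the resolved code; -EAGAIN is just this sketch's "please retry" signal):
#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

static ssize_t read_reply(int fd, struct msghdr *mh) {
        ssize_t l;

        l = recvmsg(fd, mh, 0);
        if (l < 0) {
                /* Asynchronous ICMP errors (port/host/network unreachable)
                 * are delivered here; treat them as a lost packet and let
                 * the caller retry instead of failing the transaction. */
                if (errno == ECONNREFUSED || errno == EHOSTUNREACH || errno == ENETUNREACH)
                        return -EAGAIN;

                return -errno;
        }

        return l;
}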
|
|
Previously, when we couldn't connect to a DNS server via TCP we'd abort the whole transaction using a
"connection-failure" state. This change removes that, and counts failed connections as "lost packet" events, so that
we switch back to the UDP protocol again.
|
|
If we failed to contact a DNS server via TCP, bump the feature level back to UDP again. This way we'll switch back and
forth between UDP and TCP if we fail to contact a host.
Generally, we prefer UDP over TCP, which is why UDP is a higher feature level. But some servers support only UDP and
not TCP, hence when we reach the lowest feature level of TCP and want to downgrade from there, we pick UDP again. We
thus keep downgrading until we reach TCP, and from then on we cycle between UDP and TCP.
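A sketch of the downgrade step described here (the enum is illustrative and omits the real EDNS0/DNSSEC levels):
typedef enum FeatureLevel {
        FEATURE_TCP,   /* lowest level */
        FEATURE_UDP,
        FEATURE_EDNS0, /* higher levels elided */
} FeatureLevel;

/* Downgrade one step, but once TCP is reached go back to UDP: from then on
 * we simply alternate between the two transports. */
static FeatureLevel feature_level_downgrade(FeatureLevel current) {
        if (current == FEATURE_TCP)
                return FEATURE_UDP;

        return current - 1;
}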
|
|
The code to retry transactions was repeated over and over again; simplify this by replacing it with a new function.
|
|
|
|
|
|
This also introduces a new macro siphash24_compress_byte() which is useful to add a single byte into the hash stream,
and ports one user over to it.
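The wrapper is presumably a thin macro over the existing compress call, something along these lines (assuming a siphash24_compress(data, size, state) helper as used elsewhere in the tree):
/* Feed a single byte into an ongoing siphash24 state. */
#define siphash24_compress_byte(v, state) \
        siphash24_compress(&(const uint8_t) { (v) }, sizeof(uint8_t), (state))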
|
|
The hash operations are not really that specific to hashmaps, hence split them into a .c module of their own.
|
|
response
This implements RFC 5155, Section 8.8 and RFC 4035, Section 5.3.4:
When we receive a response with an RRset generated from a wildcard we
need to look for one NSEC/NSEC3 RR that proves that there's no explicit RR
around before we accept the wildcard RRset as response.
This patch does a couple of things: the validation calls will now
identify wildcard signatures for us, and let us know the RRSIG used (so
that the RRSIG's signer field lets us know what the wildcard was that
generated the entry). Moreover, when iterating through the RRsets of a
response we now employ three phases instead of just two:
a) in the first phase we only look for DNSKEY RRs
b) in the second phase we only look for NSEC RRs
c) in the third phase we look for all kinds of RRs
Phase a) is necessary, since DNSKEYs "unlock" more signatures for us,
hence we shouldn't assume a key is missing until all DNSKEY RRs have
been processed.
Phase b) is necessary since NSECs need to be validated before we can
validate wildcard RRs due to the logic explained above.
Phase c) validates everything else. This phase also handles RRsets that
cannot be fully validated and removes them or lets the transaction fail.
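Schematically the iteration now looks like this (stand-in types, heavily simplified; a real implementation also skips RRsets already validated in an earlier phase):
#include <stddef.h>

typedef struct RR { int type; } RR;          /* stand-in for a resource record */
enum { TYPE_NSEC = 47, TYPE_DNSKEY = 48 };   /* IANA RR type numbers */
enum Phase { PHASE_DNSKEY, PHASE_NSEC, PHASE_ALL };

static void validate_one(RR *rr) { (void) rr; /* ...actual DNSSEC work... */ }

/* a) DNSKEY only: keys unlock further signatures.
 * b) NSEC only: needed before wildcard RRsets can be accepted.
 * c) everything else, including dropping RRsets that cannot be validated. */
static void validate_reply(RR *rrs, size_t n) {
        for (int phase = PHASE_DNSKEY; phase <= PHASE_ALL; phase++)
                for (size_t i = 0; i < n; i++) {
                        if (phase == PHASE_DNSKEY && rrs[i].type != TYPE_DNSKEY)
                                continue;
                        if (phase == PHASE_NSEC && rrs[i].type != TYPE_NSEC)
                                continue;

                        validate_one(&rrs[i]);
                }
}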
|
|
There's now nsec3_hashed_domain_format() and nsec3_hashed_domain_make().
The former takes a hash value and formats it as a domain name; the latter takes
a domain name, hashes it and then invokes nsec3_hashed_domain_format().
This way we can reuse more code, as the formatting logic can be unified
between this call and another place.
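Judging by the description, the split looks roughly like this (declarations only; the parameter lists are a guess for illustration):
#include <stddef.h>
#include <stdint.h>

typedef struct DnsResourceRecord DnsResourceRecord; /* opaque here */

/* Format an already-computed NSEC3 hash as a domain name under the zone. */
int nsec3_hashed_domain_format(const uint8_t *hash, size_t hash_size,
                               const char *zone, char **ret);

/* Hash the domain with the parameters of the given NSEC3 RR, then delegate
 * to nsec3_hashed_domain_format(). */
int nsec3_hashed_domain_make(DnsResourceRecord *nsec3, const char *domain,
                             const char *zone, char **ret);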
|
|
|
|
validated keys list
When validating a transaction we initially collect DNSKEY, DS, SOA RRs
in the "validated_keys" list, that we need for the proofs. This includes
DNSKEY and DS data from our trust anchor database. Quite possibly we
learn that some of these DNSKEY/DS RRs have been revoked between the
time we request and collect those additional RRs and the time we begin
the validation step. In this case we need to make sure that the respective
DS/DNSKEY RRs are removed again from our list. This patch adds that, and
strips known revoked trust anchor RRs from the validated list before we
begin the actual validation proof, and each time we add more DNSKEY
material to it while we are doing the proof.
|
|
|
|
in labels
|