Meson status and conditional simplification
|
|
Using conf.set() with a boolean argument does the right thing:
either #define or #undef. This means that conf.set() can be used unconditionally.
Previously I used '1' as the placeholder value, and that needs to be changed to
'true' for consistency (under meson, 1 cannot be used in a boolean context). All
checks need to be adjusted.
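For illustration, a minimal sketch of what this means on the C side (HAVE_LIBIDN is
used here purely as an example option name): the generated config.h ends up with
either a #define or an #undef, so plain #ifdef guards keep working:

    /* config.h generated from conf.set('HAVE_LIBIDN', true) contains
     *     #define HAVE_LIBIDN
     * and from conf.set('HAVE_LIBIDN', false) contains
     *     #undef HAVE_LIBIDN
     */
    #include "config.h"

    const char* idn_support(void) {
    #ifdef HAVE_LIBIDN
        return "IDN-aware code path compiled in";
    #else
        return "fallback code path compiled in";
    #endif
    }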
|
|
This is useful on systems like NixOS, where python3 is not in
/usr/bin/python3, as well as for people using alternative ways to
install python, such as virtualenv/pyenv.
|
|
as mentioned in https://github.com/systemd/systemd/pull/5811
|
|
|
|
Also detect libgpg-error. Require both to be present for HAVE_GCRYPT,
even though libgpg-error is only used in src/resolve. If one is available,
the other should be too, so it doesn't seem worth the trouble to make two
separate conditions.
|
|
The indentation settings for emacs' meson-mode are added in .dir-locals.
All files are reindented automatically, using the latest meson-mode from git.
Indentation should now be fairly consistent.
|
|
This simplifies things and leads to a smaller installation footprint.
libsystemd_internal and libsystemd_journal_internal are linked into
libsystemd-shared and available to all programs linked to libsystemd-shared.
libsystemd_journal_internal is not needed separately anymore, and libsystemd-shared
is used everywhere. The few exceptions are: libsystemd.so, test-engine,
test-bus-error, and various loadable modules.
|
|
With mesonbuild/meson#1545, meson does not propagate deps of a library
when linking with that library. That's of course the right thing to do,
but it exposes a bunch of missing deps.
This compiles with both meson-0.39.1 and meson-git + pr/1545.
|
|
Tests can be run with 'ninja-build test' or using 'mesontest'.
'-Dtests=unsafe' can be used to include the "unsafe" tests in the
test suite, same as with autotools.
v2:
- use more conf.get guards around optional components
- declare deps on generated headers for test-{af,arphrd,cap}-list
v3:
- define environment for tests
Most tests don't need this, but to be consistent with the autotools-based build, and
to avoid questions about which tests need it and which don't, set the same environment
for all tests.
v4:
- rework test generation
Use a list of lists to define each test. This way we can reduce the
boilerplate somewhat, although the test listings are still pretty verbose. We
can also move the definitions of the tests to the subdirs. Unfortunately some
subdirs are included earlier than some of the libraries that test binaries
are linked to. So just dump all definitions of all tests that cannot be
defined earlier into src/test. The `executable` definitions are still at the
top level, so the binaries are compiled into the build root.
v5:
- tag test-dnssec-complex as manual
v6:
- fix HAVE_LIBZ typo
- add missing libgobject/libgio defs
- mark test-qcow2 as manual
|
|
It's crucial that we can build systemd using VS2010!
... er, wait, no, that's not the official reason. We need to shed old systems
by requiring python 3! Oh, no, it's something else. Maybe we need to throw out
345 years of knowledge accumulated in autotools? Whatever, this new thing is
cool and shiny, let's use it.
This is not complete, I'm throwing it out here for your amusement and critique.
- rules for sd-boot are missing. Those might be quite complicated.
- rules for tests are missing too. Those are probably quite simple and
repetitive, but there's lots of them.
- it's likely that I didn't get all the conditions right, I only tested "full"
compilation where most deps are provided and nothing is disabled.
- busname.target and all .busname units are skipped on purpose.
Otherwise, installation into $DESTDIR has the same list of files as the
autoconf install, except for .la files.
It'd be great if people had a careful look at all the library linking options.
I added stuff until things compiled, and in the end there's much less linking
than in the old system. But it seems that there's still a lot of unnecessary
deps.
meson has a `shared_module` statement, which sounds like something appropriate
for our nss and pam modules. Unfortunately, I couldn't get it to work. For the
nss modules, we need an .so version of '2', but `shared_module` disallows the
version argument. For the pam module, it also didn't work, I forgot the reason.
The handling of .m4 and .in and .m4.in files is rather awkward. It's likely
that this could be simplified. If make support is ever dropped, I think it'd
make sense to switch to a different templating system so that two different
languages are not required, which would make everything simpler yet.
v2:
- use get_pkgconfig_variable
- use sh not bash
- use add_project_arguments
v3:
- drop required:true and fix progs/prog typo
v4:
- use find_library('bz2')
- add TTY_GID definition
- define __SANE_USERSPACE_TYPES__
- use join_paths(prefix, ...) on all paths to make them all absolute
v5:
- replace all declare_dependency's with []
- add more conf.get guards around optional components
v6:
- drop -pipe, -Wall which are the default in meson
- use compiler.has_function() and compiler.has_header_symbol instead of the
hand-rolled checks.
- fix duplication in 'liblibsystemd' library name
- use the right .sym file for pam_systemd
- rename 'compiler' to 'cc': shorter, and more idiomatic.
v7:
- use ENABLE_ENVIRONMENT_D not HAVE_ENVIRONMENT_D
- rename prefix to prefixdir, rootprefix to rootprefixdir
("prefix" is too common of a name and too easy to overwrite by mistake)
- wrap more stuff with conf.get('ENABLE...') == 1
- use rootprefix=='/' and rootbindir as install_dir, to fix paths under
split-usr==true.
v8:
- use .split() also for src/coredump. Now everything is consistent ;)
- add rootlibdir option and use it on the libraries that require it
v9:
- indentation
v10:
- fix check for qrencode and libaudit
v11:
- unify handling of executable paths, provide options for all progs
This makes the meson build behave slightly differently than the
autoconf-based one, because we always first try to find the executable in the
filesystem, and fall back to the default. I think different handling of
loadkeys, setfont, and telinit was just a historical accident.
In addition to checking in $PATH, also check /usr/sbin/, /sbin for programs.
In Fedora $PATH includes /usr/sbin (and /sbin is a symlink to /usr/sbin),
but in Debian, those directories are not included in the path.
C.f. https://github.com/mesonbuild/meson/issues/1576.
- call all the options 'xxx-path' for clarity.
- sort man/rules/meson.build properly so it's stable
|
|
Various meson-independent cleanups from the meson patchset
|
|
This is our own header; we should use the local-include syntax
("" not <>) to make it clear we are including the one from the build tree.
All other includes of files from src/systemd/ use this scheme.
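A before/after sketch (sd-id128.h is picked here only as an example of a header
shipped in src/systemd/; the exact angle-bracket form shown is illustrative):

    /* Before (looks like a system header, and may pick up an installed copy
     * rather than the one from the build tree):
     *
     *     #include <systemd/sd-id128.h>
     *
     * After (local-include syntax, resolves to our own src/systemd/ copy): */
    #include "sd-id128.h"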
|
|
Fixes wrong indentation introduced by commit 43688c49d1fdb585196d94e2e30bb29755fa591b.
|
|
|
|
Previously, `SO_REUSEADDR` was set before `bind`-ing the socket. Thus,
even if another LLMNR stack was running, `bind` always succeeded and
we could not detect the other stack. With this commit, we first try to
`bind` without `SO_REUSEADDR`, and if that fails, show a warning and
retry with `SO_REUSEADDR`.
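A rough standalone sketch of the described bind logic (function name, logging and
the EADDRINUSE check are illustrative, not the actual resolved code):

    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>

    /* Bind without SO_REUSEADDR first, so that an already-running LLMNR
     * responder is detected; only then warn and retry with SO_REUSEADDR. */
    static int bind_llmnr_socket(int fd, const struct sockaddr_in *sa) {
        static const int one = 1;

        if (bind(fd, (const struct sockaddr*) sa, sizeof(*sa)) >= 0)
            return 0;
        if (errno != EADDRINUSE)
            return -errno;

        fprintf(stderr, "Another LLMNR responder seems to be running, "
                "retrying with SO_REUSEADDR.\n");

        if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one)) < 0)
            return -errno;
        if (bind(fd, (const struct sockaddr*) sa, sizeof(*sa)) < 0)
            return -errno;

        return 0;
    }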
|
|
|
|
Previously, `SO_REUSEADDR` was set before `bind`-ing the socket. Thus,
even if another mDNS stack (e.g. avahi) was running, `bind` always
succeeded and we could not detect the other stack.
With this commit, we first try to `bind` without `SO_REUSEADDR`,
and if that fails, show a warning and retry with `SO_REUSEADDR`.
|
|
When no network enables LLMNR or mDNS, it is not necessary to create
LLMNR- or mDNS-related sockets. So let's create them only when an
LLMNR- or mDNS-enabled network becomes active, or when at least one network
enables the `LLMNR=` or `MulticastDNS=` options.
|
|
|
|
Fixes: #5482
|
|
|
|
Found with:
git grep '"[^"]*[a-z0-9]([0-9]\+p\?)' src/ | grep -vF man:
|
|
more resolved fixes
|
|
For caching negative replies we need the SOA TTL information. Hence,
let's authenticate all auxiliary SOA RRs through DS requests on all
negative requests.
|
|
Let's increase a number of timeouts as they apparently are too short for
some real-world lookups.
See:
https://github.com/systemd/systemd/issues/4003#issuecomment-279842616
In particular we change the following timeouts:
1) The first UDP retry timeout is increased 500ms → 750ms. This is a good idea,
since some servers need relatively long response times for trivial lookups,
and giving up on our first attempt also has the effect of trying a
different server for the next attempt, which has the side effect that
we'll run two downgrade iterations in parallel, on both servers.
Hence, let's give servers a bit more time in the first iteration.
2) Permit 24 retries instead of just 16 per transaction. If we end up
downgrading all the way down to UDP for a lookup, we already need 5
iterations for that. If we want to permit a couple of lost packets for
each (let's say 4), then we already need 20 iterations.
3) Increase the overall query timeout on the service side to 60s (from
45s), simply because very long and slow DNSSEC + CNAME chains (such as
us.ynuf.alipay.com) hit this boundary too easily. The client-side
timeout for the bus method call is increased to 90s, in order to have
room for the D-Bus reply to go through.
|
|
Following our coding style, on success we should initialize all return
parameters of a function. We missed two cases for dns_cache_lookup() (but
covered all others); fix them too.
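A tiny sketch of the coding-style rule in question, with a hypothetical helper
standing in for dns_cache_lookup():

    #include <errno.h>
    #include <stddef.h>

    /* On success (return >= 0) every return parameter is initialized, even
     * when nothing was found, so callers never read uninitialized values. */
    static int cache_lookup_example(const char *name, char **ret_answer, int *ret_authenticated) {
        if (!name)
            return -EINVAL;

        /* Cache miss: still initialize all return parameters. */
        *ret_answer = NULL;
        *ret_authenticated = 0;
        return 0;
    }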
|
|
This is the most important piece of information in replies, hence show
this in the first log message about it.
(Wireshark shows it too in the short summary, hence this definitely
makes sense...)
|
|
Retrying a transaction via TCP is a good approach for mitigating
packet loss. However, it's not a good way to fix a bad RCODE if we
already downgraded to UDP level for it. Hence, don't do this.
This is a small tweak only, but shortens the time we spend on
downgrading when a specific domain continuously returns a bad rcode.
|
|
Some domains (such as us.ynuf.alipay.com) almost appear as if they actively
want to sabotage our DNSSEC work. Specifically, they unconditionally
return SERVFAIL on SOA lookups and always only after a 1s delay (at
least). This is pretty bad for our validation logic, as we use SOA
lookups to distinguish zones from non-terminal names. Moreover, SERVFAIL
is an error that is typically returned if we send requests a server
doesn't grok, and thus is reason for us to downgrade our protocol and
try again. In case of these zones this means we'll accept the SERVFAIL
response only after a full iterative downgrade to our lowest feature
level: TCP. In combination with the 1s delays this has the effect of
making us hit our transaction timeout way too easily.
As a first attempt to improve the situation: let's start caching SERVFAIL
responses in our cache, after the full downgrade for a short period of
time.
Conceptually this is exposed as "weird rcode" caching, but for now we
only consider SERVFAIL a "weird rcode" worthy of caching. Later on we
might want to add more.
|
|
When we are doing a TCP transaction the kernel will automatically resend
all packets for us; there's no need to do that ourselves. Hence:
increase the timeout for TCP transactions substantially, to give the
kernel enough time to connect to the peer, without interrupting it when
we become impatient.
|
|
There's no point in talking to a server in DNSSEC mode when we don't
actually want to verify anything.
See: #5352
|
|
Propagate the authentication bool even on failure.
Let's make sure that if we accept a query candidate, we also
propagate the authenticated flag for it, so that we can properly report
back to the clients whether lookups failed due to non-existence that can
be proven.
|
|
When we managed to prove non-existence of a name, then we should
properly propagate this to clients by setting the AD bit on NXDOMAIN.
See: #4621
|
|
Doesn't really change anything, but makes things a bit simpler to read.
|
|
The cache might contain all kinds of unauthenticated data that we really
shouldn't be using if we upgrade our feature level and suddenly are able
to get authenticated data again.
Might fix: #4866
|
|
For the wildcard NSEC check we need to generate an "asterisk" domain, by
prepending the common ancestor with "*.". So far we did that with a simple
strappenda() which is fine for most domains, but doesn't work if the
common ancestor is the root domain as we usually write that as "." in
normalized form, and "*." joined with "." is "*.." and not "*." as it
should be.
Hence, take the clean way out: let's just use dns_name_concat(), which
only exists precisely for this reason, to properly concatenate labels.
There's a good chance this actually fixes #5029, as this NSEC proof is
triggered by lookups in the TLD "example", which doesn't exist in the
Internet.
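A small illustration of the corner case; plain snprintf() stands in for the naive
concatenation here, since strappenda() and dns_name_concat() are internal helpers:

    #include <stdio.h>

    int main(void) {
        const char *ancestor = ".";   /* root domain in normalized form */
        char wildcard[64];

        /* Naive prefixing: fine for "example.com" -> "*.example.com", but
         * for the root domain it yields "*.." instead of "*.". */
        snprintf(wildcard, sizeof(wildcard), "*.%s", ancestor);
        printf("%s\n", wildcard);     /* prints "*.." -- proper label
                                       * concatenation must yield "*." here */
        return 0;
    }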
|
|
This ensures that configured NTAs exclude not only the listed domain but
also all domains below it from DNSSEC validation -- except if a positive
trust anchor is defined below (as suggested by RFC 7646, section 1.1).
Fixes: #5048
|
|
This changes resolved to use the compile-time fallback hostname if the
configured one is not set. Note that if the local hostname is set to
"localhost" then we'll instead default to "linux" here, as for
mDNS/LLMNR exposing "localhost" is actively dangerous.
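A condensed sketch of the selection logic as described (function and macro names
are illustrative, and the compile-time fallback value is just a placeholder):

    #include <string.h>

    #define FALLBACK_HOSTNAME "fedora"   /* stand-in for the compile-time default */

    /* Unset hostname -> compile-time fallback; "localhost" -> "linux",
     * since exposing "localhost" over mDNS/LLMNR is actively dangerous. */
    static const char* hostname_for_mdns_llmnr(const char *configured) {
        if (!configured || !*configured)
            configured = FALLBACK_HOSTNAME;

        if (strcmp(configured, "localhost") == 0)
            return "linux";

        return configured;
    }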
|
|
Drop the TEST_DATA_DIR macro as this was using alloca() within a
function call which is allegedly unsafe. So add a "suffix" argument to
get_testdata_dir() instead and call that directly.
|
|
Embedding sd_id128_t's in constant strings was rather cumbersome. We had
SD_ID128_CONST_STR which returned a const char[], but it had two problems:
- it wasn't possible to statically concatenate this array with a normal string
- gcc wasn't really able to optimize this, and generated code to perform the
"conversion" at runtime.
Because of this, even our own code in coredumpctl wasn't using
SD_ID128_CONST_STR.
Add a new macro to generate a constant string: SD_ID128_MAKE_STR.
It is not as elegant as SD_ID128_CONST_STR, because it requires a repetition
of the numbers, but in practice it is more convenient to use, and allows gcc
to generate smarter code:
$ size .libs/systemd{,-logind,-journald}{.old,}
text data bss dec hex filename
1265204 149564 4808 1419576 15a938 .libs/systemd.old
1260268 149564 4808 1414640 1595f0 .libs/systemd
246805 13852 209 260866 3fb02 .libs/systemd-logind.old
240973 13852 209 255034 3e43a .libs/systemd-logind
146839 4984 34 151857 25131 .libs/systemd-journald.old
146391 4984 34 151409 24f71 .libs/systemd-journald
It is also much easier to check if a certain binary uses a certain MESSAGE_ID:
$ strings .libs/systemd.old|grep MESSAGE_ID
MESSAGE_ID=%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x
MESSAGE_ID=%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x
MESSAGE_ID=%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x
MESSAGE_ID=%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x
$ strings .libs/systemd|grep MESSAGE_ID
MESSAGE_ID=c7a787079b354eaaa9e77b371893cd27
MESSAGE_ID=b07a249cd024414a82dd00cd181378ff
MESSAGE_ID=641257651c1b4ec9a8624d7a40a9e1e7
MESSAGE_ID=de5b426a63be47a7b6ac3eaac82e2f6f
MESSAGE_ID=d34d037fff1847e6ae669a370e694725
MESSAGE_ID=7d4958e842da4a758f6c1cdc7b36dcc5
MESSAGE_ID=1dee0369c7fc4736b7099b38ecb46ee7
MESSAGE_ID=39f53479d3a045ac8e11786248231fbf
MESSAGE_ID=be02cf6855d2428ba40df7e9d022f03d
MESSAGE_ID=7b05ebc668384222baa8881179cfda54
MESSAGE_ID=9d1aaa27d60140bd96365438aad20286
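A small usage sketch; the ID value is simply the first MESSAGE_ID from the strings
output above, used here only as an example:

    #include <stdio.h>
    #include <systemd/sd-id128.h>

    /* SD_ID128_MAKE_STR() expands to an ordinary string literal, so it can be
     * concatenated with other literals at compile time, unlike the old
     * SD_ID128_CONST_STR() which produced a char array at runtime. */
    #define EXAMPLE_MESSAGE_ID_STR \
        SD_ID128_MAKE_STR(c7,a7,87,07,9b,35,4e,aa,a9,e7,7b,37,18,93,cd,27)

    int main(void) {
        puts("MESSAGE_ID=" EXAMPLE_MESSAGE_ID_STR);
        return 0;
    }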
|
|
some post-mdns fixes for resolved
|
|
This restores behaviour of 53fda2bb933694c9bdb1bbf1f5583e39673b74b2: for
mDNS (and mDNS only) we'll match replies to transactions honouring ANY
matches.
|
|
The array doesn't grow dynamically, hence pick the right size at the
moment of allocation. Let's simply multiply the number of addresses of
this link by 2, as that's how many RRs we maintain for it.
|
|
It is useful to package test-* binaries and run them as root under
autopkgtest or manually on particular machines. They currently have a
built-in hardcoded absolute path to their test data, which does not work
when running the test programs from any other path than the original
build directory.
By default, make the tests look for their data in
<test_exe_directory>/testdata/ so that they can be called from any
directory (provided that the corresponding test data is installed
correctly). As we don't have a fixed static path in the build tree (as
build and source tree are independent), set $TEST_DIR with "make check"
to point to <srcdir>/test/, as we previously did with an automake
variable.
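A rough sketch of the lookup order described above (simplified; the real helper in
systemd's test code differs in detail):

    #include <libgen.h>
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* $TEST_DIR wins (set by "make check" to <srcdir>/test/); otherwise use
     * the directory of the test binary itself plus "/testdata". */
    static const char* get_testdata_dir(void) {
        static char dir[PATH_MAX];
        char exe[PATH_MAX];
        const char *e;
        ssize_t n;

        e = getenv("TEST_DIR");
        if (e)
            return e;

        n = readlink("/proc/self/exe", exe, sizeof(exe) - 1);
        if (n < 0)
            return "testdata";       /* last-resort relative fallback */
        exe[n] = '\0';

        snprintf(dir, sizeof(dir), "%s/testdata", dirname(exe));
        return dir;
    }

    int main(void) {
        printf("test data: %s\n", get_testdata_dir());
        return 0;
    }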
|
|
Move test-resolve's test data from src/resolve/test-data to
test/test-resolve/ to be consistent with test/test-{execute,path}/. This
will make it easier to make the tests relocatable.
|
|
|
|
We don't actually make use of the return value for now, but it matches
our coding style elsewhere, and it actually shortens our code quite a
bit.
Also, add a missing OOM check after dns_answer_new().
|
|
This becomes handy later on. Moreover, we keep track of similar counters
for other objects like this too, hence adding this here too is obvious.
|
|
This reverts a part of 53fda2bb933694c9bdb1bbf1f5583e39673b74b2:
On classic DNS and LLMNR ANY requests may be replied to with any kind of
RR, and the reply does not have to be comprehensive: these protocols
simply define that if there's an RRset that can answer the question,
then at least one should be sent as reply, but not necessarily all. This
means it's not safe to "merge" transactions for arbitrary RR types into
ANY requests, as the reply might not answer the specific question.
As the merging is primarily an optimization, let's undo this for now.
This logic may be readded later, in a way that only applies to mDNS.
Also, there's an OOM problem with this chunk: dns_resource_key_new()
might fail due to OOM and this is not handled. (This is easily removed
though, by using DNS_RESOURCE_KEY_CONST()).
|