|
gcc at some optimization levels thinks these variables are used without
initialization. It's wrong, but let's make the warning go away anyway.
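A minimal illustration of the pattern (hypothetical code, not the actual systemd variables): a redundant initializer is enough to make the spurious -Wmaybe-uninitialized warning disappear.
```
#include <stdbool.h>

/* Hypothetical example: at -O2/-O3 gcc may not see that every path that
 * reads "r" also assigns it, so the redundant "= 0" silences the warning. */
static int example(bool flag) {
        int r = 0;      /* redundant, only there to placate gcc */

        if (flag)
                r = 42;

        return flag ? r : -1;
}
```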
|
|
In the process execution code of PID 1, before
096424d1230e0a0339735c51b43949809e972430 the GID settings were changed before
invoking PAM, and the UID settings after. Since that change, both are changed
after the PAM session hooks are run. When invoking PAM we fork once, and
leave a stub process around which will invoke the PAM session end hooks when
the session goes away. This code previously dropped the remaining privileges
(which were precisely the UID). Fix this code to do this correctly again, by
really dropping all of them (i.e. the GID as well).
While we are at it, also fix the error logging in this code.
Fixes: #4238
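For context, a generic sketch of the ordering this restores (plain POSIX/glibc calls, not the actual PID 1 code): group privileges have to be dropped while we still have the privileges to do so, i.e. before the UID is changed.
```
#define _GNU_SOURCE
#include <errno.h>
#include <grp.h>
#include <unistd.h>

/* Illustrative only: drop supplementary groups and the GID first,
 * and only then the UID. */
static int drop_privileges(uid_t uid, gid_t gid) {
        if (setgroups(0, NULL) < 0)
                return -errno;
        if (setresgid(gid, gid, gid) < 0)
                return -errno;
        if (setresuid(uid, uid, uid) < 0)
                return -errno;
        return 0;
}
```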
|
|
We generate these, hence we should also add errno translations for them.
|
|
Add this new error code (documented in RFC7873) to our list of known errors.
|
|
These were forgotten, let's add some useful mappings for all errors we define.
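As an illustration of the mechanism (the error names and errno values below are made up), sd-bus lets such translations be registered through its public error-map API:
```
#include <errno.h>
#include <systemd/sd-bus.h>

/* Hypothetical map from D-Bus error names to errno values, so that
 * sd_bus_error_get_errno() returns something meaningful for them. */
static const sd_bus_error_map example_error_map[] = {
        SD_BUS_ERROR_MAP("org.example.Error.NoSuchRecord", ENOENT),
        SD_BUS_ERROR_MAP("org.example.Error.Refused",      ECONNREFUSED),
        SD_BUS_ERROR_MAP_END
};

static void register_example_errors(void) {
        (void) sd_bus_error_add_map(example_error_map);
}
```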
|
|
As suggested here:
https://github.com/systemd/systemd/pull/4296#issuecomment-251911349
Let's try an AF_INET socket first, but fall back to AF_NETLINK, so that we can
use a protocol-independent socket here if possible. This has the benefit that
our code will still work even if AF_INET/AF_INET6 is made unavailable (for
example via seccomp), at least on current kernels.
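A minimal sketch of that fallback (not the exact systemd helper):
```
#include <sys/socket.h>
#include <linux/netlink.h>

/* Sketch: open a socket usable for generic network ioctls. Prefer AF_INET,
 * but fall back to AF_NETLINK so this keeps working when AF_INET/AF_INET6
 * sockets are blocked, e.g. by a seccomp filter. */
static int open_ioctl_socket(void) {
        int fd;

        fd = socket(AF_INET, SOCK_DGRAM | SOCK_CLOEXEC, 0);
        if (fd >= 0)
                return fd;

        return socket(AF_NETLINK, SOCK_RAW | SOCK_CLOEXEC, NETLINK_GENERIC);
}
```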
|
|
[RFC] run systemd in an unprivileged container
|
|
hwdb: return conflicts in a well-defined order
|
|
It might be blocked through /proc/PID/setgroups
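The subject line is truncated here; assuming it refers to the user-namespace restriction on setgroups(2), a guard along these lines is meant (a sketch, not the actual code):
```
#include <errno.h>
#include <grp.h>
#include <stdio.h>
#include <string.h>

/* Sketch: inside a user namespace /proc/self/setgroups may contain "deny",
 * in which case setgroups() would fail with EPERM, so skip the call. */
static int maybe_drop_supplementary_groups(void) {
        char buf[16] = "";
        FILE *f;

        f = fopen("/proc/self/setgroups", "re");
        if (f) {
                if (fgets(buf, sizeof(buf), f) && strncmp(buf, "deny", 4) == 0) {
                        fclose(f);
                        return 0;       /* blocked, nothing to do */
                }
                fclose(f);
        }

        if (setgroups(0, NULL) < 0)
                return -errno;
        return 0;
}
```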
|
|
journal_rate_limit_test() (#4291)
Currently, the rate limit does not handle the number of messages accurately:
even though the number of messages reaches the limit, it still allows one extra
message to be added to the journal. This patch fixes the problem.
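Schematically (names made up, not the actual journald code), the off-by-one looks like this:
```
/* Sketch of the burst check: with "<=" the test accepts burst + 1 messages
 * before suppression starts; "<" accepts exactly "burst" of them. */
static int ratelimit_test(unsigned *num, unsigned burst) {
        if (*num < burst) {     /* previously effectively "<=" */
                (*num)++;
                return 1;       /* accept the message */
        }
        return 0;               /* suppress it */
}
```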
|
|
[BridgeFDB] did not apply to bridge ports so far. This patch adds the proper
handling. In the case of a bridge interface the correct flag, NTF_MASTER, is
now set in the netlink call. FDB MAC addresses are now applied in
link_enter_set_addresses to make sure the link is set up.
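For illustration, the relevant part of the netlink request (a sketch using the kernel UAPI types, not the sd-netlink calls networkd actually uses):
```
#include <sys/socket.h>         /* PF_BRIDGE */
#include <linux/neighbour.h>    /* struct ndmsg, NTF_MASTER, NUD_* */

/* Sketch: header of an RTM_NEWNEIGH request for an FDB entry. NTF_MASTER
 * tells the kernel to install the entry in the bridge master's FDB rather
 * than on the port device itself. */
static struct ndmsg fdb_request_header(int port_ifindex) {
        return (struct ndmsg) {
                .ndm_family  = PF_BRIDGE,
                .ndm_ifindex = port_ifindex,
                .ndm_flags   = NTF_MASTER,
                .ndm_state   = NUD_NOARP | NUD_PERMANENT,
        };
}
```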
|
|
Add seccomp support for the s390 architecture (31-bit and 64-bit)
to systemd.
This requires libseccomp >= 2.3.1.
|
|
directory (#4226)
Fixes https://github.com/systemd/systemd/issues/3695
At the same time it adds a protection against userns chown of inodes of
a shared mount point.
|
|
If the new item is inserted before the first item in the list, then the
head must be updated as well.
Add a test to the list unit test to check for this.
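A minimal sketch of the corrected insert-before operation on a generic doubly-linked list (not the actual LIST_* macros):
```
/* Sketch: insert "n" before "pos"; if "pos" was the head, the head pointer
 * must be updated to point at "n" as well. */
struct item {
        struct item *prev, *next;
};

static void insert_before(struct item **head, struct item *pos, struct item *n) {
        n->next = pos;
        n->prev = pos->prev;
        if (pos->prev)
                pos->prev->next = n;
        else
                *head = n;      /* the part that was previously missing */
        pos->prev = n;
}
```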
|
|
If the corresponding mount unit is deserialized after the automount unit
then the expire event is set up in automount_trigger_notify(). However, if
the mount unit is deserialized first then the automount unit is still in
state AUTOMOUNT_DEAD and automount_trigger_notify() aborts without setting
up the expire event.
Explicitly call automount_start_expire() during coldplug to make sure that
the expire event is set up as necessary.
Fixes #4249.
|
|
This prevented systemd-analyze from unprivileged operation on older systemd
installations, which should be possible.
Also, we shouldn't touch the file system in test mode even if we can.
|
|
SYSTEMD_UNIT_PATH=foobar: systemd-analyze verify barbar/unit.service
will load units from barbar/, foobar/, /etc/systemd/system/, etc.
SYSTEMD_UNIT_PATH= systemd-analyze verify barbar/unit.service
will load units only from barbar/, which is useful e.g. when testing
systemd's own units on a system with an older version of systemd installed.
|
|
[Unit]
Before=foobar.device
[Service]
ExecStart=/bin/true
Type=oneshot
$ systemd-analyze verify before-device.service
before-device.service: Dependency Before=foobar.device ignored (.device units cannot be delayed)
|
|
Fixes #3830
|
|
It needs to be possible to tell apart "the nss-resolve module does not exist"
(which can happen when running foreign-architecture programs) from "the queried
DNS name failed DNSSEC validation" or other errors. So return NOTFOUND for these
cases too, and only keep UNAVAIL for the cases where we cannot handle the given
address family.
This makes it possible to configure a fallback to "dns" without breaking
DNSSEC, with "resolve [!UNAVAIL=return] dns". Add this to the manpage.
This does not change behaviour if resolved is not running, as that already
falls back to the "dns" glibc module.
Fixes #4157
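The suggested fallback configuration then looks like this (an illustrative /etc/nsswitch.conf line):
```
hosts: files resolve [!UNAVAIL=return] dns
```
With the [!UNAVAIL=return] action, "dns" is only consulted when nss-resolve itself is unavailable, not when it returned NOTFOUND for a name.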
|
|
Handle general errors from the resolved call in _nss_resolve_gethostbyaddr2_r()
the same way as in the other variants: just "goto fail", as that does exactly
the same.
|
|
"closing all" might suggest that _all_ fds received with the notification message
will be closed. Reword the message to clarify that only the "unused" ones will be
closed.
|
|
No functional change.
|
|
|
coredump: remove Storage=both support, various fixes for sd-coredump and coredumpctl
|
|
DNS servers which have route-only domains should only be used for
the specified domains. Routing queries about other domains there is a privacy
violation, prone to fail (as that DNS server was not meant to be used for other
domains), and puts unnecessary load onto that server.
Introduce a new helper function dns_server_limited_domains() that checks if the
DNS server should only be used for some selected domains, i.e. has some
route-only domains without "~.". Use that when determining whether to query it
in the scope, and when writing resolv.conf.
Extend the test_route_only_dns() case to ensure that the DNS server limited to
~company does not appear in resolv.conf. Add test_route_only_dns_all_domains()
to ensure that a server that also has ~. does appear in resolv.conf as a global
name server. These reproduce #3420.
Add a new test_resolved_domain_restricted_dns() test case that verifies that
domain-limited DNS servers are only being used for those domains. This
reproduces #3421.
Clarify what a "routing domain" is in the manpage.
Fixes #3420
Fixes #3421
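For reference, routing domains are the ones prefixed with "~"; a hypothetical systemd-networkd .network fragment:
```
[Network]
DNS=192.0.2.1
# Routing domain: use this link's DNS server only for names under example.com.
Domains=~example.com
# A server that additionally lists "~." is usable for all domains and may
# therefore still appear in resolv.conf as a global name server.
```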
|
|
It's probably easier to diagnose a bad notification message if the
contents are printed. But still, do anything only if debugging is on.
|
|
This undoes 531ac2b234. I acked that patch without looking at the code
carefully enough. There are two problems:
- we want to process the fds anyway
- in principle empty notification messages are valid, and we should
process them as usual, including logging using log_unit_debug().
|
|
If manager_dispatch_notify_fd() fails and returns an error, then the handling of
service notifications will be disabled entirely, leading to a compromised system.
For example pid1 won't be able to receive WATCHDOG messages anymore and
will kill all services that are supposed to send such messages.
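The general rule for such sd-event callbacks, as a simplified sketch (not the actual manager_dispatch_notify_fd()): transient errors must be swallowed, because a negative return value disables the event source for good.
```
#include <errno.h>
#include <stdint.h>
#include <sys/socket.h>
#include <systemd/sd-event.h>

/* Sketch: returning a negative value from an sd-event I/O callback turns the
 * event source off, so recoverable errors are absorbed and 0 is returned. */
static int on_notify(sd_event_source *s, int fd, uint32_t revents, void *userdata) {
        char buf[4096];
        ssize_t n;

        n = recv(fd, buf, sizeof(buf), MSG_DONTWAIT);
        if (n < 0)
                return 0;       /* EAGAIN, EINTR, ...: try again next wakeup */

        /* ... parse the notification message in buf ... */
        return 0;
}
```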
|
|
Fixes #4234.
Signed-off-by: Jorge Niedbalski <jnr@metaklass.org>
|
|
coredump had code to check if copy_bytes() hit the max_bytes limit,
and refuse further processing in that case.
But in 84ee0960443, the return convention for copy_bytes() was changed
from -EFBIG to 1 for the case when the limit is hit, so the condition
check in coredump couldn't ever trigger.
But it seems that we *do* want to process such truncated cores [1].
So change the code to detect truncation properly, but instead of
returning an error, give a nice log entry.
[1] https://github.com/systemd/systemd/issues/3883#issuecomment-239106337
Should fix (or at least alleviate) #3883.
|
|
Another fix for #4161.
|
|
For the user, the fact that the core file is missing or inaccessible is
more interesting than the fact that they forgot to pipe to a file.
So delay the failure from the check until after we have verified
that the file or the COREDUMP field are present.
Partially fixes #4161.
Also, error reporting on failure was duplicated. save_core() now
always prints an error message (because it knows the paths involved,
so it can print the most useful message), and the callers don't have to.
|
|
Propagate errors properly, so that if we hit oom or an error in the
journal, the whole command will fail. This is important when using
the output in scripts.
Support the output of multiple values for the same field with -F.
The journal supports that, and our official commands should too, as
far as it makes sense. -F can be used to print user-defined fields
(e.g. somebody could use a TAG field with multiple occurrences), so
we should support that too. That seems better than silently printing
the last value found as was done before.
We would iterate trying to match the same field with all possible
field names. Once we find something, cut the loop short, since we
know that nothing else can match.
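For illustration, enumerating every value of a field in the current journal entry is possible with the public sd-journal API, roughly like this (the field name is just an example):
```
#include <stdio.h>
#include <string.h>
#include <systemd/sd-journal.h>

/* Sketch: print all values of the "TAG" field of the current entry,
 * not just the first one found. */
static void print_all_tags(sd_journal *j) {
        const void *data;
        size_t length;

        SD_JOURNAL_FOREACH_DATA(j, data, length)
                if (length > 4 && memcmp(data, "TAG=", 4) == 0)
                        printf("%.*s\n", (int) (length - 4), (const char*) data + 4);
}
```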
|
|
The column for "present" was easy to miss, especially if somebody had no
coredumps present at all, in which case the column of spaces of width one
wasn't visually distinguished from the neighbouring columns. Replace this
with an explicit text, one of: "missing", "journal", "present", "error".
$ coredumpctl
TIME PID UID GID SIG COREFILE EXE
Mon 2016-09-26 22:46:31 CEST 8623 0 0 11 missing /usr/bin/bash
Mon 2016-09-26 22:46:35 CEST 8639 1001 1001 11 missing /usr/bin/bash
Tue 2016-09-27 01:10:46 CEST 16110 1001 1001 11 journal /usr/bin/bash
Tue 2016-09-27 01:13:20 CEST 16290 1001 1001 11 journal /usr/bin/bash
Tue 2016-09-27 01:33:48 CEST 17867 1001 1001 11 present /usr/bin/bash
Tue 2016-09-27 01:37:55 CEST 18549 0 0 11 error /usr/bin/bash
Also, use access(…, R_OK), so that we can report a present but inaccessible
file differently from a missing one.
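Schematically, the classification for an on-disk core file is made along these lines (a sketch, not the exact coredumpctl code):
```
#include <errno.h>
#include <unistd.h>

/* Sketch: classify the COREFILE column for a core file stored on disk. */
static const char* corefile_state(const char *path) {
        if (access(path, R_OK) >= 0)
                return "present";
        if (errno == ENOENT)
                return "missing";
        return "error";         /* present but inaccessible, or other failure */
}
```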
|
|
In 'list', show present also for coredumps stored in the journal.
In 'status', replace "File" with "Storage" line that is always present.
Possible values:
Storage: none
Storage: journal
Storage: /path/to/file (inaccessible)
Storage: /path/to/file
Previously the File field would only be present if the file was accessible, so users
had to manually extract the file name precisely in the cases where it was
needed, i.e. when coredumpctl couldn't access the file. It's much more friendly
to always show something. This output is designed for human consumption, so
it's better to be a bit verbose.
The call to sd_j_set_data_threshold is moved, so that status is always printed
with the default of 64k, list uses 4k, and coredump retrieval is done with the
limit unset. This should make checking for the presence of the COREDUMP field
not too costly.
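The threshold itself is set through the public API, roughly like this (values taken from the description above):
```
#include <systemd/sd-journal.h>

/* Sketch: cap how much of each field sd-journal returns.
 *   status: keep the default of 64K
 *   list:   4K is plenty for the metadata fields shown
 *   core retrieval: pass 0 to disable the limit entirely */
static int prepare_for_listing(sd_journal *j) {
        return sd_journal_set_data_threshold(j, 4096);
}
```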
|
|
sd_journal_previous() returns 0 if it didn't do any move, so the
warning was stupidly always printed.
|
|
Added in 9fe13294a9 (by me :[), and later obfuscated in d0c8806d4ab, if an
uncompressed external file or an internally stored coredump was supposed to be
written to a file descriptor, nothing would be written.
|
|
Back when external storage was initially added in 34c10968cb, this mode of
storage was added. This could have made some sense back when XZ compression was
used, and an uncompressed core on disk could be used as a short-lived cache file
which does not require costly decompression. But now fast LZ4 compression is used
(by default) both internally and externally, so we have duplicated storage,
using the same compression and same default maximum core size in both cases,
but with different expiration lifetimes. Even the uncompressed-external,
compressed-internal mode is not very useful: for small files, decompression
with LZ4 is fast enough not to matter, and for large files, decompression is
still relatively fast, but the disk-usage penalty is very big.
An additional problem with the two modes of storage is that it complicates
the code and makes it much harder to return a useful error message to the user
if we cannot find the core file, since if we cannot find the file we have to
check the internal storage first.
This patch drops "both" storage mode. Effectively this means that if somebody
configured coredump this way, they will get a warning about an unsupported
value for Storage, and the default of "external" will be used.
I'm pretty sure that this mode is very rarely used anyway.
|
|
When s->length is zero this function doesn't do anything, note that in a
comment.
|
|
core:sandbox: Add new ProtectKernelTunables=, ProtectControlGroups=, ProtectSystem=strict and fixes
|
|
Show and formatting fixes
|
|
Even if IPv6 is disabled,
```
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
```
/proc/net/sockstat6 still shows IPv6 sockets in use:
```
cat /proc/net/sockstat6
TCP6: inuse 2
UDP6: inuse 1
UDPLITE6: inuse 0
RAW6: inuse 0
FRAG6: inuse 0 memory 0
```
Looking for /proc/net/if_inet6 is the right choice.
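A sketch of that check (not necessarily the exact helper used):
```
#include <stdbool.h>
#include <unistd.h>

/* Sketch: the kernel only provides /proc/net/if_inet6 when the IPv6 stack
 * is actually available, so its presence is a reliable indicator. */
static bool ipv6_is_supported(void) {
        return access("/proc/net/if_inet6", F_OK) >= 0;
}
```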
|
|
propagation
Better safe.
|