Age | Commit message | Author
|
error like EOF
When we linearly iterate through a corrupted journal file, and we encounter a
read error, don't consider this fatal, but merely as an EOF condition (and log
about it).
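A rough sketch of the intended behavior (the helper and type names here are hypothetical, not the actual journal code):

    /* Hypothetical sketch: one linear iteration step that treats a read error
     * on a corrupted journal file as EOF instead of a fatal error. */
    static int next_entry_or_eof(JournalFile *f, uint64_t *p, Object **ret) {
            int r;

            r = journal_file_read_next(f, p, ret);   /* hypothetical reader */
            if (r == -EBADMSG) {
                    log_debug_errno(r, "Journal file is corrupted, treating as EOF.");
                    return 0;                         /* 0 == no more entries */
            }
            if (r < 0)
                    return r;                         /* other errors stay fatal */

            return 1;                                 /* 1 == got an entry */
    }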
|
|
Let's make sure EBADMSG is the one error we throw when we encounter corrupted
data, so that we can neatly test for it.
|
|
If journal files are not cleanly closed, it might happen that intermediary
journal entries cannot be read. Handle this nicely, skip over the unreadable
entries, and log a debug message about it; after all we generally follow the
logic that we try to make the best of corrupted files.
|
|
Specifically, detect early if we try to read from offset 0, i.e. are using
uninitialized offset data.
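For instance, a guard along these lines (purely illustrative) rejects an uninitialized offset before it is dereferenced:

    /* Illustrative only: validate an object offset before dereferencing it. */
    static int check_object_offset(uint64_t offset) {
            if (offset == 0)
                    return -EBADMSG;  /* uninitialized offset data, treat as corruption */

            return 0;
    }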
|
|
This way the user service will have a loginuid, and it will be inherited by
child services. This shouldn't change anything as far as systemd itself is
concerned, but is nice for various services spawned by systemd --user
that expect a loginuid.
pam_loginuid(8) says that it should be enabled for "..., crond and atd".
user@.service should behave similarly to those two as far as audit is
concerned.
https://bugzilla.redhat.com/show_bug.cgi?id=1328947#c28
|
|
rework "journalctl -M"
|
|
Fix endless loops in journalctl --list-boots (closes #617).
|
|
non-btrfs file systems (#3117)
Fixes: #2060
(Of course, in the long run, we should probably add a copy-based fall-back. But
given how slow that is, this probably requires some asynchronous forking logic
like the CopyFrom() and CopyTo() method calls already implement.)
|
|
The "resources" error is really just the generic error we return when
we hit some kind of error and we have no more appropriate error for the case to
return, for example because of some OS error.
Hence, reword the explanation and don't claim any relation to resource limits.
Admittedly, the "resources" service error is a bit of a misnomer, but I figure
it's kind of API now.
Fixes: #2716
|
|
networkd: fix address and route configuration
|
|
Early in journal_file_set_offline() f->header->state is tested to see if
it's != STATE_ONLINE, and since there's no need to do anything if the
journal isn't online, the function simply returned here.
Since moving part of the offlining process to a separate thread, there
are two problems here:
1. We can't simply check f->header->state, because if there is an
offline thread active it may modify f->header->state.
2. Even if the journal is deemed offline, the thread responsible may
still need joining, so a bare return may leak the thread's resources
like its stack.
To address #1, the helper journal_file_is_offlining() is called prior to
accessing f->header->state.
If journal_file_is_offlining() returns true, f->header->state isn't even
checked, because an offlining journal is obviously online, and we'll
just continue with the normal set offline code path.
If journal_file_is_offlining() returns false, then it's safe to check
f->header->state, because the offline_state is beyond the point of
modifying f->header->state, and there's a memory barrier in the helper.
If we find f->header->state is != STATE_ONLINE, then we call the
idempotent journal_file_set_offline_thread_join() on the way out of the
function, to join a potential lingering offline thread.
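Condensed into a sketch, the entry logic described above looks roughly like this (simplified; the real function takes more parameters and does more work):

    /* Simplified sketch of the early-return path described above. */
    int journal_file_set_offline(JournalFile *f) {
            if (!journal_file_is_offlining(f) &&
                f->header->state != STATE_ONLINE)
                    /* Nothing to offline, but a finished offline thread may
                     * still need to be joined to avoid leaking its resources. */
                    return journal_file_set_offline_thread_join(f);

            /* ... otherwise continue with the normal offlining code path ... */
            return 0;
    }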
|
|
Let's be nice to users, and let's turn the nonsensical "--unit=… --user" into
"--user-unit=…" which the user more likely meant.
Fixes #1621
|
|
Let's document the call as deprecated, since it doesn't cover containers with
directories that aren't properly visible to the host.
|
|
This way, the switch becomes compatible with nspawn containers using --image=,
and those which only store journal data in /run (i.e. have persistent logs
off).
Fixes: #49
|
|
When appending to a journal file, journald will:
a) first, append the actual entry to the end of the journal file
b) second, add an offset reference to it to the global entry array stored at
the beginning of the file
c) third, add offset references to it to the per-field entry array stored at
various places of the file
The global entry array, maintained by b) is used when iterating through the
journal without matches applied.
The per-field entry array maintained by c) is used when iterating through the
journal with a match for that specific field applied.
In the wild, there are journal files where a) and b) were completed, but c)
was not before the files were abandoned. This means that in some cases log
entries are at the end of these files that appear in the global entry array,
but not in the per-field entry array of the _BOOT_ID= field. Now, the
"journalctl --list-boots" command alternatingly uses the global entry array
and the per-field entry array of the _BOOT_ID= field. It seeks to the last
entry of a specific _BOOT_ID= field by having the right match installed, and
then jumps to the next following entry with no match installed anymore, under
the assumption this would bring it to the next boot ID. However, if the
per-field entry array wasn't fully written, it might actually turn out that the
global entry array knows one more entry with the same _BOOT_ID, thus
resulting in an indefinite loop around the same _BOOT_ID.
This patch fixes that, by updating the boot search logic to always continue
reading entries until the boot ID actually changed from the previous. Thus, the
per-field entry array is used as quick jump index (i.e. as an optimization),
but not trusted otherwise. Only the global entry array is trusted.
This replaces PR #1904, which is actually very similar to this one. However,
this one actually reads the boot ID directly from the entry header, and doesn't
try to read it at all until the read pointer is actually located on the
first item to read.
Fixes: #617
Replaces: #1904
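Expressed with the public API for clarity (an illustration of the logic, not the literal journalctl patch):

    #include <systemd/sd-id128.h>
    #include <systemd/sd-journal.h>

    /* Illustration: advance until the entry's boot ID differs from `previous`,
     * rather than trusting the per-field entry array of _BOOT_ID= to stop
     * exactly at the boot transition. */
    static int skip_to_next_boot(sd_journal *j, sd_id128_t previous) {
            for (;;) {
                    sd_id128_t current;
                    uint64_t t;
                    int r;

                    r = sd_journal_next(j);
                    if (r <= 0)
                            return r;             /* error, or end of journal */

                    r = sd_journal_get_monotonic_usec(j, &t, &current);
                    if (r < 0)
                            return r;

                    if (!sd_id128_equal(current, previous))
                            return 1;             /* positioned on the next boot */
            }
    }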
|
|
Show the various timestamps in hexadecimal too. This is useful for matching the
timestamps included in cursor strings (which are encoded in hex, too), with the
references in the journal header.
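For instance (illustrative only), printing a header timestamp in both forms is just a matter of formatting:

    #include <inttypes.h>
    #include <stdio.h>

    /* Illustrative: show a journal header timestamp in decimal and in hex, so
     * it can be matched against the hex-encoded fields of a cursor string. */
    static void print_usec(const char *name, uint64_t usec) {
            printf("%s: %" PRIu64 " (%" PRIx64 ")\n", name, usec, usec);
    }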
|
|
* sd-netlink: permit RTM_DELLINK messages with no ifindex
This is useful for removing network interfaces by name (see the sketch below).
* nspawn: explicitly remove veth links we created after use
Sometimes the kernel keeps veth links pinned after the namespace they have been
joined to died. Let's hence explicitly remove veth links after use.
Fixes: #2173
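Sketched against the internal sd-netlink API (function names as used inside the systemd tree, from memory; treat this as an assumption-laden illustration rather than the exact nspawn code):

    /* Illustration: delete a veth link by name, without resolving an ifindex
     * first; an ifindex of 0 is now accepted and the kernel resolves the link
     * via IFLA_IFNAME. Uses systemd's internal sd-netlink API. */
    static int remove_veth_link(sd_netlink *rtnl, const char *name) {
            sd_netlink_message *m = NULL;
            int r;

            r = sd_rtnl_message_new_link(rtnl, &m, RTM_DELLINK, 0);
            if (r < 0)
                    return r;

            r = sd_netlink_message_append_string(m, IFLA_IFNAME, name);
            if (r < 0)
                    goto finish;

            r = sd_netlink_call(rtnl, m, 0, NULL);

    finish:
            sd_netlink_message_unref(m);
            return r;
    }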
|
|
Drop the "read_realtime" parameter. Getting the realtime timestamp from an
entry is cheap, as it is a normal header field, hence let's just get this
unconditionally, and simplify our code a bit.
|
|
Let's store the reference as a simple sd_id128_t, since we don't actually need a
BootId for it.
|
|
|
|
With this change a new flag SD_JOURNAL_OS_ROOT is introduced. If specified
while opening the journal with the per-directory calls (specifically:
sd_journal_open_directory() and sd_journal_open_directory_fd()) the passed
directory is assumed to be the root directory of an OS tree, and the journal
files are searched for in /var/log/journal and /run/log/journal relative to it.
This is useful to allow usage of sd-journal on file descriptors returned by the
OpenRootDirectory() call of machined.
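Usage could look like this (sketch; "/var/lib/machines/foo" is just an example OS tree path):

    #include <stdio.h>
    #include <string.h>
    #include <systemd/sd-journal.h>

    int main(void) {
            sd_journal *j;
            int r;

            /* Treat the directory as an OS root: journal files are looked up in
             * /var/log/journal and /run/log/journal below it. */
            r = sd_journal_open_directory(&j, "/var/lib/machines/foo",
                                          SD_JOURNAL_OS_ROOT);
            if (r < 0) {
                    fprintf(stderr, "Failed to open journal: %s\n", strerror(-r));
                    return 1;
            }

            /* ... iterate with sd_journal_next(), sd_journal_get_data(), ... */

            sd_journal_close(j);
            return 0;
    }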
|
|
This new call returns a file descriptor for the root directory of a container.
This file descriptor may then be used to access the rest of the container's
file system, via openat() and similar calls. Since the file descriptor returned
is for the file system namespace inside of the container it may be used to
access all files of the container exactly the way the container itself would
see them. This is particularly useful for containers run directly from
loopback media, for example via systemd-nspawn's --image= switch. It also
provides access to directories such as /run of a container that are normally
not accessible to the outside of a container.
This replaces PR #2870.
Fixes: #2870
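From the client side, the call could be used roughly like this via sd-bus (method and interface names as I understand the machined D-Bus API; treat as a sketch):

    #include <errno.h>
    #include <fcntl.h>
    #include <systemd/sd-bus.h>

    /* Sketch: ask machined for the root directory fd of a container. The fd
     * read from the reply is owned by the message, so dup() it to keep it. */
    static int open_container_root(sd_bus *bus, const char *name, int *ret_fd) {
            sd_bus_error error = SD_BUS_ERROR_NULL;
            sd_bus_message *reply = NULL;
            int fd, r;

            r = sd_bus_call_method(bus,
                                   "org.freedesktop.machine1",
                                   "/org/freedesktop/machine1",
                                   "org.freedesktop.machine1.Manager",
                                   "OpenMachineRootDirectory",
                                   &error, &reply, "s", name);
            if (r >= 0)
                    r = sd_bus_message_read(reply, "h", &fd);
            if (r >= 0) {
                    *ret_fd = fcntl(fd, F_DUPFD_CLOEXEC, 3);
                    if (*ret_fd < 0)
                            r = -errno;
            }

            sd_bus_error_free(&error);
            sd_bus_message_unref(reply);
            return r;
    }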
|
|
Also, expose this via the "journalctl --file=-" syntax for STDIN. This feature
remains undocumented though, as it is probably not too useful in real life: it
still requires fds that support mmapping and seeking, i.e. does not work
for pipes, for which reading from STDIN is most commonly used.
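Programmatically, the equivalent is the fd-based open call (sketch; the fd must support mmap() and seeking, so a redirected regular file rather than a pipe):

    #include <systemd/sd-journal.h>
    #include <unistd.h>

    /* Sketch: open the journal file passed in on stdin, mirroring what
     * "journalctl --file=-" does. */
    static int open_stdin_journal(sd_journal **ret) {
            int fds[] = { STDIN_FILENO };

            return sd_journal_open_files_fd(ret, fds, 1, 0);
    }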
|
|
In case a file is on a networked filesystem, we may tag the fstab record with
the _netdev option; however, correct dependencies will be created for this
mount.
|
Dependencies were not created for _netdev mountpoints; the reasoning for this
is in commit fc676b00, i.e. to avoid adding dependencies for network
mountpoints where What= appears like a path. Thus, this proposes a semantically
more correct condition, so that dependencies are added for _actual_ bind mounts
irrespective of the network flag.
Consequently this allows adding the _netdev option to bind mounts, which
includes them in remote-fs.target and simplifies configuration.
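For example (hypothetical paths), an fstab entry like this would now be ordered against remote-fs.target:

    # bind mount whose source lives on a network filesystem; _netdev pulls it
    # into remote-fs.target
    /srv/nfs/export/data   /var/lib/data   none   bind,_netdev   0 0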
|
|
This should allow tools like rkt to pre-mount read-only subtrees in the OS
tree, without breaking the patching code.
Note that the code will still fail, if the top-level directory is already
read-only.
|
|
|
|
With this change -U will turn on user namespacing only if the kernel actually
supports it and otherwise gracefully degrade to non-userns mode.
|
|
In order to implement this we change the bool arg_userns into an enum
UserNamespaceMode, which can take one of NO, PICK or FIXED, and replace the
arg_uid_range_pick bool with it.
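The resulting type looks roughly like this (sketch; the exact enumerator names are an assumption):

    /* Sketch of the enum replacing the old booleans. */
    typedef enum UserNamespaceMode {
            USER_NAMESPACE_NO,      /* no user namespacing */
            USER_NAMESPACE_FIXED,   /* a fixed UID/GID range was configured */
            USER_NAMESPACE_PICK,    /* pick a free UID/GID range automatically */
            _USER_NAMESPACE_MODE_MAX,
            _USER_NAMESPACE_MODE_INVALID = -1,
    } UserNamespaceMode;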
|
|
Given that user namespacing is pretty useful now, let's add a shortcut command
line switch for the logic.
|
|
This adds the new value "pick" to --private-users=. When specified, a new
UID/GID range of 65536 users is automatically and randomly allocated from the
host range 0x00080000-0xDFFF0000 and used for the container. The setting
implies --private-users-chown, so that container directory is recursively
chown()ed to the newly allocated UID/GID range, if that's necessary. As an
optimization, before picking a randomized UID/GID, the UID of the container's
root directory is used as a starting point, and is kept if it is not already in
use.
To protect against using the same UID/GID range multiple times, a few mechanisms
are in place:
- The first and the last UID and GID of the range are checked with getpwuid()
and getgrgid(). If an entry already exists, a different range is picked. Note
that by "last" UID the user 65534 is used, as 65535 is the 16bit (uid_t) -1.
- A lock file for the range is taken in /run/systemd/nspawn-uid/. Since the
ranges are taken in a non-overlapping fashion, and always start on 64K
boundaries, this allows us to maintain a single lock file for each range that
can be randomly picked. This protects nspawn from picking the same range in
two parallel instances.
- If possible the /etc/passwd lock file is taken while a new range is selected
until the container is up. This means adduser/addgroup should safely avoid
the range as long as nss-mymachines is used, since the allocated range will
then show up in the user database.
The UID/GID range nspawn picks from is compiled in and not configurable at the
moment. That should probably stay that way, since we already provide ways for
users to pick their own ranges manually if they don't like the automatic
logic.
The new --private-users=pick logic makes user namespacing pretty useful now, as
it relieves the user from managing UID/GID ranges.
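A heavily simplified sketch of the picking step (random_uid_candidate() and lock_uid_range() are invented names; the real code also handles the /etc/passwd lock and more corner cases as described above):

    #include <errno.h>
    #include <grp.h>
    #include <pwd.h>
    #include <sys/types.h>

    /* Heavily simplified sketch: find a free, 64K-aligned UID/GID base,
     * starting from the owner of the container's root directory. */
    static int pick_uid_range(uid_t candidate, uid_t *ret_base) {
            for (unsigned i = 0; i < 100; i++) {
                    if (i > 0)
                            candidate = random_uid_candidate();          /* invented */

                    candidate &= ~(uid_t) 0xFFFF;                        /* 64K alignment */

                    if (getpwuid(candidate) || getgrgid(candidate))
                            continue;                                    /* first UID/GID taken */
                    if (getpwuid(candidate + 0xFFFE) || getgrgid(candidate + 0xFFFE))
                            continue;                                    /* last (65534) taken */
                    if (lock_uid_range(candidate) < 0)                   /* invented */
                            continue;                                    /* raced with another nspawn */

                    *ret_base = candidate;
                    return 0;
            }

            return -EBUSY;
    }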
|
|
This adds a new --private-users-chown switch that may be used in combination
with --private-users. If it is passed, a recursive chown() operation is run on
the OS tree, fixing all file owner UID/GIDs to the right ranges. This should
make user namespacing pretty workable, as the OS trees don't need to be
prepared manually anymore.
|
|
In nspawn we invoke copy_bytes() on a TTY fd. copy_file_range() returns EBADF
on a TTY and this error is considered fatal by copy_bytes() so far. Correct
that, so that nspawn's copy_bytes() operation works again.
This is a follow-up for a44202e98b638024c45e50ad404c7069c7835c04.
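The gist of the fix, as a standalone sketch (not the actual copy.c code; modern glibc exposes copy_file_range() directly):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <unistd.h>

    /* Sketch: copy up to n bytes, falling back to read()/write() when
     * copy_file_range() cannot handle the fd type (e.g. EBADF on a TTY). */
    static ssize_t copy_some(int fd_in, int fd_out, size_t n) {
            char buf[4096];
            ssize_t k, r;

            k = copy_file_range(fd_in, NULL, fd_out, NULL, n, 0);
            if (k >= 0)
                    return k;
            if (errno != EBADF && errno != EINVAL && errno != ENOSYS && errno != EXDEV)
                    return -errno;

            /* Fall back to a plain read()/write() pass. */
            r = read(fd_in, buf, n < sizeof(buf) ? n : sizeof(buf));
            if (r <= 0)
                    return r < 0 ? -errno : 0;

            for (ssize_t off = 0; off < r; ) {
                    ssize_t w = write(fd_out, buf + off, (size_t) (r - off));
                    if (w < 0)
                            return -errno;
                    off += w;
            }

            return r;
    }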
|
|
|
|
Let's output the actual error code encountered, and let's not claim this was
purely triggered by files, because it can also be triggered by directories.
|
|
Let's also collect errors returned by readdir() into our set of errors, like we
do for all other errors from journal files.
|
|
This is slightly nicer, since we actually watch the directories we opened and
enumerate. However, primarily this is preparation for adding support for
opening journal files by fd without specifying any path, to be added in a later
commit.
|
|
It makes more sense to initialize the node first and only then add it to the
list.
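i.e. something along these lines (a generic illustration, not the actual networkd code):

    #include <errno.h>
    #include <stdlib.h>

    struct node {
            int value;
            struct node *next;
    };

    /* Generic illustration: fully initialize the new node, and only then link
     * it into the list, so the list never holds a half-initialized element. */
    static int add_node(struct node **head, int value) {
            struct node *n = calloc(1, sizeof(*n));
            if (!n)
                    return -ENOMEM;

            n->value = value;     /* initialize first ... */

            n->next = *head;      /* ... then add to the list */
            *head = n;
            return 0;
    }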
|
|
We are not able to set multiple properties in a single [Route] section.
wlp3s0.network:
[Match]
Name=wlp3s0
[Route]
Gateway=10.68.5.26
Metric=10
sudo ./systemd-networkd
Failed to parse file '/usr/lib/systemd/network/wlp3s0.network': File exists
Could not load configuration files: File exists
This patch fixes it.
|
|
Fixes: #2420
|
|
|
|
systemd-run: fix --slice= in conjunction with --scope
|
|
source->size < source->filled (#3086)
While the function journal-remote-parse.c:get_line() enforces an assertion that
source->filled <= source->size, in journal-remote-parse.c:process_source() there
is a chance that source->size will be decreased to a value lower than
source->filled when source->buf is reallocated. Therefore a check is added that
ensures that source->buf is reallocated only when source->filled is smaller
than target / 2.
|
|
Fixes: #2991
|
|
Let's be more careful when setting up the Slice= property of transient units:
let's use manager_load_unit_prepare() instead of manager_load_unit(), so that
the load queue isn't dispatched right away, because our own transient unit is
in it, and we don't want to have it loaded until we have finished initializing
it.
|
|
|
|
This suppresses output of the hostname for messages from the local system.
Fixes: #2342
|
|
1970 UTC
aka "UNIX time".
Fixes: #2120
|
|
After all, the enum definition is in output-mode.h
|
|
This moves the O_TMPFILE handling from the coredumping code into common library
code, and generalizes it as open_tmpfile_linkable() + link_tmpfile(). The
existing open_tmpfile() function (which creates an unlinked temporary file that
cannot be linked into the fs) is renamed to open_tmpfile_unlinkable(), to make
the distinction clear. Thus, code may now choose between:
a) open_tmpfile_linkable() + link_tmpfile()
b) open_tmpfile_unlinkable()
Depending on whether they want a file that may be linked back into the fs later
on or not.
In a later commit we should probably convert fopen_temporary() to make use of
open_tmpfile_linkable().
Followup for: #3065
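The underlying pattern, as a simplified standalone sketch (this is the general O_TMPFILE/linkat() technique, not the exact implementation of the new helpers):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Sketch: create an anonymous file in the target directory, write it,
     * then give it its final name via linkat() on /proc/self/fd/. */
    static int write_and_link(const char *dir, const char *target) {
            char proc_path[64];
            int fd;

            fd = open(dir, O_TMPFILE | O_WRONLY | O_CLOEXEC, 0644);
            if (fd < 0)
                    return -errno;

            /* ... write the contents via fd ... */

            snprintf(proc_path, sizeof(proc_path), "/proc/self/fd/%d", fd);
            if (linkat(AT_FDCWD, proc_path, AT_FDCWD, target, AT_SYMLINK_FOLLOW) < 0) {
                    int r = -errno;
                    close(fd);
                    return r;
            }

            close(fd);
            return 0;
    }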
|