Age | Commit message | Author |
|
dirent_is_file_with_suffix
If the file name is supposed to end in a suffix, there's no need to check the
name against a list of "special" file names, which is slow. Instead, just check
that the name doesn't start with a period.
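A self-contained sketch of that check (the helper name comes from the commit
subject; the real systemd implementation may differ in details such as d_type
handling):

#include <dirent.h>
#include <stdbool.h>
#include <string.h>

/* Sketch: accept entries whose name carries the given suffix, and reject
 * hidden names (leading '.', which also covers "." and "..") instead of
 * comparing against a list of special file names. */
static bool dirent_is_file_with_suffix(const struct dirent *de, const char *suffix) {
        size_t nl, sl;

        if (de->d_name[0] == '.')
                return false;
        if (!suffix)
                return true;

        nl = strlen(de->d_name);
        sl = strlen(suffix);
        return nl >= sl && strcmp(de->d_name + nl - sl, suffix) == 0;
}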
|
|
|
|
It's better to avoid having the option string duplicated, lest we forget
to keep the copies in sync in the future.
|
|
The parse_pid() function doesn't succeed if we don't zero-terminate after the
last digit in the buffer.
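To illustrate the fix, a stand-alone stand-in for the call site (parse_pid()
itself is the systemd helper; strtoul() is used here instead, and the point is
the explicit NUL after the digits):

#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

/* Sketch: extract a PID from a buffer that is not NUL-terminated after the
 * last digit. Without the explicit '\0' the parser would read past the
 * digits and fail or return garbage. */
static int parse_pid_from_buffer(const char *buf, size_t n_digits, pid_t *ret) {
        char tmp[32];
        char *end;
        unsigned long v;

        if (n_digits == 0 || n_digits >= sizeof(tmp))
                return -1;

        memcpy(tmp, buf, n_digits);
        tmp[n_digits] = '\0';   /* the crucial zero-termination */

        v = strtoul(tmp, &end, 10);
        if (*end != '\0' || v == 0)
                return -1;

        *ret = (pid_t) v;
        return 0;
}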
|
|
ucf is a standard Debian helper for managing configuration file upgrades which
need more interaction or elaborate merging than conffiles managed by dpkg.
Ignore its temporary and backup files similarly to the *.dpkg-* ones to avoid
creating units for them in generators.
https://bugs.debian.org/775903
|
|
The only code path which makes a journal durable is via
journal_file_set_offline().
When we perform a rotate the journal's header->state is being set to
STATE_ARCHIVED prior to journal_file_set_offline() being called.
In journal_file_set_offline(), we short-circuit the entire offline when
f->header->state != STATE_ONLINE.
This all results in none of the journal_file_set_offline() fsync() calls
being reached when rotate archives a journal, so archived journals are
never explicitly made durable.
What we do now is, instead of setting f->header->state to STATE_ARCHIVED
directly in journal_file_rotate() prior to journal_file_close(), set an
archive flag in f->archive, which the journal_file_set_offline() machinery
honors by committing STATE_ARCHIVED instead of STATE_OFFLINE when it is set.
Prior to this, rotated journals were never getting fsync() explicitly
performed on them, since journal_file_set_offline() short-circuited.
Obviously this is undesirable, and how much durability was achieved by simply
closing the file depended entirely on the underlying filesystem.
Note that this problem existed prior to the recent asynchronous fsync
changes, but those changes do facilitate our performing this durable
offline on rotate without blocking, regardless of the underlying
filesystem sync-on-close semantics.
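As a rough model of the new flow (names follow the description above, but the
types are simplified stand-ins, not the actual journald structures, and the
real offlining is asynchronous):

#include <unistd.h>

typedef enum { STATE_OFFLINE, STATE_ONLINE, STATE_ARCHIVED } State;

typedef struct JournalFile {
        int fd;
        State state;    /* stands in for f->header->state */
        int archive;    /* set by rotation instead of stamping STATE_ARCHIVED */
} JournalFile;

/* Rotation no longer writes STATE_ARCHIVED itself; it only sets f->archive,
 * so this function still sees STATE_ONLINE, performs the fsync()s, and
 * commits the final state. */
static int journal_file_set_offline(JournalFile *f) {
        if (f->state != STATE_ONLINE)
                return 0;

        fsync(f->fd);                   /* make the data durable */
        f->state = f->archive ? STATE_ARCHIVED : STATE_OFFLINE;
        fsync(f->fd);                   /* ...and the header update */
        return 0;
}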
|
|
Add the boot parameter systemd.default_timeout_start_sec, to allow modification
of the default start job timeout at boot time.
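For example, booting with something like the following on the kernel command
line would raise the default start timeout (the value is only an illustration
and is assumed to use the usual systemd time syntax):

systemd.default_timeout_start_sec=180s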
|
|
|
|
This reverts commit 6e3930c40f3379b7123e505a71ba4cd6db6c372f.
The merge got squashed by mistake.
|
|
nspawn automatic user namespaces
|
|
* sd-journal: detect earlier if we try to read an object from an invalid offset
Specifically, detect early if we try to read from offset 0, i.e. are using
uninitialized offset data.
* journal: when dumping journal contents, react more nicely to lines we can't read
If journal files are not cleanly closed it might happen that intermediary
journal entries cannot be read. Handle this nicely, skip over the unreadable
entries, and log a debug message about it; after all we generally follow the
logic that we try to make the best of corrupted files.
* journal-file: always generate the same error when encountering corrupted files
Let's make sure EBADMSG is the one error we throw when we encounter corrupted
data, so that we can neatly test for it.
* journal-file: when iterating through a partly corrupted journal file, treat error like EOF
When we linearly iterate through a corrupted journal file, and we encounter a
read error, don't consider this fatal, but merely as an EOF condition (and log
about it).
* journal-file: make seeking in corrupted files work
Previously, when we used a bisection table for seeking through a corrupted
file, and the end of the bisection table was corrupted we'd most likely fail
the entire seek operation. Improve the situation: if we encounter invalid
entries in a bisection table, linearly go backwards until we find a working
entry again.
* man: elaborate on the automatic systemd-journald.socket service dependencies
Fixes: #1603
|
|
Previously, when we used a bisection table for seeking through a corrupted
file, and the end of the bisection table was corrupted we'd most likely fail
the entire seek operation. Improve the situation: if we encounter invalid
entries in a bisection table, linearly go backwards until we find a working
entry again.
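A simplified sketch of that fallback, assuming a plain in-memory array of entry
offsets in which an unreadable slot stands in for corruption (the real code
reads entry objects from the on-disk array):

#include <stdbool.h>
#include <stdint.h>

static bool can_read_entry(const uint64_t *items, uint64_t i) {
        return items[i] != 0;   /* stand-in for a successful object read */
}

/* If the slot the bisection landed on cannot be read, walk backwards until
 * a readable entry turns up, instead of failing the whole seek. Returns 1
 * and the index via *ret on success, 0 if nothing readable was found. */
static int seek_with_fallback(const uint64_t *items, uint64_t n, uint64_t candidate, uint64_t *ret) {
        uint64_t i;

        if (n == 0)
                return 0;
        if (candidate >= n)
                candidate = n - 1;

        for (i = candidate + 1; i > 0; i--)
                if (can_read_entry(items, i - 1)) {
                        *ret = i - 1;
                        return 1;
                }

        return 0;
}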
|
|
journal-file: when iterating through a partly corrupted journal file, treat
error like EOF
When we linearly iterate through a corrupted journal file, and we encounter a
read error, don't consider this fatal, but merely as an EOF condition (and log
about it).
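As a compilable toy model of that policy (next_entry() and process() are
placeholders, not the actual sd-journal internals):

#include <errno.h>
#include <stdio.h>

typedef struct Entry { int dummy; } Entry;

static int next_entry(Entry *e) { (void) e; return 0; }         /* placeholder iterator */
static void process(const Entry *e) { (void) e; }               /* placeholder consumer */

static int dump_all(void) {
        Entry e;

        for (;;) {
                int r = next_entry(&e);
                if (r == -EBADMSG) {    /* corrupted tail: treat like EOF */
                        fprintf(stderr, "Hit unreadable entry, stopping here.\n");
                        return 0;
                }
                if (r < 0)
                        return r;       /* other errors still propagate */
                if (r == 0)
                        return 0;       /* genuine end of file */
                process(&e);
        }
}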
|
|
Let's make sure EBADMSG is the one error we throw when we encounter corrupted
data, so that we can neatly test for it.
|
|
If journal files are not cleanly closed it might happen that intermediary
journal entries cannot be read. Handle this nicely, skip over the unreadable
entries, and log a debug message about it; after all we generally follow the
logic that we try to make the best of corrupted files.
|
|
Specifically, detect early if we try to read from offset 0, i.e. are using
uninitialized offset data.
|
|
This way the user service will have a loginuid, and it will be inherited by
child services. This shouldn't change anything as far as systemd itself is
concerned, but is nice for various services spawned by systemd --user
that expect a loginuid.
pam_loginuid(8) says that it should be enabled for "..., crond and atd".
user@.service should behave similarly to those two as far as audit is
concerned.
https://bugzilla.redhat.com/show_bug.cgi?id=1328947#c28
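Concretely, that means the PAM stack used for user@.service (the systemd-user
configuration under /etc/pam.d/, whose exact contents are distribution-specific)
carries a pam_loginuid line roughly like:

session  required  pam_loginuid.so

Whether the module is marked required or optional is a distribution policy
choice; the line above is only illustrative.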
|
|
rework "journalctl -M"
|
|
Fix endless loops in journalctl --list-boots (closes #617).
|
|
non-btrfs file systems (#3117)
Fixes: #2060
(Of course, in the long run, we should probably add a copy-based fall-back. But
given how slow that is, this probably requires some asynchronous forking logic
like the CopyFrom() and CopyTo() method calls already implement.)
|
|
The "resources" error is really just the generic error we return when
we hit some kind of error and we have no more appropriate error for the case to
return, for example because of some OS error.
Hence, reword the explanation and don't claim any relation to resource limits.
Admittedly, the "resources" service error is a bit of a misnomer, but I figure
it's kind of API now.
Fixes: #2716
|
|
networkd: fix address and route conf
|
|
Early in journal_file_set_offline() f->header->state is tested to see if
it's != STATE_ONLINE, and since there's no need to do anything if the
journal isn't online, the function simply returned here.
Since moving part of the offlining process to a separate thread, there
are two problems here:
1. We can't simply check f->header->state, because if there is an
offline thread active it may modify f->header->state.
2. Even if the journal is deemed offline, the thread responsible may
still need joining, so a bare return may leak the thread's resources
like its stack.
To address #1, the helper journal_file_is_offlining() is called prior to
accessing f->header->state.
If journal_file_is_offlining() returns true, f->header->state isn't even
checked, because an offlining journal is obviously online, and we'll
just continue with the normal set offline code path.
If journal_file_is_offlining() returns false, then it's safe to check
f->header->state, because the offline_state is beyond the point of
modifying f->header->state, and there's a memory barrier in the helper.
If we find f->header->state is != STATE_ONLINE, then we call the
idempotent journal_file_set_offline_thread_join() on the way out of the
function, to join a potential lingering offline thread.
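A condensed model of that entry check (simplified types; the real
implementation drives an atomic offline_state machine rather than the plain
booleans used here):

#include <pthread.h>
#include <stdbool.h>

typedef enum { STATE_OFFLINE, STATE_ONLINE, STATE_ARCHIVED } State;

typedef struct JournalFile {
        State header_state;             /* stands in for f->header->state */
        bool offline_thread_running;    /* stands in for the offline_state machinery */
        bool have_offline_thread;
        pthread_t offline_thread;
} JournalFile;

static bool journal_file_is_offlining(JournalFile *f) {
        __sync_synchronize();           /* the real helper contains a memory barrier */
        return f->offline_thread_running;
}

static int journal_file_set_offline_thread_join(JournalFile *f) {
        if (f->have_offline_thread) {
                pthread_join(f->offline_thread, NULL);
                f->have_offline_thread = false;
        }
        return 0;
}

static int journal_file_set_offline(JournalFile *f) {
        /* Only consult header_state when no offline thread may race with us;
         * and even when already offline, join any finished thread so its
         * resources (e.g. its stack) are not leaked. */
        if (!journal_file_is_offlining(f) && f->header_state != STATE_ONLINE)
                return journal_file_set_offline_thread_join(f);

        /* ...normal offlining path continues here... */
        return 0;
}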
|
|
Let's be nice to users, and let's turn the nonsensical "--unit=… --user" into
"--user-unit=…" which the user more likely meant.
Fixes #1621
|
|
Let's document the call as deprecated, since it doesn't properly cover
containers whose directories aren't visible to the host.
|
|
This way, the switch becomes compatible with nspawn containers using --image=,
and with those which only store journal data in /run (i.e. have persistent
logging turned off).
Fixes: #49
|
|
When appending to a journal file, journald will:
a) first, append the actual entry to the end of the journal file
b) second, add an offset reference to it to the global entry array stored at
the beginning of the file
c) third, add offset references to it to the per-field entry array stored at
various places of the file
The global entry array, maintained by b) is used when iterating through the
journal without matches applied.
The per-field entry array maintained by c) is used when iterating through the
journal with a match for that specific field applied.
In the wild, there are journal files where a) and b) were completed, but c)
was not before the files were abandoned. This means that, in some cases, there
are log entries at the end of these files that appear in the global entry
array, but not in the per-field entry array of the _BOOT_ID= field. Now, the
"journalctl --list-boots" command alternatingly uses the global entry array
and the per-field entry array of the _BOOT_ID= field. It seeks to the last
entry of a specific _BOOT_ID= field by having the right match installed, and
then jumps to the next following entry with no match installed anymore, under
the assumption this would bring it to the next boot ID. However, if the
per-field entry array wasn't written fully, it might turn out that the global
entry array knows one more entry with the same _BOOT_ID, thus resulting in an
indefinite loop around the same _BOOT_ID.
This patch fixes that by updating the boot search logic to always continue
reading entries until the boot ID actually changes from the previous one. Thus,
the per-field entry array is used as a quick jump index (i.e. as an
optimization), but is not trusted otherwise. Only the global entry array is
trusted.
This replaces PR #1904, which is actually very similar to this one. However,
this one reads the boot ID directly from the entry header, and doesn't try to
read it at all until the read pointer is actually located on the first item to
read.
Fixes: #617
Replaces: #1904
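For illustration, the "only count a new boot when the boot ID actually changes"
rule can be modeled with the public sd-journal API (a simplified sketch, not
journalctl's actual implementation, which additionally uses _BOOT_ID= matches
as the jump optimization described above):

#include <stdint.h>
#include <stdio.h>
#include <systemd/sd-id128.h>
#include <systemd/sd-journal.h>

int main(void) {
        sd_journal *j;
        sd_id128_t prev = SD_ID128_NULL;
        int n = 0;

        if (sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY) < 0)
                return 1;

        SD_JOURNAL_FOREACH(j) {
                sd_id128_t boot_id;
                uint64_t t;
                char s[SD_ID128_STRING_MAX];

                /* The boot ID is taken from the entry we are positioned on;
                 * never assume the "next" entry already belongs to another boot. */
                if (sd_journal_get_monotonic_usec(j, &t, &boot_id) < 0)
                        continue;
                if (!sd_id128_equal(boot_id, prev)) {
                        printf("boot %d: %s\n", n++, sd_id128_to_string(boot_id, s));
                        prev = boot_id;
                }
        }

        sd_journal_close(j);
        return 0;
}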
|
|
Show the various timestamps in hexadecimal too. This is useful for matching the
timestamps included in cursor strings (which are encoded in hex, too) with the
references in the journal header.
|
|
* sd-netlink: permit RTM_DELLINK messages with no ifindex
This is useful for removing network interfaces by name, as sketched below.
* nspawn: explicitly remove veth links we created after use
Sometimes the kernel keeps veth links pinned after the namespace they have been
joined to has died. Let's hence explicitly remove veth links after use.
Fixes: #2173
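A sketch of the first point, using systemd's internal sd-netlink API (not a
public, stable interface; the signatures below are assumptions taken from the
source tree):

#include <linux/if_link.h>
#include <linux/rtnetlink.h>
#include "sd-netlink.h"

/* Delete a link purely by name, without resolving an ifindex first
 * (ifindex 0 = unspecified). */
static int remove_link_by_name(const char *name) {
        sd_netlink *rtnl = NULL;
        sd_netlink_message *m = NULL;
        int r;

        r = sd_netlink_open(&rtnl);
        if (r < 0)
                return r;

        r = sd_rtnl_message_new_link(rtnl, &m, RTM_DELLINK, 0);
        if (r >= 0)
                r = sd_netlink_message_append_string(m, IFLA_IFNAME, name);
        if (r >= 0)
                r = sd_netlink_call(rtnl, m, 0, NULL);

        sd_netlink_message_unref(m);
        sd_netlink_unref(rtnl);
        return r;
}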
|
|
Drop the "read_realtime" parameter. Getting the realtime timestamp from an
entry is cheap, as it is a normal header field, hence let's just get this
unconditionally, and simplify our code a bit.
|
|
Let's store the reference as simple sd_id128_t, since we don't actually need a
BootId for it.
|
|
|
|
With this change a new flag SD_JOURNAL_OS_ROOT is introduced. If specified
while opening the journal with the per-directory calls (specifically:
sd_journal_open_directory() and sd_journal_open_directory_fd()) the passed
directory is assumed to be the root directory of an OS tree, and the journal
files are searched for in /var/log/journal and /run/log/journal relative to it.
This is useful to allow usage of sd-journal on file descriptors returned by the
OpenRootDirectory() call of machined.
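Usage then looks roughly like this (the container root path is a made-up
example):

#include <stdio.h>
#include <systemd/sd-journal.h>

int main(void) {
        sd_journal *j;
        int r;

        /* Treat /srv/container-root as an OS tree; sd-journal will look for
         * journal files in its /var/log/journal and /run/log/journal. */
        r = sd_journal_open_directory(&j, "/srv/container-root", SD_JOURNAL_OS_ROOT);
        if (r < 0) {
                fprintf(stderr, "Failed to open journal: %d\n", r);
                return 1;
        }

        /* ...iterate as usual, e.g. with SD_JOURNAL_FOREACH(j)... */
        sd_journal_close(j);
        return 0;
}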
|
|
This new call returns a file descriptor for the root directory of a container.
This file descriptor may then be used to access the rest of the container's
file system, via openat() and similar calls. Since the file descriptor returned
is for the file system namespace inside of the container it may be used to
access all files of the container exactly the way the container itself would
see them. This is particularly useful for containers run directly from
loopback media, for example via systemd-nspawn's --image= switch. It also
provides access to directories such as /run of a container that are normally
not accessible to the outside of a container.
This replaces PR #2870.
Fixes: #2870
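A sketch of how a client might combine this with sd-bus and openat() (the
Manager-level method name OpenMachineRootDirectory and the machine name are
assumptions for illustration, and error handling is abbreviated):

#include <fcntl.h>
#include <stdio.h>
#include <systemd/sd-bus.h>
#include <unistd.h>

int main(void) {
        sd_bus *bus = NULL;
        sd_bus_error error = SD_BUS_ERROR_NULL;
        sd_bus_message *reply = NULL;
        int r, fd, dfd = -1;

        if (sd_bus_open_system(&bus) < 0)
                return 1;

        r = sd_bus_call_method(bus,
                               "org.freedesktop.machine1",
                               "/org/freedesktop/machine1",
                               "org.freedesktop.machine1.Manager",
                               "OpenMachineRootDirectory",
                               &error, &reply, "s", "mycontainer");
        if (r >= 0 && sd_bus_message_read(reply, "h", &fd) >= 0)
                dfd = fcntl(fd, F_DUPFD_CLOEXEC, 3);    /* keep it past message unref */

        if (dfd >= 0) {
                /* Access the container's file system exactly as it sees it. */
                int etc = openat(dfd, "etc/os-release", O_RDONLY|O_CLOEXEC);
                if (etc >= 0)
                        close(etc);
                close(dfd);
        }

        sd_bus_message_unref(reply);
        sd_bus_error_free(&error);
        sd_bus_unref(bus);
        return dfd >= 0 ? 0 : 1;
}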
|
|
Also, expose this via the "journalctl --file=-" syntax for STDIN. This feature
remains undocumented though, as it is probably not too useful in real-life as
this still requires fds that support mmaping and seeking, i.e. does not work
for pipes, for which reading from STDIN is most commonly used.
|
|
This should allow tools like rkt to pre-mount read-only subtrees in the OS
tree, without breaking the patching code.
Note that the code will still fail, if the top-level directory is already
read-only.
|
|
|
|
With this change -U will turn on user namespacing only if the kernel actually
supports it and otherwise gracefully degrade to non-userns mode.
|
|
In order to implement this we change the bool arg_userns into an enum
UserNamespaceMode, which can take one of NO, PICK or FIXED, and replace the
arg_uid_range_pick bool with it.
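In code terms, roughly (the exact enumerator names used by nspawn are an
assumption):

typedef enum UserNamespaceMode {
        USER_NAMESPACE_NO,      /* --private-users= not used */
        USER_NAMESPACE_FIXED,   /* an explicit UID/GID base (and range) was given */
        USER_NAMESPACE_PICK,    /* range is picked automatically; replaces arg_uid_range_pick */
} UserNamespaceMode;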
|
|
Given that user namespacing is pretty useful now, let's add a shortcut command
line switch for the logic.
|
|
This adds the new value "pick" to --private-users=. When specified, a new
UID/GID range of 65536 UIDs/GIDs is automatically and randomly allocated from
the host range 0x00080000-0xDFFF0000 and used for the container. The setting
implies --private-users-chown, so that the container directory is recursively
chown()ed to the newly allocated UID/GID range, if that's necessary. As an
optimization, before picking a randomized UID/GID range, the UID of the
container's root directory is tried as the starting point and used if it is
not currently used otherwise.
To protect against using the same UID/GID range multiple times a few mechanisms
are in place:
- The first and the last UID and GID of the range are checked with getpwuid()
and getgrgid(). If an entry already exists, a different range is picked. Note
that as the "last" UID the one at offset 65534 into the range is used, since
65535 is the 16-bit (uid_t) -1.
- A lock file for the range is taken in /run/systemd/nspawn-uid/. Since the
ranges are taken in a non-overlapping fashion, and always start on 64K
boundaries, this allows us to maintain a single lock file for each range that
can be randomly picked. This protects nspawn from picking the same range in
two parallel instances.
- If possible the /etc/passwd lock file is taken while a new range is selected
until the container is up. This means adduser/addgroup should safely avoid
the range as long as nss-mymachines is used, since the allocated range will
then show up in the user database.
The UID/GID range nspawn picks from is compiled in and not configurable at the
moment. That should probably stay that way, since we already provide ways for
users to pick their own ranges manually if they don't like the automatic
logic.
The new --private-users=pick logic makes user namespacing pretty useful now, as
it relieves the user from managing UID/GID ranges.
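A compressed sketch of the picking step, using only the constants named above
(the lock file in /run/systemd/nspawn-uid/ and the /etc/passwd lock are left
out, and the RNG and retry count are arbitrary stand-ins):

#include <grp.h>
#include <pwd.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/types.h>
#include <time.h>

#define UID_RANGE_SIZE  0x10000U        /* 65536 UIDs/GIDs per container */
#define UID_POOL_MIN    0x00080000U
#define UID_POOL_MAX    0xDFFF0000U

/* A candidate base is rejected if its first or "last" (offset 65534) UID or
 * GID already exists in the user/group databases. */
static int range_is_free(uid_t base) {
        return !getpwuid(base) && !getpwuid(base + 65534) &&
               !getgrgid((gid_t) base) && !getgrgid((gid_t) base + 65534);
}

static int pick_uid_range(uid_t preferred, uid_t *ret) {
        uint64_t n_slots = (UID_POOL_MAX - UID_POOL_MIN) / UID_RANGE_SIZE;
        unsigned i;

        /* Try the owner of the container's root directory first... */
        if (preferred >= UID_POOL_MIN && preferred < UID_POOL_MAX &&
            (preferred & (UID_RANGE_SIZE - 1)) == 0 && range_is_free(preferred)) {
                *ret = preferred;
                return 0;
        }

        /* ...otherwise pick random 64K-aligned bases from the pool. */
        srandom((unsigned) time(NULL));
        for (i = 0; i < 100; i++) {
                uid_t base = UID_POOL_MIN + (uid_t) (((uint64_t) random() % n_slots) * UID_RANGE_SIZE);
                if (range_is_free(base)) {
                        *ret = base;
                        return 0;
                }
        }

        return -1;
}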
|
|
This adds a new --private-users-chown switch that may be used in combination
with --private-users=. If it is passed, a recursive chown() operation is run on
the OS tree, fixing all file owner UID/GIDs to the right ranges. This should
make user namespacing pretty workable, as the OS trees don't need to be
prepared manually anymore.
|
|
In nspawn we invoke copy_bytes() on a TTY fd. copy_file_range() returns EBADF
on a TTY, and so far this error was considered fatal by copy_bytes(). Correct
that, so that nspawn's copy_bytes() operation works again.
This is a follow-up for a44202e98b638024c45e50ad404c7069c7835c04.
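The resulting pattern looks roughly like this (a simplified sketch, not the
actual copy_bytes(); the errno set that triggers the fallback beyond EBADF is
an assumption, and glibc 2.27+ is assumed for copy_file_range()):

#define _GNU_SOURCE
#include <errno.h>
#include <sys/types.h>
#include <unistd.h>

static int copy_bytes_sketch(int fdf, int fdt) {
        /* Try copy_file_range() first; on errors that merely mean "this fd
         * type is unsupported" (such as EBADF on a TTY), fall back to a
         * plain read()/write() loop instead of failing. */
        for (;;) {
                ssize_t n = copy_file_range(fdf, NULL, fdt, NULL, 65536, 0);
                if (n == 0)
                        return 0;               /* EOF */
                if (n > 0)
                        continue;
                if (errno != EBADF && errno != EXDEV && errno != ENOSYS && errno != EINVAL)
                        return -errno;
                break;                          /* fall back below */
        }

        for (;;) {
                char buf[4096];
                ssize_t off, n = read(fdf, buf, sizeof(buf));

                if (n == 0)
                        return 0;
                if (n < 0)
                        return -errno;
                for (off = 0; off < n; ) {
                        ssize_t k = write(fdt, buf + off, (size_t) (n - off));
                        if (k < 0)
                                return -errno;
                        off += k;
                }
        }
}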
|
|
|
|
Let's output the actual error code encountered, and let's not claim this was
purely triggered by files, because it can also be triggered by directories.
|
|
Let's also collect errors returned by readdir() into our set of errors, like we
do for all other errors from journal files.
|
|
This is slightly nicer, since we actually watch the directories we opened and
enumerate. However, primarily this is preparation for adding support for
opening journal files by fd without specifying any path, to be added in a later
commit.
|
|
It makes more sense to initialize the node first and then add it to the list.
|
|
We are not able to add multiple properties. With a .network file like

wlp3s0.network:

[Match]
Name=wlp3s0

[Route]
Gateway=10.68.5.26
Metric=10

running "sudo ./systemd-networkd" fails with:

Failed to parse file '/usr/lib/systemd/network/wlp3s0.network': File exists
Could not load configuration files: File exists

This patch fixes it.
|
|
Fixes: #2420
|