Age | Commit message | Author |
|
We always call one after the other anyway, and this way service_set_socket_fd()
and service_close_socket_fd() nicely match each other as one undoes the effect
of the other.
|
|
Let's make sure that when we drop a reference to a unit, we run the GC queue on
it again.
This (together with the previous commit) should deal with the GC issues pointed
out in:
https://github.com/systemd/systemd/pull/2993#issuecomment-215331189
|
|
There's no need to set the no_gc bit for service units that socket units
prepare, as we always keep a proper reference (as maintained by unit_ref_set())
on them, and such references are honoured by the GC logic anyway. Moreover,
explicitly setting the no_gc bit is problematic if the socket gets GC'ed for
some reason, as the service might then leak with the bit set.
|
|
per-connection service
Fixes: #2993 #2691
|
|
In service_set_socket_fd(), let's make sure that if we can't add the requested
dependencies, we don't take possession of the passed connection fd.
This way, we follow the strict rule: we take possession of the passed fd on
success, but on failure we don't, and the fd remains in possession of the
caller.
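For illustration, a rough sketch of that rule with simplified stand-in names
(not the actual service.c code): possession is only taken once every step that
can fail has succeeded.

struct service_sketch {
        int socket_fd;
};

static int add_connection_deps(struct service_sketch *s) {
        (void) s;
        return 0;                /* stand-in for the dependency setup that may fail */
}

static int set_socket_fd_sketch(struct service_sketch *s, int fd) {
        int r;

        r = add_connection_deps(s);
        if (r < 0)
                return r;        /* failure: the fd still belongs to the caller */

        s->socket_fd = fd;       /* success: from now on we own the fd and close it */
        return 0;
}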
|
|
We generally follow the rule that for time settings we suffix the setting name
with "Sec" to indicate the default unit if none is specified. The only
exception was the rate limiting interval settings. Fix this, and keep the old
names for compatibility.
Do the same for journald's RateLimitInterval= setting
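For example, in journald.conf the setting is now spelled with the suffix
(example values; the old RateLimitInterval= name keeps working for
compatibility):

[Journal]
RateLimitIntervalSec=30s
RateLimitBurst=1000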
|
|
With #2564, unit start rate limiting was moved from after the condition checks
to before they are made, in an attempt to fix #2467. This however resulted in
#2684. However, a previous commit added a concept of per-socket-unit trigger
rate limiting in order to fix #2467 more comprehensively, hence the start limit
can be moved after the condition checks again, thus fixing #2684.
Fixes: #2684
|
|
This adds two new settings TriggerLimitIntervalSec= and TriggerLimitBurst= that
define a rate limit for the activation of socket units. When the limit is hit,
the socket is put into a failure mode. This is an alternative fix for #2467,
since the original fix resulted in issue #2684.
In a later commit the StartLimitInterval=/StartLimitBurst= rate limiter will be
changed to be applied after any start condition checks are made. This way,
there are two separate rate limiters enforced: the trigger limit added with
this patch, applied at triggering time before any jobs are queued, as well as
the start limit, which is moved back to run immediately before the unit is
activated. Condition checks are done in between the two, and thus no longer
affect the start limit.
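For example, a per-connection socket unit could cap its activation rate like
this (illustrative values):

[Socket]
ListenStream=2222
Accept=yes
TriggerLimitIntervalSec=2s
TriggerLimitBurst=200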
|
|
In 4.2 kernel headers, some netlink defines that we need are missing. missing.h
can already add them in, but currently makes this dependent on a definition
that these kernels already have. Hence, change the check to test for the newest
definition in the table, so that the whole set of definitions is added on all
kernels lacking it.
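The guard pattern looks roughly like this (made-up attribute names, purely for
illustration, not the real netlink constants):

#ifndef EXAMPLE_NETLINK_ATTR_NEWEST       /* test the newest entry in the table ... */
#define EXAMPLE_NETLINK_ATTR_OLDER   7    /* 4.2 headers already carry this one */
#define EXAMPLE_NETLINK_ATTR_NEWEST  8    /* ... so this one gets picked up as well */
#endif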
|
|
This commit improves systemd performance on systems which have
thousands of units.
|
|
fsync directory when creating or rotating journal files and other small fixes,
most importantly for the DHCP DUID code.
|
|
Always create dependencies for bind mounts
|
|
As suggested by:
https://github.com/systemd/systemd/pull/3126#discussion_r61125474
|
|
Let's move DUID configuration into the [DHCP] section, since it only makes
sense in a DHCP context, and should be close to the configuration of
ClientIdentifier= and suchlike.
This really shouldn't be a section of its own, we don't have any for any of our
other per-protocol specific identifiers...
Follow-up for #2890 #2943
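At the time of this change, a .network file would thus carry the DUID settings
right next to the other DHCP client options, for example (illustrative values):

[DHCP]
ClientIdentifier=duid
DUIDType=vendor
DUIDRawData=00:00:ab:11:f9:2a:c2:77:29:f9:5c:00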
|
|
created in too
Fixes: #2831
|
|
On s390, size_t is an unsigned long, not an unsigned int. They both are
of the same size and can be cast to each other safely, but the compiler
still seems unhappy about incompatible pointers.
Fixes: 7c2da2ca8
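A minimal illustration of the warning class being fixed here (not the actual
call site):

#include <stddef.h>
#include <stdio.h>

static void get_len(const char *s, size_t *ret) {   /* callee expects a size_t* */
        size_t n = 0;

        while (s[n])
                n++;
        *ret = n;
}

int main(void) {
        size_t len;     /* declaring this as 'unsigned int' would pass an
                         * incompatible pointer on s390, where size_t is
                         * 'unsigned long' */

        get_len("example", &len);
        printf("%zu\n", len);
        return 0;
}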
|
|
Various small cleanups in shared code
|
|
Fixes:
cp /etc/machine-id /var/tmp/systemd-test.HccKPa/nspawn-root/etc
systemd-nspawn -D /var/tmp/systemd-test.HccKPa/nspawn-root --link-journal host -b
...
Host and machine ids are equal (P�S!V): refusing to link journals
|
|
Now we do not set static addresses, start the DHCPv6 client, or discover IPv6
routers until the link has gained carrier.
This fixes #2912.
|
|
Added to kernel 4.6.
|
|
Fixes:
$ systemd-nspawn -h
...
Failed to remove veth interface ����: Operation not permitted
This is a follow-up for d2773e59de3dd970d861
|
|
Running cgtop on a system which lacks the expected stat file results in a
segfault. For example, a system with a blkio tree but without the cfq I/O
scheduler lacks "blkio.io_service_bytes".
When the targeted cgroup's file does not exist, process() returns 0 and
does not modify `*ret' (which is `*ours'). As a result, callers of
refresh_one() can end up with a bogus pointer, which results in a SIGSEGV.
This patch just properly initializes the variable to NULL.
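The pattern of the fix, sketched with simplified stand-ins (not the actual
cgtop code):

#include <stdio.h>

typedef struct Group { const char *path; } Group;

/* Mirrors the behaviour described above: returns 0 without touching *ret
 * when the stat file is absent. */
static int process_sketch(const char *stat_file, Group **ret) {
        static Group g;

        if (!stat_file)         /* stand-in for a missing "blkio.io_service_bytes" */
                return 0;

        g.path = stat_file;
        *ret = &g;
        return 0;
}

int main(void) {
        Group *ours = NULL;     /* the fix: initialize instead of leaving it bogus */

        process_sketch(NULL, &ours);
        if (ours)
                printf("%s\n", ours->path);
        else
                puts("no such stat file, nothing to show");
        return 0;
}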
|
|
In standard Linux parlance, "hidden" usually means that the file name starts
with ".", and nothing else. Rename the function to better convey to casual
readers what it does.
Stop exposing hidden_file_allow_backup which is rather ugly and rewrite
hidden_file to extract the suffix first. Note that hidden_file_allow_backup
excluded files with "~" at the end, which is quite confusing. Let's get
rid of it before it gets used in the wrong place.
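A rough sketch of the reworked check (illustrative only; the real helper also
matches package-manager suffixes such as ".rpmnew" or ".dpkg-old" against the
extracted suffix):

#include <stdbool.h>
#include <string.h>

static bool hidden_or_backup_file_sketch(const char *name) {
        size_t l = strlen(name);

        if (name[0] == '.')                     /* hidden in the usual Linux sense */
                return true;

        return l > 0 && name[l - 1] == '~';     /* editor backup file */
}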
|
|
dirent_is_file_with_suffix
If the file name is supposed to end in a suffix, there's no need to check the
name against a list of "special" file names, which is slow. Instead, just check
that the name doesn't start with a period.
|
|
It's better to avoid having the option string duplicated, lest we forget
to keep the copies in sync in the future.
|
|
The parse_pid() function doesn't succeed if we don't zero-terminate after the
last digit in the buffer.
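The underlying problem, illustrated with a hypothetical reader (not the actual
call site):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
        char buf[64];
        ssize_t n;

        n = read(STDIN_FILENO, buf, sizeof(buf) - 1);
        if (n <= 0)
                return 1;

        buf[n] = 0;     /* the fix: terminate right after the last byte read,
                         * otherwise the parser may run into leftover garbage */

        printf("parsed: %lu\n", strtoul(buf, NULL, 10));
        return 0;
}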
|
|
ucf is a standard Debian helper for managing configuration file upgrades which
need more interaction or elaborate merging than conffiles managed by dpkg.
Ignore its temporary and backup files similarly to the *.dpkg-* ones to avoid
creating units for them in generators.
https://bugs.debian.org/775903
|
|
The only code path which makes a journal durable is via
journal_file_set_offline().
When we perform a rotate the journal's header->state is being set to
STATE_ARCHIVED prior to journal_file_set_offline() being called.
In journal_file_set_offline(), we short-circuit the entire offline when
f->header->state != STATE_ONLINE.
This all results in none of the journal_file_set_offline() fsync() calls
being reached when rotate archives a journal, so archived journals are
never explicitly made durable.
What we do now is that, instead of setting f->header->state to
STATE_ARCHIVED directly in journal_file_rotate() prior to
journal_file_close(), we set an archive flag in f->archive for the
journal_file_set_offline() machinery to honor by committing
STATE_ARCHIVED instead of STATE_OFFLINE when that flag is set.
Prior to this, rotated journals were never getting fsync() explicitly
performed on them, since journal_file_set_offline() short-circuited.
Obviously this is undesirable, and depends entirely on the underlying
filesystem as to how much durability was achieved when simply closing
the file.
Note that this problem existed prior to the recent asynchronous fsync
changes, but those changes do facilitate our performing this durable
offline on rotate without blocking, regardless of the underlying
filesystem sync-on-close semantics.
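Conceptually, with simplified stand-in types (not the real journal-file.c
code), the offline path now commits the state like this:

enum state_sketch { STATE_OFFLINE, STATE_ONLINE, STATE_ARCHIVED };

struct file_sketch {
        enum state_sketch state;
        int archive;            /* set by rotation instead of writing
                                 * STATE_ARCHIVED into the header directly */
};

static void set_offline_sketch(struct file_sketch *f) {
        if (f->state != STATE_ONLINE)
                return;

        /* ... fsync() of the data and of the directory happens here ... */

        f->state = f->archive ? STATE_ARCHIVED : STATE_OFFLINE;

        /* ... followed by an fsync() of the header, so archived journals
         * are made durable just like cleanly offlined ones. */
}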
|
|
Add the boot parameter: systemd.default_timeout_start_sec to allow modification
of the default start job timeout at boot time.
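For example, appending this to the kernel command line raises the default
start timeout to three minutes:

systemd.default_timeout_start_sec=180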
|
|
This reverts commit 6e3930c40f3379b7123e505a71ba4cd6db6c372f.
Merge got squashed by mistake.
|
|
nspawn automatic user namespaces
|
|
* sd-journal: detect earlier if we try to read an object from an invalid offset
Specifically, detect early if we try to read from offset 0, i.e. are using
uninitialized offset data.
* journal: when dumping journal contents, react nicer to lines we can't read
If journal files are not cleanly closed it might happen that intermediary
journal entries cannot be read. Handle this nicely, skip over the unreadable
entries, and log a debug message about it; after all we generally follow the
logic that we try to make the best of corrupted files.
* journal-file: always generate the same error when encountering corrupted files
Let's make sure EBADMSG is the one error we throw when we encounter corrupted
data, so that we can neatly test for it.
* journal-file: when iterating through a partly corrupted journal file, treat error like EOF
When we linearly iterate through a corrupted journal file, and we encounter a
read error, don't consider this fatal, but merely as EOF condition (and log
about it).
* journal-file: make seeking in corrupted files work
Previously, when we used a bisection table for seeking through a corrupted
file, and the end of the bisection table was corrupted we'd most likely fail
the entire seek operation. Improve the situation: if we encounter invalid
entries in a bisection table, linearly go backwards until we find a working
entry again.
* man: elaborate on the automatic systemd-journald.socket service dependencies
Fixes: #1603
|
|
Previously, when we used a bisection table for seeking through a corrupted
file, and the end of the bisection table was corrupted we'd most likely fail
the entire seek operation. Improve the situation: if we encounter invalid
entries in a bisection table, linearly go backwards until we find a working
entry again.
|
|
error like EOF
When we linearly iterate through a corrupted journal file, and we encounter a
read error, don't consider this fatal, but merely as EOF condition (and log
about it).
|
|
Let's make sure EBADMSG is the one error we throw when we encounter corrupted
data, so that we can neatly test for it.
|
|
If journal files are not cleanly closed it might happen that intermediary
journal entries cannot be read. Handle this nicely, skip over the unreadable
entries, and log a debug message about it; after all we generally follow the
logic that we try to make the best of corrupted files.
|
|
Specifically, detect early if we try to read from offset 0, i.e. are using
uninitialized offset data.
|
|
This way the user service will have a loginuid, and it will be inherited by
child services. This shouldn't change anything as far as systemd itself is
concerned, but is nice for various services spawned by systemd --user
that expect a loginuid.
pam_loginuid(8) says that it should be enabled for "..., crond and atd".
user@.service should behave similarly to those two as far as audit is
concerned.
https://bugzilla.redhat.com/show_bug.cgi?id=1328947#c28
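Concretely, the PAM stack used for user@.service (typically
/etc/pam.d/systemd-user) would gain a line along these lines:

session  required  pam_loginuid.so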
|
|
rework "journalctl -M"
|
|
Fix endless loops in journalctl --list-boots (closes #617).
|
|
non-btrfs file systems (#3117)
Fixes: #2060
(Of course, in the long run, we should probably add a copy-based fall-back. But
given how slow that is, this probably requires some asynchronous forking logic
like the CopyFrom() and CopyTo() method calls already implement.)
|
|
The "resources" error is really just the generic error we return when
we hit some kind of error and we have no more appropriate error for the case to
return, for example because of some OS error.
Hence, reword the explanation and don't claim any relation to resource limits.
Admittedly, the "resources" service error is a bit of a misnomer, but I figure
it's kind of API now.
Fixes: #2716
|
|
networkd: fix address and route configuration
|
|
Early in journal_file_set_offline() f->header->state is tested to see if
it's != STATE_ONLINE, and since there's no need to do anything if the
journal isn't online, the function simply returned here.
Since moving part of the offlining process to a separate thread, there
are two problems here:
1. We can't simply check f->header->state, because if there is an
offline thread active it may modify f->header->state.
2. Even if the journal is deemed offline, the thread responsible may
still need joining, so a bare return may leak the thread's resources
like its stack.
To address #1, the helper journal_file_is_offlining() is called prior to
accessing f->header->state.
If journal_file_is_offlining() returns true, f->header->state isn't even
checked, because an offlining journal is obviously online, and we'll
just continue with the normal set offline code path.
If journal_file_is_offlining() returns false, then it's safe to check
f->header->state, because the offline_state is beyond the point of
modifying f->header->state, and there's a memory barrier in the helper.
If we find f->header->state is != STATE_ONLINE, then we call the
idempotent journal_file_set_offline_thread_join() on the way out of the
function, to join a potential lingering offline thread.
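The resulting early-return looks roughly like this sketch (simplified; the
real function also drives the rest of the offline_state machinery):

/* sketch of the reworked entry check in journal_file_set_offline() */
if (!journal_file_is_offlining(f) && f->header->state != STATE_ONLINE)
        /* nothing to offline, but a finished offline thread may still
         * need joining so its resources are released */
        return journal_file_set_offline_thread_join(f);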
|
|
Let's be nice to users, and let's turn the nonsensical "--unit=… --user" into
"--user-unit=…" which the user more likely meant.
Fixes #1621
|