Age | Commit message | Author |
|
|
|
The log forwarding levels can be configured through the kernel command line.
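For illustration, a possible kernel command line snippet; the exact option names are an assumption here, taken from what journald.conf(5) documents for the systemd.journald.max_level_* family:
```
systemd.journald.max_level_console=info systemd.journald.max_level_kmsg=notice
```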
|
|
Just to make sure the next person reading this isn't surprised that the fd isn't
kept open. SAK and stuff...
Fix suggested:
https://github.com/systemd/systemd/pull/4366#issuecomment-253659162
|
|
Now that determine_space_for() only deals with storage space (cached) values,
rename it so it reflects the fact that only the cached storage space values are
updated.
|
|
Updating min_use is a rather unusual operation that only happens when we first
open the journal files, so extract it from determine_space_for(), create a
function of its own, and call this new function when needed.
determine_space_for() is now dealing with storage space (cached) values only.
There should be no functional changes.
|
|
Introduce a dedicated helper in order to reset the storage space cache.
|
|
The set of storage space values we cache are calculated according to a couple
of filesystem statistics (free blocks, block size).
This patch caches the vfs stats we're interested in so these values are
available later and coherent with the rest of the cached space values.
|
|
This patch makes system_journal_open() stop emitting the space usage
message. The caller is now free to emit this message when appropriate.
When restarting the journal, we can now emit the message *after*
flushing the journal (if required) so that all flushed log entries are
written in the persistent journal *before* the status message.
This is required since the status message is always younger than the
flushed entries.
Fixes #4190.
|
|
This commit simply extracts from determine_space_for() the code which emits the
storage usage message and puts it into a function of its own so it can be reused
by other paths later.
No functional changes.
|
|
This structure keeps track of the specifics of a given journal type
(persistent or volatile) such as metrics, name, etc...
The cached space values are now moved into this structure so that each
journal has its own set of cached values.
Previously only one set existed and we didn't know if the cached
values were for the runtime journal or the persistent one.
When doing:
determine_space_for(s, runtime_metrics, ...);
determine_space_for(s, system_metrics, ...);
the second call returned the cached values for the runtime metrics.
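A minimal sketch of the idea, with hypothetical field names (the real structure in journald may differ):
```c
#include <stdint.h>

/* Hypothetical sketch: one instance per journal type (persistent, volatile),
 * so each keeps its own name, metrics and cached space values. */
typedef struct JournalStorageSpace {
        uint64_t vfs_blocks_free;   /* cached statvfs(3) data */
        uint64_t vfs_block_size;
        uint64_t available;         /* space we may still use */
        uint64_t limit;             /* configured usage limit */
} JournalStorageSpace;

typedef struct JournalStorage {
        const char *name;           /* e.g. "System journal" or "Runtime journal" */
        char *path;                 /* e.g. a directory under /var/log/journal or /run/log/journal */
        /* per-type metrics (size limits etc.) would live here too */
        JournalStorageSpace space;  /* cached values, one set per journal type */
} JournalStorage;
```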
|
|
This commit simply extracts from determine_space_for() the code which determines
the usage of the filesystem where the passed path lives (statvfs(3)) and puts it
into a function of its own so it can be reused by other paths later.
No functional changes.
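A self-contained sketch of such a helper (hypothetical name and shape; the actual extracted function may look different):
```c
#include <sys/statvfs.h>
#include <stdint.h>

/* Query the filesystem the given path lives on and return the two values the
 * space calculation depends on (free blocks and block size). */
static int fs_usage(const char *path, uint64_t *ret_blocks_free, uint64_t *ret_block_size) {
        struct statvfs ss;

        if (statvfs(path, &ss) < 0)
                return -1;      /* errno is set by statvfs(3) */

        *ret_blocks_free = ss.f_bfree;
        *ret_block_size = ss.f_bsize;
        return 0;
}
```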
|
|
|
|
|
|
Fixes: #4060
|
|
Let's just say that the journal takes up space in the file system, not on disk,
as tmpfs is definitely a file system, but not a disk.
Fixes: #4059
|
|
Never permit that we write to journal files that have newer timestamps than our
local wallclock has. If we'd accept that, then the entries in the file might
end up not being ordered strictly.
Let's refuse this with ETXTBSY, and then immediately rotate to use a new file,
so that each file also remains strictly ordered by wallclock internally.
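A sketch of the check being described (hypothetical helper names, not the verbatim journal-file code):
```c
#include <errno.h>
#include <stdint.h>
#include <time.h>

typedef uint64_t usec_t;

static usec_t now_realtime_usec(void) {
        struct timespec ts;

        clock_gettime(CLOCK_REALTIME, &ts);
        return (usec_t) ts.tv_sec * 1000000ULL + (usec_t) ts.tv_nsec / 1000ULL;
}

/* Refuse an entry stamped later than the local wallclock: accepting it could
 * break the file's strict internal ordering. The caller reacts to -ETXTBSY by
 * rotating to a fresh journal file. */
static int check_entry_realtime(usec_t entry_realtime) {
        if (entry_realtime > now_realtime_usec())
                return -ETXTBSY;

        return 0;
}
```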
|
|
As soon as we notice that the clock jumps backwards, rotate journal files. This
is beneficial, as this makes sure that the entries in journal files remain
strictly ordered internally, and thus the bisection algorithm applied on it is
not confused.
This should help avoid borked wallclock-based bisection on journal files as
witnessed in #4278.
|
|
Let's use the earliest linearized event timestamp for journal entries we have:
the event dispatch timestamp from the event loop, instead of requerying the
timestamp at the time of writing.
This makes the time a bit more accurate, allows us to query the kernel time one
time less per event loop, and also makes sure we always use the same timestamp
for both attempts to write an entry to a journal file.
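Hedged illustration using the public sd-event API: sd_event_now(3) returns the timestamp of the current event loop iteration, so the same value can be reused for every write attempt of one entry.
```c
#include <systemd/sd-event.h>
#include <time.h>

/* Take the entry timestamp from the event loop's dispatch time instead of
 * querying the kernel clock again at write time. */
static int get_dispatch_timestamp(sd_event *e, uint64_t *ret_usec) {
        return sd_event_now(e, CLOCK_REALTIME, ret_usec);
}
```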
|
|
When iterating through partially synced journal files we need to be prepared
for hitting invalid entries (specifically: non-initialized ones). Instead of
generating an error and giving up, let's simply try to proceed with the next one
that is valid (and log about this at debug level).
This reworks the logic introduced with caeab8f626e709569cc492b75eb7e119076059e7
to apply to iteration in both directions, and tries to look for valid entries located
after the invalid one. It also extends the behaviour to both iterating through
the global entry array and per-data object entry arrays.
Fixes: #4088
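A generic, self-contained sketch of the strategy (hypothetical data model, not sd-journal's actual entry arrays):
```c
#include <stdbool.h>
#include <stdio.h>

typedef struct Entry {
        bool valid;     /* false for non-initialized / corrupt entries */
        int value;
} Entry;

/* Walk the array in the requested direction (+1 or -1) and skip over invalid
 * entries instead of giving up, logging each skip. */
static const Entry* next_valid(const Entry *entries, int n, int start, int direction) {
        for (int i = start; i >= 0 && i < n; i += direction) {
                if (entries[i].valid)
                        return entries + i;

                fprintf(stderr, "Entry %d is invalid, trying the next one.\n", i);
        }

        return NULL;    /* no further valid entry in this direction */
}
```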
|
|
Let's make dissecting of borked journal files more expressive: if we encounter
an object whose first 8 bytes are all zeroes, then let's assume the object was
simply never initialized, and say so.
Previously, this would be detected as "overly short object", which is true too
in a way, but it's a lot more helpful to print different debug messages for the
case where the size is not initialized at all and where the size is initialized
to some bogus value.
No functional behaviour change, only different log messages for the two cases.
|
|
Let's add an extra check, reusing check_properly_ordered() also for
journal_file_next_entry_for_data().
|
|
This adds a new call check_properly_ordered(), which we can reuse later, and
makes the code a bit more readable.
|
|
This allows us to share a bit more code between journal_file_next_entry() and
journal_file_next_entry_for_data().
|
|
Let's make it easier to figure out when we see an invalid journal file, why we
consider it invalid, and add some minimal debug logging for it.
This log output is normally not seen (after all, this all is library code),
unless debug logging is explicitly turned on.
|
|
When the system date is changed to a time in the future, a normal user logs to
a new journal file, and the date is then changed back to the present, the
"journalctl --list-boots" command goes into an infinite loop. This commit tries
to fix this problem by first checking whether the boot ID that was found is
already in the boot ID list; if it is, the boot ID search loop is stopped.
|
|
This adds a new invocation ID concept to the service manager. The invocation ID
identifies each runtime cycle of a unit uniquely. A new randomized 128bit ID is
generated each time a unit moves from an inactive to an activating or active
state.
The primary use case for this concept is to connect the runtime data PID 1
maintains about a service with the offline data the journal stores about it.
Previously we'd use the unit name plus start/stop times, which however is
highly racy since the journal will generally process log data after the service
already ended.
The "invocation ID" kinda matches the "boot ID" concept of the Linux kernel,
except that it applies to an individual unit instead of the whole system.
The invocation ID is passed to the activated processes as environment variable.
It is additionally stored as extended attribute on the cgroup of the unit. The
latter is used by journald to automatically retrieve it for each logged
message and attach it to the log entry. The environment variable is very easily
accessible, even for unprivileged services. OTOH the extended attribute is only
accessible to privileged processes (this is because cgroupfs only supports the
"trusted." xattr namespace, not "user."). The environment variable may be
altered by services, the extended attribute may not be, hence it is the better
choice for the journal.
Note that reading the invocation ID off the extended attribute from journald is
racy, similar to the way reading the unit name for a logging process is.
This patch adds APIs to read the invocation ID to sd-id128:
sd_id128_get_invocation() may be used in a similar fashion to
sd_id128_get_boot().
PID1's own logging is updated to always include the invocation ID when it logs
information about a unit.
A new bus call GetUnitByInvocationID() is added that allows retrieving a bus
path to a unit by its invocation ID. The bus path is built using the invocation
ID, thus providing a path for referring to a unit that is valid only for the
current runtime cycle of it.
Outlook for the future: should the kernel eventually allow passing of cgroup
information along AF_UNIX/SOCK_DGRAM messages via a unique cgroup id, then we
can alter the invocation ID to be generated as hash from that rather than
entirely randomly. This way we can derive the invocation ID race-freely from the
messages.
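Example use of the new sd-id128 API mentioned above:
```c
#include <stdio.h>
#include <systemd/sd-id128.h>

int main(void) {
        sd_id128_t id;
        int r;

        /* Returns the invocation ID of the unit this process belongs to,
         * analogous to sd_id128_get_boot(3). */
        r = sd_id128_get_invocation(&id);
        if (r < 0) {
                fprintf(stderr, "Failed to get invocation ID: %d\n", r);
                return 1;
        }

        printf("Invocation ID: " SD_ID128_FORMAT_STR "\n", SD_ID128_FORMAT_VAL(id));
        return 0;
}
```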
|
|
|
|
We are already attaching the system slice information to log messages, now add
the user slice info too, as well as the object slice info.
|
|
journal_rate_limit_test() (#4291)
Currently, the ratelimit does not handle the number of suppressed messages accurately.
Even though the number of messages reaches the limit, it still allows one extra message to be added to the journal.
This patch fixes the problem.
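A generic illustration of the kind of off-by-one being described (hypothetical code, not the actual journal_rate_limit_test() implementation): pre-checking the running count with "<=" accepts burst + 1 messages instead of burst.
```c
/* Hypothetical illustration only. Testing "num <= burst" before incrementing
 * accepts messages for num = 0 .. burst, i.e. burst + 1 of them. Comparing
 * with "<" yields exactly `burst` accepted messages. */
static int ratelimit_test(unsigned *num, unsigned burst) {
        if (*num < burst) {     /* was effectively "<=", letting one extra message through */
                (*num)++;
                return 1;       /* accept */
        }

        (*num)++;               /* still count it, for the "suppressed N messages" report */
        return 0;               /* suppress */
}
```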
|
|
When s->length is zero this function doesn't do anything, note that in a
comment.
|
|
This patch fixes the wrong calculation in burst_modulate(), which computed
values smaller than the really expected ones when the available disk space is
strictly more than 1MB.
In particular, if the available disk space is strictly more than 1MB and
strictly less than 16MB, the resulting value becomes smaller than the original
one.
>>> (math.log2(1*1024**2)-16) / 4
1.0
>>> (math.log2(16*1024**2)-16) / 4
2.0
>>> (math.log2(256*1024**2)-16) / 4
3.0
→ This matches the comment in the function.
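A sketch of the intended modulation, reconstructed from the math above (hypothetical helper; u64log2() here stands in for an integer log2):
```c
#include <stdint.h>

/* Integer log2 for the sketch (returns 0 for inputs 0 and 1). */
static unsigned u64log2(uint64_t n) {
        unsigned k = 0;

        while (n >>= 1)
                k++;

        return k;
}

/* For more than 1 MiB (2^20 bytes) of available space, scale the burst limit
 * by (log2(available) - 16) / 4, matching the Python examples above:
 * 1 MiB -> x1, 16 MiB -> x2, 256 MiB -> x3, 4 GiB -> x4, ... */
static unsigned burst_modulate(unsigned burst, uint64_t available) {
        unsigned k = u64log2(available);

        if (k <= 20)
                return burst;

        return burst * (k - 16) / 4;
}
```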
|
|
Since commit 5996c7c295e073ce21d41305169132c8aa993ad0 (v190 !), the
calculation of the HMAC is broken because the hash for a data object
including a field is done in the wrong order: the field object is
hashed before the data object is.
However, during verification, the hash is computed in the opposite order, as
objects are scanned sequentially.
|
|
We shouldn't silently fail when appending the tag to a journal file
since FSS protection will simply be disabled in this case.
|
|
|
|
Network file dropins
|
|
In preparation for adding a version which takes a strv.
|
|
The key used to be jammed next to the local file path. Based on the format string on line 1675, I determined that the order of arguments was written incorrectly, and updated the function based on that assumption.
Before:
```
Please write down the following secret verification key. It should be stored
at a safe location and should not be saved locally on disk.
/var/log/journal/9b47c1a5b339412887a197b7654673a7/fss8f66d6-f0a998-f782d0-1fe522/18fdb8-35a4e900
The sealing key is automatically changed every 15min.
```
After:
```
Please write down the following secret verification key. It should be stored
at a safe location and should not be saved locally on disk.
d53ed4-cc43d6-284e10-8f0324/18fdb8-35a4e900
The sealing key is automatically changed every 15min.
```
|
|
Strerror removal and other janitorial cleanups
|
|
|
|
|
|
According to its manual page, flags given to mkostemp(3) shouldn't include
O_RDWR, O_CREAT or O_EXCL flags as these are always included. Beyond
those, the only flag that all callers (except a few tests where it
probably doesn't matter) use is O_CLOEXEC, so set that unconditionally.
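A minimal illustration of the described convention (not the systemd helper itself):
```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>

/* mkostemp(3) already implies O_RDWR | O_CREAT | O_EXCL, so callers only need
 * to pass the extra flags they want; O_CLOEXEC is the one all of them use. */
static int open_tmpfile_cloexec(char *template_path) {
        return mkostemp(template_path, O_CLOEXEC);
}
```
Usage would look like: char path[] = "/tmp/journal-XXXXXX"; int fd = open_tmpfile_cloexec(path);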
|
|
Minor cleanup suggested by Lennart.
|
|
When the system journal becomes re-opened post-flush with the runtime
journal open, it implies we've recovered from something like an ENOSPC
situation where the system journal rotate had failed, leaving the system
journal closed, causing the runtime journal to be opened post-flush.
For the duration of the unavailable system journal, we log to the
runtime journal. But when the system journal gets opened (space made
available, for example), we need to close the runtime journal before new
journal writes will go to the system journal. Calling
server_flush_to_var() after opening the system journal with a runtime
journal present, post-flush, achieves this while preserving the runtime
journal's contents in the system journal.
The combination of the present flushed flag file and the runtime journal
being open is a state where we should be logging to the system journal,
so it's appropriate to resume doing so once we've successfully opened
the system journal.
|
|
Dynamic users should be treated like system users, and their logs
should end up in the main system journal.
|
|
Make journalctl more flexible
|
|
If the journals get into a closed state, for example when rotation fails due to
ENOSPC, then when space is made available again it currently goes unnoticed,
leaving the journals in a closed state indefinitely.
By calling system_journal_open() on entry to find_journal() we ensure
the journal has been opened/created if possible.
Also moved system_journal_open() up to after open_journal(), before
find_journal().
Fixes https://github.com/systemd/systemd/issues/3968
|
|
It is useful to look at a (possibly inactive) container or other OS tree
with --root=/path/to/container. This is similar to specifying
--directory=/path/to/container/var/log/journal --directory=/path/to/container/run/systemd/journal
(if using --directory multiple times was allowed), but doesn't require
as much typing.
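For example (the path is illustrative):
```
journalctl --root=/path/to/container
```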
|
|
The directory argument that is given to sd_j_o_d was ignored when
SD_JOURNAL_OS_ROOT was given, and directories relative to the root of the host
file system were used. With that flag, sd_j_o_d should do the same as
sd_j_open_container: use the path as "prefix", i.e. the directory relative to
which everything happens.
Instead of touching sd_j_o_d, journal_new is fixed to do what sd_j_o_c
was doing, and treat the specified path as prefix when SD_JOURNAL_OS_ROOT is
specified.
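Hedged example of the resulting behaviour via the public API:
```c
#include <stdio.h>
#include <systemd/sd-journal.h>

int main(void) {
        sd_journal *j;
        int r;

        /* With SD_JOURNAL_OS_ROOT, the path is treated as an OS root ("prefix"),
         * and the usual journal directories are looked up relative to it. */
        r = sd_journal_open_directory(&j, "/path/to/container", SD_JOURNAL_OS_ROOT);
        if (r < 0) {
                fprintf(stderr, "Failed to open journal: %d\n", r);
                return 1;
        }

        sd_journal_close(j);
        return 0;
}
```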
|
|
There is no reason not to. This makes journalctl -D ... --system work,
useful for example when viewing files from a deactivated container.
|
|
… in preparation for future changes.
|