|
The only code path which makes a journal durable is via
journal_file_set_offline().
When we perform a rotate the journal's header->state is being set to
STATE_ARCHIVED prior to journal_file_set_offline() being called.
In journal_file_set_offline(), we short-circuit the entire offline when
f->header->state != STATE_ONLINE.
This all results in none of the journal_file_set_offline() fsync() calls
being reached when rotate archives a journal, so archived journals are
never explicitly made durable.
Now, instead of setting f->header->state to STATE_ARCHIVED directly in
journal_file_rotate() prior to journal_file_close(), we set an archive
flag in f->archive, which the journal_file_set_offline() machinery
honors by committing STATE_ARCHIVED instead of STATE_OFFLINE.
Prior to this, rotated journals were never getting fsync() explicitly
performed on them, since journal_file_set_offline() short-circuited.
Obviously this is undesirable; how much durability was achieved when
simply closing the file depended entirely on the underlying filesystem.
Note that this problem existed prior to the recent asynchronous fsync
changes, but those changes do facilitate our performing this durable
offline on rotate without blocking, regardless of the underlying
filesystem sync-on-close semantics.
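To illustrate, a minimal sketch of the resulting offline logic, with
simplified stand-in types and most details elided (not the actual
implementation; the helper name here is hypothetical):

    #include <stdbool.h>
    #include <unistd.h>

    enum { STATE_OFFLINE = 0, STATE_ONLINE = 1, STATE_ARCHIVED = 2 };

    typedef struct Header { unsigned char state; } Header;
    typedef struct JournalFile {
            int fd;
            bool archive;   /* set by rotate instead of writing the state directly */
            Header *header; /* the mmap()ed file header */
    } JournalFile;

    /* Commit the final state, honoring the archive flag, so the fsync()
     * path is reached for rotated (archived) files too. */
    static void journal_file_commit_offline_state(JournalFile *f) {
            f->header->state = f->archive ? STATE_ARCHIVED : STATE_OFFLINE;
            (void) fsync(f->fd); /* make the state change durable */
    }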
|
|
Also, expose this via the "journalctl --file=-" syntax for STDIN. This feature
remains undocumented, though, as it is probably not too useful in real life:
it still requires fds that support mmapping and seeking, i.e. it does not work
for pipes, which are what reading from STDIN is most commonly used for.
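For example, a redirection hands journalctl a seekable, mmappable fd,
while a pipe does not:

    journalctl --file=- < system.journal        # works: regular file fd
    cat system.journal | journalctl --file=-    # fails: pipes can't be mmapped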
|
|
When we rotate journals, we must set offline and close the current one,
but don't generally need to wait for this to complete.
Instead, we'll initiate an asynchronous offline via
journal_file_set_offline(oldfile, false), and add the file to a
per-server set of deferred closes to be closed later when they
won't block.
There's one complication, however: journal_file_open(), via
journal_file_verify_header(), assumes that any writable journal in the
online state is the product of an unclean shutdown or some other form
of corruption.
Thus journal_file_open() needs to be aware of deferred closes and to
synchronize with their completion when opening preexisting journals for
writing. To facilitate this, the deferred-closes set is supplied to
journal_file_open(), which in such circumstances closes the deferred
closes synchronously before verifying the header.
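Roughly, with simplified types and a hypothetical set API, the
synchronization looks like this:

    #include <stdbool.h>

    typedef struct JournalFile JournalFile;
    typedef struct Set Set; /* the server's set of deferred closes */

    extern bool set_contains(Set *s, void *p);
    extern void set_remove(Set *s, void *p);
    extern void journal_file_close(JournalFile *f); /* waits for any async offline */

    /* Before verifying the header of a journal about to be reopened for
     * writing, finish any deferred close of that same file, so its online
     * state isn't mistaken for the result of an unclean shutdown. */
    static void deferred_closes_synchronize(Set *deferred_closes, JournalFile *old_file) {
            if (deferred_closes && set_contains(deferred_closes, old_file)) {
                    set_remove(deferred_closes, old_file);
                    journal_file_close(old_file); /* synchronous */
            }
    }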
|
|
This adds a wait flag to journal_file_set_offline(); when it is false, the
offline is performed asynchronously in a separate thread.
When wait is true, if an asynchronous offline is already in-progress it is
restarted and waited for. Otherwise the offline is performed synchronously
without the use of a thread.
journal_file_set_online() cancels or waits for the asynchronous offline to
complete if in-flight, depending on where in the offline process the thread
happens to be. If the thread is in the fsync() phase, it is cancelled and
waiting is unnecessary. Otherwise, the thread is joined before proceeding.
A new offline_state member is added to JournalFile which is used via
atomic operations for communicating between the offline thread and the
journal_file_set_{offline,online}() functions.
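In outline (the real state set is richer, with cancel and restart
states; this is a simplified sketch, not the actual code):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>

    enum { OFFLINE_JOINED, OFFLINE_SYNCING, OFFLINE_DONE };

    typedef struct JournalFile {
            _Atomic int offline_state;
            pthread_t offline_thread;
    } JournalFile;

    static void *offline_worker(void *arg) {
            JournalFile *f = arg;

            atomic_store(&f->offline_state, OFFLINE_SYNCING);
            /* ... fsync() the data, commit the final header state ... */
            atomic_store(&f->offline_state, OFFLINE_DONE);
            return NULL;
    }

    static int journal_file_set_offline(JournalFile *f, bool wait) {
            if (!wait) /* asynchronous: initiate and return immediately */
                    return -pthread_create(&f->offline_thread, NULL, offline_worker, f);

            offline_worker(f); /* synchronous: no thread needed */
            return 0;
    }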
|
|
This should be handled fine now by .dir-locals.el, so there's no need to
carry that stuff in every file.
|
|
Prior to this change every journal append causes an ftruncate() for the
sake of inotify propagation of the mmap-based writes.
With this change the notification is deferred up to ~250ms, coalescing
any repeated journal writes during the deferred period into a single
ftruncate(). The ftruncate() call isn't free and doing it on every
append adds unnecessary overhead and latency in the journald event loop.
Introduces journal_file_enable_post_change_timer() which manages a
timer on the provided sd-event instance for scheduling coalesced
ftruncates. The ftruncate() behavior is unchanged unless
journal_file_enable_post_change_timer() is called on the JournalFile.
While not a tremendous improvement, profiling of systemd-journald event loop
latencies with the instrumentation introduced by 34b8751 showed that
coalescing the ftruncates was low-hanging fruit worth pursuing.
Note orders 12 and 13 shifting left into order 11 and order 6 dipping into
order 5:
Unmodified:
log2(us) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
-----------------------------------------------------------
[10685.414572] 0 0 0 0 38 602 61 2 290 60 1643 2554 13 1 4 1 0 0 1
[10690.415114] 0 0 0 0 0 646 54 7 309 44 2073 2148 17 1 3 0 0 0 1
[10695.415509] 0 0 0 0 1 650 73 3 324 37 2071 2270 9 0 0 1 0 1 0
[10700.416297] 0 0 0 0 0 659 50 4 318 38 2111 2152 6 0 1 0 0 1 1
[10705.417136] 0 0 0 0 2 660 48 4 320 38 2129 2146 12 1 1 0 0 1 1
[10710.489114] 0 0 0 0 0 673 38 3 321 37 1925 2339 7 0 0 0 0 1 1
[10715.489613] 0 0 0 0 3 656 64 8 317 48 2365 2007 7 0 0 0 0 0 1
Coalesced:
log2(us) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
-----------------------------------------------------------
[ 6169.161360] 0 0 0 1 24 786 54 11 389 24 4192 771 6 4 0 0 1 0 1
[ 6174.161705] 0 0 0 1 18 800 35 6 380 27 3977 893 3 1 0 0 1 0 1
[ 6179.162741] 0 0 0 1 28 768 51 4 391 16 3998 831 5 3 0 0 0 0 2
[ 6184.162856] 0 0 0 0 19 770 60 2 376 26 3795 1004 9 5 1 0 1 0 1
[ 6189.163279] 0 0 0 0 28 761 49 7 372 27 3729 1056 3 2 0 0 1 0 1
[ 6194.164255] 0 0 0 0 25 785 49 7 394 19 3996 908 6 3 2 0 0 0 1
[ 6199.164658] 0 0 0 0 29 797 35 5 389 18 3995 898 3 4 1 1 1 0 1
The remaining high-order delays are a result of the synchronous fsyncs in
systemd-journald, beyond the scope of this commit.
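The mechanism, sketched against the sd-event API (simplified types; the
real code also tracks the size to publish and disarms the timer):

    #include <systemd/sd-event.h>
    #include <stdint.h>
    #include <time.h>
    #include <unistd.h>

    #define POST_CHANGE_TIMER_INTERVAL_USEC (250U * 1000U) /* coalescing window */

    typedef struct JournalFile {
            int fd;
            uint64_t current_size;
            sd_event_source *post_change_timer;
    } JournalFile;

    /* Fires once per window: a single ftruncate() propagates all mmap
     * writes made during the window to inotify watchers. */
    static int post_change_timer_handler(sd_event_source *s, uint64_t usec, void *userdata) {
            JournalFile *f = userdata;

            (void) ftruncate(f->fd, f->current_size);
            return 0;
    }

    static int schedule_post_change(JournalFile *f, sd_event *e) {
            uint64_t now;

            if (f->post_change_timer) /* already armed: this append coalesces */
                    return 0;

            (void) sd_event_now(e, CLOCK_MONOTONIC, &now);
            return sd_event_add_time(e, &f->post_change_timer, CLOCK_MONOTONIC,
                                     now + POST_CHANGE_TIMER_INTERVAL_USEC, 0,
                                     post_change_timer_handler, f);
    }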
|
|
This is a continuation of the previous include sort patch, which
only sorted for .c files.
|
|
Let's make sure we handle compression errors properly, and don't
mistake an error for success.
Also, let's actually compress things if lz4 is enabled.
Fixes #1662.
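The crux of the fix, sketched against current liblz4 (older releases
spelled this LZ4_compress_limitedOutput(); note liblz4 reports failure
by returning zero or less, not a negative errno):

    #include <errno.h>
    #include <lz4.h>

    static int compress_blob_lz4(const void *src, int src_size,
                                 void *dst, int dst_alloc, int *ret_size) {
            int n;

            n = LZ4_compress_default(src, dst, src_size, dst_alloc);
            if (n <= 0)
                    return -ENOBUFS; /* an error, not a success of size 0 */
            if (n >= src_size)
                    return -ENOBUFS; /* no gain, caller stores uncompressed */

            *ret_size = n;
            return 0;
    }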
|
|
Implement a maximum limit on the number of journal files to keep around.
Enforcing such a limit is useful because viewing performance pays a heavy
penalty for each additional journal file that has to be interleaved. This
setting is now turned on by default, and set to 100.
Also, actually implement what 348ced909724a1331b85d57aede80a102a00e428
promised: use whatever we find on disk at startup as the lower bound on
how much disk space we can use. That commit introduced some provisions
to implement this, but never actually did.
This also adds "journalctl --vacuum-files=" to vacuum files on disk by
their number explicitly.
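For example, as a persistent setting and as a one-off clean-up,
respectively:

    # /etc/systemd/journald.conf
    [Journal]
    SystemMaxFiles=100

    $ journalctl --vacuum-files=100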
|
|
The way it is customary everywhere else in our sources.
|
|
When a new journal file is created we write the header first, then sync,
and only then create the data and field hash tables in it. That means
that to other processes the file might appear to have a valid header
but no data and field hash tables yet. Our reader code should be able
to deal with this.
With this change we no longer map the two hash tables right away after
opening a file for reading (because that would of course fail if the
objects are missing), but delay this until the first time we access
them. On top of that, when we want to look something up in the hash
tables and notice they aren't initialized yet, we consider them
empty.
This improves handling of some journal files reported in #487.
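In sketch form (simplified types; the specific "not present yet" error
code used here is an assumption):

    #include <errno.h>
    #include <stdint.h>

    typedef struct HashItem HashItem;
    typedef struct JournalFile {
            HashItem *data_hash_table; /* NULL until mapped on first access */
    } JournalFile;

    extern int journal_file_map_data_hash_table(JournalFile *f);

    static int lookup_data_object(JournalFile *f, uint64_t hash) {
            int r;

            if (!f->data_hash_table) {
                    r = journal_file_map_data_hash_table(f);
                    if (r == -EADDRNOTAVAIL)
                            return 0; /* table not created yet: treat as empty */
                    if (r < 0)
                            return r;
            }
            /* ... walk the hash bucket for `hash` ... */
            return 0;
    }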
|
|
Commit 668c965af "journal: skipping of exhausted journal files is bad if
direction changed" fixed a correctness issue, but it also significantly
limited the cases where the optimization that skips exhausted journal
files could apply.
As a result, some journalctl queries are much slower in v219 than in v218.
(e.g. queries where a "--since" cutoff should have quickly eliminated
older journal files from consideration, but didn't.)
If find_location_with_matches() finds no entry even in the initial
iteration, the journal file's location is not updated. This is fine,
except that:
- We must update at least f->last_direction. The optimization relies on
it. Let's separate that from journal_file_save_location() and update
it immediately after the direction checks.
- The optimization was conditional on "f->current_offset > 0", but it
would always be 0 in this scenario. This check is unnecessary for the
optimization.
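The first point amounts to something like this (a sketch; the helper
name is hypothetical):

    typedef enum { DIRECTION_DOWN, DIRECTION_UP } direction_t;

    typedef struct JournalFile {
            direction_t last_direction;
    } JournalFile;

    /* Split out of journal_file_save_location(): record the direction
     * immediately after the direction checks, even when no entry is
     * found and no location is saved, so the skip-exhausted-files
     * optimization can rely on f->last_direction next time. */
    static void journal_file_save_direction(JournalFile *f, direction_t d) {
            f->last_direction = d;
    }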
|
|
This removes the need for various header files to include the
(relatively heavyweight) util.h.
|
|
Our write pattern is quite awful for CoW file systems (btrfs...), as we
keep updating file parts at the beginning of the file. This results in
fragmented journal files. Hence: when rotating files, defragment them,
since at that point we know that no further write accesses will be made.
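On Linux this is a single ioctl; a sketch (harmless elsewhere, since on
non-btrfs file systems the call simply fails):

    #include <linux/btrfs.h>
    #include <sys/ioctl.h>

    /* Archived journal files are only read from now on, so ask the file
     * system to lay their extents out contiguously. */
    static void defrag_archived_journal(int fd) {
            (void) ioctl(fd, BTRFS_IOC_DEFRAG, NULL);
    }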
|
|
When we detect that the journal file we are writing to has been
deleted, rotate.
https://bugzilla.redhat.com/show_bug.cgi?id=1171719
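The detection can be as simple as checking the link count of the open
fd; a sketch (helper name hypothetical):

    #include <errno.h>
    #include <sys/stat.h>

    /* A deleted-but-still-open file has st_nlink == 0; report that as an
     * error so the caller rotates to a fresh journal file. */
    static int journal_fd_check_deleted(int fd) {
            struct stat st;

            if (fstat(fd, &st) < 0)
                    return -errno;

            return st.st_nlink == 0 ? -EIDRM : 0;
    }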
|
|
Even though we use fallocate() it appears that file systems like btrfs
will trigger SIGBUS in certain low-disk-space situations. We should
handle that: hence catch the signal, add the affected page to a list of
invalidated pages, and replace it with an empty memory area. After each
write, check whether SIGBUS was triggered, and consider the write
invalid if it was.
This should make journald a lot more robust with file systems where
fallocate() is not reliable, for example all CoW file systems
(btrfs...), where changing written data can fail with disk full errors.
https://bugzilla.redhat.com/show_bug.cgi?id=1045810
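The core of the trick, sketched (the real code keeps a queue of
invalidated pages and installs this with sigaction() and SA_SIGINFO;
error handling omitted):

    #include <signal.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void *last_invalidated_page; /* sketch: real code keeps a list */

    static void sigbus_handler(int sig, siginfo_t *si, void *ctx) {
            long ps = sysconf(_SC_PAGESIZE);
            void *page = (void *) ((uintptr_t) si->si_addr & ~((uintptr_t) ps - 1));
            (void) sig; (void) ctx;

            /* Remember which page failed, then back it with an anonymous
             * zero page so the faulting write can continue harmlessly. */
            last_invalidated_page = page;
            (void) mmap(page, ps, PROT_READ|PROT_WRITE,
                        MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED, -1, 0);
    }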
|
|
The current offset is sufficient information.
|
|
If from a previous iteration we know we are at the end of a journal
file, don't bother looking into the file again. This is complicated by
the fact that the EOF does not have to be permanent (think of
"journalctl -f"). So we also check if the number of entries in the
journal file changed.
This optimization has a similar effect as "journal: optimize iteration:
skip whole files behind current location" had.
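Sketched as a predicate (field names illustrative, not the actual ones):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct JournalFile {
            bool at_tail;            /* we previously iterated to the end */
            uint64_t n_entries_then; /* entry count when we got there */
            uint64_t n_entries_now;  /* current count from the header */
    } JournalFile;

    /* Skip the file only if we hit EOF before AND nothing has been
     * appended since -- EOF is not permanent under "journalctl -f". */
    static bool journal_file_may_skip(const JournalFile *f) {
            return f->at_tail && f->n_entries_then == f->n_entries_now;
    }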
|
|
When comparing the locations of candidate entries, we can rely on the
location information stored in struct JournalFile.
|
|
In next_beyond_location() when we find a candidate entry in a journal
file, save its location information in struct JournalFile.
The purpose of remembering the locations of candidate entries is to be
able to save work in the next iteration. This patch does only the
remembering part.
LOCATION_SEEK means the location identifies a candidate entry.
When a winner is picked from among candidates, it becomes
LOCATION_DISCRETE.
LOCATION_TAIL here signifies we've iterated the file to the end (or the
beginning in the case of reversed direction).
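Roughly, the location states described above:

    typedef enum LocationType {
            LOCATION_HEAD,     /* before the first entry */
            LOCATION_TAIL,     /* iterated to the end (or the beginning,
                                  if the direction was reversed) */
            LOCATION_DISCRETE, /* the saved entry was picked as the winner */
            LOCATION_SEEK      /* the saved entry is a candidate */
    } LocationType;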
|
|
In preparation for individual JournalFiles maintaining a location
of their own.
|
|
Its only caller is a test.
|
|
It has no other callers. It does not need to be in the header file.
|
|
The only user is sd_journal_enumerate_unique() and, as explained in
the previous commit (fed67c38e3 "journal: map objects to context set by
caller, not by actual object type"), the use of them there is now
superfluous. Let's remove them.
This reverts major parts of commits:
ae97089d49 journal: fix access to munmapped memory in
sd_journal_enumerate_unique
06cc69d44c sd-journal: fix sd_journal_enumerate_unique skipping values
Tested with an "--enable-debug" build and "journalctl --list-boots".
It gives the expected number of results. Additionally, if I then revert
the previous commit ("journal: map objects to context set by caller, not
to actual object type"), it crashes with SIGSEGV, as expected.
|
|
The order of entries may matter here. Oldest entries are evicted first
when the cache is full.
(Though I don't see anything to rejuvenate entries on cache hits.)
|
|
sd_journal_enumerate_unique will lock its mmap window to prevent it
from being released by calling mmap_cache_get with keep_always=true.
This call may return windows that are wider, but compatible with the
parameters provided to it.
This can result in a mismatch where the window to be released cannot
properly be selected, because we have more than one window matching the
parameters of mmap_cache_release. Therefore, introduce a release_cookie
to be used when releasing the window.
https://bugs.freedesktop.org/show_bug.cgi?id=79380
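The resulting API shape, roughly (signatures simplified, not verbatim):

    #include <stddef.h>
    #include <stdint.h>

    typedef struct MMapCache MMapCache;

    /* get() pins a window (keep_always) and hands back an opaque cookie
     * identifying exactly the window that was pinned... */
    int mmap_cache_get(MMapCache *m, int fd, uint64_t offset, size_t size,
                       int keep_always, void **ret, void **ret_release_cookie);

    /* ...and release() unpins via the cookie, instead of re-matching
     * fd/offset/size, which may fit more than one window. */
    int mmap_cache_release(MMapCache *m, int fd, void *release_cookie);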
|
|
mmap code crashes when attempting to map an object of zero size.
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=758392
https://bugs.freedesktop.org/show_bug.cgi?id=82894
|
|
They have different sizes on 32 bit, so they are really not interchangeable.
|
|
In practice this shouldn't make much difference, but
sometimes our headers might be newer, and we want to
test them.
|
|
Add liblz4 as an optional dependency when requested with --enable-lz4,
and use it in preference to liblzma for journal blob and coredump
compression. To retain backwards compatibility, XZ is used to
decompress old blobs.
Things will function correctly only with lz4-119.
Based on the benchmarks found on the web, lz4 seems to be the best
choice for "quick" compressors atm.
For pkg-config status, see http://code.google.com/p/lz4/issues/detail?id=135.
|
|
Before, journald would remove journal files until both the MaxUse= and
KeepFree= settings were satisfied. The first one depends (if set
automatically) on the size of the file system and is constant. But
the second one depends on the current use of the file system, so a
spike in disk usage would cause journald to delete journal files,
trying to reach a usage level that leaves 15% of the disk free. This
behaviour is surprising to users, who don't expect their logs to be
purged when disk usage goes above 85%, which on a large disk could
still be gigabytes away from full. In addition, attempting to keep 15%
free provides an attack vector: filling the disk sufficiently disposes
of almost all logs.
Instead, obey KeepFree= only as a limit on adding additional files.
When replacing old files with new ones, ignore KeepFree=. This means
that if journal disk usage once reached a high point which at some
later point starts to violate the KeepFree= constraint, journald will
not add files to go above this point, but it will stay (slightly)
below it. When journald is restarted, it forgets the previous maximum
usage value and sets the limit based on the current usage, so if the
disk keeps filling up, journald might use one journal-file-size less
on each restart, if restarts happen just after rotation. This seems
like a reasonable compromise between implementation complexity and
robustness.
|
|
sd_journal_enumerate_unique() needs to keep a reference to an object
while comparing it with possibly duplicate objects in other files.
Because the size of the mmap cache is limited, with enough files and
objects to compare against, at some point the object being compared
would be munmapped, resulting in a segmentation fault.
Fix this issue by turning keep_always into a reference count that can
be increased and decreased. Other callers which set keep_always=true
are unmodified: their references are never released, but are simply
ignored when the whole file is closed, which happens at some point
anyway. sd_journal_enumerate_unique() takes such a reference and
releases it later on.
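Sketch of the change to the window bookkeeping:

    #include <stdbool.h>

    typedef struct Window {
            unsigned keep_always; /* was a bool, now a reference count */
    } Window;

    static void window_pin(Window *w) {
            w->keep_always++;
    }

    static void window_unpin(Window *w) {
            if (w->keep_always > 0)
                    w->keep_always--;
    }

    /* The cache may only reclaim windows nobody is still comparing against. */
    static bool window_may_reclaim(const Window *w) {
            return w->keep_always == 0;
    }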
|
|
|
|
"make check-api-unused" informs us about code that is not used anymore
or that is exported but only used internally. Fix these all over the
place.
|
|
The fields in JournalFile are moved around to avoid wasting
7 bytes because of alignment.
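Not the actual JournalFile members, but the principle, on a 64-bit
target:

    #include <stdbool.h>
    #include <stdint.h>

    struct wasteful {
            bool a;     /* 1 byte + 7 bytes of padding to align x */
            uint64_t x;
            bool b;     /* 1 byte + 7 bytes of padding to align y */
            uint64_t y;
    };                  /* sizeof == 32 */

    struct compact {
            uint64_t x; /* alignment-hungry members first */
            uint64_t y;
            bool a;     /* small members grouped at the end: */
            bool b;     /* 2 bytes + 6 bytes of tail padding */
    };                  /* sizeof == 24 */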
|
|
I'm assuming that it's fine if a _const_ or _pure_ function
calls assert(). It is assumed that the assert won't trigger,
and even if it does, it can only trigger on the first call
with a given set of parameters, and we don't care if the
compiler reorders such calls.
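For instance, something like this remains a legitimate _pure_ candidate
under that reasoning:

    #include <assert.h>

    /* The result depends only on the argument; the assert is assumed
     * never to fire, and reordering calls changes nothing material. */
    __attribute__((pure)) static int square_nonneg(int x) {
            assert(x >= 0);
            return x * x;
    }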
|
|
Add an option to force journal syncs with fsync(). The default timeout
is 5min. The interval is configured via the SyncIntervalSec= option in
journald.conf. Synced journal files are marked as OFFLINE.
A manual sync can be performed by sending SIGUSR1.
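For example:

    # /etc/systemd/journald.conf
    [Journal]
    SyncIntervalSec=5min

    # force a sync right now:
    $ kill -USR1 "$(pidof systemd-journald)"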
|
|
When traversing entry array chains for a bisection or for retrieving an
item by index we previously always started at the beginning of the
chain. Since we tend to look at the same chains repeatedly, let's cache
where we have been the last time, and maybe we can skip ahead with this
the next time.
This turns most bisections and index lookups from O(log(n)*log(n)) into
O(log(n)). More importantly however, we seek around on disk much less,
which is good to reduce buffer cache and seek times on rotational disks.
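The cache entry is small; per chain it remembers roughly this (fields
approximate, not the exact layout):

    #include <stdint.h>

    /* Keyed by the offset of the chain's head array; a lookup landing
     * at or after the cached position resumes there instead of
     * rewinding to the head of the chain. */
    typedef struct ChainCacheItem {
            uint64_t first; /* offset of the first array in the chain (key) */
            uint64_t array; /* offset of the array we stopped at last time */
            uint64_t begin; /* index of that array's first entry in the chain */
    } ChainCacheItem;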
|
|
Add the ability to list the values a specified field can take in all
entries of the journal.
The new 'unique' API allows listing all unique field values that a field
specified by a field name can take in all entries of the journal. This
allows answering queries such as "What units logged to the journal?",
"What hosts have logged to the journal?", and "Which boot IDs have
logged to the journal?".
Ultimately this allows implementation of tools similar to lastlog based
on journal data.
Note that listing these field values will not work for journal files
created with older journald, as the field values are not indexed in
older files.
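For example, answering the first of those questions with the new API:

    #include <stdio.h>
    #include <systemd/sd-journal.h>

    /* List every unit that ever logged to the journal. */
    int main(void) {
            sd_journal *j;
            const void *d;
            size_t l;

            if (sd_journal_open(&j, 0) < 0)
                    return 1;

            if (sd_journal_query_unique(j, "_SYSTEMD_UNIT") < 0) {
                    sd_journal_close(j);
                    return 1;
            }

            SD_JOURNAL_FOREACH_UNIQUE(j, d, l)
                    printf("%.*s\n", (int) l, (const char *) d);

            sd_journal_close(j);
            return 0;
    }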
|
|
This also enables time-based rotation (but not vacuuming) after one month,
so that no more than one month of journal history is lost at a time by
vacuuming.
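In journald.conf terms:

    [Journal]
    MaxFileSec=1month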
|
|
Let's clean up our terminology a bit. New terminology:
FSS = Forward Secure Sealing
FSPRG = Forward Secure Pseudo-Random Generator
FSS is the combination of FSPRG and an HMAC.
Sealing = the process of adding authentication tags to the journal.
Verification = the process of checking the authentication tags of the journal.
Sealing Key = the key used for adding authentication tags to the journal.
Verification Key = the key used for checking the authentication tags of the journal.
Key pair = the pair of Sealing Key and Verification Key.
Internally, the Sealing Key is the combination of the FSPRG State plus
change interval/start time.
Internally, the Verification Key is the combination of the FSPRG Seed
plus change interval/start time.
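In journalctl terms: --setup-keys generates the key pair, storing the
Sealing Key on disk and printing the Verification Key; --verify checks
the tags:

    # journalctl --setup-keys
    # journalctl --verify --verify-key=<Verification Key>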
|