|
Now that we bump rlimit, we do not really know how many files
we can open. Remove the check.
https://bugzilla.redhat.com/show_bug.cgi?id=1179980
|
|
When there are a lot of split-out journal files, we might run out of fds
quicker than we want. Hence: bump RLIMIT_NOFILE to 16K if possible.
Do this also for journalctl. On Fedora the soft RLIMIT_NOFILE defaults to
1K and the hard limit to 4K for normal user processes; this code hence
bumps the soft limit up to 4K for users.
https://bugzilla.redhat.com/show_bug.cgi?id=1179980
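A minimal sketch of the bump (the helper name and fallback behavior are
illustrative, not necessarily what the commit implements):

    #include <errno.h>
    #include <sys/resource.h>

    static int bump_rlimit_nofile(rlim_t target) {
            struct rlimit rl;

            if (getrlimit(RLIMIT_NOFILE, &rl) < 0)
                    return -errno;

            /* Unprivileged processes may raise the soft limit only up
             * to the hard limit; aim for the target, otherwise settle
             * for the hard cap. */
            rl.rlim_cur = rl.rlim_max < target ? rl.rlim_max : target;

            if (setrlimit(RLIMIT_NOFILE, &rl) < 0)
                    return -errno;

            return 0;
    }

With a 16K target this yields 16K wherever the hard limit allows it, and
4K for ordinary users under Fedora's default limits.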
|
|
fd_setcrtime()
|
|
btrfs' COW logic results in heavily fragmented journal files, which is
detrimental to performance. Hence, turn off COW for journal files as we
create them.
Turning off COW comes at the cost of data integrity guarantees, but this
should be acceptable, given that we do our own checksumming and
generally have a pretty conservative write pattern.
Also see discussion on linux-btrfs:
http://www.spinics.net/lists/linux-btrfs/msg41001.html
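A hedged sketch of what turning off COW at creation time can look like,
using the FS_NOCOW_FL inode flag (helper name illustrative; the flag
only takes effect while the file is still empty):

    #include <errno.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>

    static int fd_disable_cow(int fd) {
            unsigned flags;

            if (ioctl(fd, FS_IOC_GETFLAGS, &flags) < 0)
                    return -errno;

            if (flags & FS_NOCOW_FL)
                    return 0;

            flags |= FS_NOCOW_FL;
            if (ioctl(fd, FS_IOC_SETFLAGS, &flags) < 0)
                    return -errno;

            return 0;
    }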
|
|
|
|
Our write pattern is quite awful for CoW file systems (btrfs, ...), as we
keep updating file parts at the beginning of the file. This results in
fragmented journal files. Hence: when rotating files, defragment them,
since at that point we know that no further write accesses will be made.
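In sketch form, defragmenting on rotation is a single btrfs ioctl on the
rotated file (helper name illustrative; on non-btrfs file systems the
call simply fails and can be ignored):

    #include <errno.h>
    #include <sys/ioctl.h>
    #include <linux/btrfs.h>

    static int fd_defrag(int fd) {
            if (ioctl(fd, BTRFS_IOC_DEFRAG, NULL) < 0)
                    return -errno;
            return 0;
    }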
|
|
LOG_DEBUG is already a log level; there is no need to use LOG_PRI(),
which is for filtering out the facility.
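To illustrate: LOG_PRI() only masks out the facility bits, so applying
it to a bare level is a no-op:

    #include <assert.h>
    #include <syslog.h>

    int main(void) {
            assert(LOG_PRI(LOG_USER | LOG_DEBUG) == LOG_DEBUG); /* facility stripped */
            assert(LOG_PRI(LOG_DEBUG) == LOG_DEBUG);            /* nothing to strip */
            return 0;
    }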
|
|
Making use of the fd storage capability from the previous commit, allow
restarting journald by serializing stream state to /run, and pushing open
fds to PID 1.
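The push to PID 1 goes through the sd_pid_notify_with_fds() API added
for the fd store; a minimal sketch (wrapper name and arguments are
illustrative):

    #include <systemd/sd-daemon.h>

    static int push_stream_fds(const int *fds, unsigned n_fds) {
            /* FDSTORE=1 asks the service manager to keep the fds
             * across a service restart. */
            return sd_pid_notify_with_fds(0, 0, "FDSTORE=1", fds, n_fds);
    }

On the next start, the stored fds are handed back alongside any socket
activation fds via sd_listen_fds().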
|
|
|
|
deleted, rotate
https://bugzilla.redhat.com/show_bug.cgi?id=1171719
|
|
journal file headers
Since the file headers might now be replaced by zeroed pages due to
SIGBUS handling, we should make sure we don't end up dividing by zero:
values read from journal file headers are currently not rechecked for
changes before use.
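The shape of such a check, as a sketch (struct and field names are
illustrative, not the actual journal header layout):

    #include <errno.h>
    #include <stdint.h>

    struct mapped_header {
            uint64_t arena_size;
            uint64_t n_entries;
    };

    static int average_entry_size(const struct mapped_header *h, uint64_t *ret) {
            uint64_t n = h->n_entries;   /* read once, validate the copy */

            if (n == 0)
                    return -EBADMSG;     /* page may have been zeroed under us */

            *ret = h->arena_size / n;
            return 0;
    }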
|
|
arguments should be prefixed with "arg_"
|
|
This makes them robust against truncation. Ideally we'd export this as
an API, but given how messy SIGBUS handling is and the uncertain
ownership logic of signal handlers, we should not do this (unless libc
one day invents a scheme for sanely installing SIGBUS handlers for
specific memory areas only). However, for now we can still make all our
own tools robust.
Note that external tools will only have read access to the journal
anyway, where SIGBUS is much more unlikely, given that only writes are
subject to disk-full problems.
|
|
|
|
|
|
Even though we use fallocate() it appears that file systems like btrfs
will trigger SIGBUS in certain low-disk-space situations. We should
handle that; hence catch the signal, add the affected page to a list of
invalidated pages, and replace it with an empty memory area. After each
write, check whether SIGBUS was triggered, and consider the write
invalid if it was.
This should make journald a lot more robust on file systems where
fallocate() is not reliable, for example all CoW file systems
(btrfs, ...), where changing already-written data can fail with
disk-full errors.
https://bugzilla.redhat.com/show_bug.cgi?id=1045810
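A condensed sketch of the mechanism (names and the fixed-size list are
illustrative; the real implementation needs more care around the mmap
cache and concurrency):

    #include <signal.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define N_INVALIDATED 64

    static long page_size_cached;
    static void *invalidated[N_INVALIDATED];
    static volatile sig_atomic_t n_invalidated = 0;

    static void sigbus_handler(int sig, siginfo_t *si, void *ucontext) {
            void *page = (void*) ((uintptr_t) si->si_addr &
                                  ~((uintptr_t) page_size_cached - 1));

            /* Map an anonymous zero page over the faulting address so
             * execution can continue, and remember it as invalidated. */
            (void) mmap(page, page_size_cached, PROT_READ|PROT_WRITE,
                        MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED, -1, 0);

            if (n_invalidated < N_INVALIDATED)
                    invalidated[n_invalidated++] = page;
    }

    static void sigbus_install(void) {
            struct sigaction sa = {
                    .sa_sigaction = sigbus_handler,
                    .sa_flags = SA_SIGINFO,
            };

            page_size_cached = sysconf(_SC_PAGESIZE);
            sigaction(SIGBUS, &sa, NULL);
    }

After each write, the write path would scan invalidated[] for pages
overlapping the written region and treat the write as failed on a match.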
|
|
https://github.com/vlajos/misspell_fixer
https://github.com/torstehu/systemd/commit/b6fdeb618cf2f3ce1645b3315f15f482710c7ffa
Thanks to Torstein Husebo <torstein@huseboe.net>.
|
|
If OBJECT_PID= came as the last field, we would not reallocate the iovec
to a bigger size, and would fail an assertion later in
dispatch_message_real().
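The shape of the fix, sketched with illustrative names: grow the iovec
before every append, so a field that happens to come last still has room:

    #include <errno.h>
    #include <stdlib.h>
    #include <sys/uio.h>

    static int iovec_append(struct iovec **iov, size_t *n, size_t *n_allocated,
                            void *data, size_t len) {
            if (*n >= *n_allocated) {
                    size_t m = *n_allocated * 2 + 16;
                    struct iovec *t = realloc(*iov, m * sizeof(struct iovec));
                    if (!t)
                            return -ENOMEM;
                    *iov = t;
                    *n_allocated = m;
            }

            (*iov)[(*n)++] = (struct iovec) { data, len };
            return 0;
    }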
|
|
https://bugzilla.redhat.com/show_bug.cgi?id=1177184
|
|
|
|
There is a lot of cleanup that will have to happen to turn on
-fstrict-aliasing, but I think our code should be "correct" with respect
to the rule.
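An example of the kind of code the rule forbids, and the memcpy() idiom
that is well-defined under -fstrict-aliasing:

    #include <stdint.h>
    #include <string.h>

    static uint32_t float_bits_bad(float f) {
            return *(uint32_t*) &f;       /* violates strict aliasing */
    }

    static uint32_t float_bits_good(float f) {
            uint32_t u;
            memcpy(&u, &f, sizeof u);     /* well-defined; compilers optimize it away */
            return u;
    }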
|
|
EOF is meaningless if the direction of iteration changes.
Move the EOF optimization under the direction check.
This fixes test-journal-interleaving for me.
Thanks to Filipe Brandenburger for telling me about the failure.
|
|
next_with_matches() is odd in that its "uint64_t *offset" parameter is
both input and output; in other functions it is purely for output.
The function is called from two places in next_beyond_location(). In
both of them "&cp" is used as the argument, and in both cases cp is
guaranteed to equal f->current_offset.
Let's just have next_with_matches() ignore "*offset" on input and
operate on f->current_offset.
I did not investigate why, but this makes my usual benchmark run
reproducibly faster:
$ time ./journalctl --since=2014-06-01 --until=2014-07-01 > /dev/null
real 0m4.032s
user 0m3.896s
sys 0m0.135s
(Compare to preceding commit, where real was 4.4s.)
|
|
I accidentally broke the detection of duplicate entries in 7943f42275
"journal: optimize iteration by returning previously found candidate
entry".
When we have a known location of a candidate entry, we must not return
from next_beyond_location() immediately. We must go through the
duplicates detection to make sure the candidate differs from the
already iterated entry.
This fix slows down iteration a bit, but it's still faster than it
was before the rework.
$ time ./journalctl --since=2014-06-01 --until=2014-07-01 > /dev/null
real 0m4.448s
user 0m4.298s
sys 0m0.149s
(Compare with results from commit 7943f42275, where real was 5.3s before
the rework.)
|
|
Now that journal_file_next_entry() does not need a pointer to the
current object, next_with_matches() does not need it either.
|
|
The current offset is sufficient information.
|
|
In next_beyond_location(), when the JournalFile's location type is
LOCATION_SEEK, it means there's nothing to do, because we already have
the location of the candidate entry. Do an early return. Note that
next_beyond_location() no longer guarantees on return that the entry is
mapped, but previous patches made sure the caller does not care.
This optimization is at least as good as "journal: optimize iteration:
skip files that cannot improve current candidate entry" was.
Timing results on my workstation, using:
$ time ./journalctl -q --since=2014-06-01 --until=2014-07-01 > /dev/null
Before "Revert "journal: optimize iteration: skip files that cannot
improve current candidate entry":
real 0m5.349s
user 0m5.166s
sys 0m0.181s
Now:
real 0m3.901s
user 0m3.724s
sys 0m0.176s
|
|
If from a previous iteration we know we are at the end of a journal
file, don't bother looking into the file again. This is complicated by
the fact that the EOF does not have to be permanent (think of
"journalctl -f"). So we also check if the number of entries in the
journal file changed.
This optimization has a similar effect as "journal: optimize iteration:
skip whole files behind current location" had.
|
|
offset is redundant, because the caller can rely on f->current_offset.
The object pointer the function saves in *ret is thrown away by the caller.
|
|
The file's current_offset is already updated at this point, so let's use
it.
|
|
When comparing the locations of candidate entries, we can rely on the
location information stored in struct JournalFile.
|
|
set_location() is called from real_journal_next() when a winning entry
has been picked from among the candidates in journal files.
The location type is always set to LOCATION_DISCRETE. No need to pass
it as a parameter.
The per-JournalFile location information is already updated at this
point. No need for having the direction and offset here.
|
|
In next_beyond_location() when we find a candidate entry in a journal
file, save its location information in struct JournalFile.
The purpose of remembering the locations of candidate entries is to be
able to save work in the next iteration. This patch does only the
remembering part.
LOCATION_SEEK means the location identifies a candidate entry.
When a winner is picked from among candidates, it becomes
LOCATION_DISCRETE.
LOCATION_TAIL here signifies we've iterated the file to the end (or the
beginning in the case of reversed direction).
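In sketch form, the per-file location state described above (the real
enum may differ in details):

    typedef enum LocationType {
            LOCATION_HEAD,     /* positioned before the first entry */
            LOCATION_TAIL,     /* iterated past the last entry in this direction */
            LOCATION_DISCRETE, /* on the entry that won the last iteration */
            LOCATION_SEEK,     /* on a candidate entry not yet returned */
    } LocationType;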
|
|
|
|
In preparation for individual JournalFiles maintaining a location
of their own.
|
|
This reverts commit b7c88ab8cc7d55a43450bf3dea750f95f2e910d6.
This optimization will be made redundant by the following patches.
|
|
candidate entry"
This reverts commit f8b5a3b75fb55f0acb85c21424b3893c822742e9.
This optimization will be made redundant by the following patches.
|
|
Its only caller is a test.
|
|
|
|
try_context() is such a hot path that the hashmap lookup is expensive.
The number of contexts is small - it is the number of object types.
Using a hashmap is overkill. A plain array will do.
Before:
$ time ./journalctl --since=2014-06-01 --until=2014-07-01 > /dev/null
real 0m9.445s
user 0m9.228s
sys 0m0.213s
After:
$ time ./journalctl --since=2014-06-01 --until=2014-07-01 > /dev/null
real 0m5.438s
user 0m5.266s
sys 0m0.170s
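The gist of the change, sketched with illustrative names and bound: the
context id is small and dense, so a direct array index replaces the
hashmap lookup in the hot path:

    #define MMAP_CACHE_MAX_CONTEXTS 8

    typedef struct Context Context;

    struct MMapCache {
            /* before: Hashmap *contexts, keyed by context id */
            Context *contexts[MMAP_CACHE_MAX_CONTEXTS];
    };

    static Context* cache_get_context(struct MMapCache *m, unsigned id) {
            return id < MMAP_CACHE_MAX_CONTEXTS ? m->contexts[id] : NULL;
    }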
|
|
This never had any callers. Contexts are freed when the MMapCache is
freed.
|
|
|
|
|
|
Note that numbers 0 and -1 are both replaced with OBJECT_UNUSED,
because they are treated the same everywhere (e.g. type_to_context()
translates them both to 0).
|
|
If type==0 and a non-NULL object were given as arguments to
journal_file_hmac_put_object(), its object type check would fail and it
would return -EBADMSG.
All existing callers use either a positive type or -1. Still, for
behavior consistency with journal_file_move_to_object() let's allow
type 0 to pass.
|
|
It has no other callers. It does not need to be in the header file.
|
|
The only user is sd_journal_enumerate_unique() and, as explained in
the previous commit (fed67c38e3 "journal: map objects to context set by
caller, not by actual object type"), the use of them there is now
superfluous. Let's remove them.
This reverts major parts of commits:
ae97089d49 journal: fix access to munmapped memory in
sd_journal_enumerate_unique
06cc69d44c sd-journal: fix sd_journal_enumerate_unique skipping values
Tested with an "--enable-debug" build and "journalctl --list-boots".
It gives the expected number of results. Additionally, if I then revert
the previous commit ("journal: map objects to context set by caller, not
to actual object type"), it crashes with SIGSEGV, as expected.
|
|
When the caller of journal_file_move_to_object() specifies type==0,
the object header is at first mapped in context 0. Then after the header
is checked, the whole object is mapped in a context determined by
the actual object type (which is not even range-checked using
type_to_context()). This looks wrong. It should map in the
caller-specified context.
An old comment in sd_journal_enumerate_unique() supports this view:
/* We do not use the type context here, but 0 instead,
* so that we can look at this data object at the same
* time as one on another file */
Clearly the expectation was that the data object will remain mapped
in context 0 without being pushed away by mapping other objects in
context OBJECT_DATA.
I suspect that this was the real bug that got fixed by ae97089d49
"journal: fix access to munmapped memory in sd_journal_enumerate_unique".
In other words, journal_file_object_keep/release are superfluous after
applying this patch.
|
|
This is useful for exposing unsafe access to mmapped objects after
the context that they were mapped in was already moved.
For example:
    journal_file_move_to_object(f1, OBJECT_DATA, p1, &o1);
    journal_file_move_to_object(f2, OBJECT_DATA, p2, &o2);
    t = o1->object.type; /* this usually works, but is unsafe */
|
|
|