Age | Commit message | Author |
|
The journalctl man page says: "-m, --merge Show entries interleaved from all
available journals, including remote ones.", but the current version of
journalctl doesn't live up to this promise. This patch simply adds
"/var/log/journal/remote" to the search path if the --merge flag is used.
Should fix issue #3618.
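For illustration, a minimal sketch of the idea with hypothetical helper names
(the real sd-journal internals differ): when merging, the directory that
systemd-journal-remote writes to by default is scanned as well.

/* Hypothetical sketch only: add_directory() is a made-up helper, not the
 * actual sd-journal code. */
static int add_search_paths(sd_journal *j, bool merge) {
        static const char *paths[] = {
                "/run/log/journal",
                "/var/log/journal",
        };
        int r;

        for (size_t i = 0; i < sizeof(paths)/sizeof(paths[0]); i++) {
                r = add_directory(j, paths[i]);
                if (r < 0)
                        return r;
        }

        if (merge)
                /* Journals received via systemd-journal-remote land here. */
                return add_directory(j, "/var/log/journal/remote");

        return 0;
}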
|
|
We don't use a plural in the name of any other -util file, and this
inconsistency trips me up every time I try to type this file name
from memory. "formats-util" is even hard to pronounce.
|
|
This makes strjoin and strjoina more similar and avoids the useless final
argument.
spatch -I . -I ./src -I ./src/basic -I ./src/basic -I ./src/shared -I ./src/shared -I ./src/network -I ./src/locale -I ./src/login -I ./src/journal -I ./src/journal -I ./src/timedate -I ./src/timesync -I ./src/nspawn -I ./src/resolve -I ./src/resolve -I ./src/systemd -I ./src/core -I ./src/core -I ./src/libudev -I ./src/udev -I ./src/udev/net -I ./src/udev -I ./src/libsystemd/sd-bus -I ./src/libsystemd/sd-event -I ./src/libsystemd/sd-login -I ./src/libsystemd/sd-netlink -I ./src/libsystemd/sd-network -I ./src/libsystemd/sd-hwdb -I ./src/libsystemd/sd-device -I ./src/libsystemd/sd-id128 -I ./src/libsystemd-network --sp-file coccinelle/strjoin.cocci --in-place $(git ls-files src/*.c)
git grep -e '\bstrjoin\b.*NULL' -l|xargs sed -i -r 's/strjoin\((.*), NULL\)/strjoin(\1)/'
This might have missed a few cases (spatch has a really hard time dealing
with _cleanup_ macros), but that's no big issue; they can always be fixed
later.
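To illustrate the call-site change performed by the coccinelle/sed pass above
(path and variable name are just examples):

/* Example call site, before and after the conversion: the trailing NULL
 * sentinel is dropped. */
static char *make_journal_path(const char *machine_id) {
        /* before the conversion:
         *     return strjoin("/var/log/journal/", machine_id, NULL);
         * after it, the sentinel is gone: */
        return strjoin("/var/log/journal/", machine_id);
}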
|
|
|
|
The directory argument given to sd_j_o_d was ignored when SD_JOURNAL_OS_ROOT
was specified, and directories relative to the root of the host file system
were used instead. With that flag, sd_j_o_d should do the same as
sd_j_open_container: use the path as a "prefix", i.e. the directory relative to
which everything happens.
Instead of touching sd_j_o_d, journal_new is fixed to do what sd_j_o_c
was doing, and treat the specified path as the prefix when SD_JOURNAL_OS_ROOT is
specified.
|
|
There is no reason not to. This makes journalctl -D ... --system work, which is
useful, for example, when viewing files from a deactivated container.
|
|
… in preparation for future changes.
|
|
the /) (#3934)
Fixes #3927.
|
|
|
|
|
|
Let's document the call as deprecated, since it doesn't properly cover
containers with directories that aren't visible to the host.
|
|
With this change a new flag SD_JOURNAL_OS_ROOT is introduced. If it is specified
when opening the journal with the per-directory calls (specifically:
sd_journal_open_directory() and sd_journal_open_directory_fd()), the passed
directory is assumed to be the root directory of an OS tree, and the journal
files are searched for in /var/log/journal and /run/log/journal relative to it.
This is useful to allow usage of sd-journal on file descriptors returned by the
OpenRootDirectory() call of machined.
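A minimal usage sketch of the flag as described above, assuming a container
root available as a plain path:

/* Open the journal of an OS tree rooted at "root". With SD_JOURNAL_OS_ROOT,
 * sd-journal looks for journal files in var/log/journal and run/log/journal
 * below that directory. */
#include <systemd/sd-journal.h>

static int open_os_tree_journal(const char *root, sd_journal **ret) {
        return sd_journal_open_directory(ret, root, SD_JOURNAL_OS_ROOT);
}

The same flag can be combined with the fd-based call when the directory comes
from machined's OpenRootDirectory(), as the commit message describes.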
|
|
Also, expose this via the "journalctl --file=-" syntax for STDIN. This feature
remains undocumented though, as it is probably not too useful in real life:
it still requires fds that support mmapping and seeking, i.e. it does not work
for pipes, which is what reading from STDIN is most commonly used for.
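Conceptually, "--file=-" boils down to handing STDIN's file descriptor to
sd-journal; assuming the public entry point for fd-based opening is
sd_journal_open_files_fd(), a sketch:

/* Sketch only: assumes sd_journal_open_files_fd() is the public call for
 * opening journal files by fd. The fd must refer to a seekable, mmap-able
 * file, so this fails if stdin is a pipe. */
#include <unistd.h>
#include <systemd/sd-journal.h>

static int open_journal_from_stdin(sd_journal **ret) {
        int fd = STDIN_FILENO;

        return sd_journal_open_files_fd(ret, &fd, 1, 0);
}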
|
|
|
|
Let's also collect errors returned by readdir() into our set of errors, like we
do for all other errors from journal files.
|
|
This is slightly nicer, since we actually watch the directories we open and
enumerate. However, primarily this is preparation for support for opening
journal files by fd without specifying any path, to be added in a later
commit.
|
|
|
|
|
|
Throughout the tree there's spurious use of spaces separating ++ and --
operators from their respective operands. Make ++ and -- operator use
consistent with the majority of existing uses; discard the spaces.
|
|
When we rotate journals, we must mark the current one offline and close it,
but we don't generally need to wait for this to complete.
Instead, we'll initiate an asynchronous offline via
journal_file_set_offline(oldfile, false), and add the file to a
per-server set of deferred closes to be closed later when they
won't block.
There's one complication, however: journal_file_open() via
journal_file_verify_header() assumes that any writable journal in the
online state is the product of an unclean shutdown or other form of
corruption.
Thus there's a need for journal_file_open() to be aware of deferred
closes and synchronize with their completion when opening preexisting
journals for writing. To facilitate this the deferred closes set is
supplied to the journal_file_open() function where the deferred closes
may be closed synchronously before verifying the header in such
circumstances.
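The rotation side of this looks roughly as follows, using the names from the
commit message (illustrative, not the actual journald code):

/* Illustrative sketch: initiate a non-blocking offline and park the old file
 * in the per-server set of deferred closes; details differ in the real code. */
static int rotate_journal_file(Server *s, JournalFile **f) {
        int r;

        /* Start offlining asynchronously; do not wait for completion. */
        r = journal_file_set_offline(*f, /* wait= */ false);
        if (r < 0)
                return r;

        /* Close it later, once offlining has finished and closing won't block. */
        r = set_put(s->deferred_closes, *f);
        if (r < 0)
                return r;

        *f = NULL; /* ownership handed to the deferred-closes set */
        return 0;
}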
|
|
|
|
This should be handled fine now by .dir-locals.el, so there's no need to carry
that stuff in every file.
|
|
|
|
No need to store the object and offset data if we don't actually need it ever.
|
|
This adds two new calls to get the list of all journal field names currently in use.
This is the low-level support needed to implement the feature requested in #2176 in a more optimized way.
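Assuming the new calls are sd_journal_enumerate_fields() and
sd_journal_restart_fields() (together with the SD_JOURNAL_FOREACH_FIELD
convenience macro), usage looks roughly like this:

/* Print every field name currently in use, e.g. MESSAGE, _PID, _COMM, ...
 * Function and macro names are assumed from the released sd-journal API. */
#include <stdio.h>
#include <systemd/sd-journal.h>

int main(void) {
        sd_journal *j;
        const char *field;
        int r;

        r = sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY);
        if (r < 0)
                return 1;

        SD_JOURNAL_FOREACH_FIELD(j, field)
                printf("%s\n", field);

        sd_journal_close(j);
        return 0;
}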
|
|
Also introduce sd_journal_has_runtime_files() and
sd_journal_has_persistent_files() to the public API. These functions
can be used to easily find out if the open journal files are runtime
and/or persistent.
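For example, a caller could use them like this right after opening the journal:

/* Check which kinds of journal files were found. Both calls return a
 * positive value if files of that kind are present. */
#include <stdio.h>
#include <systemd/sd-journal.h>

static int report_journal_kinds(void) {
        sd_journal *j;
        int r;

        r = sd_journal_open(&j, 0);
        if (r < 0)
                return r;

        if (sd_journal_has_persistent_files(j) > 0)
                puts("persistent journal files present (/var/log/journal)");
        if (sd_journal_has_runtime_files(j) > 0)
                puts("runtime journal files present (/run/log/journal)");

        sd_journal_close(j);
        return 0;
}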
|
|
The return value was used directly in an if, so an error was treated
as success; we need to bail out instead. An error should not happen,
unless we have a compression/decompression mismatch, so output a debug
line.
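A self-contained illustration of that bug class (all names made up), showing
why the return value must not be used directly as a boolean:

/* A negative errno-style return is truthy in C, so "if (fn(...))" treats
 * errors as success; the fix is to store the result and bail out on < 0. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

static int blob_matches(const char *blob, const char *needle) {
        if (!blob)
                return -EINVAL;        /* e.g. decompression failed */
        return strncmp(blob, needle, strlen(needle)) == 0;
}

int main(void) {
        int r;

        if (blob_matches(NULL, "x"))   /* wrong: -EINVAL enters the branch */
                puts("bogus match");

        r = blob_matches(NULL, "x");
        if (r < 0)                     /* right: treat the error as an error */
                fprintf(stderr, "comparison failed: %s\n", strerror(-r));
        else if (r > 0)
                puts("match");

        return 0;
}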
|
|
When we enumerate journal files and encounter an invalid one, remember
which one it was, and show it to the user.
Note the possibly slightly surprising logic here: we store only one path
per error code. This means we show all error kinds but not every actual
error we encounter. This has the benefit of not requiring us to keep a
potentially unbounded list of errors with their sources around, but can
still provide a pretty complete overview on the errors we encountered.
Fixes #1669.
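One way to implement the "one path per error code" bookkeeping described above
is a hashmap keyed by the errno value; a hedged sketch using systemd's Hashmap
(function and variable names are illustrative, not the actual code):

/* Illustrative only: remember the first file path seen for each distinct
 * error code, keeping the map bounded by the number of error kinds. */
static int remember_journal_error(Hashmap **errors, int error, const char *path) {
        _cleanup_free_ char *copy = NULL;
        int r;

        assert(error < 0);

        r = hashmap_ensure_allocated(errors, NULL);
        if (r < 0)
                return r;

        /* Only one path per error code. */
        if (hashmap_get(*errors, INT_TO_PTR(error)))
                return 0;

        copy = strdup(path);
        if (!copy)
                return -ENOMEM;

        r = hashmap_put(*errors, INT_TO_PTR(error), copy);
        if (r < 0)
                return r;

        copy = NULL; /* ownership moved into the hashmap */
        return 0;
}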
|
|
- Always print a debug log message about files and directories we cannot
open right when it happens, instead of in the caller, thus reducing the
number of places where we need to generate the debug message.
- Always push the errors we encounter immediately into the error set,
when we run into them, instead of in the caller. Thus, we never forget
to push them in.
- Use stack instead of heap memory where we can.
- Make remove_file() void, since it cannot fail anyway and always
returned 0.
- Make local machine check of journal directories explicit in a
function, to make things more readable.
- Port all directory listing loops to FOREACH_DIRENT_ALL() (see the sketch
after this list).
- sd-daemon is library code, hence never log at higher log levels than
LOG_DEBUG.
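A usage sketch for FOREACH_DIRENT_ALL() as referenced above (the macro and log
helpers come from the systemd tree's dirent-util.h and log.h; the function
name is illustrative):

/* Iterate over every entry of a directory, including "." and "..", and log
 * failures at debug level only, as library code should. */
static int list_journal_dir(const char *path) {
        _cleanup_closedir_ DIR *d = NULL;
        struct dirent *de;

        d = opendir(path);
        if (!d)
                return log_debug_errno(errno, "Failed to open %s: %m", path);

        FOREACH_DIRENT_ALL(de, d, return log_debug_errno(errno, "Failed to read %s: %m", path))
                log_debug("Found directory entry: %s", de->d_name);

        return 0;
}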
|
|
|
|
|
|
|
|
|
|
Also, move a couple more path-related functions to path-util.c.
|
|
|
|
There are more than enough to deserve their own .c file, hence move them
over.
|
|
string-util.[ch]
There are more than enough calls doing string manipulations to deserve
their own file, hence do something about it.
This patch also sorts the #include blocks of all files that needed to be
updated, according to the sorting suggestions from CODING_STYLE. Since
pretty much every file needs our string manipulation functions this
effectively means that most files have sorted #include blocks now.
Also touches a few unrelated include files.
|
|
As it turns out, machine_name_is_valid() does the exact same thing as
hostname_is_valid() these days, as it just invokes that and checks that the
name length is < 64. However, hostname_is_valid() checks the length
against HOST_NAME_MAX anyway (which is 64 on Linux), hence any
additional check is redundant.
We hence replace machine_name_is_valid() by a macro that simply maps it
to hostname_is_valid() but sets the allow_trailing_dot parameter to
false. We also move this call to hostname-util.h, to the same place
as the hostname_is_valid() declaration.
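In other words, the replacement amounts to something like the following sketch
of the definition described above (the second hostname_is_valid() parameter is
allow_trailing_dot, per the commit text):

/* Machine names must not carry a trailing dot, so allow_trailing_dot is
 * set to false. */
#define machine_name_is_valid(s) hostname_is_valid(s, false)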
|
|
remove_directory will always return 0 so this can never happen.
Besides that, d->path and d are freed so we would end up with
a null pointer dereference anyway.
|
|
|
|
The lack of this caused journalctl to not properly display a hint about missing
groups when the user lacks permissions.
|
|
Commit 668c965af "journal: skipping of exhausted journal files is bad if
direction changed" fixed a correctness issue, but it also significantly
limited the cases where the optimization that skips exhausted journal
files could apply.
As a result, some journalctl queries are much slower in v219 than in v218.
(e.g. queries where a "--since" cutoff should have quickly eliminated
older journal files from consideration, but didn't.)
If find_location_with_matches() already finds no entry in the initial
iteration, the journal file's location is not updated. This is fine,
except that:
- We must update at least f->last_direction. The optimization relies on
it. Let's separate that from journal_file_save_location() and update
it immediately after the direction checks.
- The optimization was conditional on "f->current_offset > 0", but it
would always be 0 in this scenario. This check is unnecessary for the
optimization.
|
|
include-what-you-use automatically does this, and it makes unnecessary
includes harder to spot. The only content of poll.h is an include
of sys/poll.h, so this should be harmless.
|
|
After all, it is now much more like strjoin() than strappend(). At the
same time, add support for NULL sentinels, even if they are normally not
necessary.
|
|
If we scale our buffer to be wide enough for the format string, we
should expect that the calculation was correct.
char_array_0() invocations are removed, since snprintf nul-terminates
the output in any case.
A similar wrapper is used for strftime calls, but only in timedatectl.c.
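The wrapper in question is presumably something like systemd's xsprintf(); a
reduced sketch of the idea using plain assert() instead of systemd's internal
assertion helpers:

#include <assert.h>
#include <stdio.h>

/* Sketch only: snprintf() into a fixed-size array and assert that the result
 * actually fit; snprintf() always nul-terminates, so no separate
 * char_array_0() call is needed. "buf" must be a real array, since
 * sizeof(buf) provides its size. */
#define xsprintf(buf, fmt, ...) \
        assert((size_t) snprintf(buf, sizeof(buf), fmt, ##__VA_ARGS__) < sizeof(buf))

static void format_boot_name(unsigned seq) {
        char buf[sizeof("boot-") + 10]; /* sized for the format string below */

        xsprintf(buf, "boot-%u", seq);  /* aborts if the sizing were wrong */
}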
|
|
This reverts commit b914ea8d379b446c4c9fac4ba181771676ef38cd.
We really need to put a limit on all our resources, everywhere, and in
particular if we operate on external data.
Hence, let's reintroduce the limit, but bump it substantially, so that
it is guaranteed to be higher than any realistic RLIMIT_NOFILE setting.
|
|
Now that we bump rlimit, we do not really know how many files
we can open. Remove the check.
https://bugzilla.redhat.com/show_bug.cgi?id=1179980
|
|
There is a lot of cleanup that will have to happen to turn on
-fstrict-aliasing, but I think our code should be "correct" with respect to the rule.
|
|
EOF is meaningless if the direction of iteration changes.
Move the EOF optimization under the direction check.
This fixes test-journal-interleaving for me.
Thanks to Filipe Brandenburger for telling me about the failure.
|
|
next_with_matches() is odd in that its "uint64_t *offset" parameter is
both input and output. In other functions it's purely for output.
The function is called from two places in next_beyond_location(). In
both of them "&cp" is used as the argument and in both cases cp is
guaranteed to equal f->current_offset.
Let's just have next_with_matches() ignore "*offset" on input and
operate with f->current_offset.
I did not investigate why, but it makes my usual benchmark run
reproducibly faster:
$ time ./journalctl --since=2014-06-01 --until=2014-07-01 > /dev/null
real 0m4.032s
user 0m3.896s
sys 0m0.135s
(Compare to preceding commit, where real was 4.4s.)
|