In the case where no entries have been added to the journal after the specified
cursor, set need_seek before the main loop to prevent display of the entry at
said cursor.
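A rough standalone sketch of the intended behaviour, using the public sd-journal API (illustrative only, not the actual journalctl code):

#include <stdbool.h>
#include <stdio.h>
#include <systemd/sd-journal.h>

/* Print the MESSAGE= field of all entries strictly after the cursor
 * given in argv[1]. */
int main(int argc, char **argv) {
        const char *cursor = argc > 1 ? argv[1] : NULL;
        bool need_seek = false;
        sd_journal *j;

        if (sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY) < 0)
                return 1;

        if (cursor) {
                sd_journal_seek_cursor(j, cursor);
                /* If the entry at the cursor still exists, we are now
                 * positioned on it and must advance before printing,
                 * even when it is the very last entry. */
                if (sd_journal_next(j) > 0 &&
                    sd_journal_test_cursor(j, cursor) > 0)
                        need_seek = true;
        } else {
                sd_journal_seek_head(j);
                need_seek = true;
        }

        for (;;) {
                const void *d;
                size_t l;

                if (need_seek && sd_journal_next(j) <= 0)
                        break;
                need_seek = true;

                if (sd_journal_get_data(j, "MESSAGE", &d, &l) >= 0)
                        printf("%.*s\n", (int) l, (const char *) d);
        }

        sd_journal_close(j);
        return 0;
}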
|
With DIRECTION_UP (i.e. navigating backwards) in generic_array_bisect(), when
the needle was found as the last item in the array it wasn't actually processed
as a match, resulting in entries being missed.
https://bugs.freedesktop.org/show_bug.cgi?id=86855
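The boundary condition is easiest to see in a generic bisect; an illustrative reimplementation (not the actual generic_array_bisect() code):

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef enum { DIRECTION_DOWN, DIRECTION_UP } Direction;

/* Return the index of the entry closest to needle in the given
 * direction, or -1 if no entry qualifies. */
static int bisect(const uint64_t *a, size_t n, uint64_t needle, Direction d) {
        size_t lo = 0, hi = n;

        while (lo < hi) {
                size_t mid = lo + (hi - lo) / 2;
                if (a[mid] < needle)
                        lo = mid + 1;
                else
                        hi = mid;
        }

        /* lo is now the first index with a[lo] >= needle. */
        if (d == DIRECTION_DOWN)
                return lo < n ? (int) lo : -1;

        /* DIRECTION_UP: an exact match counts as a hit, even when the
         * needle is the last item in the array -- the mishandled case. */
        if (lo < n && a[lo] == needle)
                return (int) lo;
        return lo > 0 ? (int) (lo - 1) : -1;
}

int main(void) {
        const uint64_t a[] = { 10, 20, 30 };
        assert(bisect(a, 3, 30, DIRECTION_UP) == 2); /* last item must match */
        assert(bisect(a, 3, 25, DIRECTION_UP) == 1);
        assert(bisect(a, 3, 25, DIRECTION_DOWN) == 2);
        return 0;
}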
|
This is most likely the audit socket, and we really should close it
if we cannot make sense of it, since as long as it is open the kernel
might disable the kmsg forwarding of audit msgs, and we should avoid
that, since audit msgs might get completely lost then.
I also downgraded the log message we show a bit; after all, things
should really work fine, and we proceed just fine anyway.
|
Otherwise they can be optimized away with -DNDEBUG
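A standalone toy showing the hazard (systemd's assert_se() exists precisely so the expression is always evaluated):

#include <assert.h>
#include <stdio.h>

static int counter = 0;
static int bump(void) { return ++counter; }

int main(void) {
        /* Compiled normally this increments counter; compiled with
         * -DNDEBUG the whole expression, side effect included, vanishes. */
        assert(bump() == 1);
        printf("counter=%d\n", counter); /* 1 normally, 0 with -DNDEBUG */
        return 0;
}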
|
rather than heap
|
Use FOREACH_DIRENT() and FOREACH_LINE() macros instead of manual loops.
Don't clobber return parameters on failure.
Simplify some other things.
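A simplified approximation of what FOREACH_DIRENT provides (a sketch; the real macro in the tree also skips backup files):

#include <dirent.h>
#include <errno.h>
#include <stdio.h>

/* Iterate over a DIR*, running on_error if readdir() fails, skipping
 * dot files, and handing every other entry to the following statement. */
#define FOREACH_DIRENT(de, d, on_error)                                \
        for (errno = 0, de = readdir(d); ; errno = 0, de = readdir(d)) \
                if (!de) {                                             \
                        if (errno > 0) { on_error; }                   \
                        break;                                         \
                } else if (de->d_name[0] == '.')                       \
                        continue;                                      \
                else

int main(void) {
        DIR *d = opendir("/var/log/journal");
        struct dirent *de;

        if (!d)
                return 1;
        FOREACH_DIRENT(de, d, { closedir(d); return 1; })
                printf("%s\n", de->d_name);
        closedir(d);
        return 0;
}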
|
Using the same scripts as in f647962d64e "treewide: yet more log_*_errno
+ return simplifications".
|
If the format string contains %m, clearly errno must have a meaningful
value, so we might as well use log_*_errno to have ERRNO= logged.
Using:
find . -name '*.[ch]' | xargs sed -r -i -e \
's/log_(debug|info|notice|warning|error|emergency)\((".*%m.*")/log_\1_errno(errno, \2/'
Plus some whitespace, linewrap, and indent adjustments.
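On a single (hypothetical) call site the conversion looks like:

/* before: %m picks up errno, but no ERRNO= field reaches the journal */
log_error("Failed to open %s: %m", path);
/* after: same message, plus ERRNO= as structured metadata */
log_error_errno(errno, "Failed to open %s: %m", path);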
|
Basically:
find . -name '*.[ch]' | while read f; do perl -i.mmm -e \
'local $/;
local $_=<>;
s/log_(debug|info|notice|warning|error|emergency)\("([^"]*)%s"([^;]*),\s*strerror\(-?([->a-zA-Z_]+)\)\);/log_\1_errno(\4, "\2%m"\3);/gms;print;' \
$f; done
Plus manual indentation fixups.
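For one (hypothetical) call site the rewrite amounts to:

/* before */
log_error("Failed to create timer: %s", strerror(-r));
/* after: %m is rendered from the passed error, and ERRNO= is logged */
log_error_errno(r, "Failed to create timer: %m");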
|
It correctly handles both positive and negative errno values.
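Illustratively (hypothetical call sites), both spellings now log the same thing, since the sign is normalized internally:

log_warning_errno(errno, "Failed to parse: %m"); /* positive errno */
log_warning_errno(r, "Failed to parse: %m");     /* negative r == -errno */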
|
As a followup to 086891e5c1 "log: add an "error" parameter to all
low-level logging calls and introduce log_error_errno() as log calls
that take error numbers", use sed to convert the simple cases to use
the new macros:
find . -name '*.[ch]' | xargs sed -r -i -e \
's/log_(debug|info|notice|warning|error|emergency)\("(.*)%s"(.*), strerror\(-([a-zA-Z_]+)\)\);/log_\1_errno(-\4, "\2%m"\3);/'
Multi-line log_*() invocations are not covered.
We should also add log_unit_*_errno().
|
Also, while we are at it, introduce some syntactic sugar for creating
ERRNO= and MESSAGE= structured logging fields.
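A sketch of the shape such sugar can take (names and exact definitions assumed for illustration; see log.h for the real ones):

#define LOG_MESSAGE(fmt, ...) "MESSAGE=" fmt, ##__VA_ARGS__

/* so a structured log call can read: */
log_struct(LOG_ERR,
           LOG_MESSAGE("Failed to start unit: %m"),
           "ERRNO=%i", errno,
           NULL);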
|
- Rename log_meta() → log_internal(), to follow naming scheme of most
other log functions that are usually invoked through macros, but never
directly.
- Rename log_info_object() to log_object_info(), simply because the
object should come before any other parameters, following OO style.
|
When I tried to run journalctl with the --follow and --since arguments it
behaved very strangely. First it printed the logs from the point I specified
in --since, then it printed 10 lines (the default for --follow), and when an
app put something new into the log, journalctl printed everything since the
last printed line.
How to reproduce:
1. Run: journalctl -m --since 14:00 --follow
You'll see 10 lines of logs since 14:00. Then wait until some app adds
something to the journal, or just run `systemd-cat echo test`.
2. journalctl will then print every single line since 14:00 and will
follow as expected.
Since --since and --follow eventually print all relevant lines anyway, I see
no reason not to print them right away rather than only after the first new
message in the journal.
Relevant bugzillas:
https://bugs.freedesktop.org/show_bug.cgi?id=71546
https://bugs.freedesktop.org/show_bug.cgi?id=64291
|
/proc/[pid]:
- status
- maps
- limits
- cgroup
- cwd
- root
- environ
- fd/ & fdinfo/ joined in open_fds
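A minimal sketch of grabbing one of these files (hypothetical helper, not the actual coredump code):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Read up to 4K of /proc/<pid>/<name> into a freshly allocated string;
 * returns NULL on failure. */
static char *read_proc_file(pid_t pid, const char *name) {
        char path[64], buf[4096];
        FILE *f;
        size_t n;

        snprintf(path, sizeof(path), "/proc/%d/%s", (int) pid, name);
        f = fopen(path, "re");
        if (!f)
                return NULL;
        n = fread(buf, 1, sizeof(buf) - 1, f);
        fclose(f);
        buf[n] = '\0';
        return strdup(buf);
}

int main(void) {
        char *s = read_proc_file(getpid(), "status");
        if (s) {
                fputs(s, stdout);
                free(s);
        }
        return 0;
}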
|
systemd-journald would refuse to start if it received an unknown
socket from systemd. This is annoying, because the failure mode for
systemd-journald is unpleasant: systemd will keep restarting journald,
but most likely the same error will occur every time. It is better
to continue. journald will try to open missing sockets on its own,
so things should mostly work.
One question is whether to close the sockets which cannot be parsed or
to keep them open. Either way we might lose some messages. This
failure is most likely for the audit socket (selinux issues), which
can be opened multiple times, so this is not a problem. I decided to
keep them open because it makes it easier to debug the issue after
the system is fully started.
|
Also, make all parsing of the kernel cmdline non-fatal.
|
After all, this is about files, not arguments, hence EFBIG is more
appropriate than E2BIG
|
have anyway
|
Let's make the log output more readable; the header can be
reconstructed in full from the other fields anyway.
|
Similar to auditd, actually turn on auditing as we start. This way
we can operate entirely without auditd around.
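For reference, a minimal standalone sketch of what enabling auditing over the kernel's audit netlink socket looks like (requires CAP_AUDIT_CONTROL; journald's actual implementation is more careful about acks and errors):

#include <linux/audit.h>
#include <linux/netlink.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
        struct {
                struct nlmsghdr nlh;
                struct audit_status s;
        } req;
        struct sockaddr_nl sa = { .nl_family = AF_NETLINK };
        int fd;

        fd = socket(AF_NETLINK, SOCK_RAW | SOCK_CLOEXEC, NETLINK_AUDIT);
        if (fd < 0) {
                perror("socket");
                return 1;
        }

        memset(&req, 0, sizeof(req));
        req.nlh.nlmsg_len = NLMSG_LENGTH(sizeof(req.s));
        req.nlh.nlmsg_type = AUDIT_SET;
        req.nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
        req.s.mask = AUDIT_STATUS_ENABLED;
        req.s.enabled = 1;

        if (sendto(fd, &req, req.nlh.nlmsg_len, 0,
                   (struct sockaddr *) &sa, sizeof(sa)) < 0) {
                perror("sendto");
                close(fd);
                return 1;
        }

        close(fd);
        return 0;
}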
|
audit doesn't support timestamps anyway
|
journal files based on a size/time limit
This is equivalent to the effect of SystemMaxUse= and RetentionSec=,
but can be invoked directly instead of implicitly.
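For example, the new options can be invoked as "journalctl --vacuum-size=500M"
or "journalctl --vacuum-time=2weeks".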
|
provided headers
|
in /dev/shm
Previously when a log message grew beyond the maximum AF_UNIX/SOCK_DGRAM
datagram limit we'd send an fd to a deleted file in /dev/shm instead.
Because the sender could still modify the file after delivery we had to
immediately copy the data on the receiving side.
With memfds we can optimize this logic, and also remove the dependency
on /dev/shm: simply send a sealed memfd around, and if we detect the
seal, memory-map the fd and use it directly.
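The core of the trick as a standalone sketch (using the glibc memfd_create() wrapper; the journald code differs in the details):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
        const char *msg = "MESSAGE=hello\n";
        off_t size;
        char *p;
        int fd, seals;

        /* Sender: create an anonymous memfd, write the payload, then
         * seal it so it can never be modified again. */
        fd = memfd_create("journal-entry", MFD_ALLOW_SEALING);
        if (fd < 0)
                return 1;
        if (write(fd, msg, strlen(msg)) < 0)
                return 1;
        if (fcntl(fd, F_ADD_SEALS, F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE) < 0)
                return 1;

        /* Receiver: if the fd is sealed against writes, it is safe to
         * map it and use the data in place, without copying. */
        seals = fcntl(fd, F_GET_SEALS);
        if (seals >= 0 && (seals & F_SEAL_WRITE)) {
                size = lseek(fd, 0, SEEK_END);
                p = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
                if (p != MAP_FAILED) {
                        fwrite(p, 1, size, stdout);
                        munmap(p, size);
                }
        }

        close(fd);
        return 0;
}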
|
coverity otherwise assumes that the chain object might be NULL.
|
Commit 74055aa762 'journalctl: add new --flush command and make use of
it in systemd-journal-flush.service' broke flushing because journald
checks for the /run/systemd/journal/flushed file before opening the
permanent journal. When the creation of this file was postponed,
flushing stopped working.
|
http://bugs.debian.org/766598
|
Anything that uses hashmap_next() almost certainly cares about the order
and needs to be an OrderedHashmap.
|
Order matters here. It replaces the oldest entries first when
USER_JOURNALS_MAX is reached.