Age | Commit message | Author |
|
- systemd[1]: hello.service: main process exited, code=dumped, status=3/QUIT
- systemd-coredump[2541]: Failed to generate stack trace: Unwinding not supported for this architecture
- systemd-coredump[2541]: Process 1024 (hello) of user 154 dumped core.
|
|
On receiving a message, "kernel_seqnum" is set to "serial + 1", so
subtracting 1 causes messages like "Missed 0 kernel messages" where
it should be "Missed 1 kernel messages".
|
|
|
|
We should read the entry size before moving to the next iovec, not
after.
|
|
They have different sizes on 32-bit, so they are really not interchangeable.
|
|
getopt is usually good at printing out a nice error message when
commandline options are invalid. It distinguishes between an unknown
option and a known option with a missing arg. It is better to let it
do its job and not use opterr=0 unless we actually want to suppress
messages. So remove opterr=0 in the few places where it wasn't really
useful.
When an error in options is encountered, we should not print a lengthy
help() and overwhelm the user, when we know precisely what is wrong
with the commandline. In addition, since help() prints to stdout, it
should not be used except when requested with -h or --help.
Also, simplify things here and there.
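A minimal sketch of the pattern described above (hypothetical option names, not
taken from any particular tool): leave opterr at its default so getopt_long()
prints the diagnostic itself, and bail out with an error instead of dumping help():

    #include <getopt.h>
    #include <stdio.h>

    static int parse_argv(int argc, char *argv[]) {
            static const struct option options[] = {
                    { "help",   no_argument,       NULL, 'h' },
                    { "output", required_argument, NULL, 'o' },
                    {}
            };
            int c;

            /* opterr stays at 1, so getopt_long() itself reports
             * "unrecognized option" or "option requires an argument" */
            while ((c = getopt_long(argc, argv, "ho:", options, NULL)) >= 0)
                    switch (c) {
                    case 'h':
                            printf("Usage: %s [-o FILE]\n", argv[0]); /* only on request, to stdout */
                            return 0;
                    case 'o':
                            /* use optarg */
                            break;
                    case '?':
                            return -1; /* getopt already printed the message to stderr */
                    }

            return 1;
    }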
|
|
In practice this shouldn't make much difference, but
sometimes our headers might be newer, and we want to
test them.
|
|
$ systemd-analyze verify trailing-g.service
[./trailing-g.service:2] Trailing garbage, ignoring.
trailing-g.service lacks ExecStart setting. Refusing.
Error: org.freedesktop.systemd1.LoadFailed: Unit trailing-g.service failed to load: Invalid argument.
Failed to create trailing-g.service/start: Invalid argument
|
|
Strings which ended in an unfinished quote were accepted, potentially
with bad memory accesses.
Reject anything which ends in an unfinished quote, or contains
non-whitespace characters right after the closing quote.
_FOREACH_WORD now returns the invalid character in *state. But this return
value is not checked anywhere yet.
Also, make 'word' and 'state' variables const pointers, and rename 'w'
to 'word' in various places. Things are easier to read if the same name
is used consistently.
mbiebl_> am I correct that something like this doesn't work
mbiebl_> ExecStart=/usr/bin/encfs --extpass='/bin/systemd-ask-passwd "Unlock EncFS"'
mbiebl_> systemd seems to strip of the quotes
mbiebl_> systemctl status shows
mbiebl_> ExecStart=/usr/bin/encfs --extpass='/bin/systemd-ask-password Unlock EncFS $RootDir $MountPoint
mbiebl_> which is pretty weird
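A rough sketch of the stricter check (illustrative helper, not the actual
FOREACH_WORD code), rejecting words that end inside a quote or that have
non-whitespace right after the closing quote:

    #include <ctype.h>
    #include <errno.h>

    static int check_quoting(const char *s) {
            char q;

            if (*s != '\'' && *s != '"')
                    return 0;                     /* unquoted, nothing to verify */

            q = *s++;
            while (*s && *s != q)
                    s++;
            if (!*s)
                    return -EINVAL;               /* ends in an unfinished quote */
            s++;
            if (*s && !isspace((unsigned char) *s))
                    return -EINVAL;               /* garbage right after the closing quote */

            return 0;
    }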
|
|
|
|
Set the SYSLOG_FACILITY field for kernel log messages too. Setting only
SYSLOG_IDENTIFIER="kernel" is not sufficient, and tools reading the journal
may be confused by a missing SYSLOG_FACILITY field for kernel log messages.
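A hedged sketch of the idea (the field layout is assumed, this is not the
exact journald code): derive the facility from the kmsg priority value and
emit it next to SYSLOG_IDENTIFIER=kernel:

    #include <stdio.h>
    #include <syslog.h>

    static void format_kernel_fields(char *buf, size_t len, unsigned priority) {
            /* the facility is encoded in the upper bits of the syslog priority;
             * for genuine kernel messages it is 0 (kern) */
            snprintf(buf, len,
                     "SYSLOG_IDENTIFIER=kernel\nSYSLOG_FACILITY=%u\n",
                     (priority & LOG_FACMASK) >> 3);
    }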
|
|
|
|
There is a small number of places in the sources where we don't check
the asprintf() return code and assume that after an error the function
returns a NULL pointer via the first argument. That's wrong: after an
error the content of the pointer is undefined.
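For illustration, the only safe pattern (hypothetical helper, not a particular
call site): check the return value and never touch the output pointer on failure:

    #define _GNU_SOURCE
    #include <errno.h>
    #include <stdio.h>

    static int make_path(const char *dir, const char *file, char **ret) {
            char *p;

            if (asprintf(&p, "%s/%s", dir, file) < 0)
                    return -ENOMEM;     /* do NOT read p here, its content is undefined */

            *ret = p;
            return 0;
    }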
|
|
https://bugzilla.redhat.com/show_bug.cgi?id=1110712
|
|
|
|
Also be more verbose in devnode_acl_all().
|
|
Also modify the function itself to be a bit simpler to read.
|
|
|
|
The sleep(10) in test-journal-send is quite aggressive. We need it only
for the journal to get our cgroup information. But even that information
is not vital to the test, so a sleep(1) should be just fine.
|
|
Before, fragments of the progress bar would remain when
errors or warnings were printed.
|
|
Special care is needed so that we get an error message if the
file failed to parse, but not when it is missing. To avoid duplicating
the same error check in every caller, add an additional 'warn' boolean
to tell config_parse whether a message should be issued.
This makes things both shorter and more robust with regard to error reporting.
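A sketch of the shape of the change (names simplified, not the actual
config_parse() signature): the caller says whether parse problems should be
logged, while a missing file is always tolerated silently:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    static int parse_config_file(const char *path, bool warn) {
            FILE *f;
            int r;

            f = fopen(path, "re");
            if (!f) {
                    if (errno == ENOENT)
                            return 0;            /* missing file: not an error, no message */
                    r = -errno;
                    if (warn)
                            fprintf(stderr, "Failed to open %s: %m\n", path);
                    return r;
            }

            /* ... parse; on bad lines, log only when 'warn' is set ... */

            fclose(f);
            return 0;
    }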
|
|
|
|
Also make it pass on 32-bit, where size_t != uint64_t.
|
|
Define DATA_SIZE_MAX to mean the maximum size of a single
field, and ENTRY_SIZE_MAX to mean the maximum size of the whole
entry, with some rough calculation of overhead over the payload.
Check that entries are not too big when processing native journal
messages.
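An illustrative version of such a check (the limits and the per-field overhead
figure here are assumptions, not the values the commit actually picked):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define DATA_SIZE_MAX  ((uint64_t) 768*1024*1024)   /* one field */
    #define ENTRY_SIZE_MAX ((uint64_t) 768*1024*1024)   /* whole entry, incl. overhead */

    static bool entry_sizes_ok(const uint64_t *field_sizes, size_t n) {
            uint64_t total = 0;

            for (size_t i = 0; i < n; i++) {
                    if (field_sizes[i] > DATA_SIZE_MAX)
                            return false;                /* a single field is too big */
                    total += field_sizes[i] + 48;        /* rough per-field overhead */
            }
            return total <= ENTRY_SIZE_MAX;
    }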
|
|
|
|
|
|
Also fix an infinite loop on E2BIG.
Remember what range we already scanned for '\n', to avoid
quadratic behaviour on long "text" fields.
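The "remember what we scanned" idea in sketch form (illustrative names): keep
the offset up to which the buffer is known to be newline-free and only search
the newly appended part:

    #include <stddef.h>
    #include <string.h>

    static const char *find_newline(const char *buf, size_t len, size_t *scanned) {
            const char *n = memchr(buf + *scanned, '\n', len - *scanned);

            if (!n) {
                    *scanned = len;   /* everything seen so far contains no '\n' */
                    return NULL;
            }
            return n;
    }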
|
|
Directory src/journal has become one of the largest directories,
and since systemd-journal-gatewayd, systemd-journal-remote, and
forthcoming systemd-journal-upload are all closely related, create
a separate directory for them.
|
|
|
|
If a file was opened for writing, and then closed immediately without
actually writing any entries, on subsequent opening, it would be
considered "corrupted". This should be totally fine, and even in
read mode, an empty file can become non-empty later on.
|
|
|
|
After all, rsyslog and friends nowadays read their data directly from
the journal, hence the forwarding is unnecessary in most cases.
|
|
The new lzma2 compression options at the top of compress_blob_xz are
equivalent to using preset "0", except for using a 1 MiB dictionary
(the same as preset "1"). This makes the memory usage at most 7.5 MiB
in the compressor, and 1 MiB in the decompressor, instead of the
previous 92 MiB in the compressor and 8 MiB in the decompressor.
According to test-compress-benchmark this commit makes XZ compression
20 times faster, with no increase in compressed data size.
Using more realistic test data (an ELF binary rather than ASCII letters
'a' through 'z' repeated in order), it only provides a factor-of-10
speedup, at the cost of a 10% increase in compressed data size.
But that is still a worthwhile trade-off.
According to test-compress-benchmark XZ compression is still 25 times
slower than LZ4, but the compressed data is one eighth the size.
Using more realistic test data XZ compression is only 18 times slower
than LZ4, and the compressed data is only one quarter the size.
$ ./test-compress-benchmark
XZ: compressed & decompressed 2535300963 bytes in 42.30s (57.15MiB/s), mean compression 99.95%, skipped 3570 bytes
LZ4: compressed & decompressed 2535303543 bytes in 1.60s (1510.60MiB/s), mean compression 99.60%, skipped 990 bytes
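For reference, a minimal sketch of the liblzma setup being described, starting
from preset 0 and overriding only the dictionary size (an approximation, not
the exact compress_blob_xz code):

    #include <lzma.h>

    static int xz_compress(const void *src, size_t src_size,
                           void *dst, size_t dst_alloc, size_t *dst_size) {
            lzma_options_lzma opt;
            size_t out_pos = 0;

            if (lzma_lzma_preset(&opt, 0))       /* start from preset "0" */
                    return -1;
            opt.dict_size = 1024 * 1024;         /* ...but with a 1 MiB dictionary */

            lzma_filter filters[] = {
                    { LZMA_FILTER_LZMA2, &opt },
                    { LZMA_VLI_UNKNOWN,  NULL },
            };

            if (lzma_stream_buffer_encode(filters, LZMA_CHECK_NONE, NULL,
                                          src, src_size,
                                          dst, &out_pos, dst_alloc) != LZMA_OK)
                    return -1;

            *dst_size = out_pos;
            return 0;
    }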
|
|
|
|
|
|
|
|
This is useful to test the behaviour of the compressor for various buffer
sizes.
Time is limited to a minute per compression: otherwise, by the time LZ4
runs for more than a second (which is necessary to reduce the noise),
XZ takes more than 10 minutes.
% build/test-compress-benchmark (without time limit)
XZ: compressed & decompressed 2535300963 bytes in 794.57s (3.04MiB/s), mean compression 99.95%, skipped 3570 bytes
LZ4: compressed & decompressed 2535303543 bytes in 1.56s (1550.07MiB/s), mean compression 99.60%, skipped 990 bytes
% build/test-compress-benchmark (with time limit)
XZ: compressed & decompressed 174321481 bytes in 60.02s (2.77MiB/s), mean compression 99.76%, skipped 3570 bytes
LZ4: compressed & decompressed 2535303543 bytes in 1.63s (1480.83MiB/s), mean compression 99.60%, skipped 990 bytes
It appears that there's a bug in lzma_end where it leaks 32 bytes.
|
|
Add liblz4 as an optional dependency when requested with --enable-lz4,
and use it in preference to liblzma for journal blob and coredump
compression. To retain backwards compatibility, XZ is used to
decompress old blobs.
Things will function correctly only with lz4-119.
Based on the benchmarks found on the web, lz4 seems to be the best
choice for "quick" compressors at the moment.
For pkg-config status, see http://code.google.com/p/lz4/issues/detail?id=135.
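A hedged sketch of the compression call (using the current LZ4_compress_default()
API rather than the r119-era functions this commit was written against):

    #include <lz4.h>

    static int lz4_compress_blob(const char *src, int src_size,
                                 char *dst, int dst_capacity, int *dst_size) {
            int n;

            n = LZ4_compress_default(src, dst, src_size, dst_capacity);
            if (n <= 0)
                    return -1;      /* dst too small, or compression failed */

            *dst_size = n;
            return 0;
    }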
|
|
uncompress_startswith would always decode the whole stream, even
if it did not start with the given prefix.
Reallocation policy was also strange.
|
|
|
|
journald.conf(5) states that the default for MaxFileSec is one month,
but the code didn't respect that.
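Stating the documented default explicitly in journald.conf would look like this:

    [Journal]
    MaxFileSec=1month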
|
|
This also makes sure we remove the original coredump temporary file if we
successfully managed to compress the coredump.
|
|
Let's move things closer to journald's configuration settings, which
knows Compress= already, as a boolean. This makes things more uniform,
but also gives us more freedom to possibly swap out the used compression
algorithm one day.
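In coredump.conf terms, the setting described here is a plain boolean, e.g.:

    [Coredump]
    Compress=yes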
|
|
This sounds overly low-level and implementation-detaily. Let's just
use the default level XZ suggests. This gives us more room to possibly
swap out the compression algorithm used, as the compression level range
will not leak into user configuration.
|
|
|
|
while we work on it
|
|
|
|
When disk space taken up by coredumps grows beyond a configured limit,
start removing the oldest coredump of the user with the most coredumps,
until we get below the limit again.
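An illustrative sketch of that policy (data structures and numbers invented
for the example, not the actual systemd-coredump vacuuming code):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/types.h>

    typedef struct Core {
            uid_t uid;
            uint64_t size;
            uint64_t mtime;
            bool removed;
    } Core;

    static void vacuum(Core *cores, size_t n, uint64_t limit) {
            uint64_t total = 0;

            for (size_t i = 0; i < n; i++)
                    total += cores[i].size;

            while (total > limit) {
                    size_t busiest_count = 0, oldest = SIZE_MAX;
                    uid_t busiest = 0;

                    /* find the user currently owning the most coredumps */
                    for (size_t i = 0; i < n; i++) {
                            size_t c = 0;

                            if (cores[i].removed)
                                    continue;
                            for (size_t j = 0; j < n; j++)
                                    if (!cores[j].removed && cores[j].uid == cores[i].uid)
                                            c++;
                            if (c > busiest_count) {
                                    busiest_count = c;
                                    busiest = cores[i].uid;
                            }
                    }
                    if (busiest_count == 0)
                            break;                       /* nothing left to remove */

                    /* drop that user's oldest coredump */
                    for (size_t i = 0; i < n; i++)
                            if (!cores[i].removed && cores[i].uid == busiest &&
                                (oldest == SIZE_MAX || cores[i].mtime < cores[oldest].mtime))
                                    oldest = i;

                    cores[oldest].removed = true;
                    total -= cores[oldest].size;
            }
    }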
|
|
|
|
Reorder the code so that the fstat() is done before we can jump to
"uncompressed".
|