Age | Commit message | Author
Also, move a couple more path-related functions to path-util.c.
We don't need two functions that do essentially the same thing, hence drop
path_get_parent() and stick to dirname_malloc(), but move it to
path-util.[ch].
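For reference, dirname_malloc() is essentially a malloc'ing wrapper around
dirname(3); a minimal standalone sketch of such a helper (not necessarily
the in-tree implementation verbatim) looks like this:

    #include <libgen.h>
    #include <stdlib.h>
    #include <string.h>

    /* Return the parent directory of "path" as a newly allocated string, or
     * NULL on allocation failure. dirname(3) may modify its argument, hence
     * operate on a private copy. */
    static char *dirname_malloc(const char *path) {
            char *d, *dir;

            d = strdup(path);
            if (!d)
                    return NULL;

            dir = strdup(dirname(d));
            free(d);

            return dir;
    }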
split up util.[ch] into more pieces, and other stuff
journald-server: port to extract_first_word
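For context, extract_first_word() splits one word at a time off a string;
the typical consumption loop (an illustrative sketch assuming the in-tree
extract-word.h helper and the usual _cleanup_free_/log_error_errno macros,
not the journald-server patch itself) looks like:

    /* Illustrative only: split "line" on whitespace and act on each word. */
    static int parse_words(const char *line) {
            const char *p = line;

            for (;;) {
                    _cleanup_free_ char *word = NULL;
                    int r;

                    r = extract_first_word(&p, &word, NULL, 0);
                    if (r < 0)
                            return log_error_errno(r, "Failed to parse option string: %m");
                    if (r == 0)
                            return 0;

                    /* ... act on "word" here ... */
            }
    }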
Various changes to src/basic/
There are more than enough to deserve their own .c file, hence move them
over.
journal: fix error handling when compressing journal objects
string-util.[ch]
There are more than enough calls doing string manipulations to deserve
their own file, hence do something about it.
This patch also sorts the #include blocks of all files that needed to be
updated, according to the sorting suggestions from CODING_STYLE. Since
pretty much every file needs our string manipulation functions this
effectively means that most files have sorted #include blocks now.
Also touches a few unrelated include files.
This really deserves its own file, given how much code this is now.
Let's introduce a common function that makes relative paths absolute and
warns about any errors while doing so.
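A helper along these lines could be a thin wrapper (the function name here
is made up for illustration; it assumes the in-tree path_make_absolute_cwd()
and log_warning_errno() helpers):

    /* Hypothetical sketch: make "path" absolute relative to the current
     * working directory and log a warning if that fails. */
    static int make_path_absolute_and_warn(const char *path, char **ret) {
            int r;

            r = path_make_absolute_cwd(path, ret);
            if (r < 0)
                    return log_warning_errno(r, "Failed to make path \"%s\" absolute: %m", path);

            return 0;
    }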
get_current_dir_name() can return a variety of errors, not just ENOMEM,
hence don't blindly turn its errors into ENOMEM, but return the correct
errors from path_make_absolute_cwd().
This trickles down into a couple of other functions, some of which also
receive unrelated minor fixes with this commit.
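The error-propagation pattern being described, as a simplified standalone
sketch using plain libc (not the actual patch):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Make "p" absolute relative to the current working directory, propagating
     * the real error from get_current_dir_name() instead of collapsing
     * everything into -ENOMEM. */
    static int make_absolute_cwd(const char *p, char **ret) {
            char *c;

            if (p[0] == '/')
                    c = strdup(p);
            else {
                    char *cwd;

                    cwd = get_current_dir_name();
                    if (!cwd)
                            return errno > 0 ? -errno : -ENOMEM;

                    if (asprintf(&c, "%s/%s", cwd, p) < 0)
                            c = NULL;
                    free(cwd);
            }

            if (!c)
                    return -ENOMEM;

            *ret = c;
            return 0;
    }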
Let's make sure we handle compression errors properly, and don't
mistake an error for success.
Also, let's actually compress things if lz4 is enabled.
Fixes #1662.
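Worth remembering when reviewing such code: the lz4 block API reports
failure through a non-positive return value rather than a negative errno,
so a careful caller has to translate that explicitly. A standalone sketch
of the pattern (not the journal-file.c change itself):

    #include <errno.h>
    #include <lz4.h>

    /* Compress "src" into "dst"; return the compressed size, or a negative
     * errno-style value if compression failed or brought no gain, so the
     * caller can fall back to storing the data uncompressed. */
    static int compress_or_fallback(const char *src, int src_size,
                                    char *dst, int dst_capacity) {
            int n;

            n = LZ4_compress_default(src, dst, src_size, dst_capacity);
            if (n <= 0)
                    return -ENOBUFS;   /* 0 means failure, not "0 bytes written" */
            if (n >= src_size)
                    return -ENOBUFS;   /* no space saved, not worth compressing */

            return n;
    }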
journal: s/Envalid/Invalid/
Limit test-compress-benchmark to approx. 12 s of runtime
We were compressing uninitialized memory, which should not result in
any problems, but is inelegant.
If both lz4 and xz are enabled, this results in a limit of
2×3×2 s ~= 12 s runtime.
The previous implementation started with really small buffer sizes. When
combined with a short time limit this resulted in abysmal results for xz.
It seems that the initialization overhead is really significant for small
buffers. Since xz will not be used by default anymore, this does not
seem worth fixing. Instead, buffer sizes now follow a pseudo-random
non-repeating pattern. This should allow reasonable testing for all
buffer sizes. For testing, both the runtime and the buffer size seed
can be specified on the command line. A sufficiently large runtime allows
all buffer sizes up to 1 MB to be tested.
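One standard way to visit a range of buffer sizes in a pseudo-random but
non-repeating order is a full-period linear congruential generator over a
power-of-two range; the following is an illustrative sketch only, the test
may pick its sizes differently:

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_SIZE (1U << 20)     /* 1 MB, a power of two */

    /* Hull-Dobell conditions hold for a power-of-two modulus (odd increment,
     * multiplier congruent to 1 mod 4), so every value in [0, MAX_SIZE) is
     * visited exactly once before the sequence repeats. */
    static uint32_t next_buffer_size(uint32_t x) {
            return (x * 1664525u + 1013904223u) & (MAX_SIZE - 1);
    }

    int main(void) {
            uint32_t size = 9999u;          /* seed, e.g. from the command line */

            for (unsigned i = 0; i < 10; i++) {
                    size = next_buffer_size(size);
                    printf("buffer size: %u\n", size);
            }

            return 0;
    }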
-q suppresses info messages too
Using lz4 frame api for coredump files
Logging for compression and decompression is asymmetrical on purpose:
if compiled without some type of compression, those compression code
paths should never be invoked. OTOH, it is possible to encounter an
unsupported format on decompression, so leave those log_debug statements
in, to make it easier to diagnose stuff.
Fix journalctl --dump-catalog, journalctl --list-catalog
Make journald audit socket maskable
Fixes #1514.
`journalctl --dump-catalog ID1 ID2 ...` works fine.
Adding them to the documentation makes it easier to find
the right man page for people who are trying to understand
where some socket in the filesystem is coming from.
If we were given some sockets through socket activation, and the audit
socket is not among them, do not try to open it. This way, if the
socket unit is disabled, we will not receive audit events.
https://bugzilla.redhat.com/show_bug.cgi?id=1227379
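The check can be expressed with the public sd-daemon API; a simplified
standalone sketch (not the journald patch itself) of detecting whether an
AF_NETLINK socket was passed in through socket activation:

    #include <sys/socket.h>
    #include <systemd/sd-daemon.h>

    /* Return the netlink socket fd handed to us via socket activation, or -1
     * if none was passed, in which case the audit socket should not be opened
     * by the daemon itself. */
    static int find_activated_netlink_fd(void) {
            int n, fd;

            n = sd_listen_fds(0);
            for (fd = SD_LISTEN_FDS_START; fd < SD_LISTEN_FDS_START + n; fd++)
                    if (sd_is_socket(fd, AF_NETLINK, SOCK_RAW, -1) > 0)
                            return fd;

            return -1;
    }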
The existing test used repetitive, highly compressible input. Two types
of input are added:
- zeros
- random blocks interspersed with zeros
The idea is to get more information about behaviour in various cases.
On an Intel Xeon the results are:
% ./test-compress-benchmark
XZ/zeros: compressed & decompressed 2535301373 bytes in 32.56s (74.26MiB/s), mean compresion 99.96%, skipped 3160 bytes
LZ4/zeros: compressed & decompressed 2535304362 bytes in 1.16s (2088.69MiB/s), mean compresion 99.60%, skipped 171 bytes
XZ/simple: compressed & decompressed 2535300963 bytes in 30.42s (79.48MiB/s), mean compresion 99.95%, skipped 3570 bytes
LZ4/simple: compressed & decompressed 2535303543 bytes in 1.22s (1978.86MiB/s), mean compresion 99.60%, skipped 990 bytes
XZ/random: compressed & decompressed 381756649 bytes in 60.02s (6.07MiB/s), mean compresion 39.64%, skipped 27813723 bytes
LZ4/random: compressed & decompressed 2507385036 bytes in 0.97s (2477.52MiB/s), mean compresion 54.77%, skipped 27919497 bytes
If someone has ideas for more realistic test cases, they can be easily
added to this framework.
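A standalone sketch of how the "random blocks interspersed with zeros"
input can be generated (block sizes here are made up, not the ones the
benchmark uses):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Fill "buf" with random blocks interspersed with runs of zeros, giving
     * data that is neither fully random nor trivially compressible. */
    static void make_semi_random(uint8_t *buf, size_t size, unsigned seed) {
            size_t i = 0;

            memset(buf, 0, size);
            srand(seed);

            while (i < size) {
                    size_t block = 1 + (size_t) rand() % 4096;

                    if (rand() % 2 == 0)    /* roughly half the blocks stay zeroed */
                            for (size_t j = 0; j < block && i + j < size; j++)
                                    buf[i + j] = (uint8_t) rand();

                    i += block;
            }
    }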
This converts the stream compression to use the new lz4frame api,
compatible with lz4cat. Previous code used custom headers, so the
compressed file was not compatible with lz4 command line tools.
I considered this the last blocker to using lz4 by default.
Speed seems to be reasonable, although a bit (a few percent) slower
than the lz4 binary, even though compression is the same. I don't
consider this important. It could be caused by the overhead of library
calls, but is probably caused by slightly different buffer sizes or
such. The code in this patch uses mmap, since this allows the
buffer to be reused while not making the code more complicated at all.
In my testing, this version is noticeably faster (~20%) than a naive
single-buffered version. mmap can cause the program to be killed with
SIGBUS, if the underlying file is truncated or a disk error occurs. We
only use this from within coredump and coredumpctl, so I don't
consider this an issue.
Old decompression code is retained and is used if the new code fails,
indicating a format error. There have been reports of various smaller
distributions using the previous lz4 code, i.e. the old format, and it is
nice to provide backwards compatibility. We can remove the legacy code
in a few versions.
The way that blobs are compressed in the journal is not affected.
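For reference, the general shape of compressing an mmap-ed file into an
lz4cat-compatible frame with the lz4frame API, as a one-shot standalone
sketch (the real stream compressor works incrementally and handles cleanup
and errors more carefully):

    #include <fcntl.h>
    #include <lz4frame.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Compress the whole input file into a single LZ4 frame on stdout. */
    int main(int argc, char *argv[]) {
            struct stat st;
            void *src;
            char *dst;
            size_t bound, n;
            int fd;

            if (argc != 2)
                    return 1;

            fd = open(argv[1], O_RDONLY);
            if (fd < 0 || fstat(fd, &st) < 0)
                    return 1;

            /* mmap lets lz4 read straight from the page cache; note the SIGBUS
             * caveat above if the underlying file shrinks while mapped. */
            src = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
            if (src == MAP_FAILED)
                    return 1;

            bound = LZ4F_compressFrameBound(st.st_size, NULL);
            dst = malloc(bound);
            if (!dst)
                    return 1;

            n = LZ4F_compressFrame(dst, bound, src, st.st_size, NULL);
            if (LZ4F_isError(n))
                    return 1;

            fwrite(dst, 1, n, stdout);
            return 0;
    }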
Make the API of the new helpers more similar to the old wrapper.
In particular we now return the hash as a byte string to avoid
any endianness problems.
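The endianness point in a nutshell: memcpy()-ing a uint64_t hash gives
host-dependent bytes, while serializing it explicitly gives the same byte
string everywhere. A minimal illustration (helper name made up):

    #include <stdint.h>

    /* Write "h" as 8 bytes in a fixed (little-endian) order, independent of
     * the host byte order, so the result can be compared or stored portably. */
    static void hash_to_bytes(uint64_t h, uint8_t out[8]) {
            for (unsigned i = 0; i < 8; i++)
                    out[i] = (uint8_t) (h >> (8 * i));
    }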
hashmap/siphash24: refactor hash functions
All our hash functions are based on siphash24(), factor out
siphash24_init() and siphash24_finalize() and pass the siphash
state to the hash functions rather than the hash key.
This simplifies the hash functions, and in particular makes
composition simpler as calling siphash24_compress() repeatedly
on separate chunks of input has the same effect as first
concatenating the input and then calling siphash24_compress()
on the result.
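To illustrate the composition property with something self-contained, here
is the same idea with FNV-1a as a stand-in incremental hash; siphash24
behaves the same way once the state is threaded through
init/compress/finalize (the function names below are not the in-tree API):

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    static void fnv1a_init(uint64_t *state) {
            *state = 0xcbf29ce484222325ULL;
    }

    static void fnv1a_compress(const void *in, size_t len, uint64_t *state) {
            const uint8_t *p = in;

            for (size_t i = 0; i < len; i++)
                    *state = (*state ^ p[i]) * 0x100000001b3ULL;
    }

    int main(void) {
            uint64_t a, b;

            /* Hashing "foo" and then "bar" in two calls... */
            fnv1a_init(&a);
            fnv1a_compress("foo", 3, &a);
            fnv1a_compress("bar", 3, &a);

            /* ...equals hashing the concatenation "foobar" in one call. */
            fnv1a_init(&b);
            fnv1a_compress("foobar", 6, &b);

            assert(a == b);
            return 0;
    }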
Implement a maximum limit on the number of journal files to keep around.
Enforcing a limit is useful here, since viewing performance pays a heavy
penalty for each additional journal file that has to be interleaved. This
setting is now turned on by default, and set to 100.
Also, actually implement what 348ced909724a1331b85d57aede80a102a00e428
promised: use whatever we find on disk at startup as the lower bound on
how much disk space we can use. That commit introduced some provisions
to implement this, but never actually did so.
This also adds "journalctl --vacuum-files=" to vacuum files on disk by
their number explicitly.
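In practice this surfaces as a journald.conf setting plus the new
journalctl switch; a short usage example (SystemMaxFiles= is the
journald.conf name for this limit to the best of my knowledge, so treat
the exact spelling as an assumption):

    # /etc/systemd/journald.conf (excerpt)
    [Journal]
    SystemMaxFiles=100

    # Prune archived journal files on disk down to at most 10 by hand:
    journalctl --vacuum-files=10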