This patch removes includes that are not used. The removals were found with
include-what-you-use, which checks whether any of the symbols from a header
are actually in use.
|
|
Types used for pids and uids in various interfaces are unpredictable.
Too bad.
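For illustration only (a common portable idiom, not necessarily what this
patch does): since C defines no printf conversions for pid_t or uid_t, one
workaround is to cast to a wide, known type before formatting.

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
        pid_t pid = getpid();
        uid_t uid = getuid();

        /* pid_t and uid_t have no standard printf conversion, so cast to a
         * type guaranteed to be at least as wide before formatting. */
        printf("pid=%ld uid=%lu\n", (long) pid, (unsigned long) uid);
        return 0;
}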
|
|
loop_write() didn't follow the usual systemd rules: it returned status
partially in errno and required extensive checks from callers. Some of
the callers dealt with this properly, but many did not and treated
partial writes as successful. Simplify things by conforming to the usual rules.
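For context, the usual rules referred to here look roughly like the sketch
below (illustrative, not the actual patch): report success as 0, report
failure as a negative errno, and never signal partial progress as success.

#include <errno.h>
#include <stddef.h>
#include <sys/types.h>
#include <unistd.h>

/* Sketch of the convention: 0 on full success, negative errno on failure.
 * A short write is retried until done; nothing is reported via errno alone. */
static int loop_write_sketch(int fd, const void *buf, size_t count) {
        const unsigned char *p = buf;

        while (count > 0) {
                ssize_t n = write(fd, p, count);
                if (n < 0) {
                        if (errno == EINTR)
                                continue;
                        return -errno;
                }
                if (n == 0)          /* write(2) shouldn't return 0 here; treat as error */
                        return -EIO;

                p += n;
                count -= (size_t) n;
        }

        return 0;
}

Callers can then check a single integer return value instead of inspecting
errno and byte counts separately.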
|
|
We can't use LZ4_compress_limitedOutput_continue() because in the
worst-case scenario the compressed output can be slightly bigger than
the input block. This generally affects very few blocks and is no reason
to abort the compression process.
I ran into this when I noticed that Chromium core dumps weren't being
compressed. After switching to LZ4_compress_continue() a ~330 MB Chromium
core dump gets compressed to ~17 MB.
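For scale, the worst case is quantified by LZ4_compressBound(), which
returns a bound slightly above the input size (roughly input + input/255
+ 16). A minimal sketch using the current liblz4 API (not the r119-era
streaming calls named above) of sizing the destination so even an
incompressible block cannot fail:

#include <stdio.h>
#include <stdlib.h>
#include <lz4.h>

int main(void) {
        const int src_size = 64 * 1024;
        char *src = calloc(1, (size_t) src_size);

        /* LZ4_compressBound() returns the worst-case compressed size,
         * which is slightly larger than the input itself. */
        int bound = LZ4_compressBound(src_size);
        char *dst = malloc((size_t) bound);
        if (!src || !dst)
                return 1;

        int n = LZ4_compress_default(src, dst, src_size, bound);
        printf("input %d bytes -> at most %d, actually %d\n", src_size, bound, n);

        free(src);
        free(dst);
        return 0;
}

The same sizing logic applies to the streaming path: with a destination
buffer of LZ4_compressBound() bytes, an incompressible block merely grows
slightly instead of aborting the compression process.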
|
|
They have different sizes on 32-bit, so they are really not interchangeable.
|
|
The new lzma2 compression options at the top of compress_blob_xz are
equivalent to using preset "0", except for using a 1 MiB dictionary
(the same as preset "1"). This makes the memory usage at most 7.5 MiB
in the compressor, and 1 MiB in the decompressor, instead of the
previous 92 MiB in the compressor and 8 MiB in the decompressor.
According to test-compress-benchmark this commit makes XZ compression
20 times faster, with no increase in compressed data size.
Using more realistic test data (an ELF binary rather than ASCII letters
'a' through 'z' repeating in order) it provides only a factor-of-10
speedup, at the cost of a 10% increase in compressed data size.
But that is still a worthwhile trade-off.
According to test-compress-benchmark XZ compression is still 25 times
slower than LZ4, but the compressed data is one eighth the size.
Using more realistic test data XZ compression is only 18 times slower
than LZ4, and the compressed data is only one quarter the size.
$ ./test-compress-benchmark
XZ: compressed & decompressed 2535300963 bytes in 42.30s (57.15MiB/s), mean compression 99.95%, skipped 3570 bytes
LZ4: compressed & decompressed 2535303543 bytes in 1.60s (1510.60MiB/s), mean compression 99.60%, skipped 990 bytes
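A sketch of the kind of setup described, using liblzma's documented API:
start from preset 0, then shrink the dictionary to 1 MiB. The helper name
is illustrative; the exact option block in the commit may differ.

#include <lzma.h>
#include <stddef.h>

/* Sketch: encode a buffer with LZMA2 options equivalent to preset 0,
 * except for a 1 MiB dictionary (the preset-1 dictionary size). */
static int compress_blob_xz_sketch(const void *src, size_t src_size,
                                   void *dst, size_t dst_alloc, size_t *dst_size) {
        lzma_options_lzma opt;
        if (lzma_lzma_preset(&opt, 0))       /* returns true on error */
                return -1;
        opt.dict_size = 1u << 20;            /* 1 MiB dictionary */

        lzma_filter filters[] = {
                { .id = LZMA_FILTER_LZMA2, .options = &opt },
                { .id = LZMA_VLI_UNKNOWN,  .options = NULL },
        };

        size_t out_pos = 0;
        if (lzma_stream_buffer_encode(filters, LZMA_CHECK_NONE, NULL,
                                      src, src_size,
                                      dst, &out_pos, dst_alloc) != LZMA_OK)
                return -1;

        *dst_size = out_pos;
        return 0;
}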
|
|
Add liblz4 as an optional dependency when requested with --enable-lz4,
and use it in preference to liblzma for journal blob and coredump
compression. To retain backwards compatibility, XZ is used to
decompress old blobs.
Things will function correctly only with lz4-119.
Based on benchmarks found on the web, lz4 seems to be the best choice
among "quick" compressors at the moment.
For pkg-config status, see http://code.google.com/p/lz4/issues/detail?id=135.
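The compatibility scheme boils down to tagging each object with the
compressor used and dispatching on that tag when reading. A sketch with
illustrative names (the helpers are assumed to exist elsewhere, and the
flag values are not the real journal-def.h constants):

#include <errno.h>
#include <stddef.h>

/* Assumed helpers provided elsewhere (names illustrative): */
int decompress_blob_xz(const void *src, size_t src_size, void **dst, size_t *dst_size);
int decompress_blob_lz4(const void *src, size_t src_size, void **dst, size_t *dst_size);

/* New blobs are written with LZ4, but the XZ path is kept so old
 * journal objects stay readable. */
enum {
        COMPRESSED_XZ  = 1 << 0,
        COMPRESSED_LZ4 = 1 << 1,
};

static int decompress_blob(int flags, const void *src, size_t src_size,
                           void **dst, size_t *dst_size) {
        if (flags & COMPRESSED_XZ)
                return decompress_blob_xz(src, src_size, dst, dst_size);
        if (flags & COMPRESSED_LZ4)
                return decompress_blob_lz4(src, src_size, dst, dst_size);
        return -EBADMSG;   /* unknown or unsupported compression flag */
}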
|
|
uncompress_startswith() would always decode the whole stream, even
if it did not start with the given prefix.
Its reallocation policy was also strange.
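A sketch of the fixed behaviour, using liblzma's streaming decoder
(illustrative only, not the actual systemd implementation): decode only as
many bytes as the prefix comparison needs, and bail out at the first
mismatch instead of decoding the whole stream.

#include <lzma.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

static bool xz_startswith_sketch(const uint8_t *src, size_t src_size,
                                 const uint8_t *prefix, size_t prefix_len) {
        lzma_stream s = LZMA_STREAM_INIT;
        uint8_t buf[4096];
        size_t matched = 0;
        bool result = prefix_len == 0;   /* empty prefix trivially matches */

        if (lzma_stream_decoder(&s, UINT64_MAX, 0) != LZMA_OK)
                return false;

        s.next_in = src;
        s.avail_in = src_size;

        while (matched < prefix_len) {
                s.next_out = buf;
                s.avail_out = sizeof(buf);

                lzma_ret r = lzma_code(&s, LZMA_FINISH);
                size_t produced = sizeof(buf) - s.avail_out;
                size_t n = produced < prefix_len - matched ? produced : prefix_len - matched;

                if (memcmp(buf, prefix + matched, n) != 0)
                        break;                       /* mismatch: bail out early */
                matched += n;

                if (matched == prefix_len) {
                        result = true;               /* full prefix seen */
                        break;
                }
                if (r != LZMA_OK)
                        break;                       /* stream end, error, or no progress */
        }

        lzma_end(&s);
        return result;
}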
|
|
Add Compression={none,xz} and CompressionLevel=0-9 settings. Defaults
are xz/6.
Compression=filesystem may be added later.
I picked "xz" for the compression "type", since we might want to add
different compressors later on. XZ is fairly memory and CPU intensive, and
embedded users will likely want to use LZO or some other lightweight compression
mechanism.
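For illustration, a hypothetical configuration fragment (the message does
not say which file or section carries these settings; an ini-style
[Journal] section is assumed here):

[Journal]
Compression=xz
CompressionLevel=6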
|
|
This introduces a new data threshold setting for sd_journal objects
which controls the maximum size of objects to decompress. This
relieves the library from having to decompress full data objects even
if a client program is only interested in the initial part of them.
This speeds up "systemd-coredumpctl" drastically when invoked without
parameters.
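For illustration, a minimal client using the sd_journal_set_data_threshold()
call the message describes; the field choice and the threshold value here
are arbitrary.

#include <stdio.h>
#include <systemd/sd-journal.h>

/* Sketch of a client opting into the data threshold: returned data may be
 * truncated to roughly the threshold, which is fine when only the start of
 * large fields is of interest (e.g. a listing tool). */
int main(void) {
        sd_journal *j;
        const void *data;
        size_t length;

        if (sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY) < 0)
                return 1;

        /* Only the first ~64 bytes of each field are needed here. */
        sd_journal_set_data_threshold(j, 64);

        while (sd_journal_next(j) > 0)
                if (sd_journal_get_data(j, "MESSAGE", &data, &length) >= 0)
                        printf("%.*s\n", (int) length, (const char *) data);

        sd_journal_close(j);
        return 0;
}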
|
|
We finally got the OK from all contributors with non-trivial commits to
relicense systemd from GPL2+ to LGPL2.1+.
Some udev bits continue to be GPL2+ for now, but we are looking into
relicensing them too, to allow free copy/paste of all code within
systemd.
The bits that used to be MIT continue to be MIT.
The big benefit of the relicensing is that closed source code may now
link against libsystemd-login.so and friends.
|
|