Age | Commit message | Author |
|
Make the API of the new helpers more similar to the old wrapper.
In particular we now return the hash as a byte string to avoid
any endianness problems.
|
|
test: hashmap - increase number of entries for crippled hash test
|
|
Even more fixes
|
|
The purpose of testing with the crippled hash function is to cover
the otherwise very unlikely codepath in bucket_calculate_dib() where
it has to fall back to recomputing the hash value.
This unlikely path was no longer covered by test-hashmap after
57217c8f "test: hashmap - cripple the hash function by truncating the
input rather than the output".
Restore the test coverage by increasing the number of entries in the test.
The number was determined empirically by checking with lcov.
|
|
hashmap/siphash24: refactor hash functions
|
|
|
|
|
|
|
|
Add support for naming fds for socket activation and more
|
|
networkd: document ability to disable MACAddressPolicy
|
|
libsystemd: sd-device - translate / vs. ! in sysname
|
|
This adds support for naming file descriptors passed using socket
activation. The names are passed in a new $LISTEN_FDNAMES= environment
variable, which matches the existing $LISTEN_FDS= one and contains a
colon-separated list of names.
This also adds support for naming fds submitted to the per-service fd
store using FDNAME= in the sd_notify() message.
This also adds a new FileDescriptorName= setting for socket unit files
to set the name for fds created by socket units.
This also adds a new call sd_listen_fds_with_names(), that is similar to
sd_listen_fds(), but also returns the names of the fds.
systemd-activate gained the new --fdname= switch to specify a name for
testing socket activation.
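A minimal consumer sketch (assuming only the public sd-daemon API named
above; error handling is trimmed, and the names reported depend on what
the unit sets via FileDescriptorName= or FDNAME=):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <systemd/sd-daemon.h>

    int main(void) {
            char **names = NULL;
            int n, i;

            /* Like sd_listen_fds(), but also parses $LISTEN_FDNAMES=. */
            n = sd_listen_fds_with_names(1, &names);
            if (n < 0) {
                    fprintf(stderr, "Failed to get listen fds: %s\n", strerror(-n));
                    return EXIT_FAILURE;
            }

            for (i = 0; i < n; i++)
                    printf("fd %d is named %s\n",
                           SD_LISTEN_FDS_START + i,
                           names[i] ? names[i] : "(unnamed)");

            /* The names array is a NULL-terminated strv owned by us. */
            for (i = 0; i < n; i++)
                    free(names[i]);
            free(names);
            return EXIT_SUCCESS;
    }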
This is based on #1247 by Maciej Wereski.
Fixes #1247.
|
|
A variety of journal vacuuming improvements, plus an nspawn fix
|
|
Let's simplify the fd collection code a bit, and return the number of
collected fds as a positive integer, as is customary in the rest of our
code.
|
|
|
|
We shouldn't exit the loop early, otherwise our duplicate backing
partition check won't work.
|
|
While it is currently possible to either not set MACAddressPolicy or set
it to a value different from "persistent" or "random", it is not obvious
that a user can do so. Add a policy, "none", which simply retains kernel
MAC addresses (same as not filling in the policy at all) and document it
so that users are aware of this setting.
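For illustration, a hypothetical .link file using the new value (the match
pattern is made up):

    [Match]
    OriginalName=eth*

    [Link]
    MACAddressPolicy=none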
Signed-off-by: Jacob Keller <jacob.keller@gmail.com>
|
|
The kernel replaces '/' in device names with '!'. We translate that back
to '/' in sysname, so when taking a sysname as input, we should translate
it the other way again.
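A hedged sketch of the two directions of that translation (hypothetical
helpers, not the actual sd-device code):

    /* sysfs directory name -> sysname: undo the kernel's '!' escaping */
    static void sysname_from_kernel(char *s) {
            for (; *s; s++)
                    if (*s == '!')
                            *s = '/';
    }

    /* user-supplied sysname -> sysfs directory name */
    static void sysname_to_kernel(char *s) {
            for (; *s; s++)
                    if (*s == '/')
                            *s = '!';
    }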
|
|
journal-remote: typo in log_error when no sources are specified
[tomegun: this was a pun, but let's not do that]
|
|
LLDP: add API to export neighbors list (v5)
|
|
networkd: add bridge properties
|
|
networkd: add support to configure preferred source of static routes
|
|
Make sure all variable-length inputs are properly terminated or that
their length is encoded in some way. This avoids ambiguity of
adjacent inputs.
E.g., in the case of a hash function taking two strings, compressing "ab"
followed by "c" is now distinct from compressing "a" followed by "bc".
|
|
All our hash functions are based on siphash24(), so factor out
siphash24_init() and siphash24_finalize() and pass the siphash
state to the hash functions rather than the hash key.
This simplifies the hash functions, and in particular makes
composition simpler: calling siphash24_compress() repeatedly
on separate chunks of input has the same effect as first
concatenating the input and then calling siphash24_compress()
on the result.
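A minimal sketch of the resulting shape of a hash function (illustrative
only, assuming the refactored helpers):

    /* The hash function feeds the caller-provided siphash state instead
     * of computing and returning a truncated value itself. */
    static void string_hash_func(const char *p, struct siphash *state) {
            siphash24_compress(p, strlen(p) + 1, state);
    }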
|
|
test: hashmap - cripple the hash function by truncating the input rather than the output
The reason for the crippled hash function is to reduce the distribution
of the hash function; do this by truncating the domain rather than the
range. This does introduce a change in behaviour as the range is no longer
contiguous, which greatly reduces collisions.
This is needed as a follow-up patch will no longer allow individual hash
functions to alter the output directly.
|
|
Verify the state of the hash function according to the reference paper,
and also verify that we can decompose the input and hash the chunks one
by one and still get the same result.
|
|
|
|
This allows the input to siphash24_compress to be decomposed into
smaller chunks and the function to be called on each individual
chunk.
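A hedged test sketch of that property (assuming the internal siphash24.h
helpers siphash24_init(), siphash24_compress() and siphash24_finalize();
the key and split point are arbitrary):

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>
    #include "siphash24.h"

    int main(void) {
            static const uint8_t key[16] = {
                    0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
                    0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f };
            const char *in = "socket-activation";
            struct siphash one, two;

            /* Compress the whole input in one go... */
            siphash24_init(&one, key);
            siphash24_compress(in, strlen(in), &one);

            /* ...and in two chunks. */
            siphash24_init(&two, key);
            siphash24_compress(in, 6, &two);
            siphash24_compress(in + 6, strlen(in) - 6, &two);

            /* Decomposing the input must not change the result. */
            assert(siphash24_finalize(&one) == siphash24_finalize(&two));
            return 0;
    }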
|
|
siphash24: move the last compression step to the finalization step
The last compression is special as it deals with the length byte and padding. Move
it to the finalization step in preparation for making compression decomposable.
|
|
|
|
|
|
Encapsulate the four state variables in a struct so we can more easily pass
them around.
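Roughly (an assumed layout, shown only to make the idea concrete; the real
definition lives in siphash24.h):

    struct siphash {
            uint64_t v0;
            uint64_t v1;
            uint64_t v2;
            uint64_t v3;
    };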
|
|
|
|
ForwardDelaySec: forward delay
HelloTimeSec: hello time
MaxAgeSec: maximum message age
For more information see
http://www.tldp.org/HOWTO/BRIDGE-STP-HOWTO/set-up-the-bridge.html
In the kernel, br_dev_newlink() does not support setting these parameters
at creation time, while br_changelink() can change them after creation.
So we need to first create the bridge and then set the parameters.
Introduce a new callback, post_create, which sets the properties after
creation.
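For illustration, a hypothetical bridge .netdev using the properties listed
above (the values are made up):

    [NetDev]
    Name=br0
    Kind=bridge

    [Bridge]
    ForwardDelaySec=15
    HelloTimeSec=2
    MaxAgeSec=20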
|
|
By default we set NLM_F_CREATE | NLM_F_EXCL in
sd_rtnl_message_new_link(),
but in the case of bridges we need to set NLM_F_REQUEST | NLM_F_ACK instead.
If NLM_F_EXCL is set, we are unable to set the parameters, as bridges
support setting properties after creation, not during creation.
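For reference, a hedged sketch of the two flag combinations using plain
netlink constants (not the sd-rtnl code itself):

    #include <linux/netlink.h>

    /* Exclusive creation: fails if the link exists, and br_dev_newlink()
     * cannot apply the bridge properties at this point anyway. */
    unsigned create_flags = NLM_F_REQUEST | NLM_F_ACK | NLM_F_CREATE | NLM_F_EXCL;

    /* Plain change request: lets br_changelink() apply the properties to
     * the already-created bridge. */
    unsigned change_flags = NLM_F_REQUEST | NLM_F_ACK;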
|
|
Rename rtnl_link_info_data_bridge_types to
rtnl_link_bridge_management_types,
as they are the types nested inside IFLA_AF_SPEC.
|
|
|
|
Much like the result of the service itself, we should not reset the
reload result unless we actually start from the beginning, so that
clients can query it at any time.
Specifically, let's reset the result states only when we begin with a
start operation (for both the main result and the reload result), when
we begin with a reload operation (only for the reload result), or when the
user explicitly asks for that via "systemctl reset-failed".
This is a more generic fix for #1447.
Fixes #1447.
|
|
Implement a maximum limit on the number of journal files to keep around.
Enforcing a limit is useful here since our performance when viewing the
journal pays a heavy penalty for each additional journal file that has to
be interleaved. This setting is now turned on by default, and set to 100.
Also, actually implement what 348ced909724a1331b85d57aede80a102a00e428
promised: use whatever we find on disk at startup as the lower bound on
how much disk space we can use. That commit introduced some provisions to
implement this, but never actually did.
This also adds "journalctl --vacuum-files=" to vacuum files on disk by
their number explicitly.
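For example (the number is arbitrary), the new switch keeps only the most
recent archived journal files:

    journalctl --vacuum-files=10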
|
|
|
|
Indicate that we are ignoring errors when we ignore them, and log that
at LOG_WARNING level.
Use the right error code for the log message.
|
|
|
|
Let's try to use O_NOATIME if we can when vacuuming old journal files,
if we have the permissions for it, so that vacuuming doesn't count as
proper journal read access.
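A hedged sketch of that pattern (assumed, not the actual journald code):
try O_NOATIME first and quietly fall back when we lack the privilege, since
O_NOATIME requires owning the file or having CAP_FOWNER.

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>

    static int open_for_vacuum(int dir_fd, const char *name) {
            int fd;

            fd = openat(dir_fd, name, O_RDONLY|O_CLOEXEC|O_NOATIME);
            if (fd < 0 && errno == EPERM)
                    /* Not privileged enough for O_NOATIME, open normally. */
                    fd = openat(dir_fd, name, O_RDONLY|O_CLOEXEC);

            return fd < 0 ? -errno : fd;
    }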
|
|
|
|
As is customary everywhere else in our sources.
|
|
Let's use fd_getcrtime_at(), since that *at() family of calls is how we
read the rest of the file metadata, too.
|
|
Add some tests to simulate the reception of LLDP frames and to verify
the correctness of the data in the MIB.
|
|
tlv_packet_read_bytes() and tlv_packet_read_string() returned the
wrong length when called after other functions which modify the offset
in the container.
In other words, if the TLV data length is X and we do a
tlv_packet_read_u8(), a subsequent tlv_packet_read_bytes() should
return a length of (X - 1).
|
|
In order to implement tests for the LLDP state machine, we need to
mock lldp_network_bind_raw_socket(). Move lldp_receive_packet() to a
separate file so that we can replace the former with a custom
implementation while keeping the latter.
|
|
|