Age | Commit message | Author |
|
|
|
We don't use a plural in the name of any other -util file, and this
inconsistency trips me up every time I try to type this file name
from memory. "formats-util" is even hard to pronounce.
|
|
This makes strjoin and strjoina more similar and avoids the useless final
argument.
spatch -I . -I ./src -I ./src/basic -I ./src/basic -I ./src/shared -I ./src/shared -I ./src/network -I ./src/locale -I ./src/login -I ./src/journal -I ./src/journal -I ./src/timedate -I ./src/timesync -I ./src/nspawn -I ./src/resolve -I ./src/resolve -I ./src/systemd -I ./src/core -I ./src/core -I ./src/libudev -I ./src/udev -I ./src/udev/net -I ./src/udev -I ./src/libsystemd/sd-bus -I ./src/libsystemd/sd-event -I ./src/libsystemd/sd-login -I ./src/libsystemd/sd-netlink -I ./src/libsystemd/sd-network -I ./src/libsystemd/sd-hwdb -I ./src/libsystemd/sd-device -I ./src/libsystemd/sd-id128 -I ./src/libsystemd-network --sp-file coccinelle/strjoin.cocci --in-place $(git ls-files src/*.c)
git grep -e '\bstrjoin\b.*NULL' -l|xargs sed -i -r 's/strjoin\((.*), NULL\)/strjoin(\1)/'
This might have missed a few cases (spatch has a really hard time dealing
with _cleanup_ macros), but that's no big issue; they can always be fixed
later.
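For reference, a minimal sketch of how a sentinel-free strjoin() can be
built on top of a sentinel-terminated worker; the names follow systemd's
string-util.h, but treat the details as an illustration rather than the
exact implementation:

#include <stdarg.h>
#include <stdlib.h>
#include <string.h>

/* The variadic worker still relies on a NULL sentinel internally. */
static char *strjoin_real(const char *x, ...) {
        va_list ap;
        const char *p;
        size_t len = strlen(x);

        va_start(ap, x);
        while ((p = va_arg(ap, const char *)))
                len += strlen(p);
        va_end(ap);

        char *joined = malloc(len + 1);
        if (!joined)
                return NULL;

        char *q = stpcpy(joined, x);
        va_start(ap, x);
        while ((p = va_arg(ap, const char *)))
                q = stpcpy(q, p);
        va_end(ap);

        return joined;
}

/* The macro appends the sentinel automatically, so callers now write
 * strjoin(a, b, c) with no trailing NULL. */
#define strjoin(a, ...) strjoin_real((a), __VA_ARGS__, NULL)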
|
|
It's a common pattern, so add a helper for it. A macro is necessary
because a function that takes a pointer to a pointer would be
type-specific, just like cleanup functions are; a macro avoids that.
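To illustrate the point with hypothetical names (this is not necessarily
the helper this commit adds):

#include <stdlib.h>

/* A function taking a pointer to a pointer is tied to one pointee type,
 * exactly like cleanup functions: */
static inline void free_charp(char **p) {
        free(*p);
        *p = NULL;
}

/* A macro works for any pointer type at once: */
#define free_and_null(p)        \
        do {                    \
                free(p);        \
                (p) = NULL;     \
        } while (0)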
|
|
|
|
We are going to add this child as a source to our event loop, so we
don't want to block when reading data from it, as that would prevent us
from processing other events. Specifically, it would block the
signalfds, which means that if we are waiting for data from curl we
won't handle SIGTERM or SIGINT until we happen to get more data.
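A minimal sketch of what that implies, assuming the child's output
arrives on a pipe fd (systemd has an fd_nonblock() helper for this;
shown here with plain fcntl()):

#include <errno.h>
#include <fcntl.h>

/* With O_NONBLOCK, a read() with no data pending returns -1/EAGAIN
 * instead of stalling the whole event loop. */
static int make_fd_nonblocking(int fd) {
        int flags = fcntl(fd, F_GETFL);
        if (flags < 0)
                return -errno;

        if (fcntl(fd, F_SETFL, flags | O_NONBLOCK) < 0)
                return -errno;

        return 0;
}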
|
|
Bug introduced in 1b4cd64683.
|
|
Network file dropins
|
|
In preparation for adding a version which takes a strv.
|
|
It's error-prone and annoying to have to add it manually, and it was
missing from a few places.
|
|
The errno value is not protected (it is undefined after this function
returns). The various mhd_* functions are not documented to protect
errno, so this could not be guaranteed anyway.
|
|
|
|
When a client requests logs with `follow` and a `KEY=match` that
doesn't match any log entry, journal-gatewayd segfaults.
Make request_reader_entries() return zero in that case, to wait for
matching entries.
This fixes https://github.com/systemd/systemd/issues/3873.
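In MHD's content-reader convention, returning 0 means "no data available
yet, call me again". A sketch of the shape of the fix, with RequestMeta
standing in for gatewayd's per-request state:

#include <stdint.h>
#include <sys/types.h>
#include <microhttpd.h>
#include <systemd/sd-journal.h>

typedef struct RequestMeta {    /* simplified stand-in */
        sd_journal *journal;
} RequestMeta;

static ssize_t request_reader_entries(void *cls, uint64_t pos,
                                      char *buf, size_t max) {
        RequestMeta *m = cls;

        int r = sd_journal_next(m->journal);
        if (r < 0)
                return MHD_CONTENT_READER_END_WITH_ERROR;
        if (r == 0)
                return 0;   /* no matching entry yet: wait, don't segfault */

        /* ... otherwise format the current entry into buf and return
         * the number of bytes written ... */
        return 0;
}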
|
|
Serve journals in the specified directory instead of default journals.
|
|
journal-(gatewayd,remote).c don't actually utilize libgnutls even when
HAVE_GNUTLS is defined.
|
|
In this patch "enabled" and "disabled" are used exclusively, but the
"enable" and "disable" forms are needed for the following patch.
|
|
|
|
source->size < source->filled (#3086)
While the function journal-remote-parse.c:get_line() enforces the
assertion that source->filled <= source->size, in
journal-remote-parse.c:process_source() there is a chance that
source->size is decreased below source->filled when source->buf is
reallocated. Therefore, add a check that ensures source->buf is only
reallocated when source->filled is smaller than target / 2.
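In sketch form, with the field names used above (the real code goes
through systemd's realloc helpers):

#include <stdlib.h>

typedef struct RemoteSource {   /* simplified stand-in */
        char *buf;
        size_t size;            /* allocated bytes */
        size_t filled;          /* buffered bytes  */
} RemoteSource;

/* Only shrink the buffer when the buffered data fits with room to
 * spare; otherwise size could drop below filled and violate the
 * invariant get_line() asserts. */
static void maybe_shrink_buffer(RemoteSource *source, size_t target) {
        if (source->filled >= target / 2)
                return;

        char *tmp = realloc(source->buf, target);
        if (!tmp)
                return;         /* keep the old, larger buffer */

        source->buf = tmp;
        source->size = target;
}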
|
|
This moves the O_TMPFILE handling from the coredumping code into common library
code, and generalizes it as open_tmpfile_linkable() + link_tmpfile(). The
existing open_tmpfile() function (which creates an unlinked temporary file that
cannot be linked into the fs) is renamed to open_tmpfile_unlinkable(), to make
the distinction clear. Thus, code may now choose between:
a) open_tmpfile_linkable() + link_tmpfile()
b) open_tmpfile_unlinkable()
depending on whether they want a file that may be linked back into the
fs later on, or not.
In a later commit we should probably convert fopen_temporary() to make use of
open_tmpfile_linkable().
Followup for: #3065
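The kernel mechanism these helpers wrap looks roughly like this (a
sketch, not systemd's actual code; O_TMPFILE needs Linux >= 3.11 and
filesystem support):

#define _GNU_SOURCE     /* for O_TMPFILE */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int write_file_atomically(const char *dir, const char *target) {
        char proc_path[64];

        /* Create an anonymous file in dir: it has no name yet, so
         * readers can never observe it half-written. */
        int fd = open(dir, O_TMPFILE | O_WRONLY | O_CLOEXEC, 0644);
        if (fd < 0)
                return -1;

        /* ... write the contents to fd ... */

        /* Give the finished file its name; linking through /proc is the
         * documented way to link an O_TMPFILE fd into the fs. */
        snprintf(proc_path, sizeof proc_path, "/proc/self/fd/%i", fd);
        if (linkat(AT_FDCWD, proc_path, AT_FDCWD, target,
                   AT_SYMLINK_FOLLOW) < 0) {
                close(fd);
                return -1;
        }

        close(fd);
        return 0;
}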
|
|
Also parse watchdog config when creating the Uploader object.
|
|
It has been observed that a combination of high log throughput, low I/O
speed on the journal-remote side, and many nodes uploading
simultaneously can cause the journal-upload process to dump core due to
watchdog starvation. This happens because journal-upload stays inside
curl_easy_perform() when it cannot upload fast enough to reach the end
of the journal; currently it returns from curl_easy_perform() only when
the end of the journal is reached. Therefore, add a check in
journal_input_callback() which updates the watchdog if the time elapsed
since the start of the upload is greater than WATCHDOG_USEC/2.
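A sketch of the added check, with assumed variable names; sd_notify() is
the real sd-daemon call, and the watchdog budget would have been read
earlier via sd_watchdog_enabled():

#include <stdint.h>
#include <time.h>
#include <systemd/sd-daemon.h>

static uint64_t now_usec(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t) ts.tv_sec * 1000000 + (uint64_t) ts.tv_nsec / 1000;
}

/* Called from journal_input_callback() while curl is still uploading:
 * ping the watchdog once more than half the budget has elapsed. */
static void maybe_ping_watchdog(uint64_t *start_usec, uint64_t watchdog_usec) {
        if (watchdog_usec == 0)
                return;         /* watchdog not enabled */

        if (now_usec() - *start_usec > watchdog_usec / 2) {
                sd_notify(0, "WATCHDOG=1");
                *start_usec = now_usec();
        }
}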
|
|
|
|
Throughout the tree there's spurious use of spaces separating ++ and --
operators from their respective operands. Make ++ and -- operator
consistent with the majority of existing uses; discard the spaces.
|
|
lldp fixes, second iteration
|
|
Usually, we place the #pragma once before the copyright blurb in header files,
but in a few cases we didn't. Move those around, so that we do the same thing
everywhere.
|
|
When we rotate journals, we must set offline and close the current one,
but don't generally need to wait for this to complete.
Instead, we'll initiate an asynchronous offline via
journal_file_set_offline(oldfile, false), and add the file to a
per-server set of deferred closes to be closed later when they
won't block.
There's one complication, however: journal_file_open() via
journal_file_verify_header() assumes that any writable journal in the
online state is the product of an unclean shutdown or other form of
corruption.
Thus there's a need for journal_file_open() to be aware of deferred
closes and synchronize with their completion when opening preexisting
journals for writing. To facilitate this the deferred closes set is
supplied to the journal_file_open() function where the deferred closes
may be closed synchronously before verifying the header in such
circumstances.
|
|
This should be handled fine now by .dir-locals.el, so there's no need
to carry that stuff in every file.
|
|
Set MHD_OPTION_CONNECTION_MEMORY_LIMIT to 128KB. The previous value was
DATA_SIZE_MAX, which was defined as 1024*1024*768. This caused
journal-remote to allocate 768MB for each journal-upload connection,
thus exhausting the available memory.
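In libmicrohttpd terms, the limit is passed when the daemon is started; a
sketch, with a placeholder handler and journal-remote's default port of
19532 (handler signature as in the 0.9.x era this message targets; newer
libmicrohttpd uses enum MHD_Result):

#include <microhttpd.h>

/* Placeholder for journal-remote's real access handler. */
static int request_handler(void *cls, struct MHD_Connection *conn,
                           const char *url, const char *method,
                           const char *version, const char *upload_data,
                           size_t *upload_data_size, void **con_cls) {
        return MHD_NO;
}

static struct MHD_Daemon *start_daemon(void) {
        /* The memory limit caps what MHD may allocate per connection. */
        return MHD_start_daemon(
                        MHD_USE_THREAD_PER_CONNECTION | MHD_USE_DUAL_STACK,
                        19532,
                        NULL, NULL,
                        request_handler, NULL,
                        MHD_OPTION_CONNECTION_MEMORY_LIMIT,
                        (size_t) 128 * 1024,
                        MHD_OPTION_END);
}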
|
|
ZJS: remove unnecessary oom check after strdupa().
|
|
This commit fixes the broken --getter option:
when systemd-journal-remote is called with the --getter option,
it fails with the error message "Zero sources specified" and
the getter command is never called.
|
|
When the --url option is specified, e.g. --url='http://some.host:19531/entries',
the retrieved remote journal entries will be stored in
/var/log/journal/remote/remote-some.host.journal
|
|
Currently, the --url option only supports a form like http(s)://some.host:19531.
This commit adds support for calling systemd-journal-remote as follows:
systemd-journal-remote --url='http://some.host:19531'
systemd-journal-remote --url='http://some.host:19531/'
systemd-journal-remote --url='http://some.host:19531/entries'
systemd-journal-remote --url='http://some.host:19531/entries?boot&follow'
The first three examples produce the same result and retrieve all entries.
The last example retrieves only entries of the current boot and waits for
new events.
|
|
chaloulo/split-mode-host-remove-port-from-journal-filename
journal-remote: split-mode=host, remove port from journal filename
|
|
core: Add a flexible way to provide the socket type
The socket type should be a different argument to
make_socket_fd(). This way we can set a socket type such as
SOCK_STREAM or SOCK_DGRAM for the address.
journal-remote: modify make_socket_fd
|
|
journal-upload: Ignore journal event when already in uploading state.
|
|
I was checking something when writing the patch and
committed this by mistake.
|
|
A 64-bit offset is now accepted, which is nice. The old function is
deprecated and generates a compile-time warning when used. We only
use an offset of 0, so we really don't care. Adapt to use the new
function, but fall back to the old one on older versions.
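A sketch of the resulting compat shim; the exact MHD_VERSION cutoff is an
assumption, so check the headers of the libmicrohttpd you build against:

#include <stdint.h>
#include <microhttpd.h>

/* On older libmicrohttpd, fall back to the deprecated function; with an
 * offset of 0 its narrower off_t/size_t types don't matter. */
#if MHD_VERSION < 0x00094500
#  define MHD_create_response_from_fd_at_offset64 \
          MHD_create_response_from_fd_at_offset
#endif

static struct MHD_Response *response_for_fd(int fd, uint64_t size) {
        return MHD_create_response_from_fd_at_offset64(size, fd, 0);
}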
|
|
src/journal-remote/journal-remote.c:590:13: warning: Value MHD_HTTP_METHOD_NOT_ACCEPTABLE is deprecated, use MHD_HTTP_NOT_ACCEPTABLE
return mhd_respond(connection, MHD_HTTP_METHOD_NOT_ACCEPTABLE,
^
The new define was added in 0.9.38. Instead of requiring the new
libmicrohttpd version, provide the fallback; it is trivial.
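The fallback amounts to a single compat define, guarded on the version
the message mentions; both names carry HTTP status 406:

#include <microhttpd.h>

/* libmicrohttpd < 0.9.38 only has the misnamed constant. */
#if MHD_VERSION < 0x00093800
#  define MHD_HTTP_NOT_ACCEPTABLE MHD_HTTP_METHOD_NOT_ACCEPTABLE
#endif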
|
|
journal-gatewayd: timeout journal wait to allow thread cleanup
|
|
While journals received remotely can be sealed, this can only be
enabled on the command line using --seal, so for consistency we will
also permit setting it in the configuration file.
|
|
When a client connects with follow=1 and then disconnects, we can get
stuck in sd_journal_wait() indefinitely if no journal messages are logged.
Every time a client does this, another thread is allocated, and these
continue to stack up until either a journal message is logged or we run
out of address space to put a stack in.
By adding a timeout, if we don't see any journal messages within that
timeout, we simply pop back out to microhttpd, which sanity-checks the
connection for us and, if the client is still connected, pops us back
into the wait for more journal messages.
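In sketch form (the concrete timeout value is an assumption):

#include <systemd/sd-journal.h>

static int wait_for_entries(sd_journal *j) {
        /* Bound the wait so the thread periodically returns to
         * microhttpd, which drops it if the client has gone away. */
        int r = sd_journal_wait(j, 30 * 1000000ULL /* usec */);
        if (r < 0)
                return r;

        /* r is SD_JOURNAL_NOP on timeout; microhttpd re-invokes the
         * reader callback if the connection is still alive. */
        return 0;
}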
|
|
When the log rate is high, it is possible for the callback
dispatch_journal_input() to be called twice while the program is in the
uploading state. There is a guard for this in dispatch_journal_input(),
but it is not enough, as the uploading state may not yet be set when the
code is inside dispatch_journal_input().
The result is that a log entry would be skipped, as
sd_journal_next_skip() would be called twice.
Add a check in process_journal_input(), just before the call to
sd_journal_next_skip(), to make sure a duplicate callback is ignored
when the first callback is in the uploading state.
Also remove the warning log from dispatch_journal_input(), as this
occurrence is normal.
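A sketch of the added guard, with the Uploader state simplified to the
fields the message describes:

#include <stdbool.h>
#include <systemd/sd-journal.h>

typedef struct Uploader {       /* simplified stand-in */
        sd_journal *journal;
        bool uploading;
} Uploader;

static int process_journal_input(Uploader *u, uint64_t skip) {
        if (u->uploading)
                return 0;       /* duplicate callback: upload already in flight */

        int r = sd_journal_next_skip(u->journal, skip);
        if (r < 0)
                return r;

        /* ... start the upload and set u->uploading = true ... */
        return 0;
}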
|
|
When constructing the journal filename used to store logs from a remote
host, remove the port of the TCP connection, as the port changes with
every reboot or connection loss between the sender and receiver
machines. Having the port in the filename causes a new journal file to
be created for every reboot or connection loss.
For the implementation, a new argument "bool include_port" is added to
the getpeername_pretty() function and passed on to sockaddr_pretty().
The value of the include_port argument is set to true in all calls of
getpeername_pretty(), except for two calls in journal-remote.c, where it
is set to false.
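A sketch of the changed signature and a journal-remote call site:

#include <stdbool.h>

int getpeername_pretty(int fd, bool include_port, char **ret);

/* Without the port, the peer name, and hence the journal filename
 * derived from it, stays stable across reconnects. */
static int peer_hostname(int fd, char **ret) {
        return getpeername_pretty(fd, /* include_port = */ false, ret);
}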
|
|
GLIB has recently started to officially support the gcc cleanup
attribute in its public API, hence let's do the same for our APIs.
With this patch we'll define an xyz_unrefp() call for each public
xyz_unref() call, to make it easy to use inside a
__attribute__((cleanup())) expression. Then, all code is ported over to
make use of this.
The new calls are also documented in the man pages, with examples how to
use them (well, I only added docs where the _unref() call itself already
had docs, and the examples, only cover sd_bus_unrefp() and
sd_event_unrefp()).
This also renames sd_lldp_free() to sd_lldp_unref(), since that's how we
tend to call our destructors these days.
Note that this defines no public macro that wraps gcc's attribute and
makes it easier to use. While I think it's our duty in the library to
make our stuff easy to use, I figure it's not our duty to make gcc's own
features easy to use on its own. Most likely, client code which wants to
make use of this should define its own:
#define _cleanup_(function) __attribute__((cleanup(function)))
Or similar, to make the gcc feature easier to use.
Making this logic public has the benefit that we can remove three header
files whose only purpose was to define these functions internally.
See #2008.
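For instance, with the caller-defined macro above, the new
sd_bus_unrefp() is used like this:

#include <systemd/sd-bus.h>

#define _cleanup_(function) __attribute__((cleanup(function)))

int use_bus(void) {
        _cleanup_(sd_bus_unrefp) sd_bus *bus = NULL;
        int r;

        r = sd_bus_open_system(&bus);
        if (r < 0)
                return r;

        /* ... use the bus; sd_bus_unrefp(&bus) runs automatically on
         * every return path ... */
        return 0;
}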
|
|
This is a continuation of the previous include-sort patch, which
only sorted the .c files.
|
|
Sort the includes according to the new coding style.
|
|
The macro is generically useful for putting together search paths, hence
let's make it truly generic, by dropping the implicit ".d" appending it
does, and leave that to the caller. Also rename it from
CONF_DIRS_NULSTR() to CONF_PATHS_NULSTR(), since it's not strictly about
dirs that way, but any kind of file system path.
Also, mark CONF_DIR_SPLIT_USR() as internal macro by renaming it to
_CONF_PATHS_SPLIT_USR() so that the leading underscore indicates that
it's internal.
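For illustration, the renamed macro expands to something like the
following (directory list abridged and assumed; note the caller now
appends ".d" itself):

#define CONF_PATHS_NULSTR(n)                    \
        "/etc/" n "\0"                          \
        "/run/" n "\0"                          \
        "/usr/local/lib/" n "\0"                \
        "/usr/lib/" n "\0"

/* e.g. CONF_PATHS_NULSTR("systemd/system.conf.d") yields a
 * NUL-separated list of search paths. */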
|
|
Our functions return negative error codes.
Do not rely on errno being set after calling our own functions.
|
|
After all, this is not some compiler or C magic, but something very
specific to how systemd works, hence let's move it into def.h, and out
of macro.h.
|
|
This is useful to check that compression actually works, and how
compression influences file size in the best-case scenario for
compression. (The answer: not as much as one would hope. There's
still a big overhead from the indexing, and since every field is
compressed separately, even fields that compress very well
contribute to the file size. This overhead only becomes negligible
for very big fields.)
|