|
|
|
|
|
|
|
uuid/id128 code rework
|
|
Safe is safe: let's turn off the whole logic if we can; after all, it is
unlikely we'll be able to process further crashes in a reasonable way.
|
|
Fixes: #3285
|
|
Add support for relative TasksMax= specifications, and bump default for services
|
|
Fixes: #3573
Replaces: #3588
|
|
Don't check inhibitors when operating remotely. The interactivity that
inhibitors imply can't be provided anyway, and the current code checks for
local sessions directly, via various sd_session_xyz() APIs, hence let's bypass
it entirely if we operate on remote systems.
Fixes: #3476
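For example (the hostname is illustrative), a remote invocation such as
  systemctl -H root@example.com reboot
is no longer subject to inhibitor checks, which only ever covered local
sessions anyway.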
|
|
https://github.com/systemd/systemd/pull/3685 introduced
/run/systemd/inaccessible/{chr,blk} to map inaccessible devices.
This patch allows systemd running inside an nspawn container to create
/run/systemd/inaccessible/{chr,blk}.
|
|
|
|
Because /run/systemd/inaccessible/{chr,blk} are device nodes with
major=0 and minor=0, it is possible that these devices cannot be created,
so we use /run/systemd/inaccessible/sock instead to map them.
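As an illustration of why the fallback is needed (a sketch, not part of the
patch): in an unprivileged user namespace,
  mknod /run/systemd/inaccessible/chr c 0 0
may fail with EPERM, whereas a socket can always be created.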
|
|
With this NSS module, all dynamic service users will be resolvable via NSS
just like any real user.
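For instance, assuming a unit with DynamicUser= enabled and User=foobar (the
name is hypothetical), the following should work while the service is running:
  getent passwd foobar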
|
|
|
|
Dynamically allocate a user while the service is running
This adds a new boolean setting DynamicUser= to service files. If set, a new
user will be allocated dynamically when the unit is started, and released when
it is stopped. The user ID is allocated from the range 61184..65519. The user
will not be added to /etc/passwd (but an NSS module to be added later should
make it show up in getent passwd).
For now, care should be taken that the service writes no files to disk, since
these might end up owned by UIDs that later get dynamically assigned to a
different service. Later patches will tighten sandboxing in order to
ensure that this cannot happen, except in a few selected directories.
A simple way to test this is:
systemd-run -p DynamicUser=1 /bin/sleep 99999
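Alternatively, a minimal (purely illustrative) unit file for the same test:
  # dynuser-test.service -- hypothetical unit
  [Service]
  DynamicUser=yes
  ExecStart=/bin/sleep 99999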
|
|
Let's verify that the syntax of the configured user/group names is valid.
|
|
This way we can reuse them for validating User=/Group= settings in unit files
(to be added in a later commit).
Also, add some tests for them.
|
|
Just in case...
|
|
To remove the hard dependency on systemd for packages which function
without a running systemd, the %systemd_ordering macro can be used to
ensure ordering in the rpm transaction. %systemd_ordering makes sure that
the systemd rpm is installed prior to the package, so that the %pre/%post
scripts can execute the systemd parts.
Installing systemd afterwards, though, does not produce the same outcome.
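Usage is a one-liner in the spec file preamble (a sketch, assuming the
standard macros shipped in macros.systemd):
  %{?systemd_ordering}
as opposed to %{?systemd_requires}, which adds hard
Requires(post)/Requires(preun)/Requires(postun) dependencies on systemd.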
|
|
As it turns out, the current maximum of 512 tasks per service is hit by too
many applications, hence let's bump it a bit, and make it relative to the
system's maximum number of PIDs. With this change the new default is 15%. At
the kernel's default pids_max value of 32768 this translates to 4915. At
machined's default TasksMax= setting of 16384 this translates to 2457.
Why 15%? Because it sounds like a round number and is close enough to 4096,
which I was going for, i.e. an eight-fold increase over the old 512.
Summary:
            | on the host | in a container
old default |         512 |            512
new default |        4915 |           2457
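As before, the default remains configurable via DefaultTasksMax= in
/etc/systemd/system.conf (the value below is illustrative):
  [Manager]
  DefaultTasksMax=20%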
|
|
Let's change from a fixed value of 12288 tasks per user to a relative value of
33%, which with the kernel's default of 32768 translates to 10813. This is a
slight decrease of the limit, for no other reason than "33%" sounding like a nice
round number that is close enough to 12288 (which would translate to 37.5%).
(Well, it also has the nice effect of still leaving a bit of room in the PID
space if there are 3 cooperating evil users that try to consume all PIDs...
Also, I like my bikesheds blue).
Since the new value is relative, and machined's TasksMax= setting
defaults to 16384, 33% inside of containers is usually equivalent to 5406,
which should still be ample space.
To summarize:
            | on the host | in the container
old default |       12288 |          12288
new default |       10813 |           5406
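The limit stays configurable via UserTasksMax= in /etc/systemd/logind.conf
(illustrative value):
  [Login]
  UserTasksMax=40%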
|
|
|
|
That way, we can neatly keep this in line with the new TasksMaxScale= option.
Note that we haven't released a version with MemoryLimitByPhysicalMemory= yet,
hence this change should be unproblematic and doesn't break API.
|
|
This adds support for a TasksMax=40% syntax for specifying values relative to
the system's configured maximum number of processes. This is useful in order to
neatly subdivide the available room for tasks within containers.
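For instance, in a unit file (the value is illustrative):
  [Service]
  TasksMax=40%
or at runtime, via something like 'systemctl set-property foo.service
TasksMax=40%' (the unit name is hypothetical).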
|
|
If specified, we'll simply output the machine ID in use.
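A sketch of the usage, assuming the new switch is called --print:
  systemd-machine-id-setup --print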
|
|
This allows us to delete quite a bit of code and make the whole thing a lot
shorter.
|
|
If the return parameter is NULL, simply validate the string, and return no
error.
|
|
|
|
|
|
With this change we'll no longer write to /etc/machine-id from nspawn, as that
breaks the --volatile= operation: it ensures the image is never considered to
be in "first boot", since first-boot detection is bound to the pre-existence
of /etc/machine-id.
The new logic works like this:
- If /etc/machine-id already exists in the container, it is read by nspawn and
  exposed in "machinectl status" and friends.
- If the file doesn't exist yet, but --uuid= is passed on the nspawn command
  line, this UUID is passed in $container_uuid to PID 1, and PID 1 is then
  expected to persist it to /etc/machine-id for future boots (which systemd
  already does).
- If the file doesn't exist yet, and no --uuid= is passed, a random UUID is
  generated and passed via $container_uuid.
The result is that /etc/machine-id is never initialized by nspawn itself, thus
unbreaking the volatile mode. However, the machine ID configured in the
machine still always matches nspawn's, and thus machined's, idea of it.
Fixes: #3611
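For example (path and UUID are illustrative):
  systemd-nspawn -D /var/lib/machines/foo --uuid=0f3b5e2a94d341d7a2e1c7b8d9f01234
will hand the given UUID to the container's PID 1 via $container_uuid if the
image has no /etc/machine-id yet.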
|
|
|
|
If we show both a control and a main PID for a service fix this line in the
output of "systemctl status":
Main PID: 19670 (sleep); : 19671 (sleep)
to become this:
Main PID: 19670 (sleep); Control PID: 19671 (sleep)
|
|
Unify UUID file handling in id128-util.[ch]
|
|
We currently have code to read and write files containing UUIDs at various
places. Unify this in id128-util.[ch], and move some other stuff there too.
The new files are located in src/libsystemd/sd-id128/ (instead of src/shared/),
because they are actually the backend of sd_id128_get_machine() and
sd_id128_get_boot().
In follow-up patches we can use this to reduce the code in nspawn and
machine-id-setup by adopting the common implementation.
|
|
It's a bit easier to read because it's shorter. Also, it's most likely a tiny bit faster.
|
|
log about all processes we forcibly kill
|
|
Assorted fixes
|
|
https://github.com/systemd/systemd/pull/3685 introduced
/run/systemd/inaccessible/{chr,blk} to map inaccessible devices.
This patch allows systemd running inside an nspawn container to create
/run/systemd/inaccessible/{chr,blk}.
|
|
|
|
Previously, we would not mount the ESP except on EFI boots, and only when the
ESP used for booting matched the ESP we found.
With this change on non-EFI boots we'll mount a discovered ESP anyway, and on
EFI boots we'll only mount it if it matches the ESP we booted from.
|
|
With this change kernel-install will now first look for an existing kernel
installation in /efi, /boot and /boot/efi. If none is found, /efi is used if it
is a mount point, otherwise /boot/efi if it is one. If neither of those worked,
/boot is used without further checking.
This means /boot should be the default unless something was installed before or
something else was explicitly mounted.
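The mount-point fallback (i.e. once no existing installation was found)
roughly corresponds to this shell sketch, which is illustrative rather than
the actual implementation:
  if mountpoint -q /efi; then echo /efi
  elif mountpoint -q /boot/efi; then echo /boot/efi
  else echo /boot
  fi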
|
|
Let's use the proper APIs to read the machine ID, and properly check for all
errors.
|
|
|
|
Make sure that we always initialize the return parameter on success, and that
all errors result in an error message, not just some.
|
|
After all, the field is kinda borked.
|
|
We already have tolower() calls there, hence let's unify this in one place.
Also, update the code to only use ASCII operations, so that we don't end up
being locale-dependent.
|
|
|
|
This rearranges bootctl a bit, so that it uses the usual verb parsing
routines, and automatically searches for the ESP in /boot, /efi or /boot/efi,
thus increasing compatibility with mainstream distros that insist on /boot/efi.
This also adds minimal support for running bootctl in a container environment:
when run inside a container, verification of the ESP via raw block device
access is skipped, trusting the container manager to mount the ESP correctly.
Moreover, EFI variables are not accessed when running in a container.
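For example, to inspect an ESP at a non-default location (assuming the
--path= override switch; the path is illustrative):
  bootctl --path=/boot/efi status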
|
|
|
|
|