The idev-keyboard object provides keyboard devices to the idev interface.
It uses libxkbcommon to provide proper keymap support.
So far, the keyboard implementation is pretty straightforward with one
keyboard device per matching evdev element. We feed everything into the
system keymap and provide proper high-level keyboard events to the
application. Compose features and IM support need to be added later.
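For reference, the libxkbcommon side of this looks roughly as follows. This is
a minimal sketch of the library usage, not the idev-keyboard code itself; the
helper name is made up:

/* Sketch only: load the default keymap via libxkbcommon and translate one
 * evdev keycode into a keysym name. Error handling is abbreviated. */
#include <stdint.h>
#include <stdio.h>
#include <xkbcommon/xkbcommon.h>

static void print_keysym(uint32_t evdev_keycode) {
        struct xkb_context *ctx;
        struct xkb_keymap *keymap;
        struct xkb_state *state;
        xkb_keysym_t sym;
        char name[64];

        ctx = xkb_context_new(XKB_CONTEXT_NO_FLAGS);

        /* NULL names pick up the system-wide default RMLVO settings. */
        keymap = xkb_keymap_new_from_names(ctx, NULL, XKB_KEYMAP_COMPILE_NO_FLAGS);
        state = xkb_state_new(keymap);

        /* X11/XKB keycodes are the evdev keycode plus 8. */
        sym = xkb_state_key_get_one_sym(state, evdev_keycode + 8);
        xkb_keysym_get_name(sym, name, sizeof(name));
        printf("keycode %u -> %s\n", evdev_keycode, name);

        xkb_state_unref(state);
        xkb_keymap_unref(keymap);
        xkb_context_unref(ctx);
}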
|
|
The evdev-element provides linux evdev interfaces as idev-elements. This
way, all real input hardware devices on linux can be used with the idev
interface.
We use libevdev to interface with the kernel. It's a simple wrapper
library around the kernel evdev API that takes care to resync devices
after kernel-queue overflows, which is a rather non-trivial task.
Furthermore, it's a well-tested interface used by all other major input
users (Xorg, weston, libinput, ...).
Last but not least, it provides keycode-to-keyname lookup tables (and
vice versa), which is really handy for debugging input problems.
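For reference, the libevdev usage this builds on looks roughly as follows.
This is a minimal sketch, not the actual element code; the helper name is
made up:

/* Sketch: wrap an evdev node with libevdev and drain pending events,
 * letting libevdev handle the SYN_DROPPED resync for us. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <libevdev/libevdev.h>

static int dump_pending_events(const char *node) {
        struct libevdev *dev = NULL;
        struct input_event ev;
        int fd, r;

        fd = open(node, O_RDONLY|O_NONBLOCK|O_CLOEXEC);
        if (fd < 0)
                return -errno;

        r = libevdev_new_from_fd(fd, &dev);
        if (r < 0) {
                close(fd);
                return r;
        }

        for (;;) {
                r = libevdev_next_event(dev, LIBEVDEV_READ_FLAG_NORMAL, &ev);
                if (r == -EAGAIN)
                        break;          /* no more events queued */
                if (r == LIBEVDEV_READ_STATUS_SYNC) {
                        /* The kernel queue overflowed; replay the sync events. */
                        while (r == LIBEVDEV_READ_STATUS_SYNC)
                                r = libevdev_next_event(dev, LIBEVDEV_READ_FLAG_SYNC, &ev);
                        continue;
                }
                if (r < 0)
                        break;          /* real error */

                /* The lookup tables mentioned above: code -> human-readable name. */
                printf("%s %s %d\n",
                       libevdev_event_type_get_name(ev.type),
                       libevdev_event_code_get_name(ev.type, ev.code),
                       ev.value);
        }

        libevdev_free(dev);
        close(fd);
        return r == -EAGAIN ? 0 : r;
}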
|
|
The idev-interface provides input drivers for all libsystemd-terminal
based applications. It is split into 4 main objects:
idev_context: The context object tracks global state of the input
interface. This will include data like system-keymaps,
xkb contexts and more.
idev_session: A session serves as controller for a set of devices.
Each session on an idev-context is independent of each
other. The session is also the main notification object.
All events raised via idev are reported through the
              session interface. Apart from that, the session is a
pretty dumb object that just contains devices.
idev_element: Elements provide real hardware in the idev stack. For
each hardware device, one element is added. Elements
have no knowledge of higher-level device types, they
only provide raw input data to the upper levels. For
example, each evdev device is represented by a different
element in an idev session.
idev_device: Devices are objects that the application deals with. An
             application is usually not interested in elements (and
             those are hidden from applications); instead, it wants
             high-level input devices like keyboards, touchpads, mice
             and more. Devices are the high-level interface provided
             by idev. Each device might be fed by a set of elements.
             Elements drive the device. If elements are removed,
             devices are destroyed. If elements are added, suitable
             devices are created.
Applications should monitor the system for sessions and hardware devices.
For each session they want to operate on, they create an idev_session
object and add hardware to that object. The idev interface requires the
application to monitor the system for hardware devices (preferably via
sysview_*, though any mechanism will do). Whenever hardware is added to the
idev session, new devices *might* be created. The relationship between
hardware and high-level idev-devices is hidden in the idev-session and not
exposed.
Internally, the idev elements and devices are virtual objects. Each real
hardware type and device type inherits from those virtual objects and
provides real elements and devices. Those types will be added in follow-up
commits.
Data flow from hardware to the application is done via idev_*_feed()
functions. Data flow from applications to hardware is done via
idev_*_feedback() functions. Feedback is usually used for LEDs, force
feedback (FF) and similar operations.
|
|
We're going to need multiple binaries that provide session-services via
logind device management. To avoid re-writing the seat/session/device
scan/monitor interface for each of them, this commit adds a generic helper
to libsystemd-terminal:
The sysview interface scans and tracks seats, sessions and devices on a
system. It basically mirrors the state of logind on the application side.
Now, each session-service can listen for matching sessions and
attach to them. On each session, managed device access is provided. This
way, it is pretty simple to write session-services that attach to multiple
sessions (even split across seats).
|
|
The bus_map_all_properties() helper calls
org.freedesktop.DBus.Properties.GetAll() on a given target and parses the
result according to a given property-table. This simplifies dealing with
DBus.Properties significantly. However, the function is blocking and thus
not really useful in many situations.
This patch extracts the core of this function and adds two new helpers
which directly take dbus-messages as arguments. This way, you can issue
asynchronous requests and parse the result via these helpers:
bus_message_map_all_properties():
This is the same as bus_map_all_properties() but takes the result
message from a GetAll() request as argument. You can thus issue an
asynchronous GetAll() request and then use this helper once you got
the result.
bus_message_map_properties_changed():
    This function takes a signal-message that was retrieved via a
    PropertiesChanged signal and parses it as if it had been retrieved
    via GetAll(). Furthermore, this function returns the number of
    matched properties that were invalidated by the PropertiesChanged
    signal but did not carry a new value. This way, the caller can
    issue a new GetAll() request and then parse the result.
The old function bus_map_all_properties() is functionally unchanged, but
now uses bus_message_map_all_properties() internally.
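As a rough sketch of how the asynchronous variant can be used (the
property-table layout, the exact signature of bus_message_map_all_properties(),
and the logind names below are assumptions made for illustration, not copied
from the tree):

/* Sketch: issue GetAll() asynchronously and parse the reply with the new
 * message-based helper once it arrives. */
#include <stddef.h>
#include <stdint.h>

#include "sd-bus.h"
#include "bus-util.h"   /* assumed home of struct bus_properties_map and the helpers */

typedef struct SessionInfo {
        char *id;
        uint32_t vtnr;
} SessionInfo;

static const struct bus_properties_map session_map[] = {
        { "Id",   "s", NULL, offsetof(SessionInfo, id)   },
        { "VTNr", "u", NULL, offsetof(SessionInfo, vtnr) },
        {}
};

static int get_all_reply(sd_bus_message *reply, void *userdata, sd_bus_error *error) {
        SessionInfo *info = userdata;

        /* Parse the asynchronous GetAll() result according to the table
         * (signature assumed: message, map, userdata). */
        return bus_message_map_all_properties(reply, session_map, info);
}

static int request_session_properties(sd_bus *bus, SessionInfo *info) {
        /* Fire off GetAll() without blocking; the reply is handled above. */
        return sd_bus_call_method_async(bus, NULL,
                                        "org.freedesktop.login1",
                                        "/org/freedesktop/login1/session/self",
                                        "org.freedesktop.DBus.Properties", "GetAll",
                                        get_all_reply, info,
                                        "s", "org.freedesktop.login1.Session");
}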
|
|
This is a useful helper, make it global. It will be required for
libsystemd-terminal, at minimum.
|
|
fprintf() does not add newlines automatically like log_*() does. Add the
missing \n so that "udevadm" invoked without arguments adds a newline
to:
udevadm: missing or unknown command
|
|
Our bus-name watch helpers only remove a bus-name if it is no longer a
controller. If we call manager_drop_busname() before
unregistering the controller, the busname will not be dropped. Therefore,
first drop the controller, then drop the bus-name.
|
|
If you stack container_of() macros, you will get warnings due to shadowing
variables of the parent context. To avoid this, use unique names for
variables.
Two new helpers are added:
UNIQ:   This evaluates to a truly unique value never returned by any
        other evaluation of this macro. It's a shortcut for __COUNTER__.
UNIQ_T: Takes two arguments and concatenates them. It is a shortcut for
        CONCATENATE, but meant to define typed local variables.
As you usually want to use variables that you just defined, you need to
reference the same unique value at least two times. However, UNIQ returns
a new value on each evaluation, therefore, you have to pass the unique
values into the macro like this:
#define my_macro(a, b) __my_macro(UNIQ, UNIQ, (a), (b))
#define __my_macro(uniqa, uniqb, a, b) ({                       \
        typeof(a) UNIQ_T(A, uniqa) = (a);                       \
        typeof(b) UNIQ_T(B, uniqb) = (b);                       \
        MY_UNSAFE_MACRO(UNIQ_T(A, uniqa), UNIQ_T(B, uniqb));    \
})
This way, MY_UNSAFE_MACRO() can safely evaluate its arguments multiple
times as they are local variables. But you can also stack invocations of
the macro my_macro() without clashing names.
This is the same as if you did:
#define my_macro(a, b) __my_macro(__COUNTER__, __COUNTER__, (a), (b))
#define __my_macro(prefixa, prefixb, a, b) ({                          \
        typeof(a) CONCATENATE(A, prefixa) = (a);                       \
        typeof(b) CONCATENATE(B, prefixb) = (b);                       \
        MY_UNSAFE_MACRO(CONCATENATE(A, prefixa), CONCATENATE(B, prefixb)); \
})
...but in my opinion, the first macro is easier to write and read.
This patch starts by converting container_of() to use this new helper.
Other macros may follow (like MIN, MAX, CLAMP, ...).
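For illustration, the converted container_of() ends up roughly along these
lines (a sketch; the exact definition in the tree may differ in details):

#define container_of(ptr, type, member) __container_of(UNIQ, (ptr), type, member)
#define __container_of(uniq, ptr, type, member)                               \
        ({                                                                    \
                const typeof( ((type*)0)->member ) *UNIQ_T(A, uniq) = (ptr);  \
                (type*)( (char *)UNIQ_T(A, uniq) - offsetof(type, member) );  \
        })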
|
|
The UNIQUE() macro works fine if used in un-stacked macros. However, once
you stack them like:
MAX(MIN(a, b),
CLAMP(MAX(c, d), e, f))
you will get warnings due to shadowing other variables. gcc uses the last
line of a macro expansion as the value of __LINE__; therefore, we cannot
avoid this even by splitting the expressions across lines.
Remove the only user of UNIQUE() so we can introduce a new helper in
follow-ups.
|
|
|
|
|
|
|
|
Reportedly also applies to NP900X4B, so relax the match to apply to all models
of this series.
https://launchpad.net/bugs/902332
|
|
https://bugs.freedesktop.org/show_bug.cgi?id=83015
|
|
|
|
"each device" was suggesting that this service might be instantiated
multiple times. "hibernation resume" was too jargon-y.
|
|
hibernate-resume-generator understands the resume= kernel command line
parameter and instantiates systemd-resume@.service accordingly if it is
passed. This enables resume from hibernation using the device specified on
the kernel command line; it may be given as "/dev/disk/by-foo/bar" or
"FOO=bar", not only as "/dev/sdXY", which is all the in-kernel
implementation understands.
So now resume= is brought on par with root= in terms of the possible ways
to specify a device.
|
|
/sys/power/resume.
This can be used to initiate a resume from hibernation by passing the path
to a swap device containing the hibernation image.
The respective templated unit is also added. It is instantiated using the
path to the desired resume device.
|
|
With this change, it becomes possible to order a unit to activate before any
modifications to the file systems. This is especially useful for supporting
resume from hibernation.
|
|
https://bugs.freedesktop.org/show_bug.cgi?id=82485
|
|
|
|
|
|
If we invoke agents, we should make sure we can actually kill them
again. I mean, it's probably not our job to clean up the signal handling
if our tools are invoked in weird contexts, but at least we should make
sure that the subprocesses we invoke and intend to control work as
intended.
Also see:
http://lists.freedesktop.org/archives/systemd-devel/2014-August/022460.html
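The general idea looks roughly like this (a sketch; the helper name is made
up, this is not the exact systemd code):

#include <signal.h>

/* Make sure SIGTERM works as expected in the processes we spawn: restore
 * the default disposition and unblock it, so the agent can be killed again. */
static void reset_sigterm(void) {
        struct sigaction sa = { .sa_handler = SIG_DFL };
        sigset_t ss;

        (void) sigaction(SIGTERM, &sa, NULL);

        sigemptyset(&ss);
        sigaddset(&ss, SIGTERM);
        (void) sigprocmask(SIG_UNBLOCK, &ss, NULL);
}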
|
|
handlers when one sigaction() fails
After all, we usually don't check for failures here, and it is better to
do as much as we can...
|
|
https://bugs.freedesktop.org/show_bug.cgi?id=83097
|
|
Actually use the variable containing the return code of bus_process_wait()
when printing the error message after it fails.
|
|
Noticed by Djalal Harouni
|
|
Otherwise it gets optimized out when CPPFLAGS='-DNDEBUG' is used.
Tested:
- make check TESTS='test-util' CPPFLAGS='-DNDEBUG'
|
|
Otherwise they get optimized out when CPPFLAGS='-DNDEBUG' is used, and that
causes the tests to fail.
Tested:
- make check TESTS='test-path-util' CPPFLAGS='-DNDEBUG'
|
|
Otherwise the test fails when built with CPPFLAGS='-DNDEBUG' which disables
assertions.
Tested:
- make check TESTS='test-compress' CPPFLAGS='-DNDEBUG'
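For illustration, the underlying problem is that C's assert() is compiled out
entirely when NDEBUG is defined, taking any side effects with it. A
self-contained sketch (assert_se() is re-defined here in simplified form so
the snippet stands alone; the real systemd macro is always evaluated
regardless of NDEBUG):

#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for systemd's assert_se(), which is evaluated even
 * when NDEBUG is defined. */
#define assert_se(expr) do { if (!(expr)) abort(); } while (0)

static int counter = 0;

static int do_work(void) {
        return ++counter;
}

int main(void) {
        assert(do_work() > 0);    /* with -DNDEBUG this call disappears entirely */
        assert_se(do_work() > 0); /* side effect and check always happen */
        return 0;
}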
|
|
|
|
BPF_XOR was introduced in kernel 3.7
|
|
|
|
|
|
Based on a patch from Simon McVittie <simon.mcvittie@collabora.co.uk>.
Bug-Debian: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=758050
|
|
This reverts commit 41a451cc2901a5deb985aea4cc8de204a22e5612.
This breaks the checks for masking of unit files, since we invoke
null_or_empty_path() on the resulting path.
|
|
|
|
that no sources are pending
|
|
This will allow sd-event to be integrated into an external event loop, which
in turn will allow (say) glib-based applications to use our various libraries,
without manually integrating each of them (bus, rtnl, dhcp, ...).
The external event-loop should integrate sd-event in the following way:
Every iteration must start with a call to sd_event_prepare(), which will
return 0 if no event sources are ready to be processed, a positive value if
they are and a negative value on error. sd_event_prepare() may only be called
following sd_event_dispatch(); a call to sd_event_wait() indicating that no
sources are ready to be dispatched; or a failed call to sd_event_dispatch() or
sd_event_wait().
A successful call to sd_event_prepare() indicating that no event sources are
ready to be dispatched must be followed by a call to sd_event_wait(),
which will return 0 if it timed out without event sources being ready to
be processed, a negative value on error and a positive value otherwise.
sd_event_wait() may only be called following a successful call to
sd_event_prepare() indicating that no event sources are ready to be dispatched.
If sd_event_wait() indicates that some event sources are ready to be
dispatched, it must be followed by a call to sd_event_dispatch(). This
is the only time sd_event_dispatch() may be called.
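A minimal sketch of one iteration of such an external loop, following the
contract above (the wrapper function and timeout handling are illustrative;
an external loop would typically poll on sd_event_get_fd() instead of calling
sd_event_wait() directly):

#include <stdint.h>
#include <systemd/sd-event.h>

/* Drive one iteration of an sd_event loop from external code. Returns < 0 on
 * error, 0 if nothing was dispatched, > 0 if an event source was dispatched. */
static int drive_one_iteration(sd_event *e, uint64_t timeout_usec) {
        int r;

        r = sd_event_prepare(e);
        if (r < 0)
                return r;                       /* error */
        if (r > 0)
                return sd_event_dispatch(e);    /* sources ready, dispatch directly */

        /* Nothing ready yet: wait for events or the timeout. */
        r = sd_event_wait(e, timeout_usec);
        if (r <= 0)
                return r;                       /* timeout (0) or error (< 0) */

        /* Something became ready, now it must be dispatched. */
        return sd_event_dispatch(e);
}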
|
|
This patch modifies unit_file_get_list(), which will now return a
hashmap of structures where f->path is *without* the root_dir prefix.
This change should be OK, because current code either does not use
root_dir at all or calls basename() on f->path.
|
|
|
|
We'll stay in "initializing" until basic.target has been reached, at which
point we will enter "starting".
This is preparation so that we can change the startup timeout to only
apply to the first phase of startup, not the full procedure.
|
|
Also, change the default action on a system start-up timeout to powering off.
|
|
|
|
|
|
When this system-wide start-up timeout is hit, we execute one of the
failure actions already implemented for services that fail.
This should not only be useful on embedded devices, but also on laptops
which have the power button reachable when the lid is closed. These
devices, when in a backpack, might get powered on by accident due to the
easily reachable power button. We want to make sure that the system
turns itself off after a while if it was started up this way.
When the system manages to fully start up, logind will suspend the
machine by default if the lid is closed. However, in some cases we don't
even get as far as logind, and the boot hangs much earlier, for example
because we ask for a LUKS password that nobody ever enters.
Yeah, this is a real-life problem on my Yoga 13, which has one of those
easily accessible power buttons, even if the device is closed.
|
|
|
|
We don't have the correct __NR_memfd_create syscall number yet, so set it to
0xffffffff for now to prevent compile-time errors.
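The stopgap amounts to something like this (sketch):

/* Temporary placeholder until the real syscall number is assigned; this only
 * silences the build, it does not make memfd_create() work. */
#ifndef __NR_memfd_create
#define __NR_memfd_create 0xffffffff
#endif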
|
|
The MAXSIZE() macro takes two types and returns the size of the larger
one. It is much simpler to use than MAX(sizeof(A), sizeof(B)) and also
avoids any compiler extensions, unlike CONST_MAX() and MAX() (which need
them to avoid evaluating arguments more than once). This was suggested
by Daniele Nicolodi <daniele@grinta.net>.
Also make resolved use this macro instead of CONST_MAX(). This enhances
readability quite a bit.
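A sketch of the idea (the actual definition in the tree may differ; note that
union padding can make the result slightly larger than a strict maximum, which
is fine for sizing buffers):

#include <netinet/in.h>

/* Returns the size of the larger of two types, computed entirely at
 * compile time, without evaluating any expressions. */
#define MAXSIZE(A, B) (sizeof(union { A _a; B _b; }))

/* Example: a buffer large enough for either address family. */
static char addr_buf[MAXSIZE(struct in_addr, struct in6_addr)];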
|