Age | Commit message | Author
|
Fixup for 51c0c2869845a058268d54c3111d55d0dd485704.
|
|
Whenever we provide a bus API that allows clients to create and manage
server-side objects, we need to provide a unique name for these objects.
There are two ways to provide them:
1) Let the server choose a name and return it in the method reply.
2) Let the client pass its name of choice in the method arguments.
The first method is the easiest one to implement. However, it suffers from
a race condition: If a client creates an object asynchronously, it cannot
destroy that object until it has received the method reply; it does not yet
know the name of the new object and thus cannot destroy it. Furthermore, this
method enforces a round-trip. If the client _depends_ on the method call
to succeed (e.g., it would close() the connection if it failed), the client
usually has no reason to wait for the method reply. Instead, the client
can immediately schedule further method calls on the newly created object
(in case the API guarantees in-order method-call handling).
The second method fixes both problems: The client passes an object name
with the method-call. The server uses it to create the object. Therefore,
the client can schedule object destruction even if the object-creation
hasn't finished yet (again, requiring in-order method-call handling).
Furthermore, the client can schedule further method calls on the newly
created object before the constructor has returned.
There are two problems to solve, though:
1) Object names are usually defined via dbus object paths, which are
usually globally namespaced. Therefore, multiple clients must be able
to choose unique object names without interference.
2) If multiple libraries share the same bus connection, they must be
able to choose unique object names without interference.
The first problem is solved easily by prefixing a name with the
unique-bus-name of a connection. The server side must enforce this and
reject any other name.
The second problem is solved by providing unique suffixes from within
sd-bus. As long as sd-bus always returns a fresh new ID, if requested,
multiple libraries will never interfere. This implementation reuses
bus->cookie as an ID generator, which already provides unique IDs for each
bus connection.
This patch introduces two new helpers:
  bus_path_encode_unique(sd_bus *bus,
                         const char *prefix,
                         const char *sender_id,
                         const char *external_id,
                         char **ret_path);
This creates a new object-path via the template
'/prefix/sender_id/external_id'. That is, it appends two new labels to
the given prefix. If 'sender_id' is NULL, it will use
bus->unique_name; if 'external_id' is NULL, it will allocate a fresh,
unique cookie from bus->cookie.
  bus_path_decode_unique(const char *path,
                         const char *prefix,
                         char **ret_sender,
                         char **ret_external);
This reverses what bus_path_encode_unique() did. It parses 'path' from
the template '/prefix/sender/external' and returns both suffix-labels
in 'ret_sender' and 'ret_external'. In case the template does not
match, 0 is returned and both output arguments are set to NULL.
Otherwise, 1 is returned and the output arguments contain the decoded
labels.
Note: Client-side allocated IDs are inspired by the Wayland protocol
(which itself was inspired by X11). Wayland uses those IDs heavily
to avoid round-trips. Clients can create server-side objects and
send method calls without any round-trips or waiting for object
IDs to be returned. But unlike Wayland, DBus uses globally namespaced
object names. Therefore, we have to take the extra step of adding the
unique-name of the bus connection.
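For illustration, a minimal usage sketch of the two helpers. The return
convention (int, negative errno on failure), the header, and the
'/org/example/transfer' prefix are assumptions for this sketch, not taken
from the patch itself:

    #include <stdio.h>
    #include <stdlib.h>
    #include <systemd/sd-bus.h>

    /* Prototypes as described above; assumed to return a negative errno on error. */
    int bus_path_encode_unique(sd_bus *bus, const char *prefix, const char *sender_id,
                               const char *external_id, char **ret_path);
    int bus_path_decode_unique(const char *path, const char *prefix,
                               char **ret_sender, char **ret_external);

    int create_and_inspect(sd_bus *bus) {
            char *path = NULL, *sender = NULL, *external = NULL;
            int r;

            /* Client side: both suffix arguments are NULL, so the path becomes
             * /org/example/transfer/<escaped unique bus name>/<fresh cookie>. */
            r = bus_path_encode_unique(bus, "/org/example/transfer", NULL, NULL, &path);
            if (r < 0)
                    return r;

            /* ...issue the method call that creates the object under 'path' and
             * schedule further calls on it without waiting for the reply... */

            /* Server side: split the path back into its two suffix labels. */
            r = bus_path_decode_unique(path, "/org/example/transfer", &sender, &external);
            if (r > 0)
                    printf("sender=%s external=%s\n", sender, external);
            else if (r == 0)
                    printf("path does not match the template\n");

            free(sender);
            free(external);
            free(path);
            return r < 0 ? r : 0;
    }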
|
|
This is like bus_label_unescape(), but takes a maximum length instead of
relying on NUL-terminated strings. This is useful for unescaping labels
that are not at the end of a path.
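A small sketch of the intended use. The helper's name and exact signature
are assumptions here (a length-bounded variant returning a newly allocated
string); the text above only says it is like bus_label_unescape() but takes
a maximum length:

    #include <stdlib.h>
    #include <string.h>

    /* Assumed prototype of the length-bounded variant. */
    char *bus_label_unescape_n(const char *f, size_t l);

    /* Unescape a label that sits in the middle of an object path, i.e. is
     * terminated by the next '/' rather than by the end of the string. */
    char *label_from_path(const char *path, size_t prefix_len) {
            const char *label = path + prefix_len;
            const char *end = strchr(label, '/');

            if (!end)
                    end = label + strlen(label);

            /* "_" is escaped as "_5f", so "some_5flabel" decodes to "some_label". */
            return bus_label_unescape_n(label, (size_t) (end - label));
    }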
|
|
We _always_ return NULL from destructors to allow direct assignments to
the variable holding the object. Especially for hashmaps, which treat NULL
as an empty hashmap, this is pretty neat.
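A generic illustration of the pattern; the type and function names are made
up, but systemd's own destructors (e.g. the hashmap ones) have the same
shape:

    #include <stdlib.h>

    typedef struct Thing {
            char *name;
    } Thing;

    /* Accepts NULL as a no-op and always returns NULL, so the caller can
     * free the object and reset its variable in one statement. */
    Thing *thing_free(Thing *t) {
            if (!t)
                    return NULL;

            free(t->name);
            free(t);
            return NULL;
    }

    /* Caller side:
     *
     *         t = thing_free(t);    t is now NULL, i.e. safely "empty"
     */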
|
|
udev uses inotify to implement a scheme where, when the user closes
a writable device node, a change uevent is forcefully generated.
In the case of block devices, it actually requests a partition rescan.
This currently can't be synchronized with "udevadm settle", i.e. this
is not reliable in a script:
sfdisk --change-id /dev/sda 1 81
udevadm settle
mount /dev/sda1 /foo
The settle call doesn't synchronize there, so while we try to mount the
device, udevd is still busy removing the partition device nodes and adding
them back. The mount call often happens at the moment when the partition
node has been removed but not yet re-added.
This exact issue was fixed long ago:
http://git.kernel.org/cgit/linux/hotplug/udev.git/commit/?id=bb38678e3ccc02bcd970ccde3d8166a40edf92d3
but that fix is no longer valid now that sequence numbers are no longer
used.
Fix this by forcing another mainloop iteration after handling inotify events
before unblocking settle. If the inotify event caused us to generate a
"change" event, we'll pick that up in the following loop iteration, before
we reach the end of the loop where we respond to settle's control message,
unblocking it.
|
|
This adds support for the keyboard illumination keys and fixes
Fn+F1.
|
|
Commit 9ea28c55a2 (udev: remove seqnum API and all assumptions about
seqnums) introduced a regression: the timeout option was ignored when
waiting until the event queue is empty.
Previously, if the udev event queue was not empty when the timeout
expired, udevadm settle returned with exit code 1. To check whether the
queue is empty, you could invoke udevadm settle with timeout=0. This
patch restores the previous behavior.
(David: fixed timeout==0 handling and dropped redundant assignment)
|
|
We only care about whether our direct parent is removable, not whether any
devices further up the tree are - the kernel will take care of policy for
those itself. This enables autosuspend on devices where the root hub reports
that its removable state is unknown.
|
|
https://sourceware.org/git/?p=glibc.git;a=commitdiff;h=be08eda5
https://bugs.gentoo.org/show_bug.cgi?id=546194
|
|
|
|
Aarch64 and ARM32 lack an EFI-capable objcopy, so use the ldflags + '-O
binary' trick that gnu-efi and the Red Hat shim loader are using.
(David: rebased to systemd-git and added EFI_ prefixes)
|
|
|
|
This is just plumbing to add ARCH_AARCH64 EFI support for the makefile
tests and to define the machine name.
|
|
|
|
Move the no-mmx/no-sse CFLAGS to X86-64 and IA32 defines in preparation
for ARM32 and Aarch64 support.
|
|
Verified for the 5,1 MacBook; the others are guesses based on the list of
supported devices of the Moshi trackpad protector.
http://www.moshi.com/trackpad-protector-trackguard-macbook-pro#silver
The resolution is calculated from the min/max range set in the kernel
driver, divided by the physical size. This is probably slightly off, but
still better than no resolution at all.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
|
|
Parse properties in the form
EVDEV_ABS_00="<min>:<max>:<res>:<fuzz>:<flat>"
and apply them to the kernel device. Future processes that open that device
will see the updated EV_ABS range.
This is particularly useful for touchpads that don't provide a resolution in
the kernel driver but can be fixed up through hwdb entries (e.g. bcm5974).
All values in the property are optional, e.g. a string of "::45" is valid to
set the resolution to 45.
The format intentionally puts resolution before fuzz and flat, despite it
being the last element in the absinfo struct. The use case for setting
fuzz/flat is almost non-existent; resolution is probably the most common
thing we'll need to set.
To avoid multiple hwdb invocations for the same device, replace the
hwdb "keyboard:" prefix with "evdev:" and drop the separate
60-keyboard.rules file. The new 60-evdev.rules is called for all event
nodes anyway, so we don't need a separate rules file and a second callout
to the hwdb builtin.
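A rough sketch of what applying such a property boils down to. This is not
the actual builtin code: the helper name and the strsep()/atoi() parsing are
illustrative, only the field order and the rule that empty fields keep the
current value come from the description above:

    #define _GNU_SOURCE
    #include <linux/input.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>

    /* Apply a value like "::45" (min:max:res:fuzz:flat) to one ABS axis. */
    static int apply_abs_prop(int fd, unsigned evcode, const char *value) {
            struct input_absinfo absinfo;
            char *copy, *p, *field;
            unsigned i;
            int r;

            /* Start from the current values so that empty fields are preserved. */
            if (ioctl(fd, EVIOCGABS(evcode), &absinfo) < 0)
                    return -1;

            p = copy = strdup(value);
            if (!copy)
                    return -1;

            for (i = 0; i < 5 && (field = strsep(&p, ":")); i++) {
                    if (*field == '\0')
                            continue;               /* empty field: keep current value */
                    switch (i) {
                    case 0: absinfo.minimum    = atoi(field); break;
                    case 1: absinfo.maximum    = atoi(field); break;
                    case 2: absinfo.resolution = atoi(field); break;
                    case 3: absinfo.fuzz       = atoi(field); break;
                    case 4: absinfo.flat       = atoi(field); break;
                    }
            }
            free(copy);

            r = ioctl(fd, EVIOCSABS(evcode), &absinfo);
            return r < 0 ? -1 : 0;
    }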
|
|
No functional changes, just to make the next patch easier to review
|
|
No changes in the mapping, but previously we opened the device only on
successful parsing. Now we open the mapping as soon as we have a value that
looks interesting. Since errors are supposed to be the exception, not the
rule, this is probably fine.
|
|
Rather than building a map and looping through it, immediately call the
ioctl once we have successfully parsed a property.
This has a side effect: before, the maximum number of ioctls was limited
to the size of the map (1024); now it is unlimited.
|
|
No point in parsing the properties if we can't get the devnode to apply
them to later. Plus, this makes future additions easier to slot in.
|
|
Clang is not happy about using the cleanup attribute inside switch statements.
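For reference, a tiny example of the workaround pattern (the freep() helper
is made up for the sketch): clang rejects case labels that jump past the
initialization of a variable carrying the cleanup attribute, so such
declarations get their own block:

    #include <stdlib.h>
    #include <string.h>

    static void freep(void *p) {
            free(*(void **) p);
    }

    int handle(int what) {
            switch (what) {

            case 0: {
                    /* The braces confine the cleanup variable's scope to this
                     * case, so the other labels no longer jump past its
                     * initialization and clang accepts the code. */
                    __attribute__((cleanup(freep))) char *s = strdup("example");
                    return s ? 0 : -1;
            }

            default:
                    return 1;
            }
    }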
|
|
|
|
No need to ifdef out EFI code, as the functions are always defined.
|
|
systemctl and logind were unconditionally using functions that are not
compiled on non-EFI systems. Add stubs returning -EOPNOTSUPP to fix the
build again.
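A sketch of the stub pattern; the function name, arguments and the
ENABLE_EFI guard are illustrative, not the actual symbols:

    #include <errno.h>
    #include <stddef.h>
    #include <stdint.h>

    #ifdef ENABLE_EFI

    int efi_get_boot_order(uint16_t **ret_order, size_t *ret_n);  /* real implementation */

    #else

    /* Non-EFI builds still get the symbol, but it reports "not supported". */
    static inline int efi_get_boot_order(uint16_t **ret_order, size_t *ret_n) {
            return -EOPNOTSUPP;
    }

    #endif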
|
|
There was a bug where is_efi_*() could return a negative error value,
which would be treated as 'true'. Just make these helpers return bool in
the helper library to avoid the problem.
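A sketch of the resulting helper; the path check is illustrative. Returning
bool means a negative errno from an underlying call can never leak out and
be mistaken for 'true':

    #include <stdbool.h>
    #include <unistd.h>

    static bool is_efi_boot(void) {
            /* Any failure collapses to 'false' instead of a negative value. */
            return access("/sys/firmware/efi/", F_OK) >= 0;
    }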
|
|
This reverts commit 6ec8e7c763b7dfa82e25e31f6938122748d1608f.
The reverted commit doesn't fix any issues; it just makes the code harder
to read.
|
|
Users might have a hard time figuring out why exactly their systemctl
request failed. If a D-Bus job fails, try to figure out more details about
the failure by examining the Result property of the service.
https://bugzilla.redhat.com/show_bug.cgi?id=1016680
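A small sketch of such a lookup over the bus. The
org.freedesktop.systemd1.Service interface does expose a Result property;
the wrapper function and the lack of error handling are illustrative:

    #include <systemd/sd-bus.h>

    /* Fetch the "Result" property of a service unit (e.g. "exit-code",
     * "timeout", "signal") to explain why its job failed. */
    static int get_service_result(sd_bus *bus, const char *unit_path, char **ret) {
            return sd_bus_get_property_string(
                            bus,
                            "org.freedesktop.systemd1",
                            unit_path,
                            "org.freedesktop.systemd1.Service",
                            "Result",
                            NULL,                 /* sd_bus_error, not needed here */
                            ret);
    }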
|
|
When the value is already there, it returns 0.
Also add a test to ensure this.
|
|
Always create files first, and only then adjust their ACLs, xattrs and
file attributes, never the other way around. Previously the order was not
deterministic, so ACLs/xattrs/file attributes could be adjusted before the
items were actually created.
|
|
Add a comment explaining why returning a positive error is OK and
intended in this case.
(It's still a nasty hack to do this, though!)
|
|
|