From 863981e96738983919de841ec669e157e6bdaeb0 Mon Sep 17 00:00:00 2001
From: André Fabian Silva Delgado
Date: Sun, 11 Sep 2016 04:34:46 -0300
Subject: Linux-libre 4.7.1-gnu

---
 Documentation/security/LoadPin.txt         |  17 ++
 Documentation/security/keys.txt            |  55 ++++++
 Documentation/security/self-protection.txt | 261 +++++++++++++++++++++++++++++
 3 files changed, 333 insertions(+)
 create mode 100644 Documentation/security/LoadPin.txt
 create mode 100644 Documentation/security/self-protection.txt

(limited to 'Documentation/security')

diff --git a/Documentation/security/LoadPin.txt b/Documentation/security/LoadPin.txt
new file mode 100644
index 000000000..e11877f5d
--- /dev/null
+++ b/Documentation/security/LoadPin.txt
@@ -0,0 +1,17 @@
+LoadPin is a Linux Security Module that ensures all kernel-loaded files
+(modules, firmware, etc) originate from the same filesystem, with
+the expectation that such a filesystem is backed by a read-only device
+such as dm-verity or CDROM. This allows systems that have a verified
+and/or unchangeable filesystem to enforce module and firmware loading
+restrictions without needing to sign the files individually.
+
+The LSM is selectable at build-time with CONFIG_SECURITY_LOADPIN, and
+can be controlled at boot-time with the kernel command line option
+"loadpin.enabled". By default, it is enabled, but can be disabled at
+boot ("loadpin.enabled=0").
+
+LoadPin starts pinning when it sees the first file loaded. If the
+block device backing the filesystem is not read-only, a sysctl is
+created to toggle pinning: /proc/sys/kernel/loadpin/enabled. (Having
+a mutable filesystem means pinning is mutable too, but having the
+sysctl allows for easy testing on systems with a mutable filesystem.)
diff --git a/Documentation/security/keys.txt b/Documentation/security/keys.txt
index 8c183873b..3849814bf 100644
--- a/Documentation/security/keys.txt
+++ b/Documentation/security/keys.txt
@@ -823,6 +823,39 @@ The keyctl syscall functions are:
     A process must have search permission on the key for this function to be
     successful.
 
+ (*) Compute a Diffie-Hellman shared secret or public key
+
+       long keyctl(KEYCTL_DH_COMPUTE, struct keyctl_dh_params *params,
+                   char *buffer, size_t buflen,
+                   void *reserved);
+
+     The params struct contains serial numbers for three keys:
+
+        - The prime, p, known to both parties
+        - The local private key
+        - The base integer, which is either a shared generator or the
+          remote public key
+
+     The value computed is:
+
+        result = base ^ private (mod prime)
+
+     If the base is the shared generator, the result is the local
+     public key. If the base is the remote public key, the result is
+     the shared secret.
+
+     The reserved argument must be set to NULL.
+
+     The buffer length must be at least the length of the prime, or zero.
+
+     If the buffer length is nonzero, the length of the result is
+     returned when it is successfully calculated and copied into the
+     buffer. When the buffer length is zero, the minimum required
+     buffer length is returned.
+
+     This function will return error EOPNOTSUPP if the key type is not
+     supported, error ENOKEY if the key could not be found, or error
+     EACCES if the key is not readable by the caller.
 
 ===============
 KERNEL SERVICES
 ===============
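
As a minimal userspace sketch of the KEYCTL_DH_COMPUTE call documented above
(illustrative only, not part of this patch): the caller first passes a zero
buffer length to learn the required size, then repeats the call with a real
buffer. The constant and structure definitions below are meant to mirror
<linux/keyctl.h> as of this release; dh_compute() and the key serial numbers
are invented names.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>

#define KEYCTL_DH_COMPUTE 23            /* mirrors <linux/keyctl.h> */

struct keyctl_dh_params {               /* mirrors <linux/keyctl.h> */
        int32_t private;                /* serial number of the local private key */
        int32_t prime;                  /* serial number of the prime, p */
        int32_t base;                   /* generator or remote public key */
};

static long dh_compute(int32_t priv, int32_t prime, int32_t base,
                       char *buffer, size_t buflen)
{
        struct keyctl_dh_params params = {
                .private = priv,
                .prime   = prime,
                .base    = base,
        };

        /* The reserved argument must be NULL. */
        return syscall(__NR_keyctl, KEYCTL_DH_COMPUTE, &params,
                       buffer, buflen, NULL);
}

int main(void)
{
        /* Illustrative serial numbers; real callers obtain these from
         * add_key()/keyctl(). */
        int32_t priv = 0x11111111, prime = 0x22222222, base = 0x33333333;
        char *buffer;
        long len;

        len = dh_compute(priv, prime, base, NULL, 0);   /* zero buflen: query size */
        if (len < 0)
                return 1;

        buffer = malloc(len);
        if (!buffer)
                return 1;

        len = dh_compute(priv, prime, base, buffer, len);
        if (len < 0)
                return 1;

        printf("computed %ld bytes\n", len);
        free(buffer);
        return 0;
}
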
@@ -999,6 +1032,10 @@ payload contents" for more information.
 
     struct key *keyring_alloc(const char *description, uid_t uid, gid_t gid,
                               const struct cred *cred, key_perm_t perm,
+                              int (*restrict_link)(struct key *,
+                                                   const struct key_type *,
+                                                   unsigned long,
+                                                   const union key_payload *),
                               unsigned long flags,
                               struct key *dest);
 
@@ -1010,6 +1047,24 @@ payload contents" for more information.
     KEY_ALLOC_NOT_IN_QUOTA in flags if the keyring shouldn't be accounted
     towards the user's quota). Error ENOMEM can also be returned.
 
+    If restrict_link is not NULL, it should point to a function that will be
+    called each time an attempt is made to link a key into the new keyring.
+    This function is called to check whether a key may be added into the
+    keyring or not. Callers of key_create_or_update() within the kernel can
+    pass KEY_ALLOC_BYPASS_RESTRICTION to suppress the check. An example of
+    using this is to manage rings of cryptographic keys that are set up when
+    the kernel boots, where userspace is also permitted to add keys -
+    provided they can be verified by a key the kernel already has.
+
+    When called, the restriction function will be passed the keyring being
+    added to, the key flags value and the type and payload of the key being
+    added. Note that when a new key is being created, this is called between
+    payload preparsing and actual key creation. The function should return 0
+    to allow the link or an error to reject it.
+
+    A convenience function, restrict_link_reject, exists to always return
+    -EPERM in this case.
+
 (*) To check the validity of a key, this function can be called:
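
To make the restriction hook concrete, here is a hypothetical in-kernel
sketch (not part of this patch) of a keyring that accepts only "user"-type
keys. It follows the keyring_alloc() prototype and hook signature as
documented in the hunks above; example_restrict_link() and
example_make_keyring() are invented names, and details such as the exact
argument order and uid/gid types may differ between kernel versions.

#include <linux/cred.h>
#include <linux/errno.h>
#include <linux/key.h>
#include <keys/user-type.h>

/* Hypothetical hook: return 0 to allow the link, or an error to reject
 * it, as documented above.  Only "user"-type keys may be linked. */
static int example_restrict_link(struct key *keyring,
                                 const struct key_type *type,
                                 unsigned long flags,
                                 const union key_payload *payload)
{
        if (type != &key_type_user)
                return -EOPNOTSUPP;
        return 0;
}

static struct key *example_make_keyring(void)
{
        /* Argument order follows the prototype documented above;
         * GLOBAL_ROOT_UID/GID assume the kuid_t/kgid_t form of the
         * interface found in the actual tree. */
        return keyring_alloc(".example_ring", GLOBAL_ROOT_UID, GLOBAL_ROOT_GID,
                             current_cred(),
                             (KEY_POS_ALL & ~KEY_POS_SETATTR) | KEY_USR_VIEW,
                             example_restrict_link,
                             KEY_ALLOC_NOT_IN_QUOTA, NULL);
}

Passing restrict_link_reject instead of a custom hook would reject every
attempted link from the start.
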
diff --git a/Documentation/security/self-protection.txt b/Documentation/security/self-protection.txt
new file mode 100644
index 000000000..babd6378e
--- /dev/null
+++ b/Documentation/security/self-protection.txt
@@ -0,0 +1,261 @@
+# Kernel Self-Protection
+
+Kernel self-protection is the design and implementation of systems and
+structures within the Linux kernel to protect against security flaws in
+the kernel itself. This covers a wide range of issues, including removing
+entire classes of bugs, blocking security flaw exploitation methods,
+and actively detecting attack attempts. Not all topics are explored in
+this document, but it should serve as a reasonable starting point and
+answer any frequently asked questions. (Patches welcome, of course!)
+
+In the worst-case scenario, we assume an unprivileged local attacker
+has arbitrary read and write access to the kernel's memory. In many
+cases, bugs being exploited will not provide this level of access,
+but with systems in place that defend against the worst case we'll
+cover the more limited cases as well. A higher bar, and one that should
+still be kept in mind, is protecting the kernel against a _privileged_
+local attacker, since the root user has access to a vastly increased
+attack surface. (Especially when they have the ability to load arbitrary
+kernel modules.)
+
+The goals for successful self-protection systems would be that they
+are effective, on by default, require no opt-in by developers, have no
+performance impact, do not impede kernel debugging, and have tests. It
+is uncommon that all these goals can be met, but it is worth explicitly
+mentioning them, since these aspects need to be explored, dealt with,
+and/or accepted.
+
+
+## Attack Surface Reduction
+
+The most fundamental defense against security exploits is to reduce the
+areas of the kernel that can be used to redirect execution. This includes
+limiting the exposed APIs available to userspace, making in-kernel APIs
+hard to use incorrectly, minimizing the areas of writable kernel memory,
+and so on.
+
+### Strict kernel memory permissions
+
+When all of kernel memory is writable, it becomes trivial for attackers
+to redirect execution flow. To reduce the availability of these targets,
+the kernel needs to protect its memory with a tight set of permissions.
+
+#### Executable code and read-only data must not be writable
+
+Any areas of the kernel with executable memory must not be writable.
+While this obviously includes the kernel text itself, we must consider
+all additional places too: kernel modules, JIT memory, etc. (There are
+temporary exceptions to this rule to support things like instruction
+alternatives, breakpoints, kprobes, etc. If these must exist in a
+kernel, they are implemented in a way where the memory is temporarily
+made writable during the update, and then returned to the original
+permissions.)
+
+In support of this are (the poorly named) CONFIG_DEBUG_RODATA and
+CONFIG_DEBUG_SET_MODULE_RONX, which seek to make sure that code is not
+writable, data is not executable, and read-only data is neither writable
+nor executable.
+
+#### Function pointers and sensitive variables must not be writable
+
+Vast areas of kernel memory contain function pointers that are looked
+up by the kernel and used to continue execution (e.g. descriptor/vector
+tables, file/network/etc operation structures, etc). The number of these
+variables must be reduced to an absolute minimum.
+
+Many such variables can be made read-only by marking them "const"
+so that they live in the .rodata section instead of the .data section
+of the kernel, gaining the protection of the kernel's strict memory
+permissions as described above.
+
+Variables that are initialized once at __init time can be marked with
+the (new and under development) __ro_after_init attribute.
+
+What remains are variables that are updated rarely (e.g. GDT). These
+will need another infrastructure (similar to the temporary exceptions
+made to kernel code mentioned above) that allows them to spend the rest
+of their lifetime read-only. (For example, when being updated, only the
+CPU thread performing the update would be given uninterruptible write
+access to the memory.)
+
+#### Segregation of kernel memory from userspace memory
+
+The kernel must never execute userspace memory. The kernel must also never
+access userspace memory without explicitly expecting to do so. These
+rules can be enforced either with hardware-based restrictions
+(x86's SMEP/SMAP, ARM's PXN/PAN) or via emulation (ARM's Memory Domains).
+By blocking userspace memory in this way, execution and data parsing
+cannot be passed to trivially-controlled userspace memory, forcing
+attacks to operate entirely in kernel memory.
+
+### Reduced access to syscalls
+
+One trivial way to eliminate many syscalls for 64-bit systems is building
+without CONFIG_COMPAT. However, this is rarely a feasible scenario.
+
+The "seccomp" system provides an opt-in feature made available to
+userspace, which provides a way to reduce the number of kernel entry
+points available to a running process. This limits the breadth of kernel
+code that can be reached, possibly reducing the availability of a given
+bug to an attacker.
+
+An area of improvement would be creating viable ways to keep access to
+things like compat, user namespaces, BPF creation, and perf limited only
+to trusted processes. This would keep the scope of kernel entry points
+restricted to the more regular set of syscalls normally available to
+unprivileged userspace.
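
As a rough userspace illustration of the opt-in seccomp filtering described
above (not part of this patch), the sketch below installs a BPF filter that
allows only read(), write() and exit_group(), killing the process on any
other syscall. install_example_filter() is an invented name; syscall numbers
are architecture-specific, and a production filter should also check
seccomp_data->arch.

#include <errno.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <stddef.h>
#include <sys/prctl.h>
#include <sys/syscall.h>

static int install_example_filter(void)
{
        struct sock_filter filter[] = {
                /* Load the syscall number from the seccomp data. */
                BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                         offsetof(struct seccomp_data, nr)),
                /* Jump to the ALLOW return for permitted syscalls... */
                BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_read, 3, 0),
                BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_write, 2, 0),
                BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_exit_group, 1, 0),
                /* ...and kill the process for everything else. */
                BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
                BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = {
                .len = sizeof(filter) / sizeof(filter[0]),
                .filter = filter,
        };

        /* Required so an unprivileged process may install a filter. */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
                return -errno;
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog))
                return -errno;
        return 0;
}

Once installed, every syscall the process makes is vetted by this filter,
shrinking the set of kernel entry points it can reach.
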
+
+### Restricting access to kernel modules
+
+The kernel should never allow an unprivileged user the ability to
+load specific kernel modules, since that would provide a facility to
+unexpectedly extend the available attack surface. (The on-demand loading
+of modules via their predefined subsystems, e.g. MODULE_ALIAS_*, is
+considered "expected" here, though additional consideration should be
+given even to these.) For example, loading a filesystem module via an
+unprivileged socket API is nonsense: only the root or physically local
+user should trigger filesystem module loading. (And even this can be up
+for debate in some scenarios.)
+
+To protect against even privileged users, systems may need to either
+disable module loading entirely (e.g. monolithic kernel builds or
+modules_disabled sysctl), or provide signed modules (e.g.
+CONFIG_MODULE_SIG_FORCE, or dm-crypt with LoadPin), to keep from having
+root load arbitrary kernel code via the module loader interface.
+
+
+## Memory integrity
+
+There are many memory structures in the kernel that are regularly abused
+to gain execution control during an attack. By far the most commonly
+understood is the stack buffer overflow, in which the return address
+stored on the stack is overwritten. Many other examples of this kind of
+attack exist, and protections exist to defend against them.
+
+### Stack buffer overflow
+
+The classic stack buffer overflow involves writing past the expected end
+of a variable stored on the stack, ultimately writing a controlled value
+to the stack frame's stored return address. The most widely used defense
+is the presence of a stack canary between the stack variables and the
+return address (CONFIG_CC_STACKPROTECTOR), which is verified just before
+the function returns. Other defenses include things like shadow stacks.
+
+### Stack depth overflow
+
+A less well understood attack uses a bug that causes the kernel to
+consume stack memory with deep function calls or large stack
+allocations. With this attack it is possible to write beyond the end of
+the kernel's preallocated stack space and into sensitive structures. Two
+important changes need to be made for better protection: moving the
+sensitive thread_info structure elsewhere, and adding a faulting memory
+hole at the bottom of the stack to catch these overflows.
+
+### Heap memory integrity
+
+The structures used to track heap free lists can be sanity-checked during
+allocation and freeing to make sure they aren't being used to manipulate
+other memory areas.
+
+### Counter integrity
+
+Many places in the kernel use atomic counters to track object references
+or perform similar lifetime management. When these counters can be made
+to wrap (over or under), this traditionally exposes a use-after-free
+flaw. By trapping atomic wrapping, this class of bug vanishes.
+
+### Size calculation overflow detection
+
+Similar to counter overflow, integer overflows (usually size calculations)
+need to be detected at runtime to kill this class of bug, which
+traditionally leads to being able to write past the end of kernel buffers.
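
As a generic illustration of the kind of check meant by "size calculation
overflow detection" (a hypothetical sketch, not this patch's or the kernel's
own helper), the allocation below is refused when the size computation
wraps. __builtin_mul_overflow() is a GCC/Clang builtin; checked_array_alloc()
is an invented name, and in-kernel code would use the kernel's own allocation
primitives instead of malloc().

#include <stddef.h>
#include <stdlib.h>

/* Allocate room for count elements of elem_size bytes, refusing the
 * request if count * elem_size wraps around. */
static void *checked_array_alloc(size_t count, size_t elem_size)
{
        size_t bytes;

        /* __builtin_mul_overflow() returns nonzero when the product
         * does not fit in "bytes". */
        if (__builtin_mul_overflow(count, elem_size, &bytes))
                return NULL;

        return malloc(bytes);
}
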
+
+
+## Statistical defenses
+
+While many protections can be considered deterministic (e.g. read-only
+memory cannot be written to), some protections provide only statistical
+defense, in that an attack must gather enough information about a
+running system to overcome the defense. While not perfect, these do
+provide meaningful defenses.
+
+### Canaries, blinding, and other secrets
+
+It should be noted that things like the stack canary discussed earlier
+are technically statistical defenses, since they rely on a (leakable)
+secret value.
+
+Blinding literal values for things like JITs, where the executable
+contents may be partially under the control of userspace, needs a similar
+secret value.
+
+It is critical that the secret values used are separate (e.g. a
+different canary per stack) and high entropy (e.g. is the RNG actually
+working?) in order to maximize their success.
+
+### Kernel Address Space Layout Randomization (KASLR)
+
+Since the location of kernel memory is almost always instrumental in
+mounting a successful attack, making the location non-deterministic
+raises the difficulty of an exploit. (Note that this in turn makes
+the value of leaks higher, since they may be used to discover desired
+memory locations.)
+
+#### Text and module base
+
+By relocating the physical and virtual base address of the kernel at
+boot-time (CONFIG_RANDOMIZE_BASE), attacks needing kernel code will be
+frustrated. Additionally, offsetting the module loading base address
+means that even systems that load the same set of modules in the same
+order every boot will not share a common base address with the rest of
+the kernel text.
+
+#### Stack base
+
+If the base address of the kernel stack is not the same between processes,
+or even not the same between syscalls, targets on or beyond the stack
+become more difficult to locate.
+
+#### Dynamic memory base
+
+Much of the kernel's dynamic memory (e.g. kmalloc, vmalloc, etc) ends up
+being relatively deterministic in layout due to the order of early-boot
+initializations. If the base address of these areas is not the same
+between boots, targeting them is frustrated, requiring a leak specific
+to the region.
+
+
+## Preventing Leaks
+
+Since the locations of sensitive structures are the primary target for
+attacks, it is important to defend against leaks of both kernel memory
+addresses and kernel memory contents (since they may contain kernel
+addresses or other sensitive things like canary values).
+
+### Unique identifiers
+
+Kernel memory addresses must never be used as identifiers exposed to
+userspace. Instead, use an atomic counter, an idr, or similar unique
+identifier.
+
+### Memory initialization
+
+Memory copied to userspace must always be fully initialized. If not
+explicitly memset(), this will require changes to the compiler to make
+sure structure holes are cleared.
+
+### Memory poisoning
+
+When releasing memory, it is best to poison the contents (clear stack on
+syscall return, wipe heap memory on a free), to avoid reuse attacks that
+rely on the old contents of memory. This frustrates many uninitialized
+variable attacks, stack info leaks, heap info leaks, and use-after-free
+attacks.
+
+### Destination tracking
+
+To help kill classes of bugs that result in kernel addresses being
+written to userspace, the destination of writes needs to be tracked. If
+the buffer is destined for userspace (e.g. seq_file backed /proc files),
+it should automatically censor sensitive values.
-- 
cgit v1.2.3-54-g00ecf