From 1eae9639aac0f8de4d284f567ec722a822b52513 Mon Sep 17 00:00:00 2001 From: André Fabian Silva Delgado Date: Tue, 1 Nov 2016 14:27:38 -0300 Subject: Linux-libre 4.8.6-gnu --- Documentation/ABI/testing/sysfs-class-cxl | 7 +- Documentation/kernel-parameters.txt | 9 +- Documentation/scheduler/sched-BFS.txt | 216 +++++++++++----------- Documentation/scheduler/sched-MuQSS.txt | 289 ++++++++++++++++++++++++++++-- 4 files changed, 394 insertions(+), 127 deletions(-) (limited to 'Documentation') diff --git a/Documentation/ABI/testing/sysfs-class-cxl b/Documentation/ABI/testing/sysfs-class-cxl index 4ba0a2a61..640f65e79 100644 --- a/Documentation/ABI/testing/sysfs-class-cxl +++ b/Documentation/ABI/testing/sysfs-class-cxl @@ -220,8 +220,11 @@ What: /sys/class/cxl//reset Date: October 2014 Contact: linuxppc-dev@lists.ozlabs.org Description: write only - Writing 1 will issue a PERST to card which may cause the card - to reload the FPGA depending on load_image_on_perst. + Writing 1 will issue a PERST to card provided there are no + contexts active on any one of the card AFUs. This may cause + the card to reload the FPGA depending on load_image_on_perst. + Writing -1 will do a force PERST irrespective of any active + contexts on the card AFUs. Users: https://github.com/ibm-capi/libcxl What: /sys/class/cxl//perst_reloads_same_image (not in a guest) diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt index a4f4d693e..46726d489 100644 --- a/Documentation/kernel-parameters.txt +++ b/Documentation/kernel-parameters.txt @@ -1457,7 +1457,14 @@ bytes respectively. Such letter suffixes can also be entirely omitted. i8042.nopnp [HW] Don't use ACPIPnP / PnPBIOS to discover KBD/AUX controllers i8042.notimeout [HW] Ignore timeout condition signalled by controller - i8042.reset [HW] Reset the controller during init and cleanup + i8042.reset [HW] Reset the controller during init, cleanup and + suspend-to-ram transitions, only during s2r + transitions, or never reset + Format: { 1 | Y | y | 0 | N | n } + 1, Y, y: always reset controller + 0, N, n: don't ever reset controller + Default: only on s2r transitions on x86; most other + architectures force reset to be always executed i8042.unlock [HW] Unlock (ignore) the keylock i8042.kbdreset [HW] Reset device connected to KBD port diff --git a/Documentation/scheduler/sched-BFS.txt b/Documentation/scheduler/sched-BFS.txt index 6470f30b0..c0282002a 100644 --- a/Documentation/scheduler/sched-BFS.txt +++ b/Documentation/scheduler/sched-BFS.txt @@ -13,14 +13,14 @@ one workload cause massive detriment to another. Design summary. -BFS is best described as a single runqueue, O(log n) insertion, O(1) lookup, -earliest effective virtual deadline first design, loosely based on EEVDF -(earliest eligible virtual deadline first) and my previous Staircase Deadline -scheduler. Each component shall be described in order to understand the -significance of, and reasoning for it. The codebase when the first stable -version was released was approximately 9000 lines less code than the existing -mainline linux kernel scheduler (in 2.6.31). This does not even take into -account the removal of documentation and the cgroups code that is not used. +BFS is best described as a single runqueue, O(n) lookup, earliest effective +virtual deadline first design, loosely based on EEVDF (earliest eligible virtual +deadline first) and my previous Staircase Deadline scheduler. Each component +shall be described in order to understand the significance of, and reasoning for +it. 
The codebase when the first stable version was released was approximately +9000 lines less code than the existing mainline linux kernel scheduler (in +2.6.31). This does not even take into account the removal of documentation and +the cgroups code that is not used. Design reasoning. @@ -62,13 +62,12 @@ Design details. Task insertion. -BFS inserts tasks into each relevant queue as an O(log n) insertion into a -customised skip list (as described by William Pugh). At the time of insertion, -*every* running queue is checked to see if the newly queued task can run on any -idle queue, or preempt the lowest running task on the system. This is how the -cross-CPU scheduling of BFS achieves significantly lower latency per extra CPU -the system has. In this case the lookup is, in the worst case scenario, O(k) -where k is the number of online CPUs on the system. +BFS inserts tasks into each relevant queue as an O(1) insertion into a double +linked list. On insertion, *every* running queue is checked to see if the newly +queued task can run on any idle queue, or preempt the lowest running task on the +system. This is how the cross-CPU scheduling of BFS achieves significantly lower +latency per extra CPU the system has. In this case the lookup is, in the worst +case scenario, O(n) where n is the number of CPUs on the system. Data protection. @@ -93,7 +92,7 @@ the virtual deadline mechanism is explained. Virtual deadline. The key to achieving low latency, scheduling fairness, and "nice level" -distribution in BFS is entirely in the virtual deadline mechanism. The related +distribution in BFS is entirely in the virtual deadline mechanism. The one tunable in BFS is the rr_interval, or "round robin interval". This is the maximum time two SCHED_OTHER (or SCHED_NORMAL, the common scheduling policy) tasks of the same nice level will be running for, or looking at it the other @@ -118,7 +117,7 @@ higher priority than a currently running task on any cpu by virtue of the fact that it has an earlier virtual deadline than the currently running task. The earlier deadline is the key to which task is next chosen for the first and second cases. Once a task is descheduled, it is put back on the queue, and an -O(1) lookup of all queued-but-not-running tasks is done to determine which has +O(n) lookup of all queued-but-not-running tasks is done to determine which has the earliest deadline and that task is chosen to receive CPU next. The CPU proportion of different nice tasks works out to be approximately the @@ -135,40 +134,26 @@ Task lookup. BFS has 103 priority queues. 100 of these are dedicated to the static priority of realtime tasks, and the remaining 3 are, in order of best to worst priority, -SCHED_ISO (isochronous), SCHED_NORMAL/SCHED_BATCH, and SCHED_IDLEPRIO (idle -priority scheduling). - -When a task of these priorities is queued, it is added to the skiplist with a -different sorting value according to the type of task. For realtime tasks and -isochronous tasks, it is their static priority. For SCHED_NORMAL and -SCHED_BATCH tasks it is their virtual deadline value. For SCHED_IDLEPRIO tasks -it is their virtual deadline value offset by an impossibly large value to ensure -they never go before normal tasks. When isochronous or idleprio tasks do not -meet the conditions that allow them to run with their special scheduling they -are queued as per the remainder of the SCHED_NORMAL tasks. 
- -Lookup is performed by selecting the very first entry in the "level 0" skiplist -as it will always be the lowest priority task having been sorted while being -entered into the skiplist. This is usually an O(1) operation, however if there -are tasks with limited affinity set and they are not able to run on the current -CPU, the next in the list is checked and so on. - -Thus, the lookup for the common case is O(1) and O(n) in the worst case when -the system has nothing but selectively affined tasks that can never run on the -current CPU. - - -Task removal. - -Removal of tasks in the skip list is an O(k) operation where 0 <= k < 16, -corresponding with the "levels" in the skip list. 16 was chosen as the upper -limit in the skiplist as it guarantees O(log n) insertion for up to 64k -currently active tasks and most systems do not usually allow more than 32k -tasks, and 16 levels makes the skiplist lookup components fit in 2 cachelines. -The skiplist level chosen when inserting a task is pseudo-random but a minor -optimisation is used to limit the max level based on the absolute number of -queued tasks since high levels afford no advantage at low numbers of queued -tasks yet increase overhead. +SCHED_ISO (isochronous), SCHED_NORMAL, and SCHED_IDLEPRIO (idle priority +scheduling). When a task of these priorities is queued, a bitmap of running +priorities is set showing which of these priorities has tasks waiting for CPU +time. When a CPU is made to reschedule, the lookup for the next task to get +CPU time is performed in the following way: + +First the bitmap is checked to see what static priority tasks are queued. If +any realtime priorities are found, the corresponding queue is checked and the +first task listed there is taken (provided CPU affinity is suitable) and lookup +is complete. If the priority corresponds to a SCHED_ISO task, they are also +taken in FIFO order (as they behave like SCHED_RR). If the priority corresponds +to either SCHED_NORMAL or SCHED_IDLEPRIO, then the lookup becomes O(n). At this +stage, every task in the runlist that corresponds to that priority is checked +to see which has the earliest set deadline, and (provided it has suitable CPU +affinity) it is taken off the runqueue and given the CPU. If a task has an +expired deadline, it is taken and the rest of the lookup aborted (as they are +chosen in FIFO order). + +Thus, the lookup is O(n) in the worst case only, where n is as described +earlier, as tasks may be chosen before the whole task list is looked over. Scalability. @@ -191,17 +176,30 @@ when it has been deemed their overhead is so marginal that they're worth adding. The first is the local copy of the running process' data to the CPU it's running on to allow that data to be updated lockless where possible. Then there is deference paid to the last CPU a task was running on, by trying that CPU first -when looking for an idle CPU to use the next time it's scheduled. - -The real cost of migrating a task from one CPU to another is entirely dependant -on the cache footprint of the task, how cache intensive the task is, how long -it's been running on that CPU to take up the bulk of its cache, how big the CPU -cache is, how fast and how layered the CPU cache is, how fast a context switch -is... and so on. In other words, it's close to random in the real world where we -do more than just one sole workload. The only thing we can be sure of is that -it's not free. 
So BFS uses the principle that an idle CPU is a wasted CPU and -utilising idle CPUs is more important than cache locality, and cache locality -only plays a part after that. +when looking for an idle CPU to use the next time it's scheduled. Finally there +is the notion of cache locality beyond the last running CPU. The sched_domains +information is used to determine the relative virtual "cache distance" that +other CPUs have from the last CPU a task was running on. CPUs with shared +caches, such as SMT siblings, or multicore CPUs with shared caches, are treated +as cache local. CPUs without shared caches are treated as not cache local, and +CPUs on different NUMA nodes are treated as very distant. This "relative cache +distance" is used by modifying the virtual deadline value when doing lookups. +Effectively, the deadline is unaltered between "cache local" CPUs, doubled for +"cache distant" CPUs, and quadrupled for "very distant" CPUs. The reasoning +behind the doubling of deadlines is as follows. The real cost of migrating a +task from one CPU to another is entirely dependant on the cache footprint of +the task, how cache intensive the task is, how long it's been running on that +CPU to take up the bulk of its cache, how big the CPU cache is, how fast and +how layered the CPU cache is, how fast a context switch is... and so on. In +other words, it's close to random in the real world where we do more than just +one sole workload. The only thing we can be sure of is that it's not free. So +BFS uses the principle that an idle CPU is a wasted CPU and utilising idle CPUs +is more important than cache locality, and cache locality only plays a part +after that. Doubling the effective deadline is based on the premise that the +"cache local" CPUs will tend to work on the same tasks up to double the number +of cache local CPUs, and once the workload is beyond that amount, it is likely +that none of the tasks are cache warm anywhere anyway. The quadrupling for NUMA +is a value I pulled out of my arse. When choosing an idle CPU for a waking task, the cache locality is determined according to where the task last ran and then idle CPUs are ranked from best @@ -209,26 +207,31 @@ to worst to choose the most suitable idle CPU based on cache locality, NUMA node locality and hyperthread sibling business. They are chosen in the following preference (if idle): - * Same thread, idle or busy cache, idle or busy threads - * Other core, same cache, idle or busy cache, idle threads. - * Same node, other CPU, idle cache, idle threads. - * Same node, other CPU, busy cache, idle threads. - * Other core, same cache, busy threads. - * Same node, other CPU, busy threads. - * Other node, other CPU, idle cache, idle threads. - * Other node, other CPU, busy cache, idle threads. - * Other node, other CPU, busy threads. +* Same core, idle or busy cache, idle threads +* Other core, same cache, idle or busy cache, idle threads. +* Same node, other CPU, idle cache, idle threads. +* Same node, other CPU, busy cache, idle threads. +* Same core, busy threads. +* Other core, same cache, busy threads. +* Same node, other CPU, busy threads. +* Other node, other CPU, idle cache, idle threads. +* Other node, other CPU, busy cache, idle threads. +* Other node, other CPU, busy threads. This shows the SMT or "hyperthread" awareness in the design as well which will choose a real idle core first before a logical SMT sibling which already has -tasks on the physical CPU. 
Early benchmarking of BFS suggested scalability -dropped off at the 16 CPU mark. However this benchmarking was performed on an -earlier design that was far less scalable than the current one so it's hard to -know how scalable it is in terms of number of CPUs (due to the global -runqueue). Note that in terms of scalability, the number of _logical_ CPUs -matters, not the number of _physical_ CPUs. Thus, a dual (2x) quad core (4X) -hyperthreaded (2X) machine is effectively a 16X. Newer benchmark results are -very promising indeed. Benchmark contributions are most welcome. +tasks on the physical CPU. + +Early benchmarking of BFS suggested scalability dropped off at the 16 CPU mark. +However this benchmarking was performed on an earlier design that was far less +scalable than the current one so it's hard to know how scalable it is in terms +of both CPUs (due to the global runqueue) and heavily loaded machines (due to +O(n) lookup) at this stage. Note that in terms of scalability, the number of +_logical_ CPUs matters, not the number of _physical_ CPUs. Thus, a dual (2x) +quad core (4X) hyperthreaded (2X) machine is effectively a 16X. Newer benchmark +results are very promising indeed, without needing to tweak any knobs, features +or options. Benchmark contributions are most welcome. + Features @@ -241,43 +244,30 @@ and iso_cpu tunables, and the SCHED_ISO and SCHED_IDLEPRIO policies. In addition to this, BFS also uses sub-tick accounting. What BFS does _not_ now feature is support for CGROUPS. The average user should neither need to know what these are, nor should they need to be using them to have good desktop behaviour. -Rudimentary support for the CPU controller CGROUP in the form of filesystem -stubs for the expected CGROUP structure to allow applications that demand their -presence to work but they do not have any functionality. -There are two "scheduler" tunables, the round robin interval and the -interactive flag. These can be accessed in +rr_interval + +There is only one "scheduler" tunable, the round robin interval. This can be +accessed in /proc/sys/kernel/rr_interval - /proc/sys/kernel/interactive - -rr_interval value - -The value is in milliseconds, and the default value is set to 6ms. Valid values -are from 1 to 1000. Decreasing the value will decrease latencies at the cost of -decreasing throughput, while increasing it will improve throughput, but at the -cost of worsening latencies. The accuracy of the rr interval is limited by HZ -resolution of the kernel configuration. Thus, the worst case latencies are -usually slightly higher than this actual value. BFS uses "dithering" to try and -minimise the effect the Hz limitation has. The default value of 6 is not an -arbitrary one. It is based on the fact that humans can detect jitter at -approximately 7ms, so aiming for much lower latencies is pointless under most -circumstances. It is worth noting this fact when comparing the latency -performance of BFS to other schedulers. Worst case latencies being higher than -7ms are far worse than average latencies not being in the microsecond range. -Experimentation has shown that rr intervals being increased up to 300 can -improve throughput but beyond that, scheduling noise from elsewhere prevents -further demonstrable throughput. - -interactive flag - -This is a simple boolean that can be set to 1 or 0, set to 1 by default. 
This -sacrifices some of the interactive performance by giving tasks a degree of -soft affinity for logical CPUs when it will lead to improved throughput, but -enabling it also sacrifices the completely deterministic nature with respect -to latency that BFS otherwise normally provides, and subsequently leads to -slightly higher latencies and a noticeably less interactive system. +The value is in milliseconds, and the default value is set to 6 on a +uniprocessor machine, and automatically set to a progressively higher value on +multiprocessor machines. The reasoning behind increasing the value on more CPUs +is that the effective latency is decreased by virtue of there being more CPUs on +BFS (for reasons explained above), and increasing the value allows for less +cache contention and more throughput. Valid values are from 1 to 1000 +Decreasing the value will decrease latencies at the cost of decreasing +throughput, while increasing it will improve throughput, but at the cost of +worsening latencies. The accuracy of the rr interval is limited by HZ resolution +of the kernel configuration. Thus, the worst case latencies are usually slightly +higher than this actual value. The default value of 6 is not an arbitrary one. +It is based on the fact that humans can detect jitter at approximately 7ms, so +aiming for much lower latencies is pointless under most circumstances. It is +worth noting this fact when comparing the latency performance of BFS to other +schedulers. Worst case latencies being higher than 7ms are far worse than +average latencies not being in the microsecond range. Isochronous scheduling. @@ -358,4 +348,4 @@ of total wall clock time taken and total work done, rather than the reported "cpu usage". -Con Kolivas Tue, 5 Apr 2011 +Con Kolivas Fri Aug 27 2010 diff --git a/Documentation/scheduler/sched-MuQSS.txt b/Documentation/scheduler/sched-MuQSS.txt index 2521d1ad0..bbd6980a0 100644 --- a/Documentation/scheduler/sched-MuQSS.txt +++ b/Documentation/scheduler/sched-MuQSS.txt @@ -1,9 +1,10 @@ MuQSS - The Multiple Queue Skiplist Scheduler by Con Kolivas. -See sched-BFS.txt for basic design; MuQSS is a per-cpu runqueue variant with +MuQSS is a per-cpu runqueue variant of the original BFS scheduler with one 8 level skiplist per runqueue, and fine grained locking for much more scalability. + Goals. The goal of the Multiple Queue Skiplist Scheduler, referred to as MuQSS from @@ -19,11 +20,11 @@ scalable to many CPUs and processes. Design summary. MuQSS is best described as per-cpu multiple runqueue, O(log n) insertion, O(1) -lookup, earliest effective virtual deadline first design, loosely based on EEVDF -(earliest eligible virtual deadline first) and my previous Staircase Deadline -scheduler, and evolved from the single runqueue O(n) BFS scheduler. Each -component shall be described in order to understand the significance of, and -reasoning for it. +lookup, earliest effective virtual deadline first tickless design, loosely based +on EEVDF (earliest eligible virtual deadline first) and my previous Staircase +Deadline scheduler, and evolved from the single runqueue O(n) BFS scheduler. +Each component shall be described in order to understand the significance of, +and reasoning for it. Design reasoning. @@ -66,13 +67,279 @@ next task scheduling decision and task wakeup CPU choice to allow balancing to happen by virtue of its choices. -Design: +Design details. 
+ +Custom skip list implementation: + +To avoid the overhead of building up and tearing down skip list structures, +the variant used by MuQSS has a number of optimisations making it specific for +its use case in the scheduler. It uses static arrays of 8 'levels' instead of +building up and tearing down structures dynamically. This makes each runqueue +only scale O(log N) up to 256 tasks. However as there is one runqueue per CPU +it means that it scales O(log N) up to 256 x number of logical CPUs which is +far beyond the realistic task limits each CPU could handle. By being 8 levels +it also makes the array exactly one cacheline in size. Additionally, each +skip list node is bidirectional making insertion and removal amortised O(1), +being O(k) where k is 1-8. Uniquely, we are only ever interested in the very +first entry in each list at all times with MuQSS, so there is never a need to +do a search and thus look up is always O(1). + +Task insertion: + +MuQSS inserts tasks into a per CPU runqueue as an O(log N) insertion into +a custom skip list as described above (based on the original design by William +Pugh). Insertion is ordered in such a way that there is never a need to do a +search by ordering tasks according to static priority primarily, and then +virtual deadline at the time of insertion. + +Niffies: + +Niffies are a monotonic forward moving timer not unlike the "jiffies" but are +of nanosecond resolution. Niffies are calculated per-runqueue from the high +resolution TSC timers, and in order to maintain fairness are synchronised +between CPUs whenever both runqueues are locked concurrently. + +Virtual deadline: + +The key to achieving low latency, scheduling fairness, and "nice level" +distribution in MuQSS is entirely in the virtual deadline mechanism. The one +tunable in MuQSS is the rr_interval, or "round robin interval". This is the +maximum time two SCHED_OTHER (or SCHED_NORMAL, the common scheduling policy) +tasks of the same nice level will be running for, or looking at it the other +way around, the longest duration two tasks of the same nice level will be +delayed for. When a task requests cpu time, it is given a quota (time_slice) +equal to the rr_interval and a virtual deadline. The virtual deadline is +offset from the current time in niffies by this equation: + + niffies + (prio_ratio * rr_interval) + +The prio_ratio is determined as a ratio compared to the baseline of nice -20 +and increases by 10% per nice level. The deadline is a virtual one only in that +no guarantee is placed that a task will actually be scheduled by this time, but +it is used to compare which task should go next. There are three components to +how a task is next chosen. First is time_slice expiration. If a task runs out +of its time_slice, it is descheduled, the time_slice is refilled, and the +deadline reset to that formula above. Second is sleep, where a task no longer +is requesting CPU for whatever reason. The time_slice and deadline are _not_ +adjusted in this case and are just carried over for when the task is next +scheduled. Third is preemption, and that is when a newly waking task is deemed +higher priority than a currently running task on any cpu by virtue of the fact +that it has an earlier virtual deadline than the currently running task. The +earlier deadline is the key to which task is next chosen for the first and +second cases. 
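+
+As a rough illustration only (this is not the kernel code, and the helper
+names below are made up for the example), the deadline offset described above
+could be sketched in C as:
+
+	typedef unsigned long long u64;
+
+	/* prio_ratio: 100 at the nice -20 baseline, +10% per nice level */
+	static u64 prio_ratio(int nice)
+	{
+		u64 ratio = 100;
+		int i;
+
+		for (i = -20; i < nice; i++)
+			ratio += ratio / 10;
+		return ratio;
+	}
+
+	/* virtual deadline = niffies + (prio_ratio * rr_interval), in ns */
+	static u64 virtual_deadline(u64 niffies, int nice, u64 rr_interval_ns)
+	{
+		return niffies + prio_ratio(nice) * rr_interval_ns / 100;
+	}
+
+With the default 6ms rr_interval this places a nice -20 task's deadline 6ms
+into the future, and a nice 0 task's deadline roughly 40ms (6ms * 1.1^20)
+into the future.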
+
+The CPU proportion of different nice tasks works out to be approximately the
+
+	(prio_ratio difference)^2
+
+The reason it is squared is that a task's deadline does not change while it is
+running unless it runs out of time_slice. Thus, even if the time actually
+passes the deadline of another task that is queued, it will not get CPU time
+unless the current running task deschedules, and the time "base" (niffies) is
+constantly moving.
+
+Task lookup:
+
+As tasks are already pre-ordered according to anticipated scheduling order in
+the skip lists, lookup for the next suitable task per-runqueue is always a
+matter of simply selecting the first task in the 0th level skip list entry.
+In order to maintain optimal latency and fairness across CPUs, MuQSS does a
+novel examination of every other runqueue in cache locality order, choosing the
+best task across all runqueues. This provides near-determinism of how long any
+task across the entire system may wait before receiving CPU time. The other
+runqueues are first examined lockless and then trylocked to minimise the
+potential lock contention if they are likely to have a suitable better task.
+Each other runqueue lock is only held for as long as it takes to examine the
+entry for suitability. In "interactive" mode, the default setting, MuQSS will
+look for the best deadline task across all CPUs, while in !interactive mode,
+it will only select a better deadline task from another CPU if it is more
+heavily laden than the current one.
+
+Lookup is therefore O(k) where k is the number of CPUs.
+
+
+Latency.
+
+Through the use of virtual deadlines to govern the scheduling order of normal
+tasks, queue-to-activation latency per runqueue is guaranteed to be bound by
+the rr_interval tunable which is set to 6ms by default. This means that the
+longest a CPU bound task will wait for more CPU is proportional to the number
+of running tasks and in the common case of 0-2 running tasks per CPU, will be
+under the 7ms threshold for human perception of jitter. Additionally, as newly
+woken tasks will have an early deadline from their previous runtime, the very
+tasks that are usually latency sensitive will have the shortest interval for
+activation, usually preempting any existing CPU bound tasks.
+
+Tickless expiry:
+
+A feature of MuQSS is that it is not tied to the resolution of the chosen tick
+rate in Hz, instead depending entirely on the high resolution timers where
+possible for sub-millisecond accuracy on timeouts regardless of the underlying
+tick rate. This allows MuQSS to be run with the low overhead of low Hz rates
+such as 100 by default, benefiting from the improved throughput and lower
+power usage it provides. Another advantage of this approach is that in
+combination with the Full No HZ option, which disables ticks on running task
+CPUs instead of just idle CPUs, the tick can be disabled at all times
+regardless of how many tasks are running instead of being limited to just one
+running task. Note that this option is NOT recommended for regular desktop
+users.
+
+
+Scalability and balancing.
+
+Unlike traditional approaches where balancing is a combination of CPU selection
+at task wakeup and intermittent balancing based on a vast array of rules set
+according to architecture, busyness calculations and special case management,
+MuQSS indirectly balances on the fly at task wakeup and next task selection.
+During initialisation, MuQSS creates a cache coherency ordered list of CPUs for
+each logical CPU and uses this to aid task/CPU selection when CPUs are busy.
+Additionally it selects any idle CPUs, if they are available, at any time over
+busy CPUs according to the following preference:
+
+ * Same thread, idle or busy cache, idle or busy threads
+ * Other core, same cache, idle or busy cache, idle threads.
+ * Same node, other CPU, idle cache, idle threads.
+ * Same node, other CPU, busy cache, idle threads.
+ * Other core, same cache, busy threads.
+ * Same node, other CPU, busy threads.
+ * Other node, other CPU, idle cache, idle threads.
+ * Other node, other CPU, busy cache, idle threads.
+ * Other node, other CPU, busy threads.
+
+MuQSS is therefore SMT, MC and NUMA aware without the need for extra
+intermittent balancing to keep CPUs busy and make the most of cache
+coherency.
+
+
+Features
+
+As the initial prime target audience for MuQSS was the average desktop user, it
+was designed to not need tweaking, tuning or have features set to obtain benefit
+from it. Thus the number of knobs and features has been kept to an absolute
+minimum and should not require extra user input for the vast majority of cases.
+There are 3 optional tunables and 2 extra scheduling policies: the rr_interval,
+interactive, and iso_cpu tunables, and the SCHED_ISO and SCHED_IDLEPRIO
+policies. In addition to this, MuQSS also uses sub-tick accounting. What MuQSS
+does _not_ now feature is support for CGROUPS. The average user should neither
+need to know what these are, nor should they need to be using them to have good
+desktop behaviour. However since some applications refuse to work without
+cgroups, one can enable them with MuQSS as a stub and the filesystem will be
+created which will allow the applications to work.
+
+rr_interval:
+
+	/proc/sys/kernel/rr_interval
+
+The value is in milliseconds, and the default value is set to 6. Valid values
+are from 1 to 1000. Decreasing the value will decrease latencies at the cost of
+decreasing throughput, while increasing it will improve throughput, but at the
+cost of worsening latencies. It is based on the fact that humans can detect
+jitter at approximately 7ms, so aiming for much lower latencies is pointless
+under most circumstances. It is worth noting this fact when comparing the
+latency performance of MuQSS to other schedulers. Worst case latencies being
+higher than 7ms are far worse than average latencies not being in the
+microsecond range.
+
+interactive:
+
+	/proc/sys/kernel/interactive
+
+The value is a simple boolean of 1 for on and 0 for off and is set to on by
+default. Disabling this will disable the near-determinism of MuQSS when
+selecting the next task by not examining all CPUs for the earliest deadline
+task, or which CPU to wake to, instead prioritising CPU balancing for improved
+throughput. Latency will still be bound by rr_interval, but on a per-CPU basis
+instead of across the whole system.
+
+Isochronous scheduling:
+
+Isochronous scheduling is a unique scheduling policy designed to provide
+near-real-time performance to unprivileged (ie non-root) users without the
+ability to starve the machine indefinitely. Isochronous tasks (which means
+"same time") are set using, for example, the schedtool application like so:
+
+	schedtool -I -e amarok
+
+This will start the audio application "amarok" as SCHED_ISO.
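+
+The same policy can also be requested directly with sched_setscheduler(2). The
+sketch below is illustrative only: SCHED_ISO is not exported by the standard
+glibc headers, and the value 4 is an assumption taken from BFS-derived kernels,
+so verify it against the patched sched.h before relying on it:
+
+	#include <sched.h>
+	#include <stdio.h>
+
+	#ifndef SCHED_ISO
+	#define SCHED_ISO 4	/* assumed value in BFS/MuQSS patched kernels */
+	#endif
+
+	int main(void)
+	{
+		struct sched_param sp = { .sched_priority = 0 };
+
+		/* unlike SCHED_RR/SCHED_FIFO, no special privileges needed */
+		if (sched_setscheduler(0, SCHED_ISO, &sp) == -1) {
+			perror("sched_setscheduler");
+			return 1;
+		}
+		printf("now running as SCHED_ISO\n");
+		return 0;
+	}
+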
+How SCHED_ISO works is that it has a priority level between true realtime
+tasks and SCHED_NORMAL which would allow them to preempt all normal tasks, in a
+SCHED_RR fashion (ie, if multiple SCHED_ISO tasks are running, they purely
+round robin at rr_interval rate). However if ISO tasks run for more than a
+tunable finite amount of time, they are then demoted back to SCHED_NORMAL
+scheduling. This finite amount of time is the percentage of CPU available per
+CPU, configurable as a percentage in the following "resource handling" tunable
+(as opposed to a scheduler tunable):
+
+iso_cpu:
+
+	/proc/sys/kernel/iso_cpu
+
+and is set to 70% by default. It is calculated over a rolling 5 second average.
+Because it is the total CPU available, it means that on a multi CPU machine, it
+is possible to have an ISO task running with realtime scheduling indefinitely
+on just one CPU, as the other CPUs will be available. Setting this to 100 is
+the equivalent of giving all users SCHED_RR access and setting it to 0 removes
+the ability to run any pseudo-realtime tasks.
+
+A feature of MuQSS is that it detects when an application tries to obtain a
+realtime policy (SCHED_RR or SCHED_FIFO) and the caller does not have the
+appropriate privileges to use those policies. When it detects this, it will
+give the task SCHED_ISO policy instead. Thus it is transparent to the user.
+
+
+Idleprio scheduling:
+
+Idleprio scheduling is a scheduling policy designed to give out CPU to a task
+_only_ when the CPU would be otherwise idle. The idea behind this is to allow
+ultra low priority tasks to be run in the background that have virtually no
+effect on the foreground tasks. This is ideally suited to distributed computing
+clients (like setiathome, folding, mprime etc) but can also be used to start a
+video encode or so on without any slowdown of other tasks. To prevent this
+policy from grabbing shared resources and holding them indefinitely, if it
+detects a state where the task is waiting on I/O, the machine is about to
+suspend to ram and so on, it will transiently schedule them as SCHED_NORMAL.
+Once a task has been scheduled as IDLEPRIO, it cannot be put back to
+SCHED_NORMAL without superuser privileges since it is effectively a lower
+scheduling policy. Tasks can be set to start as SCHED_IDLEPRIO with the
+schedtool command like so:
+
+	schedtool -D -e ./mprime
+
+Subtick accounting:
 
-MuQSS is an 8 level skip list per runqueue variant of BFS.
+It is surprisingly difficult to get accurate CPU accounting, and in many cases,
+the accounting is done by simply determining what is happening at the precise
+moment a timer tick fires off. This becomes increasingly inaccurate as the
+timer tick frequency (HZ) is lowered. It is possible to create an application
+which uses almost 100% CPU, yet by being descheduled at the right time, records
+zero CPU usage. While the main problem with this is that there are possible
+security implications, it is also difficult to determine how much CPU a task
+really does use. MuQSS uses sub-tick accounting from the TSC clock to determine
+real CPU usage. Thus, the amount of CPU reported as being used by MuQSS will
+more accurately represent how much CPU the task itself is using (as is shown
+for example by the 'time' application), so the reported values may be quite
+different to other schedulers. When comparing throughput of MuQSS to other
+designs, it is important to compare the actual completed work in terms of total
+wall clock time taken and total work done, rather than the reported "cpu
+usage".
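+
+To make the subtick point concrete, the toy sketch below burns most of each
+tick but sleeps across the moment the tick fires (it assumes HZ=100, i.e. a
+10ms tick, and is purely illustrative):
+
+	#include <time.h>
+
+	#define TICK_NS 10000000LL	/* assumed 10ms tick (HZ=100) */
+
+	int main(void)
+	{
+		struct timespec ts, nap = { 0, 2000000 };	/* 2ms sleep */
+
+		for (;;) {	/* run forever; watch it with top(1) */
+			long long now, next_tick;
+
+			clock_gettime(CLOCK_MONOTONIC, &ts);
+			now = ts.tv_sec * 1000000000LL + ts.tv_nsec;
+			next_tick = (now / TICK_NS + 1) * TICK_NS;
+			/* burn CPU until ~1ms before the next tick boundary */
+			do {
+				clock_gettime(CLOCK_MONOTONIC, &ts);
+				now = ts.tv_sec * 1000000000LL + ts.tv_nsec;
+			} while (now < next_tick - 1000000);
+			/* then sleep across the tick so the sample misses us */
+			nanosleep(&nap, NULL);
+		}
+	}
+
+On a tick-sampling scheduler such a loop can occupy most of a CPU while being
+accounted as nearly idle; with the TSC based sub-tick accounting described
+above, the real usage is what gets reported.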
 
-See sched-BFS.txt for some of the shared design details.
+Symmetric MultiThreading (SMT) aware nice:
 
-Documentation yet to be completed.
+SMT, a.k.a. hyperthreading, is a very common feature on modern CPUs. While the
+logical CPU count rises by adding thread units to each CPU core, allowing more
+than one task to be run simultaneously on the same core, the disadvantage of it
+is that the CPU power is shared between the tasks, not summating to the power
+of two CPUs. The practical upshot of this is that two tasks running on
+separate threads of the same core run significantly slower than if they had one
+core each to run on. While smart CPU selection allows each task to have a core
+to itself whenever available (as is done on MuQSS), it cannot offset the
+slowdown that occurs when the cores are all loaded and only a thread is left.
+Most of the time this is harmless as the CPU is effectively overloaded at this
+point and the extra thread is of benefit. However when running a niced task in
+the presence of an un-niced task (say nice 19 v nice 0), the niced task gets
+precisely the same amount of CPU power as the un-niced one. MuQSS has an
+optional configuration feature known as SMT-NICE which selectively idles the
+secondary niced thread for a period proportional to the nice difference,
+allowing CPU distribution according to nice level to be maintained, at the
+expense of a small amount of extra overhead. If this is configured in on a
+machine without SMT threads, the overhead is minimal.
 
-Con Kolivas Sun, 2nd October 2016
+Con Kolivas Sat, 29th October 2016
-- 
cgit v1.2.3-54-g00ecf