author     André Fabian Silva Delgado <emulatorman@parabola.nu>  2015-08-05 17:04:01 -0300
committer  André Fabian Silva Delgado <emulatorman@parabola.nu>  2015-08-05 17:04:01 -0300
commit     57f0f512b273f60d52568b8c6b77e17f5636edc0 (patch)
tree       5e910f0e82173f4ef4f51111366a3f1299037a7b /Documentation/scheduler
Initial import
Diffstat (limited to 'Documentation/scheduler')
-rw-r--r--   Documentation/scheduler/00-INDEX                18
-rw-r--r--   Documentation/scheduler/completion.txt         248
-rw-r--r--   Documentation/scheduler/sched-BFS.txt          347
-rw-r--r--   Documentation/scheduler/sched-arch.txt          74
-rw-r--r--   Documentation/scheduler/sched-bwc.txt          122
-rw-r--r--   Documentation/scheduler/sched-deadline.txt     525
-rw-r--r--   Documentation/scheduler/sched-design-CFS.txt   242
-rw-r--r--   Documentation/scheduler/sched-domains.txt       77
-rw-r--r--   Documentation/scheduler/sched-nice-design.txt  108
-rw-r--r--   Documentation/scheduler/sched-rt-group.txt     183
-rw-r--r--   Documentation/scheduler/sched-stats.txt        154
11 files changed, 2098 insertions, 0 deletions
diff --git a/Documentation/scheduler/00-INDEX b/Documentation/scheduler/00-INDEX new file mode 100644 index 000000000..eccf7ad2e --- /dev/null +++ b/Documentation/scheduler/00-INDEX @@ -0,0 +1,18 @@ +00-INDEX + - this file. +sched-arch.txt + - CPU Scheduler implementation hints for architecture specific code. +sched-bwc.txt + - CFS bandwidth control overview. +sched-design-CFS.txt + - goals, design and implementation of the Completely Fair Scheduler. +sched-domains.txt + - information on scheduling domains. +sched-nice-design.txt + - How and why the scheduler's nice levels are implemented. +sched-rt-group.txt + - real-time group scheduling. +sched-deadline.txt + - deadline scheduling. +sched-stats.txt + - information on schedstats (Linux Scheduler Statistics). diff --git a/Documentation/scheduler/completion.txt b/Documentation/scheduler/completion.txt new file mode 100644 index 000000000..2622bc7a1 --- /dev/null +++ b/Documentation/scheduler/completion.txt @@ -0,0 +1,248 @@ +completions - wait for completion handling +========================================== + +This document was originally written based on 3.18.0 (linux-next) + +Introduction: +------------- + +If you have one or more threads of execution that must wait for some process +to have reached a point or a specific state, completions can provide a +race-free solution to this problem. Semantically they are somewhat like a +pthread_barrier and have similar use-cases. + +Completions are a code synchronization mechanism which is preferable to any +misuse of locks. Any time you think of using yield() or some quirky +msleep(1) loop to allow something else to proceed, you probably want to +look into using one of the wait_for_completion*() calls instead. The +advantage of using completions is clear intent of the code, but also more +efficient code as both threads can continue until the result is actually +needed. + +Completions are built on top of the generic event infrastructure in Linux, +with the event reduced to a simple flag (appropriately called "done") in +struct completion that tells the waiting threads of execution if they +can continue safely. + +As completions are scheduling related, the code is found in +kernel/sched/completion.c - for details on completion design and +implementation see completions-design.txt + + +Usage: +------ + +There are three parts to using completions, the initialization of the +struct completion, the waiting part through a call to one of the variants of +wait_for_completion() and the signaling side through a call to complete() +or complete_all(). Further there are some helper functions for checking the +state of completions. + +To use completions one needs to include <linux/completion.h> and +create a variable of type struct completion. The structure used for +handling of completions is: + + struct completion { + unsigned int done; + wait_queue_head_t wait; + }; + +providing the wait queue to place tasks on for waiting and the flag for +indicating the state of affairs. + +Completions should be named to convey the intent of the waiter. A good +example is: + + wait_for_completion(&early_console_added); + + complete(&early_console_added); + +Good naming (as always) helps code readability. 
+
+
+Initializing completions:
+-------------------------
+
+Initialization of dynamically allocated completions, often embedded in
+other structures, is done with:
+
+	init_completion(&done);
+
+Initialization is accomplished by initializing the wait queue and setting
+the default state to "not available", that is, "done" is set to 0.
+
+The re-initialization function, reinit_completion(), simply resets the
+done element to "not available", thus again to 0, without touching the
+wait queue. Calling init_completion() twice on the same completion object is
+most likely a bug as it re-initializes the queue to an empty queue and
+enqueued tasks could get "lost" - use reinit_completion() in that case.
+
+For static declaration and initialization, macros are available. These are:
+
+	static DECLARE_COMPLETION(setup_done)
+
+used for static declarations in file scope. Within functions the static
+initialization should always use:
+
+	DECLARE_COMPLETION_ONSTACK(setup_done)
+
+suitable for automatic/local variables on the stack and will make lockdep
+happy. Note also that one needs to make *sure* the completion passed to
+work threads remains in-scope, and no references remain to on-stack data
+when the initiating function returns.
+
+Using on-stack completions for code that calls any of the _timeout or
+_interruptible/_killable variants is not advisable as they will require
+additional synchronization to prevent the on-stack completion object in
+the timeout/signal cases from going out of scope. Consider using dynamically
+allocated completions when intending to use the _interruptible/_killable
+or _timeout variants of wait_for_completion().
+
+
+Waiting for completions:
+------------------------
+
+For a thread of execution to wait for some concurrent work to finish, it
+calls wait_for_completion() on the initialized completion structure.
+A typical usage scenario is:
+
+	struct completion setup_done;
+	init_completion(&setup_done);
+	initialize_work(..., &setup_done, ...);
+
+	/* waiting side */			/* worker side */
+	/* run non-dependent code */		/* do setup */
+	wait_for_completion(&setup_done);	complete(&setup_done);
+
+This does not imply any temporal order between wait_for_completion() and
+the call to complete() - if the call to complete() happened before the call
+to wait_for_completion() then the waiting side simply will continue
+immediately, as all dependencies are satisfied; if not, it will block until
+completion is signaled by complete().
+
+Note that wait_for_completion() is calling spin_lock_irq()/spin_unlock_irq(),
+so it can only be called safely when you know that interrupts are enabled.
+Calling it from hard-irq or irqs-off atomic contexts will result in
+hard-to-detect spurious enabling of interrupts.
+
+wait_for_completion():
+
+	void wait_for_completion(struct completion *done);
+
+The default behavior is to wait without a timeout and to mark the task as
+uninterruptible. wait_for_completion() and its variants are only safe
+in process context (as they can sleep) and cannot be used in atomic
+context, interrupt context, with irqs disabled, or with preemption
+disabled - see also try_wait_for_completion() below for handling
+completion in atomic/interrupt context.
+
+As all variants of wait_for_completion() can (obviously) block for a long
+time, you probably don't want to call this while holding mutexes.
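+
+Putting the three parts together, here is a minimal sketch of the pattern
+described above (the worker function and all names are illustrative, not
+taken from any real driver):
+
+	#include <linux/completion.h>
+	#include <linux/kthread.h>
+	#include <linux/err.h>
+
+	static struct completion setup_done;
+
+	static int worker_fn(void *data)
+	{
+		struct completion *done = data;
+
+		/* ... do the setup work ... */
+		complete(done);		/* wake exactly one waiter */
+		return 0;
+	}
+
+	static int start_and_wait(void)
+	{
+		struct task_struct *tsk;
+
+		init_completion(&setup_done);
+		tsk = kthread_run(worker_fn, &setup_done, "setup-worker");
+		if (IS_ERR(tsk))
+			return PTR_ERR(tsk);
+
+		/* run non-dependent code here ... */
+		wait_for_completion(&setup_done);
+		return 0;
+	}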
+
+
+Variants available:
+-------------------
+
+The below variants all return status and this status should be checked in
+most(/all) cases - in cases where the status is deliberately not checked you
+probably want to make a note explaining this (e.g. see
+arch/arm/kernel/smp.c:__cpu_up()).
+
+A common problem is assigning the return value to a variable of the wrong
+type, so care should be taken to assign return values to variables of the
+proper type. Checking for the specific meaning of return values also has
+been found to be quite inaccurate, e.g. constructs like
+if (!wait_for_completion_interruptible_timeout(...)) would execute the same
+code path for successful completion and for the interrupted case - which is
+probably not what you want.
+
+	int wait_for_completion_interruptible(struct completion *done)
+
+This function marks the task TASK_INTERRUPTIBLE. If a signal was received
+while waiting it will return -ERESTARTSYS; 0 otherwise.
+
+	unsigned long wait_for_completion_timeout(struct completion *done,
+						  unsigned long timeout)
+
+The task is marked as TASK_UNINTERRUPTIBLE and will wait at most 'timeout'
+(in jiffies). If a timeout occurs it returns 0, otherwise the remaining
+time in jiffies (but at least 1). Timeouts are preferably calculated with
+msecs_to_jiffies() or usecs_to_jiffies(). If the returned timeout value is
+deliberately ignored a comment should probably explain why (e.g. see
+drivers/mfd/wm8350-core.c wm8350_read_auxadc()).
+
+	long wait_for_completion_interruptible_timeout(
+		struct completion *done, unsigned long timeout)
+
+This function passes a timeout in jiffies and marks the task as
+TASK_INTERRUPTIBLE. If a signal was received it will return -ERESTARTSYS;
+otherwise it returns 0 if the completion timed out or the remaining time in
+jiffies if completion occurred.
+
+Further variants include _killable which uses TASK_KILLABLE as the
+designated task state and will return -ERESTARTSYS if it is interrupted or
+else 0 if completion was achieved. There is a _timeout variant as well:
+
+	long wait_for_completion_killable(struct completion *done)
+	long wait_for_completion_killable_timeout(struct completion *done,
+						  unsigned long timeout)
+
+The _io variants, e.g. wait_for_completion_io(), behave the same as the
+non-_io variants, except for accounting waiting time as waiting on IO, which
+has an impact on how the task is accounted in scheduling stats.
+
+	void wait_for_completion_io(struct completion *done)
+	unsigned long wait_for_completion_io_timeout(struct completion *done,
+						     unsigned long timeout)
+
+
+Signaling completions:
+----------------------
+
+A thread that wants to signal that the conditions for continuation have been
+achieved calls complete() to signal exactly one of the waiters that it can
+continue.
+
+	void complete(struct completion *done)
+
+or calls complete_all() to signal all current and future waiters.
+
+	void complete_all(struct completion *done)
+
+The signaling will work as expected even if completions are signaled before
+a thread starts waiting. This is achieved by the waiter "consuming"
+(decrementing) the done element of struct completion. Waiting threads are
+woken up in the same order in which they were enqueued (FIFO order).
+
+If complete() is called multiple times then this will allow for that number
+of waiters to continue - each call to complete() will simply increment the
+done element. Calling complete_all() multiple times is a bug though. Both
+complete() and complete_all() can be called in hard-irq/atomic context safely.
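+
+As an illustration of signaling from hard-irq context, here is a sketch of a
+driver-style producer/consumer pair; the device structure, the IRQ handler
+and the 100ms timeout are assumptions invented for the example:
+
+	#include <linux/completion.h>
+	#include <linux/interrupt.h>
+	#include <linux/jiffies.h>
+	#include <linux/errno.h>
+
+	struct my_dev {
+		struct completion dma_done;  /* init_completion() at probe */
+	};
+
+	static irqreturn_t my_dev_irq(int irq, void *data)
+	{
+		struct my_dev *dev = data;
+
+		complete(&dev->dma_done);	/* safe here: never sleeps */
+		return IRQ_HANDLED;
+	}
+
+	static int my_dev_do_dma(struct my_dev *dev)
+	{
+		reinit_completion(&dev->dma_done);
+		/* ... start the DMA transfer ... */
+		if (!wait_for_completion_timeout(&dev->dma_done,
+						 msecs_to_jiffies(100)))
+			return -ETIMEDOUT;  /* 0 means the timeout expired */
+		return 0;
+	}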
+ +There only can be one thread calling complete() or complete_all() on a +particular struct completion at any time - serialized through the wait +queue spinlock. Any such concurrent calls to complete() or complete_all() +probably are a design bug. + +Signaling completion from hard-irq context is fine as it will appropriately +lock with spin_lock_irqsave/spin_unlock_irqrestore and it will never sleep. + + +try_wait_for_completion()/completion_done(): +-------------------------------------------- + +The try_wait_for_completion() function will not put the thread on the wait +queue but rather returns false if it would need to enqueue (block) the thread, +else it consumes one posted completion and returns true. + + bool try_wait_for_completion(struct completion *done) + +Finally, to check the state of a completion without changing it in any way, +call completion_done(), which returns false if there are no posted +completions that were not yet consumed by waiters (implying that there are +waiters) and true otherwise; + + bool completion_done(struct completion *done) + +Both try_wait_for_completion() and completion_done() are safe to be called in +hard-irq or atomic context. diff --git a/Documentation/scheduler/sched-BFS.txt b/Documentation/scheduler/sched-BFS.txt new file mode 100644 index 000000000..c10d95601 --- /dev/null +++ b/Documentation/scheduler/sched-BFS.txt @@ -0,0 +1,347 @@ +BFS - The Brain Fuck Scheduler by Con Kolivas. + +Goals. + +The goal of the Brain Fuck Scheduler, referred to as BFS from here on, is to +completely do away with the complex designs of the past for the cpu process +scheduler and instead implement one that is very simple in basic design. +The main focus of BFS is to achieve excellent desktop interactivity and +responsiveness without heuristics and tuning knobs that are difficult to +understand, impossible to model and predict the effect of, and when tuned to +one workload cause massive detriment to another. + + +Design summary. + +BFS is best described as a single runqueue, O(n) lookup, earliest effective +virtual deadline first design, loosely based on EEVDF (earliest eligible virtual +deadline first) and my previous Staircase Deadline scheduler. Each component +shall be described in order to understand the significance of, and reasoning for +it. The codebase when the first stable version was released was approximately +9000 lines less code than the existing mainline linux kernel scheduler (in +2.6.31). This does not even take into account the removal of documentation and +the cgroups code that is not used. + +Design reasoning. + +The single runqueue refers to the queued but not running processes for the +entire system, regardless of the number of CPUs. The reason for going back to +a single runqueue design is that once multiple runqueues are introduced, +per-CPU or otherwise, there will be complex interactions as each runqueue will +be responsible for the scheduling latency and fairness of the tasks only on its +own runqueue, and to achieve fairness and low latency across multiple CPUs, any +advantage in throughput of having CPU local tasks causes other disadvantages. +This is due to requiring a very complex balancing system to at best achieve some +semblance of fairness across CPUs and can only maintain relatively low latency +for tasks bound to the same CPUs, not across them. To increase said fairness +and latency across CPUs, the advantage of local runqueue locking, which makes +for better scalability, is lost due to having to grab multiple locks. 
+ +A significant feature of BFS is that all accounting is done purely based on CPU +used and nowhere is sleep time used in any way to determine entitlement or +interactivity. Interactivity "estimators" that use some kind of sleep/run +algorithm are doomed to fail to detect all interactive tasks, and to falsely tag +tasks that aren't interactive as being so. The reason for this is that it is +close to impossible to determine that when a task is sleeping, whether it is +doing it voluntarily, as in a userspace application waiting for input in the +form of a mouse click or otherwise, or involuntarily, because it is waiting for +another thread, process, I/O, kernel activity or whatever. Thus, such an +estimator will introduce corner cases, and more heuristics will be required to +cope with those corner cases, introducing more corner cases and failed +interactivity detection and so on. Interactivity in BFS is built into the design +by virtue of the fact that tasks that are waking up have not used up their quota +of CPU time, and have earlier effective deadlines, thereby making it very likely +they will preempt any CPU bound task of equivalent nice level. See below for +more information on the virtual deadline mechanism. Even if they do not preempt +a running task, because the rr interval is guaranteed to have a bound upper +limit on how long a task will wait for, it will be scheduled within a timeframe +that will not cause visible interface jitter. + + +Design details. + +Task insertion. + +BFS inserts tasks into each relevant queue as an O(1) insertion into a double +linked list. On insertion, *every* running queue is checked to see if the newly +queued task can run on any idle queue, or preempt the lowest running task on the +system. This is how the cross-CPU scheduling of BFS achieves significantly lower +latency per extra CPU the system has. In this case the lookup is, in the worst +case scenario, O(n) where n is the number of CPUs on the system. + +Data protection. + +BFS has one single lock protecting the process local data of every task in the +global queue. Thus every insertion, removal and modification of task data in the +global runqueue needs to grab the global lock. However, once a task is taken by +a CPU, the CPU has its own local data copy of the running process' accounting +information which only that CPU accesses and modifies (such as during a +timer tick) thus allowing the accounting data to be updated lockless. Once a +CPU has taken a task to run, it removes it from the global queue. Thus the +global queue only ever has, at most, + + (number of tasks requesting cpu time) - (number of logical CPUs) + 1 + +tasks in the global queue. This value is relevant for the time taken to look up +tasks during scheduling. This will increase if many tasks with CPU affinity set +in their policy to limit which CPUs they're allowed to run on if they outnumber +the number of CPUs. The +1 is because when rescheduling a task, the CPU's +currently running task is put back on the queue. Lookup will be described after +the virtual deadline mechanism is explained. + +Virtual deadline. + +The key to achieving low latency, scheduling fairness, and "nice level" +distribution in BFS is entirely in the virtual deadline mechanism. The one +tunable in BFS is the rr_interval, or "round robin interval". 
This is the +maximum time two SCHED_OTHER (or SCHED_NORMAL, the common scheduling policy) +tasks of the same nice level will be running for, or looking at it the other +way around, the longest duration two tasks of the same nice level will be +delayed for. When a task requests cpu time, it is given a quota (time_slice) +equal to the rr_interval and a virtual deadline. The virtual deadline is +offset from the current time in jiffies by this equation: + + jiffies + (prio_ratio * rr_interval) + +The prio_ratio is determined as a ratio compared to the baseline of nice -20 +and increases by 10% per nice level. The deadline is a virtual one only in that +no guarantee is placed that a task will actually be scheduled by this time, but +it is used to compare which task should go next. There are three components to +how a task is next chosen. First is time_slice expiration. If a task runs out +of its time_slice, it is descheduled, the time_slice is refilled, and the +deadline reset to that formula above. Second is sleep, where a task no longer +is requesting CPU for whatever reason. The time_slice and deadline are _not_ +adjusted in this case and are just carried over for when the task is next +scheduled. Third is preemption, and that is when a newly waking task is deemed +higher priority than a currently running task on any cpu by virtue of the fact +that it has an earlier virtual deadline than the currently running task. The +earlier deadline is the key to which task is next chosen for the first and +second cases. Once a task is descheduled, it is put back on the queue, and an +O(n) lookup of all queued-but-not-running tasks is done to determine which has +the earliest deadline and that task is chosen to receive CPU next. + +The CPU proportion of different nice tasks works out to be approximately the + + (prio_ratio difference)^2 + +The reason it is squared is that a task's deadline does not change while it is +running unless it runs out of time_slice. Thus, even if the time actually +passes the deadline of another task that is queued, it will not get CPU time +unless the current running task deschedules, and the time "base" (jiffies) is +constantly moving. + +Task lookup. + +BFS has 103 priority queues. 100 of these are dedicated to the static priority +of realtime tasks, and the remaining 3 are, in order of best to worst priority, +SCHED_ISO (isochronous), SCHED_NORMAL, and SCHED_IDLEPRIO (idle priority +scheduling). When a task of these priorities is queued, a bitmap of running +priorities is set showing which of these priorities has tasks waiting for CPU +time. When a CPU is made to reschedule, the lookup for the next task to get +CPU time is performed in the following way: + +First the bitmap is checked to see what static priority tasks are queued. If +any realtime priorities are found, the corresponding queue is checked and the +first task listed there is taken (provided CPU affinity is suitable) and lookup +is complete. If the priority corresponds to a SCHED_ISO task, they are also +taken in FIFO order (as they behave like SCHED_RR). If the priority corresponds +to either SCHED_NORMAL or SCHED_IDLEPRIO, then the lookup becomes O(n). At this +stage, every task in the runlist that corresponds to that priority is checked +to see which has the earliest set deadline, and (provided it has suitable CPU +affinity) it is taken off the runqueue and given the CPU. If a task has an +expired deadline, it is taken and the rest of the lookup aborted (as they are +chosen in FIFO order). 
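+
+The earliest-deadline scan just described is simple enough to sketch in a few
+lines. The following is illustrative pseudo-kernel code only - BFS's real
+lookup differs in detail and all names here are made up:
+
+	struct bfs_task {
+		struct list_head	run_list;
+		unsigned long		deadline;  /* virtual deadline (jiffies) */
+		cpumask_t		cpus_allowed;
+	};
+
+	static struct bfs_task *earliest_deadline_task(struct list_head *queue,
+						       int cpu)
+	{
+		struct bfs_task *p, *best = NULL;
+
+		list_for_each_entry(p, queue, run_list) {
+			if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
+				continue;
+			/* expired deadlines win outright, in FIFO order */
+			if (time_before_eq(p->deadline, jiffies))
+				return p;
+			if (!best || time_before(p->deadline, best->deadline))
+				best = p;
+		}
+		return best;
+	}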
+
+Thus, the lookup is O(n) in the worst case only, where n is as described
+earlier, as tasks may be chosen before the whole task list is looked over.
+
+
+Scalability.
+
+The major limitation of BFS will be that of scalability, as the separate
+runqueue designs will have less lock contention as the number of CPUs rises.
+However they do not scale linearly even with separate runqueues as multiple
+runqueues will need to be locked concurrently on such designs to be able to
+achieve fair CPU balancing, to try and achieve some sort of nice-level fairness
+across CPUs, and to achieve low enough latency for tasks on a busy CPU when
+other CPUs would be more suited. BFS has the advantage that it requires no
+balancing algorithm whatsoever, as balancing occurs by proxy simply because
+all CPUs draw off the global runqueue, in priority and deadline order. Despite
+the fact that scalability is _not_ the prime concern of BFS, it both shows very
+good scalability to smaller numbers of CPUs and is likely a more scalable design
+at these numbers of CPUs.
+
+It also has some very low overhead scalability features built into the design
+when it has been deemed their overhead is so marginal that they're worth adding.
+The first is the local copy of the running process' data to the CPU it's running
+on to allow that data to be updated lockless where possible. Then there is
+deference paid to the last CPU a task was running on, by trying that CPU first
+when looking for an idle CPU to use the next time it's scheduled. Finally there
+is the notion of "sticky" tasks that are flagged when they are involuntarily
+descheduled, meaning they still want further CPU time. This sticky flag is
+used to bias heavily against those tasks being scheduled on a different CPU
+unless that CPU would be otherwise idle. When a cpu frequency governor is used
+that scales with CPU load, such as ondemand, sticky tasks are not scheduled
+on a different CPU at all, preferring instead to go idle. This means the CPU
+they were bound to is more likely to increase its speed while the other CPU
+will go idle, thus speeding up total task execution time and likely decreasing
+power usage. This is the only scenario where BFS will allow a CPU to go idle
+in preference to scheduling a task on the earliest available spare CPU.
+
+The real cost of migrating a task from one CPU to another is entirely dependent
+on the cache footprint of the task, how cache intensive the task is, how long
+it's been running on that CPU to take up the bulk of its cache, how big the CPU
+cache is, how fast and how layered the CPU cache is, how fast a context switch
+is... and so on. In other words, it's close to random in the real world where we
+do more than just one sole workload. The only thing we can be sure of is that
+it's not free. So BFS uses the principle that an idle CPU is a wasted CPU and
+utilising idle CPUs is more important than cache locality, and cache locality
+only plays a part after that.
+
+When choosing an idle CPU for a waking task, the cache locality is determined
+according to where the task last ran and then idle CPUs are ranked from best
+to worst to choose the most suitable idle CPU based on cache locality, NUMA
+node locality and hyperthread sibling busyness. They are chosen in the
+following preference (if idle):
+
+* Same core, idle or busy cache, idle threads.
+* Other core, same cache, idle or busy cache, idle threads.
+* Same node, other CPU, idle cache, idle threads.
+* Same node, other CPU, busy cache, idle threads.
+* Same core, busy threads. +* Other core, same cache, busy threads. +* Same node, other CPU, busy threads. +* Other node, other CPU, idle cache, idle threads. +* Other node, other CPU, busy cache, idle threads. +* Other node, other CPU, busy threads. + +This shows the SMT or "hyperthread" awareness in the design as well which will +choose a real idle core first before a logical SMT sibling which already has +tasks on the physical CPU. + +Early benchmarking of BFS suggested scalability dropped off at the 16 CPU mark. +However this benchmarking was performed on an earlier design that was far less +scalable than the current one so it's hard to know how scalable it is in terms +of both CPUs (due to the global runqueue) and heavily loaded machines (due to +O(n) lookup) at this stage. Note that in terms of scalability, the number of +_logical_ CPUs matters, not the number of _physical_ CPUs. Thus, a dual (2x) +quad core (4X) hyperthreaded (2X) machine is effectively a 16X. Newer benchmark +results are very promising indeed, without needing to tweak any knobs, features +or options. Benchmark contributions are most welcome. + + +Features + +As the initial prime target audience for BFS was the average desktop user, it +was designed to not need tweaking, tuning or have features set to obtain benefit +from it. Thus the number of knobs and features has been kept to an absolute +minimum and should not require extra user input for the vast majority of cases. +There are precisely 2 tunables, and 2 extra scheduling policies. The rr_interval +and iso_cpu tunables, and the SCHED_ISO and SCHED_IDLEPRIO policies. In addition +to this, BFS also uses sub-tick accounting. What BFS does _not_ now feature is +support for CGROUPS. The average user should neither need to know what these +are, nor should they need to be using them to have good desktop behaviour. + +rr_interval + +There is only one "scheduler" tunable, the round robin interval. This can be +accessed in + + /proc/sys/kernel/rr_interval + +The value is in milliseconds, and the default value is set to 6ms. Valid values +are from 1 to 1000. Decreasing the value will decrease latencies at the cost of +decreasing throughput, while increasing it will improve throughput, but at the +cost of worsening latencies. The accuracy of the rr interval is limited by HZ +resolution of the kernel configuration. Thus, the worst case latencies are +usually slightly higher than this actual value. BFS uses "dithering" to try and +minimise the effect the Hz limitation has. The default value of 6 is not an +arbitrary one. It is based on the fact that humans can detect jitter at +approximately 7ms, so aiming for much lower latencies is pointless under most +circumstances. It is worth noting this fact when comparing the latency +performance of BFS to other schedulers. Worst case latencies being higher than +7ms are far worse than average latencies not being in the microsecond range. +Experimentation has shown that rr intervals being increased up to 300 can +improve throughput but beyond that, scheduling noise from elsewhere prevents +further demonstrable throughput. + +Isochronous scheduling. + +Isochronous scheduling is a unique scheduling policy designed to provide +near-real-time performance to unprivileged (ie non-root) users without the +ability to starve the machine indefinitely. 
+Isochronous tasks (which means "same time") are set using, for example, the
+schedtool application like so:
+
+	schedtool -I -e amarok
+
+This will start the audio application "amarok" as SCHED_ISO. How SCHED_ISO works
+is that it has a priority level between true realtime tasks and SCHED_NORMAL
+which would allow them to preempt all normal tasks, in a SCHED_RR fashion (ie,
+if multiple SCHED_ISO tasks are running, they purely round robin at rr_interval
+rate). However if ISO tasks run for more than a tunable finite amount of time,
+they are then demoted back to SCHED_NORMAL scheduling. This finite amount of
+time is the percentage of _total CPU_ available across the machine, configurable
+as a percentage in the following "resource handling" tunable (as opposed to a
+scheduler tunable):
+
+	/proc/sys/kernel/iso_cpu
+
+and is set to 70% by default. It is calculated over a rolling 5 second average.
+Because it is the total CPU available, it means that on a multi CPU machine, it
+is possible to have an ISO task running with realtime scheduling indefinitely
+on just one CPU, as the other CPUs will be available. Setting this to 100 is the
+equivalent of giving all users SCHED_RR access and setting it to 0 removes the
+ability to run any pseudo-realtime tasks.
+
+A feature of BFS is that it detects when an application tries to obtain a
+realtime policy (SCHED_RR or SCHED_FIFO) and the caller does not have the
+appropriate privileges to use those policies. When it detects this, it will
+give the task SCHED_ISO policy instead. Thus it is transparent to the user.
+Because some applications constantly set their policy as well as their nice
+level, there is potential for them to undo the user's command-line override
+that set the policy to SCHED_ISO. To counter this, once a task has been set to
+SCHED_ISO policy, it needs superuser privileges to set it back to SCHED_NORMAL.
+This will ensure the task remains ISO and all child processes and threads will
+also inherit the ISO policy.
+
+Idleprio scheduling.
+
+Idleprio scheduling is a scheduling policy designed to give out CPU to a task
+_only_ when the CPU would be otherwise idle. The idea behind this is to allow
+ultra low priority tasks to be run in the background that have virtually no
+effect on the foreground tasks. This is ideally suited to distributed computing
+clients (like setiathome, folding, mprime etc) but can also be used to start
+a video encode or so on without any slowdown of other tasks. To prevent this
+policy from grabbing shared resources and holding them indefinitely, if it
+detects a state where the task is waiting on I/O, the machine is about to
+suspend to ram and so on, it will transiently schedule them as SCHED_NORMAL. As
+per the Isochronous task management, once a task has been scheduled as IDLEPRIO,
+it cannot be put back to SCHED_NORMAL without superuser privileges. Tasks can
+be set to start as SCHED_IDLEPRIO with the schedtool command like so:
+
+	schedtool -D -e ./mprime
+
+Subtick accounting.
+
+It is surprisingly difficult to get accurate CPU accounting, and in many cases,
+the accounting is done by simply determining what is happening at the precise
+moment a timer tick fires off. This becomes increasingly inaccurate as the
+timer tick frequency (HZ) is lowered. It is possible to create an application
+which uses almost 100% CPU, yet by being descheduled at the right time, records
+zero CPU usage.
+While the main problem with this is that there are possible security
+implications, it is also difficult to determine how much CPU a task really
+does use. BFS tries to use the sub-tick accounting from the TSC clock, where
+possible, to determine real CPU usage. This is not entirely reliable, but is
+far more likely to produce accurate CPU usage data than the existing designs
+and will not show tasks as consuming no CPU usage when they actually are.
+Thus, the amount of CPU reported as being used by BFS will more accurately
+represent how much CPU the task itself is using (as is shown for example by
+the 'time' application), so the reported values may be quite different from
+those of other schedulers. Values reported as the 'load' are more prone to
+problems with this design, but per process values are closer to real usage.
+When comparing throughput of BFS to other designs, it is important to compare
+the actual completed work in terms of total wall clock time taken and total
+work done, rather than the reported "cpu usage".
+
+
+Con Kolivas <kernel@kolivas.org> Tue, 5 Apr 2011
diff --git a/Documentation/scheduler/sched-arch.txt b/Documentation/scheduler/sched-arch.txt
new file mode 100644
index 000000000..a2f27bbf2
--- /dev/null
+++ b/Documentation/scheduler/sched-arch.txt
@@ -0,0 +1,74 @@
+	CPU Scheduler implementation hints for architecture specific code
+
+	Nick Piggin, 2005
+
+Context switch
+==============
+1. Runqueue locking
+By default, the switch_to arch function is called with the runqueue
+locked. This is usually not a problem unless switch_to may need to
+take the runqueue lock. This is usually due to a wake up operation in
+the context switch. See arch/ia64/include/asm/switch_to.h for an example.
+
+To request the scheduler call switch_to with the runqueue unlocked,
+you must `#define __ARCH_WANT_UNLOCKED_CTXSW` in a header file
+(typically the one where switch_to is defined).
+
+Unlocked context switches introduce only a very minor performance
+penalty to the core scheduler implementation in the CONFIG_SMP case.
+
+CPU idle
+========
+Your cpu_idle routines need to obey the following rules:
+
+1. Preempt should now be disabled over idle routines. Should only
+   be enabled to call schedule() then disabled again.
+
+2. need_resched/TIF_NEED_RESCHED is only ever set, and will never
+   be cleared until the running task has called schedule(). Idle
+   threads need only ever query need_resched, and may never set or
+   clear it.
+
+3. When cpu_idle finds (need_resched() == 'true'), it should call
+   schedule(). It should not call schedule() otherwise.
+
+4. The only time interrupts need to be disabled when checking
+   need_resched is if we are about to sleep the processor until
+   the next interrupt (this doesn't provide any protection of
+   need_resched, it prevents losing an interrupt).
+
+	4a. Common problem with this type of sleep appears to be:
+	        local_irq_disable();
+	        if (!need_resched()) {
+	                local_irq_enable();
+	                *** resched interrupt arrives here ***
+	                __asm__("sleep until next interrupt");
+	        }
+	    (a race-free version of this sleep is sketched after this list)
+
+5. TIF_POLLING_NRFLAG can be set by idle routines that do not
+   need an interrupt to wake them up when need_resched goes high.
+   In other words, they must be periodically polling need_resched,
+   although it may be reasonable to do some background work or enter
+   a low CPU priority.
+
+	5a. If TIF_POLLING_NRFLAG is set, and we do decide to enter
+	    an interrupt sleep, it needs to be cleared then a memory
+	    barrier issued (followed by a test of need_resched with
+	    interrupts disabled, as explained in 3).
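+
+As noted in 4a, the check of need_resched and the sleep must be atomic with
+respect to interrupts. A race-free sketch, x86-flavoured and for illustration
+only (real implementations live in each architecture's idle code):
+
+	static void idle_sleep(void)
+	{
+		local_irq_disable();
+		if (!need_resched())
+			/*
+			 * "sti" takes effect only after the next
+			 * instruction, so interrupts are re-enabled
+			 * atomically with the halt: a resched interrupt
+			 * arriving after the check still wakes the CPU.
+			 */
+			asm volatile("sti; hlt" ::: "memory");
+		else
+			local_irq_enable();
+	}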
+
+arch/x86/kernel/process.c has examples of both polling and
+sleeping idle functions.
+
+
+Possible arch/ problems
+=======================
+
+Possible arch problems I found (and either tried to fix or didn't):
+
+ia64 - is safe_halt call racy vs interrupts? (does it sleep?) (See #4a)
+
+sh64 - Is sleeping racy vs interrupts? (See #4a)
+
+sparc - IRQs on at this point(?), change local_irq_save to _disable.
+      - TODO: needs secondary CPUs to disable preempt (See #1)
+
diff --git a/Documentation/scheduler/sched-bwc.txt b/Documentation/scheduler/sched-bwc.txt
new file mode 100644
index 000000000..f6b1873f6
--- /dev/null
+++ b/Documentation/scheduler/sched-bwc.txt
@@ -0,0 +1,122 @@
+CFS Bandwidth Control
+=====================
+
+[ This document only discusses CPU bandwidth control for SCHED_NORMAL.
+  The SCHED_RT case is covered in Documentation/scheduler/sched-rt-group.txt ]
+
+CFS bandwidth control is a CONFIG_FAIR_GROUP_SCHED extension which allows the
+specification of the maximum CPU bandwidth available to a group or hierarchy.
+
+The bandwidth allowed for a group is specified using a quota and period. Within
+each given "period" (microseconds), a group is allowed to consume only up to
+"quota" microseconds of CPU time. When the CPU bandwidth consumption of a
+group exceeds this limit (for that period), the tasks belonging to its
+hierarchy will be throttled and are not allowed to run again until the next
+period.
+
+A group's unused runtime is globally tracked, being refreshed with quota units
+above at each period boundary. As threads consume this bandwidth it is
+transferred to cpu-local "silos" on a demand basis. The amount transferred
+within each of these updates is tunable and described as the "slice".
+
+Management
+----------
+Quota and period are managed within the cpu subsystem via cgroupfs.
+
+cpu.cfs_quota_us: the total available run-time within a period (in microseconds)
+cpu.cfs_period_us: the length of a period (in microseconds)
+cpu.stat: exports throttling statistics [explained further below]
+
+The default values are:
+	cpu.cfs_period_us=100ms
+	cpu.cfs_quota_us=-1
+
+A value of -1 for cpu.cfs_quota_us indicates that the group does not have any
+bandwidth restriction in place; such a group is described as an unconstrained
+bandwidth group. This represents the traditional work-conserving behavior for
+CFS.
+
+Writing any (valid) positive value(s) will enact the specified bandwidth limit.
+The minimum allowed value for either the quota or the period is 1ms. There is
+also an upper bound on the period length of 1s. Additional restrictions exist
+when bandwidth limits are used in a hierarchical fashion, these are explained
+in more detail below.
+
+Writing any negative value to cpu.cfs_quota_us will remove the bandwidth limit
+and return the group to an unconstrained state once more.
+
+Any updates to a group's bandwidth specification will result in it becoming
+unthrottled if it is in a constrained state.
+
+System wide settings
+--------------------
+For efficiency run-time is transferred between the global pool and CPU local
+"silos" in a batch fashion. This greatly reduces global accounting pressure
+on large systems. The amount transferred each time such an update is required
+is described as the "slice".
+
+This is tunable via procfs:
+	/proc/sys/kernel/sched_cfs_bandwidth_slice_us (default=5ms)
+
+Larger slice values will reduce transfer overheads, while smaller values allow
+for more fine-grained consumption.
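+
+The same knobs can be driven programmatically as well as from the shell. A
+minimal userspace sketch, where the v1 cgroupfs mount point and the group
+name "mygroup" are assumptions invented for the example:
+
+	#include <stdio.h>
+
+	static int write_long(const char *path, long val)
+	{
+		FILE *f = fopen(path, "w");
+
+		if (!f)
+			return -1;
+		fprintf(f, "%ld\n", val);
+		return fclose(f);
+	}
+
+	int main(void)
+	{
+		/* cap "mygroup" at 2 CPUs worth: quota = 2 * period */
+		if (write_long("/sys/fs/cgroup/cpu/mygroup/cpu.cfs_period_us",
+			       500000))		/* period = 500ms */
+			return 1;
+		return write_long("/sys/fs/cgroup/cpu/mygroup/cpu.cfs_quota_us",
+				  1000000) != 0;	/* quota = 1000ms */
+	}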
+
+Statistics
+----------
+A group's bandwidth statistics are exported via 3 fields in cpu.stat.
+
+cpu.stat:
+- nr_periods: Number of enforcement intervals that have elapsed.
+- nr_throttled: Number of times the group has been throttled/limited.
+- throttled_time: The total time duration (in nanoseconds) for which entities
+  of the group have been throttled.
+
+This interface is read-only.
+
+Hierarchical considerations
+---------------------------
+The interface enforces that an individual entity's bandwidth is always
+attainable, that is: max(c_i) <= C. However, over-subscription in the
+aggregate case is explicitly allowed to enable work-conserving semantics
+within a hierarchy.
+	e.g. \Sum (c_i) may exceed C
+[ Where C is the parent's bandwidth, and c_i its children ]
+
+There are two ways in which a group may become throttled:
+	a. it fully consumes its own quota within a period
+	b. a parent's quota is fully consumed within its period
+
+In case b) above, even though the child may have runtime remaining it will
+not be allowed to run until the parent's runtime is refreshed.
+
+Examples
+--------
+1. Limit a group to 1 CPU worth of runtime.
+
+	If period is 250ms and quota is also 250ms, the group will get
+	1 CPU worth of runtime every 250ms.
+
+	# echo 250000 > cpu.cfs_quota_us /* quota = 250ms */
+	# echo 250000 > cpu.cfs_period_us /* period = 250ms */
+
+2. Limit a group to 2 CPUs worth of runtime on a multi-CPU machine.
+
+	With 500ms period and 1000ms quota, the group can get 2 CPUs worth of
+	runtime every 500ms.
+
+	# echo 1000000 > cpu.cfs_quota_us /* quota = 1000ms */
+	# echo 500000 > cpu.cfs_period_us /* period = 500ms */
+
+	The larger period here allows for increased burst capacity.
+
+3. Limit a group to 20% of 1 CPU.
+
+	With 50ms period, 10ms quota will be equivalent to 20% of 1 CPU.
+
+	# echo 10000 > cpu.cfs_quota_us /* quota = 10ms */
+	# echo 50000 > cpu.cfs_period_us /* period = 50ms */
+
+	By using a small period here we are ensuring a consistent latency
+	response at the expense of burst capacity.
+
diff --git a/Documentation/scheduler/sched-deadline.txt b/Documentation/scheduler/sched-deadline.txt
new file mode 100644
index 000000000..21461a044
--- /dev/null
+++ b/Documentation/scheduler/sched-deadline.txt
@@ -0,0 +1,525 @@
+ Deadline Task Scheduling
+ ------------------------
+
+CONTENTS
+========
+
+ 0. WARNING
+ 1. Overview
+ 2. Scheduling algorithm
+ 3. Scheduling Real-Time Tasks
+ 4. Bandwidth management
+   4.1 System-wide settings
+   4.2 Task interface
+   4.3 Default behavior
+ 5. Tasks CPU affinity
+   5.1 SCHED_DEADLINE and cpusets HOWTO
+ 6. Future plans
+ A. Test suite
+ B. Minimal main()
+
+
+0. WARNING
+==========
+
+ Fiddling with these settings can result in unpredictable or even unstable
+ system behavior. As for -rt (group) scheduling, it is assumed that root users
+ know what they're doing.
+
+
+1. Overview
+===========
+
+ The SCHED_DEADLINE policy contained inside the sched_dl scheduling class is
+ basically an implementation of the Earliest Deadline First (EDF) scheduling
+ algorithm, augmented with a mechanism (called Constant Bandwidth Server, CBS)
+ that makes it possible to isolate the behavior of tasks between each other.
+
+
+2. Scheduling algorithm
+=======================
+
+ SCHED_DEADLINE uses three parameters, named "runtime", "period", and
+ "deadline", to schedule tasks.
+ A SCHED_DEADLINE task should receive "runtime" microseconds of execution
+ time every "period" microseconds, and these "runtime" microseconds are
+ available within "deadline" microseconds from the beginning of the period.
+ In order to implement this behaviour, every time the task wakes up, the
+ scheduler computes a "scheduling deadline" consistent with the guarantee
+ (using the CBS[2,3] algorithm). Tasks are then scheduled using EDF[1] on
+ these scheduling deadlines (the task with the earliest scheduling deadline
+ is selected for execution). Notice that the task actually receives "runtime"
+ time units within "deadline" if a proper "admission control" strategy (see
+ Section "4. Bandwidth management") is used (clearly, if the system is
+ overloaded this guarantee cannot be respected).
+
+ Summing up, the CBS[2,3] algorithm assigns scheduling deadlines to tasks so
+ that each task runs for at most its runtime every period, avoiding any
+ interference between different tasks (bandwidth isolation), while the EDF[1]
+ algorithm selects the task with the earliest scheduling deadline as the one
+ to be executed next. Thanks to this feature, tasks that do not strictly
+ comply with the "traditional" real-time task model (see Section 3) can
+ effectively use the new policy.
+
+ In more detail, the CBS algorithm assigns scheduling deadlines to
+ tasks in the following way:
+
+  - Each SCHED_DEADLINE task is characterised by the "runtime",
+    "deadline", and "period" parameters;
+
+  - The state of the task is described by a "scheduling deadline", and
+    a "remaining runtime". These two parameters are initially set to 0;
+
+  - When a SCHED_DEADLINE task wakes up (becomes ready for execution),
+    the scheduler checks if
+
+                 remaining runtime                  runtime
+        ----------------------------------    >    ---------
+        scheduling deadline - current time           period
+
+    then, if the scheduling deadline is smaller than the current time, or
+    this condition is verified, the scheduling deadline and the
+    remaining runtime are re-initialised as
+
+         scheduling deadline = current time + deadline
+         remaining runtime = runtime
+
+    otherwise, the scheduling deadline and the remaining runtime are
+    left unchanged;
+
+  - When a SCHED_DEADLINE task executes for an amount of time t, its
+    remaining runtime is decreased as
+
+         remaining runtime = remaining runtime - t
+
+    (technically, the runtime is decreased at every tick, or when the
+    task is descheduled / preempted);
+
+  - When the remaining runtime becomes less than or equal to 0, the task is
+    said to be "throttled" (also known as "depleted" in real-time literature)
+    and cannot be scheduled until its scheduling deadline. The "replenishment
+    time" for this task (see next item) is set to be equal to the current
+    value of the scheduling deadline;
+
+  - When the current time is equal to the replenishment time of a
+    throttled task, the scheduling deadline and the remaining runtime are
+    updated as
+
+         scheduling deadline = scheduling deadline + period
+         remaining runtime = remaining runtime + runtime
+
+
+3. Scheduling Real-Time Tasks
+=============================
+
+ * BIG FAT WARNING ******************************************************
+ *
+ * This section contains a (not-thorough) summary on classical deadline
+ * scheduling theory, and how it applies to SCHED_DEADLINE.
+ * The reader can "safely" skip to Section 4 if only interested in seeing
+ * how the scheduling policy can be used.
+ * Anyway, we strongly recommend to come back here and continue reading
+ * (once the urge for testing is satisfied :P) to be sure of fully
+ * understanding all technical details.
+ ************************************************************************
+
+ There are no limitations on what kind of task can exploit this new
+ scheduling discipline, even if it must be said that it is particularly
+ suited for periodic or sporadic real-time tasks that need guarantees on their
+ timing behavior, e.g., multimedia, streaming, control applications, etc.
+
+ A typical real-time task is composed of a repetition of computation phases
+ (task instances, or jobs) which are activated in a periodic or sporadic
+ fashion.
+ Each job J_j (where J_j is the j^th job of the task) is characterised by an
+ arrival time r_j (the time when the job starts), an amount of computation
+ time c_j needed to finish the job, and a job absolute deadline d_j, which
+ is the time within which the job should be finished. The maximum execution
+ time max_j{c_j} is called "Worst Case Execution Time" (WCET) for the task.
+ A real-time task can be periodic with period P if r_{j+1} = r_j + P, or
+ sporadic with minimum inter-arrival time P if r_{j+1} >= r_j + P. Finally,
+ d_j = r_j + D, where D is the task's relative deadline.
+ The utilisation of a real-time task is defined as the ratio between its
+ WCET and its period (or minimum inter-arrival time), and represents
+ the fraction of CPU time needed to execute the task.
+
+ If the total utilisation sum_i(WCET_i/P_i) is larger than M (with M equal
+ to the number of CPUs), then the scheduler is unable to respect all the
+ deadlines.
+ Note that total utilisation is defined as the sum of the utilisations
+ WCET_i/P_i over all the real-time tasks in the system. When considering
+ multiple real-time tasks, the parameters of the i-th task are indicated
+ with the "_i" suffix.
+ Moreover, if the total utilisation is larger than M, then we risk starving
+ non real-time tasks by real-time tasks.
+ If, instead, the total utilisation is smaller than M, then non real-time
+ tasks will not be starved and the system might be able to respect all the
+ deadlines.
+ As a matter of fact, in this case it is possible to provide an upper bound
+ for tardiness (defined as the maximum between 0 and the difference
+ between the finishing time of a job and its absolute deadline).
+ More precisely, it can be proven that using a global EDF scheduler the
+ maximum tardiness of each task is smaller than or equal to
+   ((M − 1) · WCET_max − WCET_min)/(M − (M − 2) · U_max) + WCET_max
+ where WCET_max = max_i{WCET_i} is the maximum WCET, WCET_min=min_i{WCET_i}
+ is the minimum WCET, and U_max = max_i{WCET_i/P_i} is the maximum utilisation.
+
+ If M=1 (uniprocessor system), or in case of partitioned scheduling (each
+ real-time task is statically assigned to one and only one CPU), it is
+ possible to formally check if all the deadlines are respected.
+ If D_i = P_i for all tasks, then EDF is able to respect all the deadlines
+ of all the tasks executing on a CPU if and only if the total utilisation
+ of the tasks running on such a CPU is smaller than or equal to 1.
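+
+ This last test is easy to code directly. The following is an illustrative
+ userspace sketch (the struct and function names are invented for the
+ example, not part of any kernel interface):
+
+	struct rt_task {
+		double wcet;	/* worst case execution time */
+		double period;	/* period or minimum inter-arrival time */
+	};
+
+	/* EDF admission test for one CPU when D_i = P_i for all tasks */
+	static int edf_admissible(const struct rt_task *ts, int n)
+	{
+		double u = 0.0;
+		int i;
+
+		for (i = 0; i < n; i++)
+			u += ts[i].wcet / ts[i].period;
+		return u <= 1.0;	/* necessary and sufficient here */
+	}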
+ If D_i != P_i for some task, then it is possible to define the density of
+ a task as WCET_i/min{D_i,P_i}, and EDF is able to respect all the deadlines
+ of all the tasks running on a CPU if the sum of the densities of the tasks
+ running on such a CPU is smaller than or equal to 1
+ (notice that this condition is only sufficient, and not necessary).
+
+ On multiprocessor systems with global EDF scheduling (non partitioned
+ systems), a sufficient test for schedulability cannot be based on the
+ utilisations (it can be shown that task sets with utilisations slightly
+ larger than 1 can miss deadlines regardless of the number of CPUs M).
+ However, as previously stated, enforcing that the total utilisation is smaller
+ than M is enough to guarantee that non real-time tasks are not starved and
+ that the tardiness of real-time tasks has an upper bound.
+
+ SCHED_DEADLINE can be used to schedule real-time tasks guaranteeing that
+ the jobs' deadlines of a task are respected. In order to do this, a task
+ must be scheduled by setting:
+
+  - runtime >= WCET
+  - deadline = D
+  - period <= P
+
+ IOW, if runtime >= WCET and if period is <= P, then the scheduling deadlines
+ and the absolute deadlines (d_j) coincide, so a proper admission control
+ allows the jobs' absolute deadlines to be respected for this task (this is
+ what is called "hard schedulability property" and is an extension of Lemma 1
+ of [2]). Notice that if runtime > deadline the admission control will surely
+ reject this task, as it is not possible to respect its temporal constraints.
+
+ References:
+  1 - C. L. Liu and J. W. Layland. Scheduling algorithms for multiprogramming
+      in a hard-real-time environment. Journal of the Association for
+      Computing Machinery, 20(1), 1973.
+  2 - L. Abeni, G. Buttazzo. Integrating Multimedia Applications in Hard
+      Real-Time Systems. Proceedings of the 19th IEEE Real-time Systems
+      Symposium, 1998. http://retis.sssup.it/~giorgio/paps/1998/rtss98-cbs.pdf
+  3 - L. Abeni. Server Mechanisms for Multimedia Applications. ReTiS Lab
+      Technical Report. http://disi.unitn.it/~abeni/tr-98-01.pdf
+
+4. Bandwidth management
+=======================
+
+ As previously mentioned, in order for -deadline scheduling to be
+ effective and useful (that is, to be able to provide "runtime" time units
+ within "deadline"), it is important to have some method to keep the allocation
+ of the available fractions of CPU time to the various tasks under control.
+ This is usually called "admission control" and if it is not performed, then
+ no guarantee can be given on the actual scheduling of the -deadline tasks.
+
+ As already stated in Section 3, a necessary condition to be respected to
+ correctly schedule a set of real-time tasks is that the total utilisation
+ is smaller than M. When talking about -deadline tasks, this requires that
+ the sum of the ratio between runtime and period for all tasks is smaller
+ than M. Notice that the ratio runtime/period is equivalent to the utilisation
+ of a "traditional" real-time task, and is also often referred to as
+ "bandwidth".
+ The interface used to control the CPU bandwidth that can be allocated
+ to -deadline tasks is similar to the one already used for -rt
+ tasks with real-time group scheduling (a.k.a. RT-throttling - see
+ Documentation/scheduler/sched-rt-group.txt), and is based on readable/
+ writable control files located in procfs (for system wide settings).
+ Notice that per-group settings (controlled through cgroupfs) are still not
+ defined for -deadline tasks, because more discussion is needed in order to
+ figure out how we want to manage SCHED_DEADLINE bandwidth at the task group
+ level.
+
+ A main difference between deadline bandwidth management and RT-throttling
+ is that -deadline tasks have bandwidth on their own (while -rt ones don't!),
+ and thus we don't need a higher level throttling mechanism to enforce the
+ desired bandwidth. In other words, this means that interface parameters are
+ only used at admission control time (i.e., when the user calls
+ sched_setattr()). Scheduling is then performed considering actual tasks'
+ parameters, so that CPU bandwidth is allocated to SCHED_DEADLINE tasks
+ respecting their needs in terms of granularity. Therefore, using this simple
+ interface we can put a cap on total utilization of -deadline tasks (i.e.,
+ \Sum (runtime_i / period_i) < global_dl_utilization_cap).
+
+4.1 System wide settings
+------------------------
+
+ The system wide settings are configured under the /proc virtual file system.
+
+ For now the -rt knobs are used for -deadline admission control and the
+ -deadline runtime is accounted against the -rt runtime. We realise that this
+ isn't entirely desirable; however, it is better to have a small interface for
+ now, and be able to change it easily later. The ideal situation (see 5.) is to
+ run -rt tasks from a -deadline server; in which case the -rt bandwidth is a
+ direct subset of dl_bw.
+
+ This means that, for a root_domain comprising M CPUs, -deadline tasks
+ can be created while the sum of their bandwidths stays below:
+
+   M * (sched_rt_runtime_us / sched_rt_period_us)
+
+ It is also possible to disable this bandwidth management logic, and
+ be thus free of oversubscribing the system up to any arbitrary level.
+ This is done by writing -1 in /proc/sys/kernel/sched_rt_runtime_us.
+
+
+4.2 Task interface
+------------------
+
+ Specifying a periodic/sporadic task that executes for a given amount of
+ runtime at each instance, and that is scheduled according to the urgency of
+ its own timing constraints needs, in general, a way of declaring:
+
+  - a (maximum/typical) instance execution time,
+  - a minimum interval between consecutive instances,
+  - a time constraint by which each instance must be completed.
+
+ Therefore:
+
+  * a new struct sched_attr, containing all the necessary fields is
+    provided;
+  * the new scheduling related syscalls that manipulate it, i.e.,
+    sched_setattr() and sched_getattr() are implemented.
+
+
+4.3 Default behavior
+---------------------
+
+ The default value for SCHED_DEADLINE bandwidth is to have rt_runtime equal to
+ 950000. With rt_period equal to 1000000, by default, it means that -deadline
+ tasks can use at most 95%, multiplied by the number of CPUs that compose the
+ root_domain, for each root_domain.
+ This means that non -deadline tasks will receive at least 5% of the CPU time,
+ and that -deadline tasks will receive their runtime with a guaranteed
+ worst-case delay with respect to the "deadline" parameter. If "deadline" =
+ "period" and the cpuset mechanism is used to implement partitioned scheduling
+ (see Section 5), then this simple setting of the bandwidth management is able
+ to deterministically guarantee that -deadline tasks will receive their runtime
+ in a period.
+
+ Finally, notice that in order not to jeopardize the admission control a
+ -deadline task cannot fork.
+
+5. Tasks CPU affinity
+=====================
+
+ -deadline tasks cannot have an affinity mask smaller than the entire
+ root_domain they are created on. However, affinities can be specified
+ through the cpuset facility (Documentation/cgroups/cpusets.txt).
+
+5.1 SCHED_DEADLINE and cpusets HOWTO
+------------------------------------
+
+ An example of a simple configuration (pin a -deadline task to CPU0)
+ follows (rt-app is used to create a -deadline task).
+
+   mkdir /dev/cpuset
+   mount -t cgroup -o cpuset cpuset /dev/cpuset
+   cd /dev/cpuset
+   mkdir cpu0
+   echo 0 > cpu0/cpuset.cpus
+   echo 0 > cpu0/cpuset.mems
+   echo 1 > cpuset.cpu_exclusive
+   echo 0 > cpuset.sched_load_balance
+   echo 1 > cpu0/cpuset.cpu_exclusive
+   echo 1 > cpu0/cpuset.mem_exclusive
+   echo $$ > cpu0/tasks
+   rt-app -t 100000:10000:d:0 -D5 (it is now actually superfluous to
+   specify task affinity)
+
+6. Future plans
+===============
+
+ Still missing:
+
+  - refinements to deadline inheritance, especially regarding the possibility
+    of retaining bandwidth isolation among non-interacting tasks. This is
+    being studied from both theoretical and practical points of view, and
+    hopefully we should be able to produce some demonstrative code soon;
+  - (c)group based bandwidth management, and maybe scheduling;
+  - access control for non-root users (and related security concerns to
+    address): what is the best way to allow unprivileged use of the
+    mechanisms, and how do we prevent non-root users from "cheating" the
+    system?
+
+ As already discussed, we are also planning to merge this work with the EDF
+ throttling patches [https://lkml.org/lkml/2010/2/23/239] but we still are in
+ the preliminary phases of the merge and we really seek feedback that would
+ help us decide on the direction it should take.
+
+Appendix A. Test suite
+======================
+
+ The SCHED_DEADLINE policy can be easily tested using two applications that
+ are part of a wider Linux Scheduler validation suite. The suite is
+ available as a GitHub repository: https://github.com/scheduler-tools.
+
+ The first testing application is called rt-app and can be used to
+ start multiple threads with specific parameters. rt-app supports
+ SCHED_{OTHER,FIFO,RR,DEADLINE} scheduling policies and their related
+ parameters (e.g., niceness, priority, runtime/deadline/period). rt-app
+ is a valuable tool, as it can be used to synthetically recreate certain
+ workloads (maybe mimicking real use-cases) and evaluate how the scheduler
+ behaves under such workloads. In this way, results are easily reproducible.
+ rt-app is available at: https://github.com/scheduler-tools/rt-app.
+
+ Thread parameters can be specified from the command line, with something like
+ this:
+
+  # rt-app -t 100000:10000:d -t 150000:20000:f:10 -D5
+
+ The above creates 2 threads. The first one, scheduled by SCHED_DEADLINE,
+ executes for 10ms every 100ms. The second one, scheduled at SCHED_FIFO
+ priority 10, executes for 20ms every 150ms. The test will run for a total
+ of 5 seconds.
+
+ More interestingly, configurations can be described with a json file that
+ can be passed as input to rt-app with something like this:
+
+  # rt-app my_config.json
+
+ The parameters that can be specified with the second method are a superset
+ of the command line options. Please refer to rt-app documentation for more
+ details (<rt-app-sources>/doc/*.json).
+
+ The second testing application is a modification of schedtool, called
+ schedtool-dl, which can be used to setup SCHED_DEADLINE parameters for a
+ certain pid/application.
schedtool-dl is available at: + https://github.com/scheduler-tools/schedtool-dl.git. + + The usage is straightforward: + + # schedtool -E -t 10000000:100000000 -e ./my_cpuhog_app + + With this, my_cpuhog_app is put to run inside a SCHED_DEADLINE reservation + of 10ms every 100ms (note that parameters are expressed in microseconds). + You can also use schedtool to create a reservation for an already running + application, given that you know its pid: + + # schedtool -E -t 10000000:100000000 my_app_pid + +Appendix B. Minimal main() +========================== + + We provide in what follows a simple (ugly) self-contained code snippet + showing how SCHED_DEADLINE reservations can be created by a real-time + application developer. + + #define _GNU_SOURCE + #include <unistd.h> + #include <stdio.h> + #include <stdlib.h> + #include <string.h> + #include <time.h> + #include <linux/unistd.h> + #include <linux/kernel.h> + #include <linux/types.h> + #include <sys/syscall.h> + #include <pthread.h> + + #define gettid() syscall(__NR_gettid) + + #define SCHED_DEADLINE 6 + + /* XXX use the proper syscall numbers */ + #ifdef __x86_64__ + #define __NR_sched_setattr 314 + #define __NR_sched_getattr 315 + #endif + + #ifdef __i386__ + #define __NR_sched_setattr 351 + #define __NR_sched_getattr 352 + #endif + + #ifdef __arm__ + #define __NR_sched_setattr 380 + #define __NR_sched_getattr 381 + #endif + + static volatile int done; + + struct sched_attr { + __u32 size; + + __u32 sched_policy; + __u64 sched_flags; + + /* SCHED_NORMAL, SCHED_BATCH */ + __s32 sched_nice; + + /* SCHED_FIFO, SCHED_RR */ + __u32 sched_priority; + + /* SCHED_DEADLINE (nsec) */ + __u64 sched_runtime; + __u64 sched_deadline; + __u64 sched_period; + }; + + int sched_setattr(pid_t pid, + const struct sched_attr *attr, + unsigned int flags) + { + return syscall(__NR_sched_setattr, pid, attr, flags); + } + + int sched_getattr(pid_t pid, + struct sched_attr *attr, + unsigned int size, + unsigned int flags) + { + return syscall(__NR_sched_getattr, pid, attr, size, flags); + } + + void *run_deadline(void *data) + { + struct sched_attr attr; + int x = 0; + int ret; + unsigned int flags = 0; + + printf("deadline thread started [%ld]\n", gettid()); + + attr.size = sizeof(attr); + attr.sched_flags = 0; + attr.sched_nice = 0; + attr.sched_priority = 0; + + /* This creates a 10ms/30ms reservation */ + attr.sched_policy = SCHED_DEADLINE; + attr.sched_runtime = 10 * 1000 * 1000; + attr.sched_period = attr.sched_deadline = 30 * 1000 * 1000; + + ret = sched_setattr(0, &attr, flags); + if (ret < 0) { + done = 0; + perror("sched_setattr"); + exit(-1); + } + + while (!done) { + x++; + } + + printf("deadline thread dies [%ld]\n", gettid()); + return NULL; + } + + int main (int argc, char **argv) + { + pthread_t thread; + + printf("main thread [%ld]\n", gettid()); + + pthread_create(&thread, NULL, run_deadline, NULL); + + sleep(10); + + done = 1; + pthread_join(thread, NULL); + + printf("main dies [%ld]\n", gettid()); + return 0; + } diff --git a/Documentation/scheduler/sched-design-CFS.txt b/Documentation/scheduler/sched-design-CFS.txt new file mode 100644 index 000000000..f14f49304 --- /dev/null +++ b/Documentation/scheduler/sched-design-CFS.txt @@ -0,0 +1,242 @@ + ============= + CFS Scheduler + ============= + + +1. OVERVIEW + +CFS stands for "Completely Fair Scheduler," and is the new "desktop" process +scheduler implemented by Ingo Molnar and merged in Linux 2.6.23. 
It is the +replacement for the previous vanilla scheduler's SCHED_OTHER interactivity +code. + +80% of CFS's design can be summed up in a single sentence: CFS basically models +an "ideal, precise multi-tasking CPU" on real hardware. + +"Ideal multi-tasking CPU" is a (non-existent :-)) CPU that has 100% physical +power and which can run each task at precise equal speed, in parallel, each at +1/nr_running speed. For example: if there are 2 tasks running, then it runs +each at 50% physical power --- i.e., actually in parallel. + +On real hardware, we can run only a single task at once, so we have to +introduce the concept of "virtual runtime." The virtual runtime of a task +specifies when its next timeslice would start execution on the ideal +multi-tasking CPU described above. In practice, the virtual runtime of a task +is its actual runtime normalized to the total number of running tasks. + + + +2. FEW IMPLEMENTATION DETAILS + +In CFS the virtual runtime is expressed and tracked via the per-task +p->se.vruntime (nanosec-unit) value. This way, it's possible to accurately +timestamp and measure the "expected CPU time" a task should have gotten. + +[ small detail: on "ideal" hardware, at any time all tasks would have the same + p->se.vruntime value --- i.e., tasks would execute simultaneously and no task + would ever get "out of balance" from the "ideal" share of CPU time. ] + +CFS's task picking logic is based on this p->se.vruntime value and it is thus +very simple: it always tries to run the task with the smallest p->se.vruntime +value (i.e., the task which executed least so far). CFS always tries to split +up CPU time between runnable tasks as close to "ideal multitasking hardware" as +possible. + +Most of the rest of CFS's design just falls out of this really simple concept, +with a few add-on embellishments like nice levels, multiprocessing and various +algorithm variants to recognize sleepers. + + + +3. THE RBTREE + +CFS's design is quite radical: it does not use the old data structures for the +runqueues, but it uses a time-ordered rbtree to build a "timeline" of future +task execution, and thus has no "array switch" artifacts (by which both the +previous vanilla scheduler and RSDL/SD are affected). + +CFS also maintains the rq->cfs.min_vruntime value, which is a monotonic +increasing value tracking the smallest vruntime among all tasks in the +runqueue. The total amount of work done by the system is tracked using +min_vruntime; that value is used to place newly activated entities on the left +side of the tree as much as possible. + +The total number of running tasks in the runqueue is accounted through the +rq->cfs.load value, which is the sum of the weights of the tasks queued on the +runqueue. + +CFS maintains a time-ordered rbtree, where all runnable tasks are sorted by the +p->se.vruntime key. CFS picks the "leftmost" task from this tree and sticks to it. +As the system progresses forwards, the executed tasks are put into the tree +more and more to the right --- slowly but surely giving a chance for every task +to become the "leftmost task" and thus get on the CPU within a deterministic +amount of time. + +Summing up, CFS works like this: it runs a task a bit, and when the task +schedules (or a scheduler tick happens) the task's CPU usage is "accounted +for": the (small) time it just spent using the physical CPU is added to +p->se.vruntime. 
Once p->se.vruntime gets high enough so that another task
+becomes the "leftmost task" of the time-ordered rbtree it maintains (plus a
+small amount of "granularity" distance relative to the leftmost task so that we
+do not over-schedule tasks and trash the cache), then the new leftmost task is
+picked and the current task is preempted.
+
+
+
+4. SOME FEATURES OF CFS
+
+CFS uses nanosecond granularity accounting and does not rely on any jiffies or
+other HZ detail. Thus the CFS scheduler has no notion of "timeslices" in the
+way the previous scheduler had, and has no heuristics whatsoever. There is
+only one central tunable (you have to switch on CONFIG_SCHED_DEBUG):
+
+   /proc/sys/kernel/sched_min_granularity_ns
+
+which can be used to tune the scheduler from "desktop" (i.e., low latencies) to
+"server" (i.e., good batching) workloads. It defaults to a setting suitable
+for desktop workloads. SCHED_BATCH is handled by the CFS scheduler module too.
+
+Due to its design, the CFS scheduler is not prone to any of the "attacks" that
+exist today against the heuristics of the stock scheduler: fiftyp.c, thud.c,
+chew.c, ring-test.c and massive_intr.c all work fine, do not impact
+interactivity, and produce the expected behavior.
+
+The CFS scheduler has a much stronger handling of nice levels and SCHED_BATCH
+than the previous vanilla scheduler: both types of workloads are isolated much
+more aggressively.
+
+SMP load-balancing has been reworked/sanitized: the runqueue-walking
+assumptions are gone from the load-balancing code now, and iterators of the
+scheduling modules are used. The balancing code got quite a bit simpler as a
+result.
+
+
+
+5. Scheduling policies
+
+CFS implements three scheduling policies:
+
+  - SCHED_NORMAL (traditionally called SCHED_OTHER): The scheduling
+    policy that is used for regular tasks.
+
+  - SCHED_BATCH: Does not preempt nearly as often as regular tasks
+    would, thereby allowing tasks to run longer and make better use of
+    caches, but at the cost of interactivity. This is well suited for
+    batch jobs.
+
+  - SCHED_IDLE: This is even weaker than nice 19, but it is not a true
+    idle-timer scheduler, in order to avoid getting into priority
+    inversion problems which would deadlock the machine.
+
+SCHED_FIFO/_RR are implemented in sched/rt.c and are as specified by
+POSIX.
+
+The command chrt from util-linux-ng 2.13.1.1 can set all of these except
+SCHED_IDLE.
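+
+The policies above can be exercised from the shell with chrt; a short
+sketch (the job name is a placeholder, and SCHED_BATCH requires a static
+priority of 0):
+
+	# chrt -b 0 ./my_batch_job &	# start a task under SCHED_BATCH
+	# chrt -p $!			# query its policy and priority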
+
+
+
+6. SCHEDULING CLASSES
+
+The new CFS scheduler has been designed in such a way as to introduce
+"Scheduling Classes," an extensible hierarchy of scheduler modules. These
+modules encapsulate scheduling policy details and are handled by the scheduler
+core without the core code assuming too much about them.
+
+sched/fair.c implements the CFS scheduler described above.
+
+sched/rt.c implements SCHED_FIFO and SCHED_RR semantics, in a simpler way than
+the previous vanilla scheduler did. It uses 100 runqueues (for all 100 RT
+priority levels, instead of 140 in the previous scheduler) and it needs no
+expired array.
+
+Scheduling classes are implemented through the sched_class structure, which
+contains hooks to functions that must be called whenever an interesting event
+occurs.
+
+This is the (partial) list of the hooks:
+
+ - enqueue_task(...)
+
+   Called when a task enters a runnable state.
+   It puts the scheduling entity (task) into the red-black tree and
+   increments the nr_running variable.
+
+ - dequeue_task(...)
+
+   When a task is no longer runnable, this function is called to keep the
+   corresponding scheduling entity out of the red-black tree. It decrements
+   the nr_running variable.
+
+ - yield_task(...)
+
+   This function is basically just a dequeue followed by an enqueue, unless the
+   compat_yield sysctl is turned on; in that case, it places the scheduling
+   entity at the right-most end of the red-black tree.
+
+ - check_preempt_curr(...)
+
+   This function checks if a task that entered the runnable state should
+   preempt the currently running task.
+
+ - pick_next_task(...)
+
+   This function chooses the most appropriate task eligible to run next.
+
+ - set_curr_task(...)
+
+   This function is called when a task changes its scheduling class or changes
+   its task group.
+
+ - task_tick(...)
+
+   This function is mostly called from time tick functions; it might lead to
+   a process switch. This drives the running preemption.
+
+
+
+
+7. GROUP SCHEDULER EXTENSIONS TO CFS
+
+Normally, the scheduler operates on individual tasks and strives to provide
+fair CPU time to each task. Sometimes, it may be desirable to group tasks and
+provide fair CPU time to each such task group. For example, it may be
+desirable to first provide fair CPU time to each user on the system and then to
+each task belonging to a user.
+
+CONFIG_CGROUP_SCHED strives to achieve exactly that. It lets tasks be
+grouped and divides CPU time fairly among such groups.
+
+CONFIG_RT_GROUP_SCHED permits grouping of real-time (i.e., SCHED_FIFO and
+SCHED_RR) tasks.
+
+CONFIG_FAIR_GROUP_SCHED permits grouping of CFS (i.e., SCHED_NORMAL and
+SCHED_BATCH) tasks.
+
+   These options need CONFIG_CGROUPS to be defined, and let the administrator
+   create arbitrary groups of tasks, using the "cgroup" pseudo filesystem. See
+   Documentation/cgroups/cgroups.txt for more information about this filesystem.
+
+When CONFIG_FAIR_GROUP_SCHED is defined, a "cpu.shares" file is created for each
+group created using the pseudo filesystem. See example steps below to create
+task groups and modify their CPU share using the "cgroups" pseudo filesystem.
+
+	# mount -t tmpfs cgroup_root /sys/fs/cgroup
+	# mkdir /sys/fs/cgroup/cpu
+	# mount -t cgroup -ocpu none /sys/fs/cgroup/cpu
+	# cd /sys/fs/cgroup/cpu
+
+	# mkdir multimedia	# create "multimedia" group of tasks
+	# mkdir browser		# create "browser" group of tasks
+
+	# # Configure the multimedia group to receive twice the CPU
+	# # bandwidth of the browser group
+
+	# echo 2048 > multimedia/cpu.shares
+	# echo 1024 > browser/cpu.shares
+
+	# firefox &	# Launch firefox and move it to "browser" group
+	# echo <firefox_pid> > browser/tasks
+
+	# # Launch gmplayer (or your favourite movie player)
+	# echo <movie_player_pid> > multimedia/tasks
diff --git a/Documentation/scheduler/sched-domains.txt b/Documentation/scheduler/sched-domains.txt
new file mode 100644
index 000000000..4af80b1c0
--- /dev/null
+++ b/Documentation/scheduler/sched-domains.txt
@@ -0,0 +1,77 @@
+Each CPU has a "base" scheduling domain (struct sched_domain). The domain
+hierarchy is built from these base domains via the ->parent pointer. ->parent
+MUST be NULL terminated, and domain structures should be per-CPU as they are
+locklessly updated.
+
+Each scheduling domain spans a number of CPUs (stored in the ->span field).
+A domain's span MUST be a superset of its child's span (this restriction could
+be relaxed if the need arises), and a base domain for CPU i MUST span at
+least i.
The top domain for each CPU will generally span all CPUs in the system
+although strictly it doesn't have to, but this could lead to a case where some
+CPUs will never be given tasks to run unless the CPUs allowed mask is
+explicitly set. A sched domain's span means "balance process load among these
+CPUs".
+
+Each scheduling domain must have one or more CPU groups (struct sched_group)
+which are organised as a circular one-way linked list from the ->groups
+pointer. The union of cpumasks of these groups MUST be the same as the
+domain's span. The intersection of cpumasks from any two of these groups
+MUST be the empty set. The group pointed to by the ->groups pointer MUST
+contain the CPU to which the domain belongs. Groups may be shared among
+CPUs as they contain read-only data after they have been set up.
+
+Balancing within a sched domain occurs between groups. That is, each group
+is treated as one entity. The load of a group is defined as the sum of the
+load of each of its member CPUs, and only when the load of a group becomes
+out of balance are tasks moved between groups.
+
+In kernel/sched/core.c, trigger_load_balance() is run periodically on each CPU
+through scheduler_tick(). It raises a softirq after the next regularly scheduled
+rebalancing event for the current runqueue has arrived. The actual load
+balancing workhorse, run_rebalance_domains()->rebalance_domains(), is then run
+in softirq context (SCHED_SOFTIRQ).
+
+The latter function takes two arguments: the current CPU and whether it was
+idle at the time the scheduler_tick() happened. It iterates over all sched
+domains our CPU is on, starting from its base domain and going up the ->parent
+chain. While doing that, it checks to see if the current domain has exhausted
+its rebalance interval. If so, it runs load_balance() on that domain. It then
+checks the parent sched_domain (if it exists), and the parent of the parent,
+and so forth.
+
+Initially, load_balance() finds the busiest group in the current sched domain.
+If it succeeds, it looks for the busiest runqueue of all the CPUs' runqueues in
+that group. If it manages to find such a runqueue, it locks both our initial
+CPU's runqueue and the newly found busiest one and starts moving tasks from it
+to our runqueue. The exact number of tasks amounts to an imbalance previously
+computed while iterating over this sched domain's groups.
+
+*** Implementing sched domains ***
+The "base" domain will "span" the first level of the hierarchy. In the case
+of SMT, you'll span all siblings of the physical CPU, with each group being
+a single virtual CPU.
+
+In SMP, the parent of the base domain will span all physical CPUs in the
+node, with each group being a single physical CPU. Then with NUMA, the parent
+of the SMP domain will span the entire machine, with each group having the
+cpumask of a node. Or you could do multi-level NUMA; Opteron, for example,
+might have just one domain covering its one NUMA level.
+
+The implementor should read comments in include/linux/sched.h:
+struct sched_domain fields, SD_FLAG_*, SD_*_INIT to get an idea of
+the specifics and what to tune.
+
+Architectures may override the default SD_*_INIT flags while still using the
+generic domain builder in kernel/sched/core.c if they wish to retain the
+traditional SMT->SMP->NUMA topology (or some subset of that). This can be
+done by #define'ing ARCH_HAS_SCHED_TUNE.
+
+Alternatively, the architecture may completely override the generic domain
+builder by #define'ing ARCH_HAS_SCHED_DOMAIN, and exporting its
+arch_init_sched_domains function. This function will attach domains to all
+CPUs using cpu_attach_domain.
+
+The sched-domains debugging infrastructure can be enabled by enabling
+CONFIG_SCHED_DEBUG. This enables an error-checking parse of the sched domains
+which should catch most possible errors (described above). It also prints out
+the domain structure in a visual format.
diff --git a/Documentation/scheduler/sched-nice-design.txt b/Documentation/scheduler/sched-nice-design.txt
new file mode 100644
index 000000000..3ac1e46d5
--- /dev/null
+++ b/Documentation/scheduler/sched-nice-design.txt
@@ -0,0 +1,108 @@
+This document explains the thinking about the revamped and streamlined
+nice-levels implementation in the new Linux scheduler.
+
+Nice levels were always pretty weak under Linux and people continuously
+pestered us to make nice +19 tasks use up much less CPU time.
+
+Unfortunately that was not that easy to implement under the old
+scheduler (otherwise we'd have done it long ago), because nice level
+support was historically coupled to timeslice length, and timeslice
+units were driven by the HZ tick, so the smallest timeslice was 1/HZ.
+
+In the O(1) scheduler (in 2003) we changed negative nice levels to be
+much stronger than they were before in 2.4 (and people were happy about
+that change), and we also intentionally calibrated the linear timeslice
+rule so that nice +19 level would be _exactly_ 1 jiffy. To better
+understand it, the timeslice graph went like this (cheesy ASCII art
+alert!):
+
+
+                  A
+            \     | [timeslice length]
+             \    |
+              \   |
+               \  |
+                \ |
+                 \|___100msecs
+                  |^ . _
+                  |      ^ . _
+                  |            ^ . _
+ -*----------------------------------*-----> [nice level]
+ -20              |                 +19
+                  |
+                  |
+
+So that if someone wanted to really renice tasks, +19 would give a much
+bigger hit than the normal linear rule would do. (The solution of
+changing the ABI to extend priorities was discarded early on.)
+
+This approach worked to some degree for some time, but later on with
+HZ=1000 it caused 1 jiffy to be 1 msec, which meant 0.1% CPU usage which
+we felt to be a bit excessive. Excessive _not_ because it's too small a
+CPU utilization, but because it causes too frequent (once per millisec)
+rescheduling, which would thus trash the cache, etc. (Remember, this was
+long ago when hardware was weaker and caches were smaller, and people
+were running number-crunching apps at nice +19.)
+
+So for HZ=1000 we changed nice +19 to 5msecs, because that felt like the
+right minimal granularity - and this translates to 5% CPU utilization.
+But the fundamental HZ-sensitive property for nice +19 still remained,
+and we never got a single complaint about nice +19 being too _weak_ in
+terms of CPU utilization, we only got complaints about it (still) being
+too _strong_ :-)
+
+To sum it up: we always wanted to make nice levels more consistent, but
+within the constraints of HZ and jiffies and their nasty design-level
+coupling to timeslices and granularity it was not really viable.
+
+The second (less frequent but still periodically occurring) complaint
+about Linux's nice level support was its asymmetry around the origin
+(which you can see demonstrated in the picture above), or more
+accurately: the fact that nice level behavior depended on the _absolute_
+nice level as well, while the nice API itself is fundamentally
+"relative":
+
+   int nice(int inc);
+
+   asmlinkage long sys_nice(int increment)
+
+(the first one is the glibc API, the second one is the syscall API.)
+Note that the 'inc' is relative to the current nice level. Tools like
+bash's "nice" command mirror this relative API.
+
+With the old scheduler, if you for example started a niced task with +1
+and another task with +2, the CPU split between the two tasks would
+depend on the nice level of the parent shell - if it was at nice -10 the
+CPU split was different than if it was at +5 or +10.
+
+A third complaint against Linux's nice level support was that negative
+nice levels were not 'punchy enough', so lots of people had to resort to
+running audio (and other multimedia) apps under RT priorities such as
+SCHED_FIFO. But this caused other problems: SCHED_FIFO is not starvation
+proof, and a buggy SCHED_FIFO app can also lock up the system for good.
+
+The new scheduler in v2.6.23 addresses all three types of complaints:
+
+To address the first complaint (of nice levels being not "punchy"
+enough), the scheduler was decoupled from 'time slice' and HZ concepts
+(and granularity was made a separate concept from nice levels) and thus
+it was possible to implement better and more consistent nice +19
+support: with the new scheduler nice +19 tasks get a HZ-independent
+1.5%, instead of the variable 3%-5%-9% range they got in the old
+scheduler.
+
+To address the second complaint (of nice levels not being consistent),
+the new scheduler makes nice(1) have the same CPU utilization effect on
+tasks, regardless of their absolute nice levels. So on the new
+scheduler, running a nice +10 and a nice +11 task has the same CPU
+utilization "split" between them as running a nice -5 and a nice -4
+task. (One will get 55% of the CPU, the other 45%: adjacent nice levels
+differ by a constant weight factor of roughly 1.25, and 1.25/2.25 is
+about 55%.) That is why nice levels were changed to be "multiplicative"
+(or exponential) - that way it does not matter which nice level you
+start out from, the 'relative result' will always be the same.
+
+The third complaint (of negative nice levels not being "punchy" enough
+and forcing audio apps to run under the more dangerous SCHED_FIFO
+scheduling policy) is addressed by the new scheduler almost
+automatically: stronger negative nice levels are an automatic
+side-effect of the recalibrated dynamic range of nice levels.
diff --git a/Documentation/scheduler/sched-rt-group.txt b/Documentation/scheduler/sched-rt-group.txt
new file mode 100644
index 000000000..71b54d549
--- /dev/null
+++ b/Documentation/scheduler/sched-rt-group.txt
@@ -0,0 +1,183 @@
+				Real-Time group scheduling
+				--------------------------
+
+CONTENTS
+========
+
+0. WARNING
+1. Overview
+  1.1 The problem
+  1.2 The solution
+2. The interface
+  2.1 System-wide settings
+  2.2 Default behaviour
+  2.3 Basis for grouping tasks
+3. Future plans
+
+
+0. WARNING
+==========
+
+ Fiddling with these settings can result in an unstable system; the knobs are
+ root-only and assume that root knows what they are doing.
+
+Most notably:
+
+* very small values in sched_rt_period_us can result in an unstable
+  system when the period is smaller than either the available hrtimer
+  resolution, or the time it takes to handle the budget refresh itself.
+
+* very small values in sched_rt_runtime_us can result in an unstable
+  system when the runtime is so small the system has difficulty making
+  forward progress (NOTE: the migration thread and kstopmachine both
+  are real-time processes).
+
+1. Overview
+===========
+
+
+1.1 The problem
+---------------
+
+Realtime scheduling is all about determinism: a group has to be able to rely on
+the amount of bandwidth (e.g. CPU time) being constant. In order to schedule
+multiple groups of realtime tasks, each group must be assigned a fixed portion
+of the CPU time available. Without a minimum guarantee a realtime group can
+obviously fall short. A fuzzy upper limit is of no use since it cannot be
+relied upon, which leaves us with just the single fixed portion.
+
+1.2 The solution
+----------------
+
+CPU time is divided by means of specifying how much time can be spent running
+in a given period. We allocate this "run time" for each realtime group, which
+the other realtime groups will not be permitted to use.
+
+Any time not allocated to a realtime group will be used to run normal priority
+tasks (SCHED_OTHER). Any allocated run time not used will also be picked up by
+SCHED_OTHER.
+
+Let's consider an example: a fixed frame-rate realtime renderer must deliver 25
+frames a second, which yields a period of 0.04s per frame. Now say it will also
+have to play some music and respond to input, leaving it with around 80% CPU
+time dedicated for the graphics. We can then give this group a run time of
+0.8 * 0.04s = 0.032s.
+
+This way the graphics group will have a 0.04s period with a 0.032s run time
+limit. Now if the audio thread needs to refill the DMA buffer every 0.005s, but
+needs only about 3% CPU time to do so, it can do with 0.03 * 0.005s =
+0.00015s. So this group can be scheduled with a period of 0.005s and a run time
+of 0.00015s.
+
+The remaining CPU time will be used for user input and other tasks. Because
+realtime tasks have explicitly allocated the CPU time they need to perform
+their tasks, buffer underruns in the graphics or audio can be eliminated.
+
+NOTE: the above example is not fully implemented yet. We still
+lack an EDF scheduler to make non-uniform periods usable.
+
+
+2. The Interface
+================
+
+
+2.1 System wide settings
+------------------------
+
+The system wide settings are configured under the /proc virtual file system:
+
+/proc/sys/kernel/sched_rt_period_us:
+  The scheduling period that is equivalent to 100% CPU bandwidth.
+
+/proc/sys/kernel/sched_rt_runtime_us:
+  A global limit on how much time realtime scheduling may use. Even without
+  CONFIG_RT_GROUP_SCHED enabled, this will limit time reserved to realtime
+  processes. With CONFIG_RT_GROUP_SCHED it signifies the total bandwidth
+  available to all realtime groups.
+
+  * Time is specified in us because the interface is s32. This gives an
+    operating range from 1us to about 35 minutes.
+  * sched_rt_period_us takes values from 1 to INT_MAX.
+  * sched_rt_runtime_us takes values from -1 to (INT_MAX - 1).
+  * A run time of -1 specifies runtime == period, ie. no limit.
+
+
+2.2 Default behaviour
+---------------------
+
+The default values are 1000000 (1s) for sched_rt_period_us and 950000 (0.95s)
+for sched_rt_runtime_us. This gives 0.05s to be used by SCHED_OTHER (non-RT
+tasks). These defaults were chosen so that a run-away realtime task will not
+lock up the machine but will leave a little time to recover it. By setting
+runtime to -1 you'd get the old behaviour back.
+
+By default all bandwidth is assigned to the root group and new groups get the
+period from /proc/sys/kernel/sched_rt_period_us and a run time of 0. If you
+want to assign bandwidth to another group, reduce the root group's bandwidth
+and assign some or all of the difference to another group.
+
+Realtime group scheduling means you have to assign a portion of total CPU
+bandwidth to the group before it will accept realtime tasks. Therefore you will
+not be able to run realtime tasks as any user other than root until you have
+done that, even if the user has the rights to run processes with realtime
+priority!
+
+
+2.3 Basis for grouping tasks
+----------------------------
+
+Enabling CONFIG_RT_GROUP_SCHED lets you explicitly allocate real
+CPU bandwidth to task groups.
+
+This uses the cgroup virtual file system and "<cgroup>/cpu.rt_runtime_us"
+to control the CPU time reserved for each control group.
+
+For more information on working with control groups, you should read
+Documentation/cgroups/cgroups.txt as well.
+
+Group settings are checked against the following limits in order to keep the
+configuration schedulable:
+
+   \Sum_{i} runtime_{i} / global_period <= global_runtime / global_period
+
+For now, this can be simplified to just the following (but see Future plans):
+
+   \Sum_{i} runtime_{i} <= global_runtime
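+
+As a concrete illustration of 2.2 and 2.3, the sketch below shrinks the root
+group's share and hands part of the difference to a new group (it assumes the
+cpu controller is mounted at /sys/fs/cgroup/cpu as in sched-design-CFS.txt;
+the group name "myrt" is purely illustrative, and the numbers assume the
+default 1s period):
+
+	# cd /sys/fs/cgroup/cpu
+	# echo 700000 > cpu.rt_runtime_us	# shrink the root group's share
+	# mkdir myrt
+	# echo 250000 > myrt/cpu.rt_runtime_us	# give myrt 0.25s per 1s period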
+
+
+3. Future plans
+===============
+
+There is work in progress to make the scheduling period for each group
+("<cgroup>/cpu.rt_period_us") configurable as well.
+
+The constraint on the period is that a subgroup must have a smaller or
+equal period to its parent. But realistically it's not very useful _yet_
+as it's prone to starvation without deadline scheduling.
+
+Consider two sibling groups A and B; both are allocated the same run time
+(10ms), but A's period is twice the length of B's.
+
+* group A: period=100000us, runtime=10000us
+	- this runs for 0.01s once every 0.1s
+
+* group B: period= 50000us, runtime=10000us
+	- this runs for 0.01s twice every 0.1s (or once every 0.05 sec).
+
+This means that currently a while (1) loop in A will run for the full period of
+B and can starve B's tasks (assuming they are of lower priority) for a whole
+period.
+
+The next project will be SCHED_EDF (Earliest Deadline First scheduling) to bring
+full deadline scheduling to the Linux kernel. Deadline scheduling the above
+groups and treating the end of the period as a deadline will ensure that they
+both get their allocated time.
+
+Implementing SCHED_EDF might take a while to complete. Priority Inheritance is
+the biggest challenge, as the current Linux PI infrastructure is geared towards
+the limited static priority levels 0-99. With deadline scheduling you need to
+do deadline inheritance (since priority is inversely proportional to the
+deadline delta (deadline - now)).
+
+This means the whole PI machinery will have to be reworked - and that is one of
+the most complex pieces of code we have.
diff --git a/Documentation/scheduler/sched-stats.txt b/Documentation/scheduler/sched-stats.txt
new file mode 100644
index 000000000..8259b34a6
--- /dev/null
+++ b/Documentation/scheduler/sched-stats.txt
@@ -0,0 +1,154 @@
+Version 15 of schedstats dropped some sched_yield() counters:
+yld_exp_empty, yld_act_empty and yld_both_empty. Otherwise, it is
+identical to version 14.
+
+Version 14 of schedstats includes support for sched_domains, which hit the
+mainline kernel in 2.6.20, although it is identical to the stats from version
+12, which was in the kernel from 2.6.13 to 2.6.19 (version 13 never saw a
+kernel release). Some counters make more sense to be per-runqueue; others to
+be per-domain. Note that domains (and their associated information) will only
+be pertinent and available on machines utilizing CONFIG_SMP.
+
+In version 14 of schedstat, there is at least one level of domain
+statistics for each cpu listed, and there may well be more than one
+domain. Domains have no particular names in this implementation, but
+the highest numbered one typically arbitrates balancing across all the
+cpus on the machine, while domain0 is the most tightly focused domain,
+sometimes balancing only between pairs of cpus. At this time, there
+are no architectures which need more than three domain levels. The first
+field in the domain stats is a bit map indicating which cpus are affected
+by that domain.
+
+These fields are counters, and only increment. Programs which make use
+of these will need to start with a baseline observation and then calculate
+the change in the counters at each subsequent observation. A perl script
+which does this for many of the fields is available at
+
+    http://eaglet.rain.com/rick/linux/schedstat/
+
+Note that any such script will necessarily be version-specific, as the main
+reason to change versions is changes in the output format. For those wishing
+to write their own scripts, the fields are described here.
+
+CPU statistics
+--------------
+cpu<N> 1 2 3 4 5 6 7 8 9
+
+First field is a sched_yield() statistic:
+     1) # of times sched_yield() was called
+
+Next three are schedule() statistics:
+     2) This field is a legacy array expiration count field used in the O(1)
+	scheduler. We kept it for ABI compatibility, but it is always set to zero.
+     3) # of times schedule() was called
+     4) # of times schedule() left the processor idle
+
+Next two are try_to_wake_up() statistics:
+     5) # of times try_to_wake_up() was called
+     6) # of times try_to_wake_up() was called to wake up the local cpu
+
+Next three are statistics describing scheduling latency:
+     7) sum of all time spent running by tasks on this processor (in jiffies)
+     8) sum of all time spent waiting to run by tasks on this processor (in
+	jiffies)
+     9) # of timeslices run on this cpu
+
+
+Domain statistics
+-----------------
+One of these is produced per domain for each cpu described. (Note that if
+CONFIG_SMP is not defined, *no* domains are utilized and these lines
+will not appear in the output.)
+
+domain<N> <cpumask> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
+
+The first field is a bit mask indicating what cpus this domain operates over.
+
+The next 24 are a variety of load_balance() statistics, grouped into types
+of idleness (idle, busy, and newly idle):
+
+    1)  # of times in this domain load_balance() was called when the
+        cpu was idle
+    2)  # of times in this domain load_balance() checked but found
+        the load did not require balancing when the cpu was idle
+    3)  # of times in this domain load_balance() tried to move one or
+        more tasks and failed, when the cpu was idle
+    4)  sum of imbalances discovered (if any) with each call to
+        load_balance() in this domain when the cpu was idle
+    5)  # of times in this domain pull_task() was called when the cpu
+        was idle
+    6)  # of times in this domain pull_task() was called even though
+        the target task was cache-hot when idle
+    7)  # of times in this domain load_balance() was called but did
+        not find a busier queue while the cpu was idle
+    8)  # of times in this domain a busier queue was found while the
+        cpu was idle but no busier group was found
+
+    9)  # of times in this domain load_balance() was called when the
+        cpu was busy
+    10) # of times in this domain load_balance() checked but found the
+        load did not require balancing when busy
+    11) # of times in this domain load_balance() tried to move one or
+        more tasks and failed, when the cpu was busy
+    12) sum of imbalances discovered (if any) with each call to
+        load_balance() in this domain when the cpu was busy
+    13) # of times in this domain pull_task() was called when busy
+    14) # of times in this domain pull_task() was called even though the
+        target task was cache-hot when busy
+    15) # of times in this domain load_balance() was called but did not
+        find a busier queue while the cpu was busy
+    16) # of times in this domain a busier queue was found while the cpu
+        was busy but no busier group was found
+
+    17) # of times in this domain load_balance() was called when the
+        cpu was just becoming idle
+    18) # of times in this domain load_balance() checked but found the
+        load did not require balancing when the cpu was just becoming idle
+    19) # of times in this domain load_balance() tried to move one or more
+        tasks and failed, when the cpu was just becoming idle
+    20) sum of imbalances discovered (if any) with each call to
+        load_balance() in this domain when the cpu was just becoming idle
+    21) # of times in this domain pull_task() was called when newly idle
+    22) # of times in this domain pull_task() was called even though the
+        target task was cache-hot when just becoming idle
+    23) # of times in this domain load_balance() was called but did not
+        find a busier queue while the cpu was just becoming idle
+    24) # of times in this domain a busier queue was found while the cpu
+        was just becoming idle but no busier group was found
+
+   Next three are active_load_balance() statistics:
+   25) # of times active_load_balance() was called
+   26) # of times active_load_balance() tried to move a task and failed
+   27) # of times active_load_balance() successfully moved a task
+
+   Next three are sched_balance_exec() statistics:
+   28) sbe_cnt is not used
+   29) sbe_balanced is not used
+   30) sbe_pushed is not used
+
+   Next three are sched_balance_fork() statistics:
+   31) sbf_cnt is not used
+   32) sbf_balanced is not used
+   33) sbf_pushed is not used
+
+   Next three are try_to_wake_up() statistics:
+   34) # of times in this domain try_to_wake_up() awoke a task that
+       last ran on a different cpu in this domain
+   35) # of times in this domain try_to_wake_up() moved a task to the
+       waking cpu because it was cache-cold on its own cpu anyway
+   36) # of times in this domain try_to_wake_up() started passive balancing
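+
+Since all of the above are cumulative counters, a crude but effective way to
+watch them move is to snapshot /proc/schedstat twice and compare the two
+(a sketch):
+
+    $ cat /proc/schedstat > before; sleep 10; cat /proc/schedstat > after
+    $ diff before after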
+
+/proc/<pid>/schedstat
+---------------------
+schedstats also adds a new /proc/<pid>/schedstat file to include some of
+the same information on a per-process level. There are three fields in
+this file, giving for that process:
+     1) time spent on the cpu
+     2) time spent waiting on a runqueue
+     3) # of timeslices run on this cpu
+
+A program could easily be written to make use of these extra fields to
+report on how well a particular process or set of processes is faring
+under the scheduler's policies. A simple version of such a program is
+available at
+    http://eaglet.rain.com/rick/linux/schedstat/v12/latency.c
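+
+For a quick look without writing a program, the fields can also be read with
+standard tools; the sketch below assumes the two times are reported in
+nanoseconds, as they are on CFS-based kernels:
+
+    $ awk '{ printf "ran %.1fms, waited %.1fms over %d timeslices\n", $1/1e6, $2/1e6, $3 }' /proc/self/schedstat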