A scheduler is the part of the kernel that decides which process will be allocated CPU time next. Scheduling matters on systems where numerous processes compete for the CPU and a choice must be made about which runs next. The scheduling policy can have a large impact on the perceived performance of a system.
For example, a user clicks Send in a GUI email application. If the scheduler runs the mail-sending daemon first and it takes two seconds, the interface will not respond during those two seconds even though the user is trying to use it, because the daemon process is running. If the scheduler instead chose the GUI to run first and ran the daemon in the background, the system would have seemed faster to the user even though the overall time to finish the processing is the same.
Scheduling can also affect the correctness of a system that has deadlines to meet.
Schedulers are called when:
- A new process is created
- A process exits
- A process waits for I/O
- A process blocks on a lock
- An I/O interrupt occurs
Goals of Scheduling
- Fairness - Give each process a fair share of the CPU
- Policy Enforcement - Whatever policy is chosen, the scheduler should ensure it is carried out
- Balance/Efficiency - Try to keep all parts of the system busy
- Increase Throughput - the number of processes that complete per unit of time
- Decrease Turn-around - time taken for a process to complete
- Decrease Waiting time - time spent waiting in ready queue
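As a worked example of the last three metrics, the sketch below computes turn-around time, waiting time and throughput for an invented three-process workload run first-come-first-served (the workload values and the FCFS policy are assumptions for illustration, not from the text above):

```python
# Hypothetical workload: (arrival_time, burst_time) for three processes,
# run to completion in arrival order on a single CPU.
jobs = [(0, 5), (1, 3), (2, 8)]

clock = 0
turnarounds, waits = [], []
for arrival, burst in jobs:
    start = max(clock, arrival)           # CPU may sit idle until the job arrives
    finish = start + burst
    turnarounds.append(finish - arrival)  # turn-around: arrival -> completion
    waits.append(start - arrival)         # waiting: time spent in the ready queue
    clock = finish

throughput = len(jobs) / clock            # completed processes per unit time
print(turnarounds, waits, throughput)
```

Running the three jobs back to back makes the trade-off visible: the last job's turn-around time is dominated by waiting, which is exactly what interactive scheduling tries to reduce.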
Interactive Processes
Users are directly waiting on the application, so the scheduler can optimise to improve the users' perception of performance.
- Minimise response time - Response time is the time difference between issuing a command and getting the result
- E.g. selecting a menu and getting the result of that selection
- Response time is important to the user’s perception of the performance of the system.
- Provide Proportionality - the user expectation that short jobs will have a short response time, and long jobs can have a long response time.
- Generally, favour short jobs
Real-time Processes
Processes have deadlines, and scheduling should ensure that all processes meet them.
- Must meet deadlines
- Each job/task has a deadline.
- A missed deadline can result in data loss or catastrophic failure
- E.g. an aircraft control system missing the deadline to apply the brakes
- Provide Predictability
- For some apps, an occasional missed deadline is okay
- E.g. DVD decoder - Predictable behaviour allows smooth DVD decoding with only rare skips
Application Behaviour
To effectively schedule a system's processes it is useful to understand how applications generally behave. There are broadly two behaviours a process can exhibit.

CPU-bound processes are CPU intensive, and their time to completion depends on the amount of CPU time they are given.

I/O-bound processes depend on an I/O device, which is nearly always slower than the CPU. A substantial amount of their time is spent waiting on I/O, and their time to completion depends on the speed of the I/O device.
Choosing to run an I/O-bound process delays a CPU-bound process very little, but the opposite is not true: scheduling a CPU-bound process blocks the I/O-bound process from making its next I/O request, so the system fails to overlap CPU computation with I/O waiting, which is an objective. Schedulers therefore usually favour I/O-bound processes over CPU-bound ones.
Scheduling Topology
Under non-preemptive scheduling, a process runs until it completes, or until it blocks on a resource and returns control to the OS. This style of scheduling allows a single thread to monopolise the entire system.
Under preemptive scheduling, running threads can be interrupted by the OS and moved into the ready state. This is usually implemented with a hardware timer interrupt: after a time slice has elapsed, the interrupt fires and control is handed back to the kernel. The interrupt could also be the result of an I/O device completing. Preemption supports fair scheduling by preventing a single process from hogging CPU time.
Interactive Scheduling Algorithms
Round Robin
Essentially each process is given a time slice in which to execute. At the end of the time slice the current process is preempted by a timer interrupt, moved to the end of the ready queue, and the process at the head of the queue then runs for a time slice.
It requires a ready queue and a timer with an interrupt.
- Completely fair system, as each process gets equal time to run.
- Simple to implement.
- Assumes all processes are equal.
- If the time slice is too short, a lot of time is spent switching between processes
- If the time slice is too long, the system will seem unresponsive
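The time-slice mechanism above can be sketched as a small simulation. The burst times and quantum are invented for illustration, and all processes are assumed to arrive at time 0:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling; returns each process's completion time.
    bursts[i] is the total CPU time process i needs."""
    ready = deque((pid, burst) for pid, burst in enumerate(bursts))
    clock, finish = 0, {}
    while ready:
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)     # timer interrupt fires after `quantum`
        clock += run
        if remaining > run:
            ready.append((pid, remaining - run))  # preempted: back of the queue
        else:
            finish[pid] = clock                   # process exits
    return finish

print(round_robin([3, 5, 2], quantum=2))
```

With a quantum of 2, the short process (burst 2) finishes first, illustrating why round robin gives short jobs reasonable response times without starving long ones.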
Priority Scheduling
Each process has an associated priority and the scheduler will always choose a higher-priority process over a lower-priority one.
Priority can be internal, e.g. based on whether the process is CPU- or I/O-bound, or external, i.e. decided by the user.
Usually each priority level has its own round robin queue.
- Important tasks are run first, and less important ones are left for later.
- Low-priority tasks can starve if process aging (time spent waiting) is not taken into consideration.
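One way to sketch priority selection with aging is below. The `aging_rate` knob and the Unix-style convention that lower numbers mean higher priority are assumptions of this sketch, not part of the text above:

```python
def pick_next(ready, clock, aging_rate=1):
    """Choose the next process to run. `ready` holds
    (base_priority, enqueue_time, pid) tuples; lower numbers mean higher
    priority. Time spent waiting lowers the effective number, so
    low-priority tasks cannot starve. aging_rate is an invented knob."""
    def effective(base, enqueued):
        return base - aging_rate * (clock - enqueued)
    return min(ready, key=lambda t: effective(t[0], t[1]))

ready = [(10, 0, "batch"), (2, 9, "editor")]
print(pick_next(ready, clock=10))                 # aging lets the old "batch" job win
print(pick_next(ready, clock=10, aging_rate=0))   # without aging, "editor" always wins
```

The second call shows the starvation problem the bullet above describes: with no aging, the low-priority batch job is never selected while higher-priority work keeps arriving.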
Traditional Unix Scheduler
The Unix scheduler is a two-level scheduler: a high-level scheduler moves tasks between memory and disk, while a low-level scheduler allocates CPU time. The low-level scheduler is based on multilevel queues, one per priority level, and the highest priority (lowest number) is scheduled first.
Priorities are reassessed every second and the priority queues adjusted to reflect this. Aged processes have their priority systematically increased to ensure that low-priority tasks do not starve. The scheduler also penalises CPU-bound processes.
Priority = CPU_usage + nice + base
- CPU_usage = number of clock ticks
- Decays over time to avoid permanently penalising the process
- Nice is a value given to the process by a user to permanently boost or reduce its priority
- Reduce priority of background jobs
- Base is a set of hardwired, negative values used to boost priority of I/O bound system activities
- Swapper, disk I/O, Character I/O
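A minimal sketch of the priority rule above. The halving decay used here is a simplification for illustration, not the exact decay formula of any particular Unix variant:

```python
def unix_priority(cpu_ticks, nice=0, base=0):
    # Recomputed every second; a higher number means a lower priority.
    # Halving the tick count stands in for the real decay factor.
    return cpu_ticks // 2 + nice + base

print(unix_priority(60))        # heavy recent CPU use -> large number, low priority
ticks = 60
for _ in range(3):
    ticks //= 2                 # usage decays each second spent off the CPU
print(unix_priority(ticks))     # past usage is forgiven, priority recovers
```

This shows both effects described above: accumulating ticks penalises CPU-bound processes, while decay ensures the penalty is not permanent.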
Real-time Scheduling Algorithms
The correctness of a result in a real-time system depends not only on its logical value but also on the time at which it is produced. Real-time processes interact with the real world, so processing must keep up with real time.
Hard real-time: processes must always meet their deadlines.
Soft real-time: there is room for the occasional late deadline, e.g. a DVD player on which 95% of frames are processed in time.
Real-time systems are not necessarily fast, but rather predictable.
Properties of real-time tasks
To correctly schedule a real-time task, a few parameters must be known in advance:
- Arrival time (time it can start)
- Maximum execution time
- Deadline (time by which it must finish)
Types of real-time tasks
Periodic tasks:
- Each task repeats at a regular interval.
- Maximum execution time is the same each period.
- Arrival time is usually the start of the period.
- Deadline is usually the end of the period.
Sporadic tasks:
- Can arrive at any time.
Scheduling Approaches
Static (offline) scheduling: a set of tasks and their properties is given, and a schedule is computed offline. This suits only periodic tasks and requires the entire schedule to be recomputed if the task set changes.
Static priority scheduling: given a set of tasks and their properties, each is assigned a priority. Tasks are then preemptively scheduled according to their priority. Again, this is used for periodic tasks.
Dynamic scheduling: when a task arrives, the scheduler examines its properties and determines whether it can be completed on time. If it can, it is admitted; otherwise it is rejected. Dynamic scheduling can handle both periodic and sporadic tasks.
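The admit-or-reject decision in dynamic scheduling can be sketched as a CPU-utilisation check. Using total utilisation at most 1 (the EDF bound) as the admission criterion is an assumption of this sketch:

```python
def admit(tasks, new_task):
    """Admission test: accept the new periodic task only if total CPU
    utilisation stays at or below 1. Tasks are (execution_time, period)."""
    utilisation = sum(c / t for c, t in tasks + [new_task])
    return utilisation <= 1.0

running = [(1, 4), (2, 8)]       # currently uses 0.25 + 0.25 = 0.5 of the CPU
print(admit(running, (2, 5)))    # 0.5 + 0.4 = 0.9 -> admitted
print(admit(running, (3, 5)))    # 0.5 + 0.6 = 1.1 -> rejected
```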
Rate Monotonic Scheduler (RMS)
- Static priority scheduler.
- Tasks with the shortest period are given the highest priority.
- If the CPU load is high the deadlines may not be met.
- Easiest to implement.
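A common sufficient schedulability test for RMS is the Liu and Layland utilisation bound. The sketch below applies it to an invented periodic task set:

```python
def rms_schedulable(tasks):
    """Sufficient (not necessary) RMS test: n periodic tasks are guaranteed
    to meet their deadlines if total utilisation <= n * (2^(1/n) - 1).
    Tasks are (execution_time, period) pairs."""
    n = len(tasks)
    utilisation = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilisation <= bound

print(rms_schedulable([(1, 4), (1, 5), (2, 10)]))  # 0.65 vs bound ~0.78
print(rms_schedulable([(3, 4), (1, 5)]))           # 0.95 vs bound ~0.83
```

The second call illustrates the bullet above: when CPU load is high, RMS cannot guarantee that deadlines will be met.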
Earliest Deadline First (EDF)
- Task with the earliest deadline is chosen next
- Guarantees that all deadlines are met, provided the task set is schedulable (total utilisation at most 1).
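A minimal sketch of the EDF selection rule. The task names and deadline values are invented, and this sketch simply skips tasks whose deadlines have already passed:

```python
def edf_pick(ready, clock):
    """Among tasks whose deadlines are still in the future, run the one
    with the nearest deadline. `ready` holds (deadline, pid) pairs;
    ties are broken by pid."""
    feasible = [(d, pid) for d, pid in ready if d >= clock]
    return min(feasible) if feasible else None

print(edf_pick([(12, "audio"), (7, "video"), (30, "log")], clock=5))
```

At each scheduling decision the nearest deadline wins, regardless of task period, which is what lets EDF use the CPU fully where RMS's fixed priorities cannot.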