2. CPU Scheduling
CPU scheduling is the basis of multiprogrammed operating
systems. By switching the CPU among processes, the
operating system can make the computer more productive.
• In a single-processor system, only one process can run at a
time. Others must wait until the CPU is free and can be
rescheduled.
• The objective of multiprogramming is to have some process
running at all times, to maximize CPU utilization.
• The idea is relatively simple. A process is executed until it
must wait, typically for the completion of some I/O request. In
a simple computer system, the CPU then just sits idle. All this
waiting time is wasted; no useful work is accomplished.
3. CPU Scheduling
• With multiprogramming, we try to use this time productively.
Several processes are kept in memory at one time.
• When one process has to wait, the operating system takes the
CPU away from that process and gives the CPU to another
process. This pattern continues. Every time one process has
to wait, another process can take over use of the CPU.
4. CPU–I/O Burst Cycle
• Maximum CPU utilization
obtained with multiprogramming
• CPU–I/O Burst Cycle – Process
execution consists of a cycle of
CPU execution and I/O wait
• CPU burst followed by I/O burst
• CPU burst distribution is of main
concern
5. CPU Scheduler
Short-term scheduler selects from among the processes in the
ready queue and allocates the CPU to one of them
Queue may be ordered in various ways
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
Scheduling under 1 and 4 is nonpreemptive
All other scheduling is preemptive
Consider access to shared data
Consider preemption while in kernel mode
Consider interrupts occurring during crucial OS activities
6. Dispatcher
• Dispatcher module gives control of the CPU to the process
selected by the short-term scheduler; this involves:
• switching context
• switching to user mode
• jumping to the proper location in the user program to restart that
program
• Dispatch latency – time it takes for the dispatcher to stop
one process and start another running
7. Scheduling Algorithms
Basic categories:
• Non-preemptive Mode: In this scheduling, a process
continues to execute until it terminates or blocks itself,
for example to wait for I/O or to request some
operating-system service.
• Preemptive Mode: The currently running process may
be interrupted and moved to the Ready state by the
operating system.
The decision to preempt may be due to:
a new process arrives
an interrupt occurs that places a Blocked process in the
Ready state
periodically on the basis of a clock interrupt.
8. Scheduling Algorithms
(1) Pure Priority Scheduling Scheme
• Each process is assigned a priority.
• The scheduler always chooses a process of higher priority
over one of lower priority.
• Multiple “ready queues” with priorities RQ0 > RQ1 > … >
RQn.
• Process is selected from the highest priority ready queue.
• Problem: Lower priority processes may suffer starvation,
when there is a steady supply of higher-priority ready
processes.
• Solution: Priority of a process can change with its age or
execution history.
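The aging solution above can be sketched in a few lines. This is a minimal illustration, not from the slides: the aging rate, process names, and timestamps are illustrative assumptions, and lower numbers mean higher priority, as in the RQ0 > RQ1 > … > RQn description.

```python
# Sketch of priority selection with aging (an illustrative fix for
# starvation). Lower number = higher priority; aging_rate is assumed.

def pick_with_aging(ready, now, aging_rate=0.1):
    """ready maps name -> (base_priority, enqueue_time).

    Effective priority improves (its number decreases) the longer a
    process has waited, so low-priority processes cannot starve forever.
    """
    def effective(item):
        name, (base, enqueued) = item
        return base - aging_rate * (now - enqueued)
    return min(ready.items(), key=effective)[0]

# A (priority 1) arrived at time 90; B (priority 5) has waited since 0.
ready = {"A": (1, 90), "B": (5, 0)}
print(pick_with_aging(ready, now=100))   # "B": 5 - 0.1*100 = -5 beats 1 - 0.1*10 = 0
```

Without aging (aging_rate = 0) the scheduler would always pick A, and B could starve indefinitely.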
11. Scheduling Algorithms
(2) First-Come, First-Served Scheme
• First-Come, First-Served (FCFS) scheme is also called
First-In, First-Out (FIFO) scheme.
• The dispatcher runs the processes at head of single ready
queue, new processes come in at the end of the queue.
• Non-Preemptive technique
• Short processes may wait a long time while a long process is
executed by the processor.
• FCFS tends to favor CPU-bound processes over I/O-bound
processes.
• FCFS is simple and fast, but may result in inefficient use of
the processor and the I/O devices.
12. (2) First-Come, First-Served Scheme (cont.)
• I/O-Bound Processes: Processes that perform lots of I/O
operations. Each I/O is followed by a short CPU burst.
• CPU-Bound Processes: Processes that perform lots of
computation and do little I/O. They tend to have a few long
CPU bursts.
CPU burst: The amount of time the process uses the
processor before it is no longer ready. Types of CPU
bursts:
• long bursts – the process is CPU-bound (e.g. array computation)
• short bursts – the process is I/O-bound
Scheduling Algorithms
13. First-Come, First-Served (FCFS) Scheduling: Example
Process Burst Time
P1 24
P2 3
P3 3
• Suppose that the processes arrive in the order: P1 , P2 , P3
• The Gantt Chart for the schedule is:
• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
• Convoy effect - short process behind long process
• There is a convoy effect as all the other processes wait for the one big
process to get off the CPU. This effect results in lower CPU and device
utilization than might be possible if the shorter processes were allowed
to go first.
| P1 | P2 | P3 |
0    24   27   30
14. FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order:
P2 , P3 , P1
The Gantt chart for the schedule is:
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than previous case
Thus, the average waiting time under an FCFS policy is
generally not minimal and may vary substantially if the
processes’ CPU burst times vary greatly.
| P2 | P3 | P1 |
0    3    6    30
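The two orderings above can be checked with a short sketch. All processes arrive at time 0; the helper name is illustrative, but the burst values and orders are those of the example:

```python
# Sketch of FCFS waiting-time computation, all arrivals at time 0.

def fcfs_waiting_times(bursts):
    """Return per-process waiting times for FCFS in the given order."""
    waits = []
    elapsed = 0
    for burst in bursts:
        waits.append(elapsed)   # a process waits for everything before it
        elapsed += burst
    return waits

w1 = fcfs_waiting_times([24, 3, 3])   # order P1, P2, P3 -> [0, 24, 27], avg 17
w2 = fcfs_waiting_times([3, 3, 24])   # order P2, P3, P1 -> [0, 3, 6],   avg 3
```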
15. Scheduling Algorithms
(3) Round-Robin Scheme
The round-robin (RR) scheduling algorithm is designed especially for
time- sharing systems. It is similar to FCFS scheduling, but preemption is
added to enable the system to switch between processes.
• A small unit of time, called a time quantum or time slice, is
defined
• If a process’s CPU burst exceeds 1 time quantum, that process is
preempted and is put back in the ready queue.
• A clock interrupt is generated at periodic intervals.
• When the interrupt occurs, the currently running process is
placed in the Ready queue.
• The next process is selected on a first-come, first-served basis.
• Also known as time-slicing or time-quantization, because
each process is given a slice / quantum of time
16. Scheduling Algorithms
(3) Round-Robin Scheme (cont.)
Length of the time quantum/slice: short vs. long
• If the time quantum is very short, then short processes will
move through the system relatively quickly.
• However, there is processing overhead involved in handling
the clock interrupt and performing the dispatching function.
• A useful guideline is that the time quantum be slightly greater
than the time required for a typical interaction.
CPU-Bound vs. I/O-Bound Processes
• I/O-bound processes tend to receive an unfairly small share of
processor time, which results in poor performance of I/O-bound
processes.
17. Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
• The Gantt chart is:
• Typically, higher average turnaround than SJF, but
better response
• q should be large compared to context switch time
• q is usually 10 ms to 100 ms; context-switch time is < 10 μsec
| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30
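The schedule above can be reproduced with a small simulation. This is a sketch assuming all processes arrive at time 0; the helper name is illustrative, the bursts and quantum are those of the example:

```python
from collections import deque

# Sketch of a round-robin simulation, all arrivals at time 0.

def rr_finish_times(bursts, quantum):
    """Return {name: finish_time} for RR over {name: burst} pairs."""
    remaining = dict(bursts)
    queue = deque(bursts)          # process names in arrival order
    finish = {}
    time = 0
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = time    # process completes
        else:
            queue.append(name)     # preempted: back of the queue
    return finish

bursts = {"P1": 24, "P2": 3, "P3": 3}
finish = rr_finish_times(bursts, quantum=4)
waits = {p: finish[p] - b for p, b in bursts.items()}
print(waits)   # {'P1': 6, 'P2': 4, 'P3': 7}, average = 17/3 ≈ 5.66
```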
18. Time Quantum and Context Switch Time
The performance of the RR algorithm depends heavily on the size of the
time quantum. At one extreme, if the time quantum is extremely large,
the RR policy is the same as the FCFS policy. In contrast, if the time
quantum is extremely small (say, 1 millisecond), the RR approach can
result in a large number of context switches.
19. Scheduling Algorithms
(4) Virtual Round-Robin Scheme
• Similar to Round Robin
• When a running process times out, it is returned to the Ready
queue.
• When a process is blocked for I/O, it joins an I/O queue.
• An auxiliary FCFS queue holds processes after they are
released from an I/O wait.
• Processes in the auxiliary queue get preference over those in
the main Ready queue.
• Superior to Round Robin in terms of fairness.
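The selection rule can be sketched in a few lines. This is an illustration only (queue contents and the helper name are assumptions): the dispatcher drains the auxiliary queue before touching the main Ready queue.

```python
from collections import deque

# Sketch of virtual-round-robin selection: the auxiliary queue
# (processes returning from I/O) is served before the Ready queue.

def vrr_next(auxiliary, ready):
    """Pick the next process to dispatch, preferring the auxiliary queue."""
    if auxiliary:
        return auxiliary.popleft()
    if ready:
        return ready.popleft()
    return None

aux = deque(["P3"])          # P3 just finished an I/O wait
rdy = deque(["P1", "P2"])
print(vrr_next(aux, rdy))    # "P3" - auxiliary queue has preference
print(vrr_next(aux, rdy))    # "P1" - auxiliary queue now empty
```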
20. Queuing Diagram for the Virtual Round-Robin Scheduler
[Diagram: New processes are admitted to the Ready queue, from which the
dispatcher selects a process for the processor. A time-out returns the
running process to the Ready queue; an event wait (Event 1 … Event n)
moves it to the corresponding block queue. When the event occurs, the
process joins the Auxiliary queue, which is dispatched in preference to
the Ready queue. A process is released when it completes.]
21. Scheduling Algorithms
(5) Shortest Job First Scheme
• When the CPU is available, it is assigned to the process that has
the smallest next CPU burst.
• If the next CPU bursts of two processes are the same, FCFS
scheduling is used to break the tie.
• Note that a more appropriate term for this scheduling method would
be the shortest-next- CPU-burst algorithm, because scheduling
depends on the length of the next CPU burst of a process, rather
than its total length.
• Shortest Job First (SJF) is a non-preemptive policy
• The difficulty with the SJF policy is the need to know, or at
least estimate, the processing time required by each process.
• Short processes jump to the head of the queue, past longer
processes.
22. Example of SJF
Process Burst Time
P1 6
P2 8
P3 7
P4 3
(All processes are assumed to arrive at time 0.)
• SJF scheduling chart:
| P4 | P1 | P3 | P2 |
0    3    9    16   24
• Average waiting time = (3 + 16 + 9 + 0)/4 = 7
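With all processes ready at time 0, non-preemptive SJF reduces to sorting by burst length. A minimal sketch (helper name illustrative, burst values from the example):

```python
# Sketch of non-preemptive SJF with all processes ready at time 0.

def sjf_waiting_times(bursts):
    """Return {name: waiting_time} under SJF, all arrivals at time 0."""
    waits = {}
    elapsed = 0
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waits[name] = elapsed    # waits for all shorter jobs to finish
        elapsed += burst
    return waits

waits = sjf_waiting_times({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(waits)   # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}, average = 28/4 = 7
```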
23. Determining Length of Next CPU Burst
• Can only estimate the length – should be similar to the
previous one
• Then pick process with shortest predicted next CPU burst
• Can be done by using the length of previous CPU bursts,
using exponential averaging
• Commonly, α set to ½
• Preemptive version called shortest-remaining-time-first
Define:
1. t_n = actual length of the n-th CPU burst
2. τ_(n+1) = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. τ_(n+1) = α · t_n + (1 − α) · τ_n
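The exponential-averaging formula above is a one-line update. A sketch with α = ½ as the slide suggests; the burst history and the initial guess τ0 = 10 are illustrative assumptions:

```python
# Sketch of exponential averaging for next-CPU-burst prediction.

def predict_next_burst(history, alpha=0.5, tau0=10.0):
    """Apply tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n over a burst history."""
    tau = tau0                   # initial guess for the first prediction
    for t in history:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# With tau0 = 10 and observed bursts 6, 4, 6:
#   tau1 = 0.5*6 + 0.5*10 = 8
#   tau2 = 0.5*4 + 0.5*8  = 6
#   tau3 = 0.5*6 + 0.5*6  = 6
print(predict_next_burst([6, 4, 6]))   # 6.0
```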
24. Scheduling Algorithms
(6) Highest Response Ratio Next (HRRN)
• The response ratio is given by
RR = (w+s)/s
where
w = time spent waiting for the processor
s = expected service time
• Non-Preemptive scheme
• When the current process completes or is blocked, choose the ready
process with the largest value of RR.
• This approach is attractive because it accounts for the age of the
process.
• Shorter jobs are favored (a small denominator yields a larger ratio),
and aging without service increases the ratio.
• As with shortest-process-next scheduling, the expected service time
must be estimated before the technique can be used.
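The RR = (w + s)/s rule translates directly into code. A sketch (the helper name and process table are illustrative): with the numbers chosen, the short, long-waiting P2 overtakes P1 even though P1 arrived earlier.

```python
# Sketch of HRRN selection: pick the ready process with the highest
# response ratio (w + s) / s.

def hrrn_pick(ready, now):
    """ready maps name -> (arrival_time, expected_service_time)."""
    def ratio(item):
        name, (arrival, service) = item
        waiting = now - arrival          # w = time spent waiting
        return (waiting + service) / service
    return max(ready.items(), key=ratio)[0]

ready = {"P1": (0, 5), "P2": (6, 1)}
# At time 10: P1 ratio = (10 + 5)/5 = 3.0, P2 ratio = (4 + 1)/1 = 5.0
print(hrrn_pick(ready, now=10))   # "P2"
```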
25. Scheduling Algorithms
(7) Feedback Scheme
• This scheme is based on the time spent by a process in
execution.
• Scheduling is done on a preemptive basis, and a dynamic
priority mechanism is used.
• When a process first enters the system, it is placed in RQ0
(high priority queue).
• When it returns to the Ready state after its first execution, it is
placed in RQ1 (lower priority queue).
• After each subsequent execution, it is demoted to the next
lower-priority queue.
• A shorter process will complete quickly, without migrating very
far down the hierarchy of Ready queues.
• A longer process will gradually drift downwards.
26. Scheduling Algorithms
(7) Feedback Scheme
• Thus newer, shorter processes are favored over older longer
processes.
• The last queue is served with the Round Robin technique, while
the others are served FCFS.
[Diagram: Processes are admitted to RQ0 and dispatched from the queues
to the processor. After each execution a process is either released (on
completion) or demoted to the next queue (RQ1 … RQn). Dotted lines show
a time sequence rather than static transitions.]
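The demotion behaviour can be sketched with a small simulation. This is an illustration only, with assumed quantum and burst values; for simplicity every level here uses the same quantum, and the lowest queue recycles processes round-robin style:

```python
from collections import deque

# Sketch of the feedback (multilevel) scheme: a process starts in RQ0
# and is demoted one level after each quantum it uses.

def feedback_run(bursts, quantum=4, levels=3):
    """Return the completion order under a simple feedback scheduler."""
    queues = [deque() for _ in range(levels)]
    for name in bursts:
        queues[0].append(name)              # new processes enter RQ0
    remaining = dict(bursts)
    order = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        name = queues[level].popleft()      # highest non-empty queue wins
        remaining[name] -= min(quantum, remaining[name])
        if remaining[name] == 0:
            order.append(name)
        else:
            # demote; the lowest queue recycles round-robin style
            queues[min(level + 1, levels - 1)].append(name)
    return order

print(feedback_run({"P1": 24, "P2": 3, "P3": 3}))   # ['P2', 'P3', 'P1']
```

As the slide says, the short processes (P2, P3) finish while still near the top of the hierarchy, and the long process (P1) drifts down to the last queue.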
27. Scheduling Algorithms
(8) Fair-Share Scheduling (FSS) Scheme
• Preemptive technique
• The processes are divided into groups.
• Each group is managed by some scheme; most commonly the
Round Robin scheme is used.
• Scheduling decisions take into account:
• the CPU usage of the group to which the process belongs;
• the technique used to schedule a process within its group
(e.g. Round Robin, HRRN).
• Used in multiuser systems.
28. Scheduling Algorithms Examples
Consider the following set of processes that arrive at time 0,
with the length of CPU-burst time given in milliseconds:
Process Burst-Time
P1 24
P2 3
P3 3
If we use a time quantum of 4 milliseconds, find the average
waiting time with Round Robin (RR).
Solution:
The RR (q = 4) schedule is:
| P1 | P2 | P3 | P1 |
0    4    7    10   30
Waiting times: P1 = 10 − 4 = 6; P2 = 4; P3 = 7
Average waiting time (RR, q = 4) = (6 + 4 + 7)/3 = 5.66
29. Scheduling Algorithms Examples
Consider the following set of processes, with the length of the CPU
burst time given in milliseconds (excluding the transition time):
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2
The processes are assumed to have arrived in the order P1, P2, P3,
P4, P5, all at time 0.
(a) Draw four Gantt charts illustrating the execution of these processes
using FCFS, SJF, non-preemptive priority (a smaller priority number
implies a higher priority), and RR (quantum = 1) scheduling.
(b) What is the turnaround time of each process for each scheduling
algorithm in part (a)?
(c) What is the waiting time of each process for each scheduling
algorithm in part (a)?
(d) Which of the schedules in part (a) results in the minimal average
waiting time (over all processes)?
31. (b) Turnaround time and (c) waiting time of each process

                         P1   P2   P3   P4   P5
FCFS        Turnaround   10   11   13   14   19
            Waiting       0   10   11   13   14
SJF         Turnaround   19    1    4    2    9
            Waiting       9    0    2    1    4
Priority    Turnaround   16    1   18   19    6
(non-pre.)  Waiting       6    0   16   18    1
RR (q = 1)  Turnaround   19    2    7    4   14
            Waiting       9    1    5    3    9

(d) Minimal average waiting time
SJF yields the minimal average waiting time over all processes:
3.2 ms, versus 9.6 (FCFS), 8.2 (priority), and 5.4 (RR).
32. Scheduling Algorithms Examples
Suppose that the following processes arrive for execution at the times indicated. Each
process will run for the listed amount of time. In answering the questions, use non-
preemptive scheduling, and base all decisions on the information you have at the time
the decision must be made.
Process Arrival Time Burst Time
P1 0.0 8
P2 0.4 4
P3 1.0 1
(a) What is the average turnaround time for these processes with the FCFS scheduling
algorithm?
(b) What is the average turnaround time of these processes for SJF scheduling
algorithm?
(c) The SJF algorithm is supposed to improve performance, but notice that we chose to
run process P1 at time 0 because we did not know that two shorter processes would
arrive soon. Compute what the average turnaround time will be, if the CPU is left idle for
the first 1 unit and then SJF scheduling is used. Remember that processes P1 and P2
are waiting during this idle time, so their waiting time may increase. This algorithm could
be known as future-knowledge scheduling.
Solution:
(a) FCFS: P1 runs 0–8, P2 runs 8–12, P3 runs 12–13.
Turnaround = completion − arrival: P1 = 8, P2 = 11.6, P3 = 12.
Average turnaround time (FCFS) = (8 + 11.6 + 12)/3 ≈ 10.53
(b) SJF: P1 runs 0–8 (the only process present at time 0), then P3 runs 8–9, P2 runs 9–13.
Turnaround: P1 = 8, P2 = 12.6, P3 = 8.
Average turnaround time (SJF) = (8 + 12.6 + 8)/3 ≈ 9.53
(c) Future knowledge: CPU idle 0–1, then P3 runs 1–2, P2 runs 2–6, P1 runs 6–14.
Turnaround: P1 = 14, P2 = 5.6, P3 = 1.
Average turnaround time (future knowledge) = (14 + 5.6 + 1)/3 ≈ 6.87
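These averages can be checked with a short sketch: run the (arrival, burst) pairs non-preemptively in a fixed execution order. The helper name is illustrative; the process data is from the problem.

```python
# Sketch verifying the average turnaround times: non-preemptive
# execution of (arrival, burst) pairs in a fixed order.

def avg_turnaround(processes, order, start=0.0):
    """processes: name -> (arrival, burst); order: execution order."""
    time = start
    total = 0.0
    for name in order:
        arrival, burst = processes[name]
        time = max(time, arrival) + burst   # completion time
        total += time - arrival             # turnaround = completion - arrival
    return total / len(order)

procs = {"P1": (0.0, 8), "P2": (0.4, 4), "P3": (1.0, 1)}
print(avg_turnaround(procs, ["P1", "P2", "P3"]))             # FCFS, ~10.53
print(avg_turnaround(procs, ["P1", "P3", "P2"]))             # SJF, ~9.53
print(avg_turnaround(procs, ["P3", "P2", "P1"], start=1.0))  # idle to t=1, ~6.87
```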