3. Introduction
In computer science, scheduling is the method
by which processes or data flows are given
access to system resources. This is usually
done to share system resources effectively or
achieve a target quality of service. The need for
a scheduling algorithm arises from the
requirement for most modern systems to
perform multitasking (executing more than one
process at a time).
4. Process Behaviour For
Scheduling
CPU burst – a period during which a process performs
computations on the processor.
I/O burst – a period during which a process performs an I/O operation.
CPU-bound – a process with intensive CPU bursts, i.e.,
long CPU cycles and few I/O bursts.
I/O-bound – a process with many frequent I/O bursts
between short CPU bursts.
6. Long-term scheduling
Long-term scheduling is done when a new
process is created. It initiates processes and
so controls the degree of multi-programming
(number of processes in memory).
8. Short term scheduling
Short-term (process or CPU) scheduling
occurs most frequently and decides which
process to execute next.
9. Process Scheduling Goals
The goals of scheduling may be categorised as
user-based scheduling goals and system-based
scheduling goals.
User-based goals are the criteria that benefit the
user. For example, a user in a multi-user environment
expects the system to give a quick response to their
jobs.
System-based goals are the criteria that benefit the
system, i.e., its performance. For example, how fully
the processor is utilised in processing jobs.
10. User-Based Scheduling Goals:
Turnaround time – the time to execute a process from
submission to completion, i.e., the sum of:
time spent waiting in the ready queue,
time spent executing,
time spent blocked waiting for an event, for example I/O.
Waiting time – time spent in the ready queue, directly affected by
the scheduling algorithm.
Response time – time from submission of a request to
production of the first response.
Predictability – how well a user can predict when a
process will complete.
11. System-Based Scheduling
Goals:
Throughput – number of processes that
complete their execution per unit time
CPU utilization – percent time CPU busy
Fairness – all processes should be treated
equally, unless priorities are assigned
Balance – there should be a balance between
I/O bound and CPU bound processes.
12. Scheduling Algorithms
First Come First Served(FCFS)
Process   Burst Time
P1        24
P2        3
P3        3
• Suppose that the processes arrive in the order: P1, P2, P3
• The Gantt chart for the schedule is:
  |   P1   | P2 | P3 |
  0        24   27   30
• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
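The waiting-time arithmetic above can be sketched in a few lines of Python; the burst times are the ones from the table, and the helper function is illustrative:

```python
# FCFS waiting-time sketch for the example above (P1=24, P2=3, P3=3).
def fcfs_waiting_times(bursts):
    """Return per-process waiting times when arrival order = list order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)  # each process waits for everything before it
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])
print(waits)                    # [0, 24, 27]
print(sum(waits) / len(waits))  # average waiting time: 17.0
```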
13. First Come First Serve
The arriving process is added onto the tail of
the queue and the process at the head of the
queue is dispatched to the processor for
execution.
A process that is interrupted (for example, to wait
for I/O) loses its place: when it becomes ready
again, it rejoins at the tail of the queue rather
than at its previous position.
14. Priority Scheduling
The processes are executed according to the
priority.
1. Priority-number-based scheduling
Preference is given to processes based on a priority
number assigned to each of them.
2. Shortest Process Next (SPN)
The process with the shortest expected execution time is
executed first. It is a non-preemptive scheduling
algorithm.
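Non-preemptive SPN can be sketched as follows; the process names, arrival times, and bursts are illustrative assumptions, not from the slides:

```python
# Shortest Process Next sketch: among the processes that have arrived,
# always dispatch the one with the shortest burst; run it to completion.
def spn_order(processes):
    """processes: list of (name, arrival, burst). Returns dispatch order."""
    remaining = list(processes)
    order, clock = [], 0
    while remaining:
        ready = [p for p in remaining if p[1] <= clock]
        if not ready:                        # CPU idles until the next arrival
            clock = min(p[1] for p in remaining)
            ready = [p for p in remaining if p[1] <= clock]
        nxt = min(ready, key=lambda p: p[2])  # shortest burst wins
        remaining.remove(nxt)
        order.append(nxt[0])
        clock += nxt[2]                      # non-preemptive: run to completion
    return order

print(spn_order([("A", 0, 8), ("B", 1, 4), ("C", 2, 2)]))  # ['A', 'C', 'B']
```

While A runs to completion, B and C arrive; C then wins over B because its burst is shorter, even though B arrived first.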
16. Round Robin Scheduling
Time slices are assigned to each process in
equal portions and in circular order, handling
all processes without priority.
The design is such that whenever a process
finishes before its time slice expires, the timer
is stopped and an interrupt is raised immediately,
so that the next process can be scheduled without
wasting the rest of the slice.
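A minimal round-robin simulation along these lines (the quantum and burst values are assumptions for illustration):

```python
from collections import deque

# Round-robin sketch: equal time slices in circular order; a process that
# finishes early gives up the rest of its slice immediately.
def round_robin(bursts, quantum):
    """bursts: {name: burst}. Returns the schedule as (name, start, end)."""
    queue = deque(bursts.items())
    timeline, clock = [], 0
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)   # early finish: use only what is needed
        timeline.append((name, clock, clock + run))
        clock += run
        if left > run:             # unfinished: back to the tail of the queue
            queue.append((name, left - run))
    return timeline

print(round_robin({"P1": 5, "P2": 3}, quantum=2))
# [('P1', 0, 2), ('P2', 2, 4), ('P1', 4, 6), ('P2', 6, 7), ('P1', 7, 8)]
```

Note how P2's last slice ends at 7, not 8: it needed only one unit, so P1 is dispatched immediately.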
17. Improved Round Robin
Scheduling
CPU consumption ratio = actual CPU time
consumed / total estimated execution time
Schedule a process if its CPU consumption
ratio is greater than 0.60; otherwise schedule
the process whose CPU consumption ratio is
minimum.
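The selection rule could be sketched as follows. The slides do not say which process to pick when several ratios exceed 0.60, so this sketch assumes the highest ratio wins; the process data is illustrative:

```python
# Improved-round-robin selection sketch: prefer a process whose CPU
# consumption ratio exceeds 0.60, else pick the one with the minimum ratio.
def pick_next(processes):
    """processes: list of (name, consumed, estimated). Returns a name."""
    ratios = [(name, consumed / estimated)
              for name, consumed, estimated in processes]
    above = [p for p in ratios if p[1] > 0.60]
    if above:
        # assumption: among qualifying processes, closest to completion first
        return max(above, key=lambda p: p[1])[0]
    return min(ratios, key=lambda p: p[1])[0]

print(pick_next([("A", 1, 10), ("B", 7, 10), ("C", 2, 10)]))  # B: 0.7 > 0.60
print(pick_next([("A", 1, 10), ("C", 2, 10)]))                # A: minimum ratio
```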
19. Multilevel Queue Scheduling
When processes can be readily categorized, then multiple
separate queues can be established, each implementing
whatever scheduling algorithm is most appropriate for that type
of job, and/or with different parametric adjustments.
Scheduling must also be done between queues, that is,
scheduling one queue to get time relative to other queues. Two
common options are strict priority (no job in a lower-priority
queue runs until all higher-priority queues are empty) and round-
robin (each queue gets a time slice in turn, possibly of different
sizes).
Note that under this algorithm jobs cannot switch from queue to
queue - Once they are assigned a queue, that is their queue until
they finish.
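Strict-priority dispatch between queues can be sketched like this; the queue names and jobs are illustrative:

```python
# Multilevel-queue sketch with strict priority between queues: no job in a
# lower queue runs while any higher queue has work.
def next_job(queues):
    """queues: list of lists, highest priority first. Pops and returns a job."""
    for queue in queues:
        if queue:
            return queue.pop(0)  # each queue here is a simple FIFO
    return None                  # every queue is empty

system, interactive, batch = ["init"], ["shell"], ["backup"]
print(next_job([system, interactive, batch]))  # "init": highest non-empty queue
```

The "backup" batch job runs only once both higher-priority queues have drained, which is exactly why strict priority can starve low-priority work.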
21. Multilevel Feedback-Queue
Scheduling
Multilevel feedback queue scheduling is similar to the ordinary multilevel queue
scheduling described above, except jobs may be moved from one queue to another
for a variety of reasons:
If the characteristics of a job change between CPU-intensive and I/O intensive, then it
may be appropriate to switch a job from one queue to another.
Aging can also be incorporated, so that a job that has waited for a long time can get
bumped up into a higher priority queue for a while.
Multilevel feedback queue scheduling is the most flexible, because it can be tuned for
any situation. But it is also the most complex to implement because of all the
adjustable parameters. Some of the parameters which define one of these systems
include:
The number of queues.
The scheduling algorithm for each queue.
The methods used to upgrade or demote processes from one queue to another
(which may be different).
The method used to determine which queue a process enters initially.
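The movement rules above can be sketched as follows; the demotion trigger, aging threshold, and number of queues are assumptions for illustration:

```python
# MLFQ movement sketch: a job that uses its full quantum looks CPU-bound and
# is demoted; a job that has waited too long is aged up to a higher queue.
def adjust_queue(level, used_full_quantum, wait_time, lowest=2, age_limit=100):
    """Return the new queue level for a job (0 = highest priority)."""
    if used_full_quantum and level < lowest:
        return level + 1   # demote: the job looks CPU-intensive
    if wait_time > age_limit and level > 0:
        return level - 1   # aging: promote a long-waiting job
    return level

print(adjust_queue(0, used_full_quantum=True, wait_time=0))     # 1 (demoted)
print(adjust_queue(2, used_full_quantum=False, wait_time=250))  # 1 (aged up)
```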
23. Fair-share scheduling
Fair-share scheduling is a scheduling strategy
for computer operating systems in which
the CPU usage is equally distributed among
system users or groups, as opposed to equal
distribution among processes.
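The split can be sketched as: divide the CPU equally among users first, then divide each user's share among that user's processes. The user and process names are illustrative:

```python
# Fair-share sketch: equal split between users, then within each user.
def per_process_share(users):
    """users: {user: [process, ...]}. Returns {process: fraction of CPU}."""
    shares = {}
    user_share = 1.0 / len(users)       # equal split between users...
    for procs in users.values():
        for proc in procs:
            shares[proc] = user_share / len(procs)  # ...then within a user
    return shares

print(per_process_share({"alice": ["a1"], "bob": ["b1", "b2", "b3"]}))
# a1 gets 0.5; b1, b2, b3 get ~0.167 each, not 0.25 per process
```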
24. Lottery scheduling
In Lottery Scheduling processes are each assigned
some number of lottery tickets, and the scheduler draws a
random ticket to select the next process. The distribution
of tickets need not be uniform; granting a process more
tickets gives it a relatively higher chance of selection.
This technique can be used to approximate other
scheduling algorithms, such as Shortest job next and Fair-
share scheduling.
Lottery scheduling solves the problem of starvation.
Giving each process at least one lottery ticket guarantees
that it has a non-zero probability of being selected at each
scheduling decision.
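A ticket draw can be sketched as follows; the ticket counts are illustrative:

```python
import random

# Lottery-scheduling sketch: each process holds some tickets; the scheduler
# draws a random ticket number and runs whichever process holds it.
def draw(tickets, rng=random):
    """tickets: {process: count}. Returns the holder of the winning ticket."""
    total = sum(tickets.values())
    winner = rng.randrange(total)  # winning ticket number in [0, total)
    for process, count in tickets.items():
        if winner < count:
            return process
        winner -= count            # walk past this process's ticket range

counts = {"A": 0, "B": 0}
for _ in range(10_000):
    counts[draw({"A": 3, "B": 1})] += 1
print(counts)  # roughly 3:1 in favour of A
```

Because B holds one ticket, it is never starved: it wins about a quarter of the draws, no matter how long it has waited.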