PROCESS SCHEDULING
Mukesh Chinta, Asst Prof, CSE
 Process scheduling is an essential part of multiprogramming operating
systems. Such operating systems allow more than one process to be loaded
into executable memory at a time, and the loaded processes share the CPU
using time multiplexing.
 A typical process involves both I/O time and CPU time.
 In a uniprogramming system like MS-DOS, time spent
waiting for I/O is wasted and CPU is free during this time.
 In multiprogramming systems, one process can use CPU
while another is waiting for I/O. This is possible only with
process scheduling.
 Process execution begins with a CPU burst. That is followed by an
I/O burst, which is followed by another CPU burst, then another I/O
burst, and so on. Eventually, the final CPU burst ends with a system
request to terminate execution.
 An I/O-bound program typically has many
short CPU bursts. A CPU-bound program
might have a few long CPU bursts.
 The short-term scheduler, or CPU scheduler, selects a process from
the processes in memory that are ready to execute and allocates the
CPU to that process.
CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for
example, as the result of an I/O request or an invocation of wait() for the
termination of a child process).
2. When a process switches from the running state to the ready state (for
example, when an interrupt occurs)
3. When a process switches from the waiting state to the ready state (for
example, at completion of I/O)
4. When a process terminates
 For conditions 1 and 4 there is no choice - A new process must be
selected.
 For conditions 2 and 3 there is a choice - To either continue running the
current process, or select a different one.
 If scheduling takes place only under conditions 1 and 4, the system is
said to be non-preemptive, or cooperative. Under these conditions,
once a process starts running it keeps running, until it either
voluntarily blocks or until it finishes. Otherwise the system is said to be
preemptive.
The dispatcher is the module that gives control of the CPU to the
process selected by the short-term scheduler. This function involves:
• Switching context.
• Switching to user mode.
• Jumping to the proper location in the newly loaded program.
The dispatcher needs to be as fast as possible, as it is run on every
context switch. Dispatch latency is the amount of time required for the
dispatcher to stop one process and start another.
 Different CPU-scheduling algorithms have different properties, and
the choice of a particular algorithm may favor one class of processes
over another.
 Which characteristics are used for comparison can make a
substantial difference in which algorithm is judged to be best.
There are several different criteria to consider when trying to select the
"best" scheduling algorithm for a particular situation and environment,
including:
 CPU utilization - Ideally the CPU would be busy 100% of the time,
so as to waste 0 CPU cycles. On a real system CPU usage should range
from 40% (lightly loaded) to 90% (heavily loaded).
 Throughput - Number of processes completed per unit time. May
range from 10/second to 1/hour depending on the specific processes.
 Turnaround time - Time required for a particular process to
complete, from submission time to completion (wall clock time).
 Waiting time - The sum of the time a process spends in the ready
queue waiting its turn to get on the CPU.
 Response time - Amount of time it takes from when a request was
submitted until the first response is produced. Remember, it is the time
till the first response and not the completion of process execution (final
response).
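To make these definitions concrete, here is a minimal Python sketch that computes turnaround, waiting, and response time from a finished schedule; the process records and their values are illustrative assumptions, not data from the slides.

# Sketch: computing scheduling criteria from a completed schedule.
# Each record: (arrival, burst, first_run, completion) -- assumed values.
processes = {
    "P1": (0, 24, 0, 24),
    "P2": (0, 3, 24, 27),
    "P3": (0, 3, 27, 30),
}
for name, (arrival, burst, first_run, completion) in processes.items():
    turnaround = completion - arrival    # submission to completion
    waiting = turnaround - burst         # time spent in the ready queue
    response = first_run - arrival       # submission to first response
    print(name, turnaround, waiting, response)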
• In general one wants to optimize the average value of a
criterion (maximize CPU utilization and throughput, and
minimize all the others). However, sometimes one wants
to do something different, such as to minimize the
maximum response time.
• Sometimes it is more desirable to minimize the variance
of a criterion than its average value; i.e., users are more
accepting of a consistent, predictable system than an
inconsistent one, even if it is a little bit slower.
Scheduling Algorithms
First-Come, First-Served Scheduling
 The first-come, first-served (FCFS) algorithm is the simplest
scheduling algorithm.
 The process that requests the CPU first is allocated the CPU first.
The implementation of the FCFS policy is easily managed with a
FIFO queue.
 When a process enters the ready queue, its PCB is linked onto the
tail of the queue. When the CPU is free, it is allocated to the
process at the head of the queue.
 The running process is then removed from the queue.
 On the negative side, the average waiting time under the FCFS
policy is often quite long.
A Gantt chart is a horizontal bar chart developed as a production control tool in 1917
by Henry L. Gantt, an American engineer and social scientist.
Consider the following three processes, all arriving at time 0:
Process   Burst Time (ms)
P1        24
P2        3
P3        3
In the first Gantt chart below, process P1 arrives first. The average waiting
time for the three processes is ( 0 + 24 + 27 ) / 3 = 17.0 ms.
In the second Gantt chart below, the same three processes have an
average wait time of ( 0 + 3 + 6 ) / 3 = 3.0 ms. This reduction is
substantial.
Thus, the average waiting time under an FCFS policy is generally not
minimal and may vary substantially if the processes’ CPU burst
times vary greatly.
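A minimal FCFS sketch in Python (assuming, as in the example, that all three processes arrive at time 0) reproduces both averages:

# FCFS sketch: processes are served strictly in queue order.
def fcfs_waiting_times(bursts):
    clock, waits = 0, []
    for burst in bursts:
        waits.append(clock)    # each process waits for all earlier ones
        clock += burst
    return waits

print(sum(fcfs_waiting_times([24, 3, 3])) / 3)   # 17.0 (order P1, P2, P3)
print(sum(fcfs_waiting_times([3, 3, 24])) / 3)   # 3.0  (order P2, P3, P1)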
 FCFS can also degrade performance in a busy dynamic system in
another way, known as the convoy effect.
§ When one CPU intensive process blocks the CPU, a number of I/O
intensive processes can get backed up behind it, leaving the I/O
devices idle.
§ When the CPU hog finally relinquishes the CPU, then the I/O
processes pass through the CPU quickly, leaving the CPU idle
while everyone queues up for I/O, and then the cycle repeats itself
when the CPU intensive process gets back to the ready queue.
 The FCFS scheduling algorithm is nonpreemptive.
§ Once the CPU has been allocated to a process, that process keeps
the CPU until it releases the CPU, either by terminating or by
requesting I/O.
§ The FCFS algorithm is thus particularly troublesome for time-
sharing systems, where it is important that each user get a share of
the CPU at regular intervals.
Shortest-Job-First Scheduling
 Shortest-job-first (SJF) scheduling algorithm associates with
each process the length of the process’s next CPU burst.
 When the CPU is available, it is assigned to the process that
has the smallest next CPU burst. If the next CPU bursts of
two processes are the same, FCFS scheduling is used to
break the tie.
 Easy to implement in Batch systems where required CPU
time is known in advance.
 Impossible to implement in interactive systems where
required CPU time is not known.
Consider the following processes, all arriving at time 0:
Process   Burst Time (ms)
P1        6
P2        8
P3        7
P4        3
Gantt Chart representation is: P4 (0-3), P1 (3-9), P3 (9-16), P2 (16-24)
The average waiting time is (3 + 16 + 9 + 0) / 4 = 7 milliseconds
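A non-preemptive SJF sketch (assuming all four processes are available at time 0) reproduces the 7 ms average:

# Non-preemptive SJF sketch: always dispatch the shortest available burst.
def sjf_waiting_times(bursts):
    clock, waits = 0, {}
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waits[name] = clock
        clock += burst
    return waits

waits = sjf_waiting_times({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(waits)                             # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(waits.values()) / len(waits))  # 7.0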
The SJF algorithm can be either preemptive or nonpreemptive. The
choice arises when a new process arrives at the ready queue while a
previous process is still executing.
Preemptive SJF scheduling is sometimes called shortest-remaining-
time-first scheduling (SRTF)
Consider the following processes:
Process   Arrival Time   Burst Time (ms)
P1        0              8
P2        1              4
P3        2              9
P4        3              5
Gantt Chart representation is: P1 (0-1), P2 (1-5), P4 (5-10), P1 (10-17), P3 (17-26)
• Process P1 is started at time 0, since it is the only process in the
queue. Process P2 arrives at time 1. The remaining time for
process P1 (7 milliseconds) is larger than the time required by
process P2 (4 milliseconds), so process P1 is preempted, and
process P2 is scheduled.
• The average waiting time for this example is ((10 - 1) + (1 - 1) +
(17 - 2) + (5 - 3))/4 = 26/4 = 6.5 milliseconds.
• A nonpreemptive SJF scheduling would result in an average
waiting time of 7.75 milliseconds.
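The same schedule can be checked with a unit-time SRTF simulation; this is a sketch that assumes time advances in 1 ms ticks and that ties are broken arbitrarily.

# SRTF sketch: each tick, run the arrived process with least remaining time.
procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}
remaining = {p: burst for p, (arrival, burst) in procs.items()}
finish, clock = {}, 0
while remaining:
    ready = [p for p in remaining if procs[p][0] <= clock]
    current = min(ready, key=lambda p: remaining[p])  # preemption point
    remaining[current] -= 1
    clock += 1
    if remaining[current] == 0:
        finish[current] = clock
        del remaining[current]
waits = {p: finish[p] - procs[p][0] - procs[p][1] for p in finish}
print(sum(waits.values()) / len(waits))               # 6.5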
• SJF can be proven to be optimal, giving the minimum average waiting time
for a given set of processes, but it suffers from one important problem:
How do you know how long the next CPU burst is going to be?
• For long-term batch jobs this can be done based upon the limits that users
set for their jobs when they submit them, which encourages them to set
low limits, but risks their having to re-submit the job if they set the limit
too low. However that does not work for short-term CPU scheduling on an
interactive system.
• Another option would be to statistically measure the run time
characteristics of jobs, particularly if the same tasks are run repeatedly and
predictably. But once again that really isn't a viable option for short term
CPU scheduling in the real world.
• A more practical approach is to predict the length of the next burst, based
on some historical measurement of recent burst times for this process. One
simple, fast, and relatively accurate method is the exponential average of
the measured lengths of previous CPU bursts.
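The exponential average is usually written tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n), where t(n) is the measured length of the most recent burst, tau(n) is the previous prediction, and 0 <= alpha <= 1 weights recent history against older history. A small sketch follows; the burst values and alpha = 0.5 are illustrative assumptions.

# Exponential-average sketch: fold each measured burst into the prediction.
def predict_next_burst(measured_bursts, guess=10.0, alpha=0.5):
    for t in measured_bursts:
        guess = alpha * t + (1 - alpha) * guess
    return guess

print(predict_next_burst([6, 4, 6, 4, 13, 13, 13]))  # recent bursts dominate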
Priority Scheduling
 The SJF algorithm is a special case of the general priority-
scheduling algorithm.
 A priority is associated with each process, and the CPU is
allocated to the process with the highest priority. Equal-
priority processes are scheduled in FCFS order.
 An SJF algorithm is simply a priority algorithm where the
priority (p) is the inverse of the (predicted) next CPU burst.
The larger the CPU burst, the lower the priority, and vice
versa.
 In practice, priorities are implemented using integers within a
fixed range, but there is no agreed-upon convention as to
whether "high" priorities use large numbers or small numbers.
Consider the following set of processes, assumed to have arrived at
time 0 in the order P1, P2, · · ·, P5, with the length of the CPU burst
given in milliseconds (a smaller priority number means a higher priority):
Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2
Gantt Chart representation is: P2 (0-1), P5 (1-6), P1 (6-16), P3 (16-18), P4 (18-19)
The average waiting time is 8.2 milliseconds
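A non-preemptive priority sketch (all arrivals at time 0, smaller number = higher priority, as in the example) reproduces the 8.2 ms figure:

# Non-preemptive priority sketch: dispatch in ascending priority-number order.
def priority_waiting_times(procs):
    clock, waits = 0, {}
    for name, (burst, prio) in sorted(procs.items(), key=lambda kv: kv[1][1]):
        waits[name] = clock
        clock += burst
    return waits

procs = {"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)}
waits = priority_waiting_times(procs)
print(sum(waits.values()) / len(waits))   # 8.2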
Now try this!!!!
The average waiting time is 9.6 milliseconds
• Priorities can be assigned either internally or externally.
 Internal priorities are assigned by the OS using criteria such as
average burst time, ratio of CPU to I/O activity, system resource
use, and other factors available to the kernel.
 External priorities are assigned by users, based on the importance
of the job, fees paid, politics, etc.
• Priority scheduling can be either preemptive or non-preemptive.
 When a process arrives at the ready queue, its priority is compared
with the priority of the currently running process.
 A preemptive priority scheduling algorithm will preempt the CPU
if the priority of the newly arrived process is higher than the
priority of the currently running process.
 A nonpreemptive priority scheduling algorithm will simply put the
new process at the head of the ready queue.
• Priority scheduling can suffer from a major problem
known as indefinite blocking, or starvation, in which a
low-priority task can wait forever because there are always
some other jobs around that have higher priority.
 If this problem is allowed to occur, then processes will either
run eventually when the system load lightens, or will eventually
get lost when the system is shut down or crashes. (There are
rumors of jobs that have been stuck for years.)
 One common solution to this problem is aging, in which
priorities of jobs increase the longer they wait.
 Under this scheme a low-priority job will eventually get its
priority raised high enough that it gets run.
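One way to picture aging is as a periodic scan that boosts the priority of anything that has waited too long. The interval and step size below are assumed values for illustration, not from the slides.

# Aging sketch: for every 'interval' ticks a process has waited, lower its
# priority number by one step (0 = highest), so starved jobs rise over time.
def apply_aging(priorities, waited, interval=100):
    for name in priorities:
        boost = waited[name] // interval
        priorities[name] = max(0, priorities[name] - boost)
    return priorities

print(apply_aging({"P1": 127, "P2": 5}, {"P1": 1500, "P2": 0}))
# {'P1': 112, 'P2': 5} -- P1 has been boosted 15 steps toward the top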
Round-Robin Scheduling
 The round-robin (RR) scheduling algorithm is designed
especially for timesharing systems.
 Round robin scheduling is similar to FCFS scheduling, except that
each CPU burst is limited to a maximum length called the time quantum.
 When a process is given the CPU, a timer is set for whatever
value has been set for a time quantum.
 If the process finishes its burst before the time quantum timer
expires, then it is swapped out of the CPU just like the normal
FCFS algorithm.
 If the timer goes off first, then the process is swapped out of
the CPU and moved to the back end of the ready queue.
• The ready queue is maintained as a circular queue, so when all processes
have had a turn, then the scheduler gives the first process another turn, and
so on.
• RR scheduling can give the effect of all processors sharing the CPU
equally, although the average wait time can be longer than with other
scheduling algorithms.
Consider the same three processes as in the FCFS example (P1 = 24 ms,
P2 = 3 ms, P3 = 3 ms) with a time quantum of 4 ms. P1 waits for 6
milliseconds (10 - 4), P2 waits for 4 milliseconds, and P3 waits for 7
milliseconds. Thus, the average waiting time is 17/3 = 5.66 milliseconds.
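A round-robin sketch (all three processes assumed to arrive at time 0, quantum = 4) confirms these waits:

# Round-robin sketch: circular ready queue with a fixed time quantum.
from collections import deque

def rr_waiting_times(bursts, quantum):
    queue = deque(bursts.items())            # (name, remaining), FIFO order
    waits = {name: 0 for name in bursts}
    last_ready = {name: 0 for name in bursts}
    clock = 0
    while queue:
        name, remaining = queue.popleft()
        waits[name] += clock - last_ready[name]
        run = min(quantum, remaining)
        clock += run
        if remaining > run:                  # quantum expired: requeue at tail
            last_ready[name] = clock
            queue.append((name, remaining - run))
    return waits

waits = rr_waiting_times({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
print(waits)                                 # {'P1': 6, 'P2': 4, 'P3': 7}
print(round(sum(waits.values()) / 3, 2))     # 5.67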
• In the RR scheduling algorithm, no process is allocated the CPU for
more than 1 time quantum in a row (unless it is the only runnable
process).
• If a process’s CPU burst exceeds 1 time quantum, that process is
preempted and is put back in the ready queue. The RR scheduling
algorithm is thus preemptive.
• The performance of RR is sensitive to the time quantum selected. If
the quantum is large enough, then RR reduces to the FCFS algorithm;
if it is very small, then each of the n ready processes appears to get
1/n of the processor time, sharing the CPU equally.
• BUT, a real system invokes overhead for every context switch, and
the smaller the time quantum the more context switches there are.
• Turnaround time also depends on the size of the time quantum. In
general, turnaround time is minimized if most processes finish their
next CPU burst within one time quantum.
• The way in which a
smaller time quantum
increases context
switches.
• A rule of thumb is that
80 percent of the CPU
bursts should be
shorter than the time
quantum.
Practice Problem
Q). Consider the following processes with arrival time and burst time.
Calculate average turnaround time, average waiting time and average
response time using round robin with time quantum 3?
Multilevel Queue Scheduling
 When processes can be readily categorized, then multiple
separate queues can be established, each implementing
whatever scheduling algorithm is most appropriate for that
type of job, and/or with different parametric adjustments.
 Scheduling must also be done between queues, that is
scheduling one queue to get time relative to other queues. Two
common options are strict priority ( no job in a lower priority
queue runs until all higher priority queues are empty ) and
round-robin ( each queue gets a time slice in turn, possibly of
different sizes. )
 Under this algorithm jobs cannot switch from queue to queue -
Once they are assigned a queue, that is their queue until they
finish.
Multilevel Feedback-Queue Scheduling
 Multilevel feedback queue scheduling is similar to the ordinary
multilevel queue scheduling described above, except jobs may be moved
from one queue to another for a variety of reasons:
 If the characteristics of a job change between CPU-intensive and I/O
intensive, then it may be appropriate to switch a job from one queue to
another.
 Aging can also be incorporated, so that a job that has waited for a long
time can get bumped up into a higher priority queue for a while.
 Multilevel feedback queue scheduling is the most flexible, because it
can be tuned for any situation. But it is also the most complex to
implement because of all the adjustable parameters. Some of the
parameters which define one of these systems include:
 The number of queues.
 The scheduling algorithm for each queue.
 The methods used to upgrade or demote processes from one queue to
another. ( Which may be different. )
 The method used to determine which queue a process enters initially.
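As a sketch of how these parameters might be written down, consider the configuration below; the three-queue layout with quanta of 8 and 16 is a common textbook arrangement, assumed here rather than taken from the slides.

# MLFQ parameter sketch: a job enters queue 0 and is demoted one level each
# time it consumes its entire quantum; the last queue is plain FCFS.
QUEUES = [
    {"quantum": 8},      # queue 0: highest priority, RR with q = 8
    {"quantum": 16},     # queue 1: RR with q = 16
    {"quantum": None},   # queue 2: FCFS, runs only when 0 and 1 are empty
]

def next_queue(level, used_full_quantum):
    if used_full_quantum and level < len(QUEUES) - 1:
        return level + 1          # demote a CPU-bound job
    return level                  # I/O-bound jobs keep their level

print(next_queue(0, True))    # 1 -- burned the whole quantum, demoted
print(next_queue(0, False))   # 0 -- blocked early, stays high priority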
Multiple-Processor Scheduling
 When multiple processors are available, then the scheduling
gets more complicated, because now there is more than one
CPU which must be kept busy and in effective use at all
times.
 Load sharing revolves around balancing the load between
multiple processors.
 Multi-processor systems may be heterogeneous, (different
kinds of CPUs), or homogenous, (all the same kind of CPU).
Even in the latter case there may be special scheduling
constraints, such as devices which are connected via a private
bus to only one of the CPUs.
Approaches to Multiple-Processor Scheduling
• One approach to multi-processor scheduling is asymmetric
multiprocessing, in which one processor is the master
server, controlling all activities and running all kernel code,
while the others run only user code. This approach is
relatively simple, as there is no need to share critical system
data.
• Another approach is symmetric multiprocessing, SMP,
where each processor schedules its own jobs, either from a
common ready queue or from separate ready queues for
each processor.
• Virtually all modern OSes support SMP, including Windows XP,
Windows 2000, Solaris, Linux, and Mac OS X.
Processor Affinity
 Processors contain cache memory, which speeds up repeated
accesses to the same memory locations.
 If a process were to switch from one processor to another each time
it got a time slice, the data in the cache ( for that process ) would
have to be invalidated and re-loaded from main memory, thereby
obviating the benefit of the cache.
 Therefore SMP systems attempt to keep processes on the same
processor, via processor affinity. Soft affinity occurs when the
system attempts to keep processes on the same processor but makes
no guarantees. Linux and some other OSes support hard affinity, in
which a process specifies that it is not to be moved between
processors.
 Main memory architecture can also affect process affinity, if
particular CPUs have faster access to memory on the same chip or
board than to other memory loaded elsewhere.
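On Linux, hard affinity is exposed to programs; for example, Python's os module wraps the sched_setaffinity system call. This sketch is Linux-specific and will not run on other platforms.

# Hard-affinity sketch (Linux only): pin the calling process to CPUs 0 and 1.
import os

os.sched_setaffinity(0, {0, 1})    # pid 0 means "the calling process"
print(os.sched_getaffinity(0))     # {0, 1}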
Load Balancing
 On SMP systems, it is important to keep the workload balanced
among all processors to fully utilize the benefits of having more
than one processor.
 Load balancing attempts to keep the workload evenly distributed
across all processors in an SMP system.
 There are two general approaches to load balancing: push migration
and pull migration.
 With push migration, a specific task periodically checks the load on
each processor and—if it finds an imbalance—evenly distributes the
load by moving (or pushing) processes from overloaded to idle or
less-busy processors.
 Pull migration occurs when an idle processor pulls a waiting task
from a busy processor.
 Push and pull migration need not be mutually exclusive and are in
fact often implemented in parallel on load-balancing systems.
Multicore Processors
 Recent trends are to put multiple CPUs (cores) onto a single chip,
which appear to the system as multiple processors resulting in a
multicore processor.
 Each core maintains its architectural state and thus appears to the
operating system to be a separate physical processor.
 SMP systems that use multicore processors are faster and consume
less power than systems in which each processor has its own physical
chip.
 Compute cycles can be stalled by the time needed to access
memory whenever the needed data is not already present in the
cache (a cache miss). As much as half of the CPU cycles can be lost
to memory stalls.
 To remedy this situation, many recent hardware designs have
implemented multithreaded processor cores in which two (or more)
hardware threads are assigned to each core. That way, if one thread
stalls while waiting for memory, the core can switch to another
thread.
 By assigning multiple kernel threads to a single processor, memory
stall can be avoided (or reduced) by running one thread on the
processor while the other thread waits for memory.
 A dual-threaded dual-core system has four logical processors available
to the operating system. The UltraSPARC T1 CPU has 8 cores per chip
and 4 hardware threads per core, for a total of 32 logical processors per
chip
There are two ways to multithread a processor:
 Coarse-grained multithreading switches between threads only
when one thread blocks, say on a memory read. Context
switching is similar to process switching, with considerable
overhead.
 Fine-grained (interleaved) multithreading switches between
threads at a much finer granularity, say on the boundary of an
instruction cycle. The architecture is designed to support thread
switching, so the overhead is relatively minor.
 Note that for a multi-threaded multi-core system, there are two
levels of scheduling, at the kernel level:
►The OS schedules which kernel thread(s) to assign to which
logical processors, and when to make context switches using
algorithms above.
►On a lower level, the hardware schedules logical processors
on each physical core using some other algorithm.
Real-Time CPU Scheduling
 Real-time systems are those in which the time at which tasks
complete is crucial to their performance.
 Soft real-time systems provide no guarantee as to when a critical
real-time process will be scheduled. They guarantee only that the
process will be given preference over noncritical processes.
 Soft real-time systems have degraded performance if their
timing needs cannot be met. Example: streaming video.
 Hard real-time systems have stricter requirements. A task must be
serviced by its deadline; service after the deadline has expired is
the same as no service at all.
 Hard real-time systems have total failure if their timing needs
cannot be met. Examples: Assembly line robotics, automobile
air-bag deployment.
Minimizing Latency
 A real-time system is event driven in nature. When an event occurs,
the system must respond to and service it as quickly as possible.
 Event Latency is the time between the occurrence of a triggering
event and the (completion of) the system's response to the event.
Usually, different events have different latency requirements.
Two types of latencies affect the performance of real-time systems:
1. Interrupt latency
2. Dispatch latency
Interrupt latency refers to the period of time from the arrival of
an interrupt at the CPU to the start of the routine that services the
interrupt.
 It is crucial for real-time
operating systems to
minimize interrupt latency to
ensure that real-time tasks
receive immediate attention.
 Indeed, for hard real-time
systems, interrupt latency
must not simply be
minimized, it must be
bounded to meet the strict
requirements of these
systems.
 The amount of time required for the scheduling dispatcher to stop one
process and start another is known as dispatch latency.
 Providing real-time tasks with immediate access to the CPU mandates
that real-time operating systems minimize this latency as well. The most
effective technique for keeping dispatch latency low is to provide
preemptive kernels.
The conflict phase of dispatch latency has two components:
1. Preemption of any process running in the kernel
2. Release by low-priority processes of resources needed by a high-priority process
Priority-Based Scheduling
 The scheduler for a real-time operating system must support a
priority-based algorithm with preemption.
 Hard real-time systems must guarantee that real-time tasks will be
serviced in accord with their deadline requirements, and making such
guarantees requires additional scheduling features.
 Hard real-time systems are often characterized by tasks that must run
at regular periodic intervals, each having a period p, a constant time
required to execute, (CPU burst), t, and a deadline after the
beginning of each period by which the task must be completed, d.
 In all cases, t <= d <= p
 Using a technique known as an admission-control algorithm, each task
must specify its needs at the time it attempts to launch.
 The scheduler does one of two things. It either admits the process,
guaranteeing that the process will complete on time, or rejects the
request as impossible if it cannot guarantee that the task will be serviced
by its deadline.
 The process of deciding the execution order of real-time tasks
depends on the priority of the task.
Fixed priority:
– RM (rate-monotonic): the smaller the period, the higher the priority
– DM (deadline-monotonic): the smaller the deadline, the higher the priority
Dynamic priority:
– EDF: earliest deadline first
Rate-Monotonic Scheduling
 The rate-monotonic scheduling algorithm schedules periodic tasks
using a static priority policy with preemption.
 If a lower-priority process is running and a higher-priority process
becomes available to run, it will preempt the lower-priority process.
 Upon entering the system, each periodic task is assigned a priority
inversely based on its period.
 The shorter the period, the higher the priority; the longer the period,
the lower the priority. The rationale behind this policy is to assign a
higher priority to tasks that require the CPU more often.
 Let’s consider an example with two processes, P1 and P2. The periods
for P1 and P2 are 50 and 100, respectively, p1 = 50 and p2 = 100. The
processing times are t1 = 20 for P1 and t2 = 35 for P2. The deadline for
each process requires that it complete its CPU burst by the start of its
next period.
 The total CPU utilization is 20/50 = 0.4 for P1 and 35/100 = 0.35
for P2, or 0.75 (75%) overall.
 Let's consider first what can happen if the task with the longer period is given
higher priority. P2 starts execution first and completes at time 35. At this point, P1
starts; it completes its CPU burst at time 55. If P2 is allowed to go first, then P1
cannot complete before its deadline.
 On the other hand, if P1 is given higher priority, it gets to go first, and P2 starts
after P1 completes its burst. At time 50 when the next period for P1 starts, P2 has
only completed 30 of its 35 needed time units, but it gets pre-empted by P1. At
time 70, P1 completes its task for its second period, and the P2 is allowed to
complete its last 5 time units. Overall both processes complete at time 75, and the
cpu is then idle for 25 time units, before the process repeats.
 Rate-monotonic scheduling is considered optimal among algorithms that use static
priorities, because any set of processes that cannot be scheduled with this
algorithm cannot be scheduled with any other static-priority scheduling algorithm
either. There are, however, some sets of processes that cannot be scheduled with
static priorities.
 For example, suppose that p1 = 50, t1 = 25, p2 = 80, t2 = 35, and the deadlines
match the periods. Overall CPU usage is 25/50 = 0.5 for P1 and 35/80 = 0.44 for P2,
or 0.94 (94%) overall, suggesting it should be possible to schedule the processes.
With rate-monotonic scheduling, P1 goes first and completes its first burst at time
25.
 P2 goes next, and completes 25 out of its 35 time units before it gets pre-empted
by P1 at time 50. P1 completes its second burst at 75, and then P2 completes its
last 10 time units at time 85, missing its deadline of 80 by 5 time units.
 The worst-case CPU utilization for scheduling N processes under this
algorithm is N * (2^(1/N) - 1), which is 100% for a single process, is bounded
at about 83% for two processes, and falls toward 69% as N approaches infinity.
Note that in our example above, the 94% utilization is higher than the 83%
two-process bound, so rate-monotonic scheduling cannot guarantee that the set
will meet its deadlines (and indeed P2 missed its deadline).
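A two-line check of this bound; the formula N * (2^(1/N) - 1) is the standard Liu-Layland result.

# Rate-monotonic worst-case utilization bound: U(N) = N * (2**(1/N) - 1).
for n in (1, 2, 3, 10, 100):
    print(n, round(n * (2 ** (1 / n) - 1), 3))
# 1 1.0, 2 0.828, 3 0.78, 10 0.718, 100 0.696 -- approaching ln 2 ~ 0.693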
Cases of fixed-priority scheduling with two tasks, T1=50, C1=25, T2=100, C2=40
Earliest-Deadline-First Scheduling
 EDF scheduling dynamically assigns priorities according to
deadline. The earlier the deadline, the higher the priority; the later
the deadline, the lower the priority.
 Under the EDF policy, when a process becomes runnable, it must
announce its deadline requirements to the system.
 EDF has been proven to be an optimal uniprocessor scheduling
algorithm. This means that, if a set of tasks is not schedulable
under EDF, then no other scheduling algorithm can feasibly
schedule this task set.
 For EDF, consider the above example where process P1 has a
period of p1 = 50 and a CPU burst of t1 = 25. For P2, the
corresponding values are p2 = 80 and t2 = 35.
For the above example, if EDF is implemented,
 At time 0, P1 has the earliest deadline and the highest priority, so it goes first,
followed by P2 at time 25 when P1 completes its first burst.
 At time 50, process P1 begins its second period, but since P2 has a deadline of 80
and the deadline for P1 is not until 100, P2 is allowed to stay on the CPU and
complete its burst, which it does at time 60.
 P1 then starts its second burst, which it completes at time 85. P2 started its second
period at time 80, but since P1 had an earlier deadline, P2 did not pre-empt P1.
 P2 starts its second burst at time 85, and continues until time 100, at which time
P1 starts its third period. At this point P1 has a deadline of 150 and P2 has a
deadline of 160, so P1 preempts P2.
 P1 completes its third burst at time 125, at which time P2 starts, completing its
third burst at time 145. The CPU sits idle for 5 time units, until P1 starts its next
period at 150 and P2 at 160.
Unlike the rate-monotonic algorithm, EDF scheduling does not
require that processes be periodic, nor must a process require a
constant amount of CPU time per burst.
The only requirement is that a process announce its deadline to
the scheduler when it becomes runnable.
The appeal of EDF scheduling is that it is theoretically
optimal—theoretically, it can schedule processes so that each
process can meet its deadline requirements and CPU utilization
will be 100 percent.
In practice, however, it is impossible to achieve this level of
CPU utilization due to the cost of context switching between
processes and interrupt handling.
In the example below, at time 0 both A1 and B1 arrive. Since
A1 has the earliest deadline, it is scheduled first. When A1
completes, B1 is given the processor. At time 20, A2 arrives.
Because A2 has an earlier deadline than B1, B1 is interrupted so that
A2 can execute to completion. B1 is then resumed at time 30.
At time 40, A3 arrives. However, B1 has an earlier
deadline and is allowed to execute to completion, at time 45. A3
is then given the processor and finishes at time 55.
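The earlier P1/P2 trace can be reproduced with a unit-time EDF sketch; it assumes deadlines equal the ends of the periods and that time advances in unit ticks.

# EDF sketch: each tick, run the released job with the earliest absolute
# deadline; a task's deadline is the end of its current period.
def edf_trace(tasks, horizon):
    remaining = {name: 0 for name in tasks}
    deadline = {name: 0 for name in tasks}
    trace = []
    for t in range(horizon):
        for name, (period, burst) in tasks.items():
            if t % period == 0:          # new period: release the next job
                remaining[name] = burst
                deadline[name] = t + period
        ready = [n for n in tasks if remaining[n] > 0]
        if ready:
            running = min(ready, key=lambda n: deadline[n])
            remaining[running] -= 1
            trace.append(running)
        else:
            trace.append(".")            # CPU idle
    return "".join(trace)

# P1: period 50, burst 25; P2: period 80, burst 35 (from the example above).
print(edf_trace({"1": (50, 25), "2": (80, 35)}, 150))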
Proportional Share Scheduling
 Proportional share scheduling works by dividing the total amount
of time available up into an equal number of shares, and then each
process must request a certain share of the total when it tries to
start.
 Assume that a total of T = 100 shares is to be divided among three
processes, A, B, and C. A is assigned 50 shares, B is assigned 15
shares, and C is assigned 20 shares. This scheme ensures that A
will have 50 percent of total processor time, B will have 15
percent, and C will have 20 percent.
 Proportional share scheduling works with an admission-control
policy, not starting any task if it cannot guarantee the shares that
the task says that it needs.
 If a new process D requested 30 shares, the admission controller
would deny D entry into the system, since only 100 - 85 = 15 shares
remain unallocated.
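A sketch of this admission test follows; the names and structure are assumed for illustration.

# Proportional-share admission sketch: T = 100 shares in total; a request is
# admitted only if it fits within the unallocated remainder.
TOTAL_SHARES = 100
allocated = {"A": 50, "B": 15, "C": 20}

def admit(name, shares):
    free = TOTAL_SHARES - sum(allocated.values())
    if shares <= free:
        allocated[name] = shares
        return True
    return False

print(admit("D", 30))   # False -- only 15 shares remain
print(admit("D", 15))   # True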

Weitere ähnliche Inhalte

Was ist angesagt? (20)

Process scheduling
Process schedulingProcess scheduling
Process scheduling
 
Priority scheduling algorithms
Priority scheduling algorithmsPriority scheduling algorithms
Priority scheduling algorithms
 
Scheduling algorithms
Scheduling algorithmsScheduling algorithms
Scheduling algorithms
 
Memory Management in OS
Memory Management in OSMemory Management in OS
Memory Management in OS
 
Process management in os
Process management in osProcess management in os
Process management in os
 
Process scheduling
Process schedulingProcess scheduling
Process scheduling
 
Disk scheduling
Disk schedulingDisk scheduling
Disk scheduling
 
Multithreading
MultithreadingMultithreading
Multithreading
 
Process of operating system
Process of operating systemProcess of operating system
Process of operating system
 
process creation OS
process creation OSprocess creation OS
process creation OS
 
Operating Systems: Process Scheduling
Operating Systems: Process SchedulingOperating Systems: Process Scheduling
Operating Systems: Process Scheduling
 
Multi processor scheduling
Multi  processor schedulingMulti  processor scheduling
Multi processor scheduling
 
CS6401 OPERATING SYSTEMS Unit 2
CS6401 OPERATING SYSTEMS Unit 2CS6401 OPERATING SYSTEMS Unit 2
CS6401 OPERATING SYSTEMS Unit 2
 
Operating System-Process Scheduling
Operating System-Process SchedulingOperating System-Process Scheduling
Operating System-Process Scheduling
 
Thread scheduling in Operating Systems
Thread scheduling in Operating SystemsThread scheduling in Operating Systems
Thread scheduling in Operating Systems
 
Distributed file system
Distributed file systemDistributed file system
Distributed file system
 
Mainframe systems
Mainframe systemsMainframe systems
Mainframe systems
 
operating system structure
operating system structureoperating system structure
operating system structure
 
Bankers algorithm
Bankers algorithmBankers algorithm
Bankers algorithm
 
Operating Systems Process Scheduling Algorithms
Operating Systems   Process Scheduling AlgorithmsOperating Systems   Process Scheduling Algorithms
Operating Systems Process Scheduling Algorithms
 

Ähnlich wie Process scheduling (CPU Scheduling)

Scheduling algorithms
Scheduling algorithmsScheduling algorithms
Scheduling algorithmsPaurav Shah
 
Cpu scheduling pre final formatting
Cpu scheduling pre final formattingCpu scheduling pre final formatting
Cpu scheduling pre final formattingmarangburu42
 
Preemptive process example.pptx
Preemptive process example.pptxPreemptive process example.pptx
Preemptive process example.pptxjamilaltiti1
 
Cpu scheduling final
Cpu scheduling finalCpu scheduling final
Cpu scheduling finalmarangburu42
 
LM10,11,12 - CPU SCHEDULING algorithms and its processes
LM10,11,12 - CPU SCHEDULING algorithms and its processesLM10,11,12 - CPU SCHEDULING algorithms and its processes
LM10,11,12 - CPU SCHEDULING algorithms and its processesmanideepakc
 
chapter 5 CPU scheduling.ppt
chapter  5 CPU scheduling.pptchapter  5 CPU scheduling.ppt
chapter 5 CPU scheduling.pptKeyreSebre
 
Operating Systems Third Unit - Fourth Semester - Engineering
Operating Systems Third Unit  - Fourth Semester - EngineeringOperating Systems Third Unit  - Fourth Semester - Engineering
Operating Systems Third Unit - Fourth Semester - EngineeringYogesh Santhan
 
Scheduling algo(by HJ)
Scheduling algo(by HJ)Scheduling algo(by HJ)
Scheduling algo(by HJ)Harshit Jain
 
20118016 aryan sabat study and analysis of scheduler design
20118016 aryan sabat study and analysis of scheduler design20118016 aryan sabat study and analysis of scheduler design
20118016 aryan sabat study and analysis of scheduler design8016AryanSabat
 
CPU scheduling in Operating System Explanation
CPU scheduling in Operating System ExplanationCPU scheduling in Operating System Explanation
CPU scheduling in Operating System ExplanationAnitaSofiaKeyser
 
Process Scheduling Algorithms.pdf
Process Scheduling Algorithms.pdfProcess Scheduling Algorithms.pdf
Process Scheduling Algorithms.pdfRakibul Rakib
 

Ähnlich wie Process scheduling (CPU Scheduling) (20)

Scheduling algorithms
Scheduling algorithmsScheduling algorithms
Scheduling algorithms
 
Cpu scheduling pre final formatting
Cpu scheduling pre final formattingCpu scheduling pre final formatting
Cpu scheduling pre final formatting
 
Cpu scheduling
Cpu schedulingCpu scheduling
Cpu scheduling
 
Preemptive process example.pptx
Preemptive process example.pptxPreemptive process example.pptx
Preemptive process example.pptx
 
cpu scheduling.pdf
cpu scheduling.pdfcpu scheduling.pdf
cpu scheduling.pdf
 
Cpu scheduling final
Cpu scheduling finalCpu scheduling final
Cpu scheduling final
 
Cpu scheduling
Cpu schedulingCpu scheduling
Cpu scheduling
 
UNIT II - CPU SCHEDULING.docx
UNIT II - CPU SCHEDULING.docxUNIT II - CPU SCHEDULING.docx
UNIT II - CPU SCHEDULING.docx
 
LM10,11,12 - CPU SCHEDULING algorithms and its processes
LM10,11,12 - CPU SCHEDULING algorithms and its processesLM10,11,12 - CPU SCHEDULING algorithms and its processes
LM10,11,12 - CPU SCHEDULING algorithms and its processes
 
Osy ppt - Copy.pptx
Osy ppt - Copy.pptxOsy ppt - Copy.pptx
Osy ppt - Copy.pptx
 
Unit 2 notes
Unit 2 notesUnit 2 notes
Unit 2 notes
 
chapter 5 CPU scheduling.ppt
chapter  5 CPU scheduling.pptchapter  5 CPU scheduling.ppt
chapter 5 CPU scheduling.ppt
 
Operating Systems Third Unit - Fourth Semester - Engineering
Operating Systems Third Unit  - Fourth Semester - EngineeringOperating Systems Third Unit  - Fourth Semester - Engineering
Operating Systems Third Unit - Fourth Semester - Engineering
 
Os unit 2
Os unit 2Os unit 2
Os unit 2
 
Cp usched 2
Cp usched  2Cp usched  2
Cp usched 2
 
Scheduling algo(by HJ)
Scheduling algo(by HJ)Scheduling algo(by HJ)
Scheduling algo(by HJ)
 
20118016 aryan sabat study and analysis of scheduler design
20118016 aryan sabat study and analysis of scheduler design20118016 aryan sabat study and analysis of scheduler design
20118016 aryan sabat study and analysis of scheduler design
 
CPU Scheduling
CPU SchedulingCPU Scheduling
CPU Scheduling
 
CPU scheduling in Operating System Explanation
CPU scheduling in Operating System ExplanationCPU scheduling in Operating System Explanation
CPU scheduling in Operating System Explanation
 
Process Scheduling Algorithms.pdf
Process Scheduling Algorithms.pdfProcess Scheduling Algorithms.pdf
Process Scheduling Algorithms.pdf
 

Mehr von Mukesh Chinta

CCNA-2 SRWE Mod-10 LAN Security Concepts
CCNA-2 SRWE Mod-10 LAN Security ConceptsCCNA-2 SRWE Mod-10 LAN Security Concepts
CCNA-2 SRWE Mod-10 LAN Security ConceptsMukesh Chinta
 
CCNA-2 SRWE Mod-11 Switch Security Configuration
CCNA-2 SRWE Mod-11 Switch Security ConfigurationCCNA-2 SRWE Mod-11 Switch Security Configuration
CCNA-2 SRWE Mod-11 Switch Security ConfigurationMukesh Chinta
 
CCNA-2 SRWE Mod-12 WLAN Concepts
CCNA-2 SRWE Mod-12 WLAN ConceptsCCNA-2 SRWE Mod-12 WLAN Concepts
CCNA-2 SRWE Mod-12 WLAN ConceptsMukesh Chinta
 
CCNA-2 SRWE Mod-13 WLAN Configuration
CCNA-2 SRWE Mod-13 WLAN ConfigurationCCNA-2 SRWE Mod-13 WLAN Configuration
CCNA-2 SRWE Mod-13 WLAN ConfigurationMukesh Chinta
 
CCNA-2 SRWE Mod-15 Static IP Routing
CCNA-2 SRWE Mod-15 Static IP RoutingCCNA-2 SRWE Mod-15 Static IP Routing
CCNA-2 SRWE Mod-15 Static IP RoutingMukesh Chinta
 
CCNA-2 SRWE Mod-14 Routing Concepts
CCNA-2 SRWE Mod-14 Routing ConceptsCCNA-2 SRWE Mod-14 Routing Concepts
CCNA-2 SRWE Mod-14 Routing ConceptsMukesh Chinta
 
Protecting the Organization - Cisco: Intro to Cybersecurity Chap-4
Protecting the Organization - Cisco: Intro to Cybersecurity Chap-4Protecting the Organization - Cisco: Intro to Cybersecurity Chap-4
Protecting the Organization - Cisco: Intro to Cybersecurity Chap-4Mukesh Chinta
 
Protecting Your Data and Privacy- Cisco: Intro to Cybersecurity chap-3
Protecting Your Data and Privacy- Cisco: Intro to Cybersecurity chap-3Protecting Your Data and Privacy- Cisco: Intro to Cybersecurity chap-3
Protecting Your Data and Privacy- Cisco: Intro to Cybersecurity chap-3Mukesh Chinta
 
Attacks, Concepts and Techniques - Cisco: Intro to Cybersecurity Chap-2
Attacks, Concepts and Techniques - Cisco: Intro to Cybersecurity Chap-2Attacks, Concepts and Techniques - Cisco: Intro to Cybersecurity Chap-2
Attacks, Concepts and Techniques - Cisco: Intro to Cybersecurity Chap-2Mukesh Chinta
 
The need for Cybersecurity - Cisco Intro to Cybersec Chap-1
The need for Cybersecurity - Cisco Intro to Cybersec Chap-1The need for Cybersecurity - Cisco Intro to Cybersec Chap-1
The need for Cybersecurity - Cisco Intro to Cybersec Chap-1Mukesh Chinta
 
Cisco Cybersecurity Essentials Chapter- 7
Cisco Cybersecurity Essentials Chapter- 7Cisco Cybersecurity Essentials Chapter- 7
Cisco Cybersecurity Essentials Chapter- 7Mukesh Chinta
 
Protocols and Reference models CCNAv7-1
Protocols and Reference models  CCNAv7-1Protocols and Reference models  CCNAv7-1
Protocols and Reference models CCNAv7-1Mukesh Chinta
 
Basic Switch and End Device configuration CCNA7 Module 2
Basic Switch and End Device configuration   CCNA7 Module 2Basic Switch and End Device configuration   CCNA7 Module 2
Basic Switch and End Device configuration CCNA7 Module 2Mukesh Chinta
 
Introduction to networks CCNAv7 Module-1
Introduction to networks CCNAv7 Module-1Introduction to networks CCNAv7 Module-1
Introduction to networks CCNAv7 Module-1Mukesh Chinta
 
Operating systems system structures
Operating systems   system structuresOperating systems   system structures
Operating systems system structuresMukesh Chinta
 
Introduction to Operating Systems
Introduction to Operating SystemsIntroduction to Operating Systems
Introduction to Operating SystemsMukesh Chinta
 
Cisco cybersecurity essentials chapter 8
Cisco cybersecurity essentials chapter 8Cisco cybersecurity essentials chapter 8
Cisco cybersecurity essentials chapter 8Mukesh Chinta
 
Cisco cybersecurity essentials chapter - 6
Cisco cybersecurity essentials chapter - 6Cisco cybersecurity essentials chapter - 6
Cisco cybersecurity essentials chapter - 6Mukesh Chinta
 
Cisco cybersecurity essentials chapter 4
Cisco cybersecurity essentials chapter 4Cisco cybersecurity essentials chapter 4
Cisco cybersecurity essentials chapter 4Mukesh Chinta
 
Cisco cybersecurity essentials chapter -5
Cisco cybersecurity essentials chapter -5Cisco cybersecurity essentials chapter -5
Cisco cybersecurity essentials chapter -5Mukesh Chinta
 

Mehr von Mukesh Chinta (20)

CCNA-2 SRWE Mod-10 LAN Security Concepts
CCNA-2 SRWE Mod-10 LAN Security ConceptsCCNA-2 SRWE Mod-10 LAN Security Concepts
CCNA-2 SRWE Mod-10 LAN Security Concepts
 
CCNA-2 SRWE Mod-11 Switch Security Configuration
CCNA-2 SRWE Mod-11 Switch Security ConfigurationCCNA-2 SRWE Mod-11 Switch Security Configuration
CCNA-2 SRWE Mod-11 Switch Security Configuration
 
CCNA-2 SRWE Mod-12 WLAN Concepts
CCNA-2 SRWE Mod-12 WLAN ConceptsCCNA-2 SRWE Mod-12 WLAN Concepts
CCNA-2 SRWE Mod-12 WLAN Concepts
 
CCNA-2 SRWE Mod-13 WLAN Configuration
CCNA-2 SRWE Mod-13 WLAN ConfigurationCCNA-2 SRWE Mod-13 WLAN Configuration
CCNA-2 SRWE Mod-13 WLAN Configuration
 
CCNA-2 SRWE Mod-15 Static IP Routing
CCNA-2 SRWE Mod-15 Static IP RoutingCCNA-2 SRWE Mod-15 Static IP Routing
CCNA-2 SRWE Mod-15 Static IP Routing
 
CCNA-2 SRWE Mod-14 Routing Concepts
CCNA-2 SRWE Mod-14 Routing ConceptsCCNA-2 SRWE Mod-14 Routing Concepts
CCNA-2 SRWE Mod-14 Routing Concepts
 
Protecting the Organization - Cisco: Intro to Cybersecurity Chap-4
Protecting the Organization - Cisco: Intro to Cybersecurity Chap-4Protecting the Organization - Cisco: Intro to Cybersecurity Chap-4
Protecting the Organization - Cisco: Intro to Cybersecurity Chap-4
 
Protecting Your Data and Privacy- Cisco: Intro to Cybersecurity chap-3
Protecting Your Data and Privacy- Cisco: Intro to Cybersecurity chap-3Protecting Your Data and Privacy- Cisco: Intro to Cybersecurity chap-3
Protecting Your Data and Privacy- Cisco: Intro to Cybersecurity chap-3
 
Attacks, Concepts and Techniques - Cisco: Intro to Cybersecurity Chap-2
Attacks, Concepts and Techniques - Cisco: Intro to Cybersecurity Chap-2Attacks, Concepts and Techniques - Cisco: Intro to Cybersecurity Chap-2
Attacks, Concepts and Techniques - Cisco: Intro to Cybersecurity Chap-2
 
The need for Cybersecurity - Cisco Intro to Cybersec Chap-1
The need for Cybersecurity - Cisco Intro to Cybersec Chap-1The need for Cybersecurity - Cisco Intro to Cybersec Chap-1
The need for Cybersecurity - Cisco Intro to Cybersec Chap-1
 
Cisco Cybersecurity Essentials Chapter- 7
Cisco Cybersecurity Essentials Chapter- 7Cisco Cybersecurity Essentials Chapter- 7
Cisco Cybersecurity Essentials Chapter- 7
 
Protocols and Reference models CCNAv7-1
Protocols and Reference models  CCNAv7-1Protocols and Reference models  CCNAv7-1
Protocols and Reference models CCNAv7-1
 
Basic Switch and End Device configuration CCNA7 Module 2
Basic Switch and End Device configuration   CCNA7 Module 2Basic Switch and End Device configuration   CCNA7 Module 2
Basic Switch and End Device configuration CCNA7 Module 2
 
Introduction to networks CCNAv7 Module-1
Introduction to networks CCNAv7 Module-1Introduction to networks CCNAv7 Module-1
Introduction to networks CCNAv7 Module-1
 
Operating systems system structures
Operating systems   system structuresOperating systems   system structures
Operating systems system structures
 
Introduction to Operating Systems
Introduction to Operating SystemsIntroduction to Operating Systems
Introduction to Operating Systems
 
Cisco cybersecurity essentials chapter 8
Cisco cybersecurity essentials chapter 8Cisco cybersecurity essentials chapter 8
Cisco cybersecurity essentials chapter 8
 
Cisco cybersecurity essentials chapter - 6
Cisco cybersecurity essentials chapter - 6Cisco cybersecurity essentials chapter - 6
Cisco cybersecurity essentials chapter - 6
 
Cisco cybersecurity essentials chapter 4
Cisco cybersecurity essentials chapter 4Cisco cybersecurity essentials chapter 4
Cisco cybersecurity essentials chapter 4
 
Cisco cybersecurity essentials chapter -5
Cisco cybersecurity essentials chapter -5Cisco cybersecurity essentials chapter -5
Cisco cybersecurity essentials chapter -5
 

Kürzlich hochgeladen

BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdfSoniaTolstoy
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingTechSoup
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactPECB
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityGeoBlogs
 
Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfagholdier
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdfQucHHunhnh
 
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...christianmathematics
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfciinovamais
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfJayanti Pande
 
Unit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxUnit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxVishalSingh1417
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxiammrhaywood
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDThiyagu K
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104misteraugie
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsTechSoup
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introductionMaksud Ahmed
 
fourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingfourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingTeacherCyreneCayanan
 
social pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajansocial pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajanpragatimahajan3
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactdawncurless
 

Kürzlich hochgeladen (20)

BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 
Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdf
 
Advance Mobile Application Development class 07
Advance Mobile Application Development class 07Advance Mobile Application Development class 07
Advance Mobile Application Development class 07
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdf
 
Unit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxUnit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptx
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SD
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The Basics
 
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
fourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingfourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writing
 
social pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajansocial pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajan
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impact
 

Process scheduling (CPU Scheduling)

  • 2.  Process scheduling is an essential part of a Multiprogramming operating systems. Such operating systems allow more than one process to be loaded into the executable memory at a time and the loaded process shares the CPU using time multiplexing.  A typical process involves both I/O time and CPU time.  In a uniprogramming system like MS-DOS, time spent waiting for I/O is wasted and CPU is free during this time.  In multiprogramming systems, one process can use CPU while another is waiting for I/O. This is possible only with process scheduling.
  • 3.  Process execution begins with a CPU burst. That is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on. Eventually, the final CPU burst ends with a system request to terminate execution.  An I/O-bound program typically has many short CPU bursts. A CPU-bound program might have a few long CPU bursts.
  • 4.  The short-term scheduler, or CPU scheduler selects a process from the processes in memory that are ready to execute and allocates the CPU to that process.
  • 5. CPU-scheduling decisions may take place under the following four circumstances: 1. When a process switches from the running state to the waiting state (for example, as the result of an I/O request or an invocation of wait() for the termination of a child process). 2. When a process switches from the running state to the ready state (for example, when an interrupt occurs) 3. When a process switches from the waiting state to the ready state (for example, at completion of I/O) 4. When a process terminates  For conditions 1 and 4 there is no choice - A new process must be selected.  For conditions 2 and 3 there is a choice - To either continue running the current process, or select a different one.  If scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive, or cooperative. Under these conditions, once a process starts running it keeps running, until it either voluntarily blocks or until it finishes. Otherwise the system is said to be preemptive.
  • 6. The is the module that gives control of the CPU to the process selected by the scheduler. This function involves: • Switching context. • Switching to user mode. • Jumping to the proper location in the newly loaded program. The dispatcher needs to be as fast as possible, as it is run on every context switch. is the amount of time required for the scheduler to stop one process and start another.
  • 7.  Different CPU-scheduling algorithms have different properties, and the choice of a particular algorithm may favor one class of processes over another.  Which characteristics are used for comparison can make a substantial difference in which algorithm is judged to be best.
  • 8. There are several different criteria to consider when trying to select the "best" scheduling algorithm for a particular situation and environment, including: - Ideally the CPU would be busy 100% of the time, so as to waste 0 CPU cycles. On a real system CPU usage should range from 40% ( lightly loaded ) to 90% ( heavily loaded. ) - Number of processes completed per unit time. May range from 10/second to 1/hour depending on the specific processes. - Time required for a particular process to complete, from submission time to completion. (Wall clock time.) – is the sum of the times, processes spend in the ready queue waiting their turn to get on the CPU. - Amount of time it takes from when a request was submitted until the first response is produced. Remember, it is the time till the first response and not the completion of process execution(final response).
  • 9. • In general one wants to optimize the average value of a criteria ( Maximize CPU utilization and throughput, and minimize all the others. ) However some times one wants to do something different, such as to minimize the maximum response time. • Sometimes it is most desirable to minimize the variance of a criteria than the actual value. i.e. users are more accepting of a consistent predictable system than an inconsistent one, even if it is a little bit slower.
  • 11. First-Come, First-Served Scheduling  The first-come, first-served(FCFS) is the simplest scheduling algorithm.  the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue.  When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue.  The running process is then removed from the queue.  On the negative side, the average waiting time under the FCFS policy is often quite long.
• 13. EXAMPLE: A Gantt chart is a horizontal bar chart developed as a production control tool in 1917 by Henry L. Gantt, an American engineer and social scientist.
• 14. EXAMPLE: Consider three processes that arrive at time 0 in the order P1, P2, P3, with CPU burst times of 24, 3, and 3 milliseconds respectively. In the first Gantt chart, process P1 goes first: the average waiting time for the three processes is (0 + 24 + 27) / 3 = 17.0 ms. In the second Gantt chart, the same three processes are served in the order P2, P3, P1, giving an average waiting time of (0 + 3 + 6) / 3 = 3.0 ms. This reduction is substantial. Thus, the average waiting time under an FCFS policy is generally not minimal and may vary substantially if the processes' CPU burst times vary greatly.
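To make the arithmetic concrete, here is a minimal Python sketch of the FCFS waiting-time calculation, assuming all processes arrive at time 0 in the order given; the function name fcfs_waiting_times is illustrative, not from any library.

```python
# Minimal FCFS waiting-time calculation (illustrative sketch).
# Assumes all processes arrive at time 0, in the order given.

def fcfs_waiting_times(burst_times):
    """Return the per-process waiting times under FCFS."""
    waits = []
    clock = 0
    for burst in burst_times:
        waits.append(clock)   # each process waits for all earlier bursts
        clock += burst
    return waits

# The example above, in the order P1, P2, P3 (bursts 24, 3, 3):
waits = fcfs_waiting_times([24, 3, 3])
print(waits, sum(waits) / len(waits))   # [0, 24, 27] -> average 17.0

# The same processes in the order P2, P3, P1:
waits = fcfs_waiting_times([3, 3, 24])
print(waits, sum(waits) / len(waits))   # [0, 3, 6] -> average 3.0
```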
• 15.
• FCFS can also degrade a busy dynamic system in another way, known as the convoy effect:
- When one CPU-intensive process holds the CPU, a number of I/O-intensive processes can get backed up behind it, leaving the I/O devices idle.
- When the CPU hog finally relinquishes the CPU, the I/O-bound processes pass through the CPU quickly and queue up for I/O, leaving the CPU idle; the cycle then repeats when the CPU-intensive process returns to the ready queue.
• The FCFS scheduling algorithm is nonpreemptive:
- Once the CPU has been allocated to a process, that process keeps the CPU until it releases it, either by terminating or by requesting I/O.
- The FCFS algorithm is thus particularly troublesome for time-sharing systems, where it is important that each user get a share of the CPU at regular intervals.
• 16. Shortest-Job-First Scheduling
• The shortest-job-first (SJF) scheduling algorithm associates with each process the length of the process's next CPU burst.
• When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.
• SJF is easy to implement in batch systems, where the required CPU time is known in advance, but impossible to implement exactly in interactive systems, where the required CPU time is not known.
• 17. EXAMPLE: Consider four processes P1, P2, P3, and P4, all arriving at time 0, with CPU burst times of 6, 8, 7, and 3 milliseconds respectively. Under SJF the Gantt chart order is P4, P1, P3, P2, and the average waiting time is (3 + 16 + 9 + 0) / 4 = 7 milliseconds.
• 18. The SJF algorithm can be either preemptive or nonpreemptive. The choice arises when a new process arrives at the ready queue while a previous process is still executing: a preemptive SJF algorithm preempts the currently executing process if the newcomer's burst is shorter than the time remaining, whereas a nonpreemptive SJF algorithm lets the current process finish its burst. Preemptive SJF scheduling is sometimes called shortest-remaining-time-first (SRTF) scheduling.
• 19. EXAMPLE: Consider four processes: P1 arrives at time 0 with an 8 ms burst, P2 at time 1 with 4 ms, P3 at time 2 with 9 ms, and P4 at time 3 with 5 ms.
• Process P1 is started at time 0, since it is the only process in the queue. Process P2 arrives at time 1. The remaining time for process P1 (7 milliseconds) is larger than the time required by process P2 (4 milliseconds), so process P1 is preempted and process P2 is scheduled.
• The average waiting time for this example is ((10 - 1) + (1 - 1) + (17 - 2) + (5 - 3)) / 4 = 26 / 4 = 6.5 milliseconds.
• Nonpreemptive SJF scheduling would result in an average waiting time of 7.75 milliseconds.
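The walkthrough above can be reproduced with a small simulation. The following sketch assumes a (name, arrival, burst) input format; it steps the clock one millisecond at a time and always runs the ready process with the smallest remaining time. It is illustrative rather than an efficient implementation.

```python
# Illustrative sketch of preemptive SJF (shortest-remaining-time-first).
# Assumes burst lengths are known in advance; times are in milliseconds.

def srtf(processes):
    """processes: list of (name, arrival, burst). Returns {name: waiting time}."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: arr for name, arr, _ in processes}
    finish = {}
    clock = 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:
            clock += 1                 # no one is ready: CPU idles
            continue
        current = min(ready, key=lambda n: remaining[n])  # smallest remaining time
        remaining[current] -= 1
        clock += 1
        if remaining[current] == 0:
            del remaining[current]
            finish[current] = clock
    # waiting time = turnaround time - burst time
    return {n: finish[n] - arr - burst for n, arr, burst in processes}

waits = srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
print(waits, sum(waits.values()) / 4)   # {P1: 9, P2: 0, P3: 15, P4: 2} -> 6.5
```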
• 21.
• SJF is provably optimal in terms of average waiting time, but it suffers from one important problem: how do you know how long the next CPU burst is going to be?
• For long-term batch jobs this can be done based upon the limits that users set for their jobs when they submit them. This encourages users to set low limits, at the risk of having to resubmit a job whose limit was set too low. It does not work for short-term CPU scheduling on an interactive system.
• Another option is to statistically measure the run-time characteristics of jobs, particularly if the same tasks are run repeatedly and predictably. But once again, that is not really viable for short-term CPU scheduling in the real world.
• A more practical approach is to predict the length of the next burst based on some historical measurement of recent burst times for this process. One simple, fast, and relatively accurate method is the exponential average of the measured lengths of previous CPU bursts, sketched below.
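The exponential average is defined by tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n, where t_n is the length of the nth measured burst, tau_n is the previous prediction, and 0 <= alpha <= 1 weights recent history (alpha = 1/2 is a common choice). A minimal sketch follows; the initial guess tau_0 = 10 and the sample burst lengths are illustrative assumptions.

```python
# Exponential average for predicting the next CPU burst:
#   tau_next = alpha * t_recent + (1 - alpha) * tau_previous

def predict_next_burst(measured_bursts, tau0=10.0, alpha=0.5):
    """Fold the measured bursts into a prediction for the next one."""
    tau = tau0                       # initial guess before any history exists
    for t in measured_bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# Example: after short bursts the prediction drops, after long ones it rises.
print(predict_next_burst([6, 4, 6, 4, 13, 13, 13]))
```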
• 22. Priority Scheduling
• The SJF algorithm is a special case of the general priority-scheduling algorithm.
• A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order.
• An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted) next CPU burst: the larger the CPU burst, the lower the priority, and vice versa.
• In practice, priorities are implemented using integers within a fixed range, but there is no agreed-upon convention as to whether "high" priorities use large numbers or small numbers.
• 23. EXAMPLE: Consider the following set of processes, assumed to have arrived at time 0 in the order P1, P2, ..., P5, with CPU bursts of 10, 1, 2, 1, and 5 milliseconds and priorities 3, 1, 4, 5, and 2 respectively (here a lower number means a higher priority). The Gantt chart order is P2, P5, P1, P3, P4, and the average waiting time is (6 + 0 + 16 + 18 + 1) / 5 = 8.2 milliseconds.
• 24. EXAMPLE: Try this! (For the second exercise, the average waiting time works out to 9.6 milliseconds.)
• 26.
• Priorities can be assigned either internally or externally.
- Internal priorities are assigned by the OS using criteria such as average burst time, ratio of CPU to I/O activity, system resource use, and other factors available to the kernel.
- External priorities are assigned by users, based on the importance of the job, fees paid, politics, etc.
• Priority scheduling can be either preemptive or nonpreemptive.
- When a process arrives at the ready queue, its priority is compared with the priority of the currently running process.
- A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
- A nonpreemptive priority scheduling algorithm will simply put the new process at the head of the ready queue.
• 27.
• Priority scheduling can suffer from a major problem known as indefinite blocking, or starvation, in which a low-priority task can wait forever because there are always other jobs around that have higher priority.
- If this problem is allowed to occur, processes will either run eventually when the system load lightens, or will eventually be lost when the system is shut down or crashes. (There are rumors of jobs that were stuck for years.)
- One common solution is aging, in which the priority of a job increases the longer it waits. Under this scheme a low-priority job will eventually have its priority raised high enough that it gets run. A small sketch of aging follows.
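A hedged sketch of one possible aging scheme: it assumes the convention that smaller numbers mean higher priority, and the boost interval of 100 time units, the Proc class, and all names are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Proc:
    name: str
    base_priority: int        # smaller number = higher priority (assumed convention)
    enqueue_time: int
    effective_priority: int = 0

AGING_INTERVAL = 100          # illustrative: one priority boost per 100 units waited

def age_ready_queue(ready_queue, now):
    """Raise the effective priority of long-waiting processes."""
    for p in ready_queue:
        boost = (now - p.enqueue_time) // AGING_INTERVAL
        p.effective_priority = max(0, p.base_priority - boost)

queue = [Proc("A", base_priority=7, enqueue_time=0),
         Proc("B", base_priority=2, enqueue_time=450)]
age_ready_queue(queue, now=500)
# A has waited 500 units (boost 5), B only 50 (boost 0): both now priority 2.
print([(p.name, p.effective_priority) for p in queue])
```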
• 28. Round-Robin Scheduling
• The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems.
• Round-robin scheduling is similar to FCFS scheduling, except that each CPU burst is limited to a fixed time slice called the time quantum.
• When a process is given the CPU, a timer is set to the value of the time quantum.
• If the process finishes its burst before the timer expires, it gives up the CPU voluntarily, just as under the normal FCFS algorithm.
• If the timer goes off first, the process is preempted and moved to the back of the ready queue.
• 29.
• The ready queue is maintained as a circular queue: when all processes have had a turn, the scheduler gives the first process another turn, and so on.
• RR scheduling can give the effect of all processes sharing the CPU equally, although the average wait time can be longer than with other scheduling algorithms.
• EXAMPLE: For the three processes from the FCFS example (burst times of 24, 3, and 3 ms) with a time quantum of 4 ms: P1 waits 6 milliseconds (10 - 4), P2 waits 4 milliseconds, and P3 waits 7 milliseconds. Thus, the average waiting time is 17/3 = 5.66 milliseconds. A small simulation follows.
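The schedule above can be checked with a short simulation. This sketch assumes all processes arrive at time 0 and ignores context-switch overhead; the function name round_robin is illustrative.

```python
from collections import deque

# Minimal round-robin sketch; all processes assumed to arrive at time 0.

def round_robin(processes, quantum):
    """processes: list of (name, burst). Returns {name: waiting time}."""
    queue = deque(processes)
    burst_of = dict(processes)
    clock, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))   # preempted: back of the queue
        else:
            finish[name] = clock                    # burst done within the quantum
    return {name: finish[name] - burst_of[name] for name, _ in processes}

waits = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
print(waits, sum(waits.values()) / 3)   # P1: 6, P2: 4, P3: 7 -> average 5.66
```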
• 30.
• In the RR scheduling algorithm, no process is allocated the CPU for more than one time quantum in a row (unless it is the only runnable process).
• If a process's CPU burst exceeds one time quantum, that process is preempted and put back in the ready queue. The RR scheduling algorithm is thus preemptive.
• The performance of RR is sensitive to the time quantum selected. If the quantum is large enough, RR reduces to the FCFS algorithm; if it is very small, each of the n runnable processes appears to have its own processor running at 1/n the speed of the real processor.
• BUT a real system incurs overhead for every context switch, and the smaller the time quantum, the more context switches there are.
• Turnaround time also depends on the size of the time quantum. In general, average turnaround time is minimized if most processes finish their next CPU burst within a single time quantum.
• 31.
• Figure: how a smaller time quantum increases the number of context switches.
• A rule of thumb is that 80 percent of CPU bursts should be shorter than the time quantum.
• 33. PRACTICE PROBLEM: Consider the processes with the arrival times and burst times shown. Calculate the average turnaround time, average waiting time, and average response time using round robin with a time quantum of 3.
• 35. Multilevel Queue Scheduling
• When processes can be readily categorized, multiple separate queues can be established, each implementing whatever scheduling algorithm is most appropriate for that type of job, and/or with different parametric adjustments.
• Scheduling must also be done between queues, that is, scheduling one queue to get time relative to other queues. Two common options are strict priority (no job in a lower-priority queue runs until all higher-priority queues are empty) and round robin (each queue gets a time slice in turn, possibly of different sizes).
• Under this algorithm jobs cannot switch between queues: once a job is assigned to a queue, that is its queue until it finishes.
• 36. Multilevel Feedback-Queue Scheduling
• Multilevel feedback queue scheduling is similar to the ordinary multilevel queue scheduling described above, except that jobs may be moved from one queue to another for a variety of reasons:
- If the characteristics of a job change between CPU-intensive and I/O-intensive, it may be appropriate to switch the job from one queue to another.
- Aging can also be incorporated, so that a job that has waited a long time can get bumped up into a higher-priority queue for a while.
• Multilevel feedback queue scheduling is the most flexible scheme, because it can be tuned for any situation. But it is also the most complex to implement, because of all the adjustable parameters, which include (see the sketch below):
- The number of queues.
- The scheduling algorithm for each queue.
- The methods used to upgrade or demote processes from one queue to another (which may differ).
- The method used to determine which queue a process enters initially.
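One way to picture those adjustable parameters is as a small configuration plus a requeueing rule. Everything below (the queue levels, quantum sizes, and aging limit) is an illustrative assumption, not a prescribed design.

```python
# Hedged sketch of a multilevel feedback queue's tunable parameters
# and its move rules. All names and values are illustrative assumptions.

QUEUES = [
    {"algorithm": "RR", "quantum": 8},    # queue 0: highest priority
    {"algorithm": "RR", "quantum": 16},   # queue 1
    {"algorithm": "FCFS"},                # queue 2: lowest priority
]
ENTRY_QUEUE = 0          # new processes start at the top
AGING_LIMIT = 1000       # promote after waiting this long (time units)

def requeue(level, used_full_quantum, waited):
    """Decide a process's next queue after it leaves the CPU or ages."""
    if used_full_quantum and level < len(QUEUES) - 1:
        return level + 1                  # looks CPU-bound: demote one level
    if waited > AGING_LIMIT and level > 0:
        return level - 1                  # has starved: promote one level
    return level

print(requeue(0, used_full_quantum=True, waited=0))      # -> 1 (demoted)
print(requeue(2, used_full_quantum=False, waited=2000))  # -> 1 (aged up)
```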
• 38. Multiple-Processor Scheduling
• When multiple processors are available, scheduling gets more complicated, because now there is more than one CPU to be kept busy and in effective use at all times.
• Load sharing revolves around balancing the load between multiple processors.
• Multiprocessor systems may be heterogeneous (different kinds of CPUs) or homogeneous (all the same kind of CPU). Even in the latter case there may be special scheduling constraints, such as devices connected via a private bus to only one of the CPUs.
• 39. Approaches to Multiple-Processor Scheduling
• One approach is asymmetric multiprocessing, in which one processor is the master server, controlling all activities and running all kernel code, while the others run only user code. This approach is relatively simple, because only one processor accesses the shared system data structures.
• Another approach is symmetric multiprocessing (SMP), in which each processor schedules its own jobs, either from a common ready queue or from separate ready queues for each processor.
• Virtually all modern operating systems support SMP, including Windows XP, Windows 2000, Solaris, Linux, and Mac OS X.
• 40. Processor Affinity
• Processors contain cache memory, which speeds up repeated accesses to the same memory locations.
• If a process were to switch from one processor to another each time it got a time slice, the data in the cache (for that process) would have to be invalidated and re-loaded from main memory, obviating the benefit of the cache.
• Therefore SMP systems try to keep each process on the same processor, via processor affinity. Soft affinity means the system attempts to keep a process on the same processor but makes no guarantees. Linux and some other OSes support hard affinity, in which a process specifies that it is not to be moved between processors (see the sketch below).
• Main-memory architecture can also affect process affinity, if particular CPUs have faster access to memory on the same chip or board than to memory located elsewhere.
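As a concrete illustration of hard affinity, Linux exposes the sched_setaffinity system call, available in Python as os.sched_setaffinity (Linux-only); the choice of CPUs 0 and 1 below is arbitrary and assumes the machine has at least two CPUs.

```python
import os

# Hard affinity sketch (Linux-specific): pin the calling process to
# CPUs 0 and 1 so its cached data stays warm on those cores.

os.sched_setaffinity(0, {0, 1})    # pid 0 means the calling process
print(os.sched_getaffinity(0))     # -> {0, 1} on a machine with >= 2 CPUs
```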
• 41. Load Balancing
• On SMP systems it is important to keep the workload balanced among all processors to fully utilize the benefits of having more than one processor.
• Load balancing attempts to keep the workload evenly distributed across all processors in an SMP system.
• There are two general approaches: push migration and pull migration.
- With push migration, a specific task periodically checks the load on each processor and, if it finds an imbalance, evenly distributes the load by moving (pushing) processes from overloaded processors to idle or less busy ones.
- Pull migration occurs when an idle processor pulls a waiting task from a busy processor.
- Push and pull migration need not be mutually exclusive; they are in fact often used together in load-balancing systems.
• 42. Multicore Processors
• A recent trend is to place multiple processor cores on a single physical chip; the cores appear to the system as separate processors, and the result is a multicore processor.
• Each core maintains its own architectural state and thus appears to the operating system to be a separate physical processor.
• SMP systems that use multicore processors are faster and consume less power than systems in which each processor has its own physical chip.
• Compute cycles can be blocked by the time needed to access memory whenever the needed data is not already in the cache (a cache miss). As much as half of all CPU cycles can be lost to such memory stalls.
• 43.
• To remedy this situation, many recent hardware designs implement multithreaded processor cores, in which two (or more) hardware threads are assigned to each core. That way, if one thread stalls while waiting for memory, the core can switch to another thread.
• By assigning multiple hardware threads to a single core, memory stalls can be hidden (or reduced) by running one thread on the core while another thread waits for memory.
• A dual-threaded dual-core system presents four logical processors to the operating system. The UltraSPARC T1 CPU has 8 cores per chip and 4 hardware threads per core, for a total of 32 logical processors per chip.
• 44. There are two ways to multithread a processor:
• Coarse-grained multithreading switches between threads only when one thread blocks, say on a memory read. Context switching is similar to process switching, with considerable overhead.
• Fine-grained (interleaved) multithreading switches between threads at a much finer granularity, say at the boundary of an instruction cycle. The architecture includes hardware support for thread switching, so the overhead is relatively minor.
• Note that for a multithreaded multicore system there are two levels of scheduling:
- The operating system schedules which kernel threads to run on which logical processors, and when to make context switches, using the algorithms described above.
- At a lower level, the hardware schedules the logical processors on each physical core, using some other algorithm.
• 45. Real-Time CPU Scheduling
• Real-time systems are those in which the time at which tasks complete is crucial to their performance.
• Soft real-time systems provide no guarantee as to when a critical real-time process will be scheduled; they guarantee only that the process will be given preference over noncritical processes. Their performance degrades if their timing needs cannot be met. Example: streaming video.
• Hard real-time systems have stricter requirements: a task must be serviced by its deadline, and service after the deadline has expired is the same as no service at all. They fail totally if their timing needs cannot be met. Examples: assembly-line robotics, automobile air-bag deployment.
• 46. Minimizing Latency
• A real-time system is event-driven in nature. When an event occurs, the system must respond to and service it as quickly as possible.
• Event latency is the time between the occurrence of a triggering event and the (completion of the) system's response to the event. Different events usually have different latency requirements.
• Two types of latency affect the performance of real-time systems:
1. Interrupt latency
2. Dispatch latency
• 47. Interrupt latency refers to the period of time from the arrival of an interrupt at the CPU to the start of the routine that services the interrupt.
• It is crucial for real-time operating systems to minimize interrupt latency, to ensure that real-time tasks receive immediate attention.
• Indeed, for hard real-time systems, interrupt latency must not simply be minimized; it must be bounded to meet the strict requirements of these systems.
• 48.
• The amount of time required for the scheduling dispatcher to stop one process and start another is known as dispatch latency.
• Providing real-time tasks with immediate access to the CPU requires that real-time operating systems minimize this latency as well. The most effective technique for keeping dispatch latency low is to provide preemptive kernels.
• The conflict phase of dispatch latency has two components:
1. Preemption of any process running in the kernel
2. Release by low-priority processes of resources needed by a high-priority process
• 49. Priority-Based Scheduling
• The scheduler for a real-time operating system must support a priority-based algorithm with preemption.
• Hard real-time systems must guarantee that real-time tasks are serviced in accordance with their deadline requirements, and making such guarantees requires additional scheduling features.
• Hard real-time systems are often characterized by tasks that must run at regular periodic intervals, each having a period p, a constant time t required to execute (its CPU burst), and a deadline d after the beginning of each period by which the task must be completed.
• In all cases, 0 <= t <= d <= p.
• 51.
• Using a technique known as an admission-control algorithm, each task must specify its needs at the time it attempts to launch.
• The scheduler then does one of two things: it either admits the task, guaranteeing that the task will complete on time, or rejects the request as impossible if it cannot guarantee that the task will be serviced by its deadline. A minimal sketch follows.
• Deciding the execution order of the admitted real-time tasks then depends on the priority of each task.
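A minimal sketch of such an admission test, assuming each periodic task is described by a burst t and a period p, and that admission is granted while total utilization (the sum of t_i / p_i) stays within a chosen bound; the names and the bound of 1.0 are illustrative.

```python
# Hedged admission-control sketch for periodic real-time tasks.

def admit(tasks, new_task, bound=1.0):
    """tasks, new_task: (burst t, period p) pairs. Returns True if admitted."""
    utilization = sum(t / p for t, p in tasks + [new_task])
    return utilization <= bound

tasks = [(20, 50)]                 # one existing task using 40% of the CPU
print(admit(tasks, (35, 100)))     # 0.4 + 0.35 = 0.75 <= 1.0 -> admitted
print(admit(tasks, (70, 100)))     # 0.4 + 0.70 = 1.10 > 1.0  -> rejected
```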
• 52. Priority assignment for real-time tasks:
• Fixed priority:
- Rate-monotonic (RM): the smaller the period, the higher the priority.
- Deadline-monotonic (DM): the smaller the deadline, the higher the priority.
• Dynamic priority:
- Earliest-deadline-first (EDF): the task with the earliest deadline runs first.
• 53. Rate-Monotonic Scheduling
• The rate-monotonic scheduling algorithm schedules periodic tasks using a static priority policy with preemption: if a lower-priority process is running and a higher-priority process becomes available to run, it preempts the lower-priority process.
• Upon entering the system, each periodic task is assigned a priority inversely based on its period: the shorter the period, the higher the priority; the longer the period, the lower the priority. The rationale behind this policy is to assign higher priority to tasks that require the CPU more often.
• EXAMPLE: Consider two processes P1 and P2 with periods p1 = 50 and p2 = 100 and processing times t1 = 20 and t2 = 35. The deadline for each process requires that it complete its CPU burst by the start of its next period.
• The CPU utilization is 20/50 = 0.4 for P1 and 35/100 = 0.35 for P2, or 0.75 (75%) overall.
• 54.
• Consider first what happens if the task with the longer period is given the higher priority: P2 starts execution first and completes at time 35. Only then does P1 start; it completes its CPU burst at time 55, missing its deadline of 50. If P2 is allowed to go first, P1 cannot complete before its deadline.
• On the other hand, if P1 is given the higher priority, it goes first, and P2 starts after P1 completes its burst. At time 50, when the next period for P1 starts, P2 has completed only 30 of its 35 time units and is preempted by P1. At time 70, P1 completes the task for its second period, and P2 is then allowed to complete its remaining 5 time units. Both processes complete by time 75, and the CPU is idle for 25 time units before the pattern repeats.
• 55.
• Rate-monotonic scheduling is considered optimal among algorithms that use static priorities: any set of processes that cannot be scheduled with this algorithm cannot be scheduled with any other static-priority algorithm either. There are, however, some task sets that cannot be scheduled with static priorities at all.
• For example, suppose p1 = 50, t1 = 25, p2 = 80, t2 = 35, and the deadlines match the periods. Overall CPU utilization is 25/50 = 0.5 for P1 plus 35/80 = 0.44 for P2, or 0.94 (94%) overall, so it seems it should be possible to schedule the processes. With rate-monotonic scheduling, P1 goes first and completes its first burst at time 25.
• P2 goes next and completes 25 of its 35 time units before being preempted by P1 at time 50. P1 completes its second burst at time 75, and then P2 completes its last 10 time units at time 85, missing its deadline of 80 by 5 time units.
• 56.
• The worst-case CPU utilization for scheduling N processes under this algorithm is N(2^(1/N) - 1), which is 100% for a single process, about 83% for two processes, and falls toward ln 2 (about 69%) as N approaches infinity. Note that in the example above, the combined utilization of 94% exceeds the two-process bound of about 83%, so schedulability is not guaranteed (see the check below).
• Figure: cases of fixed-priority scheduling with two tasks, T1 = 50, C1 = 25, T2 = 100, C2 = 40.
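The bound can be computed directly. This snippet evaluates N(2^(1/N) - 1) for a few values of N and relates it to the failing example above; the function name rm_bound is illustrative.

```python
# Liu-Layland bound for rate-monotonic scheduling: N periodic tasks are
# guaranteed schedulable if their total utilization <= N * (2**(1/N) - 1).

def rm_bound(n):
    return n * (2 ** (1 / n) - 1)

print(rm_bound(1))    # 1.0    (100%)
print(rm_bound(2))    # 0.828  (about 83%)
print(rm_bound(10))   # 0.718  (approaching ln 2 = 0.693 as N grows)

# The failing example above: 25/50 + 35/80 = 0.9375 > rm_bound(2),
# so the guarantee does not apply (and the set indeed misses a deadline).
print(25/50 + 35/80 <= rm_bound(2))   # -> False
```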
• 57. Earliest-Deadline-First Scheduling
• EDF scheduling dynamically assigns priorities according to deadline: the earlier the deadline, the higher the priority; the later the deadline, the lower the priority.
• Under the EDF policy, when a process becomes runnable it must announce its deadline requirements to the system.
• EDF has been proven to be an optimal uniprocessor scheduling algorithm: if a set of tasks is not schedulable under EDF, no other scheduling algorithm can feasibly schedule it.
• For EDF, consider the example above, where process P1 has a period of p1 = 50 and a CPU burst of t1 = 25, and P2 has p2 = 80 and t2 = 35.
• 58. If EDF is applied to the example above:
• At time 0, P1 has the earliest deadline and hence the highest priority, so it goes first, followed by P2 at time 25 when P1 completes its first burst.
• At time 50, process P1 begins its second period, but since P2 has a deadline of 80 and P1's new deadline is not until 100, P2 is allowed to stay on the CPU and complete its burst, which it does at time 60.
• P1 then starts its second burst, which it completes at time 85. P2 began its second period at time 80, but since P1 had the earlier deadline, P2 did not preempt P1.
• P2 starts its second burst at time 85 and continues until time 100, at which point P1 starts its third period. P1 now has a deadline of 150 and P2 has a deadline of 160, so P1 preempts P2.
• P1 completes its third burst at time 125, at which point P2 resumes, completing its second burst at time 145. The CPU then sits idle for 5 time units, until P1 starts its next period at 150 and P2 at 160. A small simulation of this schedule follows.
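A compact simulation of this EDF schedule, under the assumptions that each task's deadline equals the end of its current period and that time advances in one-unit steps; the function name edf and the trace format are illustrative.

```python
# Illustrative EDF sketch for periodic tasks: at every time unit, run the
# ready job whose absolute deadline is earliest (deadline = end of period).

def edf(tasks, horizon):
    """tasks: list of (name, period p, burst t). Returns a schedule trace."""
    jobs = {name: {"p": p, "t": t, "left": t, "deadline": p}
            for name, p, t in tasks}
    trace = []
    for now in range(horizon):
        for j in jobs.values():
            if now > 0 and now % j["p"] == 0:        # new period: release next job
                j["left"], j["deadline"] = j["t"], now + j["p"]
        ready = [n for n, j in jobs.items() if j["left"] > 0]
        if ready:
            run = min(ready, key=lambda n: jobs[n]["deadline"])
            jobs[run]["left"] -= 1
            trace.append(run)
        else:
            trace.append(".")                        # CPU idle
    return "".join(trace)

# P1: period 50, burst 25; P2: period 80, burst 35. The trace reproduces
# the walkthrough: P2 finishes at 60, 145; the CPU idles from 145 to 150.
print(edf([("1", 50, 25), ("2", 80, 35)], 150))
```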
• 59. Unlike the rate-monotonic algorithm, EDF scheduling does not require that processes be periodic, nor must a process require a constant amount of CPU time per burst. The only requirement is that a process announce its deadline to the scheduler when it becomes runnable. The appeal of EDF scheduling is that it is theoretically optimal: in theory it can schedule processes so that every process meets its deadline while CPU utilization reaches 100 percent. In practice, however, this level of utilization is impossible to achieve, owing to the cost of context switching between processes and of interrupt handling.
• 61. EXAMPLE: In the schedule below, both A1 and B1 arrive at time 0. Since A1 has the earliest deadline, it is scheduled first; when A1 completes, B1 is given the processor. At time 20, A2 arrives. Because A2 has an earlier deadline than B1, B1 is interrupted so that A2 can execute to completion; B1 then resumes at time 30. At time 40, A3 arrives, but B1 has an earlier deadline and is allowed to execute to completion, at time 45. A3 is then given the processor and finishes at time 55.
• 63. Proportional Share Scheduling
• Proportional share scheduling divides the total amount of processor time into T equal shares, and each process must request a certain number of shares when it tries to start.
• Assume that a total of T = 100 shares is to be divided among three processes, A, B, and C, where A is assigned 50 shares, B 15 shares, and C 20 shares. This scheme ensures that A will receive 50 percent of total processor time, B 15 percent, and C 20 percent.
• Proportional share scheduling works with an admission-control policy: a task is not started if the scheduler cannot guarantee the shares that the task says it needs.
• Since A, B, and C together hold 85 of the 100 shares, only 15 remain. If a new process D requested 30 shares, the admission controller would deny D entry into the system, as sketched below.
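A tiny sketch of the share-based admission check, using the numbers from this example; the function and variable names are illustrative.

```python
# Proportional-share admission control (sketch): track how many of the
# T total shares are already allocated, and reject requests that overflow.

TOTAL_SHARES = 100
allocated = {"A": 50, "B": 15, "C": 20}          # 85 shares already in use

def request_shares(name, shares):
    """Admit the process only if enough free shares remain."""
    if sum(allocated.values()) + shares > TOTAL_SHARES:
        return False                              # admission controller says no
    allocated[name] = shares
    return True

print(request_shares("D", 30))   # False: only 15 shares remain
print(request_shares("D", 10))   # True: 10 <= 15, so D is admitted
```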