OS_Process_Management_Chap4.pptx
Silberschatz, Galvin and Gagne ©2018
Operating System Concepts – 10th Edition
Outline
• What is a Process?
• What is Process Management?
• Process Architecture
• Process States
• Process Control Block (PCB)
What is Process Management?
Process management involves tasks such as the creation, scheduling, and termination of processes, and the handling of deadlocks.
A process is a program under execution, and managing processes is an important part of modern operating systems.
The OS must allocate resources that enable processes to share and exchange information, protect each process's resources from other processes, and allow processes to synchronize with one another.
It is the job of the OS to manage all the running processes of the system, handling operations such as process scheduling and resource allocation.
Process Architecture
• Stack: stores temporary data such as function parameters, return addresses, and local variables.
• Heap: memory that is dynamically allocated to the process during its run time.
• Data: contains the global and static variables.
• Text: includes the current activity, represented by the value of the program counter and the contents of the processor's registers, along with the program code itself.
Process Control Blocks
• PCB stands for Process Control Block.
• It is a data structure maintained by the operating system for every process.
• Each PCB is identified by an integer process ID (PID).
• It stores all the information required to keep track of a running process.
• It is also responsible for storing the contents of the processor registers, which are saved when the process leaves the running state and restored when it returns to it.
• The OS updates the information in the PCB as soon as the process makes a state transition.
Every process is represented in the operating system by a process control block, which is also called a task control block.
Process State
As a process executes, it changes state
• New: The process is being created
• Running: Instructions are being executed
• Waiting: The process is waiting for some event to occur
• Ready: The process is waiting to be assigned to a processor
• Terminated: The process has finished execution
Threads
So far, a process has had a single thread of execution.
Consider having multiple program counters per process:
• Multiple locations can execute at once
Multiple threads of control -> threads
The PCB must then have storage for thread details, including multiple program counters.
What is a Thread?
A thread is a path of execution within a process. A process can contain
multiple threads.
Why Multithreading?
A thread is also known as a lightweight process. The idea is to achieve
parallelism by dividing a process into multiple threads. For example, in a
browser, multiple tabs can be different threads. MS Word uses multiple
threads: one thread to format the text, another thread to process inputs, etc.
Multithreading has further advantages as well.
Process vs Thread?
The primary difference is that threads within the same process run in a shared
memory space, while processes run in separate memory spaces.
Unlike processes, threads are not independent of one another: each thread shares its code section, data section, and OS resources (such as open files and signals) with the other threads of its process. Like a process, however, a thread has its own program counter (PC), register set, and stack space.
Process Scheduling
Process scheduler selects among available processes
for next execution on CPU core
Goal -- Maximize CPU use, quickly switch processes onto
CPU core
Maintains scheduling queues of processes
• Ready queue – set of all processes residing in main
memory, ready and waiting to execute
• Wait queues – set of processes waiting for an event (e.g., I/O)
• Processes migrate among the various queues
Context Switch
When CPU switches to another process, the system must
save the state of the old process and load the saved state
for the new process via a context switch
Context of a process represented in the PCB
Context-switch time is pure overhead; the system does no
useful work while switching
• The more complex the OS and the PCB, the longer the context switch
Context-switch time depends on hardware support
• Some hardware provides multiple sets of registers per CPU, so that multiple contexts can be loaded at once
Operations on Processes
System must provide mechanisms for:
• Process creation
• Process termination
Process Creation
A parent process creates child processes, which, in turn, create other processes, forming a tree of processes.
Generally, a process is identified and managed via a process identifier (pid).
Resource sharing options
• Parent and children share all resources
• Children share subset of parent’s resources
• Parent and child share no resources
Execution options
• Parent and children execute concurrently
• Parent waits until children terminate
Process Termination
Process executes last statement and then asks the operating
system to delete it using the exit() system call.
• Returns status data from child to parent (via wait())
• Process’ resources are deallocated by operating system
Parent may terminate the execution of children processes using
the abort() system call. Some reasons for doing so:
• Child has exceeded allocated resources
• Task assigned to child is no longer required
• The parent is exiting, and the operating system does not allow a child to continue if its parent terminates
Process Termination (cont.)
Some operating systems do not allow a child to exist if its parent has terminated. In such systems, if a process terminates, then all its children must also be terminated.
• Cascading termination: all children, grandchildren, etc., are terminated.
• The termination is initiated by the operating system.
• The termination is initiated by the operating system.
The parent process may wait for the termination of a child process by using the wait() system call. The call returns status information and the pid of the terminated process:
pid = wait(&status);
If no parent is waiting (it has not yet invoked wait()), the terminated process is a zombie.
If the parent terminated without invoking wait(), the process is an orphan.
Interprocess Communication
Processes within a system may be independent or cooperating
Cooperating process can affect or be affected by other processes,
including sharing data
Reasons for cooperating processes:
• Information sharing
• Computation speedup
Cooperating processes need interprocess communication (IPC)
Two models of IPC
• Shared memory
• Message passing
Communications Models
(a) Shared memory. (b) Message passing.
Process Synchronization
On the basis of synchronization, processes are categorized as one of the following two types:
• Independent process: execution of one process does not affect the execution of other processes.
• Cooperative process: execution of one process affects the execution of other processes.
The process synchronization problem arises in the case of cooperative processes, because cooperative processes share resources.
Race Condition
A race condition arises when more than one process executes the same code or accesses the same memory or shared variable concurrently. The final value of the shared variable then depends on the order in which the processes happen to run, so the output may be wrong: the processes are effectively racing, and whichever runs last determines the result.
Example: two withdrawals of 300 involving accounts A and B.
A single transfer of 300 from A to B, executed consistently:
A = 500; B = 500
WITHDRAW = 300
B = 500 + 300 = 800
A = 500 - 300 = 200
If a second withdrawal of 300 then runs against a partially updated state, the balances become inconsistent:
A = 500; B = 200 (a stale, inconsistent view of the accounts)
WITHDRAW = 300
B = 800 + 300 = 1100
A = 200 - 300 = -100
The shared balances end up wrong; this is the race condition at work.
Producer-Consumer Problem
Paradigm for cooperating processes:
• producer process produces information that is consumed
by a consumer process
Two variations:
• unbounded-buffer places no practical limit on the size of
the buffer:
Producer never waits
Consumer waits if there is no buffer to consume
• bounded-buffer assumes that there is a fixed buffer size
Producer must wait if all buffers are full
Consumer waits if there is no buffer to consume
Critical Section Problem
A critical section is a code segment that can be accessed by only one process at a time. It contains shared variables that must be synchronized to maintain the consistency of the data.
Any solution to the critical section problem must satisfy three
requirements:
Mutual Exclusion : If a process is executing in its critical section,
then no other process is allowed to execute in the critical section.
Progress : If no process is executing in the critical section and other
processes are waiting outside the critical section, then only those
processes that are not executing in their remainder section can
participate in deciding which will enter in the critical section next,
and the selection can not be postponed indefinitely.
Bounded Waiting : A bound must exist on the number of times
that other processes are allowed to enter their critical sections after
a process has made a request to enter its critical section and before
that request is granted.
Peterson’s Solution
Peterson's Solution is a classic software-based solution to the critical section problem for two processes.
In Peterson's solution, we have two shared variables:
• boolean flag[i]: initialized to FALSE, meaning that initially no process is interested in entering the critical section; flag[i] = TRUE means process i wants to enter.
• int turn: indicates whose turn it is to enter the critical section.
Semaphores
A semaphore, proposed by Dijkstra in 1965, is a significant technique for managing concurrent processes. It is simply an integer variable, shared between threads, used to solve the critical section problem by means of two atomic operations, wait and signal, that are used for process synchronization.
Binary semaphore:
Also known as a mutex lock, it can have only two values, 0 and 1, and its value is initialized to 1. It is used to implement the solution of critical section problems with multiple processes.
Counting semaphore:
Its value can range over an unrestricted domain. It is used to control access to a resource that has multiple instances.
The definitions of wait and signal are as follows:
• Wait
The wait operation decrements the value of its argument S if S is positive. If S is zero or negative, the process busy-waits until S becomes positive and only then decrements it.
wait(S) {
    while (S <= 0)
        ;   // busy wait
    S--;
}
• Signal
The signal operation increments the value of its argument S.
signal(S) {
    S++;
}
Advantages of Semaphores
Some of the advantages of semaphores are as follows:
• Semaphores allow only one process into the critical section. They follow the mutual exclusion principle strictly and are more efficient than some other methods of synchronization.
• When semaphores are implemented with blocking rather than busy waiting, no processor time is wasted repeatedly checking whether a condition is fulfilled before a process may access the critical section.
• Semaphores are implemented in the machine-independent code of the microkernel, so they are machine independent.
Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows:
• Semaphores are complicated, and the wait and signal operations must be performed in the correct order to prevent deadlocks.
Summary
Process synchronization means coordinating the execution of processes so that no two processes access the same shared resources and data at the same time.
A program has four sections: the entry section, critical section, exit section, and remainder section.
A segment of code that only a single process can access at a particular point in time is known as the critical section.