UNIT II
A process is the unit of work in most systems. Such a system consists of a collection
of processes. One impediment to any discussion of operating systems is the question of
what to call all the CPU activities.
The objective of multiprogramming is to have some process running at all times,
so as to maximize CPU utilization. Processes execute concurrently in the system, and they
must be created and deleted dynamically. A process executing concurrently in the operating
system may be either an independent process or a cooperating process.
This chapter introduces many concepts associated with multithreaded computer
systems. It also discusses how software and hardware features can make programs easier
to write and improve system efficiency through hardware synchronization, and it describes
how to prevent the occurrence of deadlock.
PROCESS MANAGEMENT
2.1 PROCESS CONCEPT
A process is a program in execution. A process is more than the program code,
which is sometimes known as the text section. It also includes the program counter, which
represents the current activity, and the contents of the processor's registers.
A program is a passive entity, such as the contents of a file stored on disk,
whereas a process is an active entity, with a program counter specifying the next
instruction to be executed.
2.1.1 Process States
As a process executes, it changes state. The state of a process is defined in part by
the current activity of that process. Each process may be in one of the following states.
1. NEW: The process is being created.
2. RUNNING: Instructions are being executed.
3. WAITING: The process is waiting for some event to occur (such as an I/O
completion or the reception of a signal).
4. READY: The process is waiting to be assigned to a processor for execution.
5. TERMINATED: The process has finished execution.
2.1.2 Process Control Block
Each process is represented by a Process Control Block (PCB), also called a Task
Control Block. It contains many pieces of information associated with a specific process:
1. Process state: The state may be new, ready, running, waiting, halted, and so on
Figure 2.1 Process state
2. Program counter: The counter indicates the address of the next instruction to be
executed for this process
3. CPU registers: These include accumulators, index registers, stack pointers, general-
purpose registers, and any condition-code information
4. CPU-scheduling information: It includes the process priority, pointers to scheduling
queues and any other scheduling parameters
5. Memory-management information: It includes the values of the base and limit
registers and the page tables or segment tables.
6. Accounting information: It includes the amount of CPU and real time used, time
limits, account numbers, job or process numbers and so on.
7. I/O status information: It includes the list of I/O devices allocated to this process, a
list of open files and so on.
(The figure shows the fields of a PCB: pointer, process state, process number, program
counter, registers, memory limits, and list of open files.)
Figure 2.2 Process control block
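The PCB fields above can be sketched as a simple record. The following Python structure is only an illustration of the idea; the field names and defaults are assumptions, not any particular operating system's layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A toy Process Control Block holding the fields listed above."""
    pid: int                                         # process number
    state: str = "new"                               # new/ready/running/waiting/terminated
    program_counter: int = 0                         # address of next instruction
    registers: dict = field(default_factory=dict)    # saved CPU register contents
    memory_limits: tuple = (0, 0)                    # base and limit register values
    open_files: list = field(default_factory=list)   # I/O status information
```

On a context switch, the dispatcher would copy the running process's register contents into `registers` and restore them when the process is scheduled again.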
(The figure shows two processes, P0 and P1, alternating between executing and idle. On
an interrupt or system call, the CPU saves state into PCB0 and reloads state from PCB1;
later it saves state into PCB1 and reloads state from PCB0.)
Figure 2.3 CPU Switch from process to process
If a process is ready, it is executed. If an interrupt occurs, the status of the running
process is saved into PCB0 and the state of the next process is reloaded from PCB1, so
that process can execute. If another interrupt occurs, the CPU saves the state into PCB1
and reloads the state from PCB0 (the first process), which then resumes execution. In
this way the CPU switches from one process to another.
2.2 PROCESS SCHEDULING
The main objective of multiprogramming is to have some process running at all
times to maximize CPU utilization.
The main objective of time-sharing is to switch the CPU among different
processes so frequently that users can interact with each program while it is running.
A system with one CPU can have only one running process at any time. As user
jobs enter the system, they are put on a queue called the job pool, which consists of all
jobs in the system.
The processes that reside in main memory that are ready to be executed are put in
the “Ready Queue”. A queue has a “header” node which contains pointers to the first and
the last PCBs in the list.
There are also other queues in the system, such as device queues: lists of the
processes waiting for a particular device. Each device has its own queue.
A general queuing diagram is given below:
Figure 2.8 Queuing diagram for Process control Block
Device queues exist for devices such as magnetic tape drives, disks, and terminals.
Each queue contains a header node with pointers to its first and last links; the list itself
is simply a set of process control blocks linked together.
The list of processes waiting for a particular I/O device is called a device queue.
Each device has its own device queue.
A common representation of process scheduling is a queuing diagram.
Two types of queues are present:
1. The ready queue
2. A set of device queues
Figure 2.9 Queuing diagram for processes (the diagram shows the processes, the flow of
processes between queues, and the queues themselves)
A new process is initially put in the ready queue, where it waits until it is selected
for execution. Once the process is dispatched, one of the following events could occur:
1. The process could issue an I/O request and be placed in an I/O queue.
2. The process could create a new subprocess and wait for its termination.
3. The process could be removed from the CPU as a result of an interrupt and be
put back in the ready queue.
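The queuing behaviour above can be sketched with ordinary FIFO queues. This is an illustrative model only; the names `ready_queue`, `io_queue`, and the helper functions are assumptions, not a real scheduler's API:

```python
from collections import deque

ready_queue = deque()   # processes ready to run
io_queue = deque()      # processes waiting for an I/O device

def admit(pid):
    """A new process is initially put in the ready queue."""
    ready_queue.append(pid)

def dispatch():
    """Select the process at the head of the ready queue for execution."""
    return ready_queue.popleft() if ready_queue else None

def issue_io(pid):
    """A running process that issues an I/O request joins a device queue."""
    io_queue.append(pid)

def io_complete():
    """On I/O completion, the process goes back to the ready queue."""
    ready_queue.append(io_queue.popleft())
```

Each real device would have its own queue; a single `io_queue` is used here just to keep the sketch short.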
Schedulers
The selection of processes from the queues is managed by schedulers. There are two
kinds:
• Long-term scheduler selects processes from a mass-storage (i.e., hard disk)
device where they are spooled and loads them into memory for execution. Also
referred to as job scheduler, it selects a job to run and creates a new process.
• Short-term scheduler (CPU scheduler) is responsible for scheduling processes
that are already loaded in memory and ready to execute.
The long-term scheduler brings the processes into memory and hands them over to
the CPU scheduler. It controls the number of processes that the CPU scheduler handles
and thus maintains the degree of multiprogramming, which corresponds to the number of
processes in memory.
The long-term scheduler has to make a careful selection among I/O-bound and
CPU-bound processes. I/O-bound processes spend more of their time doing I/O than
computation, while CPU-bound processes spend more time on computation than I/O.
A long-term scheduler should pick a relatively good mix of I/O- and CPU-bound
processes so that the system resources are better utilized. If all processes are I/O-bound,
the ready queue will almost always be empty. If all processes are CPU-bound, I/O queue
will be almost always empty. A balanced combination should be selected for system
efficiency.
Context Switching
Switching the CPU from one process to another requires saving the state of the
running process into its PCB and loading the saved state of the next process from its
PCB. This is known as a context switch.
Switching involves copying registers, local and global data, file buffers, and other
information to the PCB.
2.3 OPERATIONS ON PROCESSES
Major operations on processes are creation and termination.
Process Creation
The long-term scheduler creates a job’s first process as the job is selected for
execution from the job pool.
A process may create new processes via various system calls. The created
processes are called children processes, while the creating process is referred to as the
parent process. On UNIX, the system call is fork(). Creating a process involves creating
a PCB for it and scheduling it for execution.
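On a POSIX system the create-and-wait cycle can be observed directly. The sketch below uses Python's bindings to fork() and wait(); `spawn_child` is a made-up helper name for illustration, not a standard call:

```python
import os

def spawn_child(exit_code):
    """Fork a child that exits with the given code; the parent waits
    for it and returns the child's exit status."""
    pid = os.fork()
    if pid == 0:                       # child: a new process with its own PID
        os._exit(exit_code)            # terminate via the exit system call
    _, status = os.waitpid(pid, 0)     # parent blocks until the child ends
    return os.waitstatus_to_exitcode(status)
```

The parent here chooses to wait for its child; as the text notes below, it could instead continue running concurrently with it.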
Process Execution
Depending on OS policy, a newly created process may inherit its resources from
its parent or it may acquire its own resources from the OS. When a child process is
restricted to the parent’s resources, new processes do not overload the system. At the
same time some initialization data may be passed from the parent to the child process. In
Unix OS, each process has a different process identifier (a process number as referred to
above) and each child process executes in the address space of the parent. This eases
communication between parent and child processes.
When a new process (child) is created, either the parent runs concurrently with its
child or parent waits until the child terminates.
Process Termination
Having completed its execution and sent its output to its parent, a process
terminates by signaling the OS that it’s finished. On Unix, this is accomplished via the
exit() system call. The OS de-allocates memory, reclaims resources such as I/O buffers,
open files that were allocated to that process.
On some systems, when a parent process terminates, the OS also terminates all of
its children processes. Likewise, a parent may terminate its child process if:
• The child has exceeded its usage of the resources it has been allocated; or
• The task assigned to the child is no longer required; or
• The OS does not allow a child to continue after its parent terminates.
The concept of terminating the children processes of a terminated parent is known as
cascading termination.
2.4 COOPERATING PROCESSES
Processes that are running simultaneously may be either independent or
cooperating. An independent process is not affected by, nor can it affect, the other
processes executing in the system. A cooperating process can affect the state of other
processes by sharing memory, sending signals, and so on.
Advantages of process cooperation
1. Information sharing Accessing some information sources like a shared database
by multiple processes simultaneously may be essential.
2. Computation speed-up Computer systems with multiple CPUs may allow
decomposition of the task into subtasks and then the parallel execution of them
provides speedup.
3. Modularity The system can be constructed in a modular fashion.
4. Convenience A user may be willing to use different system tools like editing,
printing, compiling, etc. in parallel.
Example
Consider the producer-consumer problem, which is a typical example of
cooperating processes and demonstrates the classical inter-process communication
problem.
The idea is that an operating system may have many processes that need to
communicate. Imagine a program that prints output somewhere internally which is later
consumed by a printer driver.
In the case of unbounded-buffer producer-consumer problem, there is no restriction
on the size of the buffer. On the other hand, bounded-buffer producer-consumer problem
assumes that there is a fixed buffer size. A producer process produces information that is
consumed by a consumer process. Producer places its production into a buffer and
consumer takes its consumption from the buffer. When buffer is full, producer must wait
until consumer consumes at least an item; likewise, when buffer is empty, consumer must
wait until producer places at least an item into the buffer. Consider a shared-memory
solution to the bounded-buffer problem.
Producer code:

while (1)
{
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing: buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}

Consumer code:

while (1)
{
    while (in == out)
        ; /* do nothing: buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}

The shared buffer is an array that is used circularly, with in indexing the next free
position and out indexing the first full position.
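The same circular-buffer logic can be run directly. This Python sketch mirrors the pseudocode above, except that the helpers return a status instead of busy-waiting (`try_produce` and `try_consume` are illustrative names):

```python
BUFFER_SIZE = 5
buffer = [None] * BUFFER_SIZE
in_ptr = 0    # next free slot (the "in" index above)
out_ptr = 0   # first full slot (the "out" index above)

def try_produce(item):
    """Place an item in the buffer; fail if ((in+1) % n) == out (full)."""
    global in_ptr
    if (in_ptr + 1) % BUFFER_SIZE == out_ptr:
        return False
    buffer[in_ptr] = item
    in_ptr = (in_ptr + 1) % BUFFER_SIZE
    return True

def try_consume():
    """Remove an item; return None if in == out (empty)."""
    global out_ptr
    if in_ptr == out_ptr:
        return None
    item = buffer[out_ptr]
    out_ptr = (out_ptr + 1) % BUFFER_SIZE
    return item
```

Note that this scheme can hold at most BUFFER_SIZE - 1 items, because in == out is reserved to mean "empty".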
2.5 INTERPROCESS COMMUNICATION
Cooperating processes can communicate in a shared-memory environment, which
requires a common buffer pool for sharing. (For the shared circular buffer above, the
buffer is empty when in == out and full when (in + 1) mod n == out.) Cooperating
processes can also communicate via an interprocess communication (IPC) facility. IPC
provides a mechanism that allows processes to communicate and to synchronize their
actions without sharing the same address space.
IPC includes four aspects. They are
1. Message-passing system
2. Naming
3. Synchronization
4. Buffering
2.5.1 Message-passing system
The function of a message-passing system is to allow processes to
communicate with one another without the need to resort to shared data.
Communication is established with the help of a message-passing mechanism. IPC
provides two operations:
1. send
2. receive
Messages sent by a process can be of fixed or variable size. If processes P
and Q want to communicate, they must send messages to and receive messages from
each other over a communication link.
A link can be implemented in several ways:
1. Direct or indirect communication
2. Symmetric or asymmetric communication
3. Automatic or explicit buffering
4. Send by copy or send by reference
5. Fixed-sized or variable-sized messages
2.5.2 Naming
Processes that want to communicate with each other need a way to refer to
each other. They can use either direct or indirect communication.
1 Direct Communication
Each process that wants to communicate must explicitly name the recipient or
sender of the communication. Here send and receive are defined as
send(P, message) - send a message to process P.
receive(Q, message) - receive a message from process Q.
A communication link in this scheme has the following properties.
1. A link is established automatically between every pair of processes that want to
communicate; the processes need to know only each other's identity.
2. A link is associated with exactly two processes.
3. Exactly one link exists between each pair of processes.
2 Indirect Communication
The messages are sent to and received from mailboxes, or ports. A
mailbox can be viewed as an object into which messages can be placed by processes and
from which messages can be removed. Each mailbox has a unique identification, and two
processes can communicate only if they share a mailbox. The send and receive operations
are defined as
send(A, message) - Send a message to mailbox A.
receive(A, message) - Receive a message from mailbox A.
In this scheme a communication link has the following properties.
1. A link is established between a pair of processes only if both members of the pair
have a shared mailbox.
2. A link may be associated with more than two processes.
3. A number of different links may exist between each pair of communicating
processes, with each link corresponding to one mailbox.
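An indirect-communication mailbox can be sketched as a shared queue per mailbox identifier. The `send` and `receive` functions below are illustrative stand-ins for the primitives above, assuming a single shared address space for brevity:

```python
import queue

mailboxes = {}   # mailbox identifier -> queue of waiting messages

def send(mbox, message):
    """send(A, message): place the message in mailbox A."""
    mailboxes.setdefault(mbox, queue.Queue()).put(message)

def receive(mbox):
    """receive(A, message): remove a message from mailbox A (blocking)."""
    return mailboxes.setdefault(mbox, queue.Queue()).get()
```

Because any process holding the identifier "A" may call these functions, the link can be shared by more than two processes, matching property 2 above.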
2.5.3 Synchronization
Communication between processes takes place through calls to the send and
receive primitives. Message passing may be either blocking or non-blocking, also
known as synchronous and asynchronous.
1 Blocking send
The sending process is blocked until the message is received by the
receiving process or by the mailbox.
2 Non-blocking send
The sending process sends the message and resumes operation.
3 Blocking receive
The receiver blocks until a message is available.
4 Non-blocking receive
The receiver retrieves either a valid message or a null.
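The receive variants can be tried on a single queue. In this sketch `nonblocking_receive` returns None as the "null" message; the function names are illustrative:

```python
import queue

link = queue.Queue()   # the communication link between two processes

def blocking_receive():
    """Blocking receive: waits until a message is available."""
    return link.get()

def nonblocking_receive():
    """Non-blocking receive: returns a valid message or a null (None)."""
    try:
        return link.get_nowait()
    except queue.Empty:
        return None
```

A blocking send would similarly wait for space (or for a rendezvous), while a non-blocking send would return immediately after queueing the message.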
2.5.4 Buffering
Messages exchanged by communicating processes reside in a temporary
queue, which can be implemented in three ways.
1. Zero capacity: The queue has a maximum length of 0; thus the link
cannot have any messages waiting in it. In this case, the sender must
block until the recipient receives the message.
2. Bounded capacity: The queue has finite length n, so at most n messages
can reside in it. If the queue is not full when a new message is sent, the
message is placed in the queue and the sender can continue execution
without waiting. The link has a finite capacity, however; if it is full, the
sender must block until space is available in the queue.
3. Unbounded capacity: The queue has potentially infinite length; thus
any number of messages can wait in it, and the sender never blocks.
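Bounded capacity maps directly onto a fixed-size queue. Here `try_send` models a sender that refuses to block when the link is full; this is a sketch of the capacity rule, not a full message-passing API:

```python
import queue

link = queue.Queue(maxsize=2)   # bounded capacity: at most n = 2 messages

def try_send(message):
    """Queue the message if space exists; report failure if the link is full."""
    try:
        link.put_nowait(message)
        return True
    except queue.Full:
        return False   # a blocking sender would wait here until space frees up
```

Zero capacity would correspond to a rendezvous (sender and receiver meet directly), and unbounded capacity to a queue with no maxsize, where put never blocks.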
2.6 THREADS
Multithreading is a logical extension of multiprogramming. A traditional process
performs a single thread of execution.
Example:
If a process is running a word-processor program, a single thread of instructions is
being executed. This single thread of control allows the process to perform only one task
at a time: the user could not, for example, simultaneously type in characters and run the
spell checker.
Threads
A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU
utilization; it comprises a thread ID, a program counter, a register set, and a stack.
It shares with other threads belonging to the same process its code section, data section,
and other operating-system resources, such as open files and signals.
A traditional (or heavyweight) process has a single thread of control. If the process has
multiple threads of control, it can do more than one task at a time.
Figure 2.4 Single-threaded and multithreaded processes
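The sharing described above is easy to demonstrate: several threads in one process update the same variable, while each keeps its own stack. A minimal sketch using Python's threading module (the lock prevents lost updates on the shared data):

```python
import threading

counter = 0                 # data shared by all threads of the process
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # serialize access to the shared data
            counter += 1

# Four threads of control within one process, all touching one counter.
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the lock the final count could fall short of 4000, a first glimpse of the synchronization problems discussed later in this unit.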
Motivation
Many software packages that run on modern desktop PCs are multithreaded.
(i) An application is implemented as a separate process with several threads of control.
(ii) A web browser might have one thread display images or text while another thread
retrieves data from the network.
(iii) A word processor may have a thread for displaying graphics, another thread for
reading keystrokes from the user, and a third thread for performing spelling and grammar
checking in the background.
In certain situations a single application may be required to perform several
similar tasks. For example, a web server accepts client requests for web pages, images,
sound, and so forth.
A busy web server may have many (perhaps hundreds of) clients
concurrently accessing it. If the web server ran as a traditional single-threaded process,
it would be able to service only one client at a time, and the amount of time that a client
might have to wait for its request to be serviced could be enormous.
One solution is to have the server run as a single process that accepts
requests. When the server receives a request, it creates a separate process to service that
request.
Process creation is very heavyweight, however: if the new process will perform the
same tasks as the existing process, why incur all that overhead?
It is generally more efficient for one process that contains multiple threads to
serve the same purpose.
This approach would multithread the web-server process.
The server would create a separate thread that would listen for client requests;
when a request is made, rather than creating another process, it would create another
thread to service the request.
Threads also play an important role in remote procedure call (RPC) systems.
• RPCs allow inter-process communication by providing a communication
mechanism similar to ordinary function or procedure calls.
• Typically, RPC servers are multithreaded.
• When a server receives a message, it services the message using a separate thread.
This allows the server to service several concurrent requests.
Benefits
The benefits of multithreaded programming can be broken down into four major
categories:
1 Responsiveness Multithreading an interactive application may allow a program to
continue running even if part of it is blocked or is performing a lengthy operation,
thereby increasing responsiveness to the user. For instance, a multithreaded web browser
could still allow user interaction in one thread while an image is being loaded in another
thread.
2 Resource sharing By default, threads share the memory and the resources of the
process to which they belong. The benefit of code sharing is that it allows an application
to have several different threads of activity all within the same address space.
3 Economy Allocating memory and resources for process creation is costly.
Alternatively, because threads share resources of the process to which they belong, it is
more economical to create and context switch threads. It can be difficult to gauge
empirically the difference in overhead for creating and maintaining a process rather than
a thread, but in general it is much more time consuming to create and manage processes
than threads. In Solaris 2, creating a process is about 30 times slower than creating a
thread, and context switching is about five times slower.
4 Utilization of multiprocessor architectures The benefits of multithreading can be
greatly increased in a multiprocessor architecture, where each thread may be running in
parallel on a different processor. A single-threaded process can run only on one CPU, no
matter how many are available. Multithreading on a multi-CPU machine increases
concurrency. In a single-processor architecture, the CPU generally moves between
threads so quickly that it creates an illusion of parallelism, but in reality only one thread
is running at a time.
User and Kernel Threads
• Threads may be provided either at the user level, as user threads, or by the
kernel, as kernel threads.
• User threads are supported above the kernel and are implemented by a thread
library at the user level. The library provides support for thread creation,
scheduling, and management with no support from the kernel. Because the kernel
is unaware of user-level threads, all thread creation and scheduling are done in
user space without the need for kernel intervention. Therefore, user-level threads
are generally fast to create and manage; they have drawbacks, however. For
instance, if the kernel is single-threaded, then any user-level thread performing a
blocking system call will cause the entire process to block, even if other threads
are available to run within the application.
• User-thread libraries include POSIX Pthreads, Mach C-threads, and Solaris 2 UI-
threads.
• Kernel threads are supported directly by the operating system: The kernel
performs thread creation, scheduling, and management in kernel space.
• Because thread management is done by the operating system, kernel threads are
generally slower to create and manage than user threads.
• However, since the kernel is managing the threads, if a thread performs a
blocking system call, the kernel can schedule another thread in the application for
execution.
• Also, in a multiprocessor environment, the kernel can schedule threads on
different processors.
• Most contemporary operating systems, including Windows NT, Windows 2000,
Solaris 2, BeOS, and Tru64 UNIX (formerly Digital UNIX), support kernel
threads.
2.7 MULTITHREADING MODELS
Many systems provide support for both user and kernel threads,
resulting in different multithreading models. We look at three common types of threading
implementation.
2.7.1 Many to one Model
• The many-to-one model maps many user-level threads to one kernel thread.
• Thread management is done in user space, so it is efficient, but the entire process
will block if a thread makes a blocking system call.
• Also, because only one thread can access the kernel at a time, multiple threads are
unable to run in parallel on multiprocessors.
Figure 2.5 Many to one model
In addition, user-level thread libraries implemented on operating systems that do not
support kernel threads use the many-to-one model.
2.7.2 One to one Model
The one-to-one model maps each user thread to a kernel thread.
It provides more concurrency than the many-to-one model by allowing another thread to
run when a thread makes a blocking system call; it also allows multiple threads to run in
parallel on multiprocessors.
The only drawback to this model is that creating a user thread requires creating the
corresponding kernel thread.
Figure 2.6 One to one model
Because the overhead of creating kernel threads can burden the performance of an
application, most implementations of this model restrict the number of threads supported
by the system.
Windows NT, Windows 2000, and OS/2 implement the one-to-one model.
2.7.3 Many to many Model
• The many-to-many model multiplexes many user-level threads to a smaller or
equal number of kernel threads.
• The number of kernel threads may be specific to either a particular application or
a particular machine (an application may be allocated more kernel threads on a
multiprocessor than on a uniprocessor).
• Whereas the many-to-one model allows the developer to create as many user
threads as she wishes, true concurrency is not gained because the kernel can
schedule only one thread at a time.
• The one-to-one model allows for greater concurrency, but the developer has to be
careful not to create too many threads within an application (and in some
instances may be limited in the number of threads he can create).
• The many-to-many model suffers from neither of these shortcomings: Developers
can create as many user threads as necessary, and the corresponding kernel
threads can run in parallel on a multiprocessor.
Also, when a thread performs a blocking system call, the kernel can schedule another
thread for execution. Solaris 2, IRIX, HP-UX, and Tru64 UNIX support this model.
Figure 2.7 Many to many model
Threading Issues
Some of the issues to be considered with multithreaded programs are:
1 The fork and exec System Calls
• In a multithreaded program, the semantics of the fork and exec system calls
change.
• Some UNIX systems have chosen to use two versions of fork, one that duplicates
all threads and another that duplicates only the thread that invoked the fork
system call.
• If a thread invokes the exec system call, the program specified in the parameter to
exec will replace the entire process-including all threads and LWPs.
• Usage of the two versions of fork depends upon the application. If exec is called
immediately after forking, then duplicating all threads is unnecessary, as the
program specified in the parameters to exec will replace the process.
• In this instance, duplicating only the calling thread is appropriate. If, however, the
separate process does not call exec after forking, the separate process should
duplicate all threads.
2 Cancellation
Thread cancellation is the task of terminating a thread before it has completed.
For example, if multiple threads are concurrently searching through a database and one
thread returns the result, the remaining threads might be cancelled.
Another situation might occur when a user presses a button on a web browser
that stops a web page from loading any further. Often a web page is loaded in a separate
thread. When a user presses the stop button, the thread loading the page is cancelled.
A thread that is to be cancelled is often referred to as the target thread.
Cancellation of a target thread may occur in two different scenarios:
1. Asynchronous cancellation: One thread immediately terminates the target thread.
2. Deferred cancellation: The target thread can periodically check if it should terminate,
allowing the target thread an opportunity to terminate itself in an orderly fashion.
The difficulty with cancellation occurs in situations where resources have
been allocated to a cancelled thread or if a thread was cancelled while in the middle of
updating data it is sharing with other threads. This becomes especially troublesome with
asynchronous cancellation. The operating system will often reclaim system resources
from a cancelled thread, but often will not reclaim all resources.
Therefore, canceling a thread asynchronously may not free a necessary
system-wide resource. Alternatively, deferred cancellation works by one thread
indicating that a target thread is to be cancelled. However, cancellation will occur only
when the target thread checks to determine if it should be cancelled or not. This allows a
thread to check if it should be cancelled at a point when it can safely be cancelled.
Pthreads refers to such points as cancellation points.
Most operating systems allow a process or thread to be cancelled asynchronously.
However, the Pthread API provides deferred cancellation. This means that an operating
system implementing the Pthread API will allow deferred cancellation.
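Deferred cancellation can be sketched with a shared flag that the target thread polls at safe points. In this illustration the Event plays the role of the cancellation request and the loop's check is the cancellation point (an analogy to the Pthreads mechanism, not the Pthreads API itself):

```python
import threading

cancel_requested = threading.Event()   # set by the cancelling thread

def target_thread():
    # Deferred cancellation: poll at safe points rather than being killed.
    while not cancel_requested.is_set():   # cancellation point: safe to stop here
        pass                               # a unit of interruptible work would go here

t = threading.Thread(target=target_thread)
t.start()
cancel_requested.set()   # another thread requests cancellation
t.join()                 # the target exits cleanly at its next cancellation point
```

Because the target decides where it checks the flag, it can release resources and leave shared data consistent before terminating, which asynchronous cancellation cannot guarantee.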
3 Signal Handling
A signal is used in UNIX systems to notify a process that a particular event has occurred.
A signal may be received either synchronously or asynchronously, depending upon the
source and the reason for the event being signaled.
Whether a signal is synchronous or asynchronous, all signals follow the same pattern:
1. A signal is generated by the occurrence of a particular event.
2. A generated signal is delivered to a process.
3. Once delivered, the signal must be handled.
An example of a synchronous signal includes an illegal memory access or division by
zero.
• Synchronous signals are delivered to the same process that performed the
operation causing the signal (hence the reason they are considered synchronous).
• When a signal is generated by an event external to a running process, that process
receives the signal asynchronously. Examples of such signals include terminating
a process with specific keystrokes (such as <control><C>) or having a timer
expire.
Typically an asynchronous signal is sent to another process.
Every signal may be handled by one of two possible handlers:
1. A default signal handler
2. A user-defined signal handler
• Every signal has a default signal handler that is run by the kernel when handling
the signal.
• Both synchronous and asynchronous signals may be handled in different ways.
• Some signals may simply be ignored (such as changing the size of a window);
others may be handled by terminating the program (such as an illegal memory
access).
• Handling signals in single-threaded programs is straightforward; signals are
always delivered to a process. However, delivering signals is more complicated in
multithreaded programs, as a process may have several threads.
Where then should a signal be delivered?
In general, the following options exist:
1. Deliver the signal to the thread to which the signal applies.
2. Deliver the signal to every thread in the process.
3. Deliver the signal to certain threads in the process.
4. Assign a specific thread to receive all signals for the process.
• The method for delivering a signal depends upon the type of signal generated.
• For example, synchronous signals need to be delivered to the thread that
generated the signal and not to other threads in the process.
• Some asynchronous signals - such as a signal that terminates a process
(<control><C>, for example) -should be sent to all threads.
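The generate/deliver/handle pattern can be observed directly on a POSIX system. The sketch below installs a user-defined handler for SIGUSR1 in place of the default one and then generates the signal (Unix-only; the `delivered` list is just for illustration):

```python
import os
import signal

delivered = []

def handler(signum, frame):
    """A user-defined signal handler: record which signal arrived."""
    delivered.append(signum)

signal.signal(signal.SIGUSR1, handler)   # replace the default signal handler
os.kill(os.getpid(), signal.SIGUSR1)     # 1. the signal is generated
# 2. it is delivered to the process, and 3. the installed handler runs
```

In CPython, incidentally, Python-level handlers always run in the main thread, which is one concrete answer to the delivery question above.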
Thread Pools
• In a multithreaded server, whenever the server receives a request, it creates
a separate thread to service the request.
• Creating a thread on demand gives a multithreaded server two potential problems.
• The first concerns the amount of time required to create the thread
prior to servicing the request, together with the fact that this thread will be
discarded once it has completed its work.
• The second issue is more problematic: If we allow all concurrent requests to be
serviced in a new thread, we have not placed a bound on the number of threads
concurrently active in the system.
• Unlimited threads could exhaust system resources, such as CPU time or memory.
• One solution to this issue is to use thread pools.
• The general idea behind a thread pool is to create a number of threads at process
startup and place them into a pool, where they sit and wait for work. When a
server receives a request, it awakens a thread from this pool-if one is available-
passing it the request to service.
• Once the thread completes its service, it returns to the pool awaiting more work. If
the pool contains no available thread, the server waits until one becomes free.
In particular, the benefits of thread pools are:
1. It is usually faster to service a request with an existing thread than waiting to create a
thread.
2. A thread pool limits the number of threads that exist at any one point. This is
particularly important on systems that cannot support a large number of concurrent
threads.
The number of threads in the pool can be set based upon factors such as the number of
CPUs in the system, the amount of physical memory, and the expected number of
concurrent client requests.
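The pool described above is exactly what a thread-pool library provides out of the box. In this sketch `handle_request` is a stand-in for servicing one client, and the worker count is fixed at startup:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request):
    """Stand-in for servicing one client request."""
    return request * request

# Create a fixed number of threads at startup; they wait in the pool for work.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(8)))
# At most 4 requests are serviced concurrently; the rest wait for a free thread.
```

The `max_workers` value bounds the number of threads alive at any one time, which is the second benefit listed above.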
Thread Specific Data
• Threads belonging to a process share the data of the process.
• This sharing of data provides one of the benefits of multithreaded programming.
• Each thread might need its own copy of certain data in some circumstances. We
will call such data thread-specific data.
• For example, in a transaction-processing system, we might service each
transaction in a separate thread.
• Furthermore, each transaction may be assigned a unique identifier. To associate
each thread with its unique identifier we could use thread-specific data. Most
thread libraries-including Win32 and Pthreads-provide some form of support for
thread-specific data. Java provides support as well.
2.8 COOPERATING PROCESSES AND SYNCHRONIZATION
Cooperation refers to the sharing of resources between processes.
Synchronization uses atomic operations to ensure correct cooperation between
processes.
The ''Too Much Milk'' problem:
Time   Person A                        Person B
3:00   Look in fridge. Out of milk.
3:05   Leave for store.
3:10   Arrive at store.                Look in fridge. Out of milk.
3:15   Leave store.                    Leave for store.
3:20   Arrive home, put milk away.     Arrive at store.
3:25                                   Leave store. Arrive home. OH, NO!
One of the most important things in synchronization is to figure out what you want to
achieve. In the given problem: somebody gets milk, but we don’t get too much milk.
Mutual exclusion Mechanisms that ensure that only one person or process is doing
certain things at one time (others are excluded). E.g. only one person goes shopping
at a time.
Critical section A section of code, or collection of operations, in which only one
process may be executing at a given time. E.g. shopping.
There are many ways to achieve mutual exclusion. Most involve some sort of locking
mechanism: prevent someone from doing something. For example, before shopping,
leave a note on the refrigerator.
Three elements of locking:
1. Must lock before using. Leave note
2. Must unlock when done. Remove note
3. Must wait if locked. Don’t shop if note
1st attempt at computerized milk buying:
Processes A & B
if (no milk) {
    if (no note) {
        leave note;
        buy milk;
        remove note;
    }
}
2.9 THE CRITICAL-SECTION PROBLEM
1. n processes all competing to use some shared data
2. Each process has a code segment, called critical section, in which the shared data is
accessed.
3. Problem – ensure that when one process is executing in its critical section, no other
process is allowed to execute in its critical section.
4. Structure of process Pi
repeat
    entry section
        critical section
    exit section
        remainder section
until false;
Solution to Critical-Section Problem
1. Mutual Exclusion If process Pi is executing in its critical section, then no other
process can be executing in its critical section.
2. Progress If no process is executing in its critical section and there exist some
processes that wish to enter their critical sections, then the selection of the process that
will enter the critical section next cannot be postponed indefinitely.
3. Bounded Waiting A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted.
• Assume that each process executes at a nonzero speed.
• No assumption is made concerning the relative speed of the n processes.
Initial Attempts to Solve Problem
1. Only 2 processes, P0 and P1
2. General structure of process Pi (other process Pj)
repeat
    entry section
        critical section
    exit section
        remainder section
until false;
3. Processes may share some common variables to synchronize their actions.
There are 7 objectives.
1. Producer-consumer problem.
2. Implementation of critical section problems algorithm.
3. Modified producer-consumer problem.
4. Primitives for mutual exclusion.
5. Alternating policy.
6. Hardware assistance.
7. Semaphore.
2.9.1 Producer Consumer Problem
Begin
    P.0 while flag = 1 do; /* wait */
    P.1 output the number;
    P.2 set flag = 1;
End
Producer Process
More than one process cannot be in the critical region at a time.
E.g.: working in Word, Excel and a spreadsheet. If we try to print from all of them at the same
time, the printer output gets interleaved.
STEPS
i. Let us assume that initially the flag=0.
ii. One of the producer processes (PA) executes instruction P.0. Because the flag=0,
it does not wait at P.0, but goes on to instruction P.1.
iii. PA outputs a number in the shared variable by executing instruction P.1
iv. At this moment, the time slice allocated to PA gets over and that process is moved
from running to ready state. The flag is still 0.
v. Another producer process PB is now scheduled.
vi. PB also executes its P.0 and finds the flag as 0, and therefore goes to its P.1.
vii. PB overwrites on the shared variable by instruction P.1 therefore, causing the
previous data to be lost.
2.9.2 Modified Producer- Consumer Problem
i. Initially flag=0.
ii. PA executes instruction P.0 and falls through to P.1 as the flag=0.
Begin
    C.0 while flag = 0 do; /* wait */
    C.1 print the number;
    C.2 set flag = 0;
End
Consumer Process
Begin
    C.0 while flag = 0 do; /* wait */
    C.1 set flag = 0;
    C.2 print the number;
End
Consumer Process
Begin
    P.0 while flag = 1 do; /* wait */
    P.1 set flag = 1;
    P.2 output the number;
End
Producer Process
iii. PA sets flag to 1 by instruction P.1.
iv. The time slice for PA is over and the processor is allocated to another producer
process PB.
v. PB keeps waiting at the instruction P.0 because flag is now=1. This continues
until its time slice also is over, without doing anything useful. Hence, even if the
shared data item is empty, PB cannot output the number. This is clearly wasteful,
though it may not be a serious problem.
vi. A consumer process CA is now scheduled. It will fall through C.0 because
flag=1.
vii. CA will set flag to 0 by instruction C.1.
viii. CA will print the number by instruction C.2 even though the producer has not yet output it.
Let P1 and P2 be the two processes. If a process finds flag = 0, it falls through the
loop and sets flag = 1. If P1's time slice expires while it is inside the critical
region, P2 finds flag = 1 and keeps waiting, possibly for a very long time. Hence
this scheme is still not satisfactory.
2.9 .3 Primitives For Mutual Exclusion
Begin
    P.0 while flag = 1 do; /* wait */
    P.S1 Begin-Critical-Region;
    P.1 output the number;
    P.2 set flag = 1;
    P.S2 End-Critical-Region;
End
Producer Process
i. Let us assume that initially the flag=0.
Begin
    C.0 while flag = 0 do; /* wait */
    C.S1 Begin-Critical-Region;
    C.1 print the number;
    C.2 set flag = 0;
    C.S2 End-Critical-Region;
End
Consumer Process
ii. A producer process PA executes P.0. Because flag=0, it falls through to P.S1. Again,
assuming that there is no other process in the critical region, it will fall through to
P.1.
iii. PA outputs the number in a shared variable by executing P.1.
iv. Let us assume that at this moment the time slice for PA gets over, and it is moved
into the 'Ready' state from the 'Running' state. The flag is still 0.
v. Another producer process PB now executes P.0. It finds that flag=0 and so falls
through to P.S1.
vi. Because PA is in the critical region already, PB is not allowed to proceed further,
thereby avoiding the problem of race conditions. This is our assumption about
mutual exclusion primitives. E.g.: if one process is executing in the critical
region, no other process is allowed to enter it. P1 enters while flag = 0 and
proceeds into the critical region. Even if its time slice expires, it remains inside
the critical region, and a second process trying to enter the critical region is held
back because mutual exclusion is enforced. Thus, until the
completion of process 1, process 2 has to wait. This is also not sufficient, so we go
to an alternating policy.
2.9.4 Alternating Policy
This policy allows only one process at a time to access the critical region by making
the two processes take strict turns. A shared variable process-ID records whose turn it
is: a process may enter its critical region only when process-ID holds its own
identifier, and on leaving it sets process-ID to the other process's identifier, so it then
waits until the other process has taken its turn.
Let us assume that initially process- ID is set to “A” and process A is then scheduled.
This is done by the operating system.
1) Process A will execute instruction A.0 and fall through to A.1, because
Process-ID=”A”.
2) Process A will first execute the critical region and only then set process-ID
to "B" at instruction A.2. Hence, even if a context switch takes place after
A.0 or even after A.1 but before A.2, and if process B is then scheduled,
process B will continue to loop at instruction B.0 and will not enter the critical
region until process-ID="B". And this can happen only at instruction A.2,
which in turn can happen only after process A has executed its critical region
at instruction A.1. This is clear from the program given in the figure.
Begin
    A.0 while process-Id = "B" do; /* wait */
    A.1 Critical Region-A;
    A.2 set process-Id = "B";
End
PROCESS A

Begin
    B.0 while process-Id = "A" do; /* wait */
    B.1 Critical Region-B;
    B.2 set process-Id = "A";
End
PROCESS B
2.9.5 Hardware Assistance
With the flag-based schemes we cannot tell which result belongs to which process, and
a result may also be overwritten. Hardware assistance means the machine provides an
atomic instruction that tests and modifies a shared flag in one indivisible step; but even
with such support, busy waiting remains. Hence we go to the semaphore.
2.10 SEMAPHORE
It is the best-known solution to the critical section problem. A semaphore is a protected
variable: it can be accessed and changed only through the DOWN procedure and the UP procedure.
Begin
    Initial-portion;
    call Enter-Critical-Region;
    Critical-Region;
    call Exit-Critical-Region;
    Remaining-portion;
End
DOWN(S):
    lock;
    if S > 0 then
        S := S - 1
    else
        move the current PCB from the running state to the semaphore queue;
    unlock;

Figure 2.10 DOWN Procedure
In the above figure, DOWN and UP form the mutual exclusion primitives for any
process. If the process has a critical region, the region has to be encapsulated between the
DOWN and UP instructions. The DOWN and UP primitives ensure that only one process is in
its critical region; all other processes waiting to enter their respective critical regions are
kept waiting in a queue called the semaphore queue.
2.11 PROCESS COORDINATION PROBLEMS:
Processes may be divided into two types.
UP(S):
    lock;
    if the semaphore queue is empty then
        S := S + 1
    else
        move a PCB from the semaphore queue to the ready state;
    unlock;

Figure 2.11 UP Procedure
1. Dependent process
2. Independent process
A process is independent if it cannot affect or be affected by the other processes
executing in the system; it does not share data with any other process. A process is
dependent, or cooperating, if it can affect or be affected by the other processes executing
in the system. Process cooperation occurs for the following reasons.
1. Information sharing Since several users may be sharing the same piece of
information, we must provide an environment that allows concurrent access.
2. Computation speedup If we want a particular task to run faster, we must
break it into subtasks, each of which executes in parallel with the others.
3. Modularity Dividing the system functions into separate processes or threads.
4. Convenience Each individual user may have many tasks to work on at one
time, e.g. a user may be editing, printing and compiling in parallel.
To illustrate the concept of cooperating processes, let us consider the Producer-Consumer
problem.
A producer can produce one item while the consumer is consuming another
item. The producer places items into the buffer, and the consumer takes items from the
buffer. Both producer and consumer must be synchronized, and the buffer state must be
respected: if the buffer is full, the producer cannot place an item into it, and if
the buffer is empty, the consumer cannot take an item from it. The following
code illustrates the working of the producer-consumer problem.
The producer has a local variable next_produced in which the newly produced item
is stored.
while (1)
{
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}
The consumer process has a local variable next_consumed in which the item
to be consumed is stored.
while (1)
{
    while (in == out)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}
The following are examples of coordination problems.
1. Bounded buffer problem
2. Readers-writer problem
3. Dining-philosophers problem
4. The Sleeping Barber shop problem
5. Baboons Crossing a Canyon
6. Cigarette Smoker Problem
2.11.1 Bounded buffer problem
An N-element buffer is shared by a producer and consumers. A consumer
cannot proceed until the producer has produced something, and the
producer cannot proceed if the buffer already holds N items.
2.11.2 Readers-writer problem
A database is shared; any number of readers can concurrently read its content, but only
one writer can write at any one time (with exclusive access). Variations: in one, no
reader is kept waiting unless a writer has already received exclusive write permission;
in another, once a writer is ready, it gets exclusive permission as soon as possible,
i.e. once a writer is waiting, no further reads are allowed.
• The readers-writers problem deals with data objects shared among several
concurrent processes.
• Some processes are readers, others writers.
o Writers require exclusive access.
• a.k.a. shared vs. exclusive locks.
• Several variations:
o This discussion deals with the first readers-writers problem.
o No reader will be kept waiting unless a writer has already obtained
permission to use the shared object (readers don't need to wait for other
readers).
Readers-Writers solution
// shared data
semaphore mutex, wrt;
// mutex protects readcount
// wrt serves as mutual exclusion for writers
// wrt is also used by the first/last reader to block and release writers
// Initially
mutex = 1, wrt = 1, readcount = 0

Writer process
    wait(wrt);
    // ...
    // writing is performed
    // ...
    signal(wrt);

Reader process
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    // ...
    // reading is performed
    // ...
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);
2.11.3 Dining-philosophers problem
Classical Problems of Synchronization - Dining Philosophers Problem
There are N philosophers sitting around a circular table eating spaghetti and discussing
philosophy.
The problem is that each philosopher needs 2 chopsticks to eat, and there are only N
chopsticks, each one between each 2 philosophers.
Figure 2.12 Dining Philosophers
Design an algorithm that the philosophers can follow that ensures that none starves, as
long as each philosopher eventually stops eating, and such that the maximum number of
philosophers can eat at once.
Analysis
First, we notice that these philosophers are in a thinking-picking up chopsticks-eating-
putting down chopsticks cycle as shown below.
Here’s a simple first approach to the dining philosophers:
void philosopher() {
    while (1) {
        sleep();
        get_left_chopstick();
        get_right_chopstick();
        eat();
        put_left_chopstick();
        put_right_chopstick();
    }
}
If every philosopher picks up the left chopstick at the same time, no one gets to eat - ever.
How does a philosopher pick up chopsticks?
The problem is each chopstick is shared by two philosophers and hence a shared
resource.
We certainly do not want a philosopher to pick up a chopstick that has already been
picked up by his neighbor. This is a race condition.
To address this problem, we may consider each chopstick as a shared item protected by a
mutex lock. Each philosopher, before he can eat, locks his left chopstick and locks his
right chopstick.
If the acquisitions of both locks are successful, this philosopher now owns two locks
(hence two chopsticks), and can eat.
After finishing eating, this philosopher releases both chopsticks, and thinks! This
execution flow is shown below.
Figure 2.12.1 Flow Diagram
Some other suboptimal alternatives:
• Pick up the left chopstick, if the right chopstick isn’t available for a given time, put the
left chopstick down, wait and try again. (Big problem if all philosophers wait the same
time - we get the same failure mode as before, but repeated.) Even if each philosopher
waits a different random time, an unlucky philosopher may starve.
• Require all philosophers to acquire a binary semaphore before picking up any
chopsticks. This guarantees that no philosopher starves (assuming that the semaphore is
fair) but limits parallelism dramatically.
Tannenbaum’s solution to get Maximum concurrency
#define N 5 /* Number of philosophers */
#define RIGHT(i) (((i)+1) % N )
#define LEFT(i) (((i)+N-1) % N)
typedef enum { THINKING, HUNGRY, EATING } phil_state;
phil_state state[N];
semaphore mutex =1;
semaphore s[N]; /* one per philosopher, all 0 */
void test(int i) {
if ( state[i] == HUNGRY &&
state[LEFT(i)] != EATING &&
state[RIGHT(i)] != EATING ) {state[i] = EATING; V(s[i]);}
}
void get_chopsticks(int i) {
P(mutex);
state[i] = HUNGRY;
test(i);
V(mutex);
P(s[i]);
}
void put_chopsticks(int i) {
P(mutex);
state[i]= THINKING;
test(LEFT(i));
test(RIGHT(i));
V(mutex);
}
void philosopher(int process) {
while(1) {
think();
get_chopsticks(process);
eat();
put_chopsticks(process);
} }
The magic is in the test routine. When a philosopher is hungry it uses test to try to eat. If
test fails, it waits on a semaphore until some other process sets its state to EATING.
Whenever a philosopher puts down chopsticks, it invokes test in its neighbors. (Note that
test does nothing if the process is not hungry, and that mutual exclusion prevents races.)
So this code is correct, but somewhat obscure.
And more importantly, it doesn’t encapsulate the philosopher - philosophers manipulate
the state of their neighbors directly.
Here’s a version that does not require a process to write another process’s state, and
gets equivalent parallelism.
#define N 5 /* Number of philosophers */
#define RIGHT(i) (((i)+1) %N)
#define LEFT(i) (((i)+N-1) % N)
typedef enum { THINKING, HUNGRY, EATING } phil_state;
phil_state state[N];
semaphore mutex =1;
semaphore s[N]; /* one per philosopher, all 0 */
void get_chopsticks(int i) {
state[i] = HUNGRY;
while ( state[i] == HUNGRY ) {
P(mutex);
if ( state[i] == HUNGRY &&
state[LEFT(i)] != EATING &&
state[RIGHT(i)] != EATING ) {
state[i] = EATING;
V(s[i]);
}
V(mutex);
P(s[i]);
}
}
void put_chopsticks(int i) {
P(mutex);
state[i]= THINKING;
if ( state[LEFT(i)] == HUNGRY ) V(s[LEFT(i)]);
if ( state[RIGHT(i)] == HUNGRY) V(s[RIGHT(i)]);
V(mutex);
}
void philosopher(int process) {
while(1) {
think();
get_chopsticks(process);
eat();
put_chopsticks(process);
}
}
If you really don’t want to touch other processes’ state at all, you can always do the V to
the left and right when a philosopher puts down the chopsticks. (There’s a case where a
condition variable is a nice interface.)
2.11.4 The Barbershop Problem
Figure 2.13 Barber shop
Three barbers work independently in a barber shop:
• The barbershop has 3 barber chairs, each of which is assigned to one barber.
• Each barber follows the same work plan:
• The barber sleeps (or daydreams) when no customer is waiting
(and none is in the barber's own chair).
• When the barber is asleep, the barber waits to be awaken by a new
customer. (A sign in the shop indicates which barber has been asleep
longest, so the customer will know which barber to wake up if multiple
barbers are asleep.)
• Once awake, the barber cuts the hair of a customer in the barber's chair.
• When the haircut is done, the customer pays the barber and then is free to
leave.
• After receiving payment, the barber calls the next waiting customer (if
any). If such a customer exists, that customer sits in the barber's chair and
the barber starts the next haircut. If no customer is waiting, the barber goes
back to sleep.
• Each customer follows the following sequence of events.
• When the customer first enters the barbershop, the customer leaves
immediately if more than 20 people are waiting (10 standing and 10
sitting). On the other hand, if the barbershop is not too full, the customer
enters and waits.
• If at least one barber is sleeping, the customer looks at a sign, wakes up
the barber who has been sleeping the longest, and sits in that barber's chair
(after the barber has stood up).
• If all the barbers are busy, the customer sits in a waiting-room chair, if one
is available. Otherwise, the customer remains standing until a waiting-
room chair becomes available.
• Customers keep track of their order, so the person sitting the longest is
always the next customer to get a haircut.
• Similarly, standing customers remember their order, so the person
standing the longest takes the next available waiting-room seat.
For this exercise,
you are to write a C program to simulate activity for this barbershop:
a. Simulate each barber and each customer as a separate process.
b. Altogether, 30 customers should try to enter.
c. Use a random number generator so that a new customer arrives every 1, 2, 3, or 4
seconds. (This might be accomplished by a statement such as
sleep(1+(rand()%4));.)
d. Similarly, each haircut lasts between 3 and 6 seconds.
e. Each barber should report when he/she starts each haircut and when he/she
finishes each haircut.
f. Each customer should report when he/she enters the barbershop. The customer
also should report if he/she decides to leave immediately.
g. Similarly, if the customer must stand or sit in the waiting room, the customer
should report when each activity begins.
h. Finally, the customer should report when the haircut begins and when the
customer finally exits the shop.
i. Semaphores and shared memory should be used for synchronization.
2.11.5 Baboons Crossing a Canyon
A student majoring in anthropology and minoring in computer science has
embarked on a research project to see if African baboons can be taught about deadlocks.
She locates a deep canyon and fastens a rope across it, so the baboons can cross hand-
over-hand.
Passage along the rope follows these rules:
• Several baboons can cross at the same time, provided that they are all going in the
same direction.
• If eastward moving and westward moving baboons ever get onto the rope at the
same time, a deadlock will result (the baboons will get stuck in the middle)
because it is impossible for one baboon to climb over another one while
suspended over the canyon.
• If a baboon wants to cross the canyon, he must check to see that no other baboon
is currently crossing in the opposite direction.
• Your solution should avoid starvation. When a baboon that wants to cross to the
east arrives at the rope and finds baboons crossing to the west, it waits
until the rope is empty, and no more westward-moving baboons are allowed to
start until at least one baboon has crossed the other way.
For this exercise,
you are to write a C program to simulate activity for this canyon crossing
problem:
a. Simulate each baboon as a separate process.
b. Altogether, 30 baboons will cross the canyon, with a random number generator
specifying whether they are eastward moving or westward moving (with equal
probability).
c. Use a random number generator, so the time between baboon arrivals is between
1 and 8 seconds.
d. Each baboon takes 1 second to get on the rope. (That is, the minimum inter-
baboon spacing is 1 second.)
e. All baboons travel at the same speed. Each traversal takes exactly 4 seconds,
after the baboon is on the rope.
f. Use semaphores for synchronization. You may also use shared memory.
(Additional communication via sockets is allowed, but do not use sockets unless
such communication is clearly needed.)
2.11.6 The Cigarette-Smokers Problem
Consider a system with three smoker processes and one agent process.
Each smoker continuously rolls a cigarette and then smokes it.
But to roll and smoke a cigarette, the smoker needs three ingredients: tobacco, paper, and
matches.
One of the smoker processes has paper, another has tobacco, and the third has matches.
The agent has an infinite supply of all three materials.
The agent places two of the ingredients on the table.
The smoker who has the remaining ingredient then makes and smokes a cigarette,
signaling the agent on completion.
The agent then puts out another two of the three ingredients, and the cycle repeats.
There are four processes in the system. Three represent the smokers, and one represents
the supplier.
Solution:
The cigarette smokers problem becomes solvable using binary semaphores, or mutexes.
Let us define an array of binary semaphores A, one for each smoker; and a binary
semaphore for the table, T.
Initialize the smokers' semaphores to zero and the table's semaphore to 1.
Then the arbiter's code is
while true {
wait(T);
choose smokers i and j nondeterministically, making the third smoker k;
signal(A[k]);
}
Code for smoker i is
while true {
wait(A[i]);
make a cigarette
signal(T);
smoke the cigarette;
}
/* The Cigarette-smokers problem uses semaphores to solve. */
typedef int semaphore;
semaphore items=1; /*used for mutual exclusive access to the table on which the two
ingredients are placed */
semaphore more=0;
semaphore temp=0; /* used to queue the waiting smokers */
int count =0; /*indicates the number of waiting smokers */
boolean flags[0..2]=initially all false;
/*a flag true indicates if the corresponding item is on the table */
/* the three items needed for smoking are named as 0,1,2 */
/* process i has item i but needs the other two items, i.e. (i-1) mod 3 and (i+1) mod 3 */
for 0<=i<=2;
Smoker process i;
{
repeat
wait(items); /*enter critical section */
if (flag[(i-1) mod 3] and flag[(i+1) mod 3])
{
flag[(i-1) mod 3]=false;
flag[(i+1) mod 3]=false;
SMOKE;
while (count > 0) do
{
count--;
signal(temp);
}
signal(more);
}else{ /*both items needed for the smoking are not available */
count++;
signal(items);
wait(temp); /*wait for the next round */
}
until false;
}
Supplier process:
{
repeat
put any two items on the table and set the
corresponding flags to true;
signal(items);
wait(more);
until false;
}
A full C program dealing with smokers problem is as follows:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <conio.h>
/* Enum representing the ingredients */
enum Ingredients
{
None,
Paper,
Tobacco,
Matches
};
/* Structure representing a Smoker & Agent process
*/
typedef struct smoker
{
char SmokerID[25];
int Item;
}SMOKER;
typedef struct agent
{
char AgentID[25];
int Item1;
int Item2;
}AGENT;
char* GetIngredientName(int Item)
{
    if(Item == Paper)
        return "Paper";
    else if(Item == Tobacco)
        return "Tobacco";
    else if(Item == Matches)
        return "Matches";
    else
        return "None";
}
void GetAgentIngredients(AGENT* agent)
{
/* Simulate random generation of ingredients*/
agent->Item1=random(3)+1;
while(1)
{
agent->Item2=random(3)+1;
if(agent->Item1 != agent->Item2)
break;
}
printf("\nAgent Provides Ingredients - %s, %s\n\n",
    GetIngredientName(agent->Item1),
    GetIngredientName(agent->Item2));
}
void GiveIngredientToSmoker(AGENT* agent, SMOKER* smoker)
{
    int index = 0;
    while(smoker[index].Item != None)
    {
        if((smoker[index].Item != agent->Item1) &&
           (smoker[index].Item != agent->Item2))
        {
            printf("\nSmoker - %s - is smoking his cigarette\n\n",
                smoker[index].SmokerID);
            agent->Item1 = None;
            agent->Item2 = None;
            break;
        }
        index++;
    }
}
void main()
{
/*Create the processes required -1 Agent, 3
Smokers */
AGENT agent;
SMOKER smoker[4] =
    {{"SmokerWithPaper", Paper},
     {"SmokerWithTobacco", Tobacco},
     {"SmokerWithMatches", Matches},
     {"0", None}};
int userChoice = 0;
strcpy(agent.AgentID, "Agent");
agent.Item1 = None;
agent.Item2 = None;
while(1)
{
    GetAgentIngredients(&agent);
    GiveIngredientToSmoker(&agent, smoker);
    printf("Press ESC to exit or any key to continue\n\n");
    userChoice = getch();
    if(userChoice == 27)
        break;
}
}/*Program Ends*/
{Problem :}
{Deadlock occurs if you use a semaphore for the individual ingredients. }
{Solution :}
{Use a semaphore for each combination of ingredients. }
VAR
tobacco_paper, tobacco_matches, paper_matches, done_smoking: SEMAPHORE;
PROCEDURE vendor
BEGIN
WHILE (making_money) DO
BEGIN
CASE (random(1, 3)) OF
1:
signal(tobacco_paper);
2:
signal(tobacco_matches);
3:
signal(paper_matches);
END;
wait(done_smoking);
END;
END;
PROCEDURE smoker1 { This smoker has matches }
BEGIN
WHILE (not_dead) DO
BEGIN
wait(tobacco_paper);
smoke;
signal(done_smoking);
END;
END;
PROCEDURE smoker2 { This smoker has paper }
BEGIN
WHILE (addicted_to_nicotine) DO
BEGIN
wait(tobacco_matches);
smoke;
signal(done_smoking);
END;
END;
PROCEDURE smoker3 { This smoker has tobacco }
BEGIN
WHILE (can_inhale) DO
BEGIN
wait(paper_matches);
smoke;
signal(done_smoking);
END;
END;
BEGIN
tobacco_paper := 0;
tobacco_matches := 0;
paper_matches := 0;
done_smoking := 0;
END.
2.12 DEADLOCK
DEADLOCK PROBLEM
A set of blocked processes each holding a resource and waiting
to acquire a resource held by another process in the set.
Eg:
 System has 2 tape drives.
 P1 and P2 each hold one tape drive and each needs another one.
Eg:
 semaphores A and B, initialized to 1
P0 P1
wait (A); wait(B)
wait (B); wait(A)
Bridge Crossing Example
 Traffic only in one direction.
 Each section of a bridge can be viewed as a resource.
 If a deadlock occurs, it can be resolved if one car backs
up (preempt resources and rollback).
 Several cars may have to be backed up if a deadlock occurs.
 Starvation is possible.
2.13 DEADLOCK CHARACTERIZATION
2.13.1 Necessary conditions
a) Mutual exclusion
• At least one resource must be held in non-sharable mode, i.e. only one
process at a time can use the resource.
• If another process requests the resource, it must be delayed until the resource has been
released.
b) Hold and wait
• A process must be holding at least one resource and waiting to acquire additional
resources that are currently held by other processes.
c) No preemption
• Resources cannot be preempted, i.e. a resource can be released only voluntarily by
the process holding it, after that process has completed its task.
d) Circular waits
• Consider a set of processes P = {P0, P1, ..., Pn}.
P0 is waiting for a resource that is held by P1.
P1 is waiting for a resource that is held by P2.
...
Pn-1 is waiting for a resource that is held by Pn.
Pn is waiting for a resource that is held by P0.
All four conditions must hold for a deadlock to occur.
2.13.2 Resource allocation graph (RAG)
Deadlocks can be described in terms of a directed graph called the system
resource-allocation graph. The graph consists of a set of vertices V and a
set of edges E.
V is partitioned into two types of nodes:
P = {P1, P2, ..., Pn}, the set of active processes.
R = {R1, R2, ..., Rm}, the set of resources.
A directed edge from process Pi to resource Rj (Pi -> Rj) signifies that Pi has requested
an instance of resource type Rj and is currently waiting for that resource.
A directed edge from resource Rj to process Pi (Rj -> Pi) signifies that an instance of
resource Rj has been allocated to process Pi.
Processes are drawn as circles, resources as squares.
Figure 2.14: Resource Allocation Graph
When Pi requests Rj, a request edge is inserted in the RAG. When the request is
fulfilled, the request edge is transformed into an assignment edge.
1) The sets P, R, E
P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1->R1, P2->R3, R1->P2, R3->P3, R2->P1, R2->P2}
2) Resource instances
1. One instance of resource R1
2. Two instances of resource R2
3. One instance of resource R3
4. Three instances of resource R4
3) Process state
1. Process P1 is holding an instance of resource R2 and is waiting for an instance of resource type R1.
2. Process P2 is holding an instance of R1 and an instance of R2, and is waiting for R3.
3. Process P3 is holding an instance of resource R3.
If the RAG contains no cycles, then no process in the system is deadlocked. If the graph
contains a cycle, a deadlock may exist. In the figure above, if P3 requests R2 we get the cycles
P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1
P2 -> R3 -> P3 -> R2 -> P2
Figure 2.15 RAG with a cycle
If R2 is released by either P1 or P2, the request P3 -> R2 can be granted and the cycle is broken.
2.14 DEADLOCK PREVENTION
Deadlock prevention works by ensuring that at least one of the necessary conditions
for deadlock cannot hold; this way we can prevent deadlocks.
2.14.1 Mutual exclusion
• The mutual exclusion condition must hold for non-sharable resources.
• E.g. a printer cannot be simultaneously shared by several processes.
• Sharable resources do not require mutually exclusive access: if several processes
attempt to open a read-only file at the same time, they can all be granted simultaneous
access to the file.
• In general, we cannot prevent deadlock by denying the mutual exclusion condition.
2.14.2 Hold and wait
To prevent the hold-and-wait condition, a process can be required to request and
be allocated all the resources it needs before it begins execution; that way, it never
has to wait for resources during execution (thus preventing deadlock).
Another strategy is to allow a process to request resources only when it holds none.
Again, this prevents deadlock, since a process must release whatever resources it
holds before requesting more.
Each of these protocols has performance or resource-utilization drawbacks. If we
allocate all resources at the beginning of the program, we hold them all while the
program executes. We may not need all of them at once, though, so resources end
up underused (since no other process can use them while we hold them). Another
problem is starvation: some processes may never get to execute, because some other
process always has control of a popular resource.
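The first protocol above can be sketched with ordinary locks. The resources (printer, disk) and the copy_job task are hypothetical; the point is that the job either obtains everything it needs before doing any work, or releases what it got and retries, so it never holds one resource while waiting for another.

```python
# Sketch of the "allocate everything up front" protocol against hold-and-wait.
# The resource names and the task are illustrative, not from the text.
import threading

printer = threading.Lock()
disk = threading.Lock()

def copy_job():
    # Request ALL needed resources before doing any work: either we get both
    # locks, or we release what we got and retry, so the job never holds one
    # resource while waiting for another.
    while True:
        printer.acquire()
        if disk.acquire(blocking=False):
            break                     # got both resources: safe to proceed
        printer.release()             # could not get both: release and retry
    try:
        pass  # ... use printer and disk together ...
    finally:
        disk.release()
        printer.release()

copy_job()
print("job finished without holding-and-waiting")
```

Note how this sketch also exhibits the drawback discussed above: while copy_job holds both locks, no other process can use either resource, even during phases where only one is needed.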
2.14.3 No preemption
If a process that is holding some resources requests another resource that cannot
be immediately allocated to it, then all resources it is currently holding are
preempted; the process is restarted only when it can regain its old resources along
with the new one it requested.
Alternatively, if a process requests some resources, we first check whether they are
available; if they are, we allocate them. If not, we check whether they are allocated
to some other process that is itself waiting for additional resources. If so, we
preempt the desired resources from the waiting process and allocate them to the
requesting process.
2.14.4 Circular wait
To prevent circular wait, we can impose a total ordering on resource types and
require that each process request resources only in an increasing order of
enumeration.
For example, each resource type is assigned a number, and a process can request
resources only in increasing order of those resource numbers.
A circular wait would require a set of processes {P0, P1, ..., Pn} such that P0 is
waiting for a resource held by P1, P1 for a resource held by P2, and so on, with Pn
waiting for a resource held by P0. Under the ordering rule, such a cycle cannot
form.
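A minimal sketch of resource ordering, assuming hypothetical resource names and order numbers: every caller goes through acquire_in_order(), which sorts requests by the resource number, so two processes can never hold resources that each needs from the other in opposite order.

```python
# Sketch of resource ordering against circular wait. Resource names and their
# order numbers are illustrative; any fixed total order works.
import threading

# Assign each resource type a fixed order number.
RESOURCE_ORDER = {"tape_drive": 1, "disk": 2, "printer": 3}
locks = {name: threading.Lock() for name in RESOURCE_ORDER}

def acquire_in_order(*names):
    """Acquire the named resources strictly in increasing order number."""
    for name in sorted(names, key=RESOURCE_ORDER.__getitem__):
        locks[name].acquire()

def release_all(*names):
    for name in names:
        locks[name].release()

# Callers may list resources in any textual order; acquire_in_order() sorts
# them, so a circular wait can never form between two such callers.
acquire_in_order("printer", "disk")
release_all("printer", "disk")
print(sorted(["printer", "disk", "tape_drive"], key=RESOURCE_ORDER.__getitem__))
# -> ['tape_drive', 'disk', 'printer']
```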
2.15 DEADLOCK AVOIDANCE
Deadlock can be avoided using the banker's algorithm.
Deadlock avoidance deals with processes that declare, before execution, the
maximum number of resources of each type they may need during their execution.
If, given several processes and resources, we can allocate the resources in some
order so as to avoid deadlock, the system is said to be in a safe state; otherwise, if a
deadlock is possible, the system is said to be in an unsafe state.
The idea of deadlock avoidance is simply to not allow the system to enter an unsafe
state that might lead to a deadlock. The following example shows how a safe state
can be identified.
Example:

Process    Allocated (A B C)    Maximum (A B C)    Available (A B C)
P1         0 1 0                7 5 3              3 3 2
P2         2 0 0                3 2 2              2 1 0
P3         3 0 2                9 0 2              5 3 2
P4         2 1 1                2 2 2              5 2 1
P5         0 0 2                4 3 3              5 2 2
Consider P1: its maximum requirement is 7 5 3 but its allocation is only 0 1 0, so
its remaining need (7 4 3) exceeds the available resources, and P1 cannot finish yet.
For P2, the maximum requirement is 3 2 2 and the allocation is 2 0 0, so it needs
only 1 2 2 more, which we can grant from the available resources:
3 3 2
- 1 2 2
-------
2 1 0
P3 requires a maximum of 9 0 2 but holds only 3 0 2; its remaining need of 6 0 0
cannot be met, so P3 cannot finish yet either. When P2 completes, it releases all of
its resources (3 2 2), so the available vector becomes:
2 1 0
+ 3 2 2
-------
5 3 2
P4 holds 2 1 1 and requires at most 2 2 2, so it needs only 0 1 1 more; after
granting this, 5 2 1 remains available, and when P4 completes it releases its 2 2 2,
giving 7 4 3. P5 holds 0 0 2 and requires at most 4 3 3, so its remaining need of
4 3 1 can now be satisfied as well. Continuing in this way, every process can finish,
so the system is in a safe state.
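The walkthrough above is exactly the safety algorithm of the banker's scheme. Below is a sketch in Python, using the Allocation and Max vectors from the table and taking the first row's Available (3 3 2) as the initial state; the function name is_safe is illustrative.

```python
# Sketch of the banker's safety check applied to the table above.
# Resource vector order is (A, B, C).

def is_safe(available, allocation, maximum):
    """Return a safe sequence of process indices, or None if the state is unsafe."""
    n = len(allocation)
    need = [[m - a for m, a in zip(maximum[i], allocation[i])] for i in range(n)]
    work = list(available)
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # Pretend Pi runs to completion and releases its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None        # no runnable process left: unsafe state
    return sequence

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]  # P1..P5
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe([3, 3, 2], allocation, maximum))
# [1, 3, 4, 0, 2], i.e. the safe sequence <P2, P4, P5, P1, P3>
```

The returned sequence matches the walkthrough: P2 finishes first, then P4 and P5, and only then can the large needs of P1 and P3 be met.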
2.16 DEADLOCK DETECTION
If a system employs neither a deadlock-prevention nor a deadlock-avoidance
algorithm, then a deadlock situation may occur, and the system must provide:
• An algorithm that examines the state of the system to determine whether a
deadlock has occurred.
• An algorithm to recover from the deadlock.
A detection-and-recovery scheme requires overhead, including the run-time cost of
maintaining the necessary information and of executing the detection algorithm.
2.16.1 Single instance of each resource type
• If every resource type has only a single instance, deadlock can be detected using
a wait-for graph, obtained from the RAG by removing the resource-type nodes
and collapsing the corresponding edges.
• An edge from Pi to Pj means that process Pi is waiting for process Pj to release
a resource that Pi needs.
• A deadlock exists if and only if the wait-for graph contains a cycle. To detect
deadlocks, the system maintains the wait-for graph and periodically invokes an
algorithm that searches for a cycle in the graph.
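The collapsing step can be sketched as follows. The wait_for_graph() helper and the process/resource names are illustrative: Pi waits for Pj exactly when Pi requests a resource currently assigned to Pj.

```python
# Sketch: deriving the wait-for graph from a single-instance RAG.
# requests maps each process to the resources it is waiting for;
# assignments maps each resource to the process holding it.

def wait_for_graph(requests, assignments):
    """Collapse P -> R -> holder edges into direct P -> holder edges."""
    wfg = {p: set() for p in requests}
    for p, wanted in requests.items():
        for r in wanted:
            holder = assignments.get(r)
            if holder is not None and holder != p:
                wfg[p].add(holder)      # Pi waits for the process holding Rj
    return wfg

# Single-instance example: P1 requests R1 (held by P2); P2 requests R3 (held by P3).
requests    = {"P1": ["R1"], "P2": ["R3"], "P3": []}
assignments = {"R1": "P2", "R3": "P3"}
print(wait_for_graph(requests, assignments))
# {'P1': {'P2'}, 'P2': {'P3'}, 'P3': set()} -- a chain, no cycle, no deadlock
```

Running any cycle-detection routine over the resulting graph then answers the deadlock question directly: a cycle in the wait-for graph means a deadlock.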
2.16.2 Several instances of resources type
 The Wait for graph scheme is not applicable for RAG with
multiple instances of each resources type.
 The deadlock detection used here employs several time
varying data structures similar to those used in bankers available ,
allocation , request=)procedures
2.16.3 Detection-algorithm usage
When, and how often, should the detection algorithm be invoked? The answer
depends on:
• How often is a deadlock likely to occur?
• How many processes will be affected by a deadlock when it happens?
If deadlocks occur frequently, the detection algorithm should be invoked frequently,
since resources allocated to deadlocked processes sit idle until the deadlock is
broken.
If there are many different resource types, one request may complete many cycles
in the RAG; each cycle is completed by the most recent request. In the extreme, we
could invoke the detection algorithm whenever CPU utilization drops below some
threshold (say, 40%), since a deadlock eventually cripples system throughput.
Recovery from deadlock: when the detection algorithm determines that a deadlock
exists, several alternatives are available for breaking it:
• Abort one or more processes to break the circular wait.
• Preempt some resources from one or more of the deadlocked processes.
2.17 DEADLOCK RECOVERY
We can recover from a deadlock via two approaches: we either kill processes
(which releases all resources held by each killed process) or take resources away
from them.
Process Termination:
To eliminate deadlocks by aborting a process, we use one of two methods.
In both methods, the system reclaims all resources allocated to the terminated
processes.
2.17.1 Abort all deadlocked processes
This method clearly will break the deadlock cycle, but at great expense. These
processes may have computed for a long time, and the results of these partial
computations must be discarded and probably recomputed later.
2.17.2 Abort one process at a time until the deadlock cycle is eliminated
This method incurs considerable overhead, since, after each process is aborted, a
deadlock-detection algorithm must be invoked to determine whether any processes
are still deadlocked.
When recovering from a deadlock via process termination, we have two
approaches. We can terminate all processes involved in a deadlock, or terminate them
one by one until the deadlock disappears.
Killing all processes is costly, since some processes may have been doing useful
work for a long time and will need to be re-executed. Killing one process at a time
until the deadlock is resolved is also costly, since we must rerun the deadlock-
detection algorithm after each termination to make sure we have actually eliminated
the deadlock.
Also, some priority must be considered when terminating processes, since
we don't want to kill an important process when less important processes are available.
Priority might also take into account how many resources the process is holding,
how long it has already computed, how long it has to go before it completes, or
how many resources it needs to complete its job.
2.18 RESOURCE PREEMPTION
To eliminate deadlocks using resource preemption, we successively
preempt some resources from processes and give these resources to other processes
until the deadlock cycle is broken. If preemption is required to deal with deadlocks,
then three issues need to be addressed:
2.18.1 Selecting a victim
Which resources and which processes are to be preempted? We must determine the
order of preemption so as to minimize cost. Cost factors may include the number of
resources a deadlocked process is holding and the amount of time the process has
consumed during its execution.
2.18.2 Rollback
If we preempt a resource from a process, it cannot continue with its normal
execution; it is missing some needed resource. So we must roll the process back to
some safe state and restart it from that state.
2.18.3 Starvation
Preemption takes resources from waiting processes and gives them to other
processes. Obviously, the victim process cannot continue normally; we can either
terminate it or roll it back to some previous state so that it can request the resources
again. Since victim selection is based primarily on cost factors, the same process
may be picked as the victim repeatedly and thus starve. To prevent this, we must
ensure that a process can be chosen as a victim only a finite number of times, for
example by including the number of rollbacks in the cost factor.
Note that if the system supports resource preemption as part of its normal operation,
then by definition this kind of deadlock cannot occur. The resource preemption
discussed here is exceptional: it occurs only after a deadlock-detection mechanism
has detected a deadlock.
2.19 COMBINED APPROACH TO DEADLOCK HANDLING
Researchers have argued that none of the basic approaches for handling
deadlocks (prevention, avoidance, and detection) alone is appropriate for the
entire spectrum of resource-allocation problems encountered in operating
systems.
• One possibility is to combine the three basic approaches, allowing the use of the
optimal approach for each class of resources in the system.
• The proposed method is based on the notion that resources can be partitioned into
classes that are hierarchically ordered.
• A resource-ordering technique is applied to the classes. Within each class, the
most appropriate technique for handling deadlocks can be used.
• It is easy to show that a system that employs this strategy will not be subjected to
deadlocks.
• Indeed, a deadlock cannot involve more than one class, since the resource-
ordering technique is used. Within each class, one of the basic approaches is used.
Consequently, the system is not subject to deadlocks.
• To illustrate this technique, we consider a system that consists of the following
four classes of resources:
• Internal resources: prevention through resource ordering can be used, since
run-time choices between pending requests are unnecessary.
• Central memory: prevention through preemption can be used, since a job can
always be swapped out and the central memory preempted.
• Job resources: avoidance can be used, since the information needed about
resource requirements can be obtained from the job-control cards.
• Swappable space: preallocation can be used, since the maximum storage
requirements are usually known.
Points to Remember
• A Process is a program in execution.
• A process may be in any one of New, Running, Waiting, Ready, Terminated
states.
• Each process in the Operating System is associated with Process Control
Block(PCB).
• Switching the CPU from one Process to another is called as Context Switch.
• A thread is called Light Weight Process (LWP).
• A thread is a basic unit of CPU utilization.
• User threads are supported above the kernel and are implemented by a thread
library at the user level, whereas Kernel threads are supported directly by the
operating system.
• Fork is a system call by which a new process is created.
• Exec is also a system call, which is used after a fork by one of the two processes
to replace the process memory space with a new program.
• The thread cancellation is the task of terminating a thread before it has completed.
Thread that is to be cancelled is often referred to as the target thread.
• When one process is executing in its critical section, no other process can be
allowed to execute in its critical section.
• A semaphore 'S' is a synchronization tool which is an integer value that, apart
from initialization, is accessed only through two standard atomic operations; wait
and signal.
• A process is deadlocked if it is waiting for an event that will never occur.
Typically, more than one process is involved in a deadlock.
• Four necessary and sufficient conditions for deadlock are
o Mutual Exclusion
o Hold and Wait
o No preemption
o Circular Wait
SHORT QUESTIONS
1. Define Operating System?
2. What is Multiprogramming?
3. Write the services of operating systems?
4. Write the drawbacks of layered approach
5. Define kernel. How does it differ from a microkernel?
6. Differentiate Process and program
7. Define active and passive process
8. What are the states of a process?
9. Define Process Control Block
10. Write the contents of PCB
11. Define Thread.
12. Define Multithread.
13. What is critical Section problem
14. Define Semaphore
15. What are the operations used for semaphores?
16. What is Coordination process
17. In which situations does process coordination occur?
18. Write the procedure for Producer-Consumer Problem
19. Define Interprocess communication
20. What are the methods of interprocess communication?
21. What are the basic functions of an operating system?
22. Differentiate Multiprogramming and Batch Processing
23. What is Timesharing?
24. What is the purpose of Command interpreter? Why is it usually separate from the
kernel?
25. What is the purpose of system calls?
26. What is the purpose of system programs?
27. What is a Real-Time System?
28. Explain the following terms
1. Multi tasking
2. Multi programming
3. Multi threading
DESCRIPTIVE QUESTIONS
NOVEMBER 2007
1. (a) Explain the process control block ? (8)
(b) Describe the several methods for implementing message passing system? (7)
2. (a) What is Semaphore ? Explain the implementation of semaphores? (7)
(b) Explain Banker’s algorithm with example (8)
MAY 2007
1. a) Explain with a schematic diagram of process control block(PCB) (8)
b) Write note on Schedulers (7)
2. Explain the methods of handling a deadlock (15)
NOVEMBER 2008
1. Explain the schematic diagram of scheduling queues. (15)
2. (a) Explain about Deadlock Detection and avoidance (8)
(b) Explain Dining philosophers with a schematic diagram (7)
MAY 2008
1. (a) Explain the operations in processes (8)
(b) Give detailed discussions on process schedulers? (7)
2. (a) Explain Synchronization hardware (8)
(b) “How do you recover from deadlock”? (7)
MAY 2009
1. (a) What is the critical section problem? Explain any two algorithms that are
applicable to two processes at a time (10)
(b) Explain resource allocation graph algorithm with example (5)
2. (a) Explain the issues of threading (7)
(b) What are the four necessary conditions to prevent the occurrence of a deadlock? (8)
 
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptxBasic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptxDenish Jangid
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfAdmir Softic
 
Graduate Outcomes Presentation Slides - English
Graduate Outcomes Presentation Slides - EnglishGraduate Outcomes Presentation Slides - English
Graduate Outcomes Presentation Slides - Englishneillewis46
 
Exploring_the_Narrative_Style_of_Amitav_Ghoshs_Gun_Island.pptx
Exploring_the_Narrative_Style_of_Amitav_Ghoshs_Gun_Island.pptxExploring_the_Narrative_Style_of_Amitav_Ghoshs_Gun_Island.pptx
Exploring_the_Narrative_Style_of_Amitav_Ghoshs_Gun_Island.pptxPooja Bhuva
 
How to Add New Custom Addons Path in Odoo 17
How to Add New Custom Addons Path in Odoo 17How to Add New Custom Addons Path in Odoo 17
How to Add New Custom Addons Path in Odoo 17Celine George
 

Kürzlich hochgeladen (20)

NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...
NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...
NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...
 
Google Gemini An AI Revolution in Education.pptx
Google Gemini An AI Revolution in Education.pptxGoogle Gemini An AI Revolution in Education.pptx
Google Gemini An AI Revolution in Education.pptx
 
Plant propagation: Sexual and Asexual propapagation.pptx
Plant propagation: Sexual and Asexual propapagation.pptxPlant propagation: Sexual and Asexual propapagation.pptx
Plant propagation: Sexual and Asexual propapagation.pptx
 
FSB Advising Checklist - Orientation 2024
FSB Advising Checklist - Orientation 2024FSB Advising Checklist - Orientation 2024
FSB Advising Checklist - Orientation 2024
 
Python Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docxPython Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docx
 
Interdisciplinary_Insights_Data_Collection_Methods.pptx
Interdisciplinary_Insights_Data_Collection_Methods.pptxInterdisciplinary_Insights_Data_Collection_Methods.pptx
Interdisciplinary_Insights_Data_Collection_Methods.pptx
 
Jamworks pilot and AI at Jisc (20/03/2024)
Jamworks pilot and AI at Jisc (20/03/2024)Jamworks pilot and AI at Jisc (20/03/2024)
Jamworks pilot and AI at Jisc (20/03/2024)
 
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
 
On National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan FellowsOn National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan Fellows
 
ICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptxICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptx
 
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
 
Understanding Accommodations and Modifications
Understanding  Accommodations and ModificationsUnderstanding  Accommodations and Modifications
Understanding Accommodations and Modifications
 
Wellbeing inclusion and digital dystopias.pptx
Wellbeing inclusion and digital dystopias.pptxWellbeing inclusion and digital dystopias.pptx
Wellbeing inclusion and digital dystopias.pptx
 
On_Translating_a_Tamil_Poem_by_A_K_Ramanujan.pptx
On_Translating_a_Tamil_Poem_by_A_K_Ramanujan.pptxOn_Translating_a_Tamil_Poem_by_A_K_Ramanujan.pptx
On_Translating_a_Tamil_Poem_by_A_K_Ramanujan.pptx
 
COMMUNICATING NEGATIVE NEWS - APPROACHES .pptx
COMMUNICATING NEGATIVE NEWS - APPROACHES .pptxCOMMUNICATING NEGATIVE NEWS - APPROACHES .pptx
COMMUNICATING NEGATIVE NEWS - APPROACHES .pptx
 
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptxBasic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdf
 
Graduate Outcomes Presentation Slides - English
Graduate Outcomes Presentation Slides - EnglishGraduate Outcomes Presentation Slides - English
Graduate Outcomes Presentation Slides - English
 
Exploring_the_Narrative_Style_of_Amitav_Ghoshs_Gun_Island.pptx
Exploring_the_Narrative_Style_of_Amitav_Ghoshs_Gun_Island.pptxExploring_the_Narrative_Style_of_Amitav_Ghoshs_Gun_Island.pptx
Exploring_the_Narrative_Style_of_Amitav_Ghoshs_Gun_Island.pptx
 
How to Add New Custom Addons Path in Odoo 17
How to Add New Custom Addons Path in Odoo 17How to Add New Custom Addons Path in Odoo 17
How to Add New Custom Addons Path in Odoo 17
 

Operating Systems Unit Two - Fourth Semester - Engineering

  • 2. 1. Process State: The state may be new, ready, running, waiting, halted, and so on.
Figure 2.1 Process state
  • 3. 2. Program Counter: The counter indicates the address of the next instruction to be executed for this process.
3. CPU Registers: These include accumulators, index registers, stack pointers, general-purpose registers, and any condition-code information.
4. CPU Scheduling information: This includes the process priority, pointers to scheduling queues, and any other scheduling parameters.
5. Memory-management information: This includes the values of the base and limit registers, the page tables, or the segment tables.
6. Accounting information: This includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
7. I/O Status information: This includes the list of I/O devices allocated to this process, a list of open files, and so on.
Figure 2.2 Process control block (fields: pointer, process state, process number, program counter, registers, memory limits, list of open files)
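The state diagram of Figure 2.1 can also be expressed as a transition table. The following Python sketch is illustrative only: the state names follow the text, but the table of legal transitions is our reading of the figure, not something the text spells out.

```python
# Illustrative transition table for the five-state process model.
VALID = {
    ("new", "ready"),           # process admitted to the ready queue
    ("ready", "running"),       # dispatched by the scheduler
    ("running", "waiting"),     # waits for I/O or an event
    ("waiting", "ready"),       # I/O completes or the event occurs
    ("running", "ready"),       # preempted by an interrupt
    ("running", "terminated"),  # process exits
}

def transition(state, new_state):
    """Move to new_state, rejecting transitions the model does not allow."""
    if (state, new_state) not in VALID:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

# Walk one process through a typical lifetime.
state = "new"
for nxt in ("ready", "running", "waiting", "ready", "running", "terminated"):
    state = transition(state, nxt)
print(state)  # -> terminated
```

Note, for instance, that a process never moves directly from NEW to RUNNING; it must first be admitted to the ready queue.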
  • 4. Figure 2.3 CPU switch from process to process (one process executes while the other is idle; on an interrupt or system call, the running process's state is saved into PCB0 and the other process's state is reloaded from PCB1, and vice versa)
  • 5. If a process is ready, it is executed. When an interrupt occurs, the state of the running process is saved into its PCB (PCB0), and the state of the next process is reloaded from its PCB (PCB1) so that it can execute. On the next interrupt, the state is saved into PCB1 and reloaded from PCB0, so the first process runs again. In this way the CPU switches from one process to another.
2.2 PROCESS SCHEDULING
The main objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The main objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. A system with one CPU can have only one running process at any time.
As jobs enter the system, they are put on a queue called the job pool, which consists of all jobs in the system. The processes that reside in main memory and are ready to execute are kept in the ready queue. A queue has a header node that contains pointers to the first and last PCBs in the list. There are also other queues in the system, such as device queues: the lists of processes waiting for a particular device. Each device has its own queue. A general queuing diagram is given below:
  • 6. Figure 2.8 Queuing diagram for Process Control Blocks
Device queues exist for devices such as magnetic tape, disk, and terminal. Each queue has a header node that points to the first and last links; the list itself is simply the PCBs connected in list format.
The list of processes waiting for a particular I/O device is called a device queue. Each device has its own device queue. A common representation of process scheduling is a queuing diagram. Two types of queues are present:
1. the ready queue
2. a set of device queues.
Figure 2.9 Queuing diagram for processes (rectangles represent queues, and arrows represent the flow of processes)
  • 7. A new process is initially put in the ready queue, where it waits until it is selected for execution. Once the process is running, one of the following events could occur:
1. The process could issue an I/O request and be placed in an I/O queue.
2. The process could create a new subprocess and wait for its termination.
3. The process could be removed from the CPU, as the result of an interrupt, and be put back in the ready queue.
Schedulers
The selection of processes from the queues is managed by schedulers. There are two kinds:
• The long-term scheduler (or job scheduler) selects processes from a mass-storage device (i.e., hard disk) where they are spooled, loads them into memory for execution, and creates a new process for each selected job.
• The short-term scheduler (or CPU scheduler) is responsible for scheduling ready processes that are already loaded in memory and are ready to execute.
The long-term scheduler brings processes into memory and hands them over to the CPU scheduler. It controls the number of processes that the CPU scheduler is handling and thus maintains the degree of multiprogramming, which corresponds to the number of processes in memory. The long-term scheduler has to make a careful selection among I/O-bound and CPU-bound processes: I/O-bound processes spend more of their time doing I/O than computation, while CPU-bound processes spend more time on computation than I/O.
  • 8. A long-term scheduler should pick a relatively good mix of I/O-bound and CPU-bound processes so that system resources are well utilized. If all processes are I/O-bound, the ready queue will almost always be empty; if all processes are CPU-bound, the I/O queues will almost always be empty. A balanced combination should be selected for system efficiency.
Context Switching
Switching the CPU from one process to another requires saving the state of the running process into its PCB and loading the saved state of the next process from its PCB. This is known as a context switch. Switching involves copying registers, local and global data, file buffers, and other information to the PCB.
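The ready queue described above is essentially a FIFO list of PCBs from which the short-term scheduler dispatches. A minimal illustrative sketch in Python (the PCB fields here are a small subset of those listed in the text; the class and variable names are ours):

```python
# Illustrative sketch of the ready queue as a FIFO of PCB-like records.
from collections import deque

class PCB:
    def __init__(self, pid):
        self.pid = pid            # process number
        self.state = "ready"      # process state

ready_queue = deque()
for pid in (1, 2, 3):
    ready_queue.append(PCB(pid))  # new processes join the tail of the queue

running = ready_queue.popleft()   # short-term scheduler dispatches the head
running.state = "running"
print(running.pid)                   # -> 1
print([p.pid for p in ready_queue])  # -> [2, 3]
```

A device queue would have exactly the same structure, one per device, holding the PCBs of processes waiting for that device.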
  • 9. 2.3 OPERATIONS ON PROCESSES
The major operations on processes are creation and termination.
Process Creation
The long-term scheduler creates a job's first process when the job is selected for execution from the job pool. A process may create new processes via various system calls. The created processes are called child processes, while the creating process is referred to as the parent process. On UNIX, the system call is fork(). Creating a process involves creating a PCB for it and scheduling it for execution.
Process Execution
Depending on OS policy, a newly created process may inherit its resources from its parent, or it may acquire its own resources from the OS. When a child process is restricted to the parent's resources, new processes do not overload the system. Some initialization data may also be passed from the parent to the child process.
In UNIX, each process has a distinct process identifier (a process number, as referred to above), and each child process starts with a copy of the address space of the parent. This eases communication between parent and child processes. When a new (child) process is created, either the parent runs concurrently with its child, or the parent waits until the child terminates.
Process Termination
Having completed its execution and sent its output to its parent, a process terminates by signaling the OS that it is finished. On UNIX, this is accomplished via the exit() system call. The OS then de-allocates memory and reclaims resources, such as I/O buffers and open files, that were allocated to that process. On some systems, when a parent process terminates, the OS also terminates all child processes. Likewise, the parent may terminate its child process if:
• the child has exceeded its usage of the resources it has been allocated; or
• the task assigned to the child is no longer required; or
• the OS does not allow a child to continue after its parent terminates.
Terminating all child processes when a parent terminates is known as cascading termination.
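The create/wait/terminate life cycle can be demonstrated with Python's multiprocessing module, which on UNIX is built on fork(), exit(), and wait(). This is a hedged sketch: the child function, queue, and message are illustrative, not from the text.

```python
# Sketch of process creation and termination: a parent creates a child,
# receives the child's output, and waits for it to terminate.
from multiprocessing import Process, Queue

def child_task(q):
    q.put("done")   # child sends its output to the parent;
                    # returning from the function terminates the child (exit)

if __name__ == "__main__":
    q = Queue()
    child = Process(target=child_task, args=(q,))
    child.start()          # parent creates the child process (fork)
    result = q.get()       # parent receives the child's output
    child.join()           # parent waits for the child to terminate (wait)
    print(result)          # -> done
    print(child.exitcode)  # -> 0
```

Here the parent blocks in join() until the child terminates; the alternative, per the text, is for the parent to continue running concurrently and collect the child's status later.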
  • 10. 2.4 COOPERATING PROCESSES
Processes that run concurrently may be either independent or cooperating. An independent process neither affects nor is affected by other processes executing in the system. A cooperating process can affect, or be affected by, the state of other processes, for example by sharing memory or sending signals.
Advantages of process cooperation
1. Information sharing: Several processes may need simultaneous access to the same information source, such as a shared database.
2. Computation speed-up: On systems with multiple CPUs, a task can be decomposed into subtasks that execute in parallel, providing speedup.
3. Modularity: The system can be constructed in a modular fashion.
4. Convenience: A user may want to run different tools, such as editing, printing, and compiling, in parallel.
Example
Consider the producer-consumer problem, the classical example of cooperating processes and of inter-process communication. An operating system may have many processes that need to communicate: imagine a program that produces output internally which is later consumed by a printer driver. In the unbounded-buffer producer-consumer problem there is no restriction on the size of the buffer; the bounded-buffer producer-consumer problem assumes a fixed buffer size.
A producer process produces information that is consumed by a consumer process. The producer places items into a buffer and the consumer takes them from the buffer. When the buffer is full, the producer must wait until the consumer consumes at least one item; likewise, when the buffer is empty, the consumer must wait until the producer places at least one item into the buffer. Consider a shared-memory solution to the bounded-buffer problem.
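The circular-buffer scheme can be simulated in a few lines. This is an illustrative Python sketch (the function names try_produce/try_consume are ours); it mirrors the in/out index logic of the C-style code that follows.

```python
# Illustrative simulation of the shared circular bounded buffer.
BUFFER_SIZE = 5
buffer = [None] * BUFFER_SIZE
in_ = 0    # next free position ("in" is a Python keyword, hence in_)
out = 0    # next filled position

def try_produce(item):
    """Place item in the buffer; return False if the buffer is full."""
    global in_
    if (in_ + 1) % BUFFER_SIZE == out:   # full: only n-1 slots are usable
        return False
    buffer[in_] = item
    in_ = (in_ + 1) % BUFFER_SIZE
    return True

def try_consume():
    """Remove and return the oldest item, or None if the buffer is empty."""
    global out
    if in_ == out:                       # empty
        return None
    item = buffer[out]
    out = (out + 1) % BUFFER_SIZE
    return item

for i in range(4):
    assert try_produce(i)                # four items fit when n = 5
print(try_produce(99))                   # -> False (buffer full)
print([try_consume() for _ in range(5)]) # -> [0, 1, 2, 3, None]
```

Note the classic consequence of this full/empty test: a buffer of size n can hold at most n - 1 items, because in == out must unambiguously mean "empty".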
  • 11. Producer code:
while (1) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing: buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}
Consumer code:
while (1) {
    while (in == out)
        ; /* do nothing: buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}
The shared buffer is an array of size BUFFER_SIZE used circularly: the buffer is empty when in == out, and full when (in + 1) % BUFFER_SIZE == out (so at most BUFFER_SIZE - 1 items can be stored at once).
2.5 INTERPROCESS COMMUNICATION
Cooperating processes can communicate in a shared-memory environment, which requires a common buffer pool for sharing. Cooperating processes can also communicate via an interprocess communication (IPC) facility. IPC provides a mechanism to allow processes
  • 12. to communicate and to synchronize their actions without sharing the same address space. It involves four topics:
1. Message-passing system
2. Naming
3. Synchronization
4. Buffering
2.5.1 Message-Passing System
The function of a message-passing system is to allow processes to communicate with one another without resorting to shared data. Communication is established by means of a message-passing mechanism. IPC provides two operations:
1. send
2. receive
Messages sent by a process can be of fixed or variable size. If processes P and Q want to communicate, they must send messages to and receive messages from each other over a communication link. A link can be implemented in several ways:
1. Direct or indirect communication
2. Symmetric or asymmetric communication
3. Automatic or explicit buffering
4. Send by copy or send by reference
5. Fixed-sized or variable-sized messages
2.5.2 Naming
Processes that want to communicate must have a way to refer to each other. They can use either direct or indirect communication.
1 Direct Communication
Each process that wants to communicate must explicitly name the recipient or sender of the communication. Here send and receive are defined as:
Send (P, message) - send a message to process P.
Receive (Q, message) - receive a message from process Q.
A communication link in this scheme has the following properties.
  • 13. 1. A link is established automatically between every pair of processes that want to communicate; the processes need to know only each other's identity.
2. A link is associated with exactly two processes.
3. Exactly one link exists between each pair of processes.
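The send/receive primitives can be mimicked with one message queue per process. The following Python sketch is illustrative only (it names the receiver's mailbox rather than pairing sender and receiver, a simplification of the scheme above; all names are ours), using threads to stand in for processes:

```python
# Illustrative sketch of message passing: each "process" (here, a thread)
# owns a queue, and send(P, m) delivers m to P's queue.
import queue
import threading

mailboxes = {"P": queue.Queue(), "Q": queue.Queue()}

def send(dest, message):
    mailboxes[dest].put(message)   # non-blocking send (unbounded queue)

def receive(me):
    return mailboxes[me].get()     # blocking receive: waits for a message

def process_q():
    send("P", "ping from Q")       # Q sends a message to P

t = threading.Thread(target=process_q)
t.start()
msg = receive("P")                 # P blocks until Q's message arrives
t.join()
print(msg)  # -> ping from Q
```

Because the sketch routes messages through named queues rather than naming processes directly, it is actually closer in spirit to the indirect (mailbox) scheme described next.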
  • 14. 2 Indirect Communication
Messages are sent to and received from mailboxes, or ports. A mailbox can be viewed as an object into which messages can be placed by processes and from which messages can be removed. Each mailbox has a unique identification, and two processes can communicate only if they share a mailbox. The send and receive operations are defined as:
Send (A, message) - send a message to mailbox A.
Receive (A, message) - receive a message from mailbox A.
In this scheme a communication link has the following properties:
1. A link is established between a pair of processes only if both members of the pair have a shared mailbox.
2. A link may be associated with more than two processes.
3. A number of different links may exist between each pair of communicating processes, with each link corresponding to one mailbox.
2.5.3 Synchronization
Communication between processes takes place through calls to the send and receive primitives. Message passing may be either blocking or non-blocking, also known as synchronous and asynchronous:
1 Blocking send: The sending process is blocked until the message is received by the receiving process or by the mailbox.
2 Non-blocking send: The sending process sends the message and resumes operation.
3 Blocking receive: The receiver blocks until a message is available.
4 Non-blocking receive: The receiver retrieves either a valid message or a null.
2.5.4 Buffering
Messages exchanged by communicating processes reside in a temporary queue, which can be implemented in three ways:
1. Zero capacity: The queue has maximum length 0, so the link cannot have any messages waiting in it. In this case, the sender must block until the recipient receives the message.
  • 15. 2. Bounded capacity: The queue has finite length n, so at most n messages can reside in it. If the queue is not full when a new message is sent, the sender can continue execution without waiting. The link has a finite capacity, however: if the queue is full, the sender must block until space is available.
3. Unbounded capacity: The queue has potentially infinite length, so any number of messages can wait in it; the sender never blocks.
2.6 THREADS
Threads are a logical extension of multiprogramming. A traditional process performs a single thread of execution. Example: if a process is running a word-processor program, a single thread of instructions is being executed. This single thread of control allows the process to perform only one task at a time; the user could not, for instance, type in characters and run the spell checker at the same time.
Threads
A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack. It shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals. A traditional (or heavyweight) process has a single thread of control. If the process has multiple threads of control, it can do more than one task at a time.
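As a concrete illustration of several threads of one process working on different tasks while sharing the process's data, here is a hedged sketch using Python's threading module (the toy "request handler", the lock, and all names are ours, not from the text):

```python
# Sketch of the thread-per-task idea: one thread is created per task,
# and all threads share the process's data (the results list).
import threading

results = []
lock = threading.Lock()

def handle_request(request):
    # Service one task; the lock protects the shared result list.
    with lock:
        results.append(request.upper())

requests = ["page1", "page2", "page3"]
workers = [threading.Thread(target=handle_request, args=(r,)) for r in requests]
for w in workers:
    w.start()   # create one thread per task instead of one process
for w in workers:
    w.join()    # wait until every task has been serviced
print(sorted(results))  # -> ['PAGE1', 'PAGE2', 'PAGE3']
```

The need for the lock hints at the synchronization problems that arise precisely because threads share the same address space, a topic developed later in the chapter.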
  • 16. Figure 2.4: Single-threaded and multithreaded processes
Motivation
Many software packages that run on modern desktop PCs are multithreaded.
(i) An application is implemented as a separate process with several threads of control.
(ii) A web browser might have one thread display images or text while another thread retrieves data from the network.
(iii) A word processor may have a thread for displaying graphics, another thread for reading keystrokes from the user, and a third thread for performing spelling and grammar checking in the background.
  • 17. In certain situations a single application may be required to perform several similar tasks. For example, a web server accepts client requests for web pages, images, sound, and so forth. A busy web server may have many (perhaps hundreds of) clients concurrently accessing it. If the web server ran as a traditional single-threaded process, it would be able to service only one client at a time, and the time a client might have to wait for its request to be serviced could be enormous.
One solution is to have the server run as a single process that accepts requests and, when a request is received, creates a separate process to service it. But process creation is very heavyweight: if the new process will perform the same tasks as the existing process, why incur all that overhead? It is generally more efficient for one process that contains multiple threads to serve the same purpose. This approach multithreads the web-server process: the server creates a separate thread that listens for client requests, and when a request is made, rather than creating another process, it creates another thread to service the request.
Threads also play an important role in remote procedure call (RPC) systems.
• RPCs allow inter-process communication by providing a communication mechanism similar to ordinary function or procedure calls.
• Typically, RPC servers are multithreaded.
• When a server receives a message, it services the message using a separate thread. This allows the server to service several concurrent requests.
Benefits
The benefits of multithreaded programming can be broken down into four major categories:
1 Responsiveness
Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user.
For instance, a multithreaded web browser could still allow user interaction in one thread while an image is being loaded in another thread.
  • 18. 2 Resource sharing
By default, threads share the memory and the resources of the process to which they belong. The benefit of this sharing is that it allows an application to have several different threads of activity all within the same address space.
3 Economy
Allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads. It can be difficult to gauge empirically the difference in overhead, but in general it is much more time-consuming to create and manage processes than threads. In Solaris 2, creating a process is about 30 times slower than creating a thread, and context switching is about five times slower.
4 Utilization of multiprocessor architectures
The benefits of multithreading can be greatly increased in a multiprocessor architecture, where each thread may run in parallel on a different processor. A single-threaded process can run on only one CPU, no matter how many are available. Multithreading on a multi-CPU machine increases concurrency. On a single-processor architecture, the CPU generally moves between threads so quickly that it creates an illusion of parallelism, but in reality only one thread is running at a time.
User and Kernel Threads
• Threads may be provided either at the user level (user threads) or by the kernel (kernel threads).
• User threads are supported above the kernel and are implemented by a thread library at the user level. The library provides support for thread creation, scheduling, and management with no support from the kernel. Because the kernel is unaware of user-level threads, all thread creation and scheduling are done in user space without kernel intervention. Therefore, user-level threads are generally fast to create and manage; they have drawbacks, however.
For instance, if the kernel is single-threaded, then any user-level thread performing a blocking system call will cause the entire process to block, even if other threads are available to run within the application.
  • 19. • User-thread libraries include POSIX Pthreads, Mach C-threads, and Solaris 2 UI-threads.
• Kernel threads are supported directly by the operating system: the kernel performs thread creation, scheduling, and management in kernel space.
• Because thread management is done by the operating system, kernel threads are generally slower to create and manage than user threads.
• However, since the kernel is managing the threads, if a thread performs a blocking system call, the kernel can schedule another thread in the application for execution.
• Also, in a multiprocessor environment, the kernel can schedule threads on different processors.
• Most contemporary operating systems (including Windows NT, Windows 2000, Solaris 2, BeOS, and Tru64 UNIX, formerly Digital UNIX) support kernel threads.
2.7 MULTITHREADING MODELS
Many systems provide support for both user and kernel threads, resulting in different multithreading models. We look at three common types of threading implementation.
2.7.1 Many to one Model
• The many-to-one model maps many user-level threads to one kernel thread.
• Thread management is done in user space, so it is efficient, but the entire process will block if a thread makes a blocking system call.
• Also, because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multiprocessors.
  • 20. Figure 2.5 Many to one model
In addition, user-level thread libraries implemented on operating systems that do not support kernel threads use the many-to-one model.
2.7.2 One to one Model
The one-to-one model maps each user thread to a kernel thread. It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call; it also allows multiple threads to run in parallel on multiprocessors. The only drawback to this model is that creating a user thread requires creating the corresponding kernel thread.
  • 21. Figure 2.6 One to one model
Because the overhead of creating kernel threads can burden the performance of an application, most implementations of this model restrict the number of threads supported by the system. Windows NT, Windows 2000, and OS/2 implement the one-to-one model.
2.7.3 Many to many Model
• The many-to-many model multiplexes many user-level threads to a smaller or equal number of kernel threads.
• The number of kernel threads may be specific to either a particular application or a particular machine (an application may be allocated more kernel threads on a multiprocessor than on a uniprocessor).
• Whereas the many-to-one model allows the developer to create as many user threads as desired, true concurrency is not gained because the kernel can schedule only one thread at a time.
• The one-to-one model allows for greater concurrency, but the developer has to be careful not to create too many threads within an application (and in some instances may be limited in the number of threads that can be created).
• The many-to-many model suffers from neither of these shortcomings: developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.
Also, when a thread performs a blocking system call, the kernel can schedule another thread for execution. Solaris 2, IRIX, HP-UX, and Tru64 UNIX support this model.
Figure 2.7 Many to many model
Threading Issues
Some of the issues to be considered with multithreaded programs:
1 The fork and exec System Calls
• In a multithreaded program, the semantics of the fork and exec system calls change.
• Some UNIX systems have chosen to have two versions of fork, one that duplicates all threads and another that duplicates only the thread that invoked the fork system call.
• If a thread invokes the exec system call, the program specified in the parameter to exec will replace the entire process, including all threads and LWPs.
• Usage of the two versions of fork depends upon the application. If exec is called immediately after forking, then duplicating all threads is unnecessary, as the program specified in the parameters to exec will replace the process.
• In this instance, duplicating only the calling thread is appropriate. If, however, the separate process does not call exec after forking, the separate process should duplicate all threads.
2 Cancellation
Thread cancellation is the task of terminating a thread before it has completed. For example, if multiple threads are concurrently searching through a database and one thread returns the result, the remaining threads might be cancelled. Another situation might occur when a user presses a button on a web browser that stops a web page from loading any further. Often a web page is loaded in a separate thread. When a user presses the stop button, the thread loading the page is cancelled.
A thread that is to be cancelled is often referred to as the target thread. Cancellation of a target thread may occur in two different scenarios:
1. Asynchronous cancellation: One thread immediately terminates the target thread.
2. Deferred cancellation: The target thread can periodically check if it should terminate, allowing the target thread an opportunity to terminate itself in an orderly fashion.
The difficulty with cancellation occurs in situations where resources have been allocated to a cancelled thread, or where a thread is cancelled while in the middle of updating data it is sharing with other threads. This becomes especially troublesome with asynchronous cancellation. The operating system will often reclaim system resources from a cancelled thread, but often will not reclaim all resources. Therefore, cancelling a thread asynchronously may not free a necessary system-wide resource.
Alternatively, deferred cancellation works by one thread indicating that a target thread is to be cancelled. However, cancellation will occur only when the target thread checks to determine whether it should be cancelled or not. This allows a thread to check whether it should be cancelled at a point when it can safely be cancelled. Pthreads refers to such points as cancellation points.
Most operating systems allow a process or thread to be cancelled asynchronously. However, the Pthread API provides deferred cancellation. This means that an operating system implementing the Pthread API will allow deferred cancellation.
3 Signal Handling
A signal is used in UNIX systems to notify a process that a particular event has occurred. A signal may be received either synchronously or asynchronously, depending upon the source and the reason for the event being signaled. Whether a signal is synchronous or asynchronous, all signals follow the same pattern:
1. A signal is generated by the occurrence of a particular event.
2. A generated signal is delivered to a process.
3. Once delivered, the signal must be handled.
Examples of synchronous signals include an illegal memory access or division by zero.
• Synchronous signals are delivered to the same process that performed the operation causing the signal (hence the reason they are considered synchronous).
• When a signal is generated by an event external to a running process, that process receives the signal asynchronously. Examples of such signals include terminating a process with specific keystrokes (such as <control><C>) or having a timer expire. Typically an asynchronous signal is sent to another process.
Every signal may be handled by one of two possible handlers:
1. A default signal handler
2. A user-defined signal handler
• Every signal has a default signal handler that is run by the kernel when handling the signal.
• Both synchronous and asynchronous signals may be handled in different ways.
• Some signals may simply be ignored (such as changing the size of a window); others may be handled by terminating the program (such as an illegal memory access).
• Handling signals in single-threaded programs is straightforward; signals are always delivered to a process. However, delivering signals is more complicated in multithreaded programs, as a process may have several threads. Where then should a signal be delivered? In general, the following options exist:
1. Deliver the signal to the thread to which the signal applies.
2. Deliver the signal to every thread in the process.
3. Deliver the signal to certain threads in the process.
4. Assign a specific thread to receive all signals for the process.
• The method for delivering a signal depends upon the type of signal generated.
• For example, synchronous signals need to be delivered to the thread that generated the signal and not to other threads in the process.
• Some asynchronous signals, such as a signal that terminates a process (<control><C>, for example), should be sent to all threads.
Thread Pools
• In a multithreaded server, whenever the server receives a request, it creates a separate thread to service the request.
• Creating a thread per request raises two potential problems for such a server.
• The first concerns the amount of time required to create the thread prior to servicing the request, together with the fact that this thread will be discarded once it has completed its work.
• The second issue is more problematic: if we allow all concurrent requests to be serviced in a new thread, we have not placed a bound on the number of threads concurrently active in the system.
• Unlimited threads could exhaust system resources, such as CPU time or memory.
• One solution to this issue is to use thread pools.
• The general idea behind a thread pool is to create a number of threads at process startup and place them into a pool, where they sit and wait for work. When a server receives a request, it awakens a thread from this pool, if one is available, passing it the request to service.
• Once the thread completes its service, it returns to the pool awaiting more work. If the pool contains no available thread, the server waits until one becomes free.
In particular, the benefits of thread pools are:
1. It is usually faster to service a request with an existing thread than to wait for a new thread to be created.
2. A thread pool limits the number of threads that exist at any one point. This is particularly important on systems that cannot support a large number of concurrent threads.
The number of threads in the pool can be set based upon factors such as the number of CPUs in the system, the amount of physical memory, and the expected number of concurrent client requests.
Thread Specific Data
• Threads belonging to a process share the data of the process.
• This sharing of data provides one of the benefits of multithreaded programming.
• Each thread might need its own copy of certain data in some circumstances. We will call such data thread-specific data.
• For example, in a transaction-processing system, we might service each transaction in a separate thread.
• Furthermore, each transaction may be assigned a unique identifier. To associate each thread with its unique identifier we could use thread-specific data.
Most thread libraries, including Win32 and Pthreads, provide some form of support for thread-specific data. Java provides support as well.
2.8 COOPERATING PROCESSES AND SYNCHRONIZATION
Cooperating processes share resources. Synchronization uses atomic operations to ensure correct cooperation between processes.
The "Too Much Milk" problem:
Time   Person A                        Person B
3:00   Look in fridge. Out of milk.
3:05   Leave for store.
3:10   Arrive at store.                Look in fridge. Out of milk.
3:15   Leave store.                    Leave for store.
3:20   Arrive home, put milk away.     Arrive at store.
3:25                                   Leave store.
                                       Arrive home. OH, NO!
One of the most important things in synchronization is to figure out what you want to achieve. In the given problem: somebody gets milk, but we don't get too much milk.
Mutual exclusion
Mechanisms that ensure that only one person or process is doing certain things at one time (others are excluded). E.g. only one person goes shopping at a time.
Critical section
A section of code, or collection of operations, in which only one process may be executing at a given time. E.g. shopping.
There are many ways to achieve mutual exclusion. Most involve some sort of locking mechanism: prevent someone from doing something. For example, before shopping, leave a note on the refrigerator.
Three elements of locking:
1. Must lock before using. (Leave note.)
2. Must unlock when done. (Remove note.)
3. Must wait if locked. (Don't shop if note.)
1st attempt at computerized milk buying:
Processes A & B:
if (No Milk) {
    if (No Note) {
        Leave Note;
        Buy Milk;
        Remove Note;
    }
}
2.9 CRITICAL SECTION PROBLEM
The Critical-Section Problem
1. n processes all competing to use some shared data.
2. Each process has a code segment, called the critical section, in which the shared data is accessed.
3. Problem: ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
4. Structure of process Pi:
repeat
    entry section
    critical section
    exit section
    remainder section
until false;
Solution to Critical-Section Problem
1. Mutual Exclusion
If process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress
If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter the critical section next cannot be postponed indefinitely.
3. Bounded Waiting
A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
_ Assume that each process executes at a nonzero speed.
_ No assumption is made concerning the relative speed of the n processes.
Initial Attempts to Solve the Problem
1. Only 2 processes, P0 and P1.
2. General structure of process Pi (other process Pj):
repeat
    entry section
    critical section
    exit section
    remainder section
until false;
3. Processes may share some common variables to synchronize their actions.
There are 7 topics:
1. Producer-consumer problem.
2. Implementation of critical section problem algorithms.
3. Modified producer-consumer problem.
4. Primitives for mutual exclusion.
5. Alternating policy.
6. Hardware assistance.
7. Semaphore.
2.9.1 Producer Consumer Problem
Begin
P.0 while flag = 1 do; /* wait */
P.1 output the number;
P.2 set flag = 1;
End
Producer Process
No more than one process should be in the critical region at a time. E.g.: if Word, Excel and a spreadsheet all try to print at the same time, the printer gets confused.
STEPS
i. Let us assume that initially the flag = 0.
ii. One of the producer processes (PA) executes instruction P.0. Because the flag = 0, it does not wait at P.0, but goes on to instruction P.1.
iii. PA outputs a number in the shared variable by executing instruction P.1.
iv. At this moment, the time slice allocated to PA gets over and that process is moved from the running to the ready state. The flag is still 0.
v. Another producer process PB is now scheduled.
vi. PB also executes its P.0 and finds the flag as 0, and therefore goes to its P.1.
vii. PB overwrites the shared variable by instruction P.1, therefore causing the previous data to be lost.
Consumer Process
Begin
C.0 while flag = 0 do; /* wait */
C.1 print the number;
C.2 set flag = 0;
End
2.9.2 Modified Producer-Consumer Problem
Producer Process
Begin
P.0 while flag = 1 do; /* wait */
P.1 set flag = 1;
P.2 output the number
End
Consumer Process
Begin
C.0 while flag = 0 do; /* wait */
C.1 set flag = 0;
C.2 print the number
End
i. Initially flag = 0.
ii. PA executes instruction P.0 and falls through to P.1 as the flag = 0.
iii. PA sets flag to 1 by instruction P.1.
iv. The time slice for PA is over and the processor is allocated to another producer process PB.
v. PB keeps waiting at instruction P.0 because flag is now 1. This continues until its time slice also is over, without doing anything useful. Hence, even if the shared data item is empty, PB cannot output the number. This is clearly wasteful, though it may not be a serious problem.
vi. A consumer process CA is now scheduled. It will fall through C.0 because flag = 1.
vii. CA will set flag to 0 by instruction C.1.
viii. CA will print the number by instruction C.2 before the producer has output it.
Let P1 and P2 be the two processes. If a process finds flag = 0, it sets flag = 1 and proceeds. If P1's time slice expires while it is still in the critical region, P2 sees flag = 1, so it waits, and may wait for a long time. So this approach is not suitable.
2.9.3 Primitives for Mutual Exclusion
Producer Process
Begin
P.0 while flag = 1 do; /* wait */
P.S1 Begin-Critical-Region;
P.1 output the number
P.2 set flag = 1;
P.S2 End-Critical-Region
End
Consumer Process
Begin
C.0 while flag = 0 do; /* wait */
C.S1 Begin-Critical-Region;
C.1 print the number;
C.2 set flag = 0;
C.S2 End-Critical-Region
End
i. Let us assume that initially the flag = 0.
ii. A producer process PA executes P.0; because flag = 0, it falls through to P.S1. Again, assuming that there is no other process in the critical region, it will fall through to P.1.
iii. PA outputs the number in a shared variable by executing P.1.
iv. Let us assume that at this moment the time slice for PA gets over, and it is moved into the 'Ready' state from the 'Running' state. The flag is still 0.
v. Another producer process PB now executes P.0. It finds that flag = 0 and so falls through to P.S1.
vi. Because PA is in the critical region already, PB is not allowed to proceed further, thereby avoiding the problem of race conditions. This is our assumption about mutual exclusion primitives.
E.g.: if one process is executing in the critical region, no other process is allowed to execute there. Suppose process P1 enters the critical region; even if its time slice expires, it still resides in the critical region. If the second process then tries to enter the critical region, it is not allowed to, because mutual exclusion is enforced. Thus, until process 1 completes, process 2 has to wait. This is also not sufficient, so we go on to the alternating policy.
2.9.4 Alternating Policy
This policy allows only one process at a time into the critical region by making the two processes take strict turns, controlled by a shared process-ID variable. Let us assume that initially process-ID is set to "A" and process A is then scheduled. This is done by the operating system.
1) Process A will execute instruction A.0 and fall through to A.1, because process-ID = "A".
2) Process A will first execute the critical region, and only then is process-ID set to "B" at instruction A.2. Hence, even if a context switch takes place after A.0, or even after A.1 but before A.2, and process B is then scheduled, process B will continue to loop at instruction B.0 and will not enter the critical region unless process-ID = "B". That can happen only at instruction A.2, which in turn can happen only after process A has executed its critical region in instruction A.1. This is clear from the program as given in the figure.
PROCESS A
Begin
A.0 while process-Id = "B" do; /* wait */
A.1 Critical Region-A;
A.2 Set process-Id = "B"
End
PROCESS B
Begin
B.0 while process-Id = "A" do; /* wait */
B.1 Critical Region-B;
B.2 Set process-Id = "A"
End
2.9.5 Hardware Assistance
Even with the earlier attempts we cannot tell which result belongs to which process, and results may be overwritten. Hence we go to the semaphore.
2.10 SEMAPHORE
It is the best solution for the critical section problem. A semaphore is a protected variable. It can be accessed and changed only by the DOWN procedure and the UP procedure.
Begin
    Initial-portion;
    call Enter-Critical-Region;
    Critical Region;
    call Exit-Critical-Region;
    Remaining-portion;
End
DOWN(S):
    Lock;
    if S > 0 then
        S := S - 1;
    else
        move the current PCB from the Running state to the semaphore queue;
    Unlock;
    Exit;
Figure 2.10 DOWN Procedure
In the above figure, DOWN and UP form the mutual exclusion primitives for any process. Hence, if a process has a critical region, it has to be encapsulated between the DOWN and UP instructions. The DOWN and UP primitives ensure that only one process is in its critical region; all other processes waiting to enter their respective critical regions are kept waiting in a queue called the semaphore queue.
UP(S):
    Lock;
    if the semaphore queue is empty then
        S := S + 1;
    else
        move a PCB from the semaphore queue to the Ready state;
    Unlock;
Figure 2.11 UP Procedure
2.11 PROCESS COORDINATION PROBLEMS
Processes may be divided into two types:
1. Dependent (cooperating) processes
2. Independent processes
A process is independent if it cannot affect or be affected by the other processes executing in the system. It does not share data with any other process. A process is dependent, or cooperating, if it can affect or be affected by the other processes executing in the system. Process cooperation occurs for the following reasons.
1. Information sharing: since several users may share the same piece of information, we must provide an environment that allows concurrent access.
2. Computation speedup: if we want a particular task to run faster, we must break it into subtasks, each of which will execute in parallel with the others.
3. Modularity: dividing the system functions into separate processes or threads.
4. Convenience: an individual user may have many tasks to work on at one time, i.e. the user may be editing, printing and compiling in parallel.
To illustrate the concept of cooperating processes, let us consider the producer-consumer problem. A producer can produce one item while the consumer is consuming another item. The producer puts items into the buffer, and the consumer takes items from the buffer. Both producer and consumer must be synchronized: if the buffer is full, the producer cannot put an item into the buffer; similarly, if the buffer is empty, the consumer cannot take an item from the buffer.
The following code illustrates the working of the producer-consumer problem. The producer has a local variable next_produced in which the newly produced item is stored.
while (1) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}
The consumer process has a local variable next_consumed in which the item to be consumed is stored.
while (1) {
    while (in == out)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}
The following are examples of coordination problems.
1. Bounded buffer problem
2. Readers-writers problem
3. Dining-philosophers problem
4. The Sleeping Barber shop problem
5. Baboons Crossing a Canyon
6. Cigarette Smokers Problem
2.11.1 Bounded buffer problem
An N-element buffer; producers and consumers work with this buffer. A consumer cannot proceed till a producer has produced something. A producer cannot proceed if the buffer holds N items.
2.11.2 Readers-writers problem
A shared database: any number of readers can concurrently read its content, but only one writer can write at any one time (with exclusive access). Variations: no reader will be kept waiting unless a writer has already received exclusive write permission; once a writer is ready, it gets exclusive permission as soon as possible; once a writer is waiting, no further reads are allowed.
• The readers-writers problem deals with data objects shared among several concurrent processes.
• Some processes are readers, others writers; writers require exclusive access.
• aka shared vs. exclusive locks.
• Several variations exist; this discussion deals with the first readers-writers problem:
• No reader will be kept waiting unless a writer has already obtained permission to use the shared object (readers don't need to wait for other readers).
Readers-Writers solution
// shared data
semaphore mutex, wrt;
// mutex protects access to readcount
// wrt serves as mutual exclusion for writers; it is also used by the
// first/last reader to block/release writers
// Initially mutex = 1, wrt = 1, readcount = 0
Writer process:
wait(wrt);
// ...
// writing is performed
// ...
signal(wrt);
Reader process:
wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
// ...
// reading is performed
// ...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);
2.11.3 Dining-philosophers problem
Classical Problems of Synchronization - Dining Philosophers Problem
There are N philosophers sitting around a circular table eating spaghetti and discussing philosophy. The problem is that each philosopher needs 2 chopsticks to eat, and there are only N chopsticks, one between each pair of philosophers.
Figure 2.12 Dining Philosophers
Design an algorithm that the philosophers can follow that ensures that none starves as long as each philosopher eventually stops eating, and such that the maximum number of philosophers can eat at once.
Analysis
First, we notice that these philosophers are in a thinking, picking up chopsticks, eating, putting down chopsticks cycle, as shown below.
Here's a simple approach to the Dining Philosophers:
void philosopher() {
    while (1) {
        think();
        get_left_chopstick();
        get_right_chopstick();
        eat();
        put_left_chopstick();
        put_right_chopstick();
    }
}
If every philosopher picks up the left chopstick at the same time, no one gets to eat - ever.
How does a philosopher pick up chopsticks? The problem is that each chopstick is shared by two philosophers and is hence a shared resource. We certainly do not want a philosopher to pick up a chopstick that has already been picked up by his neighbor. This is a race condition.
To address this problem, we may consider each chopstick as a shared item protected by a mutex lock. Each philosopher, before he can eat, locks his left chopstick and locks his right chopstick. If the acquisitions of both locks are successful, this philosopher now owns two locks (hence two chopsticks), and can eat. After he finishes eating, this philosopher releases both chopsticks, and thinks! This execution flow is shown below.
Figure 2.12.1 Flow Diagram
Some other suboptimal alternatives:
• Pick up the left chopstick; if the right chopstick isn't available for a given time, put the left chopstick down, wait, and try again. (Big problem if all philosophers wait the same time - we get the same failure mode as before, but repeated.) Even if each philosopher waits a different random time, an unlucky philosopher may starve.
• Require all philosophers to acquire a binary semaphore before picking up any chopsticks. This guarantees that no philosopher starves (assuming that the semaphore is fair) but limits parallelism dramatically.
Tanenbaum's solution, to get maximum concurrency:
#define N 5 /* number of philosophers */
#define LEFT(i) (((i) + N - 1) % N)
#define RIGHT(i) (((i) + 1) % N)
typedef enum { THINKING, HUNGRY, EATING } phil_state;
phil_state state[N];
semaphore mutex = 1;
semaphore s[N]; /* one per philosopher, all 0 */
void test(int i) {
    if (state[i] == HUNGRY &&
        state[LEFT(i)] != EATING &&
        state[RIGHT(i)] != EATING) {
        state[i] = EATING;
        V(s[i]);
    }
}
void get_chopsticks(int i) {
    P(mutex);
    state[i] = HUNGRY;
    test(i);
    V(mutex);
    P(s[i]);
}
void put_chopsticks(int i) {
    P(mutex);
    state[i] = THINKING;
    test(LEFT(i));
    test(RIGHT(i));
    V(mutex);
}
void philosopher(int process) {
    while (1) {
        think();
        get_chopsticks(process);
        eat();
        put_chopsticks(process);
    }
}
The magic is in the test routine. When a philosopher is hungry it uses test to try to eat. If test fails, it waits on a semaphore until some other process sets its state to EATING. Whenever a philosopher puts down chopsticks, it invokes test in its neighbors. (Note that test does nothing if the process is not hungry, and that mutual exclusion prevents races.) So this code is correct, but somewhat obscure. And more importantly, it doesn't encapsulate the philosopher - philosophers manipulate the state of their neighbors directly.
Here's a version that does not require a process to write another process's state, and gets equivalent parallelism.
#define N 5 /* number of philosophers */
#define LEFT(i) (((i) + N - 1) % N)
#define RIGHT(i) (((i) + 1) % N)
typedef enum { THINKING, HUNGRY, EATING } phil_state;
phil_state state[N];
semaphore mutex = 1;
semaphore s[N]; /* one per philosopher, all 0 */
void get_chopsticks(int i) {
    state[i] = HUNGRY;
    while (state[i] == HUNGRY) {
        P(mutex);
        if (state[i] == HUNGRY &&
            state[LEFT(i)] != EATING &&
            state[RIGHT(i)] != EATING) {
            state[i] = EATING;
            V(s[i]);
        }
        V(mutex);
        P(s[i]);
    }
}
void put_chopsticks(int i) {
    P(mutex);
    state[i] = THINKING;
    if (state[LEFT(i)] == HUNGRY)
        V(s[LEFT(i)]);
    if (state[RIGHT(i)] == HUNGRY)
        V(s[RIGHT(i)]);
    V(mutex);
}
void philosopher(int process) {
    while (1) {
        think();
        get_chopsticks(process);
        eat();
        put_chopsticks(process);
    }
}
If you really don't want to touch other processes' state at all, you can always do the V to the left and right when a philosopher puts down the chopsticks. (There's a case where a condition variable is a nice interface.)
2.11.4 The Barbershop Problem
Figure 2.13 Barber shop
Three barbers work independently in a barber shop:
• The barbershop has 3 barber chairs, each of which is assigned to one barber.
• Each barber follows the same work plan:
• The barber sleeps (or daydreams) when no customer is waiting (and is not in the barber's own chair).
• When the barber is asleep, the barber waits to be awakened by a new customer. (A sign in the shop indicates which barber has been asleep longest, so the customer will know which barber to wake up if multiple barbers are asleep.)
• Once awake, the barber cuts the hair of a customer in the barber's chair.
• When the haircut is done, the customer pays the barber and then is free to leave.
• After receiving payment, the barber calls the next waiting customer (if any). If such a customer exists, that customer sits in the barber's chair and the barber starts the next haircut. If no customer is waiting, the barber goes back to sleep.
• Each customer follows the following sequence of events:
• When the customer first enters the barbershop, the customer leaves immediately if more than 20 people are waiting (10 standing and 10 sitting). On the other hand, if the barbershop is not too full, the customer enters and waits.
• If at least one barber is sleeping, the customer looks at a sign, wakes up the barber who has been sleeping the longest, and sits in that barber's chair (after the barber has stood up).
• If all the barbers are busy, the customer sits in a waiting-room chair, if one is available. Otherwise, the customer remains standing until a waiting-room chair becomes available.
• Customers keep track of their order, so the person sitting the longest is always the next customer to get a haircut.
• Similarly, standing customers remember their order, so the person standing the longest takes the next available waiting-room seat.
For this exercise, you are to write a C program to simulate activity for this barbershop:
a. Simulate each barber and each customer as a separate process.
b. Altogether, 30 customers should try to enter.
c. Use a random number generator so that a new customer arrives every 1, 2, 3, or 4 seconds. (This might be accomplished by a statement such as sleep(1 + (rand() % 4));)
d. Similarly, each haircut lasts between 3 and 6 seconds.
e. Each barber should report when he/she starts each haircut and when he/she finishes each haircut.
f. Each customer should report when he/she enters the barbershop. The customer also should report if he/she decides to leave immediately.
g. Similarly, if the customer must stand or sit in the waiting room, the customer should report when each activity begins.
h. Finally, the customer should report when the haircut begins and when the customer finally exits the shop.
i. Semaphores and shared memory should be used for synchronization.
2.11.5 Baboons Crossing a Canyon
A student majoring in anthropology and minoring in computer science has embarked on a research project to see if African baboons can be taught about deadlocks. She locates a deep canyon and fastens a rope across it, so the baboons can cross hand-over-hand. Passage along the rope follows these rules:
  • 49. c. Use a random number generator, so the time between baboon arrivals is between 1 and 8 seconds. d. Each baboon takes 1 second to get on the rope. (That is, the minimum inter- baboon spacing is 1 second.) e. All baboons travel at the same speed. Each traversal takes exactly 4 seconds, after the baboon is on the rope. f. Use semaphores for synchronization. You may also use shared memory. (Additional communication via sockets is allowed, but do not use sockets unless such communication is clearly needed.) 2.11.6 The Cigarette-Smokers Problem Consider a system with three smoker processes and one agent process. Each smoker continuously rolls a cigarette and then smokes it. But to roll and smoke a cigarette, the smoker needs three ingredients: tobacco, paper, and matches. One of the smoker processes has paper, another has tobacco, and the third has matches. The agent has an infinite supply of all three materials. The agent places two of the ingredients on the table. The smoker who has the remaining ingredient then makes and smokes a cigarette, signaling the agent on completion. The agent then puts out another two of the three ingredients, and the cycle repeats. There are four processes in the system. Three represent the smokers, and one represents the supplier. Solution: The cigarette smokers problem becomes solvable using binary semaphores, or mutexes. Let us define an array of binary semaphores A, one for each smoker; and a binary semaphore for the table, T. Initialize the smokers' semaphores to zero and the table's semaphore to 1. Then the arbiter's code is 49
while (true) {
    wait(T);
    choose smokers i and j nondeterministically, making the third smoker k;
    signal(A[k]);
}
Code for smoker i is
while (true) {
    wait(A[i]);
    make a cigarette;
    signal(T);
    smoke the cigarette;
}
/* The cigarette-smokers problem solved using semaphores. */
typedef int semaphore;
semaphore items = 1; /* mutually exclusive access to the table on which the two ingredients are placed */
semaphore more = 0;
semaphore temp = 0;  /* used to queue the waiting smokers */
int count = 0;       /* the number of waiting smokers */
boolean flag[0..2] = initially all false; /* flag[i] is true if item i is on the table */
/* the three items needed for smoking are numbered 0, 1, 2 */
/* process i has item i but needs the other two items, i.e. (i+1) mod 3 and (i+2) mod 3 */
for 0 <= i <= 2:
Smoker process i:
{
  repeat
    wait(items); /* enter critical section */
    if (flag[(i+1) mod 3] and flag[(i+2) mod 3]) {
        flag[(i+1) mod 3] = false;
        flag[(i+2) mod 3] = false;
        SMOKE;
        while (count > 0) do {
            count--;
            signal(temp);
        }
        signal(more);
    } else { /* both items needed for smoking are not available */
        count++;
        signal(items);
        wait(temp); /* wait for the next round */
    }
  until false;
}
Supplier process:
{
  repeat
    put any two items on the table and set the corresponding flags to true;
    signal(items);
    wait(more);
  until false;
}
A full C program dealing with the smokers problem is as follows:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <conio.h>
enum Ingredients /* enum representing the ingredients */
{
    None, Paper, Tobacco, Matches
};

/* Structures representing a smoker and the agent process */
typedef struct smoker
{
    char SmokerID[25];
    int Item;
} SMOKER;

typedef struct agent
{
    char AgentID[25];
    int Item1;
    int Item2;
} AGENT;

char* GetIngredientName(int Item)
{
    if (Item == Paper)
        return "Paper";
    else if (Item == Tobacco)
        return "Tobacco";
    else if (Item == Matches)
        return "Matches";
    return "None";
}

void GetAgentIngredients(AGENT* agent)
{
    /* Simulate random generation of ingredients */
    agent->Item1 = rand() % 3 + 1;
    while (1)
    {
        agent->Item2 = rand() % 3 + 1;
        if (agent->Item1 != agent->Item2)
            break;
    }
    printf("\nAgent provides ingredients - %s, %s\n\n",
           GetIngredientName(agent->Item1), GetIngredientName(agent->Item2));
}

void GiveIngredientToSmoker(AGENT* agent, SMOKER* smoker)
{
    int index = 0;
    while (smoker[index].Item != None)
    {
        if ((smoker[index].Item != agent->Item1) && (smoker[index].Item != agent->Item2))
        {
            printf("\nSmoker - %s - is smoking his cigarette\n\n", smoker[index].SmokerID);
            agent->Item1 = None;
            agent->Item2 = None;
            break;
        }
        index++;
    }
}

void main()
{
    /* Create the processes required - 1 agent, 3 smokers */
    AGENT agent;
    SMOKER smoker[4] = {{"SmokerWithPaper", Paper},
                        {"SmokerWithTobacco", Tobacco},
                        {"SmokerWithMatches", Matches},
                        {"0", None}};
    int userChoice = 0;
    strcpy(agent.AgentID, "Agent");
    agent.Item1 = None;
    agent.Item2 = None;
    while (1)
    {
        GetAgentIngredients(&agent);
        GiveIngredientToSmoker(&agent, smoker);
        printf("Press ESC to exit or any key to continue\n\n");
        userChoice = getch();
        if (userChoice == 27)
            break;
    }
} /* Program ends */
{ Problem: deadlock occurs if you use a semaphore for each individual ingredient. }
{ Solution: use a semaphore for each combination of ingredients. }
VAR tobacco_paper, tobacco_matches, paper_matches, done_smoking: SEMAPHORE;

PROCEDURE vendor;
BEGIN
  WHILE (making_money) DO
  BEGIN
    CASE (random(1, 3)) OF
      1: signal(tobacco_paper);
      2: signal(tobacco_matches);
      3: signal(paper_matches);
    END;
    wait(done_smoking);
  END;
END;

PROCEDURE smoker1; { This smoker has matches }
BEGIN
  WHILE (not_dead) DO
  BEGIN
    wait(tobacco_paper);
    smoke;
    signal(done_smoking);
  END;
END;

PROCEDURE smoker2; { This smoker has paper }
BEGIN
  WHILE (addicted_to_nicotine) DO
  BEGIN
    wait(tobacco_matches);
    smoke;
    signal(done_smoking);
  END;
END;

PROCEDURE smoker3; { This smoker has tobacco }
BEGIN
  WHILE (can_inhale) DO
  BEGIN
    wait(paper_matches);
    smoke;
    signal(done_smoking);
  END;
END;

BEGIN
  tobacco_paper := 0;
  tobacco_matches := 0;
  paper_matches := 0;
  done_smoking := 0;
END.

2.12 DEADLOCK
The Deadlock Problem
A set of blocked processes, each holding a resource and waiting to acquire a resource held by another process in the set.
Example:
• The system has 2 tape drives.
• P1 and P2 each hold one tape drive, and each needs the other one.
Example:
• Semaphores A and B, each initialized to 1:
P0: wait(A); wait(B);
P1: wait(B); wait(A);
Bridge Crossing Example
• Traffic flows in only one direction.
• Each section of the bridge can be viewed as a resource.
• If a deadlock occurs, it can be resolved if one car backs up (preempt resources and roll back).
• Several cars may have to be backed up if a deadlock occurs.
• Starvation is possible.
2.13 DEADLOCK CHARACTERIZATION
2.13.1 Necessary conditions
a) Mutual exclusion
• At least one resource must be held in non-sharable mode, i.e., only one process at a time can use the resource.
• If another process requests the resource, it is delayed until the resource has been released.
b) Hold and wait
• A process must be holding at least one resource and waiting to acquire additional resources that are currently held by other processes.
c) No preemption
• Resources cannot be preempted, i.e., a resource can be released only voluntarily by the process holding it, after that process completes its task.
d) Circular wait
• Consider a set of processes P = {P0, P1, …, Pn}, where
P0 is waiting for a resource held by P1,
P1 is waiting for a resource held by P2,
…
Pn-1 is waiting for a resource held by Pn, and
Pn is waiting for a resource held by P0.
All four conditions must hold for a deadlock to occur.
2.13.2 Resource allocation graph (RAG)
Deadlocks can be described by a directed graph called the system resource-allocation graph. The graph consists of a set of vertices V and a set of edges E. V is partitioned into two types of nodes:
P = {P1, P2, …, Pn}, the set of active processes.
R = {R1, R2, …, Rm}, the set of resource types.
A directed edge from process Pi to resource Rj (Pi → Rj) signifies that Pi has requested an instance of resource type Rj and is currently waiting for that resource (a request edge).
A directed edge from Rj to process Pi (Rj → Pi) signifies that an instance of Rj has been allocated to process Pi (an assignment edge).
A circle represents a process; a square represents a resource type, with a dot for each instance.
Figure 2.14: Resource Allocation Graph
[Figure: processes P1, P2, P3 and resource types R1, R2, R3, R4 with the edges listed below]
When Pi requests an instance of Rj, a request edge is inserted in the RAG. When the request is fulfilled, the request edge is transformed into an assignment edge.
1) The sets P, R, and E:
P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1→R1, P2→R3, R1→P2, R3→P3, R2→P1, R2→P2}
2) Resource instances:
1. One instance of resource type R1
2. Two instances of resource type R2
3. One instance of resource type R3
4. Three instances of resource type R4
3) Process states:
1. Process P1 is holding an instance of R2 and is waiting for an instance of R1.
2. Process P2 is holding an instance of R1 and an instance of R2, and is waiting for an instance of R3.
3. Process P3 is holding an instance of R3.
Given a RAG with no cycles, no process in the system is deadlocked. If the graph contains a cycle, a deadlock may exist.
In the figure above, if P3 requests R2, we have the cycles:
P1 → R1 → P2 → R3 → P3 → R2 → P1
P2 → R3 → P3 → R2 → P2
Figure 2.15: RAG with a cycle
[Figure: the same graph as Figure 2.14 with the additional request edge P3 → R2]
If R2 is released by either P1 or P2, the request edge P3 → R2 can be granted and no cycle is formed.
2.14 DEADLOCK PREVENTION
Deadlock prevention ensures that at least one of the necessary conditions for deadlock cannot hold; in this way we can prevent deadlocks.
2.14.1 Mutual exclusion
• The mutual exclusion condition must hold for non-sharable resources.
• E.g., a printer cannot be simultaneously shared by several processes.
• Sharable resources do not require mutually exclusive access, so they cannot be involved in a deadlock; if several processes attempt to open a read-only file at the same time, they can be granted simultaneous access to the file.
• We cannot prevent deadlocks by denying the mutual exclusion condition, because some resources are intrinsically non-sharable.
2.14.2 Hold and wait
To prevent the hold-and-wait condition, a process can allocate all the resources it needs before starting execution; that way, it will not have to wait for resources during execution (thus preventing deadlocks). Another strategy is to allow a process to request resources only when it holds none. Again, this prevents deadlocks, since a process must release whatever resources it holds before requesting more resources.
Each of these strategies has performance or resource-utilization issues. If we allocate all resources at the beginning of the program, we hold them all while the program executes. We may not need all of them at once, though, so resources end up under-used (no other process can use them while we hold them). Another problem is starvation: some processes may never get to execute, because some other process always has control of a popular resource.
2.14.3 No preemption
If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources it is currently holding are preempted.
If a process requests some resources, we first check whether they are available; if they are, we allocate them. If not, we check whether they are allocated to some other process that is itself waiting for additional resources.
If so, we preempt the desired resources from the waiting process and allocate them to the requesting process.
2.14.4 Circular wait
To prevent circular wait, we can impose a total ordering on resource types and require that processes request resources in increasing order of that enumeration. For example, each resource type gets a number, and processes can only request resources in increasing order of those resource numbers. This makes the circular chain
P0 → P1 → P2 → … → Pn-1 → Pn → P0
impossible, since the process holding the highest-numbered resource in the chain cannot be waiting for a lower-numbered one.
2.15 DEADLOCK AVOIDANCE
Deadlock can be avoided using the banker's algorithm. Deadlock avoidance deals with processes that declare, before execution, the maximum number of resources they may need during their execution. If, given several processes and resources, we can allocate the resources in some order so as to prevent deadlock, the system is said to be in a safe state; otherwise, if a deadlock is possible, the system is said to be in an unsafe state. The idea of deadlock avoidance is simply to not allow the system to enter an unsafe state, which may lead to a deadlock.
Example:
Process   Allocated   Maximum     Available
          A  B  C     A  B  C     A  B  C
P1        0  1  0     7  5  3     3  3  2
P2        2  0  0     3  2  2     2  1  0
P3        3  0  2     9  0  2     5  3  2
P4        2  1  1     2  2  2     5  2  1
P5        0  0  2     4  3  3     5  2  2
In this example, the Need of each process (Maximum − Allocated) is: P1: 7 4 3, P2: 1 2 2, P3: 6 0 0, P4: 0 1 1, P5: 4 3 1, with 3 3 2 initially available.
P1 cannot run first, since its need 7 4 3 exceeds the available 3 3 2. P2's need 1 2 2 can be satisfied from the available resources; when P2 completes, it releases its 2 0 0 allocation, and the available vector becomes 5 3 2. P3's need 6 0 0 still cannot be met. P4's need 0 1 1 can be satisfied, and after P4 completes and releases its allocation the available vector grows again; similarly P5, then P1, and finally P3 can run. The sequence <P2, P4, P5, P1, P3> therefore shows that the state is safe.
2.16 DEADLOCK DETECTION
If a system employs neither a deadlock-prevention nor a deadlock-avoidance algorithm, then a deadlock situation may occur. In this environment the system must provide:
• an algorithm that examines the state of the system to determine whether a deadlock has occurred, and
• an algorithm to recover from the deadlock.
The detection-and-recovery scheme requires overhead, including the run-time cost of maintaining the necessary information and executing the detection algorithm.
2.16.1 Single instance of each resource type
• A wait-for graph is obtained from the RAG by removing the resource-type nodes and collapsing the appropriate edges.
• An edge from Pi to Pj signifies that process Pi is waiting for process Pj to release a resource that Pi needs.
• A deadlock exists if and only if the wait-for graph contains a cycle. To detect deadlocks, the system maintains the wait-for graph and periodically invokes an algorithm that searches for a cycle in the graph.
2.16.2 Several instances of a resource type
• The wait-for-graph scheme is not applicable to a RAG with multiple instances of each resource type.
• The deadlock-detection algorithm used here employs several time-varying data structures similar to those used in the banker's algorithm: Available, Allocation, and Request.
2.16.3 Detection-algorithm usage
When should the detection algorithm be invoked? The answer depends on:
• How often is a deadlock likely to occur?
• How many processes will be affected by a deadlock when it happens?
• If deadlocks occur frequently, the detection algorithm should be invoked frequently, since resources allocated to deadlocked processes sit idle until the deadlock is broken.
• If there are many different resource types, one request may cause many cycles in the RAG, each cycle completed by the most recent request.
• As an alternative, the algorithm can be invoked whenever CPU utilization drops below some threshold, e.g., 40%.
Recovery from deadlock: when the detection algorithm determines that a deadlock exists, several alternatives exist for breaking it:
• abort one or more processes to break the circular wait, or
• preempt some resources from one or more of the deadlocked processes.
2.17 DEADLOCK RECOVERY
We can recover from a deadlock via two approaches: we either kill processes (which releases all resources held by a killed process) or take away resources.
Process termination: To eliminate deadlocks by aborting a process, we use one of two methods. In both methods, the system reclaims all resources allocated to the terminated processes.
2.17.1 Abort all deadlocked processes
This method clearly breaks the deadlock cycle, but at great expense. These processes may have computed for a long time, and the results of these partial computations must be discarded and probably recomputed later.
2.17.2 Abort one process at a time until the deadlock cycle is eliminated
This method incurs considerable overhead, since after each process is aborted, a deadlock-detection algorithm must be invoked to determine whether any processes are still deadlocked.
When recovering from a deadlock via process termination, we thus have two approaches: terminate all processes involved in the deadlock, or terminate them one by one until the deadlock disappears. Killing all processes is costly, since some of them may have been doing something important for a long time and will need to be re-executed. Killing one process at a time until the deadlock is resolved is also costly, since we must rerun the deadlock-detection algorithm every time we terminate a process to make sure we got rid of the deadlock. Some priority must also be considered when terminating processes, since we do not want to kill an important process when less important processes are available. Such a priority might take into account how many resources the process is holding, how long it has already executed, how much longer it needs before it completes, how many resources it needs to complete its job, and so on.
2.18 RESOURCE PREEMPTION
To eliminate deadlocks using resource preemption, we successively preempt some resources from processes and give these resources to other processes until the deadlock cycle is broken. If preemption is used to deal with deadlocks, then three issues need to be addressed:
2.18.1 Selecting a victim
Which resources and which processes are to be preempted? We must determine the order of preemption so as to minimize cost. Cost factors include the number of resources a deadlocked process is holding and the amount of time the process has consumed during its execution.
2.18.2 Rollback
If we preempt a resource from a process, that process cannot continue its normal execution, because it is missing a needed resource. So we must roll the process back to some safe state and restart it from that state.
2.18.3 Starvation
This approach takes resources from waiting processes and gives them to other processes. Obviously, the victim process cannot continue normally, and we have a choice of how to handle it: we can either terminate the process, or roll it back to some previous state so that it can request the resources again. Again, many factors determine which process we choose as the victim, and we must ensure that the same process is not always picked, or that process will starve.
Note that if resources could always be preempted transparently, then by definition a deadlock could not occur. The type of resource preemption we are talking about here is non-normal preemption that occurs only when a deadlock-detection mechanism has detected a deadlock.
2.19 COMBINED APPROACH TO DEADLOCK HANDLING
Researchers have argued that none of the basic approaches for handling deadlocks (prevention, avoidance, and detection) alone is appropriate for the entire spectrum of resource-allocation problems encountered in operating systems.
• One possibility is to combine the three basic approaches, allowing the use of the optimal approach for each class of resources in the system.
• The proposed method is based on the notion that resources can be partitioned into classes that are hierarchically ordered.
• A resource-ordering technique is applied to the classes. Within each class, the most appropriate technique for handling deadlocks can be used.
• It is easy to show that a system that employs this strategy will not be subject to deadlocks.
• Indeed, a deadlock cannot involve more than one class, since the resource-ordering technique is used between classes, and within each class one of the basic approaches is used. Consequently, the system is not subject to deadlocks.
• To illustrate this technique, we consider a system that consists of the following four classes of resources:
• Internal resources: prevention through resource ordering can be used, since run-time choices between pending requests are unnecessary.
• Central memory: prevention through preemption can be used, since a job can always be swapped out and the central memory preempted.
• Job resources: avoidance can be used, since the information needed about resource requirements can be obtained from the job-control cards.
• Swappable space: preallocation can be used, since the maximum storage requirements are usually known.
Points to Remember
• A process is a program in execution.
• A process may be in any one of the New, Running, Waiting, Ready, or Terminated states.
• Each process in the operating system is associated with a Process Control Block (PCB).
• Switching the CPU from one process to another is called a context switch.
• A thread is called a Light Weight Process (LWP).
• A thread is a basic unit of CPU utilization.
• User threads are supported above the kernel and are implemented by a thread library at the user level, whereas kernel threads are supported directly by the operating system.
• Fork is a system call by which a new process is created.
• Exec is also a system call, used after a fork by one of the two processes to replace the process memory space with a new program.
• Thread cancellation is the task of terminating a thread before it has completed. The thread that is to be cancelled is often referred to as the target thread.
• When one process is executing in its critical section, no other process can be allowed to execute in its critical section.
• A semaphore S is a synchronization tool: an integer value that, apart from initialization, is accessed only through two standard atomic operations, wait and signal.
• A process is deadlocked if it is waiting for an event that will never occur. Typically, more than one process will be involved in a deadlock.
• The four necessary conditions for deadlock are:
o Mutual exclusion
o Hold and wait
o No preemption
o Circular wait
SHORT QUESTIONS
1. Define operating system.
2. What is multiprogramming?
3. Write the services of operating systems.
4. Write the drawbacks of the layered approach.
5. Define kernel. How does it differ from a microkernel?
6. Differentiate process and program.
7. Define active and passive entities (process and program).
8. What are the states of a process?
9. Define Process Control Block.
10. Write the contents of the PCB.
11. Define thread.
12. Define multithreading.
13. What is the critical-section problem?
14. Define semaphore.
15. What are the procedures used for semaphores?
16. What is process coordination?
17. In which situations does process coordination occur?
18. Write the procedure for the Producer-Consumer problem.
19. Define interprocess communication.
20. What are the methods of interprocess communication?
21. What are the basic functions of an operating system?
22. Differentiate multiprogramming and batch processing.
23. What is timesharing?
24. What is the purpose of the command interpreter? Why is it usually separate from the kernel?
25. What is the purpose of system calls?
26. What is the purpose of system programs?
27. What is a real-time system?
28. Explain the following terms: 1. Multitasking 2. Multiprogramming 3. Multithreading
DESCRIPTIVE QUESTIONS
NOVEMBER 2007
1. (a) Explain the process control block. (8)
   (b) Describe the several methods for implementing a message-passing system. (7)
2. (a) What is a semaphore? Explain the implementation of semaphores. (7)
   (b) Explain Banker's algorithm with an example. (8)
MAY 2007
1. (a) Explain with a schematic diagram the process control block (PCB). (8)
   (b) Write a note on schedulers. (7)
2. Explain the methods of handling a deadlock. (15)
NOVEMBER 2008
1. Explain the schematic diagram of scheduling queues. (15)
2. (a) Explain deadlock detection and avoidance. (8)
   (b) Explain the dining philosophers problem with a schematic diagram. (7)
MAY 2008
1. (a) Explain the operations on processes. (8)
   (b) Give a detailed discussion on process schedulers. (7)
2. (a) Explain synchronization hardware. (8)
   (b) How do you recover from deadlock? (7)
MAY 2009
1. (a) What is the critical-section problem? Explain any two algorithms that are applicable to two processes at a time. (10)
   (b) Explain the resource-allocation-graph algorithm with an example. (5)
2. (a) Explain the issues of threading. (7)
   (b) What are the four necessary conditions to prevent the occurrence of a deadlock?