Multithreaded programming allows a process to have multiple threads of control that perform tasks concurrently. Threads within a process share resources such as memory and open files. This lets a process carry out several tasks at once, such as updating a display, fetching data, or answering network requests. Creating a thread is lighter weight than creating an entire process. Common threading models include the one-to-one, many-to-one, and many-to-many mappings of user threads to kernel threads. Popular thread libraries are Pthreads, Win32 threads, and Java threads. Scheduling, synchronization, and other concurrency issues must be addressed in multithreaded programming.
2. Overview
A thread is a basic unit of CPU utilization. It comprises a thread ID, a program
counter, a register set, and a stack.
It shares with the other threads belonging to the same process its code
section, data section, and other operating-system resources, such as open files
and signals.
A traditional process has a single thread of control. If a process has multiple
threads of control, it can perform more than one task at a time.
Fig. 4.1 illustrates the difference between a traditional single-threaded process and
a multithreaded process.
3. Motivation
Most modern applications are multithreaded
Threads run within an application
Multiple tasks within the application can be implemented by separate threads
Update display
Fetch data
Spell checking
Answer a network request
Process creation is heavy-weight while thread creation is light-weight
Can simplify code, increase efficiency
Kernels are generally multithreaded
6. BENEFITS
Responsiveness – may allow continued execution if part of the process is blocked,
especially important for user interfaces
Resource Sharing – threads share the resources of their process, which is easier than shared
memory or message passing
Economy – thread creation is cheaper than process creation, and thread switching has lower
overhead than a process context switch
Scalability – a multithreaded process can take advantage of multiprocessor architectures
7. Multicore Programming
Multicore or multiprocessor systems put pressure on programmers; challenges include:
Dividing activities
Balance
Data splitting
Data dependency
Testing and debugging
Parallelism implies a system can perform more than one task simultaneously
Concurrency supports more than one task making progress
On a single processor / core, the scheduler provides concurrency
Types of parallelism
Data parallelism – distributes subsets of the same data across multiple cores, same operation on each (see the sketch below)
Task parallelism – distributing threads across cores, each thread performing a unique operation
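As a rough illustration of data parallelism only, the following sketch has two threads apply the same operation to different halves of one array. The names and the doubling operation are made up for this sketch and are not taken from the text.

#include <pthread.h>
#include <stdio.h>

#define N 8
static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};

/* data parallelism: every thread runs the same operation,
   each on its own subset (half) of the array */
static void *double_half(void *arg) {
    long idx = (long)arg;                 /* 0 = first half, 1 = second half */
    for (int i = idx * (N / 2); i < (idx + 1) * (N / 2); i++)
        data[i] *= 2;
    return NULL;
}

int main(void) {
    pthread_t t[2];
    for (long i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, double_half, (void *)i);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    for (int i = 0; i < N; i++)
        printf("%d ", data[i]);
    printf("\n");
    return 0;
}

Task parallelism, by contrast, would have each thread run a different function (for example, one thread computing a sum while another computes a maximum).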
As # of threads grows, so does architectural support for threading
CPUs have cores as well as hardware threads
Consider Oracle SPARC T4 with 8 cores, and 8 hardware threads per core
9. In general, five areas present challenges in programming for multicore systems:
1) Dividing Activities
2) Balance
3) Data Splitting
4) Data dependency
5) Testing and Debugging
10. Multithreading Models
Many-to-One
Many user-level threads mapped to single kernel thread
One thread blocking causes all to block
Multiple threads may not run in parallel on a multicore system because only one
may be in the kernel at a time
Few systems currently use this model
Examples:
Solaris Green Threads
GNU Portable Threads
11. One-to-One Model
The one-to-one model creates a separate kernel thread to handle each user
thread.
It overcomes the problems noted above with blocking system calls and allows the
threads of a process to be split across multiple CPUs.
However, creating a kernel thread for every user thread involves more overhead and can
slow the system down.
For this reason, implementations of this model often limit the number of threads that can be created.
Linux and Windows (from Windows 95 through XP) implement the one-to-one model for threads.
12. Many-to-Many Model
This model multiplexes any number of user threads onto an equal or smaller
number of kernel threads, combining the best features of the one-to-one and
many-to-one models.
The user is not restricted in the number of threads created.
Blocking kernel system calls do not block the entire process.
A process can be split across multiple processors.
This model is also called the two-tier model.
It is supported by operating systems such as HP-UX and UNIX.
14. Thread Libraries
A thread library provides the programmer with an API for the creation and management of threads.
• Two ways of implementation:
1) First Approach
Provides a library entirely in user space with no kernel support.
All code and data structures for the library exist in the user space.
2) Second Approach
Implements a kernel-level library supported directly by the OS.
Code and data structures for the library exist in kernel space.
• Three main thread libraries:
1) POSIX Pthreads
2) Win32
3) Java
15. 1) Pthreads
The POSIX standard (IEEE 1003.1c) defines the specification for Pthreads, not the
implementation.
Pthreads are available on Solaris, Linux, and Mac OS X.
Global variables are shared among all the threads.
One thread can wait for another to finish (join) before continuing.
A Pthread begins execution in a specified function, in this example runner();
the pthread_create() function is used to create the thread.
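A minimal sketch of the program described above. Only pthread_create(), runner(), the shared global, and the join come from the text; the summation performed inside runner() is assumed here for illustration.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

int sum;                         /* global data, shared by all threads */

/* the new thread begins execution in this function */
void *runner(void *param) {
    int upper = atoi((char *)param);
    sum = 0;
    for (int i = 1; i <= upper; i++)   /* illustrative work: sum 1..upper */
        sum += i;
    pthread_exit(0);
}

int main(int argc, char *argv[]) {
    pthread_t tid;               /* thread identifier */

    if (argc != 2) {
        fprintf(stderr, "usage: %s <integer>\n", argv[0]);
        return 1;
    }
    pthread_create(&tid, NULL, runner, argv[1]);  /* create the thread */
    pthread_join(tid, NULL);     /* wait for the thread to rejoin */
    printf("sum = %d\n", sum);
    return 0;
}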
17. 2) Win32
Similar to Pthreads; examine the code example to see the differences, which are
mostly syntactic and in nomenclature.
Here the summation() function is the code that runs as the separate thread.
CreateThread() is the function used to create a thread.
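A minimal sketch of the corresponding Win32 program. Only CreateThread() and the idea of a separate summation thread come from the text; Summation() and the shared Sum variable are illustrative.

#include <windows.h>
#include <stdio.h>

DWORD Sum;                               /* data shared by the threads */

/* the separate thread runs this function */
DWORD WINAPI Summation(LPVOID Param) {
    DWORD Upper = *(DWORD *)Param;
    for (DWORD i = 1; i <= Upper; i++)
        Sum += i;
    return 0;
}

int main(void) {
    DWORD ThreadId;
    HANDLE ThreadHandle;
    DWORD Param = 10;

    ThreadHandle = CreateThread(
        NULL,          /* default security attributes */
        0,             /* default stack size */
        Summation,     /* thread function */
        &Param,        /* parameter to the thread function */
        0,             /* default creation flags */
        &ThreadId);    /* returns the thread identifier */

    if (ThreadHandle != NULL) {
        WaitForSingleObject(ThreadHandle, INFINITE);  /* wait for the thread */
        CloseHandle(ThreadHandle);
        printf("sum = %lu\n", Sum);
    }
    return 0;
}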
19. 3) Java Threads
• Threads are the fundamental model of program execution in a Java program, and the Java
language and its API provide a rich set of features for the creation and management of threads.
• All Java programs comprise at least a single thread of control.
• Two techniques for creating threads:
1) Create a new class that is derived from the Thread class and override its run()
method.
2) Define a class that implements the Runnable interface, which declares a single run()
method to be implemented.
21. Threading Issues
1) fork() and exec() system calls
• fork() is used to create a separate, duplicate process.
• When a thread in a multithreaded program calls fork(), two semantics are possible:
• 1) The new process is a copy of the parent with all its threads, or
• 2) The new process is a copy of only the thread that invoked fork().
• If the thread then invokes the exec() system call, the program specified in the parameter to
• exec() replaces the entire process, including all its threads.
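A small sketch of the second semantics. The worker() function and the ls command are made up for illustration; on POSIX systems fork() duplicates only the calling thread.

#include <pthread.h>
#include <unistd.h>
#include <stdio.h>

void *worker(void *arg) {
    sleep(10);                       /* background work in a second thread */
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);

    pid_t pid = fork();              /* child contains only the calling thread */
    if (pid == 0) {
        /* exec() replaces the whole process image, so duplicating
           every thread in the child would have been wasted work */
        execlp("ls", "ls", (char *)NULL);
        _exit(1);                    /* reached only if exec() fails */
    }
    pthread_join(tid, NULL);         /* the parent still has both threads */
    return 0;
}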
2) Cancellation
• Terminating a thread before it has completed its task is called thread cancellation. The
thread to be cancelled is called the target thread.
• Example: the multiple threads used to load a web page are cancelled when the browser
window is closed.
Threads that are no longer needed may be cancelled in one of two ways:
1) Asynchronous cancellation – cancel the target thread immediately.
2) Deferred cancellation – the target thread periodically checks whether it should terminate,
giving it an opportunity to terminate itself in an orderly fashion.
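A minimal Pthreads sketch of deferred cancellation. The worker() function and its loop are illustrative; pthread_cancel() requests cancellation, and pthread_testcancel() is an explicit cancellation point where the target thread checks whether it should terminate.

#include <pthread.h>
#include <stdio.h>

void *worker(void *arg) {
    while (1) {
        /* ... perform one unit of work ... */
        pthread_testcancel();     /* deferred cancellation point: exit here
                                     if a cancellation request is pending */
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);

    /* ... the thread's work is no longer needed ... */
    pthread_cancel(tid);          /* request cancellation of the target thread */
    pthread_join(tid, NULL);      /* wait for the target thread to terminate */
    printf("target thread cancelled\n");
    return 0;
}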
22. 3) Signal Handling
A signal is used to notify a process that a particular event has occurred.
All signals follow the same pattern:
1) A signal is generated by the occurrence of a particular event.
2) A generated signal is delivered to a process.
3) Once delivered, the signal must be handled.
A signal may be delivered in one of two ways: synchronously or asynchronously.
1) Synchronous signal: delivered to the same process that caused it, e.g. an illegal memory access or
a divide-by-zero error.
2) Asynchronous signal: generated by an event external to the running process and delivered to it, e.g. Ctrl-C.
• Every signal may be handled by one of two possible handlers:
1) A Default Signal Handler
Run by the kernel when handling the signal.
2) A User-defined Signal Handler
Overrides the default signal handler.
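A minimal sketch of installing a user-defined handler for SIGINT (Ctrl-C). The handler() name is illustrative; sigaction() is the standard POSIX call for overriding the default handler.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* user-defined handler: overrides the default action for SIGINT */
void handler(int sig) {
    /* only async-signal-safe calls belong here, hence write() not printf() */
    write(STDOUT_FILENO, "caught SIGINT\n", 14);
}

int main(void) {
    struct sigaction sa;
    sa.sa_handler = handler;      /* the user-defined handler */
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGINT, &sa, NULL); /* install it in place of the default */

    pause();                      /* wait until a signal is delivered */
    return 0;
}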
23. 4) Thread Pool
In a fully multithreaded process, a thread is created for every service request.
Example: in a web server, a thread is created to service every client request.
A limit has to be placed on the number of active threads in the system, because unlimited
thread creation may exhaust system resources.
With a thread pool, a set of threads is created in advance; a thread is allocated from the
pool when a request arrives and returned to the pool when it is no longer needed (after the
request completes).
When no threads are available in the pool, the process may have to wait until
one becomes available (see the sketch after the benefits below).
Benefits:
Servicing a request with an existing thread from the pool is faster than creating a new thread for it.
The thread pool limits the number of threads in the system. This is important on systems that
cannot support a large number of concurrent threads.
The (maximum) number of threads in a thread pool may be determined by parameters such as
the number of CPUs in the system, the amount of memory, and the expected number of client requests.
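A small Pthreads sketch of the idea: a fixed set of workers is created once, and requests are queued for them instead of spawning a new thread per request. The names pool_submit(), handle_request(), and the sizes are made up for illustration; a real pool would also need a clean shutdown path.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define POOL_SIZE  4              /* fixed number of worker threads */
#define QUEUE_SIZE 16             /* bounded queue of pending requests */

typedef void (*task_fn)(int);

static task_fn tasks[QUEUE_SIZE];
static int     args[QUEUE_SIZE];
static int     head, tail, count;
static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

/* each pre-created worker repeatedly takes a request from the queue */
static void *worker(void *arg) {
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)                       /* no work: wait in the pool */
            pthread_cond_wait(&not_empty, &lock);
        task_fn fn = tasks[head];
        int a = args[head];
        head = (head + 1) % QUEUE_SIZE;
        count--;
        pthread_mutex_unlock(&lock);
        fn(a);                                   /* service the request */
    }
    return NULL;
}

/* submit a request instead of creating a brand-new thread for it */
static int pool_submit(task_fn fn, int arg) {
    pthread_mutex_lock(&lock);
    if (count == QUEUE_SIZE) {                   /* queue full: reject */
        pthread_mutex_unlock(&lock);
        return -1;
    }
    tasks[tail] = fn;
    args[tail]  = arg;
    tail = (tail + 1) % QUEUE_SIZE;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
    return 0;
}

static void handle_request(int id) { printf("serviced request %d\n", id); }

int main(void) {
    pthread_t workers[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)          /* create the pool up front */
        pthread_create(&workers[i], NULL, worker, NULL);
    for (int i = 0; i < 8; i++)                  /* eight incoming requests */
        pool_submit(handle_request, i);
    sleep(1);                                    /* crude: let workers drain the queue */
    return 0;
}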
24. 5) Thread-Specific Data
Data that belongs to a thread and is not shared with the other threads is called thread-specific data.
Most major thread libraries (Pthreads, Win32, Java) provide support for thread-specific data.
Example: if threads are used to service transactions and each transaction has an ID, that
unique ID is thread-specific data.
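A minimal Pthreads sketch of the transaction-ID example. The names transaction() and txn_key are illustrative; pthread_key_create(), pthread_setspecific(), and pthread_getspecific() are the standard Pthreads calls for thread-specific data.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_key_t txn_key;    /* the key is shared; each thread binds its own value */

static void *transaction(void *arg) {
    int *id = malloc(sizeof(int));           /* this thread's private transaction ID */
    *id = (int)(long)arg;
    pthread_setspecific(txn_key, id);        /* bind the value to this thread */

    /* ... later, possibly deep inside other functions ... */
    int *my_id = pthread_getspecific(txn_key);
    printf("working on transaction %d\n", *my_id);
    return NULL;
}

int main(void) {
    pthread_key_create(&txn_key, free);      /* destructor frees each thread's value */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, transaction, (void *)1L);
    pthread_create(&t2, NULL, transaction, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}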
6) Scheduler Activations
Scheduler activation is a technique for communication between the user-level thread
library and the kernel.
It works as follows:
the kernel must inform the application about certain events; this procedure is known as
an upcall.
Upcalls are handled by the thread library with an upcall handler, and upcall handlers
must run on a virtual processor.
25. PROCESS SCHEDULING
Basic Concepts:
In a single-processor system, only one process can run at a time; other processes
must wait until the CPU is free.
The objective of multiprogramming is to have some process running at all times,
in order to maximize CPU utilization.
CPU-I/O Burst Cycle
Process execution consists of a cycle of CPU execution and I/O wait. A period during
which the process is executing on the CPU is called a CPU burst, and a period during
which the process is waiting for I/O is called an I/O burst.
Processes alternate between these two states. Process execution begins with a CPU
burst, which is followed by an I/O burst, then another CPU burst,
then another I/O burst, and so on; eventually the final CPU burst ends with a system request
to terminate execution, as shown in the figure.
28. CPU Scheduler
Whenever the CPU becomes idle, the operating system must select one of the processes
from the ready queue to be executed.
The selection process is carried out by the short-term scheduler (or CPU scheduler).
The scheduler selects a process from the processes in memory that are ready to execute
and allocates the CPU to that process.
A ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply
an unordered linked list.
All the processes in the ready queue are lined up waiting for a chance to run on the
CPU. The records in the queues are generally process control blocks (PCBs) of the
processes.
Non-Preemptive Scheduling – once the CPU has been allocated to a process, the
process keeps the CPU until it releases it either by terminating or by switching to
the waiting state.
Preemptive Scheduling – the CPU may be taken away from the running process before it
finishes its CPU burst, for example when an interrupt occurs or a higher-priority process becomes ready.
29. Preemptive scheduling:
CPU-scheduling decisions may take place under the following four circumstances:
When a process switches from the running state to the waiting state (for example, as the
result of an I/O request or an invocation of wait for the termination of one of the child
processes)
When a process switches from the running state to the ready state (for example, when
an interrupt occurs)
When a process switches from the waiting state to the ready state (for example, at
completion of I/O)
When a process terminates.
If scheduling takes place only under circumstances 1 and 4, the scheme is non-preemptive; otherwise, it is preemptive.
30. Dispatcher
Another component involved in the CPU-scheduling function is the
dispatcher. The dispatcher is the module that gives control of the CPU to the
process selected by the short- term scheduler. This function involves the
following:
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that program.
The dispatcher should be as fast as possible, since it is invoked during
every process switch. The time it takes for the dispatcher to stop one process
and start another running is known as the dispatch latency.
31. Scheduling Criteria
• Different CPU-scheduling algorithms
→ have different properties and
→ may favor one class of processes over another.
The criteria include the following:
1) CPU Utilization
We must keep the CPU as busy as possible.
In a real system, it ranges from 40% to 90%.
2) Throughput
Number of processes completed per time unit.
For long processes, throughput may be 1 process per hour;
For short transactions, throughput might be 10 processes per second.
3) Turnaround Time
The interval from the time of submission of a process to the time of completion.
Turnaround time is the sum of the periods spent
→ waiting to get into memory
→ waiting in the ready-queue
→ executing on the CPU and
→ doing I/O.
32. 4) Waiting time - The total amount of time the process spends waiting in the ready
queue.
5) Response time - The time taken from the submission of a request until the first
response is produced is called the response time. It is the time taken to start responding.
In an interactive system, response time is often a more important criterion than turnaround time.
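To make the waiting-time and turnaround-time definitions concrete, here is a small sketch that computes both for a first-come, first-served schedule. The three burst lengths of 24, 3, and 3 ms are invented for illustration.

#include <stdio.h>

int main(void) {
    /* three processes arrive at time 0 and run FCFS in order P1, P2, P3 */
    int burst[] = {24, 3, 3};
    int n = 3, start = 0;
    double total_wait = 0, total_turnaround = 0;

    for (int i = 0; i < n; i++) {
        int waiting    = start;             /* time spent in the ready queue */
        int turnaround = start + burst[i];  /* submission (t = 0) to completion */
        printf("P%d: waiting = %d ms, turnaround = %d ms\n",
               i + 1, waiting, turnaround);
        total_wait       += waiting;
        total_turnaround += turnaround;
        start += burst[i];
    }
    printf("average waiting = %.1f ms, average turnaround = %.1f ms\n",
           total_wait / n, total_turnaround / n);
    return 0;
}

With this ordering the average waiting time is 17 ms and the average turnaround time is 27 ms; running the two short bursts first would reduce both averages.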
33. Scheduling Algorithms
1) First Come - First Served scheduling
2) Shortest-job-First scheduling
3) Priority scheduling
4) Round Robin Scheduling
5) Multilevel Queue Scheduling
6) Multilevel Feedback Queue Scheduling