Chapter 3 reading task
1. OPERATING SYSTEM:
CHAPTER 3 TASK
BY:
• AMIRUL RAMZANI BIN RADZID
• MUHAMMAD NUR ILHAM BIN IBRAHIM
• MUHD AMIRUDDIN BIN ABAS
3. INTRODUCTION TO PROCESSES
• Also called a task
• Execution of an individual program
• Can be traced
- list the sequence of instructions that execute
4. MEANING OF PROCESS
A program in execution.
An instance of a program running on a computer.
The entity that can be assigned to and executed on a processor.
5. A PROCESS, UNLIKE A PROGRAM, INCLUDES:
Current value of Program Counter (PC)
Contents of the processor's registers
Value of the variables
The process stack (SP), which typically contains temporary data such
as subroutine parameters, return addresses, and temporary variables.
A data section that contains global variables.
6. While a program is executing, the process
can be characterized by a number of
elements.
The information in the preceding list is
stored in a data structure, typically called
a process control block (Figure 3.1),
that is created and managed by the OS.
7. The significance of the process control block is that it contains sufficient
information so that it is possible to interrupt a running process and later resume
execution as if the interruption had not occurred.
Process = Program code + Associated data +
PCB
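The PCB elements listed above can be sketched as a small data structure. This is a hypothetical illustration of the idea, not any real operating system's PCB layout:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    """Hypothetical sketch of a PCB: enough saved state to interrupt
    a process and later resume it as if nothing had happened."""
    pid: int
    state: str = "ready"              # e.g. ready, running, blocked
    program_counter: int = 0          # address of the next instruction
    registers: dict = field(default_factory=dict)
    stack_pointer: int = 0            # top of the process stack
    open_files: list = field(default_factory=list)

# A context switch saves these fields for the outgoing process and
# restores them for the incoming one.
pcb = ProcessControlBlock(pid=42)
pcb.state = "running"
pcb.program_counter = 0x1000
```

Saving and restoring exactly this state is what makes an interruption transparent to the interrupted process.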
11. PROCESS STATES
1) Process Creation
REASONS | DESCRIPTIONS
New batch job: The OS is provided with a batch job control stream, usually on tape or disk. When the OS is prepared to take on new work, it will read the next sequence of job control commands.
Interactive log-on: A user at a terminal logs on to the system.
Created by OS to provide a service: The OS can create a process to perform a function on behalf of a user program, without the user having to wait (e.g., a process to control printing).
Spawned by existing process: For purposes of modularity or to exploit parallelism, a user program can dictate the creation of a number of processes.
12. PROCESS STATES
2) Process Termination
REASONS | DESCRIPTION
Normal completion: The process executes an OS service call to indicate that it has completed running.
Time limit exceeded: The process has run longer than the specified total time limit. There are a number of possibilities for the type of time that is measured. These include total elapsed time (“wall clock time”), amount of time spent executing, and, in the case of an interactive process, the amount of time since the user last provided any input.
Memory unavailable: The process requires more memory than the system can provide.
Bounds violation: The process tries to access a memory location that it is not allowed to access.
16. PROCESS DESCRIPTION
Operating System Control Structures
An OS maintains tables for managing processes
and resources.
These are the tables:
Memory tables
I/O tables
File tables
Process tables
18. PROCESS CONTROL
Modes of Execution
User mode
System mode, control mode, or kernel mode
Process Creation
1)assigns a unique process identifier to the new process
2)allocates space for the process
3)initializes the process control block
4)sets the appropriate linkages
5)creates or expands other data structures
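Step 1 above can be illustrated with POSIX fork() from Python (a sketch assuming a Unix-like system): the OS gives the child its own unique process identifier and control block.

```python
import os

pid = os.fork()          # the OS creates a new process with a unique PID
if pid == 0:
    # Child process: runs with its own identifier and PCB
    os._exit(0)
else:
    # Parent: pid holds the child's unique identifier
    child_pid, status = os.waitpid(pid, 0)
    print("child PID:", child_pid)
```

The remaining steps (allocating space, initializing the PCB, setting linkages) happen inside the kernel during the fork() call itself.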
Process Switching
Interrupt
Trap
Supervisor call
20. INTRODUCTION TO THREADS
Basic unit of execution
Single sequential flow of control within a program
Threads are bound to a single process
Each process may have multiple threads of control within it.
21. DIFFERENCE BETWEEN PROCESS AND THREAD
Process Thread
Process is heavy weight or
resource intensive.
Thread is light weight taking lesser
resources than a process.
Process switching needs
interaction with operating system.
Thread switching does not need to
interact with operating system.
In multiple processing
environments each process
executes the same code but has its
own memory and file resources
All threads can share same set of
open files, child processes.
If one process is blocked then no
other process can execute until the
first process is unblocked.
While one thread is blocked and
waiting, second thread in the same
task can run.
Multiple processes without using
threads use more resources.
Multiple threaded processes use
fewer resources.
In multiple processes each process
operates independently of the
others.
One thread can read, write or
change another thread’s data.
23. THREAD
An execution state (running, ready, etc.)
Saved thread context when not running
Has an execution stack
Some per-thread static storage for local variables
Access to the memory and resources of its process
All threads of a process share this
A file opened by one thread is available to the others
24. THREAD STATES
• Spawn: Typically, when a new process is spawned, a thread for that process
is also spawned. Subsequently, a thread within a process may spawn another
thread within the same process, providing an instruction pointer and
arguments for the new thread. The new thread is provided with its own
register context and stack space and placed on the ready queue.
• Block: When a thread needs to wait for an event, it will block (saving its user
registers, program counter, and stack pointers). The processor may now turn
to the execution of another ready thread in the same or a different process.
• Unblock: When the event for which a thread is blocked occurs, the thread is
moved to the Ready queue.
• Finish: When a thread completes, its register context and stacks are
deallocated.
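The four operations above map naturally onto a thread library. A sketch using Python's threading module (Spawn ≈ Thread.start, Block/Unblock ≈ waiting on an Event, Finish ≈ the thread function returning and being joined):

```python
import threading

event = threading.Event()
log = []

def worker():
    log.append("spawned")
    event.wait()           # Block: sleep until the awaited event occurs
    log.append("unblocked")
    # Finish: returning lets the library deallocate the thread's state

t = threading.Thread(target=worker)  # Spawn: new register context + stack
t.start()
event.set()                # Unblock: move the waiting thread to Ready
t.join()                   # wait for Finish
print(log)                 # ['spawned', 'unblocked']
```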
25. THREAD STATES
Figure 4.3 shows a program that
performs two remote procedure
calls (RPCs) to two different hosts
to obtain a combined result.
26. TYPES OF THREADS
1) User Level Threads
All thread management is done by the application
The kernel is not aware of the existence of threads
The OS only schedules the process, not the threads within the process.
The programmer uses a thread library to manage threads (create, delete,
schedule).
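User-level threading can be sketched with Python generators standing in for a thread library: the "threads" and their round-robin scheduler live entirely in the application, and the kernel never sees them.

```python
from collections import deque

trace = []

def task(name, steps):
    # A user-level "thread": yield hands control back to the scheduler
    for i in range(steps):
        trace.append(f"{name}:{i}")
        yield

ready = deque([task("A", 2), task("B", 2)])  # ready queue in user space

# Round-robin scheduling implemented by the application, not the kernel
while ready:
    t = ready.popleft()
    try:
        next(t)           # run the thread until it yields
        ready.append(t)   # put it back on the ready queue
    except StopIteration:
        pass              # the thread finished
print(trace)              # ['A:0', 'B:0', 'A:1', 'B:1']
```

Because the whole scheduler is ordinary user code, switching needs no kernel-mode privileges, which is exactly the advantage (and, with blocking system calls, the disadvantage) listed on the next slide.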
27. 1) USER LEVEL THREADS
Advantages
Thread switching does not require Kernel mode privileges.
User level thread can run on any operating system.
Scheduling can be application specific in the user level thread.
User level threads are fast to create and manage.
Disadvantages
In a typical operating system, most system calls are blocking.
Multithreaded application cannot take advantage of multiprocessing.
28. TYPES OF THREADS
2) Kernel Level Threads
All thread management is done by the kernel
Kernel maintains context information for the process and the threads
No thread library but an API to the kernel thread facility
Switching between threads requires the kernel
Scheduling is done on a thread basis
Ex. W2K, Linux, and OS/2
29. 2) KERNEL LEVEL THREADS
Advantages
Kernel can simultaneously schedule multiple threads from the same process on
multiple processors.
If one thread in a process is blocked, the Kernel can schedule another thread of
the same process.
Kernel routines themselves can be multithreaded.
Disadvantages
Kernel threads are generally slower to create and manage than the user
threads.
Transfer of control from one thread to another within same process requires a
mode switch to the Kernel.
31. Thread Advantages
Threads are memory efficient. Many threads can be efficiently contained within a single
EXE, while each process can incur the overhead of an entire EXE.
Threads share a common program space, which, among other things, means that
messages can be passed by queuing only a pointer to the message. Since processes
do not share a common program space, the kernel must either copy the entire
message from process A's program space to process B's program space - a
tremendous disadvantage for large messages, or provide some mechanism by which
process B can access the message.
Thread task switching time is faster, since a thread has less context to save than a
process.
With threads the kernel is linked in with the user code to create a single EXE. This
means that all the kernel data structures like the ready queue are available for viewing
with a debugger. This is not the case with a process, since the process is an
autonomous application and the kernel is separate, which makes for a less flexible
environment.
Economy - it is more economical to create and context-switch threads.
Utilization of multiprocessor architectures to a greater scale and efficiency.
32. Thread Disadvantages
Threads are typically not loadable. That is, to add a new thread, you must add the new
thread to the source code, then compile and link to create the new executable.
Processes are loadable, thus allowing a multi-tasking system to be characterized
dynamically. For example, depending upon system conditions, certain processes can
be loaded and run to characterize the system. However, the same can be
accomplished with threads by linking in all the possible threads required by the system,
but only activating those that are needed, given the conditions. The really big
advantage of loadability is that the process concept allows processes (applications) to
be developed by different companies and offered as tools to be loaded and used by
others in their multi-tasking applications.
Threads can walk over the data space of other threads. This cannot happen with
processes. If an attempt is made to walk over another process's data space, an
exception error will occur.
33. Threading Issues
Fork() creates a separate duplicate process, and exec() replaces the entire process with
the program specified by its parameter. However, we need to consider what happens in a
multi-threaded process. Exec() works in the same manner – replacing the entire process
including any threads (kernel threads are reclaimed by the kernel), but if a thread calls
fork(), should all threads be duplicated, or is the new process single threaded?
Some UNIX systems work around this by having two versions of fork(), a fork() that
duplicates all threads and a fork() that duplicates only the calling thread, and usages of
these two versions depends entirely on the application. If exec() is called immediately
after a fork(), then duplicating all threads is not necessary, but if exec() is not called, all
threads are copied (an OS overhead is implied to copy all the kernel threads as well).
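The fork()-then-exec() pattern discussed above, sketched in Python on a Unix-like system. When exec() follows immediately, only the calling thread needs to survive the fork, because the whole image is about to be replaced anyway:

```python
import os
import sys

pid = os.fork()
if pid == 0:
    # Child: exec replaces the entire process image, threads included
    os.execv(sys.executable, [sys.executable, "-c", "print('replaced')"])
else:
    # Parent: wait for the replaced child to run and exit
    _, status = os.waitpid(pid, 0)
    print("child exited with", os.WEXITSTATUS(status))
```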
35. SYMMETRIC MULTIPROCESSING
Traditionally, the computer has been viewed as a sequential machine.
A processor executes instructions one at a time in sequence
Each instruction is a sequence of operations
Two popular approaches to providing parallelism
Symmetric MultiProcessors (SMPs)
Clusters (ch 16)
36. CATEGORIES OF
COMPUTER SYSTEMS
Single Instruction Single Data (SISD) stream
Single processor executes a single instruction stream to operate on data stored
in a single memory
Single Instruction Multiple Data (SIMD) stream
Each instruction is executed on a different set of data by the different
processors
Multiple Instruction Single Data (MISD) stream (Never implemented)
A sequence of data is transmitted to a set of processors, each of which executes a
different instruction sequence
Multiple Instruction Multiple Data (MIMD)
A set of processors simultaneously execute different instruction sequences on
different data sets
37. SYMMETRIC MULTIPROCESSING
DEFINITION
• A computer architecture that provides fast
performance by making multiple CPUs
available to complete individual
processes simultaneously.
40. SYMMETRIC MULTIPROCESSING
HOW IT WORKS?
1. There are multiple processors.
- Each has access to a shared main memory and the I/O
devices
2. Main Memory (MM) operating under a single OS with two or
more homogeneous processors.
- each processor has cache memory (or cache)
> to speed-up the MM data access
> to reduce the system bus traffic.
3. Processors are interconnected using an interconnection
mechanism
- buses, crossbar switches, or on-chip mesh networks.
41. SYMMETRIC MULTIPROCESSING
HOW IT WORKS?
4. The memory is often organized so that
- multiple simultaneous accesses to separate blocks of memory
are possible.
5. SMP systems allow any processor to work on any task
- no matter where the data for that task is located in memory
- Easily move tasks between processors to balance the workload
- may view the system like a multiprogramming uniprocessor
system.
- can construct applications that use multiple processes without
regard to whether a single processor or multiple processors will be
available.
6. SMP provides all the functionality of a multiprogramming system
plus additional features
- to accommodate multiple processors.
42. SYMMETRIC MULTIPROCESSING
KEY DESIGN
• Simultaneous concurrent processes or threads
> Kernel routines need to be re-entrant to allow several processors to execute the
same kernel code simultaneously.
> to avoid deadlock or invalid operations.
• Scheduling
> Avoid conflicts.
> If kernel-level multithreading is used, then the opportunity exists to schedule multiple threads from the same process simultaneously on multiple processors.
• Synchronization
> To provide effective synchronization.
• Memory management
> Deal with all issues found on uniprocessor computers
> The OS needs to exploit the available hardware parallelism to achieve the best
performance. The paging mechanisms on different processors must be coordinated to enforce consistency.
• Reliability and fault tolerance
> To degrade gracefully in the event of processor failure.
> Must recognize the loss of a processor and restructure management tables
accordingly.
43. SYMMETRIC MULTIPROCESSING
ADVANTAGES
1. In symmetric multiprocessing, any processor can run any type of
thread. The processors communicate with each other through shared
memory, whereas in ASMP the operating system typically sets
aside one or more processors for its exclusive use.
2. SMP systems provide better load-balancing and fault tolerance.
Because the operating system threads can run on any processor, the
chance of hitting a CPU bottleneck is greatly reduced. All processors
are allowed to run a mixture of application and operating system code,
whereas in ASMP the bottleneck problem is much greater.
3. A processor failure in the SMP model only reduces the computing
capacity of the system, whereas in the ASMP model, if the processor
that fails is an operating system processor, the whole computer can go
down.
44. SYMMETRIC MULTIPROCESSING
DISADVANTAGES
1. SMP systems are inherently more complex than ASMP systems.
2. A tremendous amount of coordination must take place within the
operating system to keep everything synchronized, whereas ASMP uses
a simpler processing model.
46. MICROKERNEL
DEFINITION
• Small OS core that provides the foundation for modular extensions.
• The microkernel was popularized by the Mach OS, which is now the core
of the Macintosh Mac OS X operating system.
• The philosophy underlying the microkernel is that only absolutely
essential core OS functions should be in the kernel.
48. MICROKERNEL DESIGN
Validates messages.
Passes messages between components.
Grants access to the hardware.
Message Exchange
49. MICROKERNEL DESIGN
- It prevents message passing unless
exchange is allowed.
- Steps:
1. An application wishes to open a file.
2. The microkernel sends a message to the file system
server.
3. An application wishes to create a process
or thread.
4. The microkernel sends a message to the
process server.
Protection Function
50. MICROKERNEL DESIGN
Low-level memory management - Mapping each virtual page to a physical page
frame
Most memory management tasks occur in user space
Memory Management
51. MICROKERNEL DESIGN
Communication between processes or threads in a microkernel OS is via
messages.
A message includes:
A header that identifies the sending and receiving process and
A body that contains direct data, a pointer to a block of data, or some control
information about the process.
Interprocess Communication
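The message format above can be sketched as a tiny port-based exchange (hypothetical names for illustration; real microkernels such as Mach define much richer message layouts):

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class Message:
    sender: int      # header: sending process id
    receiver: int    # header: receiving process id
    body: bytes      # direct data (or, in a real kernel, a pointer / control info)

# One port (message queue) per process, as in port-based IPC
ports = {1: Queue(), 2: Queue()}

def send(msg):
    # A microkernel would validate the message before delivering it
    assert msg.receiver in ports, "unknown receiver"
    ports[msg.receiver].put(msg)

send(Message(sender=1, receiver=2, body=b"open /etc/motd"))
received = ports[2].get()
print(received.sender, received.body)
```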
52. MICROKERNEL DESIGN
Within a microkernel it is possible to handle hardware interrupts as messages and
to include I/O ports in address spaces.
A particular user-level process is assigned to the interrupt, and the kernel
maintains the mapping.
I/O and interrupt
management
53. MICROKERNEL - ADVANTAGES
Uniform Interfaces
all services are provided by means of message passing.
Extensibility
allowing the addition of new services.
Flexibility
not only can new features be added to the OS, but also existing features can be
subtracted to produce a smaller, more efficient implementation.
Portability
Intel’s near monopoly of many segments of the computer platform market is
unlikely to be sustained indefinitely. Thus, portability becomes an attractive feature
of an OS. Changes needed to port the system to a new processor are made in
the microkernel and not in the other services.
54. MICROKERNEL - ADVANTAGES
Reliability
The larger the size of a software product, the more difficult it is to ensure its
reliability. Although modular design helps to enhance reliability, even greater gains
can be achieved with a microkernel architecture. A small microkernel can be
rigorously tested.
Distributed system support
When a message is sent from a client to a server process, the message must
include an identifier of the requested service. If a distributed system (e.g., a
cluster) is configured so that all processes and services have unique identifiers,
then in effect there is a single system image at the microkernel level. A process
can send a message without knowing on which computer the target service
resides.
Support for object-oriented operating systems (OOOSS)
A number of microkernel design efforts are moving in the direction of object
orientation.
Editor's notes
It is useful to see where SMP architectures fit into the overall category of parallel processors.
Figure A: Operating systems developed in the mid to late 1950s were designed with little concern about structure. The problems caused by mutual dependence and interaction were grossly underestimated. In these monolithic operating systems, virtually any procedure can call any other procedure – the approach was unsustainable as operating systems grew to massive proportions. Modular programming techniques were needed to handle this scale of software development. Layered operating systems were developed in which functions are organized hierarchically and interaction only takes place between adjacent layers. Most or all of the layers execute in kernel mode. PROBLEM: Major changes in one layer can have numerous effects on code in adjacent layers - many difficult to trace. And security is difficult to build in because of the many interactions between adjacent layers.
Figure B: In a microkernel, only absolutely essential core OS functions should be in the kernel. Less essential services and applications are built on the microkernel and execute in user mode. A common characteristic is that many services that traditionally have been part of the OS are now external subsystems that interact with the kernel and with each other; these include device drivers, file systems, the virtual memory manager, the windowing system, and security services. The microkernel functions as a message exchange: it validates messages, passes them between components, and grants access to hardware. The microkernel also performs a protection function; it prevents message passing unless exchange is allowed.
The basic form of communication between processes or threads in a microkernel OS is messages. A message includes: A header that identifies the sending and receiving process and A body that contains direct data, a pointer to a block of data, or some control information about the process.Typically, we can think of IPC as being based on ports associated with processes.
The microkernel has to control the hardware concept of address space to make it possible to implement protection at the process level. Provided the microkernel is responsible for mapping each virtual page to a physical frame, the majority of memory management can be implemented outside the kernel (protection of the address space of one process from another, the page replacement algorithm, and other paging logic, etc.).
With a microkernel architecture, it is possible to handle hardware interrupts as messages and to include I/O ports in address spaces.