Memory Management
Swapping
Fragmentation
Types of Fragmentation
Paging
Demand Paging
Segmentation
Virtual Memory
Von Neumann Architecture
Overlays
Process and Thread
Process Synchronization
Deadlock
CPU Scheduling
Process Scheduling
5. Memory Management
Memory management is the functionality of an
operating system which handles or manages
primary memory and moves processes back
and forth between main memory and disk
during execution.
It decides which process will get memory and at
what time. It tracks when memory is freed or
unallocated and updates the status accordingly.
7. Swapping
Swapping is a mechanism in which a process
can be temporarily swapped (moved) out of main
memory to secondary storage (disk), making
that memory available to other processes. At
some later time, the system swaps the process
back in from secondary storage.
Swapping is also known as a technique for
memory compaction.
10. Fragmentation
As processes are loaded into and removed from
memory, the free memory space is broken into
little pieces. Over time, processes cannot be
allocated to these memory blocks because the
blocks are too small, and the blocks remain
unused. This problem is known as fragmentation.
Fragmentation is of two types
• External Fragmentation
• Internal Fragmentation
12. Types of Fragmentation
External fragmentation
The total memory space is enough to satisfy a request
or to hold a process, but it is not contiguous, so it
cannot be used.
Internal fragmentation
The memory block assigned to a process is bigger than
requested. Some portion of the block is left unused,
and it cannot be used by another process.
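A minimal sketch of the two cases (the hole sizes, block sizes, and request size below are hypothetical, chosen only for illustration):

```python
# Internal fragmentation: a fixed-size block is larger than the
# process placed in it, so leftover bytes inside the block are wasted.
def internal_fragmentation(block_size, process_size):
    return block_size - process_size

# External fragmentation: total free memory would satisfy the request,
# but no single contiguous hole is large enough.
free_holes = [50, 30, 40]          # hypothetical free holes, in KB
request = 100                      # process needs 100 KB contiguous
total_free = sum(free_holes)       # 120 KB free in total...
fits = any(hole >= request for hole in free_holes)  # ...but no hole fits

print(internal_fragmentation(100, 97))  # 3 KB wasted inside the block
print(total_free, fits)                 # 120 False
```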
15. Paging
Paging is a memory management technique in
which the process address space is broken into
blocks of the same size called pages (the size is
a power of 2, between 512 bytes and 8192
bytes). The size of a process is measured in
the number of pages.
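Because the page size is a power of two, a virtual address splits cleanly into a page number and an offset. A small sketch, assuming a 4 KB page size and a hypothetical page table:

```python
PAGE_SIZE = 4096  # a power of two, as paging requires

def split_address(vaddr, page_size=PAGE_SIZE):
    """Split a virtual address into (page number, offset within page)."""
    return vaddr // page_size, vaddr % page_size

# Hypothetical page table mapping page numbers to physical frame numbers.
page_table = {0: 5, 1: 9, 2: 3, 3: 7, 4: 2}

page, offset = split_address(20000)
frame = page_table[page]                 # look up the frame for this page
physical = frame * PAGE_SIZE + offset    # rebuild the physical address
print(page, offset, physical)            # 4 3616 11808
```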
18. Demand Paging
A demand paging system is similar to a
paging system with swapping, where
processes reside in secondary memory and
pages are loaded only on demand, not in
advance. When a context switch occurs, the
operating system does not copy any of the old
program’s pages out to disk or any of the
new program’s pages into main memory.
Instead, it begins executing the new
program after loading its first page and
fetches that program’s pages as they are
referenced.
21. Segmentation
Segmentation is a memory management
technique in which each job is divided into
several segments of different sizes, one for
each module that contains pieces performing
related functions. Each segment is actually a
different logical address space of the program.
When a process is to be executed, its
corresponding segments are loaded into
non-contiguous memory, though every
segment is loaded into a contiguous block of
available memory.
22. Overlays
Overlay is a technique to run a program that is
bigger than the physical memory by keeping in
memory only those instructions and data that
are needed at any given time. The program is
divided into modules in such a way that not
all modules need to be in memory at the
same time.
23. Overlays
The concept of overlays is that a running
process does not use the complete program at
the same time; it uses only some part of it.
Whatever part is required is loaded, and once
that part is done, it is unloaded (pulled back)
so the next required part can be brought in
and run.
Formally,
24. Overlays
“The process of transferring a block of
program code or other data into internal
memory, replacing what is already stored.”
Sometimes the size of the program is even
larger than the biggest partition; in that case,
overlays should be used.
26. Virtual Memory
A computer can address more memory than
the amount physically installed on the system.
This extra memory is actually called virtual
memory and it is a section of a hard disk
that's set up to emulate the computer's RAM.
Virtual memory serves two purposes. First, it
allows us to extend the use of physical
memory by using disk. Second, it allows us to
have memory protection, because each virtual
address is translated to a physical address.
29. Von Neumann Architecture
The Von Neumann architecture was first proposed
by the computer scientist John von Neumann. In this
architecture, a single data path exists for both
instructions and data.
As a result, the CPU does one operation at a time:
it either fetches an instruction from memory or
performs a read/write operation on data.
An instruction fetch and a data operation
cannot occur simultaneously because they share a
common bus, which makes memory access a
significant source of lost time.
32. Process and Thread
• A process is a program under execution, i.e. an
active program. A thread is a lightweight process
that can be managed independently by a scheduler.
• Processes require more time for context switching
as they are heavier. Threads require less time for
context switching as they are lighter than processes.
• Processes are totally independent and don’t share
memory. A thread may share some memory with its
peer threads.
• Communication between processes requires more
time than communication between threads.
• If a process gets blocked, the remaining processes
can continue execution. If a user-level thread gets
blocked, all of its peer threads also get blocked.
• Processes require more resources than threads.
Threads generally need fewer resources than
processes.
• Individual processes are independent of each
other. Threads are parts of a process and so are
dependent.
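The memory-sharing difference can be seen with Python's threading module: both threads below append to the same list because they live in one address space (the worker function and names are illustrative):

```python
import threading

shared = []              # one list, visible to every thread in the process
lock = threading.Lock()  # serialise appends from the two workers

def worker(name, count):
    for i in range(count):
        with lock:
            shared.append((name, i))

t1 = threading.Thread(target=worker, args=("A", 100))
t2 = threading.Thread(target=worker, args=("B", 100))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(shared))  # 200: both threads wrote into the same memory
```

With separate processes, each would get its own copy of `shared`, and an explicit IPC mechanism would be needed instead.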
34. User Level Thread (ULT)
User-level threads are implemented in a
user-level library; they are not created using
system calls. Thread switching does not need to
call the OS or cause an interrupt to the kernel.
The kernel does not know about user-level
threads and manages them as if they were
single-threaded processes.
35. Advantages of User Level Thread (ULT)
• Can be implemented on an OS that doesn’t
support multithreading.
• Simple representation, since a thread has only
a program counter, register set, and stack space.
• Simple to create, since no kernel intervention
is required.
• Thread switching is fast, since no OS calls
need to be made.
36. Disadvantages of ULT
• Little or no coordination between the
threads and the kernel.
• If one thread causes a page fault, the
entire process blocks.
37. Kernel Level Thread (KLT)
The kernel knows about and manages the threads.
Instead of a thread table in each process, the
kernel itself has a thread table (a master one)
that keeps track of all the threads in the system.
In addition, the kernel maintains the traditional
process table to keep track of processes. The OS
kernel provides system calls to create and manage
threads.
38. Advantages of KLT
• Since the kernel has full knowledge of the
threads in the system, the scheduler may
decide to give more time to processes
having a large number of threads.
• Good for applications that frequently
block.
39. Disadvantages of KLT
• Slow and inefficient compared to user-level
threads.
• Each thread requires a thread control block,
which adds overhead.
41. Deadlock
If two or more processes are waiting for an
event that will never happen, those processes
are involved in a deadlock, and that state is
called deadlock.
42. In the above diagram, Process 1 holds
Resource 1 and needs to acquire Resource 2.
Similarly, Process 2 holds Resource 2 and needs
to acquire Resource 1. Process 1 and Process 2
are in deadlock, as each of them needs the
other’s resource to complete its execution,
but neither of them is willing to relinquish its
resource.
44. Deadlock Continue
Mutual Exclusion
There should be a resource that can only be
held by one process at a time. In the diagram
below, there is a single instance of Resource 1
and it is held by Process 1 only.
45. Deadlock Continue
Hold and Wait
A process can hold multiple resources and still
request more resources from other processes
which are holding them. In the diagram given
below, Process 2 holds Resource 2 and
Resource 3 and is requesting the Resource 1
which is held by Process 1.
46. No Preemption
A resource cannot be preempted from a process
by force. A process can only release a resource
voluntarily. In the diagram below, Process 2
cannot preempt Resource 1 from Process 1. It
will only be released when Process 1
relinquishes it voluntarily after its execution is
complete. A printer is a typical example of
such a non-preemptible resource.
47. Deadlock Continue
Circular Wait
A process is waiting for the resource held by the
second process, which is waiting for the
resource held by the third process and so on, till
the last process is waiting for a resource held by
the first process. This forms a circular chain. For
example: Process 1 is allocated Resource2 and
it is requesting Resource 1. Similarly, Process 2
is allocated Resource 1 and it is requesting
Resource 2. This forms a circular wait loop
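The circular wait above shows up as a cycle in a wait-for graph, which a depth-first search can detect. A small sketch (the process names match the example; the graph structure and helper are hypothetical):

```python
# Edge "P -> Q" means process P waits for a resource held by Q.
wait_for = {"P1": ["P2"], "P2": ["P1"]}  # the two-process example above

def has_cycle(graph):
    """Detect a cycle in a wait-for graph via DFS with a recursion stack."""
    visited, on_stack = set(), set()
    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, []):
            if nxt in on_stack:               # back edge: circular wait
                return True
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False
    return any(dfs(n) for n in graph if n not in visited)

print(has_cycle(wait_for))                    # True: deadlock possible
print(has_cycle({"P1": ["P2"], "P2": []}))    # False: no cycle
```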
50. CPU scheduler
CPU scheduling is a process that allows one
process to use the CPU while the execution of
another process is on hold (in the waiting state)
due to the unavailability of a resource such as
I/O, thereby making full use of the CPU. The aim
of CPU scheduling is to make the system
efficient, fast, and fair.
51. CPU scheduler cont..
Whenever the CPU becomes idle, the
operating system must select one of the
processes in the ready queue to be executed.
The selection process is carried out by the
short-term scheduler (or CPU scheduler). The
scheduler selects from among the processes
in memory that are ready to execute, and
allocates the CPU to one of them.
CPU scheduling decisions may take place
under the following four circumstances:
52. CPU scheduler cont..
1. When a process switches from the running
state to the waiting state (for an I/O request,
or invocation of wait for the termination of one
of its child processes).
2. When a process switches from the running
state to the ready state (for example, when
an interrupt occurs).
3. When a process switches from the waiting
state to the ready state (for example, on
completion of I/O).
4. When a process terminates.
54. Process scheduling
Process scheduling is the activity of the
process manager that handles the removal of
the running process from the CPU and the
selection of another process on the basis of a
particular strategy.
Process scheduling is an essential part of
multiprogramming operating systems. Such
operating systems allow more than one
process to be loaded into executable
memory at a time, and the loaded processes
share the CPU using time multiplexing.
56. process scheduling cont..
These algorithms are either non-preemptive
or preemptive.
Non-preemptive algorithms are designed so
that once a process enters the running state, it
cannot be preempted until it completes its
allotted time.
Preemptive scheduling is based on priority:
a scheduler may preempt a low-priority
running process at any time when a high-priority
process enters the ready state.
57. First Come First Serve (FCFS)
• Jobs are executed on a first come, first served
basis.
• It is non-preemptive.
• Easy to understand and implement.
• Its implementation is based on a FIFO queue.
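A quick sketch of FCFS waiting times (the burst times are hypothetical, and all jobs are assumed to arrive at time 0): a long job arriving first delays everything behind it.

```python
def fcfs_waiting_times(burst_times):
    """Waiting time of each job when served strictly in submission
    order (all jobs assumed to arrive at time 0)."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # this job waits for everything before it
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])   # hypothetical CPU bursts
print(waits, sum(waits) / len(waits))    # [0, 24, 27] 17.0
```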
58. Shortest Job Next (SJN)
• Also known as shortest job first (SJF).
• It is non-preemptive.
• Best approach to minimize waiting time.
• The processor must know in advance how
much time each process will take.
59. Priority Based Scheduling
• Priority scheduling is a non-preemptive algorithm
and one of the most common scheduling
algorithms in batch systems.
• Each process is assigned a priority. The process
with the highest priority is executed first, and so on.
• Processes with the same priority are executed on
a first come, first served basis.
• Priority can be decided based on memory
requirements, time requirements, or any other
resource requirement.
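A minimal sketch of the ordering rule (the jobs are hypothetical, and a lower number is assumed to mean higher priority, which is one common convention):

```python
# Each job: (name, priority, burst time); lower number = higher priority.
jobs = [("A", 3, 10), ("B", 1, 5), ("C", 2, 8), ("D", 1, 2)]

# Python's sort is stable, so jobs with equal priority keep their
# submission order: first come, first served within a priority level.
schedule = sorted(jobs, key=lambda job: job[1])
print([job[0] for job in schedule])  # ['B', 'D', 'C', 'A']
```

Note that B and D share priority 1, and B runs first because it was submitted first, matching the tie-breaking rule above.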