Lab 7: Page tables
Advanced Operating Systems

Zubair Nabi
zubair.nabi@itu.edu.pk

March 27, 2013
Introduction

Page tables allow the OS to:

• Multiplex the address spaces of different processes onto a single
physical memory space
• Protect the memories of different processes
• Map the same kernel memory in several address spaces
• Map the same user memory more than once in one address
space (user pages are also mapped into the kernel’s physical
view of memory)
Page table structure

• An x86 page table contains 2^20 page table entries (PTEs)
• Each PTE contains a 20-bit physical page number (PPN) and
some flags
• The paging hardware translates virtual addresses to physical
ones by:
1 Using the top 20 bits of the virtual address to index into the page
table to find a PTE
2 Replacing the top 20 bits with the PPN in the PTE
3 Copying the lower 12 bits verbatim from the virtual to the physical
address
• Translation takes place at the granularity of 2^12-byte (4KB)
chunks, called pages
Page table structure (2)

• A page table is stored in physical memory as a two-level tree
• Root of the tree: a 4KB page directory
• Each page directory entry (PDE) points to a page table page
• Each page table page holds 1024 32-bit PTEs
• 1024 x 1024 = 2^20
Translation

• Use top 10 bits of the virtual address to index the page directory
• If the PDE is present, use next 10 bits to index the page table
page and obtain a PTE
• If either the PDE or the PTE is missing, raise a fault
• This two-level structure increases efficiency
• How?
Permissions

Each PTE contains associated flags:

Flag      Description
PTE_P     Whether the page is present
PTE_W     Whether the page can be written to
PTE_U     Whether user programs can access the page
PTE_PWT   Whether the page is write-through or write-back
PTE_PCD   Whether caching is disabled
PTE_A     Whether the page has been accessed
PTE_D     Whether the page is dirty
PTE_PS    Page size
Process address space

• Each process has a private address space which is switched on a
context switch (via switchuvm)
• Each address space starts at 0 and goes up to KERNBASE,
allowing 2GB of space (specific to xv6)
• Each time a process requests more memory, the kernel:
1 Finds free physical pages
2 Adds PTEs that point to these physical pages in the process’ page
table
3 Sets PTE_U, PTE_W, and PTE_P
Process address space (2)

Each process’ address space also contains mappings (above
KERNBASE) for the kernel to run. Specifically:

• KERNBASE:KERNBASE+PHYSTOP is mapped to 0:PHYSTOP
• The kernel can use its own instructions and data
• The kernel can directly write to physical memory (for instance,
when creating page table pages)
• A shortcoming of this approach is that the kernel can only make
use of 2GB of memory
• PTE_U is not set for all entries above KERNBASE
Example: Creating an address space for main

• main makes a call to kvmalloc
• kvmalloc creates a page table with kernel mappings above
KERNBASE and switches to it

void kvmalloc(void)
{
  kpgdir = setupkvm();
  switchkvm();
}
setupkvm

1 Allocates a page of memory to hold the page directory
2 Calls mappages to install kernel mappings (kmap):
• Instructions and data
• Physical memory up to PHYSTOP
• Memory ranges for I/O devices

Does not install mappings for user memory
Code: kmap

static struct kmap {
  void *virt;
  uint phys_start;
  uint phys_end;
  int perm;
} kmap[] = {
  { (void*)KERNBASE, 0,             EXTMEM,    PTE_W }, // I/O space
  { (void*)KERNLINK, V2P(KERNLINK), V2P(data), 0     }, // kern text
  { (void*)data,     V2P(data),     PHYSTOP,   PTE_W }, // kern data
  { (void*)DEVSPACE, DEVSPACE,      0,         PTE_W }, // more devices
};
Code: setupkvm

pde_t* setupkvm(void)
{
  pde_t *pgdir;
  struct kmap *k;

  if((pgdir = (pde_t*)kalloc()) == 0)
    return 0;
  memset(pgdir, 0, PGSIZE);
  for(k = kmap; k < &kmap[NELEM(kmap)]; k++)
    if(mappages(pgdir, k->virt, k->phys_end - k->phys_start,
                (uint)k->phys_start, k->perm) < 0)
      return 0;
  return pgdir;
}
mappages

• Installs virtual to physical mappings for a range of addresses
• For each virtual address:
1 Calls walkpgdir to find the address of the PTE for that address
2 Initializes the PTE with the relevant PPN and the desired
permissions
walkpgdir

1 Uses the upper 10 bits of the virtual address to find the PDE
2 Uses the next 10 bits to find the PTE
Physical memory allocation

• Physical memory between the end of the kernel and PHYSTOP is
allocated on the fly
• Free pages are maintained through a linked list (struct run
*freelist) protected by a spinlock
1 Allocation: kalloc() removes a page from the list
2 Deallocation: kfree() adds the page back to the list

struct {
  struct spinlock lock;
  int use_lock;
  struct run *freelist;
} kmem;
exec

• Creates the user part of an address space from the program
binary, in Executable and Linkable Format (ELF)
• Initializes instructions, data, and stack
Today’s task

• Most operating systems implement “anticipatory paging” in which
on a page fault, the next few consecutive pages are also loaded
to preemptively reduce page faults
• Chalk out a design to implement this strategy in xv6
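One possible shape for such a design, assuming demand paging of the program binary has been added to xv6 first (all names and the window size N are hypothetical choices, not a prescribed solution):

```
on page fault at virtual address va:
    handle the faulting page as usual (allocate, map, load contents)
    for i in 1 .. N:                      # N = prefetch window, e.g. 4
        next = va + i * PGSIZE
        if next lies inside the process's address space
           and walkpgdir(pgdir, next, 0) finds no present PTE:
            allocate a physical page, load its contents, install the PTE
```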
Reading(s)

• Chapter 2, “Page tables” from “xv6: a simple, Unix-like teaching
operating system”

SQL Server 2014 In-Memory OLTPTony Rogerson
 
Operating system 35 paging
Operating system 35 pagingOperating system 35 paging
Operating system 35 pagingVaibhav Khanna
 

Ähnlich wie AOS Lab 7: Page tables (20)

Ppt
PptPpt
Ppt
 
02-OS-review.pptx
02-OS-review.pptx02-OS-review.pptx
02-OS-review.pptx
 
Segmentation with paging methods and techniques
Segmentation with paging methods and techniquesSegmentation with paging methods and techniques
Segmentation with paging methods and techniques
 
Os4
Os4Os4
Os4
 
Os4
Os4Os4
Os4
 
Memory Management Strategies - IV.pdf
Memory Management Strategies - IV.pdfMemory Management Strategies - IV.pdf
Memory Management Strategies - IV.pdf
 
Virtual memory translation.pptx
Virtual memory translation.pptxVirtual memory translation.pptx
Virtual memory translation.pptx
 
Implementation of page table
Implementation of page tableImplementation of page table
Implementation of page table
 
address-translation-mechanism-of-80386 (1).ppt
address-translation-mechanism-of-80386 (1).pptaddress-translation-mechanism-of-80386 (1).ppt
address-translation-mechanism-of-80386 (1).ppt
 
Linux Kernel Booting Process (2) - For NLKB
Linux Kernel Booting Process (2) - For NLKBLinux Kernel Booting Process (2) - For NLKB
Linux Kernel Booting Process (2) - For NLKB
 
Memory Management Strategies - III.pdf
Memory Management Strategies - III.pdfMemory Management Strategies - III.pdf
Memory Management Strategies - III.pdf
 
Structure of the page table
Structure of the page tableStructure of the page table
Structure of the page table
 
Memory map
Memory mapMemory map
Memory map
 
Memory management in sql server
Memory management in sql serverMemory management in sql server
Memory management in sql server
 
Lec10 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech -- Memory part2
Lec10 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech -- Memory part2Lec10 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech -- Memory part2
Lec10 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech -- Memory part2
 
cPanelCon 2015: InnoDB Alchemy
cPanelCon 2015: InnoDB AlchemycPanelCon 2015: InnoDB Alchemy
cPanelCon 2015: InnoDB Alchemy
 
AltaVista Search Engine Architecture
AltaVista Search Engine ArchitectureAltaVista Search Engine Architecture
AltaVista Search Engine Architecture
 
Main Memory Management in Operating System
Main Memory Management in Operating SystemMain Memory Management in Operating System
Main Memory Management in Operating System
 
SQL Server 2014 In-Memory OLTP
SQL Server 2014 In-Memory OLTPSQL Server 2014 In-Memory OLTP
SQL Server 2014 In-Memory OLTP
 
Operating system 35 paging
Operating system 35 pagingOperating system 35 paging
Operating system 35 paging
 

Mehr von Zubair Nabi

Lab 5: Interconnecting a Datacenter using Mininet
Lab 5: Interconnecting a Datacenter using MininetLab 5: Interconnecting a Datacenter using Mininet
Lab 5: Interconnecting a Datacenter using MininetZubair Nabi
 
Topic 12: NoSQL in Action
Topic 12: NoSQL in ActionTopic 12: NoSQL in Action
Topic 12: NoSQL in ActionZubair Nabi
 
Lab 4: Interfacing with Cassandra
Lab 4: Interfacing with CassandraLab 4: Interfacing with Cassandra
Lab 4: Interfacing with CassandraZubair Nabi
 
Topic 10: Taxonomy of Data and Storage
Topic 10: Taxonomy of Data and StorageTopic 10: Taxonomy of Data and Storage
Topic 10: Taxonomy of Data and StorageZubair Nabi
 
Topic 11: Google Filesystem
Topic 11: Google FilesystemTopic 11: Google Filesystem
Topic 11: Google FilesystemZubair Nabi
 
Lab 3: Writing a Naiad Application
Lab 3: Writing a Naiad ApplicationLab 3: Writing a Naiad Application
Lab 3: Writing a Naiad ApplicationZubair Nabi
 
Topic 8: Enhancements and Alternative Architectures
Topic 8: Enhancements and Alternative ArchitecturesTopic 8: Enhancements and Alternative Architectures
Topic 8: Enhancements and Alternative ArchitecturesZubair Nabi
 
Topic 7: Shortcomings in the MapReduce Paradigm
Topic 7: Shortcomings in the MapReduce ParadigmTopic 7: Shortcomings in the MapReduce Paradigm
Topic 7: Shortcomings in the MapReduce ParadigmZubair Nabi
 
Lab 1: Introduction to Amazon EC2 and MPI
Lab 1: Introduction to Amazon EC2 and MPILab 1: Introduction to Amazon EC2 and MPI
Lab 1: Introduction to Amazon EC2 and MPIZubair Nabi
 
Topic 6: MapReduce Applications
Topic 6: MapReduce ApplicationsTopic 6: MapReduce Applications
Topic 6: MapReduce ApplicationsZubair Nabi
 

Mehr von Zubair Nabi (11)

Lab 5: Interconnecting a Datacenter using Mininet
Lab 5: Interconnecting a Datacenter using MininetLab 5: Interconnecting a Datacenter using Mininet
Lab 5: Interconnecting a Datacenter using Mininet
 
Topic 12: NoSQL in Action
Topic 12: NoSQL in ActionTopic 12: NoSQL in Action
Topic 12: NoSQL in Action
 
Lab 4: Interfacing with Cassandra
Lab 4: Interfacing with CassandraLab 4: Interfacing with Cassandra
Lab 4: Interfacing with Cassandra
 
Topic 10: Taxonomy of Data and Storage
Topic 10: Taxonomy of Data and StorageTopic 10: Taxonomy of Data and Storage
Topic 10: Taxonomy of Data and Storage
 
Topic 11: Google Filesystem
Topic 11: Google FilesystemTopic 11: Google Filesystem
Topic 11: Google Filesystem
 
Lab 3: Writing a Naiad Application
Lab 3: Writing a Naiad ApplicationLab 3: Writing a Naiad Application
Lab 3: Writing a Naiad Application
 
Topic 9: MR+
Topic 9: MR+Topic 9: MR+
Topic 9: MR+
 
Topic 8: Enhancements and Alternative Architectures
Topic 8: Enhancements and Alternative ArchitecturesTopic 8: Enhancements and Alternative Architectures
Topic 8: Enhancements and Alternative Architectures
 
Topic 7: Shortcomings in the MapReduce Paradigm
Topic 7: Shortcomings in the MapReduce ParadigmTopic 7: Shortcomings in the MapReduce Paradigm
Topic 7: Shortcomings in the MapReduce Paradigm
 
Lab 1: Introduction to Amazon EC2 and MPI
Lab 1: Introduction to Amazon EC2 and MPILab 1: Introduction to Amazon EC2 and MPI
Lab 1: Introduction to Amazon EC2 and MPI
 
Topic 6: MapReduce Applications
Topic 6: MapReduce ApplicationsTopic 6: MapReduce Applications
Topic 6: MapReduce Applications
 

Kürzlich hochgeladen

Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...apidays
 
Apidays Singapore 2024 - Modernizing Securities Finance by Madhu Subbu
Apidays Singapore 2024 - Modernizing Securities Finance by Madhu SubbuApidays Singapore 2024 - Modernizing Securities Finance by Madhu Subbu
Apidays Singapore 2024 - Modernizing Securities Finance by Madhu Subbuapidays
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobeapidays
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Scriptwesley chun
 
DBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDropbox
 
Corporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxCorporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxRustici Software
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdflior mazor
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MIND CTI
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyKhushali Kathiriya
 
ICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesrafiqahmad00786416
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businesspanagenda
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoffsammart93
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...DianaGray10
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAndrey Devyatkin
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century educationjfdjdjcjdnsjd
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024The Digital Insurer
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?Igalia
 
Manulife - Insurer Transformation Award 2024
Manulife - Insurer Transformation Award 2024Manulife - Insurer Transformation Award 2024
Manulife - Insurer Transformation Award 2024The Digital Insurer
 

Kürzlich hochgeladen (20)

Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
 
Apidays Singapore 2024 - Modernizing Securities Finance by Madhu Subbu
Apidays Singapore 2024 - Modernizing Securities Finance by Madhu SubbuApidays Singapore 2024 - Modernizing Securities Finance by Madhu Subbu
Apidays Singapore 2024 - Modernizing Securities Finance by Madhu Subbu
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
DBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor Presentation
 
Corporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxCorporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptx
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : Uncertainty
 
ICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesICT role in 21st century education and its challenges
ICT role in 21st century education and its challenges
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of Terraform
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century education
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
Manulife - Insurer Transformation Award 2024
Manulife - Insurer Transformation Award 2024Manulife - Insurer Transformation Award 2024
Manulife - Insurer Transformation Award 2024
 

AOS Lab 7: Page tables

  • 1. Lab 7: Page tables
    Advanced Operating Systems
    Zubair Nabi (zubair.nabi@itu.edu.pk)
    March 27, 2013
  • 2. Introduction. Page tables allow the OS to:
    • Multiplex the address spaces of different processes onto a single physical memory space
    • Protect the memories of different processes
    • Map the same kernel memory in several address spaces
    • Map the same user memory more than once in one address space (user pages are also mapped into the kernel’s physical view of memory)
  • 7. Page table structure
    • An x86 page table contains 2^20 page table entries (PTEs)
    • Each PTE contains a 20-bit physical page number (PPN) and some flags
    • The paging hardware translates virtual addresses to physical ones by:
      1 Using the top 20 bits of the virtual address to index into the page table to find a PTE
      2 Replacing the top 20 bits with the PPN in the PTE
      3 Copying the lower 12 bits verbatim from the virtual to the physical address
    • Translation takes place at the granularity of 2^12-byte (4KB) chunks, called pages
  • 14. Page table structure (2)
    • A page table is stored in physical memory as a two-level tree
    • Root of the tree: a 4KB page directory
    • Each page directory entry (PDE): points to a page table page
    • Each page table page: 1024 32-bit PTEs
    • 1024 x 1024 = 2^20
  • 19. Translation
    • Use the top 10 bits of the virtual address to index the page directory
    • If the PDE is present, use the next 10 bits to index the page table page and obtain a PTE
    • If either the PDE or the PTE is missing, raise a fault
    • This two-level structure increases efficiency
    • How? A page table page only needs to exist if some address in its 4MB range is mapped, so a sparse address space needs far fewer than 2^20 resident entries
  • 23. Permissions. Each PTE contains associated flags:
    PTE_P    Whether the page is present
    PTE_W    Whether the page can be written to
    PTE_U    Whether user programs can access the page
    PTE_PWT  Whether write-through or write-back
    PTE_PCD  Whether caching is disabled
    PTE_A    Whether the page has been accessed
    PTE_D    Whether the page is dirty
    PTE_PS   Page size
  • 24. Process address space
    • Each process has a private address space which is switched on a context switch (via switchuvm)
    • Each address space starts at 0 and goes up to KERNBASE, allowing 2GB of space (specific to xv6)
    • Each time a process requests more memory, the kernel:
      1 Finds free physical pages
      2 Adds PTEs that point to these physical pages in the process’ page table
      3 Sets PTE_U, PTE_W, and PTE_P
  • 30. Process address space (2)
    Each process’ address space also contains mappings (above KERNBASE) for the kernel to run. Specifically:
    • KERNBASE:KERNBASE+PHYSTOP is mapped to 0:PHYSTOP
    • The kernel can use its own instructions and data
    • The kernel can directly write to physical memory (for instance, when creating page table pages)
    • A shortcoming of this approach is that the kernel can only make use of 2GB of memory
    • PTE_U is not set for any entry above KERNBASE
  • 36. Example: Creating an address space for main
    • main makes a call to kvmalloc
    • kvmalloc creates a page table with kernel mappings above KERNBASE and switches to it

    void
    kvmalloc(void)
    {
      kpgdir = setupkvm();
      switchkvm();
    }
• 39. setupkvm
  1 Allocates a page of memory to hold the page directory
  2 Calls mappages to install kernel mappings (kmap):
    • Instructions and data
    • Physical memory up to PHYSTOP
    • Memory ranges for I/O devices
  Does not install mappings for user memory
• 45. Code: kmap

  static struct kmap {
    void *virt;
    uint phys_start;
    uint phys_end;
    int perm;
  } kmap[] = {
    { (void*)KERNBASE, 0,             EXTMEM,    PTE_W }, // I/O space
    { (void*)KERNLINK, V2P(KERNLINK), V2P(data), 0     }, // kern text
    { (void*)data,     V2P(data),     PHYSTOP,   PTE_W }, // kern data
    { (void*)DEVSPACE, DEVSPACE,      0,         PTE_W }, // more devices
  };
• 46. Code: setupkvm

  pde_t*
  setupkvm(void)
  {
    pde_t *pgdir;
    struct kmap *k;

    if((pgdir = (pde_t*)kalloc()) == 0)
      return 0;
    memset(pgdir, 0, PGSIZE);
    for(k = kmap; k < &kmap[NELEM(kmap)]; k++)
      if(mappages(pgdir, k->virt, k->phys_end - k->phys_start,
                  (uint)k->phys_start, k->perm) < 0)
        return 0;
    return pgdir;
  }
• 47. mappages
  • Installs virtual to physical mappings for a range of addresses
  • For each virtual address:
    1 Calls walkpgdir to find the address of the PTE for that address
    2 Initializes the PTE with the relevant PPN and the desired permissions
• 51. walkpgdir
  1 Uses the upper 10 bits of the virtual address to find the PDE
  2 Uses the next 10 bits to find the PTE
• 53. Physical memory allocation
  • Physical memory between the end of the kernel and PHYSTOP is allocated on the fly
  • Free pages are maintained through a linked list struct run *freelist, protected by a spinlock
    1 Allocation: remove a page from the list: kalloc()
    2 Deallocation: add the page to the list: kfree()

  struct {
    struct spinlock lock;
    int use_lock;
    struct run *freelist;
  } kmem;
• 58. exec
  • Creates the user part of an address space from the program binary, in Executable and Linkable Format (ELF)
  • Initializes instructions, data, and stack
• 60. Today’s task
  • Most operating systems implement “anticipatory paging”: on a page fault, the next few consecutive pages are also loaded, to preemptively reduce future page faults
  • Chalk out a design to implement this strategy in xv6
  • 61. Reading(s) • Chapter 2, “Page tables” from “xv6: a simple, Unix-like teaching operating system”