IS 311
File Organization
Dr. Yasser ALHABIBI
Misr University for Science and Technology
Faculty of Information Technology
Fall Semester
2010/2011
List of References
Course notes
Available (handed to students part by part).
Essential books (text books)
• Operating Systems: Internals and Design Principles, 6/E, by William Stallings,
Publisher Prentice Hall Co.
• Fundamentals of Database Systems, By Elmasri R and Navathe S. B., Publisher
Benjamin Cummings
• Database Management, By F. R. McFadden & J. A. Hoffer,
Publisher Benjamin Cummings
• An Introduction to Database Systems, By C. J. Date,
Publisher Addison-Wesley
Recommended books
• Other books that are related to Operating Systems, and Database Management
Systems
Course Outline
• Basic file concepts: Storage characteristics, Magnetic tape, Hard
magnetic disks, Blocking and extents
• Physical data transfer considerations: Buffered input/output,
Memory, Mapped files, File systems
• Sequential chronological files: Description and Primitive
operations, File functions, Algorithms
• Ordered files: Description and Primitive operations, Algorithms
• Direct Access files: Description and Primitive operations, Hashing
functions, Algorithms for handling synonyms
• Indexed sequential files: Description and Primitive operations,
Algorithms
• Indexes: concepts, Linear indexes, Tree indexes, Description of
heterogeneous and homogeneous Trees, Tree indexing
algorithms.
Course format/policies
Course Format
• Lectures will be largely interactive. You will be
called on randomly to answer questions.
• A fairly simple 20-minute quiz in class (Weeks
3, 5, 7, 11, 13, and 15), just before the break.
• The course's Windows Live Group will host lecture
slides, quiz answers, homework, and general
announcements.
Grading
• Breakdown
– 40% final
– 20% Mid Term
– 20% weekly in-class quizzes
– 20% bi-weekly assignments (Lab Activity)
• You may share ideas for the weekly assignments
but each must be written up individually.
• Will drop lowest quiz score.
Homework Submission
• Homework due every other Saturday before
midnight – explicit submission instructions later.
• Please do not ever ask for an extension.
• Late assignments incur 10% per day penalty up
to 3 days. After 3 days, no credit.
• Solutions will be posted after all homework is
submitted (or after 3 days, whichever comes first)
• Under special circumstances, you may be
excused from an assignment or quiz. Must talk
to me ahead of time.
Course Strategy
• Course will move quickly
• Each subject builds on previous ones
– More important to be consistent than
occasionally heroic
• Start by hacking, back up to cover more
fundamental topics
Weeks 01 & 02
Reference:
Operating Systems: Internals and Design Principles, 6/E, by William Stallings,
Publisher Prentice Hall Co. [Chapter 11]
Roadmap
– I/O Devices
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID
– Disk Cache
– UNIX SVR4 I/O
– LINUX I/O
– Windows I/O
Categories of
I/O Devices
• Difficult area of OS design
– Difficult to develop a consistent solution due
to a wide variety of devices and applications
• Three Categories:
– Human readable
– Machine readable
– Communications
Human readable
• Devices used to communicate with the
user
– Printers
– Terminals
• Video display
• Keyboard
• Mouse etc
Machine readable
• Used to communicate with electronic
equipment
– Disk drives
– USB keys
– Sensors
– Controllers
– Actuators
Communication
• Used to communicate with remote devices
– Digital line drivers
– Modems
Differences in
I/O Devices
• Devices differ in a number of areas
– Data Rate
– Application
– Complexity of Control
– Unit of Transfer
– Data Representation
– Error Conditions
Data Rate
• There may be a massive difference between the
data transfer rates of devices
Application
– Disk used to store files requires file
management software
– Disk used to store virtual memory pages
needs special hardware and software to
support it
– Terminal used by system administrator may
have a higher priority
See Note Page
Complexity of control
• A printer requires a relatively simple
control interface.
• A disk is much more complex.
• This complexity is filtered to some extent
by the complexity of the I/O module that
controls the device.
Unit of transfer
• Data may be transferred as
– a stream of bytes or characters (e.g., terminal
I/O)
– or in larger blocks (e.g., disk I/O).
Data representation
• Different data encoding schemes are used
by different devices,
– including differences in character code and
parity conventions.
Error Conditions
• The nature of errors differ widely from one
device to another.
• Aspects include:
– the way in which they are reported,
– their consequences,
– the available range of responses
Roadmap
– I/O Devices
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID
– Disk Cache
– UNIX SVR4 I/O
– LINUX I/O
– Windows I/O
Techniques for performing I/O
ORGANIZATION OF THE I/O FUNCTION
• Programmed I/O
• Interrupt-driven I/O
• Direct memory access (DMA)
See Note Page
Evolution of the
I/O Function
1. Processor directly controls a peripheral
device
2. Controller or I/O module is added
– Processor uses programmed I/O without
interrupts
– Processor does not need to handle details of
external devices
See Note Page
Evolution of the
I/O Function cont…
3. Controller or I/O module with interrupts
– Efficiency improves as processor does not
spend time waiting for an I/O operation to be
performed
4. Direct Memory Access
– Blocks of data are moved into memory
without involving the processor
– Processor involved at beginning and end only
See Note Page
Evolution of the
I/O Function cont…
5. I/O module is a separate processor
– CPU directs the I/O processor to execute an
I/O program in main memory.
6. I/O processor
– I/O module has its own local memory
– Commonly used to control communications
with interactive terminals
See Note Page
Direct Memory Access
• Processor delegates I/O
operation to the DMA
module
• DMA module transfers data
directly to or from memory
• When complete, the DMA
module sends an interrupt
signal to the processor
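This delegate / transfer / interrupt sequence can be mimicked in software. Below is a minimal Python sketch (illustrative only: the device, memory, and "interrupt" are simulated objects, not real hardware):

    import threading

    def dma_transfer(device, memory, start, count, on_complete):
        # Simulated DMA module: copies a block with no 'processor' loop,
        # then raises the completion 'interrupt' (a callback here).
        def worker():
            memory[start:start + count] = device[:count]
            on_complete()
        threading.Thread(target=worker).start()

    done = threading.Event()
    device = bytearray(b'x' * 512)   # stand-in for a device's data
    memory = bytearray(1024)         # stand-in for main memory

    dma_transfer(device, memory, 0, 512, done.set)  # processor delegates the I/O
    # ... the processor is free to execute other work here ...
    done.wait()                      # the 'interrupt' signals completion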
DMA Configurations:
Single Bus
• DMA can be configured in several ways
• Shown here, all modules share the same
system bus
See Note Page
DMA Configurations:
Integrated DMA & I/O
• Direct Path between DMA and I/O modules
• This substantially cuts the required bus cycles
See Note Page
DMA Configurations:
I/O Bus
• Reduces the number of I/O interfaces in the
DMA module
See Note Page
Roadmap
– I/O Devices
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID
– Disk Cache
– UNIX SVR4 I/O
– LINUX I/O
– Windows I/O
Goals: Efficiency
• Most I/O devices are extremely slow
compared to main memory
• Use of multiprogramming allows for some
processes to be waiting on I/O while
another process executes
• I/O cannot keep up with processor speed
– Swapping used to bring in ready processes
– But this is an I/O operation itself
See Note Page
Generality
• For simplicity and freedom from error it is
desirable to handle all I/O devices in a
uniform manner
• Hide most of the details of device I/O in
lower-level routines
• Difficult to completely generalize, but can
use a hierarchical modular design of I/O
functions
See Note Page
Hierarchical design
• A hierarchical philosophy leads to
organizing an OS into layers
• Each layer relies on the next lower layer to
perform more primitive functions
• It provides services to the next higher
layer.
• Changes in one layer should not require
changes in other layers
See Note Page
Figure 11.4 A Model of I/O Organization
Local peripheral device
• Logical I/O:
– Deals with the device as a logical
resource
• Device I/O:
– Converts requested operations into
sequence of I/O instructions
• Scheduling and Control
– Performs actual queuing and control
operations
See Note Page
Communications Port
• Similar to previous but the logical
I/O module is replaced by a
communications architecture,
– This consists of a number of layers.
– An example is TCP/IP.
File System
• Directory management
– Concerned with user operations
affecting files
• File System
– Logical structure and operations
• Physical organisation
– Converts logical names to physical
addresses
See Note Page
Roadmap
– I/O Devices
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID
– Disk Cache
– UNIX SVR4 I/O
– LINUX I/O
– Windows I/O
I/O Buffering
• Processes must wait for I/O to complete
before proceeding
– To avoid deadlock, certain pages must remain
in main memory during I/O
• It may be more efficient to perform input
transfers in advance of requests being
made and to perform output transfers
some time after the request is made.
See Note Page
Block-oriented Buffering
• Information is stored in fixed sized blocks
• Transfers are made a block at a time
– Can reference data by block number
• Used for disks and USB keys
See Note Page
Stream-Oriented
Buffering
• Transfer information as a stream of bytes
• Used for terminals, printers,
communication ports, mouse and other
pointing devices, and most other devices
that are not secondary storage
See Note Page
No Buffer
• Without a buffer, the OS directly accesses
the device as and when it needs to
Single Buffer
• Operating system assigns a buffer in main
memory for an I/O request
See Note Page
Block Oriented
Single Buffer
• Input transfers made to buffer
• Block moved to user space when needed
• The next block is moved into the buffer
– Read ahead or Anticipated Input
• Often a reasonable assumption as data is
usually accessed sequentially
See Note Page
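As a rough illustration of anticipated input, the generator below fetches the next block as soon as the current one is handed to the user (a simplified sketch; device_read is a hypothetical routine returning block i, and real overlap of transfer and processing needs concurrency, as in the double-buffer sketch further on):

    def read_sequential(device_read, n_blocks):
        buffer = device_read(0)              # prime the single system buffer
        for i in range(n_blocks):
            block = buffer                   # block moved to 'user space'
            if i + 1 < n_blocks:
                buffer = device_read(i + 1)  # read ahead: anticipated input
            yield block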
Stream-oriented
Single Buffer
• Line-at-a-time or byte-at-a-time
• Terminals often deal with one line at a time
with carriage return signaling the end of
the line
• Byte-at-a-time suits devices where a
single keystroke may be significant
– Also sensors and controllers
See Note Page
Double Buffer
• Use two system buffers instead of one
• A process can transfer data to or from one
buffer while the operating system empties
or fills the other buffer
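A small Python sketch of the idea (illustrative; the "device read" is faked and the buffer size is arbitrary): two buffers circulate between a filler thread and a consumer, so one can be filled while the other is emptied.

    import threading, queue

    empty, full = queue.Queue(), queue.Queue()
    for _ in range(2):                       # exactly two system buffers
        empty.put(bytearray(4096))

    def device_reader(n_blocks):             # fills one buffer...
        for i in range(n_blocks):
            buf = empty.get()
            buf[:] = bytes([i % 256]) * 4096 # stand-in for a device transfer
            full.put(buf)
        full.put(None)                       # end-of-input marker

    def process():                           # ...while the process drains the other
        while (buf := full.get()) is not None:
            _ = bytes(buf)                   # consume the data
            empty.put(buf)                   # return the buffer for refilling

    threading.Thread(target=device_reader, args=(8,)).start()
    process()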
Circular Buffer
• More than two buffers are used
• Each individual buffer is one unit in a
circular buffer
• Used when I/O operation must keep up
with process
See Note Page
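Generalizing to N buffers gives a ring. The sketch below keeps head/tail indices into a fixed pool of buffers (single-threaded and lock-free for brevity; a real implementation would need synchronization between producer and consumer):

    class CircularBuffer:
        def __init__(self, n_buffers, buf_size):
            self.slots = [bytearray(buf_size) for _ in range(n_buffers)]
            self.head = self.tail = self.count = 0

        def put(self, data):                 # producer side (the device)
            if self.count == len(self.slots):
                return False                 # every buffer full: producer must wait
            self.slots[self.tail][:len(data)] = data
            self.tail = (self.tail + 1) % len(self.slots)
            self.count += 1
            return True

        def get(self):                       # consumer side (the process)
            if self.count == 0:
                return None                  # nothing buffered yet
            data = bytes(self.slots[self.head])
            self.head = (self.head + 1) % len(self.slots)
            self.count -= 1
            return data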
Buffer Limitations
• Buffering smoothes out peaks in I/O
demand.
– But with enough demand eventually all buffers
become full and their advantage is lost
• However, when there is a variety of I/O
and process activities to service, buffering
can increase the efficiency of the OS and
the performance of individual processes.
See Note Page
Roadmap
– I/O Devices
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID
– Disk Cache
– UNIX SVR4 I/O
– LINUX I/O
– Windows I/O
See Note Page
Disk Performance
Parameters
• The actual details of disk I/O operation
depend on many things
– A general timing diagram of disk I/O transfer
is shown here.
See Note Page
See Note Page
Positioning the
Read/Write Heads
• When the disk drive is operating, the disk
is rotating at constant speed.
• Track selection involves moving the head
in a movable-head system or electronically
selecting one head on a fixed-head
system.
See Note Page
Disk Performance
Parameters
• Access Time is the sum of:
– Seek time: The time it takes to position the
head at the desired track
– Rotational delay or rotational latency: The
time it takes for the beginning of the sector to
reach the head
• Transfer Time is the time taken to
transfer the data.
See Note Page
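A quick worked example in Python (all figures are assumed for illustration: 4 ms average seek, 7,200 rpm, one 512-byte sector, 50 MB/s sustained transfer rate):

    def access_time_ms(seek_ms, rpm, transfer_bytes, rate_mb_s):
        rotational_ms = 0.5 * (60_000.0 / rpm)   # average delay: half a revolution
        transfer_ms = transfer_bytes / (rate_mb_s * 1_000_000) * 1_000
        return seek_ms + rotational_ms + transfer_ms

    print(access_time_ms(4.0, 7200, 512, 50))    # ~8.18 ms, dominated by seek + latency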
Disk Scheduling
Policies
• To compare the various schemes, consider a
disk whose head is initially located at track 100.
– assume a disk with 200 tracks and that the
disk request queue has random requests in it.
• The requested tracks, in the order
received by the disk scheduler, are
– 55, 58, 39, 18, 90, 160, 150, 38, 184.
First-in, first-out (FIFO)
• Processes requests sequentially
• Fair to all processes
• Approaches random scheduling in
performance if there are many processes
See Note Page
Priority
• Goal is not to optimize disk use but to
meet other objectives
• Short batch jobs may have higher priority
• Provide good interactive response time
• Longer jobs may have to wait an
excessively long time
• A poor policy for database systems
See Note Page
Last-in, first-out
• Good for transaction processing systems
– The device is given to the most recent user so
there should be little arm movement
• Possibility of starvation since a job may
never regain the head of the line
See Note Page
Shortest Service
Time First
• Select the disk I/O request that requires
the least movement of the disk arm from
its current position
• Always choose the minimum seek time
See Note Page
SCAN
• Arm moves in one direction only, satisfying
all outstanding requests until it reaches
the last track in that direction then the
direction is reversed
See Note Page
C-SCAN
• Restricts scanning to one direction only
• When the last track has been visited in
one direction, the arm is returned to the
opposite end of the disk and the scan
begins again
See Note Page
N-step-SCAN
• Segments the disk request queue into
subqueues of length N
• Subqueues are processed one at a time,
using SCAN
– New requests are added to another queue while a
subqueue is being processed
See Note Page
FSCAN
• Two subqueues
• When a scan begins, all of the requests
are in one of the queues, with the other
empty.
• All new requests are put into the other
queue.
• Service of new requests is deferred until all of
the old requests have been processed.
See Note Page
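The selection-based policies above are easy to prototype. The sketch below replays the queue from the running example (head at track 100; requests 55, 58, 39, 18, 90, 160, 150, 38, 184) under FIFO, SSTF, SCAN, and C-SCAN, and tallies total arm movement. One simplification: the arm reverses at the last pending request rather than at the physical end of the disk, and the C-SCAN return sweep is counted as movement.

    def fifo(start, reqs):
        return list(reqs)

    def sstf(start, reqs):
        pending, order, pos = list(reqs), [], start
        while pending:
            nxt = min(pending, key=lambda t: abs(t - pos))  # least arm movement
            pending.remove(nxt)
            order.append(nxt)
            pos = nxt
        return order

    def scan(start, reqs):
        lower = sorted(t for t in reqs if t < start)
        upper = sorted(t for t in reqs if t >= start)
        return upper + lower[::-1]       # sweep up, then reverse direction

    def c_scan(start, reqs):
        lower = sorted(t for t in reqs if t < start)
        upper = sorted(t for t in reqs if t >= start)
        return upper + lower             # sweep up, jump back, sweep up again

    def arm_movement(start, order):
        total, pos = 0, start
        for t in order:
            total += abs(t - pos)
            pos = t
        return total

    reqs = [55, 58, 39, 18, 90, 160, 150, 38, 184]
    for policy in (fifo, sstf, scan, c_scan):
        order = policy(100, reqs)
        print(policy.__name__, order, arm_movement(100, order))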
Performance Compared
Comparison of Disk Scheduling Algorithms
Disk Scheduling
Algorithms
Roadmap
– I/O Devices
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID
– Disk Cache
– UNIX SVR4 I/O
– LINUX I/O
– Windows I/O
Multiple Disks
• Disk I/O performance may be increased
by spreading the operation over multiple
read/write heads
– Or multiple disks
• Disk failures can be recovered if parity
information is stored
See Note Page
RAID
• Redundant Array of Independent Disks
• Set of physical disk drives viewed by the
operating system as a single logical drive
• Data are distributed across the physical
drives of an array
• Redundant disk capacity is used to store
parity information which provides
recoverability from disk failure
See Note Page
RAID 0 - Striped
• Not a true RAID – no redundancy
• Disk failure is catastrophic
• Very fast due to parallel read/write
See Note Page
RAID 1 - Mirrored
• Redundancy through duplication instead
of parity.
• Read requests can be made in parallel.
• Simple recovery from disk failure
See Note Page
RAID 2
(Using Hamming code)
• Synchronized disk rotation
• Data striping is used (extremely small strips)
• Hamming code used to correct single bit
errors and detect double-bit errors
See Note Page
RAID 3
bit-interleaved parity
• Similar to RAID-2, but a simple parity bit is computed
for each set of corresponding bits and all parity is
stored on a single drive
See Note Page
RAID 4
Block-level parity
• A bit-by-bit parity strip is calculated across
corresponding strips on each data disk
• The parity bits are stored in the
corresponding strip on the parity disk.
See Note Page
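The parity arithmetic is plain XOR, which is what makes single-disk recovery possible. A tiny Python demonstration (toy two-byte strips on three imaginary data disks):

    from functools import reduce

    def parity(strips):
        # Bit-by-bit XOR across corresponding strips.
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*strips))

    data = [b'\x0f\xf0', b'\x33\x55', b'\xaa\x01']  # strips on three data disks
    p = parity(data)                                 # strip on the parity disk

    # If disk 1 fails, XOR of the survivors and the parity rebuilds its strip:
    recovered = parity([data[0], data[2], p])
    assert recovered == data[1]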
RAID 5
Block-level Distributed parity
• Similar to RAID-4, but distributes the parity
strips across all drives
See Note Page
RAID 6
Dual Redundancy
• Two different parity calculations are
carried out
– stored in separate blocks on different disks.
• Can recover from two disks failing
See Note Page
Roadmap
– I/O Devices
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID
– Disk Cache
– UNIX SVR4 I/O
– LINUX I/O
– Windows I/O
Disk Cache
• Buffer in main memory for disk sectors
• Contains a copy of some of the sectors on
the disk
• When an I/O request is made for a
particular sector,
– a check is made to determine if the sector is
in the disk cache.
• A number of ways exist to populate the
cache
See Note Page
Least Recently Used
(LRU)
• The block that has been in the cache the
longest with no reference to it is replaced
• A stack of pointers references the cache
– Most recently referenced block is on the top of
the stack
– When a block is referenced or brought into
the cache, it is placed on the top of the stack
See Note Page
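The stack-of-pointers scheme maps naturally onto an ordered dictionary. A minimal sketch (read_from_disk is a hypothetical routine that fetches a block on a miss):

    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.blocks = OrderedDict()      # most recently referenced at the end

        def get(self, block_no, read_from_disk):
            if block_no in self.blocks:
                self.blocks.move_to_end(block_no)     # re-reference: move to top
            else:
                if len(self.blocks) >= self.capacity:
                    self.blocks.popitem(last=False)   # evict the least recent
                self.blocks[block_no] = read_from_disk(block_no)
            return self.blocks[block_no]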
Least Frequently Used
(LFU)
• The block that has experienced the fewest
references is replaced
• A counter is associated with each block
• Counter is incremented each time block
accessed
• When replacement is required, the block
with the smallest count is selected.
See Note Page
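For comparison, an equally small LFU sketch (same hypothetical read_from_disk; ties between equal counts are broken arbitrarily by min):

    class LFUCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.blocks = {}                 # block_no -> data
            self.counts = {}                 # block_no -> reference count

        def get(self, block_no, read_from_disk):
            if block_no not in self.blocks:
                if len(self.blocks) >= self.capacity:
                    victim = min(self.counts, key=self.counts.get)  # fewest references
                    del self.blocks[victim], self.counts[victim]
                self.blocks[block_no] = read_from_disk(block_no)
                self.counts[block_no] = 0
            self.counts[block_no] += 1       # counter incremented on each access
            return self.blocks[block_no]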
Frequency-Based
Replacement
See Note Page
LRU Disk Cache
Performance
See Note Page
Roadmap
– I/O Devices
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID
– Disk Cache
– UNIX SVR4 I/O
– LINUX I/O
– Windows I/O
Devices are Files
• Each I/O device is associated with a
special file
– Managed by the file system
– Provides a clean uniform interface to users
and processes.
• To access a device, read and write
requests are made for the special file
associated with the device.
See Note Page
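On a Unix-like system this is directly visible from user code: the same read call works on a device's special file. A small example (assumes a Linux-style /dev/urandom character device exists):

    import os

    fd = os.open('/dev/urandom', os.O_RDONLY)  # open the device's special file
    data = os.read(fd, 16)                     # an ordinary read request
    os.close(fd)
    print(data.hex())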
UNIX SVR4 I/O
• Each individual device is associated with a
special file
• Two types of I/O
– Buffered
– Unbuffered
See Note Page
Buffer Cache
• Three lists are
maintained
– Free List
– Device List
– Driver I/O Queue
See Note Page
Character Queue
• Used by character-oriented devices
– E.g. terminals and printers
• Either written by the I/O device and read
by the process or vice versa
– Producer/consumer model used
See Note Page
Unbuffered I/O
• Unbuffered I/O is simply DMA between
device and process
– Fastest method
– Process is locked in main memory and can’t
be swapped out
– Device is tied to process and unavailable for
other processes
See Note Page
I/O for Device Types
See Note Page
Roadmap
– I/O Devices
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID
– Disk Cache
– UNIX SVR4 I/O
– LINUX I/O
– Windows I/O
Linux/Unix Similarities
• Linux and Unix (e.g. SVR4) are very
similar in I/O terms
– The Linux kernel associates a special file with
each I/O device driver.
– Block, character, and network devices are
recognized.
See Note Page
The Elevator Scheduler
• Maintains a single queue for disk read and
write requests
• Keeps list of requests sorted by block
number
• Drive moves in a single direction to satisfy
each request
See Note Page
Deadline scheduler
– Uses three queues
• Incoming requests join a sorted elevator queue
• Read requests also go to the tail of a read FIFO queue
• Write requests also go to the tail of a write FIFO queue
– Each request has
an expiration time
See Note Page
Anticipatory
I/O scheduler
• Elevator and deadline scheduling can be
counterproductive if there are numerous
synchronous read requests.
• Delay a short period of time after
satisfying a read request to see if a new
nearby request can be made
See Note Page
Linux Page Cache
• In Linux 2.4 and later, a single unified page
cache for all traffic between disk and main
memory
• Benefits:
– Dirty pages can be collected and written out
efficiently
– Pages in the page cache are likely to be
referenced again due to temporal locality
See Note Page
Roadmap
– I/O Devices
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID
– Disk Cache
– UNIX SVR4 I/O
– LINUX I/O
– Windows I/O
Windows I/O Manager
• The I/O manager is
responsible for all I/O
for the operating
system
• It provides a uniform
interface that all types
of drivers can call.
See Note Page
Windows I/O
• The I/O manager works closely with:
– Cache manager – handles all file caching
– File system drivers - route I/O requests for
file system volumes to the appropriate
software driver for that volume.
– Network drivers - implemented as software
drivers rather than part of the Windows
Executive.
– Hardware device drivers
See Note Page
Asynchronous and
Synchronous I/O
• Windows offers two modes of I/O
operation:
– asynchronous and synchronous.
• Asynchronous mode is used whenever
possible to optimize application
performance.
Software RAID
• Windows implements RAID functionality
as part of the operating system and can
be used with any set of multiple disks.
• RAID 0, RAID 1, and RAID 5 are supported.
• In the case of RAID 1 (disk mirroring), the
two disks containing the primary and
mirrored partitions may be on the same
disk controller or different disk controllers.
Volume Shadow Copies
• Shadow copies are implemented by a
software driver that makes copies of data
on the volume before it is overwritten.
• Designed as an efficient way of making
consistent snapshots of volumes so that
they can be backed up.
– Also useful for archiving files on a per-volume
basis
See Note Page
Quick Review
SUMMARY [1]
The computer system’s interface to the outside world is its I/O architecture. This
architecture is designed to provide a systematic means of controlling interaction
with the outside world and to provide the operating system with the information it
needs to manage I/O activity effectively.
The I/O function is generally broken up into a number of layers, with lower layers
dealing with details that are closer to the physical functions to be performed and
higher layers dealing with I/O in a logical and generic fashion. The result is that
changes in hardware parameters need not affect most of the I/O software.
A key aspect of I/O is the use of buffers that are controlled by I/O utilities rather than
by application processes. Buffering smoothes out the differences between the
internal speeds of the computer system and the speeds of I/O devices. The use of
buffers also decouples the actual I/O transfer from the address space of the
application process. This allows the operating system more flexibility in performing
its memory-management function.
SUMMARY [2]
The aspect of I/O that has the greatest impact on overall system performance is
disk I/O. Accordingly, there has been greater research and design effort in this
area than in any other kind of I/O. Two of the most widely used approaches to
improve disk I/O performance are disk scheduling and the disk cache.
At any time, there may be a queue of requests for I/O on the same disk. It is the
object of disk scheduling to satisfy these requests in a way that minimizes the
mechanical seek time of the disk and hence improves performance. The physical
layout of pending requests plus considerations of locality come into play.
A disk cache is a buffer, usually kept in main memory, that functions as a cache of
disk blocks between disk memory and the rest of main memory. Because of the
principle of locality, the use of a disk cache should substantially reduce the number
of block I/O transfers between main memory and disk.
Review Questions [Homework 01]
1.List and briefly define three techniques for performing I/O.
2.What is the difference between logical I/O and device I/O?
3.What is the difference between block-oriented devices and
stream-oriented devices? Give a few examples of each.
4.Why would you expect improved performance using a
double buffer rather than a single buffer for I/O?
Review Questions [Homework 02]
5.What delay elements are involved in a disk read or write?
6.Briefly define the disk scheduling policies illustrated in
Figure 11.7.
7.Briefly define the seven RAID levels.
8.What is the typical disk sector size?
Review Questions [1] (self-study)
•List three categories of I/O devices with one example for each [s. 11]
•I/O devices differ in a number of areas. Comment by giving six differences [s. 15]
•Data may be transferred as what? [s 19]
•The nature of errors differs widely from one device to another. Comment by giving three
different aspects. [s. 21]
•List three techniques for performing I/O [s. 23]
•List three of IO functions and explain briefly one of them. [s. 24/25/26]
•Define DMA and draw its typical block diagram [s. 27]
•DMA can be configured in several ways. Comment [s. 28/29/30]
•List three different DMA configurations and contrast two of them [s. 28/29/30]
•List three hierarchical designs of I/O organization and draw a pictorial diagram that contrast
two of them. [s. 35/36/37/38]
•Contrast block-oriented buffering and stream-oriented buffering with examples for each. [s.
41/42]
•Contrast by drawing no-buffer and single-buffer. [s. 43/44]
•Contrast block-oriented single buffering and stream-oriented single buffering. [s. 45/46]
•Contrast by drawing single buffer and double buffer. [s. 44/47]
•Contrast by drawing double buffer and circular buffer. [s. 47/48]
•Discuss buffer limitations [s. 49]
Review Questions [2] (self-study)
•The actual details of disk I/O operation depend on many things. Draw a pictorial diagram
that illustrates the general timing diagram of disk I/O transfer. [s. 51]
•Disk performance is measured by access time. Comment to show the different
components that are considered in measuring the access time. [s. 53]
•List four disk scheduling algorithms that are based on selection according to requester and
conduct comparison between two of them. [s. 64]
• List five disk scheduling algorithms that are based on selection according to requested item
and conduct comparison between two of them. [s. 64]
•Contrast RAID 0 and RAID 1 [s. 68/69]
•Conduct comparison between RAID 0, RAID 1, and RAID 2 [s. 68/69/70]
•Contrast RAID 2 and RAID 3 [s. 70/71]
•Contrast RAID 4 and RAID 3 [s. 71/72]
•Contrast RAID 5 and RAID 4 [s. 72/73]
•Contrast RAID 6 and RAID 5 [s. 73/74]
•Define disk cache. State two ways that are used to populate the cache. [s. 76]
•Contrast LRU and LFU [s. 77/78]
Review Statements [1]
•Human readable devices: Devices used to communicate with the user such as Printers and
terminals.
•Machine readable devices: Used to communicate with electronic equipment such as disk
drives, USB keys, sensors, controllers, and actuators.
•Communication devices: Used to communicate with remote devices such as digital line
drivers, and modems
•Controller or I/O module is added means that the processor uses programmed I/O without
interrupts, and also the processor does not need to handle details of external devices
•Controller or I/O module with interrupts means that efficiency improves as processor does
not spend time waiting for an I/O operation to be performed
•Direct Memory Access means that blocks of data are moved into memory without involving
the processor, and also Processor involved at beginning and end only
•I/O module is a separate processor means that CPU directs the I/O processor to execute an
I/O program in main memory.
•I/O processor means that I/O module has its own local memory, and also it is commonly
used to control communications with interactive terminals
•Processor delegates I/O operation to the DMA module
•DMA module transfers data directly to or from memory
•When complete DMA module sends an interrupt signal to the processor
•Most I/O devices are extremely slow compared to main memory
•Use of multiprogramming allows for some processes to be waiting on I/O while another
process executes
Review Statements [2]
•I/O cannot keep up with processor speed
•Swapping used to bring in ready processes but this is an I/O operation itself
•For simplicity and freedom from error it is desirable to handle all I/O devices in a uniform
manner
•Hide most of the details of device I/O in lower-level routines
•Difficult to completely generalize, but can use a hierarchical modular design of I/O functions
•A hierarchical philosophy leads to organizing an OS into layers
•Each layer relies on the next lower layer to perform more primitive functions
•It provides services to the next higher layer. Changes in one layer should not require
changes in other layers
•Logical I/O deals with the device as a logical resource
•Device I/O converts requested operations into sequence of I/O instructions
• Scheduling and Control performs actual queuing and control operations
•A communications architecture consists of a number of layers; an example is TCP/IP
•Directory management is concerned with user operations affecting files
•File System represents logical structure and operations
•Physical organisation converts logical names to physical addresses
•Processes must wait for I/O to complete before proceeding; to avoid deadlock, certain
pages must remain in main memory during I/O
•It may be more efficient to perform input transfers in advance of requests being made and to
perform output transfers some time after the request is made.
Review Statements [3]
•In block-oriented buffering, information is stored in fixed-size blocks
•In block-oriented buffering, transfers are made a block at a time
•Block-oriented buffering is used for disks and USB keys
•Stream-oriented buffering transfers information as a stream of bytes
•Stream-oriented buffering is used for terminals, printers, communication ports, mouse
and other pointing devices, and most other devices that are not secondary storage
•No buffer: Without a buffer, the OS directly accesses the device as and when it needs to
•Single Buffer: Operating system assigns a buffer in main memory for an I/O request
•Block Oriented Single Buffer: input transfers made to buffer, block moved to user space
when needed, the next block is moved into the buffer, read ahead or anticipated Input, and
often a reasonable assumption as data is usually accessed sequentially
•Stream-oriented Single Buffer: line-at-a-time or byte-at-a-time; terminals often deal with one
line at a time with carriage return signaling the end of the line, and byte-at-a-time suits
devices where a single keystroke may be significant, as do sensors and controllers
•Double Buffer: Use two system buffers instead of one, a process can transfer data to or
from one buffer while the operating system empties or fills the other buffer
•Circular Buffer: More than two buffers are used, each individual buffer is one unit in a
circular buffer, and are used when I/O operation must keep up with process
•Buffer Limitations: Buffering smoothes out peaks in I/O demand, but with enough demand
eventually all buffers become full and their advantage is lost; however, when there is a
variety of I/O and process activities to service, buffering can increase the efficiency of the OS
and the performance of individual processes.
Review Statements [4]
•Access Time is the sum of Seek time and Rotational delay or rotational latency
•Seek time: The time it takes to position the head at the desired track
•Rotational delay or rotational latency: The time it takes for the beginning of the sector to
reach the head
•Transfer Time is the time taken to transfer the data.
•First-in, first-out (FIFO): processes requests sequentially, fair to all processes, and approaches
random scheduling in performance if there are many processes
•Priority: goal is not to optimize disk use but to meet other objectives, short batch jobs may
have higher priority, provide good interactive response time, longer jobs may have to wait an
excessively long time, and a poor policy for database systems
•Last-in, first-out: good for transaction processing systems, the device is given to the most
recent user so there should be little arm movement, and possibility of starvation since a job
may never regain the head of the line
•Shortest Service Time First: select the disk I/O request that requires the least movement of
the disk arm from its current position, and always choose the minimum seek time
•SCAN: Arm moves in one direction only, satisfying all outstanding requests until it reaches
the last track in that direction then the direction is reversed
•C-SCAN: restricts scanning to one direction only, and when the last track has been visited in
one direction, the arm is returned to the opposite end of the disk and the scan begins again
•N-step-SCAN: segments the disk request queue into subqueues of length N, subqueues are
processed one at a time, using SCAN, and new requests are added to another queue while a
subqueue is being processed
Review Statements [5]
•FSCAN: Two subqueues, when a scan begins, all of the requests are in one of the queues,
with the other empty, all new requests are put into the other queue, and service of new
requests is deferred until all of the old requests have been processed.
•Disk I/O performance may be increased by spreading the operation over multiple read/write
heads or multiple disks.
•Disk failures can be recovered if parity information is stored
•RAID stands for Redundant Array of Independent Disks: a set of physical disk drives
viewed by the operating system as a single logical drive. Data are distributed across the
physical drives of an array. Redundant disk capacity is used to store parity information which
provides recoverability from disk failure
•RAID 0 – Striped: not a true RAID – no redundancy, disk failure is catastrophic, and very
fast due to parallel read/write
•RAID 1 – Mirrored: redundancy through duplication instead of parity, read requests can be
made in parallel, and simple recovery from disk failure
•RAID 2 (Using Hamming code): synchronised disk rotation, data striping is used (extremely
small strips), and a Hamming code is used to correct single-bit errors and detect double-bit errors
•RAID 3 (bit-interleaved parity): similar to RAID-2, but a simple parity bit is computed for each
set of corresponding bits and all parity is stored on a single drive
Review Statements [6]
•RAID 4 (Block-level parity): a bit-by-bit parity strip is calculated across corresponding strips
on each data disk, and the parity bits are stored in the corresponding strip on the parity disk.
•RAID 5 (Block-level Distributed parity): similar to RAID-4, but distributes the parity strips
across all drives
•RAID 6 (Dual Redundancy): Two different parity calculations are carried out (stored in
separate blocks on different disks.), and can recover from two disks failing
•Disk cache is a buffer in main memory for disk sectors that contains a copy of some of the
sectors on the disk, when an I/O request is made for a particular sector, a check is made to
determine if the sector is in the disk cache, and a number of ways exist to populate the
cache
•Least Recently Used (LRU): the block that has been in the cache the longest with no
reference to it is replaced, a stack of pointers reference the cache, most recently referenced
block is on the top of the stack, and when a block is referenced or brought into the cache, it
is placed on the top of the stack
•Least Frequently Used (LFU): the block that has experienced the fewest references is
replaced, a counter is associated with each block, counter is incremented each time block
accessed, and when replacement is required, the block with the smallest count is selected.
Problems [1]
1.Consider a program that accesses a single I/O device and compare un-
buffered I/O to the use of a buffer. Show that the use of the buffer can
reduce the running time by at most a factor of two.
2.Generalize the result of Problem 11.1 to the case in which a program
refers to n devices.
3.Calculate how much disk space (in sectors, tracks, and surfaces) will
be required to store 300,000 120-byte logical records if the disk is fixed-
sector with 512 bytes/sector, with 96 sectors/track, 110 tracks per
surface, and 8 usable surfaces. Ignore any file header record(s) and
track indexes, and assume that records cannot span two sectors.
4.Consider the disk system described in Problem 11.7, and assume that
the disk rotates at 360 rpm. A processor reads one sector from the disk
using interrupt-driven I/O, with one interrupt per byte. If it takes 2.5 μs to
process each interrupt, what percentage of the time will the processor
spend handling I/O (disregard seek time)?
Problems [2]
5.Repeat the preceding problem using DMA, and assume one interrupt
per sector.
6.It should be clear that disk striping can improve the data transfer rate
when the strip size is small compared to the I/O request size. It should
also be clear that RAID 0 provides improved performance relative to a
single large disk, because multiple I/O requests can be handled in
parallel. However, in this latter case, is disk striping necessary? That is,
does disk striping improve I/O request rate performance compared to a
comparable disk array without striping?
7.Consider a 4-drive, 200GB-per-drive RAID array. What is the available
data storage capacity for each of the RAID levels, 0, 1, 3, 4, 5, and 6?

Weitere ähnliche Inhalte

Was ist angesagt?

Advanced computer architecture
Advanced computer architectureAdvanced computer architecture
Advanced computer architectureAjithaSomasundaram
 
Comp 107 unit 4(operating systems)
Comp 107 unit 4(operating systems)Comp 107 unit 4(operating systems)
Comp 107 unit 4(operating systems)Mijanur Rahman
 
Computer fundamentals day02
Computer fundamentals  day02Computer fundamentals  day02
Computer fundamentals day02SEO SKills
 
Fundamental of computer
Fundamental of computerFundamental of computer
Fundamental of computerMijanur Rahman
 
8 operating system concept
8 operating system concept8 operating system concept
8 operating system conceptBaliThorat1
 
1 fundamentals of computer
1 fundamentals of computer1 fundamentals of computer
1 fundamentals of computerBaliThorat1
 
Operating system of computer
Operating system of computerOperating system of computer
Operating system of computerHamzaAbbas43
 
Bca i-fundamental of computer-u-2- application and system software
Bca  i-fundamental of  computer-u-2- application and system softwareBca  i-fundamental of  computer-u-2- application and system software
Bca i-fundamental of computer-u-2- application and system softwareRai University
 
Co module1a introdctnaddressingmodes
Co module1a introdctnaddressingmodesCo module1a introdctnaddressingmodes
Co module1a introdctnaddressingmodesManu Jose
 
Software languages and devices
Software languages and devicesSoftware languages and devices
Software languages and devicesYogeshSorot
 
4 computer languages
4 computer languages4 computer languages
4 computer languagesBaliThorat1
 
1 fundamentals of computer system
1 fundamentals of computer system1 fundamentals of computer system
1 fundamentals of computer systemBaliThorat1
 
08. networking-part-2
08. networking-part-208. networking-part-2
08. networking-part-2Muhammad Ahad
 
Computer generation and classification
Computer generation and classificationComputer generation and classification
Computer generation and classificationBaliThorat1
 

Was ist angesagt? (19)

Advanced computer architecture
Advanced computer architectureAdvanced computer architecture
Advanced computer architecture
 
Unit iii
Unit iiiUnit iii
Unit iii
 
Comp 107 unit 4(operating systems)
Comp 107 unit 4(operating systems)Comp 107 unit 4(operating systems)
Comp 107 unit 4(operating systems)
 
Computer fundamentals day02
Computer fundamentals  day02Computer fundamentals  day02
Computer fundamentals day02
 
09. storage-part-1
09. storage-part-109. storage-part-1
09. storage-part-1
 
7 processor
7 processor7 processor
7 processor
 
Fundamental of computer
Fundamental of computerFundamental of computer
Fundamental of computer
 
8 operating system concept
8 operating system concept8 operating system concept
8 operating system concept
 
1 fundamentals of computer
1 fundamentals of computer1 fundamentals of computer
1 fundamentals of computer
 
Operating system of computer
Operating system of computerOperating system of computer
Operating system of computer
 
Bca i-fundamental of computer-u-2- application and system software
Bca  i-fundamental of  computer-u-2- application and system softwareBca  i-fundamental of  computer-u-2- application and system software
Bca i-fundamental of computer-u-2- application and system software
 
Co module1a introdctnaddressingmodes
Co module1a introdctnaddressingmodesCo module1a introdctnaddressingmodes
Co module1a introdctnaddressingmodes
 
Software languages and devices
Software languages and devicesSoftware languages and devices
Software languages and devices
 
4 computer languages
4 computer languages4 computer languages
4 computer languages
 
1 fundamentals of computer system
1 fundamentals of computer system1 fundamentals of computer system
1 fundamentals of computer system
 
computer Architecture
computer Architecturecomputer Architecture
computer Architecture
 
08. networking-part-2
08. networking-part-208. networking-part-2
08. networking-part-2
 
Computer generation and classification
Computer generation and classificationComputer generation and classification
Computer generation and classification
 
9781111306366 ppt ch9
9781111306366 ppt ch99781111306366 ppt ch9
9781111306366 ppt ch9
 

Ähnlich wie Weeks [01 02] 20100921

Ähnlich wie Weeks [01 02] 20100921 (20)

11-IOManagement.ppt
11-IOManagement.ppt11-IOManagement.ppt
11-IOManagement.ppt
 
11-IOManagement.ppt
11-IOManagement.ppt11-IOManagement.ppt
11-IOManagement.ppt
 
I/o management and disk scheduling .pptx
I/o management and disk scheduling .pptxI/o management and disk scheduling .pptx
I/o management and disk scheduling .pptx
 
Cs intro-ca
Cs intro-caCs intro-ca
Cs intro-ca
 
Ch1 introduction
Ch1   introductionCh1   introduction
Ch1 introduction
 
OS-01.ppt
OS-01.pptOS-01.ppt
OS-01.ppt
 
cs-intro-os.ppt
cs-intro-os.pptcs-intro-os.ppt
cs-intro-os.ppt
 
08 operating system support
08 operating system support08 operating system support
08 operating system support
 
08 operating system support
08 operating system support08 operating system support
08 operating system support
 
Operating System BCA 301
Operating System BCA 301Operating System BCA 301
Operating System BCA 301
 
Os1
Os1Os1
Os1
 
08 operating system support
08 operating system support08 operating system support
08 operating system support
 
Ch 7 io_management & disk scheduling
Ch 7 io_management & disk schedulingCh 7 io_management & disk scheduling
Ch 7 io_management & disk scheduling
 
I/O Organization
I/O OrganizationI/O Organization
I/O Organization
 
Chapter 7
Chapter 7Chapter 7
Chapter 7
 
MK Sistem Operasi.pdf
MK Sistem Operasi.pdfMK Sistem Operasi.pdf
MK Sistem Operasi.pdf
 
chapter1.ppt
chapter1.pptchapter1.ppt
chapter1.ppt
 
Ch3 processes
Ch3   processesCh3   processes
Ch3 processes
 
Computer Architecture & Organization.ppt
Computer Architecture & Organization.pptComputer Architecture & Organization.ppt
Computer Architecture & Organization.ppt
 
Os concepts
Os conceptsOs concepts
Os concepts
 

Kürzlich hochgeladen

Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUK Journal
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slidespraypatel2
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxMalak Abu Hammad
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsJoaquim Jorge
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfsudhanshuwaghmare1
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)wesley chun
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Igalia
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024The Digital Insurer
 
Advantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessAdvantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessPixlogix Infotech
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEarley Information Science
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreternaman860154
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CVKhem
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptxFactors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptxKatpro Technologies
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 

Kürzlich hochgeladen (20)

Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slides
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
Advantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessAdvantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your Business
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CV
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptxFactors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024
 

Weeks [01 02] 20100921

  • 1. IS 311 File Organization Dr. Yasser ALHABIBI Misr University for Science and Technology Faculty of Information Technology Fall Semester 2010/2011
  • 2. List of References Course notes Available (handed to students part by part). Essential books (text books) • Operating Systems: Internals and Design Principles, 6/E by williams stalling Publisher Prentice Hall Co. • Fundamentals of Database Systems, By Elmasri R and Navathe S. B., Publisher Benjamin Cummings • Database Management, By F R McFadden & J A Hoofer, Publisher Benjamin Cummings • An Introduction to Database Systems, By C. J. Date, Publisher Addison-Wesley Recommended books • Other books that are related to Operating Systems, and Database Management Systems
  • 3. Course Outline • Basic file concepts: Storage characteristics, Magnetic tape, Hard magnetic disks, Blocking and extents • Physical data transfer considerations: Buffered input/output, Memory, Mapped files, File systems • Sequential chronological files: Description and Primitive operations, File functions, Algorithms • Ordered files: Description and Primitive operations, Algorithms • Direct Access files: Description and Primitive operations, Hashing functions, Algorithms for handling synonyms • Indexed sequential files: Description and Primitive operations, Algorithms • Indexes: concepts, Linear indexes, Tree indexes, Description of heterogeneous and homogeneous Trees, Tree indexing algorithms.
  • 5. Course Format • Lectures will be largely interactive. You will be called on randomly to answer questions. • 20-minutes fairly simple quiz on classes (Weeks: 3, 5, 7, 11,13, and 15) just before break. • Course at Windows Live Group will host lecture slides, quiz answers, homework, general announcements.
  • 6. Grading • Breakdown – 40% final – 20% Mid Term – 20% weekly in-class quizzes – 20% bi-weekly assignments (Lab Activity) • May share ideas for weekly assignment but must be write up individually. • Will drop lowest quiz score.
  • 7. Homework Submission • Homework due every other Saturday before midnight – explicit submission instructions later. • Please do not ever ask for an extension. • Late assignments incur 10% per day penalty up to 3 days. After 3 days, no credit. • Solutions will be posted after all homework are submitted (or 3 days, whichever comes first) • Under special circumstances, you may be excused from an assignment or quiz. Must talk to me ahead of time.
  • 8. Course Strategy • Course will move quickly • Each subject builds on previous ones – More important to be consistent than occasionally heroic • Start by hacking, back up to cover more fundamental topics
  • 9. Weeks 01 & 02 Reference: Operating Systems: Internals and Design Principles, 6/E by williams stalling,, Publisher Prentice Hall Co. [Chapter 11]
  • 10. Roadmap – I/O Devices – Organization of the I/O Function – Operating System Design Issues – I/O Buffering – Disk Scheduling – Raid – Disk Cache – UNIX SVR4 I/O – LINUX I/O – Windows I/O
  • 11. Categories of I/O Devices • Difficult area of OS design – Difficult to develop a consistent solution due to a wide variety of devices and applications • Three Categories: – Human readable – Machine readable – Communications
  • 12. Human readable • Devices used to communicate with the user – Printers and – terminals • Video display • Keyboard • Mouse etc
  • 13. Machine readable • Used to communicate with electronic equipment – Disk drives – USB keys – Sensors – Controllers – Actuators
  • 14. Communication • Used to communicate with remote devices – Digital line drivers – Modems
  • 15. Differences in I/O Devices • Devices differ in a number of areas – Data Rate – Application – Complexity of Control – Unit of Transfer – Data Representation – Error Conditions
  • 16. Data Rate • May be massive difference between the data transfer rates of devices
  • 17. Application – Disk used to store files requires file management software – Disk used to store virtual memory pages needs special hardware and software to support it – Terminal used by system administrator may have a higher priority See Note Page
  • 18. Complexity of control • A printer requires a relatively simple control interface. • A disk is much more complex. • This complexity is filtered to some extent by the complexity of the I/O module that controls the device.
  • 19. Unit of transfer • Data may be transferred as – a stream of bytes or characters (e.g., terminal I/O) – or in larger blocks (e.g., disk I/O).
  • 20. Data representation • Different data encoding schemes are used by different devices, – including differences in character code and parity conventions.
  • 21. Error Conditions • The nature of errors differs widely from one device to another. • Aspects include: – the way in which they are reported, – their consequences, – the available range of responses
  • 22. Roadmap – I/O Devices – Organization of the I/O Function – Operating System Design Issues – I/O Buffering – Disk Scheduling – Raid – Disk Cache – UNIX SVR4 I/O – LINUX I/O – Windows I/O
  • 23. Organization of the I/O Function: Techniques for performing I/O • Programmed I/O • Interrupt-driven I/O • Direct memory access (DMA)
  • 24. Evolution of the I/O Function 1. Processor directly controls a peripheral device 2. Controller or I/O module is added – Processor uses programmed I/O without interrupts – Processor does not need to handle details of external devices
  • 25. Evolution of the I/O Function cont… 3. Controller or I/O module with interrupts – Efficiency improves as the processor does not spend time waiting for an I/O operation to be performed 4. Direct Memory Access – Blocks of data are moved into memory without involving the processor – Processor involved at beginning and end only
  • 26. Evolution of the I/O Function cont… 5. I/O module is a separate processor – CPU directs the I/O processor to execute an I/O program in main memory. 6. I/O processor – I/O module has its own local memory – Commonly used to control communications with interactive terminals
  • 27. Direct Memory Access • Processor delegates I/O operation to the DMA module • DMA module transfers data directly to or from memory • When complete, the DMA module sends an interrupt signal to the processor
  • 28. DMA Configurations: Single Bus • DMA can be configured in several ways • Shown here, all modules share the same system bus
  • 29. DMA Configurations: Integrated DMA & I/O • Direct path between DMA and I/O modules • This substantially cuts the required bus cycles
  • 30. DMA Configurations: I/O Bus • Reduces the number of I/O interfaces in the DMA module
  • 31. Roadmap – I/O Devices – Organization of the I/O Function – Operating System Design Issues – I/O Buffering – Disk Scheduling – Raid – Disk Cache – UNIX SVR4 I/O – LINUX I/O – Windows I/O
  • 32. Goals: Efficiency • Most I/O devices are extremely slow compared to main memory • Use of multiprogramming allows some processes to be waiting on I/O while another process executes • I/O cannot keep up with processor speed – Swapping is used to bring in ready processes – But this is an I/O operation itself
  • 33. Generality • For simplicity and freedom from error it is desirable to handle all I/O devices in a uniform manner • Hide most of the details of device I/O in lower-level routines • Difficult to completely generalize, but a hierarchical, modular design of I/O functions can be used
  • 34. Hierarchical design • A hierarchical philosophy leads to organizing an OS into layers • Each layer relies on the next lower layer to perform more primitive functions • It provides services to the next higher layer • Changes in one layer should not require changes in other layers
  • 35. Figure 11.4 A Model of I/O Organization
  • 36. Local peripheral device • Logical I/O: – Deals with the device as a logical resource • Device I/O: – Converts requested operations into sequences of I/O instructions • Scheduling and Control – Performs actual queuing and control operations
  • 37. Communications Port • Similar to the previous case, but the logical I/O module is replaced by a communications architecture, – which consists of a number of layers. – An example is TCP/IP.
  • 38. File System • Directory management – Concerned with user operations affecting files • File System – Logical structure and operations • Physical organisation – Converts logical names to physical addresses
  • 39. Roadmap – I/O Devices – Organization of the I/O Function – Operating System Design Issues – I/O Buffering – Disk Scheduling – Raid – Disk Cache – UNIX SVR4 I/O – LINUX I/O – Windows I/O
  • 40. I/O Buffering • Processes must wait for I/O to complete before proceeding – To avoid deadlock, certain pages must remain in main memory during I/O • It may be more efficient to perform input transfers in advance of requests being made and to perform output transfers some time after the request is made.
  • 41. Block-oriented Buffering • Information is stored in fixed-size blocks • Transfers are made a block at a time – Data can be referenced by block number • Used for disks and USB keys
  • 42. Stream-Oriented Buffering • Transfers information as a stream of bytes • Used for terminals, printers, communication ports, mouse and other pointing devices, and most other devices that are not secondary storage
  • 43. No Buffer • Without a buffer, the OS directly accesses the device as and when it needs to
  • 44. Single Buffer • Operating system assigns a buffer in main memory for an I/O request
  • 45. Block Oriented Single Buffer • Input transfers are made to the buffer • Block moved to user space when needed • The next block is moved into the buffer – Read ahead or Anticipated Input • Often a reasonable assumption, as data is usually accessed sequentially
  • 46. Stream-oriented Single Buffer • Line-at-a-time or Byte-at-a-time • Terminals often deal with one line at a time, with carriage return signaling the end of the line • Byte-at-a-time suits devices where a single keystroke may be significant – Also sensors and controllers
  • 47. Double Buffer • Use two system buffers instead of one • A process can transfer data to or from one buffer while the operating system empties or fills the other buffer
  • 48. Circular Buffer • More than two buffers are used • Each individual buffer is one unit in a circular buffer • Used when the I/O operation must keep up with the process
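The following is a minimal, single-threaded Python sketch of a circular buffer of n unit buffers; the class name and the error-raising behaviour are illustrative assumptions (a real OS would block the producer or consumer rather than raise). A double buffer is simply the n = 2 case:

  class CircularBuffer:
      def __init__(self, n):
          self.slots = [None] * n   # each slot plays the role of one buffer
          self.head = 0             # next slot the consumer will read
          self.tail = 0             # next slot the producer will fill
          self.count = 0

      def put(self, block):         # producer side (e.g., the device)
          if self.count == len(self.slots):
              raise BufferError("all buffers full; producer must wait")
          self.slots[self.tail] = block
          self.tail = (self.tail + 1) % len(self.slots)
          self.count += 1

      def get(self):                # consumer side (e.g., the process)
          if self.count == 0:
              raise BufferError("all buffers empty; consumer must wait")
          block = self.slots[self.head]
          self.head = (self.head + 1) % len(self.slots)
          self.count -= 1
          return block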
  • 49. Buffer Limitations • Buffering smoothes out peaks in I/O demand. – But with enough demand, eventually all buffers become full and their advantage is lost • However, when there is a variety of I/O and process activities to service, buffering can increase the efficiency of the OS and the performance of individual processes.
  • 50. Roadmap – I/O Devices – Organization of the I/O Function – Operating System Design Issues – I/O Buffering – Disk Scheduling – Raid – Disk Cache – UNIX SVR4 I/O – LINUX I/O – Windows I/O
  • 51. Disk Performance Parameters • The actual details of disk I/O operation depend on many things – A general timing diagram of disk I/O transfer is shown here.
  • 52. Positioning the Read/Write Heads • When the disk drive is operating, the disk is rotating at constant speed. • Track selection involves moving the head in a movable-head system or electronically selecting one head on a fixed-head system.
  • 53. Disk Performance Parameters • Access Time is the sum of: – Seek time: The time it takes to position the head at the desired track – Rotational delay or rotational latency: The time it takes for the beginning of the sector to reach the head • Transfer Time is the time taken to transfer the data.
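A back-of-the-envelope calculation makes the relationship concrete; the 7,200 rpm spindle speed and 4 ms average seek time below are assumed, illustrative figures, not values from the slides:

  rpm = 7200
  avg_seek_ms = 4.0
  rotation_ms = 60_000 / rpm                 # one revolution: about 8.33 ms
  avg_rotational_delay_ms = rotation_ms / 2  # on average, half a revolution: about 4.17 ms
  avg_access_ms = avg_seek_ms + avg_rotational_delay_ms
  print(f"average access time ~ {avg_access_ms:.2f} ms, plus transfer time")

For this assumed disk, the average access time works out to roughly 8.17 ms before any data are transferred.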
  • 54. Disk Scheduling Policies • To compare the various schemes, consider a disk whose head is initially located at track 100. – Assume a disk with 200 tracks and that the disk request queue has random requests in it. • The requested tracks, in the order received by the disk scheduler, are – 55, 58, 39, 18, 90, 160, 150, 38, 184 (a small simulation replaying this queue follows the FSCAN slide below).
  • 55. First-in, first-out (FIFO) • Processes requests sequentially • Fair to all processes • Approaches random scheduling in performance if there are many processes
  • 56. Priority • Goal is not to optimize disk use but to meet other objectives • Short batch jobs may have higher priority • Provide good interactive response time • Longer jobs may have to wait an excessively long time • A poor policy for database systems
  • 57. Last-in, first-out • Good for transaction processing systems – The device is given to the most recent user, so there should be little arm movement • Possibility of starvation, since a job may never regain the head of the line
  • 58. Shortest Service Time First • Select the disk I/O request that requires the least movement of the disk arm from its current position • Always choose the minimum seek time
  • 59. SCAN • Arm moves in one direction only, satisfying all outstanding requests until it reaches the last track in that direction; then the direction is reversed
  • 60. C-SCAN (circular SCAN) • Restricts scanning to one direction only • When the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again
  • 61. N-step-SCAN • Segments the disk request queue into subqueues of length N • Subqueues are processed one at a time, using SCAN • While a queue is being processed, new requests are added to some other queue
  • 62. FSCAN • Two subqueues • When a scan begins, all of the requests are in one of the queues, with the other empty. • All new requests are put into the other queue. • Service of new requests is deferred until all of the old requests have been processed.
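To make the comparison concrete, here is a small Python sketch (a simplified model that counts only tracks crossed, ignoring rotational delay) that replays the request queue from the Disk Scheduling Policies slide under FIFO, SSTF, and SCAN. The SCAN variant shown sweeps toward higher-numbered tracks first and turns at the last request rather than the disk edge (the LOOK refinement mentioned in the notes):

  REQUESTS = [55, 58, 39, 18, 90, 160, 150, 38, 184]
  START = 100

  def fifo(reqs, head):
      return list(reqs)                 # serve in arrival order

  def sstf(reqs, head):
      pending, order = list(reqs), []
      while pending:                    # always pick the nearest pending track
          nxt = min(pending, key=lambda t: abs(t - head))
          pending.remove(nxt)
          order.append(nxt)
          head = nxt
      return order

  def scan(reqs, head):
      up = sorted(t for t in reqs if t >= head)
      down = sorted((t for t in reqs if t < head), reverse=True)
      return up + down                  # sweep up, then reverse

  def total_movement(order, head):
      moves = 0
      for t in order:
          moves += abs(t - head)
          head = t
      return moves

  for policy in (fifo, sstf, scan):
      order = policy(REQUESTS, START)
      print(policy.__name__, order, total_movement(order, START))

Running this gives total head movements of 498 tracks for FIFO, 248 for SSTF, and 250 for SCAN, i.e. average seek lengths of roughly 55.3, 27.5, and 27.8 tracks per request.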
  • 63. Performance Compared: Comparison of Disk Scheduling Algorithms
  • 65. Roadmap – I/O Devices – Organization of the I/O Function – Operating System Design Issues – I/O Buffering – Disk Scheduling – Raid – Disk Cache – UNIX SVR4 I/O – LINUX I/O – Windows I/O
  • 66. Multiple Disks • Disk I/O performance may be increased by spreading the operation over multiple read/write heads – Or multiple disks • Disk failures can be recovered from if parity information is stored
  • 67. RAID • Redundant Array of Independent Disks • Set of physical disk drives viewed by the operating system as a single logical drive • Data are distributed across the physical drives of an array • Redundant disk capacity is used to store parity information, which provides recoverability from disk failure
  • 68. RAID 0 – Striped • Not a true RAID – no redundancy • Disk failure is catastrophic • Very fast due to parallel read/write
  • 69. RAID 1 – Mirrored • Redundancy through duplication instead of parity. • Read requests can be made in parallel. • Simple recovery from disk failure
  • 70. RAID 2 (Using Hamming code) • Synchronized disk rotation • Data striping is used (extremely small strips) • Hamming code used to correct single-bit errors and detect double-bit errors
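As a minimal illustration of the error-correcting idea (a Hamming(7,4) code over single bits, with illustrative function names; real RAID 2 arrays apply the code across drives), the sketch below encodes four data bits, flips one code bit, and uses the syndrome to locate and correct it:

  def hamming_encode(d):                # d = [d1, d2, d3, d4]
      p1 = d[0] ^ d[1] ^ d[3]
      p2 = d[0] ^ d[2] ^ d[3]
      p4 = d[1] ^ d[2] ^ d[3]
      return [p1, p2, d[0], p4, d[1], d[2], d[3]]   # code positions 1..7

  def syndrome(code):                   # 0 if clean, else the 1-based error position
      c = [None] + code                 # shift to 1-based indexing
      s1 = c[1] ^ c[3] ^ c[5] ^ c[7]
      s2 = c[2] ^ c[3] ^ c[6] ^ c[7]
      s4 = c[4] ^ c[5] ^ c[6] ^ c[7]
      return s1 + 2 * s2 + 4 * s4

  code = hamming_encode([1, 0, 1, 1])
  code[4] ^= 1                          # corrupt the bit at position 5
  pos = syndrome(code)                  # the syndrome reads 5
  code[pos - 1] ^= 1                    # flip it back
  assert syndrome(code) == 0            # the single-bit error is corrected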
  • 71. RAID 3 bit-interleaved parity • Similar to RAID 2, but uses a single parity bit instead of an error-correcting code, with all parity stored on a single drive
  • 72. RAID 4 Block-level parity • A bit-by-bit parity strip is calculated across corresponding strips on each data disk • The parity bits are stored in the corresponding strip on the parity disk.
  • 73. RAID 5 Block-level Distributed parity • Similar to RAID 4, but distributes the parity strips across all drives
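A tiny Python sketch shows the XOR parity idea behind RAID 4 and RAID 5 (the strip contents are invented for illustration): the parity strip is the bytewise XOR of the data strips, so any single lost strip can be rebuilt from the surviving strips plus parity:

  from functools import reduce

  def parity(strips):
      # Bytewise XOR across equal-length strips.
      return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*strips))

  data = [b"AAAA", b"BBBB", b"CCCC"]    # three data strips (illustrative values)
  p = parity(data)                      # the parity strip

  # Simulate losing strip 1 and rebuilding it from the survivors plus parity.
  rebuilt = parity([data[0], data[2], p])
  assert rebuilt == data[1]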
  • 74. RAID 6 Dual Redundancy • Two different parity calculations are carried out – stored in separate blocks on different disks. • Can recover from two disks failing
  • 75. Roadmap – I/O Devices – Organization of the I/O Function – Operating System Design Issues – I/O Buffering – Disk Scheduling – Raid – Disk Cache – UNIX SVR4 I/O – LINUX I/O – Windows I/O
  • 76. Disk Cache • Buffer in main memory for disk sectors • Contains a copy of some of the sectors on the disk • When an I/O request is made for a particular sector, – a check is made to determine if the sector is in the disk cache. • A number of ways exist to populate the cache
  • 77. Least Recently Used (LRU) • The block that has been in the cache the longest with no reference to it is replaced • A stack of pointers references the cache – Most recently referenced block is on the top of the stack – When a block is referenced or brought into the cache, it is placed on the top of the stack
  • 78. Least Frequently Used (LFU) • The block that has experienced the fewest references is replaced • A counter is associated with each block • Counter is incremented each time the block is accessed • When replacement is required, the block with the smallest count is selected.
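Minimal Python sketches of the two replacement policies, tracking block numbers only (class and method names are illustrative assumptions, not an API from the text):

  from collections import Counter, OrderedDict

  class LRUCache:
      def __init__(self, capacity):
          self.capacity = capacity
          self.stack = OrderedDict()          # most recently used at the end

      def reference(self, block):
          if block in self.stack:
              self.stack.move_to_end(block)   # hit: move to the top of the stack
              return
          if len(self.stack) == self.capacity:
              self.stack.popitem(last=False)  # evict the least recently used
          self.stack[block] = True            # miss: bring the block in on top

  class LFUCache:
      def __init__(self, capacity):
          self.capacity = capacity
          self.counts = Counter()             # reference count per cached block

      def reference(self, block):
          if block not in self.counts and len(self.counts) == self.capacity:
              victim = min(self.counts, key=self.counts.get)  # fewest references
              del self.counts[victim]         # evict it
          self.counts[block] += 1             # count this reference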
  • 81. Roadmap – I/O Devices – Organization of the I/O Function – Operating System Design Issues – I/O Buffering – Disk Scheduling – Raid – Disk Cache – UNIX SVR4 I/O – LINUX I/O – Windows I/O
  • 82. Devices are Files • Each I/O device is associated with a special file – Managed by the file system – Provides a clean, uniform interface to users and processes. • To access a device, read and write requests are made for the special file associated with the device.
  • 83. UNIX SVR4 I/O • Each individual device is associated with a special file • Two types of I/O – Buffered – Unbuffered
  • 84. Buffer Cache • Three lists are maintained – Free List – Device List – Driver I/O Queue
  • 85. Character Queue • Used by character-oriented devices – E.g. terminals and printers • Either written by the I/O device and read by the process, or vice versa – Producer/consumer model used
  • 86. Unbuffered I/O • Unbuffered I/O is simply DMA between device and process – Fastest method – Process is locked in main memory and can’t be swapped out – Device is tied to the process and unavailable for other processes
  • 87. I/O for Device Types
  • 88. Roadmap – I/O Devices – Organization of the I/O Function – Operating System Design Issues – I/O Buffering – Disk Scheduling – Raid – Disk Cache – UNIX SVR4 I/O – LINUX I/O – Windows I/O
  • 89. Linux/Unix Similarities • Linux and Unix (e.g. SVR4) are very similar in I/O terms – The Linux kernel associates a special file with each I/O device driver. – Block, character, and network devices are recognized.
  • 90. The Elevator Scheduler • Maintains a single queue for disk read and write requests • Keeps the list of requests sorted by block number • Drive moves in a single direction to satisfy each request
  • 91. Deadline scheduler – Uses three queues • Incoming requests are placed in a sorted (elevator) queue • Read requests go to the tail of a FIFO queue • Write requests go to the tail of a FIFO queue – Each request has an expiration time
  • 92. Anticipatory I/O scheduler • Elevator and deadline scheduling can be counterproductive if there are numerous synchronous read requests. • Delay a short period of time after satisfying a read request to see if a new nearby request can be made
  • 93. Linux Page Cache • In Linux 2.4 and later, there is a single unified page cache for all traffic between disk and main memory • Benefits: – Dirty pages can be collected and written out efficiently – Pages in the page cache are likely to be referenced again due to temporal locality
  • 94. Roadmap – I/O Devices – Organization of the I/O Function – Operating System Design Issues – I/O Buffering – Disk Scheduling – Raid – Disk Cache – UNIX SVR4 I/O – LINUX I/O – Windows I/O
  • 95. Windows I/O Manager • The I/O manager is responsible for all I/O for the operating system • It provides a uniform interface that all types of drivers can call.
  • 96. Windows I/O • The I/O manager works closely with: – Cache manager – handles all file caching – File system drivers - routes I/O requests for file system volumes to the appropriate software driver for that volume. – Network drivers - implemented as software drivers rather than part of the Windows Executive. – Hardware device drivers
  • 97. Asynchronous and Synchronous I/O • Windows offers two modes of I/O operation: – asynchronous and synchronous. • Asynchronous mode is used whenever possible to optimize application performance.
  • 98. Software RAID • Windows implements RAID functionality as part of the operating system and can be used with any set of multiple disks. • RAID 0, 1 and RAID 5 are supported. • In the case of RAID 1 (disk mirroring), the two disks containing the primary and mirrored partitions may be on the same disk controller or different disk controllers.
  • 99. Volume Shadow Copies • Shadow copies are implemented by a software driver that makes copies of data on the volume before it is overwritten. • Designed as an efficient way of making consistent snapshots of volumes so that they can be backed up. – Also useful for archiving files on a per-volume basis
  • 101. SUMMARY [1] The computer system’s interface to the outside world is its I/O architecture. This architecture is designed to provide a systematic means of controlling interaction with the outside world and to provide the operating system with the information it needs to manage I/O activity effectively. The I/O function is generally broken up into a number of layers, with lower layers dealing with details that are closer to the physical functions to be performed and higher layers dealing with I/O in a logical and generic fashion. The result is that changes in hardware parameters need not affect most of the I/O software. A key aspect of I/O is the use of buffers that are controlled by I/O utilities rather than by application processes. Buffering smoothes out the differences between the internal speeds of the computer system and the speeds of I/O devices. The use of buffers also decouples the actual I/O transfer from the address space of the application process. This allows the operating system more flexibility in performing its memory-management function.
  • 102. SUMMARY [2] The aspect of I/O that has the greatest impact on overall system performance is disk I/O. Accordingly, there has been greater research and design effort in this area than in any other kind of I/O. Two of the most widely used approaches to improve disk I/O performance are disk scheduling and the disk cache. At any time, there may be a queue of requests for I/O on the same disk. It is the object of disk scheduling to satisfy these requests in a way that minimizes the mechanical seek time of the disk and hence improves performance. The physical layout of pending requests plus considerations of locality come into play. A disk cache is a buffer, usually kept in main memory, that functions as a cache of disk blocks between disk memory and the rest of main memory. Because of the principle of locality, the use of a disk cache should substantially reduce the number of block I/O transfers between main memory and disk.
  • 103. Review Questions [Homework 01] 1.List and briefly define three techniques for performing I/O. 2.What is the difference between logical I/O and device I/O? 3.What is the difference between block-oriented devices and stream-oriented devices? Give a few examples of each. 4.Why would you expect improved performance using a double buffer rather than a single buffer for I/O?
  • 104. Review Questions [Homework 02] 5.What delay elements are involved in a disk read or write? 6.Briefly define the disk scheduling policies illustrated in Figure 11.7. 7.Briefly define the seven RAID levels. 8.What is the typical disk sector size?
  • 105. Review Questions [1] (self-study) •List three categories of I/O devices with one example for each [s. 11] •I/O devices differ in a number of areas. Comment by giving six differences [s. 15] •Data may be transferred as what? [s. 19] •The nature of errors differs widely from one device to another. Comment by giving three different aspects. [s. 21] •List three techniques for performing I/O [s. 23] •List three I/O functions and explain briefly one of them. [s. 24/25/26] •Define DMA and draw its typical block diagram [s. 27] •DMA can be configured in several ways. Comment [s. 28/29/30] •List three different DMA configurations and contrast two of them [s. 28/29/30] •List three hierarchical designs of I/O organization and draw a pictorial diagram that contrasts two of them. [s. 35/36/37/38] •Contrast block-oriented buffering and stream-oriented buffering with examples for each. [s. 41/42] •Contrast by drawing no-buffer and single-buffer. [s. 43/44] •Contrast block-oriented single buffering and stream-oriented single buffering. [s. 45/46] •Contrast by drawing single buffer and double buffer. [s. 44/47] •Contrast by drawing double buffer and circular buffer. [s. 47/48] •Discuss buffer limitations [s. 49]
  • 106. Review Questions [2] (self-study) •The actual details of disk I/O operation depend on many things. Draw a pictorial diagram that illustrates the general timing diagram of disk I/O transfer. [s. 51] •Disk performance is measured by access time. Comment to show the different components that are considered in measuring the access time. [s. 53] •List four disk scheduling algorithms that are based on selection according to requester and conduct a comparison between two of them. [s. 64] •List five disk scheduling algorithms that are based on selection according to requested item and conduct a comparison between two of them. [s. 64] •Contrast RAID 0 and RAID 1 [s. 68/69] •Conduct a comparison between RAID 0, RAID 1, and RAID 2 [s. 68/69/70] •Contrast RAID 2 and RAID 3 [s. 70/71] •Contrast RAID 4 and RAID 3 [s. 71/72] •Contrast RAID 5 and RAID 4 [s. 72/73] •Contrast RAID 6 and RAID 5 [s. 73/74] •Define disk cache. State two ways that are used to populate the cache. [s. 76] •Contrast LRU and LFU [s. 77/78]
  • 107. Review Statements [1] •Human readable devices: Devices used to communicate with the user, such as printers and terminals. •Machine readable devices: Used to communicate with electronic equipment such as disk drives, USB keys, sensors, controllers, and actuators. •Communication devices: Used to communicate with remote devices such as digital line drivers and modems •Controller or I/O module is added means that the processor uses programmed I/O without interrupts, and also that the processor does not need to handle details of external devices •Controller or I/O module with interrupts means that efficiency improves, as the processor does not spend time waiting for an I/O operation to be performed •Direct Memory Access means that blocks of data are moved into memory without involving the processor, with the processor involved at the beginning and end only •I/O module is a separate processor means that the CPU directs the I/O processor to execute an I/O program in main memory. •I/O processor means that the I/O module has its own local memory, and it is commonly used to control communications with interactive terminals •Processor delegates I/O operation to the DMA module •DMA module transfers data directly to or from memory •When complete, the DMA module sends an interrupt signal to the processor •Most I/O devices are extremely slow compared to main memory •Use of multiprogramming allows some processes to be waiting on I/O while another process executes
  • 108. Review Statements [2] •I/O cannot keep up with processor speed •Swapping is used to bring in ready processes, but this is an I/O operation itself •For simplicity and freedom from error it is desirable to handle all I/O devices in a uniform manner •Hide most of the details of device I/O in lower-level routines •Difficult to completely generalize, but a hierarchical modular design of I/O functions can be used •A hierarchical philosophy leads to organizing an OS into layers •Each layer relies on the next lower layer to perform more primitive functions •It provides services to the next higher layer; changes in one layer should not require changes in other layers •Logical I/O deals with the device as a logical resource •Device I/O converts requested operations into sequences of I/O instructions •Scheduling and Control performs actual queuing and control operations •A communications architecture consists of a number of layers, such as TCP/IP •Directory management is concerned with user operations affecting files •File System represents logical structure and operations •Physical organisation converts logical names to physical addresses •Processes must wait for I/O to complete before proceeding; to avoid deadlock, certain pages must remain in main memory during I/O •It may be more efficient to perform input transfers in advance of requests being made and to perform output transfers some time after the request is made.
  • 109. Review Statements [3] •In block-oriented buffering, information is stored in fixed-size blocks •In block-oriented buffering, transfers are made a block at a time •Block-oriented buffering is used for disks and USB keys •Stream-oriented buffering transfers information as a stream of bytes •Stream-oriented buffering is used for terminals, printers, communication ports, mouse and other pointing devices, and most other devices that are not secondary storage •No Buffer: Without a buffer, the OS directly accesses the device as and when it needs to •Single Buffer: Operating system assigns a buffer in main memory for an I/O request •Block-oriented Single Buffer: input transfers are made to the buffer, the block is moved to user space when needed, and the next block is moved into the buffer (read ahead or anticipated input); often a reasonable assumption, as data is usually accessed sequentially •Stream-oriented Single Buffer: line-at-a-time or byte-at-a-time; terminals often deal with one line at a time, with carriage return signaling the end of the line, and byte-at-a-time suits devices where a single keystroke may be significant, such as sensors and controllers •Double Buffer: Uses two system buffers instead of one; a process can transfer data to or from one buffer while the operating system empties or fills the other buffer •Circular Buffer: More than two buffers are used, each individual buffer is one unit in a circular buffer, and it is used when the I/O operation must keep up with the process •Buffer Limitations: Buffering smoothes out peaks in I/O demand, but with enough demand eventually all buffers become full and their advantage is lost; however, when there is a variety of I/O and process activities to service, buffering can increase the efficiency of the OS and the performance of individual processes.
  • 110. Review Statements [4] •Access Time is the sum of Seek time and Rotational delay or rotational latency •Seek time: The time it takes to position the head at the desired track •Rotational delay or rotational latency: The time it takes for the beginning of the sector to reach the head •Transfer Time is the time taken to transfer the data. •First-in, first-out (FIFO): processes requests sequentially, fair to all processes, and approaches random scheduling in performance if there are many processes •Priority: goal is not to optimize disk use but to meet other objectives; short batch jobs may have higher priority, providing good interactive response time, longer jobs may have to wait an excessively long time, and it is a poor policy for database systems •Last-in, first-out: good for transaction processing systems, the device is given to the most recent user so there should be little arm movement, and there is a possibility of starvation since a job may never regain the head of the line •Shortest Service Time First: select the disk I/O request that requires the least movement of the disk arm from its current position, always choosing the minimum seek time •SCAN: the arm moves in one direction only, satisfying all outstanding requests until it reaches the last track in that direction; then the direction is reversed •C-SCAN: restricts scanning to one direction only; when the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again •N-step-SCAN: segments the disk request queue into subqueues of length N; subqueues are processed one at a time, using SCAN, and new requests are added to some other queue while a queue is being processed
  • 111. Review Statements [5] •FSCAN: Two subqueues; when a scan begins, all of the requests are in one of the queues, with the other empty; all new requests are put into the other queue, and service of new requests is deferred until all of the old requests have been processed. •Disk I/O performance may be increased by spreading the operation over multiple read/write heads or multiple disks. •Disk failures can be recovered from if parity information is stored •RAID stands for Redundant Array of Independent Disks: a set of physical disk drives viewed by the operating system as a single logical drive. Data are distributed across the physical drives of an array. Redundant disk capacity is used to store parity information, which provides recoverability from disk failure •RAID 0 – Striped: not a true RAID – no redundancy, disk failure is catastrophic, and very fast due to parallel read/write •RAID 1 – Mirrored: redundancy through duplication instead of parity, read requests can be made in parallel, and simple recovery from disk failure •RAID 2 (Using Hamming code): synchronized disk rotation, data striping is used (extremely small strips), and a Hamming code is used to correct single-bit errors and detect double-bit errors •RAID 3 bit-interleaved parity: similar to RAID 2, but uses a single parity bit instead of an error-correcting code, with all parity stored on a single drive
  • 112. Review Statements [6] •RAID 4 (Block-level parity): a bit-by-bit parity strip is calculated across corresponding strips on each data disk, and the parity bits are stored in the corresponding strip on the parity disk. •RAID 5 (Block-level Distributed parity): similar to RAID 4, but distributes the parity strips across all drives •RAID 6 (Dual Redundancy): two different parity calculations are carried out (stored in separate blocks on different disks), and it can recover from two disks failing •Disk Cache is a buffer in main memory for disk sectors that contains a copy of some of the sectors on the disk; when an I/O request is made for a particular sector, a check is made to determine if the sector is in the disk cache, and a number of ways exist to populate the cache •Least Recently Used (LRU): the block that has been in the cache the longest with no reference to it is replaced, a stack of pointers references the cache, the most recently referenced block is on the top of the stack, and when a block is referenced or brought into the cache, it is placed on the top of the stack •Least Frequently Used (LFU): the block that has experienced the fewest references is replaced, a counter is associated with each block, the counter is incremented each time the block is accessed, and when replacement is required, the block with the smallest count is selected.
  • 113. Problems [1] 1.Consider a program that accesses a single I/O device and compare unbuffered I/O to the use of a buffer. Show that the use of the buffer can reduce the running time by at most a factor of two. 2.Generalize the result of Problem 11.1 to the case in which a program refers to n devices. 3.Calculate how much disk space (in sectors, tracks, and surfaces) will be required to store 300,000 120-byte logical records if the disk is fixed-sector with 512 bytes/sector, with 96 sectors/track, 110 tracks per surface, and 8 usable surfaces. Ignore any file header record(s) and track indexes, and assume that records cannot span two sectors. 4.Consider the disk system described in Problem 11.7, and assume that the disk rotates at 360 rpm. A processor reads one sector from the disk using interrupt-driven I/O, with one interrupt per byte. If it takes 2.5 μs to process each interrupt, what percentage of the time will the processor spend handling I/O (disregard seek time)?
  • 114. Problems [2] 5.Repeat the preceding problem using DMA, and assume one interrupt per sector. 6.It should be clear that disk striping can improve the data transfer rate when the strip size is small compared to the I/O request size. It should also be clear that RAID 0 provides improved performance relative to a single large disk, because multiple I/O requests can be handled in parallel. However, in this latter case, is disk striping necessary? That is, does disk striping improve I/O request rate performance compared to a comparable disk array without striping? 7.Consider a 4-drive, 200GB-per-drive RAID array. What is the available data storage capacity for each of the RAID levels, 0, 1, 3, 4, 5, and 6?

Editor's notes

  1. Beginning with a brief discussion of I/O devices and the organization of the I/O functions. Next examine operating system design issues, including design objectives, and the way in which the I/O function can be structured. Then I/O buffering is examined; the next sections of the chapter are devoted to magnetic disk I/O. We begin by developing a model of disk I/O performance and then examine several techniques that can be used to enhance performance.
  2. Suitable for communicating with the computer user. Examples include printers and terminals, the latter consisting of video display, keyboard, and perhaps other devices such as a mouse.
  3. Suitable for communicating with electronic equipment. Examples are disk drives, USB keys, sensors, controllers, and actuators.
  4. Suitable for communicating with remote devices. Examples are digital line drivers and modems.
  5. Each of these is covered in subsequent slides. This diversity makes a uniform and consistent approach to I/O, both from the point of view of the operating system and from the point of view of user processes, difficult to achieve.
  6. There may be differences of several orders of magnitude between the data transfer rates.
  7. The use to which a device is put has an influence on the software and policies in the operating system and supporting utilities. Examples: a disk used for files requires the support of file management software; a disk used as a backing store for pages in a virtual memory scheme depends on the use of virtual memory hardware and software. These applications have an impact on disk scheduling algorithms. As another example, a terminal may be used by an ordinary user or a system administrator, implying different privilege levels and perhaps different priorities in the operating system.
  8. A printer requires a relatively simple control interface while a disk is much more complex. This complexity is filtered to some extent by the complexity of the I/O module that controls the device.
  9. Data may be transferred as a stream of bytes or characters (e.g., terminal I/O) or in larger blocks (e.g., disk I/O).
  10. Different data encoding schemes are used by different devices, including differences in character code and parity conventions.
  11. The nature of errors, the way in which they are reported, their consequences, and the available range of responses differ widely from one device to another.
  12. From Section 1.7, Organization of the I/O Function. Programmed I/O: Processor issues an I/O command, on behalf of a process, to an I/O module; that process then busy waits for the operation to be completed before proceeding. Interrupt-driven I/O: Processor issues an I/O command on behalf of a process. If the I/O instruction from the process is nonblocking, then the processor continues to execute instructions from the process that issued the I/O command. If the I/O instruction is blocking, then the next instruction that the processor executes is from the OS, which will put the current process in a blocked state and schedule another process. Direct memory access (DMA): A DMA module controls the exchange of data between main memory and an I/O module. The processor sends a request for the transfer of a block of data to the DMA module and is interrupted only after the entire block has been transferred. Table 11.1 indicates the relationship among these three techniques. In most computer systems, DMA is the dominant form of transfer that must be supported by the operating system.
  13. The processor directly controls a peripheral device. This is seen in simple microprocessor-controlled devices. A controller or I/O module is added. The processor uses programmed I/O without interrupts. With this step, the processor becomes somewhat divorced from the specific details of external device interfaces.
  14. 3. Now interrupts are employed. The processor need not spend time waiting for an I/O operation to be performed, thus increasing efficiency. 4. The I/O module is given direct control of memory via DMA. It can now move a block of data to or from memory without involving the processor, except at the beginning and end of the transfer.
  15. 5. I/O module is enhanced to become a separate processor, with a specialized instruction set tailored for I/O. CPU directs the I/O processor to execute an I/O program in main memory. The I/O processor fetches and executes these instructions without processor intervention. Allowing the processor to specify a sequence of I/O activities and to be interrupted only when the entire sequence has been performed. 6. The I/O module has a local memory of its own and is, in fact, a computer in its own right. A large set of I/O devices can be controlled, with minimal processor involvement. Commonly used to control communications with interactive terminals. The I/O processor takes care of most of the tasks involved in controlling the terminals.
  16. The DMA mechanism can be configured in a variety of ways. Some possibilities are shown here. In the first example, all modules share the same system bus. The DMA module, acting as a surrogate processor, uses programmed I/O to exchange data between memory and an I/O module through the DMA module. This is clearly inefficient: As with processor-controlled programmed I/O, each transfer of a word consumes two bus cycles (transfer request followed by transfer).
  17. The number of required bus cycles can be cut substantially by integrating the DMA and I/O functions. This means that there is a path between the DMA module and one or more I/O modules that does not include the system bus. The DMA logic may actually be a part of an I/O module, or it may be a separate module that controls one or more I/O modules.
  18. This concept can be taken one step further by connecting I/O modules to the DMA module using an I/O bus This reduces the number of I/O interfaces in the DMA module to one and provides for an easily expandable configuration. In all of these cases the system bus that the DMA module shares with the processor and main memory is used by the DMA module only to exchange data with memory and to exchange control signals with the processor. The exchange of data between the DMA and I/O modules takes place off the system bus.
  19. Efficiency is important because I/O operations often form a bottleneck in a computing system. One way to tackle this problem is multiprogramming, which, as we have seen, allows some processes to be waiting on I/O operations while another process is executing. However, even with the vast size of main memory in today’s machines, often I/O is not keeping up with the activities of the processor. Swapping is used to bring in additional ready processes to keep the processor busy, but this in itself is an I/O operation. Thus, a major effort in I/O design has been schemes for improving the efficiency of the I/O. The area that has received the most attention, because of its importance, is disk I/O.
  20. For simplicity and freedom from error, it is desirable to handle all devices in a uniform manner. This applies both to the way in which processes view I/O devices and the way in which the operating system manages I/O devices and operations. Because of the diversity of device characteristics, it is difficult in practice to achieve true generality. What can be done is to use a hierarchical, modular approach to the design of the I/O function. This hides most of the details of device I/O in lower-level routines so that user processes and upper levels of the operating system see devices in terms of general functions, such as read, write, open, close, lock, unlock.
  21. The hierarchical philosophy suggested that the functions of the operating system should be separated according to their complexity, their characteristic time scale, and their level of abstraction. [for more detail see Chapter 2] This approach leads to an organization of the operating system into a series of layers. Each layer performs a related subset of the functions required of the operating system. It relies on the next lower layer to perform more primitive functions and to conceal the details of those functions. It provides services to the next higher layer. Ideally, the layers should be defined so that changes in one layer do not require changes in other layers.
  22. Logical I/O: Deals with the device as a logical resource and is not concerned with the details of actually controlling the device. Concerned with managing general I/O functions on behalf of user processes, allowing them to deal with the device in terms of a device identifier and simple commands such as open, close, read, write. Device I/O: The requested operations and data (buffered characters, records, etc.) are converted into appropriate sequences of I/O instructions, channel commands, and controller orders. Buffering techniques may be used to improve utilization. Scheduling and control: The actual queuing and scheduling of I/O operations occurs at this layer, as well as the control of the operations. Interrupts are handled at this layer and I/O status is collected and reported. This is the layer of software that actually interacts with the I/O module and hence the device hardware.
  23. The logical I/O module is replaced by a communications architecture, which may itself consist of a number of layers. An example is TCP/IP.
  24. Directory management: At this layer, symbolic file names are converted to identifiers that either reference the file directly or indirectly through a file descriptor or index table. Concerned with user operations that affect the directory of files, such as add, delete, and reorganize. File system : This layer deals with the logical structure of files and with the operations that can be specified by users, such as open, close, read, write. Access rights are also managed at this layer. Physical organization: Files and records must be converted to physical secondary storage addresses, taking into account the physical track and sector structure of the secondary storage device. Allocation of secondary storage space and main storage buffers is generally treated at this layer as well.
  25. To avoid deadlock, the user memory involved in the I/O operation must be locked in main memory immediately before the I/O request is issued, even though the I/O operation is queued and may not be executed for some time. If a block is being transferred from a user process area directly to an I/O module, then the process is blocked during the transfer and the process may not be swapped out. To avoid these overheads and inefficiencies , it is sometimes convenient to perform input transfers in advance of requests being made and to perform output transfers some time after the request is made.
  26. A block-oriented device stores information in blocks that are usually of fixed size, and transfers are made one block at a time. Generally, it is possible to reference data by its block number. Disks and USB keys are examples of block-oriented devices.
  27. A stream-oriented device transfers data in and out as a stream of bytes, with no block structure. Terminals, printers, communications ports, mouse and other pointing devices, and most other devices that are not secondary storage are stream oriented.
  28. When a user process issues an I/O request, the operating system assigns a buffer in the system portion of main memory to the operation.
  29. For block-oriented devices, Input transfers are made to the system buffer. When the transfer is complete, the process moves the block into user space and immediately requests another block. Called reading ahead , or anticipated input ; it is done in the expectation that the block will eventually be needed. Often this is a reasonable assumption most of the time because data are usually accessed sequentially. Only at the end of a sequence of processing will a block be read in unnecessarily.
  30. The single buffering scheme can be used in a line-at-a-time fashion or a byte-at-a-time fashion. Line-at-a-time operation is appropriate for scroll-mode terminals (sometimes called dumb terminals). Byte-at-a-time operation is used where each keystroke is significant, or for peripherals such as sensors and controllers.
  31. A process transfers data to (or from) one buffer while the operating system empties (or fills) the other.
  32. Double buffering may be inadequate if the process performs rapid bursts of I/O. The problem can often be alleviated by using more than two buffers. When more than two buffers are used, the collection of buffers is itself referred to as a circular buffer with each individual buffer being one unit in the circular buffer.
  33. Buffering is a technique that smoothes out peaks in I/O demand. However, no amount of buffering will allow an I/O device to keep pace with a process indefinitely when the average demand of the process is greater than the I/O device can service. Even with multiple buffers, all of the buffers will eventually fill up and the process will have to wait after processing each chunk of data. However, in a multiprogramming environment, when there is a variety of I/O activity and a variety of process activity to service, buffering is one tool that can increase the efficiency of the operating system and the performance of individual processes.
  34. Over the last 40 years, the increase in the speed of processors and main memory has far outstripped that for disk access, with processor and main memory speeds increasing by about two orders of magnitude compared to one order of magnitude for disk. The result is that disks are currently at least four orders of magnitude slower than main memory. Thus, the performance of the disk storage subsystem is of vital concern. In this section, we highlight some of the key issues and look at the most important approaches.
  35. The actual details of disk I/O operation depend on the computer system, the operating system, and the nature of the I/O channel and disk controller hardware. A general timing diagram of disk I/O transfer is shown in Figure 11.6.
  36. When the disk drive is operating, the disk is rotating at constant speed. To read or write, the head must be positioned at the desired track and at the beginning of the desired sector on that track. Track selection involves moving the head in a movable-head system or electronically selecting one head on a fixed-head system.
  37. Access Time is the sum of the Seek Time (the time it takes to position the head at the track) and the Rotational Delay (the time it takes for the beginning of the sector to reach the head). Once the head is in position, the read or write operation is then performed as the sector moves under the head; this is the data transfer portion of the operation; the time required for the transfer is the transfer time.
  38. Movie icon jumps to animation at http://gaia.ecs.csus.edu/%7ezhangd/oscal/DiskApplet.html The simplest form of scheduling is first-in-first-out (FIFO) scheduling, which processes items from the queue in sequential order. This strategy has the advantage of being fair, because every request is honored and the requests are honored in the order received. This figure illustrates the disk arm movement with FIFO. This graph is generated directly from the data in Table 11.2a. As can be seen, the disk accesses are in the same order as the requests were originally received. With FIFO, if there are only a few processes that require access and if many of the requests are to clustered file sectors, then we can hope for good performance. But this technique will often approximate random scheduling in performance if there are many processes competing for the disk.
  39. With a system based on priority (PRI), the control of the scheduling is outside the control of disk management software. This is not intended to optimize disk utilization but to meet other objectives within the operating system. Often short batch jobs and interactive jobs are given higher priority than longer jobs that require longer computation. This allows a lot of short jobs to be flushed through the system quickly and may provide good interactive response time. However, longer jobs may have to wait excessively long times. Furthermore, such a policy could lead to countermeasures on the part of users, who split their jobs into smaller pieces to beat the system. This type of policy tends to be poor for database systems.
  40. In transaction processing systems, giving the device to the most recent user should result in little or no arm movement for moving through a sequential file.
  41. Select the disk I/O request that requires the least movement of the disk arm from its current position. Thus, we always choose to incur the minimum seek time. Always choosing the minimum seek time does not guarantee that the average seek time over a number of arm movements will be minimum. However, this should provide better performance than FIFO. Because the arm can move in two directions, a random tie-breaking algorithm may be used to resolve cases of equal distances.
  42. With SCAN, the arm is required to move in one direction only, satisfying all outstanding requests en route, until it reaches the last track in that direction or until there are no more requests in that direction. The service direction is then reversed and the scan proceeds in the opposite direction, again picking up all requests in order. This latter refinement is sometimes referred to as the LOOK policy. The SCAN policy favors jobs whose requests are for tracks nearest to both innermost and outermost tracks and favors the latest-arriving jobs.
  43. The C-SCAN (circular SCAN) policy restricts scanning to one direction only. Thus, when the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again. This reduces the maximum delay experienced by new requests.
  44. The N-step-SCAN policy segments the disk request queue into subqueues of length N. Subqueues are processed one at a time, using SCAN. While a queue is being processed, new requests must be added to some other queue. If fewer than N requests are available at the end of a scan, then all of them are processed with the next scan.
  45. FSCAN is a policy that uses two subqueues. When a scan begins, all of the requests are in one of the queues, with the other empty. During the scan, all new requests are put into the other queue. Thus, service of new requests is deferred until all of the old requests have been processed.
  47. With multiple disks, separate I/O requests can be handled in parallel, as long as the data required reside on separate disks. Also, a single I/O request can be executed in parallel if the block of data to be accessed is distributed across multiple disks.
  48. Movie icon links to animation at: http://gaia.ecs.csus.edu/%7ezhangd/oscal/RAIDFiles/RAID.htm The RAID scheme consists of seven levels, zero through six. These levels do not imply a hierarchical relationship but designate different design architectures that share three common characteristics: 1. RAID is a set of physical disk drives viewed by the operating system as a single logical drive. 2. Data are distributed across the physical drives of an array in a scheme known as striping, described subsequently. 3. Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk failure.
  49. RAID level 0 is not a true member of the RAID family, because it does not include redundancy. The advantage of this layout is that if a single I/O request consists of multiple logically contiguous strips, then up to n strips for that request can be handled in parallel, greatly reducing the I/O transfer time.
  50. Each logical strip is mapped to two separate physical disks so that every disk in the array has a mirror disk that contains the same data. A read request can be serviced by either of the two disks that contains the requested data, whichever one involves the minimum seek time plus rotational latency. A write request requires that both corresponding strips be updated, but this can be done in parallel. Thus, the write performance is dictated by the slower of the two writes. Recovery from a failure is simple. When a drive fails, the data may still be accessed from the second drive.
  51. In a parallel access array, all member disks participate in the execution of every I/O request. Typically, the spindles of the individual drives are synchronized so that each disk head is in the same position on each disk at any given time. As in the other RAID schemes, data striping is used. In the case of RAID 2 and 3, the strips are very small, often as small as a single byte or word. With RAID 2, an error-correcting code is calculated across corresponding bits on each data disk, and the bits of the code are stored in the corresponding bit positions on multiple parity disks. Typically, a Hamming code is used, which is able to correct single-bit errors and detect double-bit errors.
  52. RAID 3 is organized in a similar fashion to RAID 2. The difference is that RAID 3 requires only a single redundant disk, no matter how large the disk array. RAID 3 employs parallel access, with data distributed in small strips. Instead of an error-correcting code, a simple parity bit is computed for the set of individual bits in the same position on all of the data disks.
  53. A bit-by-bit parity strip is calculated across corresponding strips on each data disk, and the parity bits are stored in the corresponding strip on the parity disk.
  54. RAID 5 is organized in a similar fashion to RAID 4. RAID 5 distributes the parity strips across all disks. A typical allocation is a round-robin scheme For an n-disk array, the parity strip is on a different disk for the first n stripes, and the pattern then repeats. The distribution of parity strips across all drives avoids the potential I/O bottleneck of the single parity disk found in RAID 4.
  55. Two different parity calculations are carried out and stored in separate blocks on different disks. Thus, a RAID 6 array whose user data require N disks consists of N+2 disks. P and Q are two different data check algorithms. One of the two is the exclusive-OR calculation used in RAID 4 and 5. The other is an independent data check algorithm. This makes it possible to regenerate data even if two disks containing user data fail.
  56. A disk cache is a buffer in main memory for disk sectors. The cache contains a copy of some of the sectors on the disk. When an I/O request is made for a particular sector, a check is made to determine if the sector is in the disk cache. If so, the request is satisfied via the cache. If not, the requested sector is read into the disk cache from the disk. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache to satisfy a single I/O request, it is likely that there will be future references to that same block.
  57. The most commonly used algorithm is least recently used (LRU) Replace that block that has been in the cache longest with no reference to it. The cache consists of a stack of blocks, with the most recently referenced block on the top of the stack. When a block in the cache is referenced, it is moved from its existing position on the stack to the top of the stack. When a block is brought in from secondary memory, remove the block that is on the bottom of the stack and push the incoming block onto the top of the stack. It is not necessary actually to move these blocks around in main memory; a stack of pointers can be associated with the cache.
  58. Replace that block in the set that has experienced the fewest references. LFU could be implemented by associating a counter with each block. When a block is brought in, it is assigned a count of 1; with each reference to the block, its count is incremented by 1. When replacement is required, the block with the smallest count is selected. Intuitively, it might seem that LFU is more appropriate than LRU because LFU makes use of more pertinent information about each block in the selection process. Why?
  59. The blocks are logically organized in a stack, as with the LRU algorithm. A certain portion of the top part of the stack is designated the new section. When there is a cache hit, the referenced block is moved to the top of the stack. If the block was already in the new section, its reference count is not incremented; otherwise it is incremented by 1. Given a sufficiently large new section, the counts of blocks that are repeatedly re-referenced within a short interval therefore remain unchanged. On a miss, the block with the smallest reference count that is not in the new section is chosen for replacement; the least recently used such block is chosen in the event of a tie. A further refinement (Figure 11.9b) divides the stack into three sections: new, middle, and old. As before, reference counts are not incremented on blocks in the new section, but only blocks in the old section are eligible for replacement. Assuming a sufficiently large middle section, this gives relatively frequently referenced blocks a chance to build up their reference counts before becoming eligible for replacement.
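A sketch of the simpler, two-section variant (Figure 11.9a): an LRU stack whose top new_size slots form the new section. The class, its names, and the sizes are all invented for illustration:

    class FrequencyBasedCache:
        def __init__(self, capacity, new_size):
            self.capacity, self.new_size = capacity, new_size
            self.stack = []                      # index 0 is the top of the stack
            self.count = {}                      # block number -> reference count

        def reference(self, block_no):
            if block_no in self.stack:           # cache hit
                # Count the reference only if the block was outside the new section.
                if block_no not in self.stack[:self.new_size]:
                    self.count[block_no] += 1
                self.stack.remove(block_no)
                self.stack.insert(0, block_no)   # move to the top of the stack
                return
            if len(self.stack) >= self.capacity: # miss with a full cache
                old = self.stack[self.new_size:] # only blocks below the new section
                victim = min(reversed(old), key=self.count.get)  # LRU breaks ties
                self.stack.remove(victim)
                del self.count[victim]
            self.stack.insert(0, block_no)       # bring the new block in
            self.count[block_no] = 1             # with a count of 1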
  60. Figure 11.10 summarizes results from several studies using LRU, one for a UNIX system running on a VAX. Figure 11.11 shows results for simulation studies of the frequency-based replacement algorithm. A comparison of the two figures points out one of the risks of this sort of performance assessment: the figures appear to show that LRU outperforms the frequency-based replacement algorithm, but when identical reference patterns using the same cache structure are compared, the frequency-based replacement algorithm is superior. Thus, the exact sequence of reference patterns, plus related design issues such as block size, will have a profound influence on the performance achieved.
  61. In UNIX, each individual I/O device is associated with a special file. These are managed by the file system and are read and written in the same manner as user data files. This provides a clean, uniform interface to users and processes. To read from or write to a device, read and write requests are made for the special file associated with the device.
  62. There are two types of I/O in UNIX: buffered and unbuffered. Buffered I/O passes through system buffers, whereas unbuffered I/O typically involves the DMA facility, with the transfer taking place directly between the I/O module and the process I/O area. For buffered I/O, two types of buffers are used: system buffer caches and character queues.
  63. The buffer cache in UNIX is essentially a disk cache. I/O operations with the disk are handled through the buffer cache. The data transfer between the buffer cache and the user process space always occurs using DMA. Because both the buffer cache and the process I/O area are in main memory, the DMA facility is used to perform a memory-to-memory copy. This does not use up any processor cycles, but it does consume bus cycles. To manage the buffer cache, three lists are maintained:
• Free list: list of all slots in the cache that are available for allocation (a slot is referred to as a buffer in UNIX; each slot holds one disk sector)
• Device list: list of all buffers currently associated with each disk
• Driver I/O queue: list of buffers that are actually undergoing or waiting for I/O on a particular device
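A toy sketch of the three lists, just to show how a slot moves between them; all names and structure are invented, and a real kernel keeps a buffer on several lists at once via pointer fields in the buffer header:

    from collections import deque

    class Buffer:                                  # one slot: holds one disk sector
        def __init__(self):
            self.device = None
            self.block_no = None
            self.data = None

    free_list = deque(Buffer() for _ in range(8))  # slots available for allocation
    device_list = {}                               # device -> buffers bound to it
    driver_io_queue = {}                           # device -> buffers awaiting I/O

    def start_read(device, block_no):
        buf = free_list.popleft()                  # claim a slot from the free list
        buf.device, buf.block_no = device, block_no
        device_list.setdefault(device, []).append(buf)
        driver_io_queue.setdefault(device, deque()).append(buf)  # queue the disk read
        return buf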
  64. Block-oriented devices, such as disks and USB keys, can be effectively served by the buffer cache. A different form of buffering is appropriate for character-oriented devices, such as terminals and printers. A character queue is either written by the I/O device and read by the process, or written by the process and read by the device. In both cases, the producer/consumer model introduced in Chapter 5 is used. Thus, character queues may only be read once; as each character is read, it is effectively destroyed. This is in contrast to the buffer cache, which may be read multiple times and hence follows the readers/writers model.
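A character queue is just the bounded producer/consumer queue from Chapter 5; Python's standard queue.Queue shows the destructive-read behavior directly (the device/process roles here are simulated):

    import queue

    char_q = queue.Queue()                     # a character queue

    def device_writes(text):                   # producer, e.g. keystrokes arriving
        for ch in text:
            char_q.put(ch)

    def process_reads(n):                      # consumer
        # Each get() removes its character: a character queue is read only once.
        return "".join(char_q.get() for _ in range(n))

    device_writes("ls -l\n")
    assert process_reads(6) == "ls -l\n"
    assert char_q.empty()                      # the characters have been consumed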
  65. Unbuffered I/O, which is simply DMA between device and process space, is always the fastest method for a process to perform I/O. A process that is performing unbuffered I/O is locked in main memory and cannot be swapped out. This reduces the opportunities for swapping by tying up part of main memory, thus reducing the overall system performance. Also, the I/O device is tied up with the process for the duration of the transfer, making it unavailable for other processes.
  66. This figure shows the types of I/O suited to each type of device. Disk drives are block oriented and have the potential for reasonably high throughput, so I/O for these devices tends to be unbuffered or via the buffer cache. Tape drives are functionally similar to disk drives and use similar I/O schemes. Because terminals involve a relatively slow exchange of characters, terminal I/O typically makes use of the character queue. Similarly, communication lines require serial processing of bytes of data for input or output and are best handled by character queues. The type of I/O used for a printer will generally depend on its speed.
  67. In general terms, the Linux I/O kernel facility is very similar to that of other UNIX implementations, such as SVR4. The Linux kernel associates a special file with each I/O device driver. Block, character, and network devices are recognized. In this section, we look at several features of the Linux I/O facility.
  68. The elevator scheduler maintains a single queue for disk read and write requests and performs both sorting and merging functions on the queue. The elevator scheduler keeps the list of requests sorted by block number. As the disk requests are handled, the drive moves in a single direction, satisfying each request as it is encountered.
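A minimal sketch of the single sorted queue; the class and names are invented, this version services one sweep in the direction of increasing block numbers, and a real elevator would also merge requests for adjacent blocks:

    import bisect

    class ElevatorQueue:
        def __init__(self):
            self.pending = []                  # block numbers, kept sorted

        def add(self, block_no):
            # Insertion keeps the queue sorted; merging of adjacent requests
            # would also happen here.
            bisect.insort(self.pending, block_no)

        def sweep(self, head):
            # Service, in order, every request at or beyond the head position:
            # one pass of the arm in the direction of increasing block numbers.
            i = bisect.bisect_left(self.pending, head)
            served, self.pending = self.pending[i:], self.pending[:i]
            return served

    q = ElevatorQueue()
    for block in [90, 10, 50, 70]:
        q.add(block)
    print(q.sweep(head=40))                    # -> [50, 70, 90]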
  69. Each incoming request is placed in the sorted elevator queue. In addition, the same request is placed at the tail of a read FIFO queue for a read request or a write FIFO queue for a write request. The read and write queues maintain a list of requests in the sequence in which the requests were made. Associated with each request is an expiration time, with a default value of 0.5 seconds for a read request and 5 seconds for a write request. Ordinarily, the scheduler dispatches from the sorted queue. When a request is satisfied, it is removed from the head of the sorted queue and also from the appropriate FIFO queue. However, when the item at the head of one of the FIFO queues becomes older than its expiration time, then the scheduler next dispatches from that FIFO queue, taking the expired request, plus the next few requests from the queue. As each request is dispatched, it is also removed from the sorted queue.
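The deadline idea can be sketched with one sorted queue plus the two FIFO queues. This toy version uses lazy deletion so a dispatched request is not served twice, and it dispatches single requests rather than the small batch a real scheduler takes after an expiration; all names are invented, while the 0.5 s and 5 s deadlines come from the description above:

    import heapq
    import time
    from collections import deque

    READ_EXPIRE, WRITE_EXPIRE = 0.5, 5.0          # default deadlines, in seconds

    sorted_q = []                                 # elevator order: (block, seq, req)
    fifo = {"read": deque(), "write": deque()}    # submission order: (deadline, req)
    seq = 0                                       # tie-breaker for equal block numbers

    def submit(block, kind):
        global seq
        seq += 1
        req = {"block": block, "kind": kind, "done": False}
        heapq.heappush(sorted_q, (block, seq, req))
        ttl = READ_EXPIRE if kind == "read" else WRITE_EXPIRE
        fifo[kind].append((time.monotonic() + ttl, req))
        return req

    def dispatch():
        now = time.monotonic()
        for kind in ("read", "write"):            # an expired FIFO head wins
            q = fifo[kind]
            while q and q[0][1]["done"]:          # skip entries already dispatched
                q.popleft()
            if q and q[0][0] <= now:
                req = q.popleft()[1]
                req["done"] = True                # lazily dropped from sorted_q
                return req
        while sorted_q and sorted_q[0][2]["done"]:
            heapq.heappop(sorted_q)
        if sorted_q:                              # normal case: elevator order
            req = heapq.heappop(sorted_q)[2]
            req["done"] = True                    # lazily dropped from its FIFO
            return req
        return None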
  70. In Linux, the anticipatory scheduler is superimposed on the deadline scheduler. When a read request is dispatched, the anticipatory scheduler causes the scheduling system to delay for up to 6 milliseconds, depending on the configuration. During this small delay, there is a good chance (by the principle of locality) that the application that issued the last read request will issue another read request to the same region of the disk. If so, that request is serviced immediately. If no such read request occurs, the scheduler resumes using the deadline scheduling algorithm.
  71. In Linux 2.2 and earlier releases, the kernel maintained a page cache for reads and writes from regular file system files and for virtual memory pages, and a separate buffer cache for block I/O. For Linux 2.4 and later, there is a single unified page cache that is involved in all traffic between disk and main memory. The page cache confers two benefits. First, when it is time to write back dirty pages to disk, a collection of them can be ordered properly and written out efficiently. Second, because of the principle of temporal locality, pages in the page cache are likely to be referenced again before they are flushed from the cache, thus saving a disk I/O operation. Dirty pages are written back to disk in two situations: when free memory falls below a specified threshold, the kernel reduces the size of the page cache to release memory to be added to the free memory pool; and when dirty pages grow older than a specified threshold, a number of dirty pages are written back to disk.
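A toy decision function for the two write-back triggers; the thresholds, field names, and function name are invented for illustration and are not actual kernel defaults:

    import time

    FREE_PAGE_THRESHOLD = 1024                    # illustrative values only,
    DIRTY_AGE_THRESHOLD = 30.0                    # not real kernel defaults

    def pages_to_write_back(page_cache, free_pages):
        now = time.monotonic()
        dirty = [p for p in page_cache if p["dirty"]]
        if free_pages < FREE_PAGE_THRESHOLD:
            victims = dirty                       # low memory: shrink the cache
        else:
            victims = [p for p in dirty
                       if now - p["dirtied_at"] > DIRTY_AGE_THRESHOLD]
        # Order by block number so the dirty pages go out as one efficient pass.
        return sorted(victims, key=lambda p: p["block"])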
  72. The I/O manager is responsible for all I/O for the operating system and provides a uniform interface that all types of drivers can call.
  73. The I/O manager works closely with four types of kernel components:
• Cache manager: handles file caching for all file systems. A kernel thread, the lazy writer, periodically batches the updates together to write to disk, which allows the I/O to be more efficient. The cache manager works by mapping regions of files into kernel virtual memory and then relying on the virtual memory manager to do most of the work to copy pages to and from the files on disk.
• File system drivers: the I/O manager treats a file system driver as just another device driver and routes I/O requests for file system volumes to the appropriate software driver for that volume. The file system, in turn, sends I/O requests to the software drivers that manage the hardware device adapter.
• Network drivers: Windows includes integrated networking capabilities and support for remote file systems. The facilities are implemented as software drivers rather than as part of the Windows Executive.
• Hardware device drivers: these software drivers access the hardware registers of the peripheral devices using entry points in the kernel's Hardware Abstraction Layer. A set of these routines exists for every platform that Windows supports; because the routine names are the same for all platforms, the source code of Windows device drivers is portable across different processor types.
  74. Shadow copies are an efficient way of making consistent snapshots of volumes so that they can be backed up. They are also useful for archiving files on a per-volume basis. If a user deletes a file, he or she can retrieve an earlier copy from any available shadow copy made by the system administrator. Shadow copies are implemented by a software driver that makes copies of data on the volume before it is overwritten.