Storage Area Networks Unit 2 Notes
INTELLIGENT DISK SUBSYSTEMS
Caching: Acceleration of Hard Disk Access
 In the field of disk subsystems, caches are designed to accelerate write and read
accesses to physical hard disks.
 In this connection we can differentiate between two types of cache:
1. Cache on the hard disk
2. Cache in the RAID controller.
a. Write cache
b. Read cache
Cache on the hard disk
 Each individual hard disk comes with a very small cache.
 This is necessary because the transfer rate of the I/O channel to the disk controller is
significantly higher than the speed at which the disk controller can write to or read
from the physical hard disk.
 If a server or a RAID controller writes a block to a physical hard disk, the disk
controller stores this in its cache. The disk controller can thus write the block to the
physical hard disk in its own time while the I/O channel can be used for data traffic
to the other hard disks.
 The hard disk controller transfers the block from its cache to the RAID controller or to the server at the higher data rate of the I/O channel.
Note1: The disk controller is the circuit which enables the CPU to communicate with a hard disk.
Note2: A RAID controller is a hardware device or software program used to manage hard disk drives.
Note3: A RAID controller is often incorrectly referred to as a disk controller. The two should not be confused as they provide very different functionality.
Write cache in the disk subsystem controller (RAID controller)
 Disk subsystem controllers come with their own cache, which in some models is gigabytes in size.
 Many applications do not write data at a continuous rate, but in batches. If a server sends
several data blocks to the disk subsystem, the controller initially buffers all blocks into a
write cache with a battery backup and immediately reports back to the server that all data
has been securely written to the drive.
 The battery backup is necessary to allow the data in the write cache to survive a power cut.
Read cache in the disk subsystem controller (RAID controller)
 To speed up read access by the server, the disk subsystem’s controller must copy the relevant data blocks from the slower physical hard disk to the fast cache before the server requests the data in question.
 The problem with this is that it is very difficult for the disk subsystem’s controller to work out in advance what data the server will ask for next.
 The controller can only analyse past data access and use this to extrapolate which data blocks the server will access next.
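To make the read-ahead idea concrete, the following short Python sketch models a controller read cache that prefetches the next few blocks once it detects a sequential access pattern. It is a hypothetical, simplified model (the class and parameter names are invented); real controllers use far more elaborate prediction heuristics.

# Hypothetical sketch of a controller read cache with sequential
# prefetch: if the recent accesses look sequential, the controller
# copies the next blocks from disk into the cache ahead of time.

class ReadCache:
    def __init__(self, disk, prefetch_depth=4):
        self.disk = disk                  # block number -> data (the slow disk)
        self.cache = {}                   # cached blocks
        self.last_block = None
        self.prefetch_depth = prefetch_depth

    def read(self, block):
        if block in self.cache:           # cache hit: served at I/O-channel speed
            data = self.cache[block]
        else:                             # cache miss: fetch from the slow disk
            data = self.disk[block]
            self.cache[block] = data
        # naive heuristic: sequential access predicts more sequential access
        if self.last_block is not None and block == self.last_block + 1:
            for b in range(block + 1, block + 1 + self.prefetch_depth):
                if b in self.disk:
                    self.cache.setdefault(b, self.disk[b])
        self.last_block = block
        return data

disk = {n: f"block-{n}" for n in range(16)}
cache = ReadCache(disk)
cache.read(0); cache.read(1)              # sequential pattern detected
assert 2 in cache.cache                   # blocks 2..5 were prefetched

Once blocks 0 and 1 have been read in sequence, blocks 2 to 5 are already waiting in the cache and can be served at the speed of the I/O channel rather than that of the physical disk.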
Intelligent disk subsystems
 Intelligent disk subsystems represent the third level of complexity for controllers after JBODs
and RAID arrays.
 The controllers of intelligent disk subsystems offer additional functions over and above
those offered by RAID.
 These functions include instant copies, remote mirroring and LUN masking.
 Instant copies
 Instant copies can virtually copy data sets of several terabytes within a disk subsystem in a
few seconds.
 Virtual copying means that disk subsystems fool the attached servers into believing that they
are capable of copying such large data quantities in such a short space of time.
 The actual copying process takes significantly longer. However, the same server, or a second server, can access the virtually copied data after a few seconds.
1. Server 1 works on the original data
2. The original data is virtually copied in a few seconds
3. Then server 2 can work with the data copy, whilst server 1 continues to operate with the original data
 Instant copies are used, for example, for the generation of test data, for the backup of data
and for the generation of data copies for data mining.
 When copying data using instant copies, attention should be paid to the consistency of the copied data.
 The background process for copying all the data of an instant copy requires many hours when very large data volumes are involved, so repeating a full copy each time is not a viable alternative.
 One remedy is the incremental instant copy, where the data is only copied in its entirety the first time around. Afterwards the instant copy is repeated – for example, daily – whereby only the changes made since the previous instant copy are copied.
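The "virtual copy" behaviour can be pictured as a copy-on-write scheme: the instant copy initially shares all blocks with the original, and a block is physically preserved only just before the original overwrites it. The following minimal Python sketch illustrates this idea under simplified assumptions; it is not the mechanism of any particular disk subsystem.

# Minimal copy-on-write sketch of an instant copy: the snapshot is
# created instantly by sharing blocks with the original; a block is
# physically copied only just before the original overwrites it.

class InstantCopy:
    def __init__(self, source):
        self.source = source              # the original volume (block -> data)
        self.private = {}                 # blocks preserved for the snapshot

    def read(self, block):
        # preserved old version if the original changed it, else shared block
        return self.private.get(block, self.source[block])

    def before_source_write(self, block):
        # called by the original volume before it overwrites a block
        if block not in self.private:
            self.private[block] = self.source[block]   # the actual copy

original = {0: "A", 1: "B", 2: "C"}
snap = InstantCopy(original)              # "copied" in O(1), no data moved yet
snap.before_source_write(1)               # original is about to change block 1
original[1] = "B'"                        # server 1 keeps working on the original
assert snap.read(1) == "B" and snap.read(2) == "C"   # server 2 sees the old state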
 Remote mirroring
 Something as simple as a power failure can prevent access to production data and data
copies for several hours. A fire in the disk subsystem would destroy original data and data
copies.
 Remote mirroring offers protection against such disasters.
 Modern disk subsystems can now mirror their data, or part of their data, independently to a second disk subsystem located a long way away.
 The entire remote mirroring operation is handled by the two participating disk subsystems.
 Remote mirroring is invisible to application servers and does not consume their resources.
1. The application server stores its data on a local disk subsystem.
2. The disk subsystem saves the data to several physical drives by means of RAID.
3. The local disk subsystem uses remote mirroring to mirror the data onto a second disk subsystem
located in the backup data centre
4. Users use the application via the LAN.
5. The stand-by server in the backup data centre is used as a test system.
6. If the first disk subsystem fails, the application is started up on the stand-by server using the data of the
second disk subsystem
7. Users use the application via the WAN.
Two types of remote mirroring:
1. Synchronous
2. Asynchronous
Synchronous remote mirroring
 In synchronous remote mirroring a disk subsystem does not acknowledge write operations
until it has saved a block itself and received write confirmation from the second disk
subsystem.
 Synchronous remote mirroring has the advantage that the copy of the data held by the
second disk subsystem is always up-to-date.
Asynchronous remote mirroring
 In asynchronous remote mirroring one disk subsystem acknowledges a write operation as soon as it has
saved the block itself.
 In asynchronous remote mirroring there is no guarantee that the data on the second disk
subsystem is up-to-date.
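The difference between the two modes comes down to when the write is acknowledged to the server. The following simplified Python sketch (class and method names are invented for illustration) acknowledges after the remote confirmation in synchronous mode, and immediately after the local write in asynchronous mode, replicating later in the background.

# Simplified sketch contrasting synchronous and asynchronous remote
# mirroring: the only difference is whether the acknowledgement to
# the server waits for the remote disk subsystem.

import queue, threading

class MirroredSubsystem:
    def __init__(self, synchronous):
        self.synchronous = synchronous
        self.local, self.remote = {}, {}
        self.pending = queue.Queue()              # queue for async replication
        threading.Thread(target=self._replicate, daemon=True).start()

    def write(self, block, data):
        self.local[block] = data                  # save the block locally
        if self.synchronous:
            self._send_to_remote(block, data)     # wait for remote confirmation
        else:
            self.pending.put((block, data))       # replicate later
        return "ack"                              # acknowledgement to the server

    def _send_to_remote(self, block, data):
        self.remote[block] = data                 # stands in for the transfer and its confirmation

    def _replicate(self):
        while True:
            block, data = self.pending.get()
            self._send_to_remote(block, data)

sync = MirroredSubsystem(synchronous=True)
sync.write(7, "payload")
assert sync.remote[7] == "payload"                # copy is up to date at ack time

In the asynchronous case the acknowledgement is returned before the block reaches the second subsystem, which is exactly why the remote copy may lag behind the original.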
 LUN masking (LUN – Logical Unit Number)
 LUN masking limits the access to the hard disks that the disk subsystem exports to the connected servers over a storage area network (SAN).
 A disk subsystem makes the storage capacity of its internal physical hard disks available to
servers by permitting access to individual physical hard disks, or to virtual hard disks created
using RAID, via the connection ports.
 Based upon the SCSI protocol, all hard disks – physical and virtual – that are visible outside the disk subsystem are also known as LUNs.
 Without LUN masking, every server would see all hard disks that the disk subsystem provides. As a result, considerably more hard disks are visible to each server than is necessary.
 Each server now sees only the hard disks that it actually requires. LUN masking thus acts as a filter between the exported hard disks and the accessing servers.
 With LUN masking, each server sees only its own hard disks. A configuration error on server
1 can no longer destroy the data of the two other servers. The data is now protected.
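Conceptually, LUN masking is just an access table that the disk subsystem consults for every request arriving at its ports. The Python sketch below is a hypothetical illustration of that filter; real disk subsystems key such tables on server WWNs or port identifiers rather than plain names.

# Hypothetical sketch of LUN masking: the disk subsystem keeps a table
# of which LUNs each server may see and filters every request with it.

class DiskSubsystem:
    def __init__(self):
        self.luns = {0: "lun0-data", 1: "lun1-data", 2: "lun2-data"}
        self.masking = {                          # server identifier -> visible LUNs
            "server1": {0},
            "server2": {1, 2},
        }

    def visible_luns(self, server):
        return sorted(self.masking.get(server, set()))

    def read(self, server, lun):
        if lun not in self.masking.get(server, set()):
            raise PermissionError(f"{server} has no access to LUN {lun}")
        return self.luns[lun]

subsystem = DiskSubsystem()
assert subsystem.visible_luns("server1") == [0]   # server 1 sees only its own disk
subsystem.read("server2", 1)                      # allowed
try:
    subsystem.read("server1", 2)                  # configuration error is caught
except PermissionError:
    pass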
Availability of disk subsystems
 Today, disk subsystems can be constructed so that they can withstand the failure of any component without data being lost or becoming inaccessible.
 The following list describes the individual measures that can be taken to increase the availability of data:
• The data is distributed over several hard disks using RAID processes and supplemented by further
data for error correction. After the failure of a physical hard disk, the data of the defective hard disk
can be reconstructed from the remaining data and the additional data.
• Individual hard disks store the data using the so-called Hamming code. The Hamming code allows data to be correctly restored even if individual bits are changed on the hard disk (a small encoding sketch follows this list).
• Each internal physical hard disk can be connected to the controller via two internal I/O channels. If
one of the two channels fails, the other can still be used.
• The controller in the disk subsystem can be realised by several controller instances. If one of the
controller instances fails, one of the remaining instances takes over the tasks of the defective
instance.
• Server and disk subsystem are connected together via several I/O channels. If one of the channels
fails, the remaining ones can still be used.
• Instant copies can be used to protect against logical errors. For example, it would be possible
to create an instant copy of a database every hour. If a table is ‘accidentally’ deleted, then the
database could revert to the last instant copy in which the database is still complete.
• Remote mirroring protects against physical damage. If, for whatever reason, the original data can
no longer be accessed, operation can continue using the data copy that was generated
using remote mirroring
• LUN masking limits the visibility of virtual hard disks. This prevents data being changed or deleted
unintentionally by other servers.
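As a concrete illustration of the Hamming-code measure in the list above, the following minimal Python sketch encodes 4 data bits into a 7-bit Hamming(7,4) codeword and corrects any single flipped bit. It is a teaching-scale example only; real hard disks use much longer error-correcting codes.

# Minimal Hamming(7,4) sketch: encodes 4 data bits into 7 bits and
# corrects any single-bit error via the parity syndrome.

def hamming74_encode(d):                      # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]                   # parity over positions 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]                   # parity over positions 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]                   # parity over positions 5, 6, 7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7

def hamming74_decode(c):                      # c = received 7-bit codeword
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]            # recompute the three parities
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3           # 0 = no error, else error position
    if syndrome:
        c[syndrome - 1] ^= 1                  # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]           # extract the 4 data bits

codeword = hamming74_encode([1, 0, 1, 1])
codeword[4] ^= 1                              # simulate a single flipped bit
assert hamming74_decode(codeword) == [1, 0, 1, 1]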
The Physical I/O path from the CPU to the Storage System
 One or more CPUs process data that is stored in the CPU cache or in the RAM. CPU cache and RAM are very fast; however, their data is lost when the power is switched off.
 Therefore, the data is moved from the RAM to storage devices such as disk subsystems and tape libraries via the system bus, host I/O bus and I/O bus.
 Although storage devices are slower than CPU cache and RAM, they compensate for this by being cheaper and by their ability to store data even when the power is switched off.
The physical I/O path from the CPU to the storage system consists of system bus, host I/O bus
and I/O bus. More recent technologies such as InfiniBand, Fibre Channel and Internet SCSI (iSCSI)
replace individual buses with a serial network. For historic reasons the corresponding
connections are still called host I/O bus or I/O bus.
SCSI (Small Computer System Interface)
 Fast transfer speeds of up to 320 megabytes per second (Ultra-320 SCSI)
 SCSI defines a parallel bus for the transmission of data with additional lines for the control of
communication
 The bus can be realised in the form of printed conductors on the circuit board or as a cable
 A so-called daisy chain can connect up to 16 devices together
 The SCSI protocol defines how the devices communicate with each other via the SCSI bus.
It specifies how the devices reserve the SCSI bus and in which format data is transferred.
An SCSI bus connects one server to several peripheral devices by means of
a daisy chain. SCSI defines both the characteristics of the connection cable and also the
transmission protocol.
 The SCSI protocol introduces SCSI IDs and Logical Unit Numbers (LUNs) for the addressing of
devices
 Each device on the SCSI bus must have a unique ID, with the HBA in the server requiring its own ID.
Note: A host bus adapter (HBA) is a circuit board and/or integrated circuit adapter that provides input/output (I/O)
processing and physical connectivity between a server and a storage device
 Depending upon the version of the SCSI standard, a maximum of 8 or 16 IDs are permitted
per SCSI bus
 SCSI Target IDs with a higher priority win the arbitration of the SCSI bus.
 Devices (servers and storage devices) must reserve the SCSI bus (arbitrate) before they may
send data through it. During the arbitration of the bus, the device that has the highest
priority SCSI ID always wins.
 In the event that the bus is heavily loaded, this can lead to devices with lower priorities never
being allowed to send data. The SCSI arbitration procedure is therefore ‘unfair’.
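The priority rule can be captured in a few lines of Python. The sketch below models arbitration under the commonly documented priority order for a wide SCSI bus (ID 7 down to 0, then 15 down to 8); it is a simplified illustration, not a timing-accurate model of the bus protocol.

# Simplified sketch of SCSI bus arbitration: every device that wants
# the bus asserts its ID, and the one with the highest-priority ID
# wins. On a wide (16-bit) bus the priority order is
# 7,6,...,0,15,14,...,8 - ID 7 (usually the HBA) ranks highest.

PRIORITY_ORDER = list(range(7, -1, -1)) + list(range(15, 7, -1))

def arbitrate(requesting_ids):
    """Return the SCSI ID that wins arbitration for the bus."""
    return min(requesting_ids, key=PRIORITY_ORDER.index)

# The HBA (ID 7) competes with two disks and always wins:
assert arbitrate({7, 3, 10}) == 7
# Among the remaining devices, ID 3 outranks ID 10:
assert arbitrate({3, 10}) == 3
# A low-priority device can starve if higher-priority IDs keep requesting,
# which is why the SCSI arbitration procedure is called 'unfair'.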
SCSI and storage networks
 SCSI is only suitable for the realisation of storage networks to a limited degree
 First, a SCSI daisy chain can only connect a very few devices with each other.
 Second, the maximum lengths of SCSI buses greatly limit the construction of storage networks.
I/O TECHNIQUES
Fibre Channel Protocol Stack
 Fibre Channel is currently (2009) the technique most frequently used for implementing
storage networks.
 Fibre Channel was originally developed as a backbone technology for the connection of LANs.
 By coincidence, the design goals of Fibre Channel are covered by the requirements of a transmission technology for storage networks, such as:
• Serial transmission for high speed and long distances;
• Low rate of transmission errors;
• Low delay (latency) of the transmitted data;
 The Fibre Channel protocol stack is subdivided into five layers
 The lower four layers, FC-0 to FC-3, define the fundamental communication techniques, i.e. the physical levels, the transmission and the addressing. The upper layer, FC-4, defines how application protocols (upper layer protocols, ULPs) are mapped onto the underlying Fibre Channel network.
Links, ports and topologies
The Fibre Channel standard defines three different topologies:
1. Point-to-point: defines a bi-directional connection between two devices.
2. Arbitrated loop: defines a unidirectional ring in which only two devices can ever exchange data with one another at any one time.
3. Fabric: defines a network in which several devices can exchange data simultaneously at full bandwidth.
Links
 The connection between two ports is called a link
 In the point-to-point topology and in the fabric topology the links are always bi-directional
 The links of the arbitrated loop topology are unidirectional
Ports:
A Fibre Channel (FC) port is a hardware pathway into and out of a node that performs data
communications over an FC link. (An FC link is sometimes called an FC channel.)
B-Port (Bridge Port): B-Ports serve to connect two Fibre Channel switches together via Asynchronous Transfer Mode (ATM).
FC-1: 8b/10b encoding, ordered sets and link control protocol
FC-1 defines how data is encoded before it is transmitted via a Fibre Channel cable
(8b/10b encoding). FC-1 also describes certain transmission words (ordered sets) that are
required for the administration of a Fibre Channel connection (link control protocol).
 8b/10b encoding: An encoding procedure that converts an 8-bit data byte sequence into a
10-bit transmission word sequence that is optimised for serial transmission.
 Ordered sets: Ordered sets are four-byte transmission words containing data and special characters which have a special meaning.
 Link Control Protocol: With the aid of ordered sets, FC-1 defines various link level protocols
for the initialisation and administration of a link. Examples of link level protocols are the
initialisation and arbitration of an arbitrated loop.
FC-2: Data transfer
 FC-2 is the most comprehensive layer in the Fibre Channel protocol stack.
 It determines how larger data units (for example, a file) are transmitted via the Fibre
Channel network.
 It regulates flow control, which ensures that the transmitter only sends data at a speed that the receiver can process.
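Fibre Channel realises this flow control with a credit mechanism (buffer-to-buffer credits). The Python sketch below is a highly simplified, hypothetical model of the idea: the transmitter may only send while it holds credits, and a credit is returned whenever the receiver has processed a frame and freed the corresponding buffer.

# Highly simplified sketch of credit-based flow control: the
# transmitter consumes one credit per frame and must pause when
# credits run out; the receiver returns a credit for every frame it
# has processed and whose buffer is free again.

class CreditLink:
    def __init__(self, buffer_credits):
        self.credits = buffer_credits      # buffers the receiver advertised
        self.receiver_queue = []

    def send_frame(self, frame):
        if self.credits == 0:
            return False                   # transmitter must wait
        self.credits -= 1                  # one receive buffer is now occupied
        self.receiver_queue.append(frame)
        return True

    def receiver_process_one(self):
        if self.receiver_queue:
            self.receiver_queue.pop(0)     # buffer freed
            self.credits += 1              # credit returned to the transmitter

link = CreditLink(buffer_credits=2)
assert link.send_frame("f1") and link.send_frame("f2")
assert not link.send_frame("f3")           # out of credits: sender throttled
link.receiver_process_one()                # receiver frees a buffer
assert link.send_frame("f3")               # transmission can resume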
FC-3: Common services
The FC-3 level of the FC standard is intended to provide the common services required for advanced
features such as:
Striping - To multiply bandwidth using multiple N_Ports in parallel to transmit a single information unit across multiple links.
Hunt groups - The ability for more than one Port to respond to the same alias address. This improves
efficiency by decreasing the chance of reaching a busy N_Port.
Multicast - Multicast delivers a single transmission to multiple destination ports. This includes
sending to all N_Ports on a Fabric (broadcast) or to only a subset of the N_Ports on a Fabric.
Fibre Channel SAN
This section expands our view of Fibre Channel with the aim of realising storage networks with Fibre Channel. To this end, we will first consider the three Fibre Channel topologies – point-to-point, fabric and arbitrated loop – more closely. Their definitions are the same as those given above.
IP STORAGE:
 IP storage is an approach to build storage networks upon TCP, IP and Ethernet.
In general terms, IP storage means using the Internet Protocol (IP) in a storage area network (SAN). The IP traffic in the SAN environment usually runs over Gigabit Ethernet. IP storage is usually positioned as a substitute for the Fibre Channel framework found in traditional SAN infrastructure.
The FC infrastructure in SANs has brought certain issues, such as expense, complexity and interoperability, which have hindered the implementation of FC SANs. The proponents of IP storage hold the strong view that, given the benefits offered by an IP-based storage infrastructure over the Fibre Channel alternative, widespread adoption of SANs can take place.
The advantage of common network hardware and technologies makes IP SAN deployment less complicated than FC SAN. As the hardware components are less expensive and the technology is widely known and well used, interoperability issues and training costs are automatically lower. Furthermore, the ubiquity of TCP/IP makes it possible to extend or connect SANs worldwide.
Benefits offered by IP Storage
 With the use of IP, SAN connectivity is possible on a universal front
 When compared to Fibre Channel, IP storage connectivity offers a more interoperable architecture with fewer issues
 It lessens operational costs, as less expensive hardware components can be used than in FC
 IP storage can allow storage traffic to be routed over a separate network
 Use of existing infrastructure reduces deployment costs
 High performance levels can be achieved if the infrastructure is properly laid out
 It offers wider connectivity options that are less complex than Fibre Channel
 Deployment involves minimal disruption
The NAS Architecture
 NAS is a specialized computer that provides file access and storage for a client/server
network.
 Because it is a specialized solution, its major components are proprietary and optimized for
one activity: shared file I/O within a network
 NAS is considered a bundled solution consisting of pre-packaged hardware and installed
software regardless of vendor selection
 The bundled solution encompasses a server part, a computer system with RAM, a CPU,
buses, network and storage adapters, storage controllers, and disk storage
 The software portion contains the operating system, file system, and network drivers.
 The last element is the network segment, which consists of the network interfaces and
related connectivity components.
 Plug and Play is an important characteristic of the NAS packaged solution.
 NAS can be one of the most difficult solutions to administer and manage when the number of boxes grows beyond a reasonable level.
 NAS uses existing network resources by attaching to Ethernet-based TCP/IP network
topologies.
 Although NAS uses existing network resources, care should be taken to design expansions to
networks so increases in NAS storage traffic do not impact end-user transactional traffic
 NAS uses a Real Time Operating System (RTOS) that is optimized through proprietary
enhancements to the kernel. Unfortunately, you can’t get to it, see it, or change it. It’s a
closed box
The NAS Hardware Architecture
 Figure 9-1 is a generic layout of NAS hardware components.
 As you can see, they are just like any other server system. They have CPUs built on
motherboards that are attached to bus systems. RAM and system cache, meanwhile, are the
same with preconfigured capacities, given the file I/O specialization. What we don’t see in
the underlying system functionality are the levels of storage optimization that have evolved.
 NAS internal hardware configurations are optimized for I/O, and in particular file I/O. The result of this optimization turns the entire system into one large I/O manager for processing I/O requests on a network.
The NAS Software Architecture
 The operating system for NAS is a UNIX kernel derivative.
 Components such as memory management, resource management, and process
management are all optimized to service file I/O requests
 Memory management will be optimized to handle as many file requests as possible
Performance bottlenecks in file servers
Current NAS servers and NAS gateways, as well as classical file servers, provide their storage capacity
via conventional network file systems such as NFS and CIFS or Internet protocols such as FTP and
HTTP. Although these may be suitable for classical file sharing, such protocols are not powerful
enough for I/O-intensive applications such as databases or video processing. Nowadays, therefore,
I/O-intensive databases draw their storage from disk subsystems rather than file servers.
Let us assume for a moment that a user on an NFS client wishes to read a file that is stored on a NAS server with internal SCSI disks. The NAS server’s operating system first of all loads the file into
the main memory from the hard disk via the SCSI bus, the PCI bus and the system bus, only to
forward it from there to the network card via the system bus and the PCI bus. The data is thus
shovelled through the system bus and the PCI bus on the file server twice (Figure). If the load on a
file server is high enough, its buses can thus become a performance bottleneck.
When using classical network file systems, the data to be transported is additionally copied from the private storage area of the application into the buffer cache of the kernel.
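A back-of-the-envelope calculation makes the "twice through the buses" effect concrete. The bandwidth figures in the following Python sketch are assumed, illustrative numbers, not measurements of any particular file server.

# Illustrative calculation: because every byte served by the file
# server crosses the PCI bus and the system bus twice (disk -> RAM,
# RAM -> network card), the usable file-serving throughput is at most
# half of the slowest bus on that path. All figures are assumptions.

pci_bus_mb_s    = 533    # e.g. 64-bit/66 MHz PCI (assumed)
system_bus_mb_s = 1600   # assumed system/memory bus bandwidth
nic_mb_s        = 125    # Gigabit Ethernet payload limit, roughly

bottleneck = min(pci_bus_mb_s, system_bus_mb_s) / 2   # data crosses it twice
effective  = min(bottleneck, nic_mb_s)

print(f"Bus-limited ceiling: {bottleneck:.0f} MB/s, "
      f"effective with one GbE NIC: {effective:.0f} MB/s")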
Network Connectivity
A limiting factor of file I/O processing with NAS is the TCP/IP network environment and the
processing of TCP layers.
NAS as a storage system
Putting all the parts together allows a highly optimized storage device to be placed on the network with a minimum of network disruption, not to mention significantly reduced server OS latency and cost, while configuration flexibility and management overhead are kept to a minimum. NAS solutions are available for a diversity of configurations and workloads. They range from departmental solutions, where NAS devices can be deployed quickly in departmental environment settings, to mid-range and enterprise-class products that are generally deployed in data centre settings.