2. AGENDA
• Basics: DAS, hard disk, SCSI protocol
• SAN and NAS
– The protocols used in these environments
– The devices used to build both basic, simple environments and complex ones
– Topologies
• Servers
• Connectivity
• Controllers and drivers
• Disk arrays
• RAID
• IOPS (theory)
• Miscellaneous
3. DAS – Direct Attached Storage
HDD & Communications
4. What is DAS?
• Direct Attached Storage, or DAS, is a dedicated digital storage device attached directly to a server or PC via a cable through one of the main protocols used for DAS connections:
– ATA (Advanced Technology Attachment)
– SATA (Serial Advanced Technology Attachment)
– SCSI (Small Computer System Interface)
– SAS (Serial Attached SCSI)
– FC (Fibre Channel)
• The hard disk (HDD / DASD) is the building block of all storage systems!
• DAS creates data islands, because the data cannot be shared with other servers.
5. Anatomy of a Hard Disk
• “Platter” – a circular piece of magnetic material
• “Tracks” – platters divided into concentric circles (Track 0, Track 1, …)
• “Cylinder” – the same track across all platters (platter 1 track 0, platter 2 track 0, platter 3 track 0 together form cylinder 0)
• Disk controller – performs operations such as read and write
• Read/write heads – one per platter surface, used to read and write the data
• CHS addressing – a sector is located by Cylinder, Head, and Sector (e.g. read cylinder 0, head 0, sector 0)
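The CHS scheme above maps directly onto the linear block addressing (LBA) that later slides use. A minimal sketch in Python, assuming a hypothetical fixed geometry (real drives report their own head and sector counts; CHS sectors are conventionally numbered from 1):

```python
# Hypothetical drive geometry for illustration only -- these numbers
# are assumptions, not values from the slides.
HEADS_PER_CYLINDER = 16
SECTORS_PER_TRACK = 63

def chs_to_lba(cylinder: int, head: int, sector: int) -> int:
    """Map a CHS triple to a logical block address (sectors are 1-based)."""
    return (cylinder * HEADS_PER_CYLINDER + head) * SECTORS_PER_TRACK + (sector - 1)

def lba_to_chs(lba: int) -> tuple:
    """Invert the mapping: recover (cylinder, head, sector) from an LBA."""
    cylinder, rem = divmod(lba, HEADS_PER_CYLINDER * SECTORS_PER_TRACK)
    head, sector_index = divmod(rem, SECTORS_PER_TRACK)
    return cylinder, head, sector_index + 1

# LBA 0 is CHS 0/0/1 -- the very first sector, as in the SCSI example slide.
print(lba_to_chs(0))        # (0, 0, 1)
print(chs_to_lba(0, 0, 1))  # 0
```

This is the same translation a drive's controller performs internally when it receives a READ for an LBA.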
7. Communicating with Your Hard Disk
Various communication standards exist to talk to our hard disks. The SCSI specification maps onto ordinary communication requirements as follows:

Communication requirement → SCSI specification
• Speaker/listener → Initiator/target
• Voice/language → SCSI commands
• Medium → SCSI bus
• Addressing → SCSI ID
8. File I/O and block I/O: Local system
• Application layer – applications (Word, e-mail, Web, ERP, TSM) issue I/O requests: open, read, write, close
• Logical layer – file systems (UFS, NTFS, JFS) or "RAW" files (DBMS: DB2, Oracle), together with a volume manager, turn the file I/O request into a block I/O request
• OS layer – device drivers and adapter drivers pass the block I/O request to the host bus adapter (HBA)
• Physical layer – the HBA services the block I/O request against local storage (BUS, JBOD, RAID) or external storage reached over a SAN (FC)
10. SCSI Communication in Action
1. OS/CPU to the bus adapter (initiator): “Give me block 1 from disk 0.”
2. HBA to the target (disk 0, SCSI ID 0): “Hey, disk 0, I need your attention.”
3. Target: “OK bus, you have my attention.”
4. Initiator: “Thanks, READ LBA 2 for me.”
5. The drive converts LBA 2 → CHS 0/0/1 internally.
6. Target: “OK, here is LBA 2’s data: ‘Hello’.”
7. CPU to OS: “Your data: ‘Hello’.”
15. Storage area network components
• Servers with FC/iSCSI HBAs
• Switches / directors (FC / FCoE / iSCSI / IB)
• Storage systems (FC / FCoE / iSCSI / IB)
16. What is Fibre Channel?
• Fibre Channel is a high-speed communications method
for attaching devices to a server (host) or groups of
hosts.
– Supports very high speeds: 1, 2, 4, 8, and 16 Gb/s (depending on hardware)
– Used mainly to connect to storage devices
• Serial SCSI rather than Parallel SCSI commands
• Can support non-Fibre Channel devices through gateways
– Also supports IP over fibre
– Uses light to carry signals
– Supports highly reliable connections
– Has very little signal interference
17. FC topologies: Introduction
• Fibre Channel uses three topologies:
– Point-to-point
• Two devices are connected back to back
• This is the simplest topology, with limited connectivity
– Arbitrated loop
• All devices are in a loop or ring
– Similar to token ring or Fiber Distributed Data Interface (FDDI)
networking
– Switched (fabric)
• All devices or loops of devices are connected to Fibre
Channel switches
– Similar conceptually to modern Ethernet implementations
18. Worldwide name
• A unique identifier for each Fibre Channel device
– This is similar to the way each Ethernet card has a unique MAC
address
• Each device in the SAN is identified by a unique worldwide node name (WWNN) containing:
– A vendor identifier field, which is defined and maintained by the IEEE
– A vendor-specific information field defined by the vendor
• Each N_Port will have its own worldwide port name (WWPN)
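The field layout described above can be illustrated by splitting a name apart. A sketch assuming the common NAA type-5 format (a 4-bit name format, then the 24-bit IEEE vendor identifier, then 36 vendor-specific bits); the example WWPN is one of the illustrative values from the software zoning slide, not a real device:

```python
def wwn_fields(wwn: str) -> dict:
    """Split a colon-separated 64-bit WWN into NAA type, IEEE OUI,
    and the vendor-specific remainder (assumes NAA type-5 layout)."""
    raw = int(wwn.replace(":", ""), 16)   # 64-bit integer
    return {
        "naa": raw >> 60,                 # top 4 bits: name format (5)
        "oui": f"{(raw >> 36) & 0xFFFFFF:06x}",   # next 24 bits: vendor ID
        "vendor_specific": raw & 0xFFFFFFFFF,     # low 36 bits
    }

# naa=5, oui='00576a' for this illustrative WWPN
print(wwn_fields("50:05:76:ab:cd:22:03:65"))
```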
19. Point-to-point topology
• A point-to-point topology consists of:
– Two Fibre Channel devices connected directly together
• Characteristics of point-to-point topology
– The transmit fiber of one device goes to the receive fiber of the other device, and vice versa
– There is no sharing of the media
• This allows the devices to enjoy the total bandwidth of the link
– A simple link initialization is required of the two devices before communication can begin
20. Arbitrated loop topology
• The most complex Fibre Channel topology
– It is a cost-effective way to connect up to 127 ports in a single network
• Characteristics of arbitrated loop topology
– The media is shared among the devices, limiting each device’s access
– Arbitrated loop is not a token-passing scheme
– When a device is ready to transmit data:
• It first must arbitrate and gain control of the loop.
• Once this happens, there essentially exists point-to-point communication between this device and one other device
• All other devices in between simply repeat the data
21. Switched topology
• Used to connect many devices in a cross-point switched
configuration
– Each Fibre Channel device (or loop) connects to a switch or director
• Characteristics of the switched topology
– Benefit is that many devices can communicate at the same time
• The media is not shared
– Of course, it also requires the purchase of a switch or a director
23. Switch cascading - ISL
• Interconnection of Fibre Channel switches
• Seamlessly extends a single switch Fabric
– Increases connectivity
– Allows easy Fabric growth
– Increases distance
• Inter-Switch Links (ISLs)
– Completely automatic
• Fully Distributed Name Service
24. Interswitch link trunking
• Example flows: A 2.0 Gb, B 1.5 Gb, C 0.5 Gb, D 1.0 Gb, E 2.0 Gb
• Over individual 2 Gb ISLs, each flow is pinned to one link, so a link carrying both A and E is congested while the others sit partly idle
• A single 8 Gb trunk carries all the flows together and avoids the congestion
25. Trunking: Improved performance
• How trunking improves performance:
– Frames are striped across links
– Maintains in-order delivery
– Prevents single-ISL bandwidth saturation (“hot spots”)
• Frames on the trunk can use any link, yet arrive in order
• Trunks can be 1, 2, 3 or 4 links “wide” (2 Gb per link in the diagram)
27. Sample mesh topologies: 16-port switches
– 42 ports available – 1 hop max
– 52 ports available – 1 hop max
– 82 ports available – 2 hops max
– 104 ports available – 2 hops max
28. Zoning overview
• Zoning allows for finer segmentation of the SAN.
• With zoning you can:
– Configure barriers between different environments/configurations
– Provide logical subsets of closed user groups within the fabric
– Provide security
– Improve availability
– Prevent interference
– Provide temporary access among devices for specific purposes, such as backup
29. When to use zoning
• When needed, zoning should be used to improve:
• Interoperability
– Separate HBA zones provide good fault isolation
– Separate OS zones provide good fault isolation
– Interoperability zones should be periodically reevaluated
• Security
– Zoning is designed to support discrete environments
30. Software zoning
Example fabric: servers A, B and C (ports 1–3), tape drives Tape-A and Tape-B, and storage systems DS5000-A and DS5000-B (ports 4–7), all connected through the Storage Area Network.

Zone name → aliases:
– Zone_1: Alex, Ben, Sam
– Zone_2: Robyn, Ellen
– Zone_3: Matthew, Max, Ellen

Alias → WWPN:
– Alex: 50:05:76:ab:cd:12:06:92
– Ben: 50:05:76:ab:cd:20:09:91
– Sam: 50:05:76:ab:cd:23:05:93
– Robyn: 50:05:76:ab:cd:22:03:65
– Ellen: 50:05:76:ab:cd:20:08:90
– Matthew: 50:05:76:ab:cd:24:05:94
– Max: 50:05:76:ab:cd:02:05:94
31. Media: Cable options
• Fibre Channel over copper cable
– Distance limit of approximately 13 to 30 meters over twisted pair

Single-mode fiber (LW transmitters):
– 100 MB/s: 1310 nm, 2 m to 10 km; 1550 nm, 2 m to 50 km
– 200 MB/s: 1310 nm, 2 m to 10 km; 1550 nm, 2 m to 50 km
– 400 MB/s: 1310 nm, 2 m to 2 km
– 800 MB/s: 1310 nm, 2 m to 10 km
– 1600 MB/s: 1310 nm, 0.5 m to 10 km; 1490 nm, 0.5 m to 2 km

Multimode fiber (850 nm SW transmitter):
– 100 MB/s: 0.5 m to 300 m
– 200 MB/s: 0.5 m to 300 m
– 400 MB/s: 0.5 m to 150 m
– 800 MB/s: 0.5 m to 150 m
– 1600 MB/s: 0.5 m to 100 m
32. What is iSCSI?
• iSCSI – Internet SCSI (Small Computer System Interface): SCSI commands sent across a network in TCP/IP packets. It was developed as a storage networking standard for linking data storage facilities.
• An iSCSI expansion card offers a connection to an iSCSI storage system via iSCSI host connections, leveraging the available standard Ethernet infrastructure to offer storage area network (SAN) solutions.
34. iSCSI architecture: IP SAN storage
• On the server, the application's I/O request passes through the file system (logical layer) as file I/O and becomes block I/O
• The block I/O is carried either by an iSCSI software client (iSCSI driver + TCP/IP stack + NIC) or by a dedicated iSCSI HBA (HBA driver + TCP/IP offload)
• The block I/O travels over the IP network (GbE) to the iSCSI storage system, which serves Server1, Server2, Server3 and other servers
• This contrasts with local block I/O, where the physical layer talks directly to direct-attached disks
35. What is Fibre Channel over Ethernet?
• FCoE
– Unified I/O out of the chassis
– A single CNA handles both Ethernet and FC traffic
– Reduced cabling from chassis to top-of-rack switch
– Fewer switch modules in the chassis
– Simplified I/O management
• Increased performance
• Power and cooling savings
• Investment protection
– Seamless integration into existing network infrastructure (LAN & SAN)
– Leverages existing Fibre Channel SAN and storage infrastructure
• Seamless management integration
36. Converged architecture: FCoE
• Traditional FC and Ethernet transport use separate expansion cards and I/O modules
• In a converged architecture, Fibre Channel traffic is encapsulated into FCoE frames
• Disadvantage of FCoE: the protocol does not route
(Diagrams: traditional Fibre Channel and Ethernet vs. converged Fibre Channel and Ethernet.)
37. SAS – Serial attached SCSI
• SAS provides smaller form factor, longer cabling distances,
SCSI compatibility, and greater addressability
• SAS replaced Ultra320 SCSI in SCSI and RAID controllers
– Full duplex, dual port, point-to-point connections
• Higher bandwidth
– SAS 3.0: 8 ports; supports up to 12 Gbps per port
• Wide port (4-ports): 48 Gbps
• Two wide ports (8-ports): 96 Gbps
• PCI-E implementation to speed up access
• Greater drive support
– SCSI-based products: 14 drives per channel
– SAS-based products: 144 drives per 4 ports
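The wide-port figures above are simply lanes multiplied by the per-lane rate; a tiny sketch:

```python
def wide_port_bandwidth_gbps(lanes: int, gbps_per_lane: float = 12.0) -> float:
    """Aggregate raw bandwidth of a SAS wide port: lanes x per-lane rate
    (12 Gbps per lane here, per the SAS 3.0 figures above)."""
    return lanes * gbps_per_lane

print(wide_port_bandwidth_gbps(4))  # 48.0 -- one 4-lane wide port
print(wide_port_bandwidth_gbps(8))  # 96.0 -- two wide ports (8 lanes)
```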
38. Serial Attached SCSI: Disk attachment
• SAS wide ports: each SAS port includes four full-duplex links, or lanes, within a single connector, with each lane running at a speed of 3, 6 or 12 Gbps.
39. SAS connectivity module or “switch”
• The SAS Connectivity Module supports blade servers
– Lenovo BladeCenter
– HP BladeSystem c-class
• Support for both booting and data access
• Simplifies a blade environment by consolidating OS initialization across multiple servers
• Great for close-quarter SANs
SAS connectivity benefits:
• Shared direct-attach for up to four hosts
• High performance
• Fast out-of-the-box deployment
• Reduced complexity
• Consolidation for lower TCO
• Scalability with future SAS SAN support
41. File systems & file-sets within NAS
• File shares are exports to the user or
application
• User files are organized and stored in file
systems
– File system is local to the NAS system
• File-sets allow for breaking the file system space down into smaller manageable units
– Certain operations can be configured
for file-sets such as replication,
snapshots, and quota-management
• Pools allow placement and migration of
files to different cost storage devices
(Diagram: a NAS system exposes shares over NFS, CIFS, HTTP, FTP, …; shares map to file systems, optionally subdivided into file-sets, placed on optional pools backed by storage.)
42. Data Transfer: Block versus File
The key to understanding the difference between block and file data is the file system owner, i.e. where the file system runs:
• DAS – application and file system on the host; the storage is directly attached
• SAN – application and file system on the host; block I/O crosses the storage network (FC, iSCSI, FCoE or IB)
• NAS – application on the host; the file system lives on the NAS system; file I/O crosses the IP network (CIFS, NFS, FTP, …)
44. Fibre Channel
• Fibre Channel is a serial data transfer interface
– Copper Wire Connection
– Optical Fiber Connection
• High-speed is obtained through:
– Mapping networking and I/O protocols to Fibre Channel constructs
– Encapsulating them and transporting them within Fibre Channel
frames
(Diagram: Windows and Linux hosts with host bus adapters connected through a Fibre Channel switch to storage.)
45. Host Bus Adapter (HBA)
• A Host Bus Adapter is an I/O adapter that sits between the host computer's bus and the Fibre Channel loop, and manages the transfer of information between them
• It performs many low-level interface functions automatically, minimizing the impact on host processor performance
• Multiple technologies: Fibre Channel, iSCSI, FICON, SCSI
• The HBA enables a range of high-availability and storage management capabilities: load balancing, failover, SAN administration, storage management
46. Fibre Channel Addressing
• Fibre Channel addresses are required to route frames from source to target
• 24-bit (3-byte) physical addresses are assigned when a Fibre Channel node is connected to the switch (or to the loop in the case of FC-AL)
• The source (FC initiator: the HBA) reaches the target (FC responder: the SP ports) through the FC switch
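The 24-bit address breaks into three bytes, conventionally Domain (the switch), Area (typically the switch port) and Port (the device, or AL_PA on a loop). A sketch; the example address is a hypothetical value, not from the slides:

```python
def parse_fcid(fcid: int) -> dict:
    """Split a 24-bit Fibre Channel N_Port ID into its three bytes:
    Domain (identifies the switch), Area (typically the switch port),
    and Port (the device, or AL_PA on an arbitrated loop)."""
    return {
        "domain": (fcid >> 16) & 0xFF,
        "area": (fcid >> 8) & 0xFF,
        "port": fcid & 0xFF,
    }

# Hypothetical address 0x010200: domain 1, area 2, port 0
print(parse_fcid(0x010200))  # {'domain': 1, 'area': 2, 'port': 0}
```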
48. Switched Fabric Topology
• Switched fabric is a Fibre Channel topology where many devices connect with each other via Fibre Channel switches
– This topology allows the largest number of connections, with a theoretical 16 million devices per fabric
• Frames are routed between source and destination by the fabric
49. Single Initiator Zoning
• Always put ONLY one HBA in a zone with storage ports
• Each HBA port can only talk to storage ports in the same zone
• HBAs and storage ports may be members of more than one zone
• HBA ports are isolated from each other to avoid potential problems associated with the SCSI discovery process
(Example: a single Emulex HBA zoned to two VNX ports.)
50. Active/Passive Arrays: Failover Mechanism
• Two types of path failover:
– Array-initiated LUN trespass
• Typical cause: an SP fails or needs to reboot
• PowerPath logs a follow-over
– Host-initiated LUN trespass
• PowerPath detects a path failure, e.g. due to a cable break, port failure, etc.
• PowerPath initiates a trespass and logs the event
(Diagram: a host on Fabric A and Fabric B; the LUN trespasses from the active SP-A to the passive SP-B.)
51. Active/Active Mode (ALUA)
• Asymmetric Logical Unit Access (ALUA)
– Asymmetric accessibility to logical units through various ports
• Request-forwarding implementation
– A communication method to pass I/Os between SPs
– Software on the controller forwards requests to the other controller
• Not an active-active array model!
– I/Os are not serviced by both SPs for a given LUN
– I/Os are redirected to the SP owning the LUN
(ALUA provides both front-end and back-end fault masking.)
52. Symmetrical Active-Active: Overview
• CX (active-passive): only one SP serves I/Os for a given LUN; the remaining SP acts as standby; the SP trespasses the LUN when paths fail and host software adjusts to the new path
• VNX active-active (ALUA): the LUN is presented across both SP paths via internal links, but only one SP actively processes I/O to the back end; the host initiates a trespass when a path fails
• Symmetrical active-active: both SPs serve I/Os to and from a given LUN; if a path fails, there is no disruption to the LUN; performance is improved up to 2X; classic LUNs only!
53. Asymmetric LUN Access: VNX
• Each SP reports the SCSI TARGET_PORT_GROUPS descriptor: paths are Active/Optimized (to the owning SP) or Active/Non-Optimized
• On an SP failure, ALUA masks the failure and trespasses the LUN: I/O resumes to the LUN through the alternate SP after a short delay
(Diagram: LUNs owned by SPA and SPB, each reachable over an optimized and a non-optimized path.)
54. Symmetric LUN Access: VNX with MCx
• Both SPs send and receive on Active/Optimized paths
• Classic LUNs ONLY (OE R5.33)
• On a path failure, I/O continues through the remaining SP and paths with NO delay
(Diagram: a LUN owned by SPA, reachable through SPA and SPB with all paths optimized.)
55. LUN Parallel Access Locking Service
• Required for active-active access
• A write I/O operation acquires a lock on the LBA address on both SPs
• Lock requests are sent over CMI (the inter-SP channel)
• Lock requests are smaller/quicker than the entire I/O
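The idea of locking an LBA before either controller writes it can be sketched as a toy in-memory service. The class and method names here are invented for illustration and say nothing about the real VNX/MCx implementation:

```python
import threading

class LbaLockService:
    """Toy sketch of per-LBA write locking: a write must acquire a
    lock on the target LBA before either controller may modify it.
    Illustrative only -- not the actual array-internal protocol."""

    def __init__(self):
        self._guard = threading.Lock()   # protects the lock table itself
        self._locked_lbas = set()

    def acquire(self, lba: int) -> bool:
        """Try to lock one LBA; returns False if a writer already holds it."""
        with self._guard:
            if lba in self._locked_lbas:
                return False
            self._locked_lbas.add(lba)
            return True

    def release(self, lba: int) -> None:
        with self._guard:
            self._locked_lbas.discard(lba)

svc = LbaLockService()
print(svc.acquire(100))  # True  -- first writer gets the lock
print(svc.acquire(100))  # False -- a concurrent writer is refused
svc.release(100)
print(svc.acquire(100))  # True  -- lock is available again
```

The lock request carries only an LBA, which is why it is much smaller and quicker than shipping the entire I/O between SPs.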
56. VNX with Symmetric / Active-Active Benefits
• Lower risk with increased availability within data centers
• Improved availability
– All paths are active
– No trespass during path failure
– No trespass during NDU
• No setup on the VNX or host side
• Improved performance
– All paths serving I/O
– Up to 2X improvement
• Eliminates application timeouts
• Improves application throughput
• Multi-path load balancing
58. Fundamental Disk Performance – “Spinning Rust”
• Drive technology really has not changed much since the original IBM RAMAC drive in 1956
– An arm with a read/write head moving across a platter(s) of magnetic sectors
• Spinning rotational drive performance is dictated by two factors
– Seek time
• Time taken for the head to move from the current track to the required track
• Dictated by the generation of the drive and the form factor (3.5” vs 2.5”)
– Rotational latency
• Time taken for the drive to spin the platter so the required sector is under the head
• Dictated by the RPM of the drive
• Average access time ≈ average seek time (about ½ the end-to-end seek time) + ½ of a single rotation time
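The two factors combine into a back-of-the-envelope IOPS estimate. The seek times below are assumed typical values per drive class, not figures from the slides; real drives with command queuing beat these single-request numbers, which is why the rule-of-thumb ranges on the next slides are higher:

```python
def hdd_iops(rpm: int, avg_seek_ms: float) -> float:
    """Theoretical random IOPS of a spinning drive: the head seeks to
    the right track, then waits on average half a revolution for the
    sector, so access time = avg seek + half-rotation."""
    half_rotation_ms = (60_000 / rpm) / 2   # ms for half a revolution
    access_ms = avg_seek_ms + half_rotation_ms
    return 1000 / access_ms                 # I/Os per second

# Assumed average seek times: 8.5 ms (7.2K), 4.5 ms (10K), 3.5 ms (15K)
for rpm, seek in [(7_200, 8.5), (10_000, 4.5), (15_000, 3.5)]:
    print(f"{rpm} RPM: ~{hdd_iops(rpm, seek):.0f} IOPS")
```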
59. Nearline SAS / SATA – 7,200 RPM
• SAS vs SATA
– SAS drives – dual ported – access through two separate interfaces
– SATA drives – single ported – access through one interface only
• Nearline as a term was originally used to describe tape: “near-online”
• Consumer-grade SATA drive technology, but high capacity
– Consumer grade and highest density = lowest reliability = highest RAID protection needed
– Generally LFF (3.5”) – currently up to 4 TB
– Some SFF (2.5”) – currently up to 1 TB
• Performance, per 7.2K drive, we would normally expect around:
– 100–150 IOPS
– 100–180 MB/s
60. Enterprise SAS – 10,000 / 15,000 RPM
• Enterprise grade SAS drives, but lower capacity.
– Industry has moved to mainly SFF 2.5”
• Fundamentally different class to NL-SAS / SATA
– Not only in terms of RPM
– Better reliability, firmware, and technology
• 10K RPM
– Mid-speed, closer to NL-SAS in capacity (currently 1.2TB)
– 200-300 IOPS
– 120-200 MB/s
• 15K RPM
– Fastest HDD, lowest capacity (currently 300GB)
– 300-400 IOPS (some latest generations with short-stroking ~=500
IOPS!)
– 150-200 MB/s
61. Flash drives
What is flash memory?
• Flash memory is constantly powered nonvolatile memory that can be erased and reprogrammed.
• Its name comes from the erasure technique used, where a section of memory cells is erased in a single action or “flash.”
In the last 10 years:
• CPU speed: increased roughly 8–10×
• DRAM speed: increased roughly 7–9×
• Network speed: increased roughly 100×
• Bus speed: increased roughly 20×
• Disk speed: increased only 1.2×
62. I/O wait with classic HDD
1. Issue I/O request (~100 μs)
2. Wait for the I/O to be serviced by the disk (~5,000 μs)
3. Process I/O (~100 μs)
• Time to process one I/O request = 200 μs + 5,000 μs = 5,200 μs
• CPU utilization = processing time / total time = 200 / 5,200 ≈ 4%
(CPU state per request: ~100 µs processing, ~5,000 µs waiting, ~100 µs processing.)
63. I/O wait with Flash / SSD
1. Issue I/O request (~100 μs)
2. Wait for the I/O to be serviced by flash (~200 μs)
3. Process I/O (~100 μs)
• Time to process one I/O request = 200 μs + 200 μs = 400 μs
• CPU utilization = processing time / total time = 200 / 400 = 50%
• A ~13× (5,200 μs / 400 μs) application benefit by only changing storage latency!
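The arithmetic on these two slides can be reproduced directly:

```python
def cpu_utilization(issue_us: float, wait_us: float, process_us: float) -> float:
    """Fraction of time the CPU does useful work on one I/O:
    busy time (issue + process) divided by total elapsed time."""
    busy = issue_us + process_us
    return busy / (busy + wait_us)

hdd = cpu_utilization(100, 5_000, 100)   # 200 / 5,200 -> ~4 %
flash = cpu_utilization(100, 200, 100)   # 200 / 400   -> 50 %
print(f"HDD:   {hdd:.1%}")
print(f"Flash: {flash:.1%}")
print(f"Elapsed-time gain per request: {5_200 / 400:.0f}x")
```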
65. What is RAID?
RAID stands for Redundant Array of Independent Disks (originally Redundant Array of Inexpensive Disks).
66. Importance of RAID
1. Reliability
2. Real-time data recovery with uninterrupted access when a hard drive fails
3. System uptime and network availability; protection against data loss
4. Multiple drives working together increase system performance
68. Data Redundancy
Redundancy gives us the ability to have a drive fail without losing valuable data.
There are two types of data redundancy: disk mirroring and data parity.
69. RAID Protection Comparison
RAID-0
– Striping / concatenation
– No protection against drive loss
– Capacity = sum of drive capacities
RAID-1
– Mirroring between two drives
– Protects against one drive loss
– Capacity = capacity of a single drive
RAID-10
– Mirroring between two sets of striped drives
– Protects against the loss of up to half the mirrored set
– Capacity = half the capacity of all drives
RAID-5
– Rotating XOR parity
– Protects against one drive loss
– Capacity = (number of drives − 1) × drive capacity
RAID-6
– Rotating double XOR parity
– Protects against two drive losses
– Capacity = (number of drives − 2) × drive capacity
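The capacity rules above can be expressed as a small calculator; the 8 × 4 TB pool in the example is a hypothetical configuration:

```python
def usable_capacity(level: str, drives: int, drive_tb: float) -> float:
    """Usable capacity per the comparison above: drive count x size,
    minus the redundancy overhead of the chosen RAID level."""
    if level == "raid0":
        return drives * drive_tb            # no redundancy
    if level == "raid1":
        return drive_tb                     # 2-drive mirror keeps one copy
    if level == "raid10":
        return drives * drive_tb / 2        # half lost to mirroring
    if level == "raid5":
        return (drives - 1) * drive_tb      # one drive's worth of parity
    if level == "raid6":
        return (drives - 2) * drive_tb      # two drives' worth of parity
    raise ValueError(f"unknown level: {level}")

for lvl in ["raid0", "raid10", "raid5", "raid6"]:
    print(f"{lvl}: {usable_capacity(lvl, 8, 4.0):.0f} TB usable from 8 x 4 TB")
```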
70. RAID-0
• Data are striped across all disks
• Offers performance
• No redundancy
• 2 disks minimum; the maximum depends on the RAID controller
• Data are split according to the stripe size (16/32/64/128 KB)
71. RAID-1
• Data are mirrored (duplicated) on a second hard disk
• Offers redundancy
• The equivalent of one disk's space is lost to redundancy
• Only on 2 disks
• Supports one disk failure
72. RAID-5
• Stripes data and parity to generate redundancy: both parity and data are striped across a set of separate disks
• The parity is distributed across the stripes of the disk array
• Data chunks are much larger than the average I/O size, but are still resizable
• Disks are able to satisfy requests independently
73. RAID-6
• Data is striped across all disks (minimum of four)
• Two parity blocks per data stripe (p and q in the diagram) are written on the same stripe
• If one physical disk fails, the data from the failed disk can be rebuilt onto a replacement disk
• Provides for faster rebuilding of data from a failed disk
74. RAID 10
• Uses RAID-1 mirroring and RAID-0 striping, giving both security and sequential performance
• It is a striped RAID-0 array whose segments are mirrored (RAID-1)
• Similar in performance to RAID 0+1, but with better fault tolerance and rebuild performance
• Has the same fault tolerance as RAID-1, with the same overhead for fault tolerance as mirroring alone
75. RAID penalty
• Determining which type of RAID to use when building a storage solution largely depends on two things: capacity and performance.
• We measure disk performance in IOPS, or input/output operations per second. One read request or one write request = 1 I/O. Each disk in your storage system can provide a certain number of IOPS based on its rotational speed, average latency, and average seek time.
76. RAID Performance Comparison
• Random performance is dictated by the overhead on writes
• RAID-5 and RAID-6 give the best capacity usage and failure protection
– The write penalty is 4× for RAID-5 and 6× for RAID-6
– You need to consider this when creating your arrays
• NL-SAS, being “consumer” grade, is the drive class most liable to failure
– It typically needs RAID-6
– Catch-22: the worst-performing drive needs the RAID level with the worst overhead!
80. Terminology
• Block: leverages Small Computer System Interface (SCSI) commands to read/write specific blocks
– Common SCSI access methods include Fibre Channel (FC), Serial Attached SCSI (SAS), Internet Small Computer System Interface (iSCSI), Fibre Channel over Ethernet (FCoE) and InfiniBand (IB)
• NAS: reads/writes files
• File server: a storage server dedicated (primarily) to serving file-based workloads
• NAS gateway: a server that provides network-based storage virtualization
– Provides protocol translation from host-based CIFS/NFS to Storage Area Network (SAN) based block
– Examples: NetApp V Series; EMC VNX/Celerra; OnStor (LSI); HP P4000 Unified Gateway; IBM SONAS
• Unified storage: a single logical, centrally managed storage platform that serves both block (FC, iSCSI, FCoE, SAS, IB) and file-based (CIFS, NFS, HTTP, FTP, WebDAV, etc.) workloads
– Examples: EMC VNX & VNXe; NetApp FAS series; IBM Storwize V7000 Unified