Storage

Storage Concepts
IP Storage: iSCSI and NAS/NFS
Fibre Channel SAN Storage
VMFS Datastores
Storage




Storage – virtual disks & VMFS
Storage Area Networks (SAN)
Understanding FC & iSCSI Storage
Why you need a SAN
Storage Terms you must know
What is in a Datastore?
ESX Server Storage Options
VMFS Specs and Maxs
Types of Storage


Local (SCSI/SATA/SAS/IDE)

SAN (Fibre Channel & iSCSI)

NAS (NFS & CIFS)
Why do you need Storage?

For ESX to boot

For Virtual Machines

Centralized Storage is required for advanced features of vSphere like
VMotion, VMHA, FT, and DRS
ESX / ESXi

If we choose to install ESX/ESXi, the server can be installed on the local
disk of the physical machine or on the SAN (boot from SAN, i.e. remote boot).
VMware ESX supports boot from SAN. Booting from SAN requires one dedicated
LUN per server.

VMware ESXi (4.1 only) may be booted from SAN. This is supported for Fibre
Channel SAN, as well as iSCSI and FCoE for certain storage adapters that have
been qualified for this capability (refer to the HCL for supported storage
adapters).

ESXi comes in two versions:

Embedded: ESXi ships embedded in the server hardware on a flash ROM; such
servers are typically supplied by the hardware vendor.
Installable/ISO: ESXi is also available as an installable package or ISO image.
Space Requirements to Install ESX

vmkcore partition              110 MB
/boot partition                1.1 GB
/ (root) partition             5 GB
/var/log partition             2 GB
swap partition                 600 MB to 1.6 GB

The vmkcore partition is used to store the core dumps generated by the
VMkernel; the /var/core directory also holds core dumps.

Optionally, you can create a scratch partition (vFAT, 4 GB). ESXi can boot
locally from a USB device; since the USB drive may not have 4 GB to spare, you
might skip the scratch partition.

When there is no scratch partition, the ESXi kernel reserves an additional
512 MB of memory for itself by default: the VMkernel uses 154 MB of memory,
plus 512 MB more when no scratch partition exists.

The scratch partition serves as swap space for the VMkernel.

On ESX servers, swap is used by the Service Console and is sized at 1.5 to 2
times its memory. The Service Console uses a minimum of 300 MB of memory by
default, so the minimum swap partition is 600 MB; it can use up to 800 MB of
memory, so swap should be allocated at around 1.6 GB.
Space Requirements to Install ESX


In all, you need at most around 9.8 GB of disk space to install ESX Server.

ESX Server supports up to 1 TB of memory.

ESXi, by contrast, has a small footprint: it can be installed in roughly
110-120 MB.

ESXi can be installed on SCSI, SATA, IDE, and SAN (ESXi 4.1). ESXi cannot
boot from NFS or CIFS.

ESX can be booted from Fibre Channel SAN. The first boot device is the FC HBA
through which you are booting; the path between that HBA and the target
storage processor must be an active path (not a passive one), and the HBA
must be able to recognize the boot LUN.
What are all the ways to provide storage to the virtual machines?

When creating a virtual machine, you have these options: create a new virtual
disk, use an existing virtual disk, use an RDM, or attach no disk. What is the
difference between a virtual disk and a raw disk?

For an operating system, what is considered raw?

A disk without a file system - without a file system the operating system
understands - is considered raw.

Block Size
When formatting a datastore, we have to define a block size. A block is the
minimum unit of space a file occupies, and it is defined when the file system
is created.

For example, with a block size of 8 KB, a 1 KB file occupies 8 KB. Similarly,
an 18 KB file occupies 3 blocks (24 KB). It is important to note that a block
can be occupied by only a single file: if a file fills half a block, the
remaining free space in that block is not shared with another file (a short
worked sketch follows).

If the file system is formatted with an 8 MB block size, files can grow to
2 TB. This space is used for creating virtual machines.
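A minimal worked sketch of the rounding (shell arithmetic; the 18 KB file and
8 KB block are the values from the example above):

    file_kb=18; block_kb=8
    blocks=$(( (file_kb + block_kb - 1) / block_kb ))   # round up to whole blocks -> 3
    echo "${file_kb} KB file occupies ${blocks} blocks = $(( blocks * block_kb )) KB"   # 24 KB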
What is a Datastore?
A datastore is a logical storage unit that can use disk space on one physical device or
one disk partition, or span several physical devices.

Types of datastores:

                    1. VMFS
                    2. Network File System (NFS)

Datastores are used to hold virtual machines, templates, and ISO images. A VMFS
datastore can also hold a raw device mapping (RDM), which is used to access
raw data.

VMFS Datastore:

     It can recognize LUNs of only up to 2 TB

     A VMFS datastore can span multiple LUNs (extents), with a maximum of 32
     LUNs, meaning a single datastore can be up to 64 TB (this is not good
     practice, though)
     Allows concurrent access to shared storage
     Can be dynamically expanded
     Can use an 8 MB block size, good for storing large virtual disk files
     Provides on-disk, block-level locking

     You can format a local disk, a SAN LUN, or an iSCSI LUN to create a
     datastore (a vmkfstools sketch follows below)
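As a hedged illustration (the device path, partition number, and datastore
label are assumptions, not values from this deck), a VMFS3 datastore can be
created from the service console with vmkfstools:

    # Create a VMFS3 datastore with an 8 MB block size on an assumed device/partition
    vmkfstools -C vmfs3 -b 8m -S MyDatastore /vmfs/devices/disks/naa.60a98000486e2f34:1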
VMFS can be formatted with different block sizes, defined when the datastore
is created. For example, a 2 TB disk formatted with:

1 MB block size - maximum file size of 256 GB
2 MB block size - maximum file size of 512 GB
4 MB block size - maximum file size of 1 TB
8 MB block size - maximum file size of 2 TB

At minimum, even a small file of, say, 1 KB occupies a single full block: with
a 1 MB block size it consumes 1 MB although it holds only 1 KB of data. You
cannot store more than a single file in a block.
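The mapping is linear - each doubling of the block size doubles the maximum
file size (a factor of 262,144 blocks per file). A one-line sketch reproducing
the table above:

    for bs in 1 2 4 8; do echo "${bs} MB block size -> $(( bs * 256 )) GB max file size"; done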

Block size and vmdk size limitation

Note: When creating a VMFS datastore on VMware ESX servers, many
administrators select the default 1 MB block size without knowing when or why
to change it. The block size determines the minimum amount of disk space any
file takes up on a VMFS datastore. So an 18 KB log file actually takes up 1 MB
of disk space (1 block), and a 1.3 MB file takes up 2 MB of disk space
(2 blocks). The block size also determines the maximum size any file can be:
if you select a 1 MB block size on your datastore, the maximum file size is
limited to 256 GB, so when you create a VM you cannot assign it a single
virtual disk greater than 256 GB.
There is also no way to change the block size after you set it without
deleting the datastore and re-creating it, which wipes out any data on the
datastore.

Because of this you should choose your block size carefully when creating
VMFS datastores. VMFS datastores mainly contain larger virtual disk files, so
increasing the block size will not use much more disk space than the default
1 MB size.

Block size and performance

Besides having smaller files use slightly more disk space on your datastore,
there are no other downsides to using larger block sizes; there is no
noticeable I/O performance difference with a larger block size. When you
create your datastore, choose your block size carefully. 1 MB should be fine
if you have a smaller datastore (less than 500 GB) and never plan on using
virtual disks greater than 256 GB. If you have a medium datastore
(500 GB - 1 TB) and there is a chance you may need a VM with a larger disk,
go with a 2 MB or 4 MB block size. For larger datastores (1 TB - 2 TB), go
with a 4 MB or 8 MB block size. In most cases you will not be creating
virtual disks equal to the maximum size of your datastore (2 TB), so you will
usually not need an 8 MB block size.
RDM

RDM, or Raw Device Mapping, is a method of presenting a raw LUN to a virtual
machine
RDMs provide a way for virtual machines to access disks directly, bypassing
the virtualization layer
RDMs are used for cluster applications such as MSCS (Microsoft Cluster
Service) when creating a cluster between a physical and a virtual machine

RDM mappings are supported for the following devices (a vmkfstools sketch
follows the list):

SCSI
SATA
Fibre Channel
iSCSI
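A hedged sketch (the device path and file names are assumptions): vmkfstools
creates the RDM as a mapping file on a VMFS volume, with -r for virtual
compatibility mode and -z for physical compatibility mode (the mode typically
used for MSCS physical-to-virtual clusters):

    # Virtual compatibility mode RDM (allows VMware snapshots)
    vmkfstools -r /vmfs/devices/disks/naa.60a98000486e2f34 /vmfs/volumes/datastore1/myvm/myvm_rdm.vmdk
    # Physical compatibility mode RDM (passes SCSI commands through to the LUN)
    vmkfstools -z /vmfs/devices/disks/naa.60a98000486e2f34 /vmfs/volumes/datastore1/myvm/myvm_rdm.vmdk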
What Files Make Up a Virtual Machine?
VM Files



VMX file - the size of these files is in the KB range
Log files - cannot grow beyond a few MB
The .vmxf file, the snapshot description (.vmsd) file, the NVRAM file, and
the .vmdk and -flat.vmdk disk files

What is the difference between the .vmdk and the -flat.vmdk?

The .vmdk is the descriptor of the virtual disk, and the -flat.vmdk is the
actual disk for that particular virtual machine - a file acting as a disk for
the VM (see the listing sketch below).

When we create or provide a virtual disk for a virtual machine, it has to be
kept on a VMFS volume/datastore. In conclusion, this means that when we format
a VMFS volume with a 1 MB block size, the maximum virtual disk we can create
for a virtual machine is 256 GB, and so on.
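A hedged illustration (the directory name and sizes are assumptions): listing
a VM's folder on a VMFS datastore shows the tiny descriptor next to the large
flat file:

    ls -lh /vmfs/volumes/datastore1/myvm/
    # myvm.vmdk        ~1 KB   text descriptor: geometry, disk type, pointer to the flat extent
    # myvm-flat.vmdk   20 GB   the actual disk blocks presented to the guest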
SAN (Storage Area Network)
[Slide diagram: two ESX/ESXi hosts, each with a two-port HBA (ports 0 and 1),
connected through an FC switch to a storage processor (HBA) fronting the SAN
disks]
Interconnecting multiple nodes using an FC switch is called a fabric
SAN (Storage Area Network)

    Depending on the requirements, the SAN administrator creates a hardware
    RAID set and then creates a LUN on it
    LUNs are identified by their ID, for example 0, 1, 2, etc.
    LUN IDs can be generated dynamically or assigned statically
    From the ESX Server side, HBAs are recognized by their WWN (World Wide
    Name), similar to the MAC address of an Ethernet controller
    A WWN is a 64-bit hexadecimal value assigned to the HBA by the vendor
    The ESX admin needs to provide the WWNs to the storage administrator
    An ESX Server can recognize up to 8 HBAs and up to 16 paths per LUN
    A single ESX Server supports a maximum of 1024 paths in total
    (a sketch for listing HBAs and paths follows below)
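A hedged sketch (ESX 4.x service console commands; output not shown here) for
listing the HBAs and the paths to each LUN:

    esxcfg-scsidevs -a   # list HBAs (vmhba0, vmhba1, ...) with driver and identifier
    esxcfg-mpath -l      # list every path to every LUN, with its state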
SAN (Storage Area Network)
[Slide diagram: the same two-host fabric; the four HBA ports are identified by
WWN1-WWN4, and the storage group on the array is configured with WWN1, WWN2,
WWN3, WWN4]
 Since WWNs are 64-bit hexadecimal numbers and are difficult to remember, the
 storage admin creates an alias for each WWN, giving it a friendly name - for
 example, ESX1 for the first host and ESX2 for the second
LUN Mapping & LUN Masking is done at the Storage End
[Slide diagram: the same two-host fabric, with WWN1-WWN4 registered in the
storage group behind the storage processor]
 Depending on the storage make, a single LUN can be presented to up to 128 nodes
Identifying HBAs

      HBAs are identified as vmhba0, vmhba1, and so on
      Each HBA has a controller, which is always controller 0
[Slide diagram: one ESX/ESXi host with vmhba0 and vmhba1 (each controller 0)
connected through an FC switch to a storage processor with targets T1 and T2,
fronting LUN1 in the storage group]
      When you can access a LUN through multiple paths, you have multipathing.
      Multipathing provides continuous access to a LUN in case of a hardware failure
Multipathing Policies
 ESX / ESXi supports multipathing policies
 NMP - Native Multipathing, which consists of:
     Fixed - provides only fault tolerance
     Most Recently Used (MRU) - provides only fault tolerance
     Round Robin - provides both fault tolerance and load balancing
     With Fixed and MRU, one path becomes the active path and the other
     becomes a passive path, used only if the HBA currently in use fails,
     at which point I/O switches to the second HBA
 With Fixed, when a failed HBA recovers it becomes active again, changing the
 state of the secondary HBA from active back to passive
 With MRU, when a failed HBA recovers it stays passive, since the most
 recently used path is the one through the secondary HBA
 With Round Robin, both HBAs are active/active
 Storage vendors may have their own multipathing policies that ESX Server does
 not recognize out of the box, so check with the vendor before buying the
 storage; vendors may provide their multipathing as a plugin to be installed
 (an esxcli sketch for setting the policy follows below)
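A hedged sketch for vSphere 4.x (the device identifier is an assumption): the
path selection policy can be inspected and changed per device with esxcli:

    esxcli nmp device list                 # show the current policy for each device
    esxcli nmp device setpolicy --device naa.60a98000486e2f34 --psp VMW_PSP_RR   # switch to Round Robin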
Fibre Switch

 For security, a storage admin can configure zones on the FC switch.
 Zones are of 2 types:
 Soft Zone
 Hard Zone

[Slide diagram: one ESX/ESXi host connected through the zoned FC switch to a
storage processor (HBA) with targets T1 and T2, fronting LUN1 in the storage
group]
Zoning
 Hard Zone
 Configured on the ports of the FC switch.
 If a cable is unplugged from a zone port and attached to another port outside
 the zone, the LUNs are lost; the port then has to be reconfigured into the
 hard zone.
 But if the HBA is replaced and its new WWN is reconfigured on the storage, no
 changes are needed on the FC switch.


 Soft Zones
 Configured using WWNs.
 Changing ports does not affect access to LUNs.
 If an HBA is replaced, the soft zones need to be reconfigured.
What is the difference between Fibre Channel and iSCSI?
Both are SANs

     •   iSCSI uses an IP-based connection
     •   Fibre Channel uses a Fibre Channel HBA

For Fibre Channel, the storage is connected to an FC switch, which is
connected to an FC HBA on the ESX host.

For iSCSI, the storage is connected to an Ethernet network; you can use a
hardware or software initiator.

What is an initiator?

An initiator is similar to an HBA. A hardware initiator is comparable to a
Fibre Channel HBA: where the FC HBA has a Fibre Channel port, the iSCSI
initiator has an Ethernet port. A hardware initiator has a controller, whose
role is to encapsulate the SCSI protocol into IP for transport from one end
to the other. Alternatively, a plain NIC can be used, which also communicates
over Ethernet, but a plain NIC cannot encapsulate SCSI into IP; in that case
you have to use a software initiator. A software initiator consumes CPU
cycles, since the NIC does not have a controller the way a hardware initiator
does.
Understanding iSCSI Storage


iSCSI (Internet SCSI) sends SCSI disk commands and data over a TCP/IP network

Why use it?

1. Low cost
2. Uses existing hardware - Ethernet NICs, switches, and OS features
3. Supports almost all vSphere features
Understanding iSCSI Storage




Downsides - potentially lower performance and reliability

iSCSI Terms:

• iSCSI hardware initiator - a dedicated iSCSI HBA (a specialized NIC)
• iSCSI software initiator - your own NIC plus the OS's iSCSI software
• iSCSI target - the storage server presenting the iSCSI LUNs
[Slide diagram: an ESX/ESXi host, whose adapter can be a NIC or an iSCSI HBA,
connected over a TCP/IP network through an Ethernet switch to the storage
processor (SP) of the disk arrays]
iSCSI uses the TCP/IP protocol and the IQN (iSCSI Qualified Name) naming
convention
Understanding iSCSI Storage



iSCSI uses IQNs (iSCSI Qualified Names) to identify iSCSI targets and
initiators

An IQN is laid out in this format:

    • the literal prefix iqn
    • a date in year-month format
    • the reversed domain of the iSCSI provider, for example qlogic for a
      hardware initiator, or Microsoft if the iSCSI software came from them
    • a unique organization-assigned name (i.e. a hostname)
    • For example: iqn.2009-10.com.hpesindia:iscsi1
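Breaking the example apart (values taken from the slide above):

    # iqn . 2009-10 . com.hpesindia : iscsi1
    # ^^^   ^^^^^^^   ^^^^^^^^^^^^^   ^^^^^^
    # type  date      reversed        unique
    #       (yr-mo)   domain          name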
Understanding iSCSI Storage

Configuring iSCSI

Hardware Initiator (HBA)
Log in to the iSCSI storage, then reboot the host.
Enter the system BIOS and then the HBA's BIOS to configure the HBA's IP
settings.

Software Initiator (ESX Server only)

By default the iscsid daemon is disabled
The iSCSI port 3260 is blocked by the firewall
iSCSI requires a VMkernel connection type, and ESX by default does not have a
VMkernel port (see the sketch below)

On ESXi everything is configured by default; all an admin has to do is enable
the software iSCSI initiator
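A hedged sketch for ESX classic (ESX 3.x/4.x service console; the firewall
service name is the one documented for the software iSCSI client):

    esxcfg-firewall -e swISCSIClient   # open the firewall for iSCSI traffic (port 3260)
    esxcfg-swiscsi -e                  # enable the software iSCSI initiator (starts iscsid)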
Understanding iSCSI Storage

Configuring iSCSI

iSCSI uses two discovery methods to connect to iSCSI storage:

Static - manually enter the IP address and the IQN, telling the ESX host
which iSCSI storage to connect to

Send Targets (Dynamic)
Send Target (Dynamic)

[Slide diagram: an ESX/ESXi host and a Win2k system running the name service,
both on a TCP/IP network connected through an Ethernet switch to the storage
processor (SP) of the disk arrays]

For dynamic discovery you need an additional system running special software
such as iNS (iSCSI Name Server software, also known as iSNS). It resolves
IQNs just as a DNS server resolves host names to IP addresses: it holds a
database of all IQNs, and at the ESX Server end the admin enters the IP
address of the name server (a discovery-address sketch follows below).
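A hedged sketch (the adapter name and discovery address are assumptions): a
Send Targets discovery address can be added to the software initiator from
the service console:

    vmkiscsi-tool -D -a 192.168.10.20 vmhba33   # add a Send Targets discovery address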
Send Target (Dynamic)
LUN mapping and LUN masking can be done at the iSCSI storage system


This is done in one of two ways:


by IP address
or
by IQN
NFS
NFS is supported by ESX/ESXi
CIFS is not supported at all
By default, 8 NFS volumes can be mounted


This figure can be changed, and up to 64 NFS volumes can be mounted on a
single ESX box (see the sketch below)


In this way, ESX supports 256 LUNs/disks plus 64 NFS mounts, which makes a
total of 320 datastores
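A hedged sketch (ESX service console; NFS.MaxVolumes is the documented
advanced option) for raising the mount limit from 8 to 64:

    esxcfg-advcfg -s 64 /NFS/MaxVolumes   # allow up to 64 NFS mounts on this host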
NFS
[Slide diagram: an ESX/ESXi host mounting /mnt/nfs over a TCP/IP network
through an Ethernet switch from a Unix/Linux server that exports /data
(rw, no_root_squash)]
NFS provides file-level access
On the server it is configured in /etc/exports, after which the exportfs
command is executed
On the ESX/ESXi side, the admin needs to know the IP address of the NFS
server and the mount point (an end-to-end sketch follows below)
NFS also uses a VMkernel port
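A hedged end-to-end sketch (host addresses, paths, and the datastore label
are assumptions; note the export option is spelled no_root_squash):

    # On the Unix/Linux NFS server - /etc/exports contains:
    #   /data  *(rw,no_root_squash)
    exportfs -a                                   # publish the exports
    # On the ESX/ESXi host, mount the export as a datastore:
    esxcfg-nas -a -o 192.168.10.30 -s /data nfs_datastore1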
