WHITE PAPER


Comparison of Storage Protocol Performance in VMware vSphere™ 4




    Table of Contents

    Introduction
    Executive Summary
    Experimental Setup
    I/O Workload
    Experiment: Single VM: Throughput (Read and Write)
    Experiment: Single VM: CPU Cost Per I/O (Read)
    Experiment: Multiple VMs: Aggregate Throughput (Read)
    Conclusion








Introduction
This paper compares the performance of various storage protocols available on VMware vSphere™ 4. The protocols Fibre Channel,
Hardware iSCSI, Software iSCSI, and NFS are tested using virtual machines on an ESX 4.0 host. Iometer is used to generate the I/O workload.
The Fibre Channel experiments were conducted over a 4Gb Fibre Channel network. The Hardware iSCSI, Software iSCSI, and NFS
experiments were conducted over a Gigabit Ethernet connection. Experiments over 10Gb Ethernet and 8Gb Fibre Channel will be
included in a future update to this paper.
This paper first describes the key aspects of the test environment: the ESX host, storage array, virtual machines, and Iometer workload.
Next, performance results for throughput and CPU cost are presented from experiments involving one or more virtual machines.
Finally, the key findings of the experiments are summarized.
The terms “storage server” and “storage array” will be used interchangeably in this paper.

Executive Summary
The experiments in this paper show that each of the four storage protocols (Fibre Channel, Hardware iSCSI, Software iSCSI, and NFS)
can achieve line-rate throughput for both a single virtual machine and multiple virtual machines on an ESX host. These experiments
also show that Fibre Channel and Hardware iSCSI have substantially lower CPU cost than Software iSCSI and NFS.

Experimental Setup
The table below shows key aspects of the test environment for the ESX host, the storage array, and the virtual machine.

ESX Host

 Component                                         Details
 Hypervisor                                        VMware ESX 4.0
 Processors                                        Four Intel Xeon E7340 quad-core 2.4GHz processors
 Memory                                            32GB
 Fibre Channel HBA                                 QLogic QLA2432 4Gb
 Fibre Channel network                             4Gb FC switch
 NIC for NFS and SW iSCSI                          1Gb (Intel 82571EB)
 MTU for NFS, SW iSCSI, HW iSCSI                   1500 bytes
 iSCSI HBA                                         QLogic QL4062c 1Gb (Firmware: 3.0.1.49)
 IP network for NFS and SW/HW iSCSI                1Gb Ethernet with dedicated switch and VLAN (Extreme Summit 400-48t)
 File system for NFS                               Native file system on NFS server
 File system for FC and SW/HW iSCSI                None (RDM-physical was used)


Storage Array

 Component                                         Details
 Storage server                                    One server supporting FC, iSCSI, and NFS
 Disk Drives: Number per data LUN                  9
 Disk Drives: Size                                 300GB
 Disk Drives: Speed                                15K RPM
 Disk Drives: Type                                 Fibre Channel







    Virtual Machine

     Component                                          Details
     Guest OS                                           Windows Server 2008 Enterprise SP1
     Virtual processors                                 1
     Memory                                             512MB
     Virtual disk for data                              100MB mapped raw LUN (RDM-physical)
     File system                                        None (physical drives were used)
     SCSI controller                                    LSI Logic Parallel


    VMware’s VMFS file system is recommended for production deployments of virtual machines on iSCSI and Fibre Channel arrays.
    Because NFS storage presents files rather than blocks, VMFS is neither needed nor possible there. To produce results that could
    be compared across all protocols, VMFS was therefore not used in the Fibre Channel and iSCSI experiments.

    I/O Workload
    Iometer (http://sourceforge.net/projects/iometer) was used to generate the I/O workload for these experiments. Iometer is a free
    storage performance testing tool that can be configured to measure throughput and latency under a wide variety of access profiles.

    Iometer Workload

     Component                                          Details
     Number of outstanding I/Os                         16
     Run time                                           2 min
     Ramp-up time                                       2 min
     Number of workers                                  1 (per VM)

    Each virtual (data) disk of the virtual machines used in these experiments is 100MB in size. The small size of these virtual disks ensures
    that the I/O working set will fit into the cache of the storage array. An experiment with a working set size that fits into the cache of
    the storage array is commonly referred to as a cached run.
    For read operations in a cached run experiment, the data is served from the storage array’s cache, and read performance is independent
    of disk latencies.
    For write operations in a cached run experiment, the rate of write requests at the storage array may exceed the storage array’s rate of
    writing the dirty blocks from the write cache to disk. If this happens, the write cache will eventually fill up. Once the write cache is full,
    write performance is limited by the rate at which dirty blocks in the write cache are written to disk. This rate is limited by the latency
    of the disks in the storage array, the RAID configuration, and the number of disk spindles used for the LUN.
    For these reasons, read performance for cached runs is a better indication of the true performance of a storage protocol on the ESX
    host, irrespective of the storage array used.
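
    As a quick check on the cached-run assumption, the Python sketch below verifies that even the largest configuration tested
    (32 virtual machines, each with a 100MB data disk) keeps the aggregate working set well below a typical array cache. The cache
    size used here is an assumption for illustration only; this paper does not state the cache size of the storage server.

        # Back-of-the-envelope check that the I/O working set fits in the array cache.
        # The assumed cache size is illustrative; this paper does not state the cache
        # size of the storage server used in these experiments.
        MB = 1024 ** 2
        GB = 1024 ** 3

        disk_size_bytes = 100 * MB        # per-VM data disk, from the test setup
        max_vms = 32                      # largest multi-VM experiment
        assumed_cache_bytes = 4 * GB      # hypothetical array cache size

        working_set = disk_size_bytes * max_vms
        print("Aggregate working set: %.2f GB" % (working_set / GB))
        print("Fits in assumed cache:", working_set <= assumed_cache_bytes)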








Experiment: Single VM: Throughput (Read and Write)
Figure 1 shows the sequential read throughput (in MB/sec) of running a single virtual machine in the standard workload configuration
for different I/O block sizes, for each of the storage protocols.

Figure 1: Read throughput for different I/O block sizes




[Figure 1 plots throughput in MB/sec (0 to 450) against I/O block sizes from 1KB to 512KB, with one series each for NFS, SW iSCSI, HW iSCSI, and FC.]




For Fibre Channel, read throughput is limited by the bandwidth of the 4Gb Fibre Channel link for I/O sizes at or above 64KB. For IP-based
protocols, read throughput is limited by the bandwidth of the 1Gb Ethernet link for I/O sizes at or above 32KB.
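
For reference, the approximate ceilings behind these plateaus can be computed directly. The Python sketch below is a rough model
only: it uses the nominal 400 MB/sec payload rate of 4Gb Fibre Channel and the raw 125 MB/sec rate of Gigabit Ethernet, and it
ignores protocol framing overhead, which lowers the achievable figures slightly.

    # Approximate usable-bandwidth ceilings for the links used in these experiments.
    # 4Gb Fibre Channel is specified to carry roughly 400 MB/sec of payload per
    # direction; Gigabit Ethernet carries at most 125 MB/sec before TCP/IP and
    # iSCSI/NFS protocol overhead is subtracted.
    FC_4GB_MB_PER_SEC = 400

    def ethernet_payload_mb_per_sec(gigabits_per_sec):
        """Nominal Ethernet payload rate in MB/sec (10^6 bytes), ignoring framing."""
        return gigabits_per_sec * 1000.0 / 8.0

    print("4Gb Fibre Channel ceiling: ~%d MB/sec" % FC_4GB_MB_PER_SEC)
    print("1Gb Ethernet ceiling:      ~%d MB/sec" % ethernet_payload_mb_per_sec(1))
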
Figure 2 shows the sequential write throughput (in MB/sec) of running a single virtual machine in the standard workload configuration
for different I/O block sizes, for each of the storage protocols.

Figure 2: Write throughput for different I/O block sizes




[Figure 2 plots throughput in MB/sec (0 to 450) against I/O block sizes from 1KB to 512KB, with one series each for NFS, SW iSCSI, HW iSCSI, and FC.]






    For Fibre Channel, the maximum write throughput for any I/O block size is consistently lower than read throughput of the same I/O
    block size. This is the result of disk write bandwidth limitations on the storage array. For the IP-based protocols, write throughput for
    block sizes at or above 16KB is limited by the bandwidth of the 1Gb Ethernet link.
    To summarize, a single Iometer thread running in a virtual machine can saturate the bandwidth of the respective networks for all four
    storage protocols, for both read and write. Fibre Channel throughput performance is higher because of the higher bandwidth of the
    Fibre Channel link. For the IP-based protocols, there is no significant throughput difference for most block sizes.

    Experiment: Single VM: CPU Cost Per I/O (Read)
    CPU cost is a measure of the amount of CPU resources used by ESX to perform a given amount of I/O. In this paper, the CPU cost of
    each storage protocol is measured in units of CPU cycles per I/O operation. The cost for different storage protocols is normalized with
    respect to the cost of software iSCSI on ESX 3.5.
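
    The normalization itself is simple arithmetic. The Python sketch below illustrates how a cycles-per-I/O figure could be derived
    from host CPU utilization and I/O rate and then expressed relative to a baseline; the utilization and IOPS values in it are made
    up purely for illustration and are not measurements from these experiments.

        # Illustrative (not measured) derivation of relative CPU cost per I/O:
        # cycles per I/O = CPU utilization * total cycles available per second / IOPS,
        # normalized against a chosen baseline protocol. All numbers below are
        # hypothetical examples, not results from this paper.
        TOTAL_CYCLES_PER_SEC = 16 * 2.4e9     # 16 cores at 2.4GHz, as in the test host

        def cycles_per_io(cpu_utilization, iops):
            """CPU cycles consumed per I/O at a given utilization and I/O rate."""
            return cpu_utilization * TOTAL_CYCLES_PER_SEC / iops

        baseline = cycles_per_io(cpu_utilization=0.20, iops=1800)    # e.g. SW iSCSI, ESX 3.5
        candidate = cycles_per_io(cpu_utilization=0.06, iops=1800)   # e.g. HW iSCSI, ESX 4.0

        print("Relative CPU cost per I/O: %.2f" % (candidate / baseline))
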
    Figure 3 shows the relative CPU cost of sequential reads in a single virtual machine in the standard workload configuration for a block
    size of 64 KB for each of the storage protocols. Results on ESX 4.0 are shown next to ESX 3.5 to highlight efficiency improvements on
    all protocols. The CPU cost of write operations for different storage protocols was not compared as write performance is strongly
    dependent on the choice of the storage array.

    Figure 3: Relative CPU cost of 64 KB sequential reads in a single virtual machine




    [Figure 3 plots relative CPU cost per I/O (0 to 1.2) for NFS, SW iSCSI, HW iSCSI, and FC, with ESX 3.5 and ESX 4.0 results shown side by side.]




    For Fibre Channel and Hardware iSCSI, a major part of the protocol processing is offloaded to the HBA, and consequently the cost
    of each I/O is very low. For Software iSCSI and NFS, host CPUs are used for protocol processing, which increases cost. Furthermore,
    the cost of NFS and Software iSCSI is higher with larger block sizes, such as 64 KB, because of the additional CPU cycles needed
    for each block for checksumming, blocking, and related work. Software iSCSI and NFS are more efficient at smaller block sizes and
    are both capable of delivering high throughput when CPU resources are not a bottleneck, as will be shown in the next section.
    The cost per I/O is dependent on a variety of test parameters, such as platform architecture, block size, and other factors.
    However, these tests demonstrate improved efficiency in vSphere 4’s storage stack over the previous version.








Experiment: Multiple VMs: Aggregate Throughput (Read)
Figure 4 shows the aggregate sequential read throughput (in MB/sec) of running 2, 4, 8, 16, and 32 virtual machines in the standard
workload configuration for a block size of 64KB. Each virtual machine performs I/O to its dedicated 100MB LUN.

Figure 4: Throughput of running multiple VMs in the standard workload configuration




[Figure 4 plots aggregate throughput in MB/sec (0 to 450) against the number of VMs (2, 4, 8, 16, 32), with one series each for NFS, SW iSCSI, HW iSCSI, and FC.]




For each of the storage protocols, the maximum aggregate throughput is limited by the network bandwidth. There is no degradation
in aggregate throughput for a large number of virtual machines.
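
A minimal model makes the same point: once the per-VM demand reaches the link ceiling, adding virtual machines cannot raise
aggregate throughput, and the measurements show that it does not lower it either. The Python sketch below uses rounded, assumed
rates rather than the measured values from these experiments.

    # Minimal model of the expected aggregate ceiling in the multi-VM experiment:
    # aggregate throughput grows with the VM count until the storage link saturates.
    # The per-VM rate and the 125 MB/sec IP ceiling are rounded assumptions.
    LINK_CEILING_MB_PER_SEC = {"FC": 400, "HW iSCSI": 125, "SW iSCSI": 125, "NFS": 125}
    PER_VM_64KB_READ_MB_PER_SEC = 120     # hypothetical single-VM rate at 64KB reads

    def expected_aggregate(protocol, num_vms):
        """Aggregate read throughput once per-VM demand meets the link ceiling."""
        return min(num_vms * PER_VM_64KB_READ_MB_PER_SEC,
                   LINK_CEILING_MB_PER_SEC[protocol])

    for vms in (2, 4, 8, 16, 32):
        print(vms, {p: expected_aggregate(p, vms) for p in LINK_CEILING_MB_PER_SEC})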

Conclusion
In this paper, the performance of four storage protocols was compared for accessing shared storage available on VMware ESX 4.0:
Fibre Channel, Hardware iSCSI, Software iSCSI, and NFS.
All four storage protocols for shared storage on ESX are shown to be capable of achieving throughput levels that are only limited by the
capabilities of the storage array and the connection between it and the ESX server. ESX shows excellent scalability by maintaining these
performance levels in cases of heavy consolidation. For CPU cost, Fibre Channel and Hardware iSCSI are more efficient than Software iSCSI
and NFS. However, when CPU resources are not a bottleneck, Software iSCSI and NFS can also be part of a high-performance solution.
Earlier versions of ESX could also achieve throughput levels limited only by the array and the bandwidth to it. VMware vSphere 4
sustains this maximum throughput with greater efficiency: the improved efficiency means the same high levels of performance with
more virtual machines.




VMware, Inc. 3401 Hillview Ave Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com
Copyright © 2009 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents.

VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other
marks and names mentioned herein may be trademarks of their respective companies.

                                                                           VMW_09Q2_WP_VSPHERE_StorageProtocols_P8_R1
