openBench Labs
Data Center Converged SAN Infrastructure




Analysis: Measuring Nexsan Performance and Compatibility in Virtualized Environments

Author: Jack Fegreus, Ph.D.
Managing Director
openBench Labs
http://www.openBench.com
September 15, 2010




             Jack Fegreus is Managing Director of openBench Labs and consults through
          Ridgetop Research. He also contributes to InfoStor, Virtual Strategy Magazine,
          and Open Magazine, and serves as CTO of Strategic Communications.
          Previously he was Editor in Chief of Open Magazine, Data Storage, BackOffice
          CTO, Client/Server Today, and Digital Review. Jack also served as a consultant
          to Demax Software and was IT Director at Riley Stoker Corp. Jack holds a Ph.D.
          in Mathematics and worked on the application of computers to symbolic logic.




Table of Contents

      Executive Summary               04

      VOE Test Scenario               06

      SASBeast Performance Spectrum   13

      Customer Value                  19








Executive Summary
   “For an SME site to successfully implement a VOE, Nexsan provides a storage
infrastructure that is capable of efficiently supporting the characteristic I/O patterns
that distinguish a VOE host server and delivering optimal performance.”
                      The mission of IT is to get the right information to the right people in time in order
                  to create value or mitigate risk. With this in mind, the growing use of digital archiving,
                  rich media in corporate applications, and a Virtual Operating Environment (VOE), such
                  as VMware® vSphere™, is driving double-digit growth in the volume of data stored. That
                  has made data storage the cornerstone of IT strategic plans for reducing capital expense
                  (CapEx) and operating expense (OpEx) resource costs.

openBench Labs Test Briefing: Nexsan SASBeast® Enterprise Storage Array

1) Enhance Administrator Productivity: An embedded WEB-based utility enables the
   management of multiple storage arrays from one interface, which can be integrated
   with the Microsoft Management Console and the Virtual Disk Service to provide a
   complete single-pane-of-glass storage management interface.

2) Maximize Density and Reliability with Hierarchical Storage: A 4U chassis supports
   any mix of 42 SSD, SAS, and SATA drives mounted vertically—front-to-front and
   back-to-back—to cancel rotational vibrations, reduce head positioning errors,
   optimize thermal operations, and extend drive life.

3) Maximize Energy Savings: AutoMAID® (Massive Array of Idle Disks) technology
   automates the placing of drives in a hierarchy of idle states to conserve energy,
   while maintaining near-instantaneous access to data.

4) Maximize I/O Performance: Dual active-active RAID controllers support 42
   simultaneously active drives:
   Iometer Streaming I/O Benchmark: Total full-duplex throughput reached 1GB per
   second, while simultaneously streaming 128KB reads and 128KB writes using three
   SAS- and one SATA-based RAID-5 volumes.
   Iometer I/O Operations Benchmark: 4KB reads and writes (80/20 percent mix)
   averaged 2,330 IOPS on a SAS RAID-5 volume and scaled to 4,380 IOPS with a
   second volume.

   To meet the needs of IT at small to medium enterprise (SME) sites, Nexsan has evolved
its line of SASBeast™ storage arrays around a highly flexible software architecture that
can be integrated to the point of transparency in a Windows Server 2008 R2 environment.
These Nexsan arrays can support a full hierarchy of SSD, SAS, and SATA drives in complex
SAN fabrics that utilize both Fibre Channel and iSCSI paths. For SME sites, a SASBeast can
provide multiple storage targets that support a wide range of application-specific
requirements stemming from Service Level Agreements (SLAs).

   The robust architecture of the Nexsan SASBeast provides IT with a single platform that
can satisfy a wide range of storage metrics with respect to access (IOPS), throughput (MB
per second), or capacity (price per GB). Nonetheless, helping IT contain traditional CapEx
provisioning costs through the deployment of hierarchical storage resources is only the
starting point for the business value proposition of the SASBeast. Through tight integration
of the Nexsan Management Console with the Microsoft Management Console (MMC) and
Virtual Disk Services (VDS), a Nexsan SASBeast presents IT administrators with a unified
SAN management suite to cost-effectively manage the reliability, availability, and
scalability (RAS) of multiple petabytes of data. This is particularly important for OpEx
costs, which typically are 30 to 40 times greater than CapEx costs over the life of a
storage array.

   Nexsan’s simplified storage management and task automation are particularly important
when implementing a VOE, which introduces a complete virtualization scheme involving
servers, storage, and networks. VOE virtualization with multiple levels of abstraction can
complicate important IT administration functions. Reflecting these problems, IDG, in a
recent survey of CIOs implementing server virtualization, reported that the percent of
CIOs citing an increase in the complexity of datacenter management jumped from 47
percent at the end of 2008 to 67 percent at the end of 2009.

    Virtualization difficulties are often exacerbated by multiple incompatible advanced
point solutions, which often come as extra-cost options of storage products. These
powerful proprietary features are particularly problematic for IT at SME sites. Features
designed to resolve complex issues encountered in large datacenters frequently only
introduce incompatibilities among interdependent resources and limit the benefits that
SME sites can garner from a VOE, which independently provides IT with sufficiently
robust and easy-to-use solutions to deal with the intricacies of hypervisor architecture.

    For an SME site to successfully implement a VOE, Nexsan provides a storage
infrastructure that is capable of efficiently supporting the characteristic I/O patterns that
distinguish a VOE host server. With such a foundation in place, IT is free to use the
comprehensive virtualization features of their VOE to provision resources for VMs,
commission and decommission VM applications, and migrate VMs among multiple hosts
in real time to meet changing resource demands. What’s more, advanced third party
applications designed for a VOE are far more likely to recognize VOE solutions for such
issues as thin provisioning than hardware-specific solutions.

    To meet the exacting demands of multiple IT environments, including that of a VOE,
a Nexsan SASBeast provides IT with a storage resource fully optimized for reliability and
performance. Each physical unit features design innovations to extend the lifespan of
installed disk drives. The design of the SASBeast also promotes infrastructure scale-out,
as each additional unit also adds controllers and ports to maintain performance.

    More importantly, the scaling-out of a storage infrastructure with SASBeast units has
a minimal impact on IT overhead, which is the key driver of OpEx costs. Each SASBeast
comes with an embedded WEB-enabled Graphical User Interface (GUI), dubbed
NexScan®, which allows IT to provision a single subsystem with a hierarchy of drive
types. Furthermore, NexScan simplifies administrator tasks in a Windows Server
environment through tight integration of its management software with MMC and VDS
for end-to-end storage management. With NexScan, an administrator can provision a
logical disk on any SASBeast, export it to a Windows Server, and provision the server
with that logical disk from a single interface.






VOE Test Scenario
   “While working with a range of Windows tools for administrators, such as Server
Manager and Storage Manager for SANs, we were able to directly configure and manage
storage LUNs for both the FC and the iSCSI fabrics without having to open up a separate
Nexsan GUI.”
      I/O RANDOMIZATION IN A VOE
          With server virtualization rated as one of the best ways to optimize resource
      utilization and minimize the costs of IT operations, many sites run eight or more server
      VMs on each host in a production VOE. As a result, VOE host servers must be able to
      deliver higher I/O throughput loads via fewer physical connections.

   In stark contrast to a VOE, traditional core-driven SAN fabrics are characterized by a
few storage devices with connections that fan out over multiple physical servers. Each
server generates a modest I/O stream, and multiple servers seldom access the same data
simultaneously. From a business software perspective, the I/O requirements for a VM are
the same as those for a physical server. From an IT administrator’s perspective, however,
I/O requirements for VOE hosts are dramatically different. In a VOE, a small number of
hosts share a small number of large datastores, while the hosts aggregate and randomize
all of the I/O from multiple VMs.

   Elevated I/O stress also impacts the I/O requirements of VOE support servers. In
particular, servers used for backup should be capable of handling the logical disks
associated with multiple hosted VMs in parallel.
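
To make this aggregation effect concrete, the short Python sketch below (purely
illustrative, not part of the openBench Labs test harness) interleaves a dozen strictly
sequential per-VM request streams the way a host storage stack does, and counts how few of
the merged requests remain contiguous on disk.

    import random

    def sequential_stream(start_lba, length, io_size=8):
        """One VM issuing strictly sequential 4KB (8-sector) requests."""
        return [start_lba + i * io_size for i in range(length)]

    # Twelve VMs, each sequential within its own virtual disk region.
    streams = [sequential_stream(vm * 1_000_000, 1000) for vm in range(12)]

    # The host services whichever VM has a request ready; model that by
    # randomly draining the per-VM queues into one merged request stream.
    merged = []
    pending = [list(s) for s in streams]
    while any(pending):
        queue = random.choice([p for p in pending if p])
        merged.append(queue.pop(0))

    # Fraction of back-to-back requests that are NOT contiguous on disk.
    jumps = sum(1 for a, b in zip(merged, merged[1:]) if b != a + 8)
    print(f"non-sequential transitions: {jumps / (len(merged) - 1):.1%}")

In this toy model the chance that two consecutive requests come from the same VM is only
about one in twelve, so roughly 90 percent of adjacent requests land on non-contiguous
blocks, which is why access time, rather than transfer rate, comes to dominate I/O service
in a VOE.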






[Figure: NEXSAN ISCSI & FC CONVERGENCE] Among the ways that Nexsan simplifies SAN
management is the convergence of iSCSI and Fibre Channel devices, which Nexsan treats as
two simple transport options. Within the Nexsan GUI, we readily shared volumes that were
used as VOE datastores by ESXi hosts and a Windows server that was used to run backup
software. More importantly, the SASBeast facilitated our ability to switch between a local
Disaster Recovery (DR) scenario, in which both the VOE host and the Windows server
connected to the datastore volume over the FC fabric, and a remote DR scenario, in which
the Windows server connected to the datastore via our iSCSI fabric.

   At the heart of our vSphere 4.1 VOE test environment, we ran a mix of twelve server and
workstation VMs. To provide a typical SME infrastructure, we utilized a Nexsan SASBeast,
along with a hybrid SAN topology that featured a 4Gbps FC fabric and a 1GbE iSCSI fabric.
With the price convergence of 8Gbps and 4Gbps FC HBAs, SASBeast systems are now shipping
with a new generation of 8Gbps FC ports.

                     While working with a range of Windows tools for administrators, such as Server
                  Manager and Storage Manager for SANs, we were able to directly configure and manage
                  storage LUNs for both the FC and the iSCSI fabrics without having to open up a separate
                  Nexsan GUI. What’s more, the Nexsan software, which treats the virtualization of storage
                  over FC and iSCSI fabrics as simple transport options, made it very easy to switch back
                  and forth between fabric connections.

   While physically setting up the SASBeast, we noted a number of design elements that
enhance reliability. Storage reliability is especially important in a VOE, as impaired
array processing during a rebuild, or, worse, the loss of an array, cascades from one host
server down to multiple VMs.

   To extend the service life of disk drives, the SASBeast positions disks vertically in
opposing order—alternating between front-to-front and then back-to-back. This layout
dampens the natural rotational vibrations generated by each drive. Mounting all of the
drives in parallel tends to amplify these vibrations and induce head positioning errors on
reads and writes. Head positioning errors are particularly problematic in a VOE, which is
characterized by random small-block I/O requests. In such an environment, data access
time plays a greater role relative to transfer time in servicing I/O requests.

   That vertical disk layout also helps create a positive-pressure air flow inside the unit.
High efficiency waste heat transfer in a storage chassis is dependent on molecular
contact as air flows over external drive surfaces. As a result, air pressure is just as
important as air flow for proper cooling.

   To facilitate testing, our Nexsan SASBeast was provisioned with twenty-eight 15K rpm
SAS drives and fourteen 2TB SATA drives. To configure internal arrays and provide
external access to target volumes, our SASBeast was set up with two controllers, which
could be used to create both SAS and SATA arrays and which featured a pair of FC and a
pair of iSCSI ports per controller. Also embedded in the unit was version 1.5.4 of the
Nexsan management software, which we tightly integrated with MMC on all of our
Windows-based servers. Using this storage infrastructure, we were able to provide our
VOE and physical server environments with storage hierarchies designed to meet robust
sets of application-specific SLA metrics.

   In particular, we configured three RAID arrays on the SASBeast: Two arrays utilized
SAS drives and one array utilized SATA drives. For optimal IOPS performance, we
created a 7.2TB RAID-5 SAS array on controller 0. Then on controller 1, we created a
6.6TB RAID-6 SAS array for higher availability.
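
As a rough illustration of how usable capacities like these are derived, the sketch below
applies the standard RAID-5 and RAID-6 capacity formulas. The 600GB drive size and the
13-drive group width are assumptions chosen only because they reproduce the reported
figures; the report itself does not state either value.

    def raid5_usable(drives, drive_tb):
        # RAID-5 reserves one drive's worth of capacity for parity.
        return (drives - 1) * drive_tb

    def raid6_usable(drives, drive_tb):
        # RAID-6 reserves two drives' worth of capacity for dual parity.
        return (drives - 2) * drive_tb

    drive_tb = 0.6   # assumed 600GB 15K SAS drives (not stated in the report)
    drives = 13      # assumed drives per array group (not stated in the report)

    print(f"RAID-5: {raid5_usable(drives, drive_tb):.1f} TB usable")   # ~7.2 TB
    print(f"RAID-6: {raid6_usable(drives, drive_tb):.1f} TB usable")   # ~6.6 TB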

VOE CONSOLIDATION
   We implemented our vSphere™ 4.1 VOE on a quad-processor HP ProLiant® DL580
server running the VMware ESXi™ 4.1 hypervisor. This server hosted 12 VMs running a
mix of operating systems, which included Windows Server® 2008, Windows Server 2003,
SUSE Linux Enterprise Server 11, and Windows 7. Within our VOE, we set up a storage
hierarchy that was undergirded by three central datastores, one created on each of the
three arrays set up on the Nexsan SASBeast.

   To manage VM backups, we installed Veeam Backup & Replication v4.1 on a quad-
core Dell® PowerEdge® 1900 server, which ran Windows Server 2008 R2 and shared
access to each datastore mapped to the VOE host. We tested datastore access over both
our FC fabric, which represented a local DR scenario, and over our iSCSI fabric, which
represented a remote DR scenario. In addition, we mapped another RAID-5 SATA
volume to the Dell PowerEdge server to store backup images of VMs.

   The number of VMs that typically run on a VOE host, along with the automated movement
of those VMs among hosts as a means to balance processing loads, puts a premium on storage
resources with low I/O latency. What’s more, increasing the VM density on a host also
serves to further randomize I/O requests as the host consolidates multiple data streams
from multiple VMs.
   To handle the randomized I/O patterns of a VOE host with multiple VMs, the Nexsan
SASBeast provides a highly flexible storage infrastructure. In addition to being able to
provision a SASBeast with a wide range of disk drive types, administrators have an equally
broad choice of options from which to configure access policies for internal arrays and
external volumes.

   Using a Nexsan SASBeast with dual controllers, IT administrators are free to assign any
RAID array that is created to either controller. Independent of array ownership,
administrators set up how logical volumes created on those arrays are accessed over Fibre
Channel and iSCSI SAN fabrics. In particular, a SASBeast with two controllers and two FC
ports on each controller can present four distinct paths to each volume exposed.

[Figure: NEXSAN VOE OPTIMIZATION] To maximize performance in our VOE, we biased I/O caching
on the SASBeast for random access. As the host consolidates the I/O streams of multiple
VMs, sequential I/Os from individual VMs are interleaved, with the result that random
access becomes a key I/O characteristic of a VOE host. We also set up an I/O multipathing
scheme on the Nexsan SASBeast that allowed us to map any array volume to one or all Fibre
Channel and all iSCSI ports.

   Without any external constraints, four distinct paths to a storage device will create
what appear to be four independent devices on the client system. That presents an
intolerable situation to most operating systems, which assume exclusive ownership of a
device. To resolve this problem, the simple solution is to use an active-passive scheme for
ports and controllers that enables only one path at a time. That solution, however,
precludes load balancing and link aggregation.

   Nexsan provides IT administrators with a number of options for sophisticated load
balancing via multipath I/O (MPIO). The range of options for each unit is set within the
Nexsan GUI. Volume access can be restricted to a simple non-redundant controller setup or
can be allowed to utilize all ports and all LUNs (APAL): the latter configuration provides
the greatest flexibility and protection and is the only configuration to support iSCSI
failover.

                   ASYMMETRIC MPIO
                       To provide a high performance scale-out storage architecture, each SASBeast supports
                   two internal controllers that are each capable of supporting two external FC ports and
                   two external iSCSI ports. When a RAID array is created, it is assigned a master controller
                   to service the array. If the SASBeast is placed in APAL mode, IT administrators can map
                   any volume to all of the FC and iSCSI ports as a load balancing and failover scheme. In
this situation, I/O requests directed to the other controller incur the added overhead needed
                   to switch control of the array.

   To garner the best I/O performance in a high-throughput, low-latency environment, a
host must be able to implement a sophisticated load balancing scheme that distinguishes
the two ports on the controller servicing the volume from the two ports on the other
controller. The key is to avoid the overhead of switching controllers.

   To meet this challenge, Nexsan implements Asymmetric Logical Unit Access (ALUA) when
exporting target volumes. The Nexsan device identifies the paths that are active and
optimized (i.e., paths that connect to a port on the controller servicing the device) and
the paths that are active but not optimized. Nonetheless, for this sophisticated MPIO
mechanism to work, it must be recognized by the host operating system that is using the
SASBeast as a target.

[Figure: VSPHERE 4.1 ASYMMETRIC MPIO DISCOVERY] When either an ESX or an ESXi 4.1
hypervisor discovers a volume exposed by the Nexsan SASBeast, it defaults to making only
one path active for I/O. We changed this default to round robin access. When this change
is made, the new drivers in ESXi 4.1 did a SCSI inquiry on the FC volume and discovered
that the Nexsan was an ALUA device. As a result, the hypervisor set up the four paths to
the servicing controller as active optimized and the four paths to the other controller as
non-optimized. Using ESX or ESXi 4.0, all eight paths would be set as equally active for I/O.

   Both Windows Server 2008, which uses a sophisticated MPIO driver module from Nexsan,
and the vSphere 4.1 hypervisors, ESXi 4.1 and ESX 4.1, recognize the Nexsan SASBeast as an
ALUA target. As a result, IT administrators can set an MPIO policy on any host running one
of these operating systems that takes advantage of knowing which SAN paths connect to the
controller servicing a logical drive.
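
A minimal sketch of that path-selection logic is shown below in Python. It is a toy model
of an ALUA-aware policy such as Round Robin with Subset, not Nexsan’s MPIO driver or
VMware’s path selection plug-in: requests are spread round-robin over the active/optimized
paths on the servicing controller, and the non-optimized paths are used only if every
optimized path fails.

    from dataclasses import dataclass
    from itertools import cycle

    @dataclass
    class Path:
        port: str
        optimized: bool       # True if the port is on the controller servicing the LUN
        healthy: bool = True

    class RoundRobinWithSubset:
        """Toy model of an ALUA-aware MPIO policy (not vendor code)."""
        def __init__(self, paths):
            self.paths = paths

        def _candidates(self):
            optimized = [p for p in self.paths if p.healthy and p.optimized]
            # Fall back to non-optimized paths only when the servicing
            # controller's ports are all unavailable (controller failover).
            return optimized or [p for p in self.paths if p.healthy]

        def issue(self, n_requests):
            rr = cycle(self._candidates())
            return [next(rr).port for _ in range(n_requests)]

    paths = [Path("C0-FC0", optimized=True),  Path("C0-FC1", optimized=True),
             Path("C1-FC0", optimized=False), Path("C1-FC1", optimized=False)]
    policy = RoundRobinWithSubset(paths)

    print(policy.issue(4))        # only controller 0 ports carry I/O
    paths[0].healthy = paths[1].healthy = False   # servicing controller lost
    print(policy.issue(4))        # traffic fails over to controller 1 ports

The same structure explains the ESXi 4.0 versus ESXi 4.1 results reported below: a driver
that cannot tell optimized from non-optimized paths degenerates into plain round robin
across all eight paths.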

   On Windows Server 2008, the base asymmetric access policy is dubbed Round Robin with
Subset. This policy transmits I/O requests only to ports on the servicing controller.
Should the servicing controller fail, Nexsan passes array servicing to the other controller
in the SASBeast, and the host computer automatically starts sending I/O requests to the
active ports on the new servicing controller.

   To understand how the driver changes in the new VMware hypervisors impact host I/O
throughput, we monitored FC data traffic at the switch ports connected to the Nexsan
SASBeast and the VOE host. We tested I/O throughput by migrating a server VM from a
datastore created on the SAS RAID-6 array, which was serviced by controller 1, to a
datastore created on the SAS RAID-5 array, which was serviced by controller 0. We repeated
this test twice: once with the VOE host running ESXi 4.1 and once with the host running
ESXi 4.0 Update 2.

   Running either ESXi 4.0 or ESXi 4.1, the host server balanced all read and write
requests over all of its FC ports; however, the I/O response patterns on the SASBeast were
dramatically different for the two hypervisors. ESXi 4.1 transmitted I/O requests
exclusively to the FC ports on the controller servicing a target volume. In particular,
when the VOE host was running ESXi 4.1, the host directed reads only to controller 1,
which serviced the SAS RAID-6 volume, and writes only to controller 0, which serviced the
SAS RAID-5 volume. In contrast, when the host was running ESXi 4.0, read and write data
requests were transmitted to all of the FC ports on both of the SASBeast controllers
equally.

[Figure: VSPHERE ALUA PERFORMANCE] Upgrading to vSphere 4.1 from vSphere 4.0 boosted I/O
throughput by 20% for VMs resident on a datastore imported from the SASBeast. More
importantly for IT OpEx costs, the gains in I/O throughput required only a simple change in
the MPIO policy for each datastore imported from the SASBeast.

    With all I/O requests directed equally across all FC ports under ESXi 4.0, throughput
at each undistinguished port was highly variable as I/O requests arrived for both disks
serviced by the controller and disks serviced by the other controller. As a result, I/O
throughput averaged about 200MB per second.

   On the other hand, with our VOE host running ESXi 4.1, I/O requests for a logical
disk from the SASBeast were only directed to and balanced over the FC ports on the
controller servicing that disk. In this situation, full duplex reads and writes averaged
240MB per second as we migrated our VM from one datastore to another. For IT
operations, I/O throughput under ESXi 4.1 for a VM accessing a SASBeast disk volume
reached comparable levels of performance—particularly with respect to IOPS—to that of
a physical server.








SASBeast Performance Spectrum
   “Using four Iometer worker processes—two reading and one writing on three RAID-5 SAS
volumes and one writing on a RAID-5 SATA volume—we measured total full-duplex throughput
from the SASBeast at 1GB per second.”
               SCALE-UP AND SCALE-OUT I/O THROUGHPUT
                   We began our I/O tests by assessing the performance of logical disks from the Nexsan
               SASBeast on a physical server, which was running Windows Server 2008 R2. For these
               tests we created and imported a set of volumes from each of the three arrays that we had
               initially created.

                   We used Iometer to generate all I/O test workloads on our disk volumes. To assess
               streaming sequential throughput, we used large block reads and writes, which are
               typically used by backup, data mining, and online analytical processing (OLAP)
               applications. All of these datacenter-class applications need to stream large amounts of
server-based data rapidly to be effective. As a result, we initially focused our attention on
               using the SASBeast in a 4Gbps Fibre Channel SAN.

Fibre Channel Sequential Access I/O Throughput
Windows Server 2008 R2 — Round Robin with Subset MPIO on a 4Gbps FC SAN

RAID & Disk Type | Read Throughput, Iometer (128KB blocks) | Write Throughput, Iometer (128KB blocks) | Application Throughput, Veeam Backup & Replication 4.1 (parallel backup of four VMs)
RAID-5 SAS       | 554 MB/sec                              | 400 MB/sec                               | 245 MB/sec reading VM data
RAID-6 SAS       | 505 MB/sec                              | 372 MB/sec                               |
RAID-5 SATA      | 522 MB/sec                              | 430 MB/sec                               | 245 MB/sec writing backup image

   We began our benchmark testing by reading data using large block I/O requests over two
FC connections. Maximum I/O throughput varied among our three logical volumes by only 10
percent. During these tests, the fastest reads were measured at 554MB per second using
volumes created on our RAID-5 SAS array. What’s more, the aggregate read throughput for
all targets using two active 4Gbps ports exceeded the wire speed capability of a single
4Gbps FC port.
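
Those numbers can be sanity-checked against raw link ceilings. The back-of-the-envelope
sketch below assumes the commonly quoted payload rate of roughly 400MB per second per
direction for a 4Gbps FC port (after 8b/10b encoding) and the roughly 120MB per second
practical ceiling of a 1GbE iSCSI link cited later in this section.

    # Approximate per-port, per-direction payload ceilings (assumed round numbers).
    FC_4G_MB_S      = 400.0   # 4Gbps Fibre Channel after 8b/10b encoding
    ISCSI_1GBE_MB_S = 120.0   # practical 1GbE iSCSI ceiling cited in this report

    measured_read_mb_s = 554.0   # fastest RAID-5 SAS read result above
    active_fc_ports = 2          # two active 4Gbps FC connections were used

    print(measured_read_mb_s > FC_4G_MB_S)   # True: exceeds a single port's wire speed
    print(f"{measured_read_mb_s / (active_fc_ports * FC_4G_MB_S):.0%} of the two-port ceiling")
    print(f"1GbE iSCSI ceiling is {ISCSI_1GBE_MB_S / FC_4G_MB_S:.0%} of one 4Gbps FC port")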

   While we consistently measured the lowest I/O throughput on reads and writes using SAS
RAID-6 volumes, the difference on writes between a SAS RAID-5 volume and a SAS RAID-6
volume was only about 7 percent—400MB per second versus 372MB per second. Using the Nexsan
SASBeast, the cost of the added security provided by an extra parity bit, which allows an
array to continue processing I/O requests after two drive failures, is minimal. This is
particularly important for IT sites supporting mission-critical applications that require
maximum availability and high-throughput performance.

                   A RAID-6 array provides an important safety net when rebuilding after a drive fails.
                Since a RAID-6 array can withstand the loss of two drives, the array can be automatically
                rebuilt with a hot-spare drive without risking total data loss should an unrecoverable bit
                error occur during the rebuild process. On the other hand, a backup of a degraded
                RAID-5 array should be run before attempting a rebuild. If an unrecoverable bit error
                occurs while rebuilding a degraded RAID-5 array, the rebuild will fail and data stored on
                the array will be lost.
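
The weight of that argument is easy to quantify. The sketch below estimates the expected
number of unrecoverable read errors (UREs) encountered while reading every surviving drive
during a rebuild. The assumed error rates (one unrecoverable error per 10^14 bits read for
desktop-class SATA and per 10^15 bits for enterprise SAS) and the 13-drive group width are
illustrative, not figures taken from this report.

    def expected_ures(drives_read, drive_tb, bit_error_rate):
        """Expected unrecoverable read errors while reading every surviving drive."""
        bits_read = drives_read * drive_tb * 1e12 * 8   # TB -> bytes -> bits
        return bits_read * bit_error_rate

    # Assumed 13-drive groups: after one failure, 12 surviving drives are read in full.
    sata = expected_ures(drives_read=12, drive_tb=2.0, bit_error_rate=1e-14)
    sas  = expected_ures(drives_read=12, drive_tb=0.6, bit_error_rate=1e-15)

    print(f"2TB SATA rebuild, expected UREs: {sata:.2f}")    # ~1.9: an error is likely
    print(f"600GB SAS rebuild, expected UREs: {sas:.3f}")    # ~0.06: an error is unlikely

Under these assumptions a degraded RAID-5 SATA array should expect nearly two unrecoverable
errors during a rebuild, which is exactly why the report recommends backing up a degraded
RAID-5 array first and why RAID-6’s second parity drive is the safety net.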

                    When performing writes, the variation in throughput between the disk volumes
                reached 15 percent. Interestingly, it was SATA RAID-5 volumes that consistently
                provided the best streaming performance for large-block writes. In particular, using
                128KB writes to a SATA RAID-5 volume, throughput averaged 430MB per second.
                Given the low cost and high capacity advantages provided by 2TB SATA drives, the
                 addition of exceptional write throughput makes the SASBeast a compelling asset for
                Disk-to-Disk (D2D) backup operations and other disaster recovery functions. To assess
                the upper I/O throughput limits of our Nexsan SASBeast for D2D and other I/O intense
                applications, we used Iometer with multiple streaming read and write processes in order
                to scale total throughput. Using four Iometer worker processes—two reading and one
                writing on three RAID-5 SAS volumes and one writing on a RAID-5 SATA volume—we
                measured total full-duplex throughput from the SASBeast at 1GB per second.

                ISCSI NICHE


iSCSI Sequential Access I/O Throughput
Windows Server 2008 R2 — Jumbo frames, iSCSI HBAs, and Round Robin MPIO on a 1GbE iSCSI SAN

RAID & Disk Type | Read Throughput, Iometer (128KB blocks) | Write Throughput, Iometer (128KB blocks) | Application Throughput, Veeam Backup & Replication 4.1 (4 VM backups in parallel)
RAID-5 SAS       | 82 MB/sec (1 HBA); 146 MB/sec (2 HBAs)  | 83 MB/sec (1 HBA); 146 MB/sec (2 HBAs)   |
RAID-5 SATA      | 80 MB/sec (1 HBA); 145 MB/sec (2 HBAs)  | 85 MB/sec (1 HBA); 150 MB/sec (2 HBAs)   | 136 MB/sec writing backup image


   On the other hand, streaming throughput on a 1GbE iSCSI fabric has a hard limit of
120MB per second on each connection. What’s more, to approach the upper end of that
comparatively limited level of performance, IT must pay close attention to the selection
of equipment. Most low-end switches and even some Ethernet NICs that are typically found
at SMB sites do not support jumbo Ethernet frames or port trunking, which are important
functions for maximizing iSCSI throughput. In addition, it is important to isolate iSCSI
data traffic from normal LAN traffic.
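
The payoff from jumbo frames is straightforward protocol arithmetic. The sketch below
compares the usable payload fraction of a 1GbE link at 1,500-byte and 9,000-byte MTUs,
assuming standard Ethernet, IP, and TCP header sizes; actual overhead will vary with TCP
options and iSCSI PDU framing.

    def payload_efficiency(mtu, ip_tcp_headers=40, eth_overhead=38):
        """Payload fraction per frame: 14B Ethernet header + 4B FCS + 8B preamble
        + 12B inter-frame gap = 38B of Ethernet overhead, plus 40B of IP/TCP headers."""
        return (mtu - ip_tcp_headers) / (mtu + eth_overhead)

    GBE_RAW_MB_S = 125.0   # 1Gbps expressed as bytes on the wire
    for mtu in (1500, 9000):
        eff = payload_efficiency(mtu)
        print(f"MTU {mtu}: {eff:.1%} efficient, ~{GBE_RAW_MB_S * eff:.0f} MB/sec usable")

The raw bandwidth gain is modest; the larger benefit, and the reason the report pairs jumbo
frames with offload-capable iSCSI HBAs, is that moving the same data in one-sixth as many
frames cuts per-frame protocol processing on the host.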

   For iSCSI testing, we utilized jumbo Ethernet frames—9,000 bytes rather than 1,500
bytes—with QLogic iSCSI HBAs, which offload iSCSI protocol processing and optimize
throughput of large data packets. Our throughput results paralleled our FC fabric results:
streaming throughput differed by about 2 to 5 percent among logical volumes created on SAS
and SATA arrays. Once again, the highest read throughput was measured on SAS-based volumes
and the highest write throughput was measured on SATA-based volumes.

                PUSHING IOPS
                   In addition to streaming throughput, there is also a need to satisfy small random I/O
                requests. On the server side, applications built on Oracle or SQL Server must be able to
                handle large numbers of I/O operations that transfer small amounts of data using small
                block sizes from a multitude of dispersed locations on a disk. Commercial applications
                that rely on transaction processing (TP) include such staples as SAP and Microsoft
                Exchange. More importantly, TP applications seldom exhibit steady-state characteristics.

                   Typical TP loads for database-driven applications in an SMB environment average
                several hundred IOPS. These applications often experience occasional heavy processing
                 spikes, such as at the end of a financial reporting period, that can rise by an order of
                magnitude to several thousand IOPS. That variability makes systems running TP
                applications among the most difficult for IT to consolidate and among the most ideal to
                target for virtualization. A well-managed VOE is capable of automatically marshaling the
                resources needed to support peak processing demands.


Random Access Throughput
Windows Server 2008 — Iometer (80% Reads and 20% Writes), 30ms average access time limit

RAID & Disk Type | 4Gbps FC Fabric, 1 logical disk            | 4Gbps FC Fabric, 2 logical disks           | 1GbE iSCSI Fabric, 1 logical disk          | MS Exchange, heavy use (75% reads), 4KB I/O, 2,000 mail boxes
RAID-5 SAS       | 2,330 IOPS (4KB I/O); 2,280 IOPS (8KB I/O) | 4,318 IOPS (4KB I/O); 4,190 IOPS (8KB I/O) | 1,910 IOPS (4KB I/O); 1,825 IOPS (8KB I/O) | 1,500 IOPS
RAID-6 SAS       | 1,970 IOPS (4KB I/O); 1,915 IOPS (8KB I/O) |                                            | 1,350 IOPS (4KB I/O); 1,275 IOPS (8KB I/O) |
RAID-5 SATA      | 1,165 IOPS (4KB I/O); 1,120 IOPS (8KB I/O) |                                            | 795 IOPS (4KB I/O); 755 IOPS (8KB I/O)     |


                   We fully expected to sustain our highest IOPS loads on SAS RAID-5 volumes and
                were not disappointed. In these tests, we used a mix of 80 percent reads and 20 percent
                 writes. In addition, we limited the I/O request load with the restriction that the average
                I/O request response time had to be less than 30ms.
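
For readers who want to approximate this access pattern without Iometer, the rough Python
stand-in below issues an 80/20 mix of random 4KB reads and writes against an existing test
file and reports IOPS and average response time. The file path is a placeholder, the I/O
still passes through the operating system cache, and the numbers it produces are not
comparable to the calibrated Iometer results in the table above.

    import os, random, time

    def mixed_iops(path, seconds=10, block=4096, read_fraction=0.8):
        """Issue random 4KB I/O with an 80/20 read/write mix against an existing file."""
        blocks = os.path.getsize(path) // block
        write_buf = os.urandom(block)
        latencies = []
        deadline = time.time() + seconds
        with open(path, "r+b", buffering=0) as f:
            while time.time() < deadline:
                f.seek(random.randrange(blocks) * block)
                start = time.perf_counter()
                if random.random() < read_fraction:
                    f.read(block)
                else:
                    f.write(write_buf)
                latencies.append(time.perf_counter() - start)
        return len(latencies) / seconds, 1000 * sum(latencies) / len(latencies)

    # Example (hypothetical path to a pre-created multi-GB test file):
    # iops, avg_ms = mixed_iops(r"E:\iometer_like_test.dat")
    # print(f"{iops:.0f} IOPS, {avg_ms:.2f} ms average response time")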

   Using 4KB I/O requests—the size used by MS Exchange—we sustained 2,330 IOPS on a SAS
RAID-5 volume, 1,970 IOPS on a SAS RAID-6 volume, and 1,160 IOPS on a SATA RAID-5 volume.
Next, we switched our top performing RAID-5 SAS volume from the FC to the iSCSI fabric and
repeated the test. While performance dropped to 1,910 IOPS, it was still on a par with the
FC results of our RAID-6 SAS volume and above the level that Microsoft suggests for
supporting 2,000 mail boxes with MS Exchange.

                   Next we ran our database-centric Iometer tests with 8KB I/O requests. In these tests,
                we doubled the amount of data being transferred; however, this only marginally affected
                the number of IOPS processed. With 8KB transactions, which typify I/O access with
                Oracle and SQL Server, we sustained 2,280 IOPS on a SAS RAID-5 volume, 1,915 IOPS
                 on a SAS RAID-6 volume, and 1,120 IOPS on a SATA RAID-5 volume. Once again, when
                we connected our SAS RAID-5 volume over our iSCSI fabric, we measured a 20% drop
                in performance to 1,825 IOPS, which is more than sufficient to handle peak loads on
                most database-driven SME applications.

   To test transaction-processing scalability in a datacenter environment, we added
another RAID-5 SAS volume to our second SASBeast controller. By using two volumes on our
FC fabric, we increased IOPS performance by 85% for both 4KB and 8KB I/O requests. In our
two-volume tests with SAS RAID-5 volumes, we sustained levels of 4,320 IOPS and 4,150 IOPS
with an average response time of less than 30ms with a mix of 80 percent reads and 20
percent writes.
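
Those scaling claims can be checked directly against the single-volume results in the
random-access table above; the small differences between the table values and the rounded
figures quoted in the text are present in the original results.

    # IOPS from the random-access table: one vs. two RAID-5 SAS volumes on the FC fabric.
    single_volume = {"4KB": 2330, "8KB": 2280}
    two_volumes   = {"4KB": 4318, "8KB": 4190}

    for size in ("4KB", "8KB"):
        gain = two_volumes[size] / single_volume[size] - 1
        print(f"{size}: {single_volume[size]} -> {two_volumes[size]} IOPS (+{gain:.0%})")
    # Prints roughly +85% and +84%, matching the 85% scaling cited in the text.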

                STRETCHING I/O IN A VOE

VM I/O Throughput Metrics
VM: Windows Server 2008 VM — Host: ESXi 4.1 Hypervisor on a 4Gbps FC SAN

VM Datastore (RAID & Disk Type) | Sequential Throughput, streaming 128KB blocks | Random Access, 1 logical disk, 80% reads, 30ms average access time | MS Exchange, heavy use (75% reads), 4KB I/O, 2,000 mail boxes
RAID-5 SAS                      | 436 MB/sec (Reads); 377 MB/sec (Writes)       | 2,380 IOPS (4KB I/O); 2,011 IOPS (8KB I/O)                          | 1,500 IOPS
RAID-6 SAS                      | 427 MB/sec (Reads); 342 MB/sec (Writes)       | 2,325 IOPS (4KB I/O); 1,948 IOPS (8KB I/O)                          |
RAID-5 SATA                     | 420 MB/sec (Reads); 380 MB/sec (Writes)       |                                                                     |


                   Within our VOE, we employed a test scenario using volumes created on the same
                Nexsan RAID arrays that we tested with the physical Windows server. To test I/O
                throughput, we used Iometer on a VM running Windows Server 2008. Given the I/O
                randomization that takes place as a VOE host consolidates the I/O requests from
                multiple VMs, we were not surprised to measure sequential I/O throughput at a level
                that was 20% lower than the level measured on a similarly configured physical server.

   Nonetheless, at 420MB to 436MB per second for reads and 342MB to 380MB per second for
writes, the levels of streaming throughput that we measured were 60 to 70 percent greater
than the streaming I/O throughput levels observed when using high-end applications, such
as backup, data mining, and video editing, on physical servers and dedicated workstations.
As a result, IT should have no problems supporting streaming applications on server VMs or
using packaged VM appliances with storage resources underpinned by a SASBeast.

   On the other hand, we sustained IOPS levels for random access I/O on RAID-5 and
RAID-6 SAS volumes that differed by only 2 to 3 percent from the levels sustained on a
physical server. These results are important for VM deployment of mission-critical
database-driven applications, such as SAP. What’s more, the ability to sustain 2,380 IOPS
using 4KB I/O requests affirms the viability of deploying MS Exchange on a VM.

APPLICATIONS & THE BEAST
   The real value of these synthetic benchmark tests with Iometer rests in the ability to
use the results as a means of predicting the performance of applications. To put our
synthetic benchmark results into perspective, we next examined full-duplex streaming
throughput for a high end IT administrative application: VOE backup.

    What makes a VOE backup process a bellwether application for streaming I/O throughput
is the representation of VM logical disk volumes as single disk files on the host computer.
This encapsulation of VM data files into a single container file makes image-level backups
faster than traditional file-level backups and enhances VM restoration. Virtual disks can
be restored as whole images, or individual files can be restored from within the backup
image.

    More importantly, VMFS treats the files representing VM virtual disks—dubbed .vmdk
files—analogously to CD images. Host datastores typically contain only a small number of
these files, which can be accessed by only one VM process at a time. It is up to the OS of
the VM to handle file sharing for the data files encapsulated within the .vmdk file.

    This file locking scheme allows vSphere hosts to readily share datastore volumes on a
SAN. The ability to share datastores among hosts greatly simplifies the implementation
of vMotion, which moves VMs from one host to another for load balancing. With shared
datastores, there is no need to transfer data, which makes moving a VM much easier.
Before shutting down a VM, its state must be saved. The VM can then be restarted and
brought to the saved state on a new host.

   Sharing datastores over a SAN is also very important for optimizing VM backups. For
our VOE backup scenario, we utilized Veeam Backup & Replication 4.1 on a Windows
server that shared access to all datastores belonging to hosts in our vSphere VOE.

    Every VM backup process starts with the backup application sending a VMsnap
command to the host server to initiate a VM snapshot. In the snapshot process, the VM
host server creates a point-in-time copy of the VM’s virtual disk. The host server then
freezes the vmdk file associated with that virtual disk and returns a list of disk blocks for
that vmdk file to the backup application. The backup application then uses that block list
to read the VM snapshot data residing in the VMFS datastore.
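
    The snapshot-driven flow described above can be sketched in a few lines of code. The simulation below is purely illustrative—its classes and functions are stand-ins invented for this outline, not the vStorage API or Veeam’s implementation—but it captures the sequence: freeze a point-in-time copy, obtain the allocated block list, read those blocks from the shared datastore, and write them to a backup repository.

```python
# Minimal, self-contained sketch of an image-level VM backup over a shared datastore.
# All objects here are hypothetical stand-ins, not a real backup SDK.
import zlib

class Datastore:
    """A shared VMFS datastore modeled as .vmdk files made of numbered 4KB blocks."""
    def __init__(self, vmdks):
        self.vmdks = vmdks                       # {"vm01.vmdk": {block_no: bytes, ...}}

    def snapshot(self, vmdk_name):
        """Freeze a point-in-time copy of a virtual disk and return its block list."""
        frozen = dict(self.vmdks[vmdk_name])
        return frozen, sorted(frozen)

def backup_vm(datastore, vmdk_name, repository):
    frozen, block_list = datastore.snapshot(vmdk_name)
    for block_no in block_list:                  # read the snapshot data block by block
        repository[(vmdk_name, block_no)] = zlib.compress(frozen[block_no])

# Two allocated blocks; thin-provisioned empty space is simply absent from the block list.
ds = Datastore({"vm01.vmdk": {0: b"A" * 4096, 7: b"B" * 4096}})
repo = {}
backup_vm(ds, "vm01.vmdk", repo)
print(f"backed up {len(repo)} blocks")           # -> backed up 2 blocks
```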

SIMPLIFIED VOE DATASTORE SHARING
   [Screen capture caption] Reflecting the interoperability of the Nexsan SASBeast, we were able to use
Storage Manager for SANs within the Server Manager tool to create and manage volumes, such as the
vOperations datastore used by our ESXi host. In particular, we were able to drill down on the vOperations
volume and map it to our Windows Server 2008 R2 system via either our FC fabric or our iSCSI fabric.

   To implement the fastest and most efficient backup process, IT must ensure that all VM
data will be retrieved directly from the VMFS datastores using the vStorage API. That
means the Windows server running Veeam Backup & Replication must be directly
connected to the VOE datastores. In other words, the Windows server must share each of
the datastores used by VOE hosts.

   Through integration with VDS, the Nexsan SASBeast makes the configuration and
management of shared logical volumes an easy task for IT administrators. In addition to
the proprietary Nexsan software, IT administrators can use Storage Manager for SANs to
create new LUNs and manage existing ones on the SASBeast. While Storage Manager for
SANs provided less fine-grained control when configuring LUNs, wizards automated
end-to-end storage provisioning, from the creation of a logical volume on the SASBeast,
to connecting that volume over either the FC or iSCSI fabric, to formatting that volume
for use on our Windows server. As a result, the Nexsan hardware and software provided
an infrastructure that enabled the rapid setup of an optimized environment for Veeam
Backup & Replication.

              Application Throughput: VOE Full-Backup
              RAID-5 SAS Datastore — RAID-5 SATA Backup Repository
              ESXi 4.1 Host, Windows Server 2008 R2, Veeam Backup & Replication 4.1

              Datastore Access     4 VMs Processed Sequentially            4 VMs Processed in Parallel
                                   (Optimal compression, Deduplication)    (Optimal compression)
              FC SAN Fabric        164 MB/sec                              245 MB/sec
              iSCSI SAN Fabric     97 MB/sec                               136 MB/sec

   To minimize backup windows in a vSphere VOE, Veeam Backup & Replication uses
vStorage APIs to directly back up files belonging to a VM without first making a local
copy. What’s more, Veeam Backup & Replication recognizes disks with VMFS thin
provisioning to avoid backing up what is, in effect, empty space. In addition, the Veeam
software accelerates the processing of incremental and differential backups by leveraging
Changed-Block Tracking within VMFS. As a result, we were able to leverage these
VOE-aware options in our backup software to back up four VMs at 245MB per second
over our FC fabric and 136MB per second over our iSCSI fabric.
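
   Converting those sustained rates into wall-clock time gives a feel for the backup windows involved. The sketch below assumes a hypothetical 500GB of allocated VM data—an illustrative figure, not one from our test configuration—and applies the measured parallel-backup throughput for each fabric.

```python
# Approximate full-backup window from sustained throughput.
# The 245 MB/sec (FC) and 136 MB/sec (iSCSI) rates are the measured parallel
# results above; the 500GB dataset size is an assumed example value.
DATASET_GB = 500
RATES_MB_PER_SEC = {"FC SAN fabric": 245, "iSCSI SAN fabric": 136}

for fabric, rate in RATES_MB_PER_SEC.items():
    minutes = (DATASET_GB * 1024) / rate / 60   # GB -> MB, divide by MB/sec, seconds -> minutes
    print(f"{fabric}: ~{minutes:.0f} minutes for {DATASET_GB}GB of VM data")
# FC: ~35 minutes; iSCSI: ~63 minutes for the assumed dataset.
```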

GREEN CONSOLIDATION IN A VOE
   Equally important for IT, the Nexsan SASBeast was automatically generating significant
savings in power and cooling costs throughout our testing. A key feature within the
Nexsan management suite provides for the optimization of up to three power-saving
modes. These settings are applied array-wide; however, the modes available to an array
depend upon the disk drives used in the array. For example, the rotational speed of SAS
drives cannot be slowed.

AUTOMAID SAVINGS
   [Chart caption] Over our testing period, we kept the AutoMAID feature aggressively seeking to enact
power savings. Over the test period, AutoMAID provided fairly uniform power savings across all arrays,
amounting to just over 50% of the expected power consumption.

   Once the disks enter into a power-saving mode, they can be automatically restored to
full speed with only the first I/O delayed when the array is accessed. More importantly,
over the period that we ran extensive tests of sequential and random data access, the
power savings for each of our three disk arrays was remarkably uniform. The bottom
line over our testing regime was an average power savings of 52 percent. Even more
importantly, this savings was garnered over a period that saw each of the 42 disks in our
SASBeast average 1,500,000 reads and 750,000 writes. Also of note for the mechanical
design of the SASBeast, over our entire test there was not a single I/O transfer retry or
media retry.
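
   To translate that 52 percent figure into rough energy terms, the sketch below applies it to an assumed per-drive power draw and electricity rate. Both of those inputs are illustrative assumptions—we did not meter wattage or cost during testing—so only the proportional savings comes from our results.

```python
# Rough annual energy savings from AutoMAID (assumed wattage and rate).
DRIVES = 42                  # drive count in our SASBeast test configuration
SAVINGS = 0.52               # average AutoMAID power savings measured over our tests
WATTS_PER_DRIVE = 12.0       # assumed average active draw per drive (illustrative)
USD_PER_KWH = 0.12           # assumed electricity rate (illustrative)

kwh_per_year = DRIVES * WATTS_PER_DRIVE * SAVINGS * 24 * 365 / 1000
print(f"~{kwh_per_year:,.0f} kWh saved per year, roughly ${kwh_per_year * USD_PER_KWH:,.0f}")
# Cooling load drops along with the dissipated heat, compounding the savings.
```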








Customer Value
   “While Nexsan’s storage management software provides a number of important
features to enhance IT productivity, the most important feature for lowering OpEx costs
in a complex SAN topology is tight integration with VDS on Windows.”
                   MEETING SLA METRICS
                       As companies struggle to achieve maximum efficiency, the top-of-mind issue for all
                   corporate decision makers is how to reduce the cost of IT operations. Universally, the
                   leading solutions center on resource utilization, consolidation, and virtualization. These
strategies, however, can exacerbate the impact of a plethora of IT storage costs, from failed
disk drives to excessive administrator overhead. As resources are consolidated and
virtualized, the risk of catastrophic disaster increases as the number of virtual systems
grows and the number of physical devices underpinning those systems dwindles.

   NEXSAN SASBEAST FEATURE BENEFITS
1) Application-centric Storage: Storage volumes can be created from a hierarchy of
   drives to meet multiple application-specific metrics.
2) I/O Retries Minimized: During our testing we executed a total of 95 million read and
   write I/Os without a single retry.
3) Automatic Power Savings: AutoMAID technology can be set to place drives in a
   hierarchy of idle states to conserve energy, while delaying only the first I/O request
   when returning to a normal state.
4) High Streaming Throughput: Running with SAS- and SATA-based volumes,
   application performance mirrored benchmark performance as backups of multiple
   VMs streamed total full-duplex data at upwards of 500MB per second.
5) Linear Scaling of IOPS in a VOE: Using random 4KB and 8KB I/O requests—typical
   of Exchange Server and SQL Server—VMs sustained IOPS rates for both I/O sizes
   that differed by less than 2.5 percent on a SAS RAID-5 volume and scaled by over
   80% with the addition of a second target volume.

   While reducing OpEx and CapEx costs is the critical driver in justifying the
acquisition of storage resources, those resources must first and foremost meet the
performance metrics needed by end-user organizations and frequently codified in SLAs
set up between IT and Line of Business (LoB) divisions. These metrics address all of the
end user’s needs for successfully completing LoB tasks. Typically, these requirements
translate into data throughput (MB per second) and data response (average access time)
metrics.

   In terms of common SLA metrics, our benchmarks for a single SASBeast reached
levels of performance that should easily meet the requirements of most applications.
What’s more, the scale-out-oriented architecture of the SASBeast presents an extensible
infrastructure that can meet even the requirements of many High Performance
Computing (HPC) applications. With a single logical disk from a RAID-5 base array, we
were able to drive read throughput well over 500MB per second and write throughput
well over 400MB per second. This is double the throughput rates needed for HDTV
editing. With a single SASBeast and four logical drives, we scaled total full-duplex
throughput to 1GB per second.








   We measured equally impressive IOPS rates for random-access small-block—4KB and
8KB—I/O requests. With one logical disk, we sustained more than 2,000 IOPS for both
4KB and 8KB I/O requests, which scaled to over 4,000 IOPS with two logical disks. To
put this into perspective, Microsoft recommends a storage infrastructure that can sustain
1,500 IOPS for an MS Exchange installation supporting 2,000 active mailboxes.
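
   Working that guidance backwards gives a rough mailbox-count ceiling for the IOPS we measured. The calculation below simply reuses Microsoft’s 1,500 IOPS per 2,000-mailbox ratio; it is a sizing approximation, not a tested Exchange deployment.

```python
# Map measured random-access IOPS onto the Exchange sizing guidance cited above.
recommended_iops, mailboxes = 1500, 2000
iops_per_mailbox = recommended_iops / mailboxes            # 0.75 IOPS per active mailbox

measured = {"one logical disk": 2000, "two logical disks": 4000}
for config, iops in measured.items():
    print(f"{config}: ~{iops} IOPS supports roughly {iops / iops_per_mailbox:,.0f} "
          f"mailboxes at the same I/O profile")
```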

BUILDING IN RELIABILITY
   In addition to performance specifications, storage reliability, expressed as guaranteed
uptime, is another important component of SLA requirements. Starting with the
mechanical design of the chassis and moving through to the embedded management
software, Nexsan’s SASBeast provides a storage platform that promotes robust reliability
and performance while presenting IT administrators with a storage resource that is easy
to configure and manage.

   In particular, the design of the SASBeast chassis proactively seeks to maximize the life
span of disks by minimizing vibrations and maximizing cooling. For IT operations—
especially with respect to SLAs—that design helps ensure that storage performance
guarantees concerning I/O throughput and data access will not be negatively impacted by
data access errors induced by the physical environment. By extending disk drive life
cycles, IT will have fewer drive failures to resolve and fewer periods of degraded
performance while an array is rebuilt onto a new drive. During our testing of a SASBeast,
openBench Labs generated a total of 95 million read and write requests without the
occurrence of a single read or write retry.

SIMPLIFIED MANAGEMENT DRIVES SAVINGS
    While Nexsan’s storage management software provides a number of important features
to enhance IT productivity, the most important feature for lowering OpEx costs in a
complex SAN topology is tight integration with VDS on Windows. For IT administrators,
VDS integration makes the Nexsan storage management GUI available from a number of
the standard tools on a Windows server. In particular, IT administrators are able to use
Storage Manager for SANs to implement full end-to-end storage provisioning. By invoking
just one tool, an administrator can configure a Nexsan array, create and map a logical
volume to a host server, and then format that volume on the host.

   Nexsan also leverages very sophisticated SAN constructs to simplify administrative
tasks. All storage systems with multiple controllers need to handle the dual issues of array
ownership—active service processors—and SAN load balancing—active ports. Through
Nexsan’s implementation of Asymmetric Logical Unit Access (ALUA), host systems with
advanced MPIO software can access a SASBeast and discern the subtle difference between
an active port and active service processor. As a result, an IT administrator is able to map a
LUN to each FC port on each controller and allow the server’s MPIO software to optimize
FC port aggregation and controller failover. Using this scheme, openBench Labs was able to
scale streaming reads and writes across four logical drives to upwards of 1GB per second.
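
   The distinction ALUA lets MPIO draw—every port is active, but only the ports on a LUN’s owning controller are active-optimized—can be illustrated with a small sketch. The path table below is an invented example, not output captured from our configuration, and the round-robin policy simply stands in for whatever load-balancing policy the host’s MPIO software applies.

```python
# Illustrative ALUA-aware path selection (hypothetical path table, not real MPIO output).
from itertools import cycle

paths = [
    {"port": "Controller0-FC0", "alua_state": "active-optimized"},
    {"port": "Controller0-FC1", "alua_state": "active-optimized"},
    {"port": "Controller1-FC0", "alua_state": "active-non-optimized"},
    {"port": "Controller1-FC1", "alua_state": "active-non-optimized"},
]

# Round-robin I/O across the owning controller's ports; keep the rest for failover.
optimized = cycle([p for p in paths if p["alua_state"] == "active-optimized"])
for io_number in range(4):
    print(f"I/O {io_number} -> {next(optimized)['port']}")
# If Controller0 fails, MPIO promotes the Controller1 paths and I/O continues.
```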








REVOLUTIONARY GREEN SAVINGS
    In addition to providing an innovative solution for reliability and performance,
Nexsan’s AutoMAID power management scheme automatically reduces SASBeast power
consumption, which is a significant OpEx cost at large sites. Using three levels of
automated power-saving algorithms, the SASBeast eliminates any need for administrator
intervention when it comes to the green IT issues of power savings and cooling.

   Nexsan automatically reduces the power consumed by idle disk drives via a feature
dubbed AutoMAID™. AutoMAID works on a per-disk basis, but within the context of a
RAID set, to provide multiple levels of power savings—from parking heads to slowing
rotational speed—to further contain OpEx costs. While testing the SASBeast,
openBench Labs garnered a 52% power savings on our arrays.




                                                                                               22

Weitere ähnliche Inhalte

Was ist angesagt?

Seneca, Pittsburgh Supercomputer, and LSI
Seneca, Pittsburgh Supercomputer, and LSI Seneca, Pittsburgh Supercomputer, and LSI
Seneca, Pittsburgh Supercomputer, and LSI Jan Robin
 
INTERSPORT improves fitness and business flexibility
INTERSPORT improves  fitness and business  flexibilityINTERSPORT improves  fitness and business  flexibility
INTERSPORT improves fitness and business flexibilityIBM India Smarter Computing
 
Ibm v3700
Ibm v3700Ibm v3700
Ibm v3700TTEC
 
EMC XtremIO storage array 4.0 and VMware vSphere 6.0: Scaling mixed-database ...
EMC XtremIO storage array 4.0 and VMware vSphere 6.0: Scaling mixed-database ...EMC XtremIO storage array 4.0 and VMware vSphere 6.0: Scaling mixed-database ...
EMC XtremIO storage array 4.0 and VMware vSphere 6.0: Scaling mixed-database ...Principled Technologies
 
Intel_Datagres_WhiitePaper_120315-1
Intel_Datagres_WhiitePaper_120315-1Intel_Datagres_WhiitePaper_120315-1
Intel_Datagres_WhiitePaper_120315-1Michele Hunter
 
Engineered Systems: Oracle’s Vision for the Future
Engineered Systems: Oracle’s Vision for the FutureEngineered Systems: Oracle’s Vision for the Future
Engineered Systems: Oracle’s Vision for the FutureBob Rhubart
 
IMEX Research - Is Solid State Storage Ready for Enterprise & Cloud Computing...
IMEX Research - Is Solid State Storage Ready for Enterprise & Cloud Computing...IMEX Research - Is Solid State Storage Ready for Enterprise & Cloud Computing...
IMEX Research - Is Solid State Storage Ready for Enterprise & Cloud Computing...Anil Vasudeva
 
Mitmepalgeline uus protsessor T4 SUN´i perekonnast - Karel Kannel
Mitmepalgeline uus protsessor T4 SUN´i perekonnast - Karel KannelMitmepalgeline uus protsessor T4 SUN´i perekonnast - Karel Kannel
Mitmepalgeline uus protsessor T4 SUN´i perekonnast - Karel KannelORACLE USER GROUP ESTONIA
 
The Value of NetApp with VMware
The Value of NetApp with VMwareThe Value of NetApp with VMware
The Value of NetApp with VMwareCapito Livingstone
 
IBM Storwize V7000 — unikátní virtualizační diskové pole
IBM Storwize V7000 — unikátní virtualizační diskové poleIBM Storwize V7000 — unikátní virtualizační diskové pole
IBM Storwize V7000 — unikátní virtualizační diskové poleJaroslav Prodelal
 
Converged infrastructure ucc
Converged infrastructure  uccConverged infrastructure  ucc
Converged infrastructure ucctamar1981
 
EMC Isilon Multitenancy for Hadoop Big Data Analytics
EMC Isilon Multitenancy for Hadoop Big Data AnalyticsEMC Isilon Multitenancy for Hadoop Big Data Analytics
EMC Isilon Multitenancy for Hadoop Big Data AnalyticsEMC
 
Panasas ActiveStor 11 and 12: Parallel NAS Appliance for HPC Workloads
Panasas ActiveStor 11 and 12: Parallel NAS Appliance for HPC WorkloadsPanasas ActiveStor 11 and 12: Parallel NAS Appliance for HPC Workloads
Panasas ActiveStor 11 and 12: Parallel NAS Appliance for HPC WorkloadsPanasas
 
Netezza vs Teradata vs Exadata
Netezza vs Teradata vs ExadataNetezza vs Teradata vs Exadata
Netezza vs Teradata vs ExadataAsis Mohanty
 
Lenovo Data Migration Solutions Brief
Lenovo Data Migration Solutions BriefLenovo Data Migration Solutions Brief
Lenovo Data Migration Solutions BriefDataCore Software
 
Comparing Dell Compellent network-attached storage to an industry-leading NAS...
Comparing Dell Compellent network-attached storage to an industry-leading NAS...Comparing Dell Compellent network-attached storage to an industry-leading NAS...
Comparing Dell Compellent network-attached storage to an industry-leading NAS...Principled Technologies
 
Sun Storage F5100 Flash Array, Redefining Storage Performance and Efficiency-...
Sun Storage F5100 Flash Array, Redefining Storage Performance and Efficiency-...Sun Storage F5100 Flash Array, Redefining Storage Performance and Efficiency-...
Sun Storage F5100 Flash Array, Redefining Storage Performance and Efficiency-...Agora Group
 
Dell PowerEdge R930 with Oracle: The benefits of upgrading to Samsung NVMe PC...
Dell PowerEdge R930 with Oracle: The benefits of upgrading to Samsung NVMe PC...Dell PowerEdge R930 with Oracle: The benefits of upgrading to Samsung NVMe PC...
Dell PowerEdge R930 with Oracle: The benefits of upgrading to Samsung NVMe PC...Principled Technologies
 
Scaling Oracle 12c database performance with EMC XtremIO storage in a Databas...
Scaling Oracle 12c database performance with EMC XtremIO storage in a Databas...Scaling Oracle 12c database performance with EMC XtremIO storage in a Databas...
Scaling Oracle 12c database performance with EMC XtremIO storage in a Databas...Principled Technologies
 

Was ist angesagt? (20)

Seneca, Pittsburgh Supercomputer, and LSI
Seneca, Pittsburgh Supercomputer, and LSI Seneca, Pittsburgh Supercomputer, and LSI
Seneca, Pittsburgh Supercomputer, and LSI
 
INTERSPORT improves fitness and business flexibility
INTERSPORT improves  fitness and business  flexibilityINTERSPORT improves  fitness and business  flexibility
INTERSPORT improves fitness and business flexibility
 
Ibm v3700
Ibm v3700Ibm v3700
Ibm v3700
 
EMC XtremIO storage array 4.0 and VMware vSphere 6.0: Scaling mixed-database ...
EMC XtremIO storage array 4.0 and VMware vSphere 6.0: Scaling mixed-database ...EMC XtremIO storage array 4.0 and VMware vSphere 6.0: Scaling mixed-database ...
EMC XtremIO storage array 4.0 and VMware vSphere 6.0: Scaling mixed-database ...
 
Intel_Datagres_WhiitePaper_120315-1
Intel_Datagres_WhiitePaper_120315-1Intel_Datagres_WhiitePaper_120315-1
Intel_Datagres_WhiitePaper_120315-1
 
Engineered Systems: Oracle’s Vision for the Future
Engineered Systems: Oracle’s Vision for the FutureEngineered Systems: Oracle’s Vision for the Future
Engineered Systems: Oracle’s Vision for the Future
 
IMEX Research - Is Solid State Storage Ready for Enterprise & Cloud Computing...
IMEX Research - Is Solid State Storage Ready for Enterprise & Cloud Computing...IMEX Research - Is Solid State Storage Ready for Enterprise & Cloud Computing...
IMEX Research - Is Solid State Storage Ready for Enterprise & Cloud Computing...
 
Mitmepalgeline uus protsessor T4 SUN´i perekonnast - Karel Kannel
Mitmepalgeline uus protsessor T4 SUN´i perekonnast - Karel KannelMitmepalgeline uus protsessor T4 SUN´i perekonnast - Karel Kannel
Mitmepalgeline uus protsessor T4 SUN´i perekonnast - Karel Kannel
 
The Value of NetApp with VMware
The Value of NetApp with VMwareThe Value of NetApp with VMware
The Value of NetApp with VMware
 
IBM Storwize V7000 — unikátní virtualizační diskové pole
IBM Storwize V7000 — unikátní virtualizační diskové poleIBM Storwize V7000 — unikátní virtualizační diskové pole
IBM Storwize V7000 — unikátní virtualizační diskové pole
 
Converged infrastructure ucc
Converged infrastructure  uccConverged infrastructure  ucc
Converged infrastructure ucc
 
EMC Isilon Multitenancy for Hadoop Big Data Analytics
EMC Isilon Multitenancy for Hadoop Big Data AnalyticsEMC Isilon Multitenancy for Hadoop Big Data Analytics
EMC Isilon Multitenancy for Hadoop Big Data Analytics
 
Panasas ActiveStor 11 and 12: Parallel NAS Appliance for HPC Workloads
Panasas ActiveStor 11 and 12: Parallel NAS Appliance for HPC WorkloadsPanasas ActiveStor 11 and 12: Parallel NAS Appliance for HPC Workloads
Panasas ActiveStor 11 and 12: Parallel NAS Appliance for HPC Workloads
 
Netezza vs Teradata vs Exadata
Netezza vs Teradata vs ExadataNetezza vs Teradata vs Exadata
Netezza vs Teradata vs Exadata
 
Lenovo Data Migration Solutions Brief
Lenovo Data Migration Solutions BriefLenovo Data Migration Solutions Brief
Lenovo Data Migration Solutions Brief
 
Comparing Dell Compellent network-attached storage to an industry-leading NAS...
Comparing Dell Compellent network-attached storage to an industry-leading NAS...Comparing Dell Compellent network-attached storage to an industry-leading NAS...
Comparing Dell Compellent network-attached storage to an industry-leading NAS...
 
Sun Storage F5100 Flash Array, Redefining Storage Performance and Efficiency-...
Sun Storage F5100 Flash Array, Redefining Storage Performance and Efficiency-...Sun Storage F5100 Flash Array, Redefining Storage Performance and Efficiency-...
Sun Storage F5100 Flash Array, Redefining Storage Performance and Efficiency-...
 
Dell PowerEdge R930 with Oracle: The benefits of upgrading to Samsung NVMe PC...
Dell PowerEdge R930 with Oracle: The benefits of upgrading to Samsung NVMe PC...Dell PowerEdge R930 with Oracle: The benefits of upgrading to Samsung NVMe PC...
Dell PowerEdge R930 with Oracle: The benefits of upgrading to Samsung NVMe PC...
 
IBM System x3690 X5 Product Guide
IBM System x3690 X5 Product GuideIBM System x3690 X5 Product Guide
IBM System x3690 X5 Product Guide
 
Scaling Oracle 12c database performance with EMC XtremIO storage in a Databas...
Scaling Oracle 12c database performance with EMC XtremIO storage in a Databas...Scaling Oracle 12c database performance with EMC XtremIO storage in a Databas...
Scaling Oracle 12c database performance with EMC XtremIO storage in a Databas...
 

Andere mochten auch

Andere mochten auch (8)

IPv6 Transition,Transcición IPv6
IPv6 Transition,Transcición IPv6IPv6 Transition,Transcición IPv6
IPv6 Transition,Transcición IPv6
 
Caso de Estudio SAP sobre VMware en SATECA
Caso de Estudio SAP sobre VMware en SATECACaso de Estudio SAP sobre VMware en SATECA
Caso de Estudio SAP sobre VMware en SATECA
 
Guía para padres de protección infantil en internet
Guía para padres de protección infantil en internetGuía para padres de protección infantil en internet
Guía para padres de protección infantil en internet
 
Cisco Rock Night SMB
Cisco Rock Night   SMBCisco Rock Night   SMB
Cisco Rock Night SMB
 
Ruckus Wireless - Guia de productos en Español
Ruckus Wireless - Guia de productos en EspañolRuckus Wireless - Guia de productos en Español
Ruckus Wireless - Guia de productos en Español
 
Caso de Estudio SAP sobre VMware en Hierro Barquisimeto
Caso de Estudio SAP sobre VMware en Hierro BarquisimetoCaso de Estudio SAP sobre VMware en Hierro Barquisimeto
Caso de Estudio SAP sobre VMware en Hierro Barquisimeto
 
VMware - Contactando El Soporte Técnico De VMware
VMware - Contactando El Soporte Técnico De VMwareVMware - Contactando El Soporte Técnico De VMware
VMware - Contactando El Soporte Técnico De VMware
 
Caso de Estudio SAP sobre VMware en Greentech / Siragon
Caso de Estudio SAP sobre VMware en Greentech / SiragonCaso de Estudio SAP sobre VMware en Greentech / Siragon
Caso de Estudio SAP sobre VMware en Greentech / Siragon
 

Ähnlich wie Measuring Nexsan Performance and Compatibility in Virtualized Environments

Automated SAN Storage Tiering: Four Use Cases - Dell 8 sept 2010
Automated SAN Storage Tiering: Four Use Cases - Dell 8 sept 2010Automated SAN Storage Tiering: Four Use Cases - Dell 8 sept 2010
Automated SAN Storage Tiering: Four Use Cases - Dell 8 sept 2010Agora Group
 
The Apache Spark config behind the indsutry's first 100TB Spark SQL benchmark
The Apache Spark config behind the indsutry's first 100TB Spark SQL benchmarkThe Apache Spark config behind the indsutry's first 100TB Spark SQL benchmark
The Apache Spark config behind the indsutry's first 100TB Spark SQL benchmarkLenovo Data Center
 
The Best Infrastructure for OpenStack: VMware vSphere and Virtual SAN
The Best Infrastructure for OpenStack: VMware vSphere and Virtual SANThe Best Infrastructure for OpenStack: VMware vSphere and Virtual SAN
The Best Infrastructure for OpenStack: VMware vSphere and Virtual SANEMC
 
Cost and performance comparison for OpenStack compute and storage infrastructure
Cost and performance comparison for OpenStack compute and storage infrastructureCost and performance comparison for OpenStack compute and storage infrastructure
Cost and performance comparison for OpenStack compute and storage infrastructurePrincipled Technologies
 
Scale-on-Scale : Part 1 of 3 - Production Environment
Scale-on-Scale : Part 1 of 3 - Production EnvironmentScale-on-Scale : Part 1 of 3 - Production Environment
Scale-on-Scale : Part 1 of 3 - Production EnvironmentScale Computing
 
Intel and MariaDB: web-scale applications with distributed logs
Intel and MariaDB: web-scale applications with distributed logsIntel and MariaDB: web-scale applications with distributed logs
Intel and MariaDB: web-scale applications with distributed logsMariaDB plc
 
Using ibm total storage productivity center for disk to monitor the svc redp3961
Using ibm total storage productivity center for disk to monitor the svc redp3961Using ibm total storage productivity center for disk to monitor the svc redp3961
Using ibm total storage productivity center for disk to monitor the svc redp3961Banking at Ho Chi Minh city
 
Cloud-Ready, Scale-Out Storage
Cloud-Ready, Scale-Out StorageCloud-Ready, Scale-Out Storage
Cloud-Ready, Scale-Out Storageryanwakeling
 
Demartek lenovo s3200_mixed_workload_environment_2016-01
Demartek lenovo s3200_mixed_workload_environment_2016-01Demartek lenovo s3200_mixed_workload_environment_2016-01
Demartek lenovo s3200_mixed_workload_environment_2016-01Lenovo Data Center
 
Storage Virtualization Introduction
Storage Virtualization IntroductionStorage Virtualization Introduction
Storage Virtualization IntroductionStephen Foskett
 
Demartek Lenovo Storage S3200 i a mixed workload environment_2016-01
Demartek Lenovo Storage S3200  i a mixed workload environment_2016-01Demartek Lenovo Storage S3200  i a mixed workload environment_2016-01
Demartek Lenovo Storage S3200 i a mixed workload environment_2016-01Lenovo Data Center
 
Drive new initiatives with a powerful Dell EMC, Nutanix, and Toshiba solution
Drive new initiatives with a powerful Dell EMC, Nutanix, and Toshiba solutionDrive new initiatives with a powerful Dell EMC, Nutanix, and Toshiba solution
Drive new initiatives with a powerful Dell EMC, Nutanix, and Toshiba solutionPrincipled Technologies
 
h12525-top-reasons-scaleio-ho
h12525-top-reasons-scaleio-hoh12525-top-reasons-scaleio-ho
h12525-top-reasons-scaleio-hoReece Gaumont
 

Ähnlich wie Measuring Nexsan Performance and Compatibility in Virtualized Environments (20)

Application Report
Application ReportApplication Report
Application Report
 
1
11
1
 
Automated SAN Storage Tiering: Four Use Cases - Dell 8 sept 2010
Automated SAN Storage Tiering: Four Use Cases - Dell 8 sept 2010Automated SAN Storage Tiering: Four Use Cases - Dell 8 sept 2010
Automated SAN Storage Tiering: Four Use Cases - Dell 8 sept 2010
 
The Apache Spark config behind the indsutry's first 100TB Spark SQL benchmark
The Apache Spark config behind the indsutry's first 100TB Spark SQL benchmarkThe Apache Spark config behind the indsutry's first 100TB Spark SQL benchmark
The Apache Spark config behind the indsutry's first 100TB Spark SQL benchmark
 
IBM System Storage SAN Volume Controller
IBM System Storage SAN Volume ControllerIBM System Storage SAN Volume Controller
IBM System Storage SAN Volume Controller
 
1
11
1
 
The Best Infrastructure for OpenStack: VMware vSphere and Virtual SAN
The Best Infrastructure for OpenStack: VMware vSphere and Virtual SANThe Best Infrastructure for OpenStack: VMware vSphere and Virtual SAN
The Best Infrastructure for OpenStack: VMware vSphere and Virtual SAN
 
Cost and performance comparison for OpenStack compute and storage infrastructure
Cost and performance comparison for OpenStack compute and storage infrastructureCost and performance comparison for OpenStack compute and storage infrastructure
Cost and performance comparison for OpenStack compute and storage infrastructure
 
Scale-on-Scale : Part 1 of 3 - Production Environment
Scale-on-Scale : Part 1 of 3 - Production EnvironmentScale-on-Scale : Part 1 of 3 - Production Environment
Scale-on-Scale : Part 1 of 3 - Production Environment
 
Cosbench apac
Cosbench apacCosbench apac
Cosbench apac
 
Intel and MariaDB: web-scale applications with distributed logs
Intel and MariaDB: web-scale applications with distributed logsIntel and MariaDB: web-scale applications with distributed logs
Intel and MariaDB: web-scale applications with distributed logs
 
Using ibm total storage productivity center for disk to monitor the svc redp3961
Using ibm total storage productivity center for disk to monitor the svc redp3961Using ibm total storage productivity center for disk to monitor the svc redp3961
Using ibm total storage productivity center for disk to monitor the svc redp3961
 
Cloud-Ready, Scale-Out Storage
Cloud-Ready, Scale-Out StorageCloud-Ready, Scale-Out Storage
Cloud-Ready, Scale-Out Storage
 
Demartek lenovo s3200_mixed_workload_environment_2016-01
Demartek lenovo s3200_mixed_workload_environment_2016-01Demartek lenovo s3200_mixed_workload_environment_2016-01
Demartek lenovo s3200_mixed_workload_environment_2016-01
 
Introducing Mache
Introducing MacheIntroducing Mache
Introducing Mache
 
Storage Virtualization Introduction
Storage Virtualization IntroductionStorage Virtualization Introduction
Storage Virtualization Introduction
 
Demartek Lenovo Storage S3200 i a mixed workload environment_2016-01
Demartek Lenovo Storage S3200  i a mixed workload environment_2016-01Demartek Lenovo Storage S3200  i a mixed workload environment_2016-01
Demartek Lenovo Storage S3200 i a mixed workload environment_2016-01
 
Drive new initiatives with a powerful Dell EMC, Nutanix, and Toshiba solution
Drive new initiatives with a powerful Dell EMC, Nutanix, and Toshiba solutionDrive new initiatives with a powerful Dell EMC, Nutanix, and Toshiba solution
Drive new initiatives with a powerful Dell EMC, Nutanix, and Toshiba solution
 
IBM SONAS Brochure
IBM SONAS BrochureIBM SONAS Brochure
IBM SONAS Brochure
 
h12525-top-reasons-scaleio-ho
h12525-top-reasons-scaleio-hoh12525-top-reasons-scaleio-ho
h12525-top-reasons-scaleio-ho
 

Mehr von Suministros Obras y Sistemas

Nexsan E5000 Family / Familia E5000 Nexsan / Enterprise NAS
Nexsan E5000 Family / Familia E5000 Nexsan / Enterprise NASNexsan E5000 Family / Familia E5000 Nexsan / Enterprise NAS
Nexsan E5000 Family / Familia E5000 Nexsan / Enterprise NASSuministros Obras y Sistemas
 
Veeam diferencias entre versión Standard y Enterprise de Backup & Replication
Veeam diferencias entre versión Standard y Enterprise de Backup & ReplicationVeeam diferencias entre versión Standard y Enterprise de Backup & Replication
Veeam diferencias entre versión Standard y Enterprise de Backup & ReplicationSuministros Obras y Sistemas
 
VMware Corporate Overview Presentation 2001, VMware Perspectiva Corporativa
VMware Corporate Overview Presentation 2001, VMware Perspectiva CorporativaVMware Corporate Overview Presentation 2001, VMware Perspectiva Corporativa
VMware Corporate Overview Presentation 2001, VMware Perspectiva CorporativaSuministros Obras y Sistemas
 
Cisco Centro de Datos de proxima generación, Cisco Data Center Nex Generation
Cisco Centro de Datos de proxima generación, Cisco Data Center Nex GenerationCisco Centro de Datos de proxima generación, Cisco Data Center Nex Generation
Cisco Centro de Datos de proxima generación, Cisco Data Center Nex GenerationSuministros Obras y Sistemas
 
Veeam Product info - Backup Standard vs. Enterprise Edition
Veeam Product info -  Backup Standard vs. Enterprise EditionVeeam Product info -  Backup Standard vs. Enterprise Edition
Veeam Product info - Backup Standard vs. Enterprise EditionSuministros Obras y Sistemas
 
VMware Recovery: 77x Faster! NEW ESG Lab Review, with Veeam Backup & Replication
VMware Recovery: 77x Faster! NEW ESG Lab Review, with Veeam Backup & ReplicationVMware Recovery: 77x Faster! NEW ESG Lab Review, with Veeam Backup & Replication
VMware Recovery: 77x Faster! NEW ESG Lab Review, with Veeam Backup & ReplicationSuministros Obras y Sistemas
 

Mehr von Suministros Obras y Sistemas (20)

Cisco Rock Night - UCS & VXI
Cisco Rock Night -  UCS & VXICisco Rock Night -  UCS & VXI
Cisco Rock Night - UCS & VXI
 
ESG Brief Nexsan Gets Its NAS
ESG Brief Nexsan Gets Its NASESG Brief Nexsan Gets Its NAS
ESG Brief Nexsan Gets Its NAS
 
SAP Solution On VMware - Best Practice Guide 2011
SAP Solution On VMware - Best Practice Guide 2011SAP Solution On VMware - Best Practice Guide 2011
SAP Solution On VMware - Best Practice Guide 2011
 
Cisco Catalyst Poster
Cisco Catalyst PosterCisco Catalyst Poster
Cisco Catalyst Poster
 
Nexsan E5000 Family / Familia E5000 Nexsan / Enterprise NAS
Nexsan E5000 Family / Familia E5000 Nexsan / Enterprise NASNexsan E5000 Family / Familia E5000 Nexsan / Enterprise NAS
Nexsan E5000 Family / Familia E5000 Nexsan / Enterprise NAS
 
Fortinet Fortivoice - Solucion de UTM + VoIP
Fortinet Fortivoice - Solucion de UTM + VoIPFortinet Fortivoice - Solucion de UTM + VoIP
Fortinet Fortivoice - Solucion de UTM + VoIP
 
Veeam diferencias entre versión Standard y Enterprise de Backup & Replication
Veeam diferencias entre versión Standard y Enterprise de Backup & ReplicationVeeam diferencias entre versión Standard y Enterprise de Backup & Replication
Veeam diferencias entre versión Standard y Enterprise de Backup & Replication
 
Veeam Resumen de productos
Veeam Resumen de productosVeeam Resumen de productos
Veeam Resumen de productos
 
VMware Corporate Overview Presentation 2001, VMware Perspectiva Corporativa
VMware Corporate Overview Presentation 2001, VMware Perspectiva CorporativaVMware Corporate Overview Presentation 2001, VMware Perspectiva Corporativa
VMware Corporate Overview Presentation 2001, VMware Perspectiva Corporativa
 
Cisco Centro de Datos de proxima generación, Cisco Data Center Nex Generation
Cisco Centro de Datos de proxima generación, Cisco Data Center Nex GenerationCisco Centro de Datos de proxima generación, Cisco Data Center Nex Generation
Cisco Centro de Datos de proxima generación, Cisco Data Center Nex Generation
 
VCON xPoint S Briefing
VCON xPoint S BriefingVCON xPoint S Briefing
VCON xPoint S Briefing
 
Veeam Product info - Backup Standard vs. Enterprise Edition
Veeam Product info -  Backup Standard vs. Enterprise EditionVeeam Product info -  Backup Standard vs. Enterprise Edition
Veeam Product info - Backup Standard vs. Enterprise Edition
 
Veeam Sure Backup - Presentación Técnica
Veeam Sure Backup - Presentación TécnicaVeeam Sure Backup - Presentación Técnica
Veeam Sure Backup - Presentación Técnica
 
VMware Recovery: 77x Faster! NEW ESG Lab Review, with Veeam Backup & Replication
VMware Recovery: 77x Faster! NEW ESG Lab Review, with Veeam Backup & ReplicationVMware Recovery: 77x Faster! NEW ESG Lab Review, with Veeam Backup & Replication
VMware Recovery: 77x Faster! NEW ESG Lab Review, with Veeam Backup & Replication
 
Veeam nWorks Management Pack Español
Veeam nWorks Management Pack EspañolVeeam nWorks Management Pack Español
Veeam nWorks Management Pack Español
 
Veeam nWorks Management Pack Español
Veeam nWorks Management Pack EspañolVeeam nWorks Management Pack Español
Veeam nWorks Management Pack Español
 
Veeam nWorks Smart Plug-in Español
Veeam nWorks Smart Plug-in EspañolVeeam nWorks Smart Plug-in Español
Veeam nWorks Smart Plug-in Español
 
Veeam VMware MP Español
Veeam VMware MP EspañolVeeam VMware MP Español
Veeam VMware MP Español
 
Veeam Fastscp Español
Veeam Fastscp EspañolVeeam Fastscp Español
Veeam Fastscp Español
 
Veeam vPower Español
Veeam vPower EspañolVeeam vPower Español
Veeam vPower Español
 

Kürzlich hochgeladen

Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityIES VE
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxLoriGlavin3
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfMounikaPolabathina
 
Manual 508 Accessibility Compliance Audit
Manual 508 Accessibility Compliance AuditManual 508 Accessibility Compliance Audit
Manual 508 Accessibility Compliance AuditSkynet Technologies
 
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...Alkin Tezuysal
 
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfSo einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfpanagenda
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality AssuranceInflectra
 
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxLoriGlavin3
 
Testing tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesTesting tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesKari Kakkonen
 
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...Scott Andery
 
Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Farhan Tariq
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxLoriGlavin3
 
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesHow to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesThousandEyes
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxLoriGlavin3
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteDianaGray10
 
Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Hiroshi SHIBATA
 
Data governance with Unity Catalog Presentation
Data governance with Unity Catalog PresentationData governance with Unity Catalog Presentation
Data governance with Unity Catalog PresentationKnoldus Inc.
 
Emixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native developmentEmixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native developmentPim van der Noll
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Strongerpanagenda
 

Kürzlich hochgeladen (20)

Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a reality
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdf
 
Manual 508 Accessibility Compliance Audit
Manual 508 Accessibility Compliance AuditManual 508 Accessibility Compliance Audit
Manual 508 Accessibility Compliance Audit
 
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
 
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfSo einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
 
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
 
Testing tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesTesting tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examples
 
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
 
Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
 
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesHow to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024
 
Data governance with Unity Catalog Presentation
Data governance with Unity Catalog PresentationData governance with Unity Catalog Presentation
Data governance with Unity Catalog Presentation
 
Emixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native developmentEmixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native development
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
 

Measuring Nexsan Performance and Compatibility in Virtualized Environments

  • 1. openBench Labs Data Center Converged SAN Infrastructure Measuring Nexsan Performance and Analysis: Compatibility in Virtualized Environments
  • 2. Analysis: Measuring Nexsan Performance and Compatibility in Virtualized Environments Author: Jack Fegreus, Ph.D. Managing Director openBench Labs http://www.openBench.com September 15, 2010 Jack Fegreus is Managing Director of openBench Labs and consults through Ridgetop Research. He also contributes to InfoStor, Virtual Strategy Magazine, and Open Magazine, and serves as CTO of Strategic Communications. Previously he was Editor in Chief of Open Magazine, Data Storage, BackOffice CTO, Client/Server Today, and Digital Review. Jack also served as a consultant to Demax Software and was IT Director at Riley Stoker Corp. Jack holds a Ph.D. in Mathematics and worked on the application of computers to symbolic logic.
  • 3. Table of Contents Table of Contents Executive Summary 04 VOE Test Scenario 06 SASBeast Performance Spectrum 13 Customer Value 19 03
  • 4. Executive Summary Executive Summary “For an SME site to successfullycapable of VOEVOE, Nexsantoprovides a storage infrastructure that is implement a characteristic I/O patterns that distinguish a efficiently supporting the host server deliver optimal performance.” The mission of IT is to get the right information to the right people in time in order to create value or mitigate risk. With this in mind, the growing use of digital archiving, rich media in corporate applications, and a Virtual Operating Environment (VOE), such as VMware® vSphere™, is driving double-digit growth in the volume of data stored. That has made data storage the cornerstone of IT strategic plans for reducing capital expense (CapEx) and operating expense (OpEx) resource costs. To meet the needs of IT at openBench Labs Test Briefing: small to medium enterprise (SME) sites, Nexsan has evolved Nexsan SASBeast® Enterprise Storage Array its line of SASBeast™ storage 1) Enhance Administrator Productivity: An embedded WEB-based utility arrays around highly flexible enables the management of multiple storage arrays from one interface, software architecture that can be which can be integrated with the Microsoft Management Console and the integrated to the point of Virtual Disk Service to provide a complete single-pane-of-glass storage transparency in a Windows Server management interface. 2008 R2 environment. These 2) Maximize Density and Reliability with Hierarchical Storage: 4U chassis Nexsan arrays can support a full supports any mix of 42 SSD, SAS, and SATA drives mounted vertically— hierarchy of SSD, SAS, and SATA front-to-front and back-to-back—to cancel rotational vibrations, reduce head positioning errors, optimize thermal operations, and extend drive life. drives in complex SAN fabrics that utilize both Fibre Channel 3) Maximize Energy Savings: AutoMAID® (Massive Array of Idle Disks) technology automates the placing of drives in a hierarchy of idle states to and iSCSI paths. For SME sites, a conserve energy, while maintaining near-instantaneous access to data. SASBeast can provide multiple storage targets that support a wide 4) Maximize I/O Performance: Dual active-active RAID controllers support 42 range of application-specific simultaneously active drives: requirements stemming from Iometer Streaming I/O Benchmark: Total full-duplex throughput reached Service Level Agreements (SLAs). 1GB per second, while simultaneously streaming 128KB reads and 128KB writes using three SAS- and one SATA-based RAID-5 volumes. The robust architecture of the Iometer I/O Operations Benchmark: 4KB reads and writes (80/20 percent mix), averaged 2,330 IOPS on a SAS RAID-5 volume and scaled Nexsan SASBeast provides IT to 4,380 IOPS with a second volume. with a single platform that can satisfy a wide range of storage metrics with respect to access (IOPS), throughput (MB per second), or capacity (price per GB). Nonetheless, helping IT contain traditional CapEx provisioning costs through the deployment of hierarchical storage resources is only the starting point for the business value proposition of the SASBeast. Through tight integration of the Nexsan Management Console with the 04
• 5. Executive Summary

Through tight integration of the Nexsan Management Console with the Microsoft Management Console (MMC) and Virtual Disk Services (VDS), a Nexsan SASBeast presents IT administrators with a unified SAN management suite to cost-effectively manage the reliability, availability and scalability (RAS) of multiple petabytes of data. This is particularly important for OpEx costs, which typically are 30 to 40 times greater than CapEx costs over the life of a storage array.

Nexsan’s simplified storage management and task automation is particularly important when implementing a VOE, which introduces a complete virtualization scheme involving servers, storage, and networks. VOE virtualization with multiple levels of abstraction can complicate important IT administration functions. Reflecting these problems, IDG, in a recent survey of CIOs implementing server virtualization, reported that the percentage of CIOs citing an increase in the complexity of datacenter management jumped from 47 percent at the end of 2008 to 67 percent at the end of 2009.

Virtualization difficulties are often exacerbated by multiple incompatible advanced point solutions, which often come as extra-cost options of storage products. These powerful proprietary features are particularly problematic for IT at SME sites. Features designed to resolve complex issues encountered in large datacenters frequently only introduce incompatibilities among interdependent resources and limit the benefits that SME sites can garner from a VOE, which independently provides IT with sufficiently robust and easy-to-use solutions to deal with the intricacies of hypervisor architecture.

For an SME site to successfully implement a VOE, Nexsan provides a storage infrastructure that is capable of efficiently supporting the characteristic I/O patterns that distinguish a VOE host server. With such a foundation in place, IT is free to use the comprehensive virtualization features of their VOE to provision resources for VMs, commission and decommission VM applications, and migrate VMs among multiple hosts in real time to meet changing resource demands. What’s more, advanced third-party applications designed for a VOE are far more likely to recognize VOE solutions for such issues as thin provisioning than hardware-specific solutions.

To meet the exacting demands of multiple IT environments, including that of a VOE, a Nexsan SASBeast provides IT with a storage resource fully optimized for reliability and performance. Each physical unit features design innovations to extend the lifespan of installed disk drives. The design of the SASBeast also promotes infrastructure scale-out, as each additional unit also adds controllers and ports to maintain performance. More importantly, the scaling-out of a storage infrastructure with SASBeast units has a minimal impact on IT overhead, which is the key driver of OpEx costs.

Each SASBeast comes with an embedded WEB-enabled Graphical User Interface (GUI), dubbed NexScan®, which allows IT to provision a single subsystem with a hierarchy of drive types. Furthermore, NexScan simplifies administrator tasks in a Windows Server environment through tight integration of its management software with MMC and VDS for end-to-end storage management. With NexScan, an administrator can provision a logical disk on any SASBeast, export it to a Windows Server, and provision the server with that logical disk from a single interface.
• 6. VOE Test Scenario

“While working with a range of Windows tools for administrators, such as Server Manager and Storage Manager for SANs, we were able to directly configure and manage storage LUNs for both the FC and the iSCSI fabrics without having to open up a separate Nexsan GUI.”

I/O RANDOMIZATION IN A VOE

With server virtualization rated as one of the best ways to optimize resource utilization and minimize the costs of IT operations, many sites run eight or more server VMs on each host in a production VOE. As a result, VOE host servers must be able to deliver higher I/O throughput loads via fewer physical connections. In stark contrast to a VOE, traditional core-driven SAN fabrics are characterized by a few storage devices with connections that fan out over multiple physical servers. Each server generates a modest I/O stream and multiple servers seldom access the same data simultaneously.

From a business software perspective, the I/O requirements for a VM are the same as those for a physical server. From an IT administrator’s perspective, however, I/O requirements for VOE hosts are dramatically different. In a VOE, a small number of hosts share a small number of large datastores, while the hosts aggregate and randomize all of the I/O from multiple VMs. Elevated I/O stress also impacts the I/O requirements of VOE support servers. In particular, servers used for backup should be capable of handling the logical disks associated with multiple hosted VMs in parallel.
• 7. VOE Test Scenario

At the heart of our vSphere 4.1 VOE test environment, we ran a mix of twelve server and workstation VMs. To provide a typical SME infrastructure, we utilized a Nexsan SASBeast, along with a hybrid SAN topology that featured a 4Gbps FC fabric and a 1GbE iSCSI fabric. With the price convergence of 8Gbps and 4Gbps FC HBAs, SASBeast systems are now shipping with a new generation of 8Gbps FC ports.

NEXSAN ISCSI & FC CONVERGENCE: Among the ways that Nexsan simplifies SAN management is through the convergence of iSCSI and Fibre Channel devices. Nexsan treats these connections as two simple transport options. Within the Nexsan GUI, we readily shared volumes that were used as VOE datastores by ESXi hosts and a Windows server that was used to run backup software. More importantly, the SASBeast facilitated our ability to switch between a local Disaster Recovery (DR) scenario, in which both the VOE host and the Windows server connected to the datastore volume over the FC fabric, and a remote DR scenario, in which the Windows server connected to the datastore via our iSCSI fabric.

While working with a range of Windows tools for administrators, such as Server Manager and Storage Manager for SANs, we were able to directly configure and manage storage LUNs for both the FC and the iSCSI fabrics without having to open up a separate Nexsan GUI. What’s more, the Nexsan software, which treats the virtualization of storage over FC and iSCSI fabrics as simple transport options, made it very easy to switch back and forth between fabric connections.

While physically setting up the SASBeast, a number of design elements that enhance reliability stood out.
• 8. VOE Test Scenario

Storage reliability is especially important in a VOE, as impaired array processing during a rebuild, or worse the loss of an array, cascades from one host server down to multiple VMs. To extend the service life of disk drives, the SASBeast positions disks vertically in opposing order—alternating between front-to-front and then back-to-back. This layout dampens the natural rotational vibrations generated by each drive. Mounting all of the drives in parallel tends to amplify these vibrations and induce head positioning errors on reads and writes. Head positioning errors are particularly problematic in a VOE, which is characterized by random small-block I/O requests. In such an environment, data access time plays a greater role with respect to transfer time for I/O services.

That vertical disk layout also helps create a positive-pressure air flow inside the unit. High-efficiency waste heat transfer in a storage chassis is dependent on molecular contact as air flows over external drive surfaces. As a result, air pressure is just as important as air flow for proper cooling.

To facilitate testing, our Nexsan SASBeast was provisioned with twenty-eight 15K rpm SAS drives and fourteen 2TB SATA drives. To configure internal arrays and provide external access to target volumes, our SASBeast was set up with two controllers, which could be used to create both SAS and SATA arrays and which featured a pair of FC and a pair of iSCSI ports per controller. Also embedded in the unit was version 1.5.4 of the Nexsan management software, which we tightly integrated with MMC on all of our Windows-based servers.

Using this storage infrastructure, we were able to provide our VOE and physical server environments with storage hierarchies designed to meet robust sets of application-specific SLA metrics. In particular, we configured three RAID arrays on the SASBeast: Two arrays utilized SAS drives and one array utilized SATA drives. For optimal IOPS performance, we created a 7.2TB RAID-5 SAS array on controller 0. Then on controller 1, we created a 6.6TB RAID-6 SAS array for higher availability.

VOE CONSOLIDATION

We implemented our vSphere™ 4.1 VOE on a quad-processor HP ProLiant® DL580 server running the VMware ESXi™ 4.1 hypervisor. This server hosted 12 VMs running a mix of operating systems, which included Windows Server® 2008, Windows Server 2003, SUSE Linux Enterprise Server 11, and Windows 7. Within our VOE, we set up a storage hierarchy that was undergirded by three central datastores, one created on each of the three arrays set up on the Nexsan SASBeast.

To manage VM backups, we installed Veeam Backup & Replication v4.1 on a quad-core Dell® PowerEdge® 1900 server, which ran Windows Server 2008 R2 and shared access to each datastore mapped to the VOE host. We tested datastore access over both our FC fabric, which represented a local DR scenario, and over our iSCSI fabric, which represented a remote DR scenario. In addition, we mapped another RAID-5 SATA volume to the Dell PowerEdge server to store backup images of VMs.
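As a sanity check on the array capacities above, the short sketch below reproduces the 7.2TB and 6.6TB figures from standard RAID arithmetic. The assumption of thirteen 600GB drives per SAS array is ours; the report does not state the individual drive size or the per-array drive count.

```python
def raid_usable_tb(drive_count, drive_size_gb, parity_drives):
    """Usable capacity of a RAID set in decimal TB.

    parity_drives: 1 for RAID-5, 2 for RAID-6.
    """
    data_drives = drive_count - parity_drives
    return data_drives * drive_size_gb / 1000.0

# Hypothetical layout: 13 x 600GB 15K SAS drives per array (assumed, not documented).
print(raid_usable_tb(13, 600, parity_drives=1))  # RAID-5 -> 7.2 TB, matches the report
print(raid_usable_tb(13, 600, parity_drives=2))  # RAID-6 -> 6.6 TB, matches the report
```

Whatever drive size was actually used, the single extra parity drive is why the RAID-6 array gives up roughly 8 percent of the RAID-5 array's usable capacity in exchange for the ability to survive a second drive failure.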
• 9. VOE Test Scenario

The number of VMs that typically run on a VOE host, along with the automated movement of those VMs among hosts as a means to balance processing loads, puts a premium on storage resources with low I/O latency. What’s more, increasing the VM density on a host also serves to further randomize I/O requests as the host consolidates multiple data streams from multiple VMs.

NEXSAN VOE OPTIMIZATION

To handle the randomized I/O patterns of a VOE host with multiple VMs, the Nexsan SASBeast provides a highly flexible storage infrastructure. In addition to being able to provision a SASBeast with a wide range of disk drive types, administrators have an equally broad choice of options from which to configure access policies for internal arrays and external volumes. Using a Nexsan SASBeast with dual controllers, IT administrators are free to assign any RAID array that is created to either controller. Independent of array ownership, administrators set up how logical volumes created on those arrays are accessed over Fibre Channel and iSCSI SAN fabrics. In particular, a SASBeast with two controllers and two FC ports on each controller can present four distinct paths to each volume exposed.

To maximize performance in our VOE, we biased I/O caching on the SASBeast for random access. As the host consolidates the I/O streams of multiple VMs, sequential I/Os from individual VMs are interleaved, with the result that random access becomes a key I/O characteristic for a VOE host. We also set up an I/O multipathing scheme on the Nexsan SASBeast that allowed us to map any array volume to one or all Fibre Channel and all iSCSI ports.

Without any external constraints, four distinct paths to a storage device will create what appear to be four independent devices on the client system. That presents an intolerable situation to most operating systems, which assume exclusive ownership of a device. To resolve this problem, the simple solution is to use an active-passive scheme for ports and controllers that enables only one path at a time. That solution, however, precludes load balancing and link aggregation.

Nexsan provides IT administrators with a number of options for sophisticated load balancing via multipath I/O (MPIO). The range of options for each unit is set within the Nexsan GUI.
• 10. VOE Test Scenario

Volume access can be restricted to a simple non-redundant controller setup or can be allowed to utilize all ports and all LUNs (APAL). The latter configuration provides the greatest flexibility and protection and is the only configuration to support iSCSI failover.

ASYMMETRIC MPIO

To provide a high-performance scale-out storage architecture, each SASBeast supports two internal controllers that are each capable of supporting two external FC ports and two external iSCSI ports. When a RAID array is created, it is assigned a master controller to service the array. If the SASBeast is placed in APAL mode, IT administrators can map any volume to all of the FC and iSCSI ports as a load balancing and failover scheme. In this situation, I/O requests directed to the other controller incur the added overhead needed to switch control of the array.

To garner the best I/O performance in a high-throughput, low-latency environment, a host must be able to implement a sophisticated load balancing scheme that distinguishes the two ports on the controller servicing the volume from the two ports on the other controller. The key is to avoid the overhead of switching controllers.

To meet this challenge, Nexsan implements Asymmetric Logical Unit Access (ALUA) when exporting target volumes. The Nexsan device identifies the paths that are active and optimized (i.e., paths that connect to a port on the controller servicing the device) and paths that are active but are not optimized. Nonetheless, for this sophisticated MPIO mechanism to work, it must be recognized by the host operating system that is using the SASBeast as a target. Both Windows Server 2008, which uses a sophisticated MPIO driver module from Nexsan, and the vSphere 4.1 hypervisors, ESXi 4.1 and ESX 4.1, recognize the Nexsan SASBeast as an ALUA target.

VSPHERE 4.1 ASYMMETRIC MPIO DISCOVERY: When either an ESX or an ESXi 4.1 hypervisor discovers a volume exposed by the Nexsan SASBeast, it defaults to making only one path active for I/O. We changed this default to round robin access. When this change was made, the new drivers in ESXi 4.1 did a SCSI inquiry on the FC volume and discovered that the Nexsan was an ALUA device. As a result, the hypervisor set up the four paths to the servicing controller as active optimized and the four paths to the other controller as non-optimized. Using ESX or ESXi 4.0, all eight paths would be set as equally active for I/O.
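The path-selection behavior described above, round-robin across only the active/optimized ports with the non-optimized ports held in reserve for controller failover, can be summarized in a short conceptual sketch. This is a model of the policy, not Nexsan or VMware code; the port names and the eight-path layout are assumptions based on the dual-controller configuration under test.

```python
from dataclasses import dataclass
from itertools import cycle

@dataclass
class Path:
    name: str
    controller: int     # controller the port belongs to
    alua_state: str     # "active_optimized" or "active_nonoptimized"

def build_paths(owning_controller: int):
    """Eight paths to one LUN (four per controller, per the ESXi 4.1 discovery
    described above); ports on the owning controller report as active/optimized."""
    return [Path(f"c{c}:port{p}", c,
                 "active_optimized" if c == owning_controller else "active_nonoptimized")
            for c in (0, 1) for p in range(4)]

def round_robin_with_subset(paths):
    """Rotate I/O over the optimized subset only; fall back to the remaining
    active paths if the owning controller's ports disappear (controller failover)."""
    optimized = [p for p in paths if p.alua_state == "active_optimized"]
    return cycle(optimized or paths)

# LUN owned by controller 0: all I/O stays on c0 ports, avoiding ownership switches.
selector = round_robin_with_subset(build_paths(owning_controller=0))
for _ in range(6):
    print(next(selector).name)   # c0:port0, c0:port1, c0:port2, c0:port3, c0:port0, ...
```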
• 11. VOE Test Scenario

As a result, IT administrators can set an MPIO policy on any host running one of these operating systems that takes advantage of knowing which SAN paths connect to the controller servicing a logical drive.

VSPHERE ALUA PERFORMANCE

On Windows Server 2008, the base asymmetric access policy is dubbed Round Robin with Subset. This policy transmits I/O requests only to ports on the servicing controller. Should the servicing controller fail, Nexsan passes array servicing to the other controller in the SASBeast and the host computer automatically starts sending I/O requests to the active ports on the new servicing controller.

To understand how the driver changes in the new VMware hypervisors impact host I/O throughput, we monitored FC data traffic at the switch ports connected to the Nexsan SASBeast and the VOE host. We tested I/O throughput by migrating a server VM from a datastore created on the SAS RAID-6 array, which was serviced by controller 1, to a datastore created on the SAS RAID-5 array, which was serviced by controller 0. We ran this test twice: once with the VOE host running ESXi 4.1 and once with the host running ESXi 4.0 Update 2.

Running either ESXi 4.0 or ESXi 4.1, the host server balanced all read and write requests over all of its FC ports; however, the I/O response patterns on the SASBeast were dramatically different for the two hypervisors: ESXi 4.1 transmitted I/O requests exclusively to the FC ports on the controller servicing a target volume. In particular, when a VOE host was running ESXi 4.1, the host only directed reads to controller 1, which serviced the SAS RAID-6 volume, and writes to controller 0, which serviced the SAS RAID-5 volume.

Upgrading to vSphere 4.1 from vSphere 4.0 boosted I/O throughput by 20% for VMs resident on a datastore imported from the SASBeast. More importantly for IT OpEx costs, gains in I/O throughput required a simple change in the MPIO policy for each datastore imported from the SASBeast.
• 12. VOE Test Scenario

In contrast, read and write data requests were transmitted to all of the FC ports on both of the SASBeast controllers equally when the host was running ESXi 4.0. With all I/O requests directed equally across all FC ports under ESXi 4.0, throughput at each undistinguished port was highly variable as I/O requests arrived for both disks serviced by the controller and disks serviced by the other controller. As a result, I/O throughput averaged about 200MB per second.

On the other hand, with our VOE host running ESXi 4.1, I/O requests for a logical disk from the SASBeast were only directed to and balanced over the FC ports on the controller servicing that disk. In this situation, full duplex reads and writes averaged 240MB per second as we migrated our VM from one datastore to another. For IT operations, I/O throughput under ESXi 4.1 for a VM accessing a SASBeast disk volume reached comparable levels of performance—particularly with respect to IOPS—to that of a physical server.
• 13. SASBeast Performance Spectrum

“Using four Iometer worker processes—two reading and one writing on three RAID-5 SAS volumes and one writing on a RAID-5 SATA volume—we measured total full-duplex throughput from the SASBeast at 1GB per second.”

SCALE-UP AND SCALE-OUT I/O THROUGHPUT

We began our I/O tests by assessing the performance of logical disks from the Nexsan SASBeast on a physical server, which was running Windows Server 2008 R2. For these tests we created and imported a set of volumes from each of the three arrays that we had initially created. We used Iometer to generate all I/O test workloads on our disk volumes. To assess streaming sequential throughput, we used large block reads and writes, which are typically used by backup, data mining, and online analytical processing (OLAP) applications. All of these datacenter-class applications need to stream large amounts of server-based data rapidly to be effective. As a result, we initially focused our attention on using the SASBeast in a 4Gbps Fibre Channel SAN.

We began our benchmark testing on a 4Gbps FC SAN by reading data using large block I/O requests over two FC connections. Maximum I/O throughput varied among our three logical volumes by only 10 percent.

Fibre Channel Sequential Access I/O Throughput (Windows Server 2008 R2 — Round Robin with Subset MPIO on a 4Gbps FC SAN)
RAID & Disk Type | Read Throughput (Iometer, 128KB blocks) | Write Throughput (Iometer, 128KB blocks) | Application Throughput (Veeam Backup & Replication 4.1, parallel backup of four VMs)
RAID-5 SAS | 554 MB/sec | 400 MB/sec | 245 MB/sec reading VM data
RAID-6 SAS | 505 MB/sec | 372 MB/sec | 245 MB/sec reading VM data
RAID-5 SATA | 522 MB/sec | 430 MB/sec | 245 MB/sec writing backup image

During these tests, the fastest reads were measured at 554MB per second using volumes created on our RAID-5 SAS array. What’s more, the aggregate read throughput for all targets using two active 4Gbps ports exceeded the wire speed capability of a single 4Gbps FC port.

While we consistently measured the lowest I/O throughput on reads and writes using SAS RAID-6 volumes, the difference on writes between a SAS RAID-5 volume and a SAS RAID-6 volume was only about 7 percent—400MB per second versus 372MB per second. Using the Nexsan SASBeast, the cost for the added security provided by an extra parity bit, which allows two drives to fail in an array while the array continues processing I/O requests, is very minimal.
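As a rough check on the claim that aggregate reads exceeded the wire speed of a single port, the arithmetic below uses the commonly cited payload rate of a 4Gbps Fibre Channel link, roughly 400MB per second per direction after 8b/10b encoding and framing overhead; the overhead figure is an approximation, not a measurement from this test.

```python
# 4Gbps FC runs at a 4.25 Gbaud line rate with 8b/10b encoding,
# leaving roughly 400 MB/s of usable payload per direction per port.
line_rate_gbaud = 4.25
raw_payload_mb_per_s = line_rate_gbaud * 1e9 * (8 / 10) / 8 / 1e6   # ~425 MB/s before framing
single_port_ceiling = 400        # practical per-port ceiling after frame headers and gaps (approx.)

measured_read_mb_per_s = 554     # RAID-5 SAS read throughput from the table above
print(raw_payload_mb_per_s)                              # ~425.0
print(measured_read_mb_per_s > single_port_ceiling)      # True: both 4Gbps ports had to be active
```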
• 14. SASBeast Performance Spectrum

This is particularly important for IT sites supporting mission-critical applications that require maximum availability and high-throughput performance. A RAID-6 array provides an important safety net when rebuilding after a drive fails. Since a RAID-6 array can withstand the loss of two drives, the array can be automatically rebuilt with a hot-spare drive without risking total data loss should an unrecoverable bit error occur during the rebuild process. On the other hand, a backup of a degraded RAID-5 array should be run before attempting a rebuild. If an unrecoverable bit error occurs while rebuilding a degraded RAID-5 array, the rebuild will fail and data stored on the array will be lost.

When performing writes, the variation in throughput between the disk volumes reached 15 percent. Interestingly, it was SATA RAID-5 volumes that consistently provided the best streaming performance for large-block writes. In particular, using 128KB writes to a SATA RAID-5 volume, throughput averaged 430MB per second. Given the low cost and high capacity advantages provided by 2TB SATA drives, the addition of exceptional write throughput makes the SASBeast an exceptional asset for Disk-to-Disk (D2D) backup operations and other disaster recovery functions.

To assess the upper I/O throughput limits of our Nexsan SASBeast for D2D and other I/O-intense applications, we used Iometer with multiple streaming read and write processes in order to scale total throughput. Using four Iometer worker processes—two reading and one writing on three RAID-5 SAS volumes and one writing on a RAID-5 SATA volume—we measured total full-duplex throughput from the SASBeast at 1GB per second.

ISCSI NICHE

iSCSI Sequential Access I/O Throughput (Windows Server 2008 R2 — Jumbo frames, iSCSI HBAs, and Round Robin MPIO on a 1GbE iSCSI SAN)
RAID & Disk Type | Read Throughput (Iometer, 128KB blocks) | Write Throughput (Iometer, 128KB blocks) | Application Throughput (Veeam Backup & Replication 4.1, 4 VM backups in parallel)
RAID-5 SAS | 82 MB/sec (1 HBA), 146 MB/sec (2 HBAs) | 83 MB/sec (1 HBA), 146 MB/sec (2 HBAs) |
RAID-5 SATA | 80 MB/sec (1 HBA), 145 MB/sec (2 HBAs) | 85 MB/sec (1 HBA), 150 MB/sec (2 HBAs) | 136 MB/sec writing backup image

On the other hand, streaming throughput on a 1GbE iSCSI fabric has a hard limit of 120MB per second on each connection. What’s more, to approach the upper end of that comparatively limited level of performance, IT must pay close attention to the selection of equipment. Most low-end switches and even some Ethernet NICs that are typically found at SMB sites do not support jumbo Ethernet frames or port trunking, which are important functions for maximizing iSCSI throughput. What’s more, it’s also important to isolate iSCSI data traffic from normal LAN traffic.
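The 120MB-per-second ceiling cited above is simple link arithmetic, and the case for jumbo frames follows from per-frame overhead. The sketch below runs that math using standard Ethernet, IP, and TCP header sizes; it ignores iSCSI PDU headers and acknowledgment traffic, so the results are upper bounds rather than measured numbers.

```python
def ethernet_payload_rate_mb(mtu_bytes, link_gbps=1.0):
    """Best-case TCP payload rate over Ethernet for a given MTU (upper bound)."""
    wire_overhead = 14 + 4 + 20      # Ethernet header + FCS + preamble/inter-frame gap
    ip_tcp_overhead = 20 + 20        # IPv4 + TCP headers (no options)
    payload = mtu_bytes - ip_tcp_overhead
    efficiency = payload / (mtu_bytes + wire_overhead)
    return link_gbps * 1e9 / 8 / 1e6 * efficiency

print(ethernet_payload_rate_mb(1500))   # ~118.7 MB/s with standard 1,500-byte frames
print(ethernet_payload_rate_mb(9000))   # ~123.9 MB/s with 9,000-byte jumbo frames
```

Either way, a single 1GbE connection tops out near 120MB per second, which is why the two-HBA results in the table above roughly double the one-HBA numbers.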
• 15. SASBeast Performance Spectrum

For iSCSI testing, we utilized jumbo Ethernet frames—9,000 bytes rather than 1,500 bytes—with QLogic iSCSI HBAs, which offload iSCSI protocol processing and optimize throughput of large data packets. Our throughput results paralleled our FC fabric results: Streaming throughput differed by about 2 to 5 percent among logical volumes created on SAS and SATA arrays. Once again, the highest read throughput was measured on SAS-based volumes and the highest write throughput was measured on SATA-based volumes.

PUSHING IOPS

In addition to streaming throughput, there is also a need to satisfy small random I/O requests. On the server side, applications built on Oracle or SQL Server must be able to handle large numbers of I/O operations that transfer small amounts of data using small block sizes from a multitude of dispersed locations on a disk. Commercial applications that rely on transaction processing (TP) include such staples as SAP and Microsoft Exchange.

More importantly, TP applications seldom exhibit steady-state characteristics. Typical TP loads for database-driven applications in an SMB environment average several hundred IOPS. These applications often experience occasional heavy processing spikes, such as at the end of a financial reporting period, that can rise by an order of magnitude to several thousand IOPS. That variability makes systems running TP applications among the most difficult for IT to consolidate and among the most ideal to target for virtualization. A well-managed VOE is capable of automatically marshaling the resources needed to support peak processing demands.

Random Access Throughput (Windows Server 2008 — Iometer, 80% Reads and 20% Writes, 30ms average access time)
RAID & Disk Type | 4Gbps FC Fabric, 1 logical disk | 4Gbps FC Fabric, 2 logical disks | 1GbE iSCSI Fabric, 1 logical disk | MS Exchange heavy use (75% reads, 4KB I/O, 2,000 mail boxes)
RAID-5 SAS | 2,330 IOPS (4KB I/O), 2,280 IOPS (8KB I/O) | 4,318 IOPS (4KB I/O), 4,190 IOPS (8KB I/O) | 1,910 IOPS (4KB I/O), 1,825 IOPS (8KB I/O) | 1,500 IOPS
RAID-6 SAS | 1,970 IOPS (4KB I/O), 1,915 IOPS (8KB I/O) | | 1,350 IOPS (4KB I/O), 1,275 IOPS (8KB I/O) |
RAID-5 SATA | 1,165 IOPS (4KB I/O), 1,120 IOPS (8KB I/O) | | 795 IOPS (4KB I/O), 755 IOPS (8KB I/O) |

We fully expected to sustain our highest IOPS loads on SAS RAID-5 volumes and were not disappointed. In these tests, we used a mix of 80 percent reads and 20 percent writes. In addition, we limited the I/O request load with the restriction that the average I/O request response time had to be less than 30ms. Using 4KB I/O requests—the size used by MS Exchange—we sustained 2,330 IOPS on a SAS RAID-5 volume, 1,970 IOPS on a SAS RAID-6 volume, and 1,160 IOPS on a SATA RAID-5 volume. Next, we switched our top performing RAID-5 SAS volume from the FC to the iSCSI fabric and repeated the test.
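Two quick calculations help put the IOPS table above in context. The first applies Little's Law to estimate the average number of outstanding requests implied by a given IOPS rate at the 30ms response-time ceiling; Iometer's actual queue-depth setting is not given in the report, so this is an inference. The second compares the measured RAID-5 SAS results against the 1,500 IOPS that Microsoft suggests for a 2,000-mailbox Exchange installation.

```python
def outstanding_ios(iops, response_time_s):
    """Little's Law: average concurrency = arrival rate x time in system."""
    return iops * response_time_s

print(outstanding_ios(2330, 0.030))   # ~70 requests in flight at the 30ms ceiling (inferred)
print(outstanding_ios(1910, 0.030))   # ~57 in flight for the same volume over iSCSI

exchange_requirement = 1500           # Microsoft's guideline for 2,000 heavy-use mailboxes
print(2330 / exchange_requirement)    # ~1.55x headroom on one FC-attached SAS RAID-5 volume
print(1910 / exchange_requirement)    # ~1.27x headroom over the 1GbE iSCSI fabric
```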
• 16. SASBeast Performance Spectrum

While performance dropped to 1,910 IOPS, it was still at a par with the FC results of our RAID-6 SAS volume and above the level that Microsoft suggests for supporting 2,000 mail boxes with MS Exchange.

Next we ran our database-centric Iometer tests with 8KB I/O requests. In these tests, we doubled the amount of data being transferred; however, this only marginally affected the number of IOPS processed. With 8KB transactions, which typify I/O access with Oracle and SQL Server, we sustained 2,280 IOPS on a SAS RAID-5 volume, 1,915 IOPS on a SAS RAID-6 volume, and 1,120 IOPS on a SATA RAID-5 volume. Once again, when we connected our SAS RAID-5 volume over our iSCSI fabric, we measured a 20% drop in performance to 1,825 IOPS, which is more than sufficient to handle peak loads on most database-driven SME applications.

To test transaction-processing scalability in a datacenter environment we added another RAID-5 SAS volume to our second SASBeast controller. By using two volumes on our FC fabric, we increased IOPS performance by 85% for both 4KB and 8KB I/O requests. In our two-volume tests with SAS RAID-5 volumes, we sustained levels of 4,320 IOPS and 4,150 IOPS with an average response time of less than 30ms with a mix of 80 percent reads and 20 percent writes.

STRETCHING I/O IN A VOE

VM I/O Throughput Metrics (VM: Windows Server 2008 — Host: ESXi 4.1 Hypervisor on a 4Gbps FC SAN)
VM Datastore RAID & Disk Type | Sequential Throughput (Streaming 128KB blocks) | Random Access (1 logical disk, 80% reads, 30ms average access time) | MS Exchange heavy use (75% reads, 4KB I/O, 2,000 mail boxes)
RAID-5 SAS | 436 MB/sec (Reads), 377 MB/sec (Writes) | 2,380 IOPS (4KB I/O), 2,011 IOPS (8KB I/O) | 1,500 IOPS
RAID-6 SAS | 427 MB/sec (Reads), 342 MB/sec (Writes) | 2,325 IOPS (4KB I/O), 1,948 IOPS (8KB I/O) |
RAID-5 SATA | 420 MB/sec (Reads), 380 MB/sec (Writes) | |

Within our VOE, we employed a test scenario using volumes created on the same Nexsan RAID arrays that we tested with the physical Windows server. To test I/O throughput, we used Iometer on a VM running Windows Server 2008. Given the I/O randomization that takes place as a VOE host consolidates the I/O requests from multiple VMs, we were not surprised to measure sequential I/O throughput at a level that was 20% lower than the level measured on a similarly configured physical server. Nonetheless, at 420MB to 436MB per second for reads and 342MB to 380MB per second for writes, the levels of streaming throughput that we measured were 60 to 70 percent greater than the streaming I/O throughput levels observed when using high-end applications, such as backup, data mining, and video editing, on physical servers and dedicated workstations.
• 17. SASBeast Performance Spectrum

As a result, IT should have no problems supporting streaming applications on server VMs or using packaged VM appliances with storage resources underpinned by a SASBeast. On the other hand, we sustained IOPS levels for random access I/O on RAID-5 and RAID-6 SAS volumes that differed by only 2 to 3 percent from the levels sustained on a physical server. These results are important for VM deployment of mission-critical database-driven applications, such as SAP. What’s more, the ability to sustain 2,380 IOPS using 4KB I/O requests affirms the viability of deploying MS Exchange on a VM.

APPLICATIONS & THE BEAST

The real value of these synthetic benchmark tests with Iometer rests in the ability to use the results as a means of predicting the performance of applications. To put our synthetic benchmark results into perspective, we next examined full-duplex streaming throughput for a high-end IT administrative application: VOE backup.

What makes a VOE backup process a bellwether application for streaming I/O throughput is the representation of VM logical disk volumes as single disk files on the host computer. This encapsulation of VM data files into a single container file makes image-level backups faster than traditional file-level backups and enhances VM restoration. Virtual disks can be restored as whole images, or individual files can be restored from within the backup image.

More importantly, VMFS treats files representing VM virtual disks—dubbed .vmdk files—analogously to CD images. Host datastores typically contain only a small number of these files, which can be accessed by only one VM process at a time. It is up to the OS of the VM to handle file sharing for the data files encapsulated within the .vmdk file. This file locking scheme allows vSphere hosts to readily share datastore volumes on a SAN.

The ability to share datastores among hosts greatly simplifies the implementation of vMotion, which moves VMs from one host to another for load balancing. With shared datastores, there is no need to transfer data, which makes moving a VM much easier. Before shutting down a VM, its state must be saved. The VM can then be restarted and brought to the saved state on a new host. Sharing datastores over a SAN is also very important for optimizing VM backups.

For our VOE backup scenario, we utilized Veeam Backup & Replication 4.1 on a Windows server that shared access to all datastores belonging to hosts in our vSphere VOE. Every VM backup process starts with the backup application sending a VMsnap command to the host server to initiate a VM snapshot. In the snapshot process, the VM host server creates a point-in-time copy of the VM’s virtual disk. The host server then freezes the .vmdk file associated with that virtual disk and returns a list of disk blocks for that .vmdk file to the backup application. The backup application then uses that block list to read the VM snapshot data residing in the VMFS datastore.
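The snapshot-then-read sequence just described can be reduced to a short conceptual sketch. This is a simplified model of an image-level backup over a shared datastore, not Veeam's implementation; the in-memory snapshot class, the block-level deduplication step, and all names here are illustrative assumptions.

```python
import hashlib

class InMemorySnapshot:
    """Stand-in for a frozen .vmdk plus the block list the host returns (illustrative)."""
    def __init__(self, blocks):
        self.blocks = blocks                     # {block_id: bytes}
    def allocated_blocks(self):
        return sorted(self.blocks)
    def read_block(self, block_id):
        return self.blocks[block_id]

def image_level_backup(snapshot, image):
    """Conceptual image-level backup: walk the snapshot's block list, deduplicate,
    and write a single backup image (a dict here, standing in for a repository file)."""
    seen = {}                                    # fingerprint -> block_id already stored
    for block_id in snapshot.allocated_blocks():
        data = snapshot.read_block(block_id)     # read directly from the shared datastore
        digest = hashlib.sha256(data).hexdigest()
        if digest in seen:
            image[block_id] = ("ref", seen[digest])   # duplicate block: store a reference only
        else:
            seen[digest] = block_id
            image[block_id] = ("data", data)
    return image

# Toy example: two of the four blocks are identical (e.g., zero-filled space).
snap = InMemorySnapshot({0: b"boot", 1: b"\x00" * 64, 2: b"data", 3: b"\x00" * 64})
print(image_level_backup(snap, {}))
```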
• 18. SASBeast Performance Spectrum

To implement the fastest and most efficient backup process, IT must ensure that all VM data will be retrieved directly from the VMFS datastores using the vStorage API. That means the Windows server running Veeam Backup & Replication must be directly connected to the VOE datastore. In other words, the Windows server must share each of the datastores used by VOE hosts. Through integration with VDS, the Nexsan SASBeast makes the configuration and management of shared logical volumes an easy task for IT administrators. In addition to the proprietary Nexsan software, IT administrators can use Storage Manager for SANs to create new LUNs and manage existing ones on the SASBeast.

SIMPLIFIED VOE DATASTORE SHARING: Reflecting the interoperability of the Nexsan SASBeast, we were able to use Storage Manager for SANs within the Server Manager tool to create and manage volumes, such as the vOperations datastore used by our ESXi host. In particular, we were able to drill down on the vOperations volume and map it to our Windows Server 2008 R2 system via either our FC fabric or our iSCSI fabric.

While Storage Manager for SANs provided less fine-grain control when configuring LUNs, wizards automated end-to-end storage provisioning from the creation of a logical volume on the SASBeast, to connecting that volume over either the FC or iSCSI fabric, and then formatting that volume for use on our Windows server. As a result, the Nexsan hardware and software provided an infrastructure that enabled the rapid setup of an optimized environment for Veeam Backup & Replication.

To minimize backup windows in a vSphere VOE, Veeam Backup & Replication 4.1 uses vStorage APIs to directly back up files belonging to a VM without first making a local copy.

Application Throughput: VOE Full-Backup (RAID-5 SAS Datastore — RAID-5 SATA Backup Repository; ESXi 4.1 Host, Windows Server 2008 R2, Veeam Backup & Replication 4.1)
Datastore Access | 4 VMs Processed Sequentially (Optimal compression, Deduplication) | 4 VMs Processed in Parallel (Optimal compression)
FC SAN Fabric | 164 MB/sec | 245 MB/sec
iSCSI SAN Fabric | 97 MB/sec | 136 MB/sec
• 19. SASBeast Performance Spectrum

What’s more, Veeam Backup & Replication recognizes disks with VMFS thin provisioning to avoid backing up what is in effect empty space. In addition, the Veeam software accelerates the processing of incremental and differential backups by leveraging Changed-Block Tracking within VMFS. As a result, we were able to leverage the VOE awareness of the advanced options within our software solution to back up 4 VMs at 245MB per second over our FC fabric and 136MB per second over our iSCSI fabric.

GREEN CONSOLIDATION IN A VOE

Equally important for IT, the Nexsan SASBeast was automatically making significant savings in power and cooling costs all throughout our testing. A key feature provided within the Nexsan management suite provides for the optimization of up to three power-saving modes. These modes are applied array-wide; however, the modes available to an array depend upon the disk drives used in the array. For example, the rotational speed of SAS drives cannot be slowed. Once the disks enter into a power saving mode, they can be automatically restored to full speed with only the first I/O delayed when the array is accessed.

AUTOMAID SAVINGS: Over our testing period, we kept the AutoMAID feature aggressively seeking to enact power savings. Over the test period, AutoMAID provided fairly uniform power savings across all arrays, amounting to just over 50% of the expected power consumption.

More importantly, over the period that we ran extensive tests of sequential and random data access, the power savings for each of our three disk arrays was remarkably uniform. The bottom line over our testing regime was an average power savings of 52 percent. Even more importantly, this savings was garnered over a period that saw each of the 42 disks in our SASBeast average 1,500,000 reads and 750,000 writes. Also of note for the mechanical design of the SASBeast, over our entire test there was not one I/O transfer or media retry.
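The per-disk averages above also account for the aggregate I/O count cited in the next section, and the power result can be restated the same way; the arithmetic below simply reproduces those figures.

```python
disks = 42
reads_per_disk, writes_per_disk = 1_500_000, 750_000

total_ios = disks * (reads_per_disk + writes_per_disk)
print(total_ios)                      # 94,500,000 -> the "95 million" I/Os cited in the next section

power_savings = 0.52                  # average AutoMAID savings measured over the test
print(1 - power_savings)              # the arrays drew only ~48% of their expected power
```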
• 20. Customer Value

“While Nexsan’s storage management software provides a number of important features to enhance IT productivity, the most important feature for lowering OpEx costs in a complex SAN topology is tight integration with VDS on Windows.”

MEETING SLA METRICS

As companies struggle to achieve maximum efficiency, the top-of-mind issue for all corporate decision makers is how to reduce the cost of IT operations. Universally, the leading solutions center on resource utilization, consolidation, and virtualization. These strategies, however, can exacerbate the impact of a plethora of IT storage costs, from failed disk drives to excessive administrator overhead costs. As resources are consolidated and virtualized, the risk of catastrophic disaster increases as the number of virtual systems grows and the number of physical devices underpinning those systems dwindles.

While reducing OpEx and CapEx costs are the critical drivers in justifying the acquisition of storage resources, those resources must first and foremost meet the performance metrics needed by end-user organizations and frequently codified in SLAs set up between IT and Line of Business (LoB) divisions. These metrics address all of the end user’s needs for successfully completing LoB tasks. Typically these requirements translate into data throughput (MB per second) and data response (average access time) metrics.

NEXSAN SASBEAST FEATURE BENEFITS
1) Application-centric Storage: Storage volumes can be created from a hierarchy of drives to meet multiple application-specific metrics.
2) I/O Retries Minimized: During our testing we executed a total of 95 million read and write I/Os without a single retry.
3) Automatic Power Savings: AutoMAID technology can be set to place drives in a hierarchy of idle states to conserve energy, while only delaying the first I/O request returning to a normal state.
4) High Streaming Throughput: Running with SAS- and SATA-based volumes, application performance mirrored benchmark performance as backups of multiple VMs streamed total full-duplex data at upwards of 500MB per second.
5) Linear Scaling of IOPS in a VOE: Using random 4KB and 8KB I/O requests—typical of Exchange Server and SQL Server—VMs sustained IOPS rates for both I/O sizes that differed by less than 2.5 percent on a SAS RAID-5 volume and scaled by over 80% with the addition of a second target volume.

In terms of common SLA metrics, our benchmarks for a single SASBeast reached levels of performance that should easily meet the requirements of most applications. What’s more, the scale-out-oriented architecture of the SASBeast presents an extensible infrastructure that can meet even the requirements of many High Performance Computing (HPC) applications. With a single logical disk from a RAID-5 base array, we were able to drive read throughput well over 500MB per second and write throughput well over 400MB per second. This is double the throughput rate needed for HDTV editing. With a single SASBeast and four logical drives, we scaled total full-duplex throughput to 1GB per second.
• 21. Customer Value

We measured equally impressive IOPS rates for random access small-block—4KB and 8KB—I/O requests. With one logical disk, we sustained more than 2,000 IOPS for both 4KB and 8KB I/O requests, which scaled to over 4,000 IOPS with two logical disks. To put this into perspective, Microsoft recommends a storage infrastructure that can sustain 1,500 IOPS for an MS Exchange installation supporting 2,000 active mail boxes.

BUILDING IN RELIABILITY

In addition to performance specifications, storage reliability expressed in guaranteed uptime is another important component of SLA requirements. Starting with the mechanical design of the chassis and moving through to the embedded management software, Nexsan’s SASBeast provides a storage platform that promotes robust reliability and performance while presenting IT administrators with an easy-to-configure and easy-to-manage storage resource.

In particular, the design of the SASBeast chassis proactively seeks to maximize the life span of disks by minimizing vibrations and maximizing cooling. For IT operations—especially with respect to SLAs—that design helps ensure that storage performance guarantees concerning I/O throughput and data access will not be negatively impacted by data access errors induced by the physical environment. By extending disk drive life cycles, there will be fewer drive failures for IT to resolve and fewer periods of degraded performance as an array is rebuilt with a new drive. During our testing of a SASBeast, openBench Labs generated a total of 95 million read and write requests without the occurrence of a single read or write retry.

SIMPLIFIED MANAGEMENT DRIVES SAVINGS

While Nexsan’s storage management software provides a number of important features to enhance IT productivity, the most important feature for lowering OpEx costs in a complex SAN topology is tight integration with VDS on Windows. For IT administrators, VDS integration makes the Nexsan storage management GUI available from a number of the standard tools on a Windows server. In particular, IT administrators are able to use Storage Manager for SANs to implement full end-to-end storage provisioning. By invoking just one tool, an administrator can configure a Nexsan array, create and map a logical volume to a host server, and then format that volume on the host.

Nexsan also leverages very sophisticated SAN constructs to simplify administrative tasks. All storage systems with multiple controllers need to handle the dual issues of array ownership—active service processors—and SAN load balancing—active ports. Through Nexsan’s implementation of Asymmetric Logical Unit Access (ALUA), host systems with advanced MPIO software can access a SASBeast and discern the subtle difference between an active port and an active service processor. As a result, an IT administrator is able to map a LUN to each FC port on each controller and allow the server’s MPIO software to optimize FC port aggregation and controller failover. Using this scheme, openBench Labs was able to scale streaming reads and writes to four drives at upwards of 1GB per second.
• 22. Customer Value

REVOLUTIONARY GREEN SAVINGS

In addition to providing an innovative solution for reliability and performance, Nexsan’s AutoMAID power management scheme automatically reduces SASBeast power consumption, which is a significant OpEx cost at large sites. Using three levels of automated power-saving algorithms, the SASBeast eliminates any need for administrator intervention when it comes to the green IT issues of power savings and cooling. Nexsan automatically reduces the power consumed by idle disk drives via a feature dubbed AutoMAID™. AutoMAID works on a per-disk basis, but within the context of a RAID set, to provide multiple levels of power savings—from parking heads to slowing rotational speed—to further contain OpEx costs. While testing the SASBeast, openBench Labs garnered a 52% power savings on our arrays.