Accelerate with IBM Storage
© Copyright IBM Corporation 2015
Technical University/Symposia materials may not be reproduced in whole or in part without the prior written permission of IBM.
IBM Spectrum Virtualize
HyperSwap Deep Dive
Bill Wiegand
Spectrum Virtualize – Consulting IT Specialist
IBM
Agenda
• High Availability vs Disaster Recovery
• Overview of HyperSwap Function
• Overview of Demo Lab Setup
• Outline of Steps and Commands to Configure HyperSwap
• Show Host View of Its Storage
• Demo Scenario 1
• Fail paths from host at site 1 to its primary storage controller at site 1
• Demo Scenario 2
• Fail externally virtualized MDisk used as active quorum disk
• Fail paths to externally virtualized storage system providing active quorum disk
• Demo Scenario 3
• Configure existing Volume as HyperSwap Volume
• Demo Scenario 4
• Fail entire storage controller at site 2 for newly configured HyperSwap Volume
High Availability vs Disaster Recovery
[Diagram: Cluster 1 at Site 1 (HA) and Cluster 2 at Site 2 (DR), connected by ISL 1 and ISL 2; Volume Mirroring provides HA within Site 1, while Metro Mirror or Global Mirror replicates to Site 2 for DR.]
Manual intervention required (an AIX-host sketch follows):
1. Stop all running servers
2. Perform failover operations
3. Remove server access in Site 1
4. Grant server access in Site 2
5. Start the servers in Site 2
6. Import Volume Groups
7. Vary on Volume Groups
8. Mount Filesystems
9. Recover applications
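As a rough illustration, steps 5 through 9 map to a handful of OS-level commands on an AIX host at Site 2; the volume group, disk, and mount point names here are hypothetical:

  # Sketch of the manual failover on an AIX host at Site 2 (example names)
  cfgmgr                      # discover the LUNs just granted to this host
  importvg -y datavg hdisk4   # import the volume group from a replicated disk
  varyonvg datavg             # vary on the volume group
  mount /data                 # mount the filesystem
  # ...then restart and recover the applications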
Today: SVC Enhanced Stretched Cluster
• Today’s stretched cluster technology splits an SVC’s two-way cache
across two sites
• Allows host I/O to continue without loss of access to data if a site is lost
• Enhanced Stretched Cluster in version 7.2 introduced site concept to
the code for policing configurations and optimizing data flow
[Diagram: one I/O group stretched across sites: Node 1 with host, switch, and storage in Power domain 1, Node 2 with host, switch, and storage in Power domain 2, and quorum storage in Power domain 3; reads are serviced locally at each site while writes are mirrored between the two nodes.]
HyperSwap
• HyperSwap is the next step in the HA (High Availability) solution
• Provides most disaster recovery (DR) benefits of Metro Mirror as well
• Uses intra-cluster synchronous remote copy (Metro Mirror) capabilities along with existing change volume and access I/O group technologies
• Essentially makes a host’s volumes accessible across two Storwize or SVC I/O groups in a clustered system by making the primary and secondary volumes of the Metro Mirror relationship, running under the covers, look like one volume to the host (as sketched below)
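For example, on a Linux host zoned to both sites, the device-mapper multipath view would show a single LUN identity with path groups reaching both I/O groups (illustrative output only; the WWID, device names, and priorities will vary):

  # Illustrative: a HyperSwap volume appears as one multipath device
  multipath -ll
  mpatha (36005076...) dm-2 IBM,2145
    |-+- policy='service-time' prio=50 status=active    # paths to the local I/O group
    `-+- policy='service-time' prio=10 status=enabled   # paths to the remote I/O group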
High Availability with HyperSwap
• Hosts, SVC nodes, and storage are in one of two failure domains/sites
• Volumes visible as a single object across both sites (I/O groups)
[Diagram: HostA and HostB span Site 1 and Site 2; I/O group 0 (Node 1, Node 2) and I/O group 1 (Node 3, Node 4) each hold copies of the volumes (primaries Vol-1p and Vol-2p, secondaries Vol-1s and Vol-2s), presented to the hosts as single volumes.]
High Availability with HyperSwap
[Diagram: Site 1 and Site 2, each with an IBM Spectrum Virtualize system, local storage, and hosts (Host A, Host B, and clustered Host C); public fabrics 1A/1B and 2A/2B joined by public ISLs, private fabrics 1 and 2 joined by a private ISL, and quorum storage at Site 3.]
Hosts’ ports can be
• Zoned to see IBM Spectrum Virtualize system ports on both sites, and will be automatically configured to use correct paths
• Zoned only locally to simplify configuration, which only loses the ability for a host on one site to continue in the absence of local IBM Spectrum Virtualize system nodes
Storage systems can be
• IBM SVC for either HyperSwap or Enhanced Stretched Cluster
• IBM Storwize V5000, V7000 for HyperSwap only
Two SANs required for Enhanced Stretched Cluster, and recommended for HyperSwap:
• Private SAN for node-to-node communication
• Public SAN for everything else
Quorum is provided by a SCSI controller marked with “Extended Quorum support” on the interoperability matrix. Quorum storage must be in a 3rd site independent of site 1 and site 2, but visible to all nodes. Storage systems need to be zoned/connected only to the nodes/node canisters in their site (stretched and hyperswap topologies only, excluding quorum storage).
See Redbook SG24-8211-00 for more details
HyperSwap – What is a Failure Domain
• Generally a failure domain will represent a physical location, but it depends on what type of failure you are trying to protect against
• Could all be in one building on different floors/rooms, or just different power domains in the same data center
• Could be multiple buildings on the same campus
• Could be multiple buildings up to 300KM apart
• Key is the quorum disk
• If you only have two physical sites and the quorum disk has to be in one of them, then some failure scenarios won’t allow the cluster to survive automatically
• Minimum is to have the active quorum disk system on a separate power grid in one of the two failure domains
HyperSwap – Overview
• Stretched Cluster requires splitting nodes in an I/O group
• Impossible with Storwize family since an I/O group is confined to an enclosure
• After a site fails, the write cache is disabled
• Could affect performance
• HyperSwap keeps nodes in an I/O group together
• Copies data between two I/O groups
• Suitable for Storwize family of products as well as SVC
• Retains full read/write performance with only one site
HyperSwap – Overview
• SVC Stretched Cluster is not application aware
• If one volume used by an application is unable to keep a site up-to-date, the other volumes won’t
pause at the same point, likely making the site’s data unusable for disaster recovery
• HyperSwap allows grouping of multiple volumes together in a
consistency group
• Data will be maintained consistently across the volumes
• Significantly improves the use of HyperSwap for disaster recovery scenarios as well
• There is no remote copy partnership configuration since this is a single clustered system
• Intra-cluster replication initial sync and resync rates can be configured normally using the ‘chpartnership’ CLI command (sketched below)
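As a sketch, and assuming the usual partnership tuning flags apply to the local cluster object, the rates could be adjusted like this (the values are examples only):

  IBM_Storwize:ATS_OXFORD3:superuser> chpartnership -linkbandwidthmbits 4000 -backgroundcopyrate 50 ATS_OXFORD3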
HyperSwap – Overview
• Stretched Cluster discards old data during resynchronization
• If one site is out-of-date, and the system is automatically resynchronizing that copy, that site’s data
isn’t available for disaster recovery, giving windows where both sites are online but loss of one site
could lose data
• HyperSwap uses Global Mirror with Change Volumes technology to
retain the old data during resynchronization
• Allows a site to continually provide disaster recovery protection throughout its lifecycle
• Stretched cluster did not know which sites hosts were in
• To minimize I/O traffic across sites, more complex zoning and management of preferred nodes for volumes was required
• Can use HyperSwap function on any Storwize family system supporting
multiple I/O groups
• Two Storwize V5000 control enclosures
• Two-four Storwize V7000 Gen1/Gen2 control enclosures
• Four-eight SVC node cluster
• Note that HyperSwap is not a supported configuration with Storwize V3700 since it can’t be clustered
HyperSwap – Overview
• Limits and Restrictions
• Max of 1024 HyperSwap volumes per cluster
• Each HyperSwap volume requires four FC mappings and max mappings is 4096
• Max capacity is 1PB per I/O group or 2PB per cluster
• Much lower limit for Gen1 Storwize V7000
• Runs into the limit of remote copy bitmap space
• Can’t replicate HyperSwap volumes to another cluster for DR using remote copy
• Limited FlashCopy Manager support
• Can’t do reverse flashcopy to HyperSwap volumes
• Max of 8 paths per HyperSwap volume, same as a regular volume
• AIX LPM not supported today
• No GUI support currently
• Requirements
• Remote copy license
• For Storwize configurations an external virtualization license is required
• Minimum one enclosure license for the storage system providing active quorum disk
• Size public/private SANs as we do with ESC today
• Only applicable if using ISLs between sites/IO groups
• Recommended Use Cases
• Active/Passive site configuration
• Hosts access given volumes from one site only
Example Configuration
[Diagram: IOGroup-0 and IOGroup-1, each a pair of SVC nodes; a Local Host and two Federated Hosts access Vol-1, a HyperSwap Volume with its Primary copy in IOGroup-0 and its Secondary copy in IOGroup-1; back-end storage includes EMC, HP, and IBM controllers with 2TB and 3TB MDisks.]
Local Host Connectivity
[Diagram: A Local Host with 2 HBAs and 4 paths, zoned through Fab-A and Fab-B to the SVC nodes of its local I/O group only; back-end MDisks are a 2TB flash MDisk (EMC), a 3TB V5000 MDisk (IBM), a 2TB flash MDisk (HP), and a 3TB V5000 MDisk (IBM).]
Federated Host Connectivity
[Diagram: A Federated Host with 2 HBAs and 8 paths, zoned through Fab-A and Fab-B to the SVC nodes of both I/O groups; same back-end MDisks as above.]
Storage Connectivity
[Diagram: Each back-end storage controller connects with 2 ports per fabric (Fab-A and Fab-B) and is zoned only to the SVC nodes of its own I/O group.]
HyperSwap – Understanding Quorum Disks
• By default the clustered system selects three quorum disk candidates automatically
• With SVC it is on the first three MDisks it discovers from any supported disk controller
• On Storwize it is three internal disk drives unless we have external disk virtualized; then, like SVC, it is the first three MDisks discovered
• When cluster topology is set to “hyperswap” the quorum disks are dynamically changed for proper configuration for a HyperSwap enabled clustered system
• IBM_Storwize:ATS_OXFORD3:superuser> lsquorum
quorum_index status id name controller_id controller_name active object_type override
0 online 79 no drive no
1 online 13 no drive no
2 online 0 DS8K_mdisk0 1 DS8K-SJ9A yes mdisk no
• There is only ever one active quorum disk
• Used solely for tie-break situations when the two sites lose access to each other
• Must be on externally virtualized storage that supports Extended Quorum
• The three are used to store critical cluster configuration data
• Quorum disk configuration not exposed in GUI
• ‘lsquorum’ shows which three MDisks or drives are the quorum candidates and which one is
currently the active one
• No need to set override to ‘yes’ as was needed in the past with Enhanced Stretched Cluster
• Active quorum disk must be external and on a storage system that
supports “Extended Quorum” as noted on support matrix
• http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003741
• http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003658
• Only certain IBM disk systems support extended quorum
HyperSwap – Lab Setup
[Diagram: A Storwize V7000 clustered system separated at distance: I/O Group 0 (control enclosure plus expansion enclosures) at Site 1 and I/O Group 1 (control enclosure plus expansion enclosures) at Site 2, with a host accessing one volume.]
• A HyperSwap clustered system provides high availability between different sites or within the same data center
• I/O Group assigned to each site
• A copy of the data is at each site
• Host associated with a site
• If you lose access to I/O Group 0 from the host, then the host multi-pathing will automatically access the data via I/O Group 1
• If you only lose the primary copy of data, then the HyperSwap function will forward the request to I/O Group 1 to service the I/O
• If you lose I/O Group 0 entirely, then the host multi-pathing will automatically access the other copy of the data on I/O Group 1
HyperSwap – Configuration
• NAMING THE 3 DIFFERENT SITES:
• IBM_Storwize:ATS_OXFORD3:superuser> lssite
id site_name
1 Site1
2 Site2
3 Site3
• IBM_Storwize:ATS_OXFORD3:superuser> chsite -name GBURG-03 1
• IBM_Storwize:ATS_OXFORD3:superuser> chsite -name GBURG-05 2
• IBM_Storwize:ATS_OXFORD3:superuser> chsite -name QUORUM 3
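Rerunning ‘lssite’ should confirm the new names (expected output, shown as a sketch):

  IBM_Storwize:ATS_OXFORD3:superuser> lssite
  id site_name
  1 GBURG-03
  2 GBURG-05
  3 QUORUM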
• LIST THE 4 CLUSTER NODES:
• IBM_Storwize:ATS_OXFORD3:superuser> lsnodecanister -delim :
id:name:UPS_serial_number:WWNN:status:IO_group_id:IO_group_name:config_node:UPS_unique_id:hardware:iscsi_name:iscsi_alias:panel_name:enclosure_id:canister_id:enclosure_serial_number
1:node1::500507680200005D:online:0:io_grp0:no::100:iqn.1986-03.com.ibm:2145.atsoxford3.node1::30-1:30:1:78G00PV
2:node2::500507680200005E:online:0:io_grp0:no::100:iqn.1986-03.com.ibm:2145.atsoxford3.node2::30-2:30:2:78G00PV
3:node3::500507680205EF71:online:1:io_grp1:yes::300:iqn.1986-03.com.ibm:2145.atsoxford3.node3::50-1:50:1:78REBAX
4:node4::500507680205EF72:online:1:io_grp1:no::300:iqn.1986-03.com.ibm:2145.atsoxford3.node4::50-2:50:2:78REBAX
HyperSwap – Configuration
• ASSIGN NODES TO SITES (SITE 1 MAIN, SITE 2 AUX):
• IBM_Storwize:ATS_OXFORD3:superuser> chnodecanister -site GBURG-03 node1
• IBM_Storwize:ATS_OXFORD3:superuser> chnodecanister -site GBURG-03 node2
• IBM_Storwize:ATS_OXFORD3:superuser> chnodecanister -site GBURG-05 node3
• IBM_Storwize:ATS_OXFORD3:superuser> chnodecanister -site GBURG-05 node4
• ASSIGN HOSTS TO SITES (SITE 1 MAIN, SITE 2 AUX):
• IBM_Storwize:ATS_OXFORD3:superuser> chhost -site GBURG-03 SAN355-04
• IBM_Storwize:ATS_OXFORD3:superuser> chhost -site GBURG-05 SAN3850-1
• ASSIGN QUORUM DISK ON CONTROLLER TO QUORUM SITE:
• IBM_Storwize:ATS_OXFORD3:superuser> chcontroller -site QUORUM DS8K-SJ9A
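The assignments can be double-checked with the usual concise views, which should carry site information at this code level (a sketch; output omitted):

  IBM_Storwize:ATS_OXFORD3:superuser> lsnodecanister   # site_name should show GBURG-03/GBURG-05 per node
  IBM_Storwize:ATS_OXFORD3:superuser> lshost           # each host should show its assigned site
  IBM_Storwize:ATS_OXFORD3:superuser> lscontroller     # DS8K-SJ9A should show site QUORUM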
HyperSwap – Configuration
• LIST QUORUM LOCATIONS:
• IBM_Storwize:ATS_OXFORD3:superuser> lsquorum
quorum_index status id name controller_id controller_name active object_type override
0 online 79 no drive no
1 online 13 no drive no
2 online 0 DS8K_mdisk0 1 DS8K-SJ9A yes mdisk no
• DEFINE TOPOLOGY:
• IBM_Storwize:ATS_OXFORD3:superuser> chsystem -topology hyperswap
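A quick ‘lssystem’ check should confirm the change took effect; a sketch of the relevant fields, assuming the field names at this code level:

  IBM_Storwize:ATS_OXFORD3:superuser> lssystem
  ...
  topology hyperswap
  topology_status dual_site
  ...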
HyperSwap – Configuration
• MAKE VDISKS (SITE 1 MAIN, SITE 2 AUX):
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG03_VOL10 -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG03_VOL20 -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG05_AUX10 -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL
• Virtual Disk, id [2], successfully created
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG05_AUX20 -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL
• MAKE CHANGE VOLUME VDISKS (SITE 1 MAIN, SITE 2 AUX):
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG03_CV10 -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL -rsize 1% -autoexpand
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG03_CV20 -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL -rsize 1% -autoexpand
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG05_CV10 -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL -rsize 1% -autoexpand
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG05_CV20 -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL -rsize 1% -autoexpand
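To sanity-check that all eight VDisks landed in the intended I/O groups and pools, a filtered listing can be used (assuming wildcard name filtering is available at this code level):

  IBM_Storwize:ATS_OXFORD3:superuser> lsvdisk -filtervalue "name=GBURG*" -delim :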
HyperSwap – Configuration
• ADD ACCESS TO THE MAIN SITE VDISKS TO THE OTHER SITE (IOGRP1):
• IBM_Storwize:ATS_OXFORD3:superuser> addvdiskaccess -iogrp 1 GBURG03_VOL10
• IBM_Storwize:ATS_OXFORD3:superuser> addvdiskaccess -iogrp 1 GBURG03_VOL20
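‘lsvdiskaccess’ should now list both I/O groups for each master volume (expected output, shown as a sketch):

  IBM_Storwize:ATS_OXFORD3:superuser> lsvdiskaccess GBURG03_VOL10
  IO_group_id IO_group_name
  0 io_grp0
  1 io_grp1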
• DEFINE CONSISTENCY GROUP:
• IBM_Storwize:ATS_OXFORD3:superuser> mkrcconsistgrp -name GBURG_CONGRP
• DEFINE THE TWO REMOTE COPY RELATIONSHIPS:
• IBM_Storwize:ATS_OXFORD3:superuser> mkrcrelationship -master GBURG03_VOL10 -aux GBURG05_AUX10 -cluster ATS_OXFORD3 -activeactive -name VOL10REL -consistgrp GBURG_CONGRP
• IBM_Storwize:ATS_OXFORD3:superuser> mkrcrelationship -master GBURG03_VOL20 -aux GBURG05_AUX20 -cluster ATS_OXFORD3 -activeactive -name VOL20REL -consistgrp GBURG_CONGRP
HyperSwap – Configuration
• ADDING THE CHANGE VOLUMES TO EACH VDISK DEFINED:
• IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -masterchange GBURG03_CV10 VOL10REL
• IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -masterchange GBURG03_CV20 VOL20REL
• IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -auxchange GBURG05_CV10 VOL10REL
• IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -auxchange GBURG05_CV20 VOL20REL
• At this point the replication between master and aux volumes starts
automatically
• Remote copy relationship state will be “inconsistent copying” until primary and secondary volumes
are in sync, then state changes to “consistent synchronized”
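Progress can be watched with ‘lsrcrelationship’; a sketch showing only the relevant fields:

  IBM_Storwize:ATS_OXFORD3:superuser> lsrcrelationship VOL10REL
  ...
  state inconsistent_copying   (becomes consistent_synchronized once in sync)
  progress 42
  ...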
• MAP HYPERSWAP VOLUMES TO HOST:
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdiskhostmap -host SAN355-04 GBURG03_VOL10
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdiskhostmap -host SAN355-04 GBURG03_VOL20
** Note that we map only the primary/master volume to the host, not the secondary/auxiliary volume of the
Metro Mirror relationship created earlier
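The mappings can be confirmed from the cluster side with ‘lshostvdiskmap’ (output omitted):

  IBM_Storwize:ATS_OXFORD3:superuser> lshostvdiskmap SAN355-04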
Demonstration
• Show Host View of Its Storage
• Demo Scenario 1
• Fail paths from host at site 1 to its primary storage controller at site 1
• Demo Scenario 2
• Fail externally virtualized MDisk used as active quorum disk
• Fail paths to externally virtualized storage system providing active quorum disk
• Demo Scenario 3
• Configure existing Volume as HyperSwap Volume
• Demo Scenario 4
• Fail entire storage controller at site 2 for newly configured HyperSwap Volume
Miscellaneous
• Recommended to use 8 FC ports per node canister so we can dedicate some ports strictly for the synchronous mirroring between the IO groups (a port-masking sketch follows below)
• Link to HyperSwap whitepaper in Techdocs
• https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102538
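A minimal sketch of dedicating ports with FC port masking; the mask value here is an example only and the setting applies cluster-wide:

  # Restrict local node-to-node (replication) traffic to chosen FC ports
  IBM_Storwize:ATS_OXFORD3:superuser> chsystem -localfcportmask 0000000000001100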