2. Agenda
• Does virtualization make sense for you?
• Supported/Unsupported virtualization features
• Sizing recommendations for virtualized Exchange deployments
• Common problem areas and how to avoid them
3. Customers virtualize Exchange because of
• Internal standardization of virtualization
platform
• Deployment optimizations
• Management optimizations
• Monitoring optimizations
• Hardware utilization
• Cost
5. Exchange Team’s Recommendation
• We support virtualizing Exchange because it makes sense for some of our customers
• Customers should pick the simple solution
  • Physical is often the simple solution, but not for every customer
• Customers that virtualize Exchange should have a clear idea of what they get out of virtualizing
7. Supported
• Hypervisors
  • Any version of Windows Server with Hyper-V technology or Microsoft Hyper-V Server
  • Third-party hypervisors validated under SVVP
• Exchange roles
  • Both Exchange roles supported
• Storage
  • Block-level
  • Same requirements as Exchange 2010*
• Host-based clustering
  • Both Exchange roles supported
• Migration
  • Both Exchange roles supported
• Jetstress testing in guests
  • Only on supported Windows hypervisors or ESX 4.1 or newer
8. Not Supported
• Dynamic memory, memory overcommit, memory reclamation
  • Configure static memory for all Exchange VMs
• Significant processor oversubscription
  • Limited to 2:1, best practice is 1:1
• Hypervisor snapshots
• Differencing/delta disks
• Apps on the root
  • Only deploy management, monitoring, AV, etc.
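The 2:1 oversubscription cap can be sketched as a quick ratio check. This is a minimal illustration, not sizing guidance; the host core count and per-VM vCPU counts below are hypothetical.

```python
# Sketch: check a host's vCPU:pCPU ratio against the 2:1 supported
# cap for Exchange (1:1 is the best practice). Counts are illustrative.

def oversubscription_ratio(total_vcpus: int, physical_cores: int) -> float:
    """Ratio of vCPUs allocated across all VMs to physical cores on the host."""
    return total_vcpus / physical_cores

def is_supported(total_vcpus: int, physical_cores: int) -> bool:
    """Exchange guidance: no more than 2 vCPUs per physical core."""
    return oversubscription_ratio(total_vcpus, physical_cores) <= 2.0

# Hypothetical host: 16 physical cores, three VMs with 8, 8, and 12 vCPUs
vcpus = 8 + 8 + 12  # 28 vCPUs allocated
print(f"{oversubscription_ratio(vcpus, 16):.2f}:1 -> supported: {is_supported(vcpus, 16)}")
# -> 1.75:1 -> supported: True
```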
9. Exchange and Hyper-V
• The most tested hypervisor for Exchange
• Every day, thousands of Exchange test machines run on Hyper-V
• Strong feedback loop between Exchange and Hyper-V teams
• Ongoing cross-group engineering relationship
• Hyper-V 2012 adds many new features
  • Removal of the 4 vCPU per-VM limit is fantastic for Exchange
  • Increased memory per-VM is important for Exchange 2013
• Customers who virtualize Exchange (and size correctly) will have a great experience on Windows Server 2012 Hyper-V
10. Windows Server 2012 and SMB 3.0
• Great platform for inexpensive, simple
storage of Hyper-V virtual machines
• Scalable to meet various levels of demand
• Use large low-cost disks and take full
advantage of functionality in the virtualization
stack
12. Virtualized Exchange and SMB 3.0
• Exchange 2013 supports VHD storage on SMB 3.0 file
shares
• Can be shares presented from Windows Server 2012 or other
implementations of SMB 3.0
• Specific to VHD storage – no direct access to shares from
Exchange
• No change to our support of downlevel SMB & NFS
• SMB 3.0 provides the ability to survive hardware failures
that would otherwise impact file access
• Still need to design for HA & failure handling
13. NFS is not supported because
• Particular NFS implementations have shown
• Significant performance issues historically, and Exchange is very
sensitive to high IO latencies
• Reliability issues that can result in database corruption and the
need to perform database reseeds
• NFS is a standard – there are many implementations, some
better than others
• Given that there are a number of alternatives for presenting
storage to a hypervisor, we don’t support NFS (or older
versions of SMB)
14. Host-based Migration
• We support Live Migration and similar 3rd-party technologies with Exchange
• We don’t support Hyper-V’s Quick Migration or any other solution that saves point-in-time state to disk
• VM has to remain online during migration
15. Hyper-V Replica
• Replica provides DR for a VM via log shipping
to a remote hypervisor
• Makes sense for applications that don’t have DR capability built into the product
• Not supported for Exchange
• Use DAGs for HA, SR and DR
17. CPU Resources
• Plan to add CPU overhead to Exchange VMs
• ~10% for Hyper-V
• Follow vendor guidance for SVVP hypervisors
• Use the Server Role Requirements Calculator
• http://aka.ms/e2013calc
• Note “Server Role Virtualization” and “Hypervisor
CPU Adjustment Factor”
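The overhead adjustment above amounts to inflating the physical sizing requirement by the hypervisor factor. A minimal sketch, assuming the ~10% Hyper-V figure from this slide; the megacycle number is illustrative, not output from the calculator:

```python
# Sketch: apply a hypervisor CPU overhead factor when sizing an
# Exchange VM. ~10% is the Hyper-V figure; follow vendor guidance
# for SVVP hypervisors. Integer percent math keeps the result exact.

HYPERV_OVERHEAD_PCT = 10  # ~10% CPU overhead for Hyper-V

def adjusted_megacycles(required: int, overhead_pct: int = HYPERV_OVERHEAD_PCT) -> float:
    """Inflate the physical-hardware requirement to cover hypervisor overhead."""
    return required * (100 + overhead_pct) / 100

# Hypothetical workload sized at 20,000 megacycles on physical hardware
print(adjusted_megacycles(20_000))  # -> 22000.0
```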
18. General Considerations
• Memory sizing generally unchanged between physical and virtualized deployments
• Exchange is not NUMA aware, but will take advantage of OS
optimizations provided to host
• Storage should be optimized for low IO latency and high
service and data availability
• Take advantage of hypervisor networking flexibility to provide
availability and performance
• In general: size using guidance for physical, apply to virtual
19. Co-location with other VMs
• As a best practice, never overcommit any
resources in a way that could impact
Exchange VMs
• Use reservation options to ensure that
Exchange gets the resources it needs
• Allows other workloads to take advantage of
overcommit
20. Users per Host
• Avoid extreme scale-up to ensure high
availability of Exchange service
• Modern hypervisors are capable of hosting
hundreds of VMs per host, but plan for a
small number of Exchange 2013 mailbox VMs
per-host
• Use remaining capacity for other workloads
22. Failure Domains
• Understanding failure domains is critical for virtualized
Exchange design
• Stuff fails – embrace failure and prepare for it with
redundancy and multiple paths to infrastructure
• Placing multiple copies of the same mailbox database on
the same infrastructure lowers availability
• Placing any dependencies of Exchange on the same
infrastructure also lowers availability
• Active Directory
• Witness servers
23. Oversubscription
• Hypervisors don’t make CPU resources appear out of thin air
• Oversubscription can help with hardware consolidation, but it doesn’t help provide reliable high-performance Exchange services
• Proper Exchange sizing ensures that resources are available on demand, so don’t allow hypervisors to yank those resources away
• CPU-constrained Exchange servers will experience reduced throughput:
  • Delivery throughput reduction = queue growth
  • Content indexing throughput reduction = increased IOPS
  • Store ROP processing throughput reduction = RPC latency & end-user pain
24. Workload Management
• Dynamically adjusts background tasks to
ensure that resources are being consumed
efficiently
• Monitors resource consumption and makes
decisions based on resource availability
• Inconsistent resource assignment results in
bad WLM decisions
25. Dynamic Memory
• Hyper-V’s Dynamic Memory and VMware’s
Ballooning are fantastic for lab environments
• Not supported for production Exchange
servers
• Statically assign memory resources to
Exchange VMs
26. Host-based Failover Clustering
• Enables you to automatically fail over VMs to another server in the event of a hardware failure
• This:
  • Is not an Exchange-aware solution
  • Only protects against server hardware/network failure
  • Does not provide HA in the event of storage failure / data corruption
  • Requires a more expensive and complicated shared storage deployment
• If you are going to deploy host-based failover clustering, deploy the Exchange mailbox server VMs in a DAG
27. Host-based Migration
• Migration of Mailbox servers can result in cluster
heartbeat timeouts
• Result is eviction of server from cluster (DAG) and
failover of active database copies
• Consider carefully adjusting heartbeat timeout
settings
Import-Module FailoverClusters
# SameSubnetThreshold: consecutive missed heartbeats tolerated before
# a node is considered down; SameSubnetDelay: heartbeat interval in ms
(Get-Cluster).SameSubnetThreshold = 5
(Get-Cluster).SameSubnetDelay = 1000
28. Hypervisor Snapshots
• Hypervisor snapshots are great for labs
• Multiple machines need to roll back simultaneously
• Not supported for production servers
• Exchange system components cannot travel
backwards in time (log shipping, etc.)
29. Hyperthreading
• It’s OK to use hyperthreading on VM
hosts, but size for physical processor cores
(not virtual)
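The "size for physical cores" rule can be sketched as a small conversion from what the OS reports to what sizing should use. A minimal illustration, assuming the common case of 2 logical processors per core when hyperthreading is on; the counts are hypothetical.

```python
# Sketch: with hyperthreading enabled, the host reports logical
# processors (typically 2 per physical core), but Exchange sizing
# must count physical cores only.

THREADS_PER_CORE = 2  # assumed: typical with hyperthreading enabled

def cores_for_sizing(logical_processors: int, ht_enabled: bool) -> int:
    """Physical core count to feed into Exchange sizing calculations."""
    if ht_enabled:
        return logical_processors // THREADS_PER_CORE
    return logical_processors

# Hypothetical host exposing 32 logical processors with HT on
print(cores_for_sizing(32, ht_enabled=True))  # -> 16
```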
32. Summary
• It’s important to understand when to
virtualize Exchange – it’s not always the right
choice
• Don’t oversubscribe; ensure that Exchange always gets the resources it needs
• Not all hypervisor features make sense for
Exchange