8. Hardware Compatibility Guide. The HCG lists all the hardware components that are supported by each version of ESX and ESXi. It is split into sub-guides covering systems (server makes/models), storage devices (SAN/iSCSI/NFS) and I/O devices (NICs/storage controllers).
9. Hardware Compatibility Guide. Updated frequently, with new hardware added and older hardware removed. Why is this guide important? ESX/ESXi has a limited set of hardware device drivers, and VMware only provides support for server hardware that is listed on the HCG.
10. Hardware Compatibility Guide. Hardware may still work even if not listed on the HCG; the critical area is I/O adapters. Vendors are responsible for certifying their h/w for the HCG: they must fill out an application, and after VMware approval a 3rd-party testing lab certifies the h/w for vSphere.
11. Hardware Compatibility Guide. VMware does not enforce an expiration period for h/w added to the HCG; it is up to each vendor to certify their h/w for the most current VMware product releases. VMware GSS will provide support for vSphere running on h/w not listed on the HCG if the problem is not h/w-related.
12. Hardware Compatibility Guide. Check the guide before buying h/w! Also check unofficial guides (vm-help.com). For newer hardware not yet listed on the HCG, contact the h/w vendor.
13. Ensuring Hardware Compatibility. If you plan on using features that require specific h/w (e.g. Fault Tolerance), do your homework. Check with vendors to see if they have the required h/w (e.g. Intel VT-d), and also check the HCG. CPU choice can be critical; check the VMware KB and the Intel/AMD websites for CPU features.
14. Ensuring Hardware Compatibility. Checking for CPU P-state/C-state support can be tricky. Make no assumptions with I/O adapters; on-board whitebox NICs are often not supported. SATA adapters are OK, but SATA w/RAID is not supported. Almost all shared storage will work.
45. Use the same processor make & model if you want to use “fun” features such as VMotion, incl. DRS and HA.
46. CPU Considerations – CPU ID. For CPU details, including 64-bit support, use the CPU ID Utility from VMware. Download from http://www.vmware.com/download/shared_utilities.html
47. CPU Considerations – EVC. Enhanced VMotion Compatibility (EVC) is designed to further ensure CPU compatibility between ESX hosts.
48. CPU Considerations – FT. For the list of Fault Tolerance (FT) compatible CPUs, see http://kb.vmware.com/kb/1008027. Also see the VMware SiteSurvey utility.
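A quick way to compare CPU details across hosts (useful when sanity-checking VMotion/EVC/FT plans) is PowerCLI. This is a minimal sketch, assuming PowerCLI is installed; the vCenter name is a placeholder.

# Connect to vCenter (server name is a placeholder)
Connect-VIServer -Server vcenter.lab.local
# List CPU make/model and core counts for every host in the inventory
Get-VMHost | Select-Object Name, Manufacturer, Model, ProcessorType, NumCpu, HyperthreadingActive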
50. Enhanced PowerNow! by AMD. These technologies (AMD Enhanced PowerNow! and its Intel counterpart, Enhanced SpeedStep) enable a server to dynamically switch CPU frequencies and voltages (referred to as Dynamic Voltage & Frequency Scaling, or DVFS).
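One way to see whether a host exposes DVFS (P-state) control to ESX is to look at the CPU power management info the vSphere API reports for the host. A hedged sketch, assuming an open PowerCLI session; the host name is a placeholder.

# Fetch the low-level view of one host (name is a placeholder)
$hostView = Get-VMHost -Name esx01.lab.local | Get-View
# Reports the current power policy and what the hardware supports
$hostView.Hardware.CpuPowerManagementInfo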
52. Memory – ECC & Registered. More lower-capacity DIMMs vs. fewer higher-capacity DIMMs. ECC or non-ECC? (That is the question.) Registered vs. unregistered DIMMs.
53. Disks & Storage Controller. The most problematic component with regard to compatibility. Lots of choices: RAID, SAS, SATA, SSD; IOPS versus capacity. ESXi can be run from a USB memory stick/SD card, and if a shared storage appliance is used the local disk controller is not important.
55. Dedicated hardware-based (e.g. PCIe) array controllers are preferable.
68. Use the Vyatta Core VA for routing requirements – it’s free!
69. Installing ESXi onto a USB flash drive. A very convenient and easy way to use ESXi. Simple requirements: a 1GB flash drive and the ESXi Installable ISO image.
70. Installing ESXi onto a USB flash drive. You can use any flash drive, but it is officially only supported on h/w vendor-supplied flash drives. Performance can vary widely between brands, sizes & models. The server must support booting from a USB drive. Use an internal USB port instead of an external one.
71. Installing ESXi onto a USB flash drive. Install ESXi as normal but select the USB flash drive as the target. You can also use Workstation to perform the install inside a VM. Quality flash drives can last many years and over 10,000 write cycles. Use USB imaging tools to clone or back up flash drives.
72. Shared Storage – Physical Devices Lots of devices to choose from
74. Shared Storage – Physical Devices. When using shared storage, gigabit (1Gb) networking is a must. iSCSI/NFS support is built into vSphere and works with any pNICs. Most affordable shared storage devices are listed on the vSphere HCG. Many units have lots of advanced features and are multi-functional, with multiple RAID levels & multiple NICs.
75. Shared Storage – Physical Devices. Choosing between iSCSI & NFS is often personal preference; they offer similar performance but have different characteristics, and some storage units support both. Budget often dictates what you get: in general, the more you spend, the better performance you’ll get.
76. Shared Storage – Physical Devices. Many units offer special RAID technology; try not to mix drive speeds/sizes. More spindles means better performance. Many units are expandable. Low-cost rack-mount units are available as well (Synology RS409, Iomega ix12-300r, Netgear ReadyNAS 2100).
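As an illustration of how little is involved in consuming NFS from one of these units, the sketch below mounts an NFS export as a datastore on every host with PowerCLI; the filer address, export path and datastore name are placeholders.

# Mount the same NFS export on every ESX/ESXi host as a shared datastore
# (NAS address, export path and datastore name are placeholders)
Get-VMHost | ForEach-Object {
    New-Datastore -VMHost $_ -Nfs -Name "nfs-lab01" -NfsHost 10.0.0.50 -Path "/mnt/vol1/vmware"
}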
77. Shared Storage – VSAs. Virtual Storage Appliances can turn local storage into iSCSI/NFS shared storage that is available to any host. They can run physically or virtually, and can be cheaper than buying a dedicated device, but are more complicated to set up and maintain.
78. Shared Storage – VSAs. There are many VSA products to choose from. Paid products offer more features such as clustering, replication and snapshots.
79. Shared Storage – VSAs. OpenFiler is a popular choice. It is available as an ISO image to install bare-metal on a server, or as a pre-built virtual machine, and is managed via a web browser. Many advanced features: NIC bonding, iSCSI or NFS, clustering. Paid support is available.
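Pointing an ESX/ESXi host at a VSA such as OpenFiler is mostly a matter of enabling the software iSCSI initiator and adding the target. A minimal PowerCLI sketch; the host and target addresses are placeholders.

# Enable the software iSCSI initiator on one host (host name is a placeholder)
$vmhost = Get-VMHost -Name esx01.lab.local
Get-VMHostStorage -VMHost $vmhost | Set-VMHostStorage -SoftwareIScsiEnabled $true
# Add the VSA's address as a dynamic discovery (send targets) entry on the software iSCSI adapter
$hba = Get-VMHostHba -VMHost $vmhost -Type IScsi
New-IScsiHbaTarget -IScsiHba $hba -Address 10.0.0.60 -Type Send
# Rescan so any new LUNs show up
Get-VMHostStorage -VMHost $vmhost -RescanAllHba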
83. Transportable Awfully Revolutionary Datacentre of Invisible Servers (vT.A.R.D.I.S: nano). 1 x physical HP ML115 G5 with 8GB RAM, 128GB SSD, iSCSI virtual SAN(s), vSphere 4 Classic, 8 x ESXi virtual machines, 60 x nested virtual machines. It’s bigger on the inside than the outside.
84. Nested VMs – .VMX Hackery. Aspirin at the ready… ESX as a virtual machine, running its own virtual machines: a VM INSIDE another VM. This isn’t a supported configuration, but hey, it’s for lab/playing. To enable VMs to be run inside another VM, set monitor_control.restrict_backdoor = "TRUE" on the virtual ESXi hosts only.
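If the virtual ESXi hosts are managed by vCenter, the same .VMX setting can be pushed with PowerCLI instead of editing the file by hand. A sketch; the VM name pattern is a placeholder, and the setting takes effect at the next power-on.

# Add the nested-VM .vmx option to every virtual ESXi host (name pattern is a placeholder)
Get-VM -Name "esxi-nested-*" | ForEach-Object {
    New-AdvancedSetting -Entity $_ -Name "monitor_control.restrict_backdoor" -Value "TRUE" -Confirm:$false
}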
85. Nested ESX, cool… but what about nesting others? Hyper-V: with .VMX hacks you can install the role in a VM, but it cannot run nested VMs – not possible. XenServer: can run nested Linux VMs (not tried), but can’t run nested Windows VMs.
86. vTARDIS:nano Demo. VM provisioning script (PowerShell). It’s bigger on the inside than it is on the outside. .VMX hackery.
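The provisioning script itself isn't reproduced here, but a rough PowerCLI sketch of the idea (cloning a batch of lab guests from a template onto the SSD-backed datastore) might look like this; the template, host and datastore names are placeholders.

# Clone a batch of lab guests from a template (all names below are placeholders)
$template  = Get-Template -Name "lab-guest-template"
$vmhost    = Get-VMHost -Name "esxi-nested-01.lab.local"
$datastore = Get-Datastore -Name "iscsi-ssd-01"
1..10 | ForEach-Object {
    New-VM -Name ("labvm{0:D2}" -f $_) -Template $template -VMHost $vmhost -Datastore $datastore
}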
88. ESX VM template with multiple vNICs & a mounted .ISO, ready to start the install (or use PXE).
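Building that template-style ESX VM by hand is a handful of PowerCLI calls: create the shell, add extra vNICs, and attach the install ISO. A sketch with placeholder names, sizes and paths.

# Create an empty VM shell for a virtual ESX host (names, sizes and ISO path are placeholders)
$vm = New-VM -Name "esxi-template" -VMHost (Get-VMHost -Name esx01.lab.local) `
             -NumCpu 2 -MemoryMB 2048 -DiskMB 8192 -GuestId otherGuest64
# Add extra vNICs for the vMotion and iSCSI port groups
New-NetworkAdapter -VM $vm -NetworkName "vMotion" -StartConnected
New-NetworkAdapter -VM $vm -NetworkName "iSCSI" -StartConnected
# Attach the installer ISO so the VM boots straight into setup (or skip this and use PXE)
New-CDDrive -VM $vm -IsoPath "[iscsi-ssd-01] iso/esxi-installer.iso" -StartConnected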
96. vTARDIS – Network Diagram (physical host network config): VM networks for the guest iSCSI VLAN and guest vMotion VLAN, 10.0.0.x admin network, VMkernel ports for the physical hosts.
97. vTARDIS – Network Diagram (virtual ESXi guest network): the VMkernel ports in the ESXi guest are really vNICs. Note: no need to specify a VLAN tag – it is done on the host.
98. vTARDIS:nano Networking. All in-memory, no external switching. Cross-over cable to the admin console (my laptop). Physical host vSwitch set to promiscuous mode. dvSwitch for VM traffic.
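Nested ESXi only sees its guests' traffic if the outer vSwitch (or port group) accepts promiscuous mode. One way to flip that with PowerCLI is sketched below; the host and vSwitch names are placeholders.

# Allow promiscuous mode on the outer host's vSwitch so nested-VM traffic is forwarded
# (host and vSwitch names are placeholders)
$vmhost = Get-VMHost -Name esx01.lab.local
Get-VirtualSwitch -VMHost $vmhost -Name vSwitch1 |
    Get-SecurityPolicy |
    Set-SecurityPolicy -AllowPromiscuous $true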
99. Layer 3 Routing. A complete software solution: multiple vNICs connected to VLANs, simple routing configuration. Vyatta Core virtual router community edition – free. Internet access: Smoothwall or IPCop – open-source firewall/NAT router and proxy. Simon Gallagher (vinf.net), VMworld 2010.
100. Storage – Performance. An SSD & SATA combo is the way to go. 128GB SSD – lots of IOPS! (~$400). OpenFiler virtual machine with a 30GB VMDK on the SSD acts as the iSCSI target for the ESXi cluster nodes. All disk access stays in-memory within the host, no physical networking. Heavy use of thin provisioning & linked clones.
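Linked clones aren't exposed in the vSphere Client UI, but PowerCLI's New-VM can create them from a snapshot of a source VM. A rough sketch; the source VM, snapshot, host and datastore names are placeholders.

# Create a space-efficient linked clone from a snapshot of a master VM
# (VM, snapshot, host and datastore names are placeholders)
$master   = Get-VM -Name "labvm-master"
$snapshot = Get-Snapshot -VM $master -Name "clean-install"
New-VM -Name "labvm-clone01" -VM $master -LinkedClone -ReferenceSnapshot $snapshot `
       -VMHost (Get-VMHost -Name esx01.lab.local) -Datastore (Get-Datastore -Name "iscsi-ssd-01")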
Simon S: Why build a vSphere lab? Small business use, exam study, hands-on learning, centralized home infrastructure.
Simon S: There are many components that make up a vSphere lab: server; storage (physical & virtual (VSA)); network (switches and, in some cases, routers – though there are VA router options); hypervisor (vSphere ESX/ESXi); operating system (e.g. Windows, Red Hat); power & cooling, a particular consideration if running your lab from home; and time – large amounts of time can be spent building and working with your lab, be warned.
Simon S: vSphere lab servers can come in a range of different sizes and form factors – all varying in age, physical resource capabilities and manufacturer: laptop/desktop PC, white box, entry-level server, old enterprise server.
Simon S: You can never have enough memory. In the average lab and production vSphere environment you will experience memory limitations before any of the other physical server resources such as CPU, network and often storage. (Though providing insufficient IOPS to a VM is also a common source of performance bottlenecks.) Most laptops, PCs and white box solutions based on commodity mother/system boards will only have 4-6 DIMM sockets with an 8GB limit. This is of course changing with time, as higher-capacity DIMMs become more the norm and more high-end commodity mother/system boards are now starting to provide 12GB+ of maximum memory capacity as standard. Even entry-level SMB servers such as the HP ML110/115 have a relatively limited maximum memory configuration of 8GB. The benefit of using enterprise-level servers is that they provide more DIMM sockets, though the downside is that populating these DIMM sockets with enterprise-level registered memory can be a costly affair.
Simon S: Error Correcting Code (ECC) memory – this type of memory is often found in servers, as it is able to detect multi-bit errors and correct single-bit errors during the transmission and storage of data on the DIMM. On ECC memory DIMMs there are extra modules that store parity or ECC information. ECC memory is generally (though not always on low-end DIMMs) more expensive than non-ECC. Registered (aka buffered) and unregistered memory – often confused with ECC/non-ECC memory. A registered DIMM contains a register that operates as a temporary holding area (buffer) for address and command signals moving between the memory module and CPU, which increases the reliability of the data flow to and from the DIMM. Almost always found in enterprise-level servers only.
Simon S: Most home lab switches will be Layer 2 (i.e. non-routing). For routing within a vSphere lab environment, consider using the popular Vyatta router VA – there is a free version! What to look for in a network switch: VLAN tagging, QoS, jumbo frames. Popular gigabit switches – Linksys SLM series smart switches, HP ProCurve 1810G.
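If the switch supports VLAN tagging, the matching port groups on the host side are one-liners in PowerCLI. A sketch; the host, vSwitch names and VLAN IDs are placeholders.

# Create VLAN-tagged port groups for iSCSI and vMotion traffic on an existing vSwitch
# (host, vSwitch names and VLAN IDs are placeholders)
$vswitch = Get-VirtualSwitch -VMHost (Get-VMHost -Name esx01.lab.local) -Name vSwitch1
New-VirtualPortGroup -VirtualSwitch $vswitch -Name "iSCSI" -VLanId 20
New-VirtualPortGroup -VirtualSwitch $vswitch -Name "vMotion" -VLanId 30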