vSphere 4.1: Delta to 4.0. Tech Sharing for Partners. Iwan ‘e1’ Rahabok, Senior Systems Consultant. e1@vmware.com  |  virtual-red-dot.blogspot.com  |  tinyurl.com/SGP-User-Group  |  facebook.com/e1ang. August 2010
Audience Assumption. This is a level 200 - 300 presentation. It assumes: good understanding of vCenter 4, ESX 4, ESXi 4 (preferably hands-on); overview understanding of related products like VUM, Data Recovery, SRM, View, Nexus, Chargeback, CapacityIQ, vShield Zones, etc; good understanding of related storage, server and network technology. We will only cover the delta between 4.1 and 4.0. Target audience: VMware Specialists (SE + Delivery) from partners.
Agenda New features Server Storage Network Management Upgrade
4.1 New Feature (over 4.0, not 3.5): Server
4.1 New Feature (over 4.0, not 3.5): Server
4.1 New Feature (over 4.0, not 3.5): Storage
4.1 New Feature (over 4.0, not 3.5): Network
4.1 New Feature: Management
Builds: ESX build 260247 VC build 258902 Some stats: 4000 development weeks were spent to get to FC 5100 QA weeks were spent to get to FC 872 beta customers downloaded and tried it out 2012 servers, 2277 storage arrays, and 2170 IO devices are already on the HCL  
Consulting Services: Kit The vSphere Fundamentals services kit Includes core services enablement materials for vSphere Jumpstarts, Upgrades, Converter/P2V and PoCs.   The update reflects what’s new in vSphere 4.1 - including new resource limits, memory compression, Storage IO Control, vNetwork Traffic Management, and vSphere Active Directory Integration.  The kit is intended for use by PSO Consultants, TAMs, and SEs to help with delivering services engagements, PoCs, or knowledge transfer sessions with customers.  Located at Partner Central – Services IP Assets https://na6.salesforce.com/sfc/#version?selectedDocumentId=069800000000SSi For delivery partner:  Please  download this.
4.1 New Features: Server
PXE Boot Retry. Virtual Machine -> Edit Settings -> Options -> Boot Options. Failed Boot Recovery is disabled by default. Enable it and set how many seconds to wait before automatically retrying the boot.
Wide NUMA Support. A wide-VM is defined as a VM that has more vCPUs than the available cores on a NUMA node, e.g. a 5-vCPU VM on a quad-core server. Only the cores count; hyperthreading threads don't. The ESX 4.1 scheduler introduces wide-VM NUMA support, which improves memory locality for memory-intensive workloads. Based on testing with micro benchmarks, the performance benefit can be up to 11-17%. How it works: ESX 4.1 allows wide-VMs to take advantage of NUMA management. NUMA management means that a VM is assigned a home node where memory is allocated and vCPUs are scheduled. By scheduling vCPUs on a NUMA node where memory is allocated, the memory accesses become local, which is faster than remote access.
ESXi Enhancements to ESXi. Not applicable to ESX
Transitioning to ESXi. ESXi is our architecture going forward.
Moving toward ESXi Permalink to: VMware ESX and ESXi 4.1 Comparison Service Console (COS) Agentless vAPI-based Management Agents Hardware Agents Agentless CIM-based Commands forconfiguration anddiagnostics vCLI, PowerCLI Local Support Console CIM API vSphere API Infrastructure Service Agents Native Agents:NTP, Syslog, SNMP VMware ESXi “Classic” VMware ESX
Software Inventory - Connected to ESXi/ESX. Before: enumerate an instance of CIM_SoftwareIdentity. From vSphere 4.1: the enhanced CIM provider now displays greater detail on installed software bundles.
Software Inventory – Connected to vCenter. Before vs. from vSphere 4.1: enumerate an instance of CIM_SoftwareIdentity.
Additional Deployment Option Scripted Installation Numerous choices for installation Installer booted from CD-ROM (default) Preboot Execution Environment (PXE) ESXi Installation image on CD-ROM (default), HTTP/S, FTP, NFS Script can be stored and accessed Within the ESXi Installer ramdisk On the installation CD-ROM HTTP / HTTPS, FTP, NFS  Config script (“ks.cfg”) can include Preinstall Postinstall First boot Cannot use scripted installation to install to a USB device
PXE Boot Requirements. PXE-capable NIC. DHCP server (IPv4); use an existing one. Media depot + TFTP server + gPXE: a server hosting the entire contents of the ESXi media. Protocol: HTTP/HTTPS, FTP, or NFS server. OS: Windows/Linux server. Info: we recommend the method that uses gPXE; otherwise you might experience issues while booting the ESXi installer on a heavily loaded network. TFTP is a lightweight version of the FTP service, typically used only for network-booting systems or loading firmware on network devices such as routers.
PXE boot. PXE uses DHCP and Trivial File Transfer Protocol (TFTP) to bootstrap an OS over the network. How it works: a host makes a DHCP request to configure its NIC, then downloads and executes a kernel and support files. PXE booting the installer provides only the first step of installing ESXi; to complete the installation, you must provide the contents of the ESXi DVD. Once the ESXi installer is booted, it works like a DVD-based installation, except that the location of the ESXi installation media must be specified.
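To make the DHCP side of this concrete, here is a minimal sketch of a dhcpd.conf fragment that points PXE clients at a TFTP server carrying a gPXE-enabled boot loader. The addresses and the boot-file name are illustrative assumptions, not values from this deck:
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  next-server 192.168.1.10;      # TFTP server holding the boot loader
  filename "gpxelinux.0";        # gPXE-enabled PXELINUX boot loader
}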
Additional Deployment Option
Sample ks.cfg file
# Accept the EULA (End User Licence Agreement)
vmaccepteula
# Set the root password to vmware123
rootpw vmware123
# Install the ESXi image from CDROM
install cdrom
# Auto partition the first disk; if a VMFS exists it will overwrite it
autopart --firstdisk --overwritevmfs
# Create a partition called Foobar on the disk identified as vmhba1:C0:T1:L0,
# growing to a maximum size of 4000
partition Foobar --ondisk=mpx.vmhba1:C0:T1:L0 --grow --maxsize=4000
# Set up the management network on vmnic0 using DHCP
network --bootproto=dhcp --device=vmnic0 --addvmportgroup=0
%firstboot --level=90.1 --unsupported --interpreter=busybox
# On this first boot, save the current date to a temporary file
date > /tmp/foo
# Mount an NFS share and put it at /vmfs/volumes/www
esxcfg-nas --add --host 10.20.118.5 --share /var/www www
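The installer is told where to find this script through a boot option appended to the installer boot line (for example in the PXE configuration or at the boot prompt). A hedged example with an illustrative URL only; the script can equally be referenced from the installer ramdisk, the CD-ROM, FTP or NFS, matching the locations listed earlier:
ks=http://192.168.1.10/esxi41/ks.cfg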
Full Support of Tech Support Mode There you go  2 types Remote: SSH Local: Direct Console
Full Support of Tech Support Mode Enter to toggle. That’s it! Disable/Enable  Timeout automatically disables TSM (local and remote) Running sessions are not terminated. All commands issued in Tech Support Mode are sent to syslog
Full Support of Tech Support Mode. Recommended uses: support, troubleshooting and break-fix; scripted deployment (preinstall, postinstall and first-boot scripts). Discouraged uses: any other scripts; running commands/scripts periodically (cron jobs); leaving it open for routine access or a permanent SSH connection. Admin will be notified when active.
Full Support of Tech Support Mode We can also enable it via GUI Can enable in vCenter or DCUI Enable/Disable
Security Banner A message that is displayed on the direct console Welcome screen.
Total Lockdown
Total Lockdown: DCUI, Lockdown Mode (disallows all access except root on DCUI) and Tech Support Mode (local and remote). If all are configured, then no local activity is possible (except pulling the plugs).
Additional commands in Tech Support Mode. vscsiStats is now available in the console. Output is raw data for histograms; use a spreadsheet to plot the histogram. Some use cases: identify whether IOs are sequential or random; optimizing for IO sizes; checking for disk misalignment; looking at storage latency in more detail.
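A sketch of a typical vscsiStats session in Tech Support Mode; the world-group ID 12345 is a placeholder taken from the -l output, and the exact flag set should be verified with vscsiStats -h on your build:
vscsiStats -l                  # list VMs (world group IDs) and their virtual disks
vscsiStats -s -w 12345         # start collecting for that VM
vscsiStats -p ioLength         # print the I/O size histogram (size/alignment analysis)
vscsiStats -p seekDistance     # sequential vs. random pattern
vscsiStats -p latency          # latency histogram
vscsiStats -x                  # stop collection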
Additional commands in Tech Support Mode Additional commands for troubleshooting nc (netcat) http://en.wikipedia.org/wiki/Netcat tcpdump-uw http://en.wikipedia.org/wiki/Tcpdump
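Two quick illustrations of how these can be used from Tech Support Mode; the target address and the VMkernel interface name are placeholders:
nc -z 10.0.0.50 902            # check whether a TCP port on a remote host is reachable
tcpdump-uw -i vmk0 -n          # capture traffic on a VMkernel interface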
More ESXi Services listed More services are now shown in GUI. Ease of control For example, if SSH is not running, you can turn it on from GUI. ESXi 4.0 ESXi 4.1
ESXi Diagnostics and Troubleshooting
During normal operations: DCUI (misconfigs / restart mgmt agents), vCLI, vCenter, vSphere APIs. TSM: advanced troubleshooting (GSS). ESXi: Remote Access / Local Access.
Common Enhancements for both ESX and ESXi. 64-bit User World: running VMs with very large memory footprints implies that we need a large address space for the VMX; 32-bit user worlds (VMX32) do not have sufficient address space for VMs with large memory, and 64-bit user worlds overcome this limitation. NFS: the number of NFS volumes supported is increased from 8 to 64. Fibre Channel: end-to-end support for 8 Gb (HBA, switch & array). VMFS: version changed to 3.46; no customer-visible changes; changes relate to algorithms in the vmfs3 driver to handle the new VMware APIs for Array Integration (VAAI).
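Mounting more than the old default of 8 NFS volumes still requires raising the NFS.MaxVolumes advanced option on each host; a brief sketch using esxcfg-advcfg (the option path is assumed to follow the usual /NFS/MaxVolumes form, so verify it in Advanced Settings first):
esxcfg-advcfg -g /NFS/MaxVolumes     # show the current limit
esxcfg-advcfg -s 64 /NFS/MaxVolumes  # raise it to the new 4.1 maximum of 64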
Common Enhancements for both ESX and ESXi. VMkernel TCP/IP stack upgrade: upgraded to a version based on BSD 7.1; result: improved FT logging, vMotion and NFS client performance. Pluggable Storage Architecture (PSA): new naming convention; new filter plugins to support VAAI (vStorage APIs for Array Integration); new PSPs (Path Selection Policies) for ALUA arrays; new PSP from DELL for the EqualLogic arrays.
USB pass-through New Features for both ESX/ESXi
USB Devices 2 steps: Add USB Controller Add USB Devices
USB Devices. Only devices listed in the manual are supported. Mostly for ISV licence dongles; a few external USB drives. Limited list of devices for now.
Example 1. After vMotion, the VM will be on another (remote) ESXi host. Inter-ESXi communication will use the management network (ESXi has no SC network). You cannot multi-select devices at this stage; add them one by one.
Example 2: adding a UPS
USB Devices. Up to 20 devices per VM. Up to 20 devices per ESX host. A device can only be owned by 1 VM at a given time; no sharing. Supported: vMotion (communication via the management network), DRS. Unsupported: DPM (DPM is not aware of the device and may turn the host off, which may cause loss of data, so disable DRS for this VM so it stays on this host only), Fault Tolerance. Design consideration: take note of the situation when the ESX host is not available (planned or unplanned downtime).
MS AD integration New Features for both ESX/ESXi
AD Service Provides authentication for all local services vSphere Client Other access based on vSphere API  DCUI Tech Support Mode (local and remote) Has nominal AD groups functionality Members of “ESX Admins” AD group have Administrative privilege Administrative privilege includes: Full Administrative role in vSphere Client and vSphere API clients DCUI access Tech Support Mode access (local and remote)
The Likewise Agent. ESX uses an agent from Likewise to connect to MS AD and to authenticate users with their domain credentials. The agent integrates with the VMkernel to implement the mapping for applications such as the logon process (/bin/login) which uses a pluggable authentication module (PAM). As such, the agent acts as an LDAP client for authorization (join domain) and as a Kerberos client for authentication (verify users). The vMA appliance also uses an agent from Likewise. ESX and vMA use different versions of the Likewise agent to connect to the Domain Controller: ESX uses version 5.3 whereas vMA uses version 5.1.
Joining AD: Step 1
Joining AD: Step 2 1. Select “AD” 2. Click “Join Domain” 3. Join the domain. Full name. @123.com
AD Service A third method for joining ESX/ESXi hosts and enabling Authentication Services to utilize AD is to configure it through Host Profiles
AD Likewise Daemons on ESX. Three Likewise daemons run in the Service Console: lwiod, netlogond and lsassd (see the process listing below).
netlogond is the Likewise Site Affinity service - detects optimal AD domain controller, global catalogue and data caches. Launched from /etc/init.d/netlogond script.
lsassd is the Likewise Identity & Authentication service. It does authentication, caching and idmap lookups. This daemon depends on the other two daemons running. Launched from /etc/init.d/lsassd script.
root     18015     1  0 Dec08 ?     00:00:00 /sbin/lsassd --start-as-daemon
root     31944     1  0 Dec08 ?     00:00:00 /sbin/lwiod --start-as-daemon
root     31982     1  0 Dec08 ?     00:00:02 /sbin/netlogond --start-as-daemon
ESX Firewall Requirements for AD Certain ports in SC are automatically opened in the Firewall Configuration to facilitate AD.  Not applicable to ESXi Before After
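On ESX classic the Service Console firewall can also be inspected from the console with esxcfg-firewall; a brief, hedged sketch, where the service name is a placeholder for whichever named service you need to verify:
esxcfg-firewall -q                 # query currently open ports and enabled services
esxcfg-firewall -e <serviceName>   # enable a named service if it was not opened automatically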
Time Sync Requirement for AD Time must be in sync between the ESX/ESXi server and the AD server.  For the Likewise agent to communicate over Kerberos with the domain controller, the clock of the client must be within the domain controller's maximum clock skew, which is 300 seconds, or 5 minutes, by default.  The recommendation would be that they share the same NTP server.
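On ESX classic, a minimal way to meet this is the standard Service Console NTP setup (a sketch only; ntp.example.com is a placeholder, and the same configuration can be done in the vSphere Client under Time Configuration):
echo "server ntp.example.com" >> /etc/ntp.conf   # add the shared NTP server
service ntpd restart                             # pick up the change
chkconfig ntpd on                                # start NTP on every boot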
vSphere Client Now when assigning permissions to users/groups, the list of users and groups managed by AD can be browsed by selecting the Domain.
Info in AD. The host should also be visible on the Domain Controller in the AD Computers objects listing. Looking at the ESX Computer Properties shows a Name of RHEL (as it is the Service Console on the ESX host) and a Service Pack of 'Likewise Identity 5.3.0'.
Memory Compression New Features for both ESX/ESXi
Memory Compression. The VMkernel implements a per-VM compression cache to store compressed guest pages. When a guest page (4 KB page) needs to be swapped, the VMkernel will first try to compress the page. If the page can be compressed to 2 KB or less, the page will be stored in the per-VM compression cache. Otherwise, the page will be swapped out to disk. If a compressed page is accessed again by the guest, the page will be decompressed online.
Changing the value of cache size
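A hedged sketch of how the cache size can be adjusted from the console; the option names Mem.MemZipEnable and Mem.MemZipMaxPct are assumptions based on the vSphere 4.1 documentation and should be verified under Software -> Advanced Settings -> Mem before use:
esxcfg-advcfg -g /Mem/MemZipEnable     # 1 = memory compression enabled (the default)
esxcfg-advcfg -s 20 /Mem/MemZipMaxPct  # let the compression cache grow to 20% of the VM's memory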
Virtual Machine Memory Compression Virtual Machine -> Resource Allocation Per-VM statistic showing compressed memory
Monitoring Compression 3 new counters introduced to monitor Host level, not VM level.
Power Management
Power consumption chart. Per ESX host, not per cluster. Needs hardware integration; different HW makers expose different info.
Performance Graphs – Power Consumption. We can now track the power consumption of VMs in real time. Enabled through Software Settings -> Advanced Settings -> Power -> Power.ChargeVMs
Host power consumption: in some situations you may need to edit /usr/share/sensors/vmware to get support for the host; different HW makers have different APIs. VM power consumption: experimental, off by default.
ESX Features only for ESX (not ESXi)
ESX: Service Console firewall. Changes in ESX 4.1: ESX 4.1 introduces these additional configuration files located in /etc/vmware/firewall/chains: usercustom.xml and userdefault.xml. Relationship between the files: the "user" files override the defaults, i.e. custom.xml and default.xml are overridden by usercustom.xml and userdefault.xml, and all configuration is saved in usercustom.xml and userdefault.xml. Copy the original custom.xml and default.xml files and use them as templates for usercustom.xml and userdefault.xml, as sketched below.
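A minimal sketch of that workflow, using the chains directory named on this slide:
cd /etc/vmware/firewall/chains
cp custom.xml usercustom.xml     # keep the original as a template ...
cp default.xml userdefault.xml   # ... and put all future edits in the user* files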
Cluster HA, FT, DRS & DPM
Availability Feature Summary. HA and DRS Cluster Limitations. High Availability (HA) Diagnostic and Reliability Improvements. FT Enhancements. vMotion Enhancements: Performance, Usability, Enhanced Feature Compatibility. VM-Host Affinity (DRS). DPM Enhancements. Data Recovery Enhancements.
DRS: more HA-awareness. vSphere 4.1 adds logic to prevent an imbalance that may not be good from an HA point of view. Example: 20 small VMs and 2 very large VMs on 2 ESXi hosts, where the 2 large VMs have roughly the same combined workload as the 20 small VMs. vSphere 4.0 may put the 20 small VMs on Host A and the 2 very large VMs on Host B; from an HA point of view, this creates a risk if Host A fails. vSphere 4.1 will try to balance the number of VMs.
HA and DRS Cluster Improvements. Increased cluster limits:
Increased limits for VMs/host and VMs/cluster
Cluster limits for HA and DRS:
32 hosts/cluster
320 VMs/host (regardless of # of hosts/cluster)
3000 VMs/cluster
Note that these limits also apply to post-failover scenarios. Be sure that these limits will not be violated even after the maximum configured number of host failovers.
Example: a 5-host cluster. Does it support 320x5 VMs per cluster? No.
If the cluster must tolerate 1 host failure, it can only support 320x4 VMs.
If the cluster must tolerate 2 host failures, it can only support 320x3 VMs.
HA Diagnostic and Reliability Improvements. HA Healthcheck Status. Improved HA-DRS interoperability during HA failover.
HA Operational Status Just another example 
HA: Application Awareness. Application Monitoring can restart a VM if the heartbeats for an application it is running are not received. Exposes APIs for 3rd-party app developers. Application Monitoring works much the same way as VM Monitoring: if the heartbeats for an application are not received for a specified time via VMware Tools, its VM is restarted. ESXi 4.0 / ESXi 4.1
Fault Tolerance
FT Enhancements. DRS: FT fully integrated with DRS. Versioning control lifts the requirement on ESX build consistency. Events for the Primary VM vs. the Secondary VM are differentiated. FT Primary VM / FT Secondary VM / Resource Pool.
No data-loss Guarantee. vLockStep: 1 CPU step behind. Primary/backup approach: a common approach to implementing fault-tolerant servers is the primary/backup approach, where the execution of a primary server is replicated by a backup server. Given that the primary and backup servers execute identically, the backup server can take over serving client requests without any interruption or loss of state if the primary server fails.
New versioning feature. FT now has a version number to determine compatibility. The restriction to have identical ESX build numbers has been lifted; FT now checks its own version number to determine compatibility. Future versions might be compatible with older ones, but possibly not vice versa. Additional information in the vSphere Client: the FT version is displayed in the host Summary tab, along with the number of FT-enabled VMs. For hosts prior to ESX/ESXi 4.1, this tab lists the host build number instead. FT versions are included in vm-support output:
/etc/vmware/ft-vmk-version:
product-version = 4.1.0
build = 235786
ft-version = 2.0.0
FT logging improvements. FT traffic was bottlenecked at 2 Gbit/s even on 10 Gbit/s pNICs. Improved by implementing a ZeroCopy feature for FT traffic, for sending (Tx) only: instead of copying from the FT buffer into the pNIC/socket buffer, just a link to the memory holding the data is transferred, and the driver accesses the data directly; no copy needed.
FT: unsupported vSphere features. Snapshots: snapshots must be removed or committed before FT can be enabled on a VM, and it is not possible to take snapshots of VMs on which FT is enabled. Storage vMotion: cannot invoke Storage vMotion for an FT VM; to migrate the storage, temporarily turn off FT, do Storage vMotion, then turn FT back on. Linked clones: cannot enable FT on a VM that is a linked clone, nor create a linked clone from an FT-enabled VM. Backup: cannot back up an FT VM using VCB, vStorage APIs for Data Protection, VMware Data Recovery or similar backup products that require the use of a VM snapshot as performed by ESXi; to back up a VM in this manner, first disable FT, then re-enable FT after the backup is done. Storage array-based snapshots do not affect FT. Thin provisioning, NPIV, IPv6, etc.
FT: performance sample. MS Exchange 2007: 1 core handles 2,000 users with the Heavy Online profile. VM CPU utilisation is only 45%; ESX is only 8%. Based on the previous-generation Xeon 5500 (not 5600) and vSphere 4.0 (not 4.1). Opportunity: higher uptime for the customer's email system.
Integration with HA. Improved FT host management: move host out of vCenter; DRS able to vMotion FT VMs. A warning is given if HA gets disabled, and the following operations will then be disabled: Turn on FT, Enable FT, Power on an FT VM, Test failover, Test secondary restart.
VM-to-Host Affinity
Background. Different servers in a datacenter are a common scenario; they differ by memory size, CPU generation, or the number or type of pNICs. Best practice up to now: separate different hosts into different clusters. Workarounds: creating affinity/anti-affinity rules, or pinning a VM to a single host by disabling DRS on the VM. Disadvantage: too expensive, as each cluster needed to have its own HA failover capacity. New feature: DRS Groups. Host and VM groups organize ESX hosts and VMs into groups with similar memory, similar usage profile, and so on.
VM-Host Affinity (DRS). Required rules and preferential rules. Rule enforcement, 2 options: Required: DRS/HA will never violate the rule.
Preferential: DRS/HA will violate the rule if necessary for failover or for maintaining availability.
Soft Rules Soft Rules DRS will follow a soft rule if possible Will allow actions  User-initiated DRS-mandatory HA actions Rules are applied as long as their application does not impact satisfying current VM cpu or memory demand DRS will report a warning if the rule isn’t followed DRS does not produce a move recommendation to follow the rule Soft VM/host affinity rules are treated by DRS as "reasonable effort"
Grouping Hosts with different capabilities DRS Groups Manager Defines Groups VM groups Host groups
Managing ISV Licensing. Example: a customer has a 4-node cluster; Oracle DB and Oracle BEA are charged for every host that can run them. vSphere 4.1 introduces "hard partitioning"; both DRS and HA will honour this boundary. Rest of VMs / Oracle DB / Oracle BEA / DMZ VM / DMZ LAN / Production LAN.
Managing ISV Licensing. Hard partitioning: if a host is in a VM-Host "must" affinity rule, those hosts are considered compatible hosts and all the others are tagged as incompatible hosts. DRS, DPM and HA are unable to place the VMs on incompatible hosts. Due to the incompatible-host designation, the mandatory VM-Host rule is a feature that can (undeniably) be described as hard partitioning: you cannot place and run a VM on an incompatible host. Oracle has not acknowledged this as hard partitioning. Sources: http://frankdenneman.nl/2010/07/vm-to-hosts-affinity-rule/ http://www.latogalabs.com/2010/07/vsphere-41-hidden-gem-host-affinity-rules/
Example of setting-up: Step 1 In this example, we are adding the “WinXPsp3” VM to the group. The group name is “Desktop VMs”
Example of setting-up: Step 2. Just like we can group VMs, we can also group ESX hosts.
Example of setting-up: Step 3. We have grouped the VMs in the cluster into 2 groups. We have grouped the ESX hosts in the cluster into 2 groups.
Example of setting-up: Step 4. This is the screen where we do the mapping: a VM Group is mapped to a Host Group.
Example of setting-up: Step 5 Mapping is done. The Cluster Settings dialog box now display the new rules type.
HA/ DRS DRS lists rules Switch on or off Expand to display DRS Groups  Rule details Rule policy Involved Groups
Enhancement for Anti-affinity rules. A rule can now contain more than 2 VMs; each rule can have a number of VMs. Keep them all together, or separate them across the cluster. For each VM at least 1 host is needed.
DPM Enhancements. Scheduling DPM: turning DPM on/off is now a scheduled task, so DPM can be turned off prior to business hours in anticipation of higher resource demands. Disabling DPM brings hosts out of standby. This eliminates the risk of ESX hosts being stuck in standby mode while DPM is disabled, and ensures that when DPM is disabled, all hosts are powered on and ready to accommodate load increases.
DPM Enhancements
vMotion
vMotion Enhancements. Significantly decreased overall migration time (time will vary depending on workload). Increased number of concurrent vMotions: per ESX host, 4 on a 1 Gbps network and 8 on a 10 Gbps network; per datastore, 128 (both VMFS and NFS). Maintenance mode evacuation time is greatly decreased due to the above improvements.
vMotion. Re-write of the previous vMotion code. Sends memory pages bundled together instead of one after the other, for less network/TCP/IP overhead. The destination pre-allocates memory pages. Multiple senders/receivers: no longer only a single world responsible for each vMotion, so the limit is based on host CPU. Sends a list of changed pages instead of bitmaps. Performance improvement: throughput improved significantly for a single vMotion (ESX 3.5 ~1.0 Gbps, ESX 4.0 ~2.6 Gbps, ESX 4.1 max 8 Gbps); elapsed time reduced by 50%+ in 10GigE tests. Mixing pNICs of different bandwidths is not supported.
vMotion Aggressive Resume Destination VM resumes earlier Only workload memory pages have been received Remaining pages transferred in background Disk-Backed Operation Source host creates a circular buffer file on shared storage Destination opens this file and reads out of it Works only on VMFS storage In case of network failure during transfer vMotion falls back to disk based transfer Works together with aggressive resume feature above
Enhanced vMotion Compatibility Improvements Preparation for AMD Next Generation without 3DNow! Future AMD CPUs may not support 3DNow! To prevent vMotion incompatibilities, a new EVC mode is introduced.
EVC Improvements. Better handling of powered-on VMs: vCenter Server now uses a live VM's CPU feature set to determine whether it can be migrated into an EVC cluster; previously, it relied on the host's CPU features. A VM could run with a different vCPU feature set than the host it runs on, e.g. if it was initially started on an older ESX host and vMotioned to the current one. In that case the VM is compatible with an older CPU and could possibly be migrated into the EVC cluster even if the ESX host the VM runs on is not compatible.
Enhanced vMotion Compatibility Improvements. Usability improvements. VM's EVC capability: the VMs tab for hosts and clusters now displays the EVC mode corresponding to the features used by VMs. VM Summary: the Summary tab for a VM lists the EVC mode corresponding to the features used by the VM.
EVC (3/3). Earlier Add-Host error detection: host-specific incompatibilities are now displayed prior to the Add-Host workflow when adding a host into an EVC cluster. Until now this error occurred after the administrator had completed all the steps; now the warning comes earlier.
Licensing: Host Affinity, Multi-core VM, Licence Reporting Manager
Multi-core CPU inside a VM Click this
Multi-core CPU inside a VM. 2-core, 4-core, 8-core; no 3-core, 5-core, 6-core, etc. Type this manually.
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio Web
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
The Future of Software Development - Devin AI Innovative Approach.pdf
The Future of Software Development - Devin AI Innovative Approach.pdfThe Future of Software Development - Devin AI Innovative Approach.pdf
The Future of Software Development - Devin AI Innovative Approach.pdf
 

VMware vSphere 4.1 deep dive - part 1

  • 19. Additional Deployment Option Scripted Installation Numerous choices for installation Installer booted from CD-ROM (default) Preboot Execution Environment (PXE) ESXi Installation image on CD-ROM (default), HTTP/S, FTP, NFS Script can be stored and accessed Within the ESXi Installer ramdisk On the installation CD-ROM HTTP / HTTPS, FTP, NFS Config script (“ks.cfg”) can include Preinstall Postinstall First boot Cannot use scripted installation to install to a USB device
  • 20. PXE Boot Requirements PXE-capable NIC. DHCP server (IPv4); use an existing one. Media depot + TFTP server + gPXE. A server hosting the entire content of the ESXi media. Protocol: HTTP/HTTPS, FTP, or NFS server. OS: Windows/Linux server. Info: we recommend the method that uses gPXE; if not, you might experience issues while booting the ESXi installer on a heavily loaded network. TFTP is a lightweight version of the FTP service, typically used only for network booting systems or loading firmware on network devices such as routers.
  • 21. PXE boot PXE uses DHCP and Trivial File Transfer Protocol (TFTP) to bootstrap an OS over the network. How it works: a host makes a DHCP request to configure its NIC, then downloads and executes a kernel and support files. PXE booting the installer provides only the first step to installing ESXi; to complete the installation, you must provide the contents of the ESXi DVD. Once the ESXi installer is booted, it works like a DVD-based installation, except that the location of the ESXi installation media must be specified.
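To hand the booted installer a script, a kickstart location is typically appended to the installer boot line. A minimal sketch, assuming the script is served over HTTP (server address and path are placeholders):

ks=http://10.20.118.5/kickstart/ks.cfg

The same option can be placed on the kernel append line of a PXELINUX/gPXE configuration so the scripted install runs unattended; verify the exact boot-option syntax against the vSphere 4.1 installation guide for your setup.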
  • 23. Sample ks.cfg file
# Accept the EULA (End User Licence Agreement)
vmaccepteula
# Set the root password to vmware123
rootpw vmware123
# Install the ESXi image from CDROM
install cdrom
# Auto partition the first disk; if a VMFS exists it will be overwritten
autopart --firstdisk --overwritevmfs
# Create a partition called Foobar on the disk identified by mpx.vmhba1:C0:T1:L0,
# growing to a maximum size of 4000
partition Foobar --ondisk=mpx.vmhba1:C0:T1:L0 --grow --maxsize=4000
# Set up the management network on vmnic0 using DHCP
network --bootproto=dhcp --device=vmnic0 --addvmportgroup=0
%firstboot --level=90.1 --unsupported --interpreter=busybox
# On this first boot, save the current date to a temporary file
date > /tmp/foo
# Mount an NFS share and put it at /vmfs/volumes/www
esxcfg-nas --add --host 10.20.118.5 --share /var/www www
  • 24. Full Support of Tech Support Mode There you go  2 types Remote: SSH Local: Direct Console
  • 25. Full Support of Tech Support Mode Enter to toggle. That’s it! Disable/Enable Timeout automatically disables TSM (local and remote) Running sessions are not terminated. All commands issued in Tech Support Mode are sent to syslog
  • 26. Full Support of Tech Support Mode Recommended uses: support, troubleshooting, and break-fix; scripted deployment preinstall, postinstall, and first-boot scripts. Discouraged uses: any other scripts; running commands/scripts periodically (cron jobs); leaving it open for routine access or a permanent SSH connection. The admin will be notified when it is active.
  • 27. Full Support of Tech Support Mode We can also enable it via GUI Can enable in vCenter or DCUI Enable/Disable
  • 28. Security Banner A message that is displayed on the direct console Welcome screen.
  • 30.
  • 31. Additional commands in Tech Support Mode vscsiStats is now available in the console. Output is raw data for a histogram; use a spreadsheet to plot it. Some use cases: identify whether IOs are sequential or random, optimize for IO sizes, check for disk misalignment, look at storage latency in more detail.
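A rough idea of a vscsiStats session from Tech Support Mode (the world group ID below is illustrative; use the IDs the tool reports):

# List the VMs (world group IDs) available for monitoring
vscsiStats -l
# Start collecting statistics for one VM's world group ID
vscsiStats -s -w 118739
# Print the I/O length histogram gathered so far (other types include seekDistance and latency)
vscsiStats -p ioLength
# Stop collection when done
vscsiStats -x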
  • 32. Additional commands in Tech Support Mode Additional commands for troubleshooting nc (netcat) http://en.wikipedia.org/wiki/Netcat tcpdump-uw http://en.wikipedia.org/wiki/Tcpdump
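For example (interface name, target address and port are placeholders, and flag support in the bundled netcat build should be verified on the host):

# Capture traffic on the management VMkernel interface
tcpdump-uw -i vmk0
# Check whether an iSCSI target's port is reachable from the host
nc -z 10.20.118.5 3260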
  • 33. More ESXi Services listed More services are now shown in GUI. Ease of control For example, if SSH is not running, you can turn it on from GUI. ESXi 4.0 ESXi 4.1
  • 34.
  • 35. ESXi access paths during normal operations: remote access via vCLI, vCenter, and the vSphere APIs; local access via the DCUI (fix misconfigurations / restart management agents); TSM is reserved for advanced troubleshooting (with GSS).
  • 36. Common Enhancements for both ESX and ESXi 64-bit User World: running VMs with very large memory footprints implies that we need a large address space for the VMX. 32-bit user worlds (VMX32) do not have sufficient address space for VMs with large memory; 64-bit user worlds overcome this limitation. NFS: the number of NFS volumes supported is increased from 8 to 64. Fibre Channel: end-to-end support for 8 Gb (HBA, switch & array). VMFS: version changed to 3.46. No customer-visible changes; changes relate to algorithms in the vmfs3 driver to handle the new vStorage APIs for Array Integration (VAAI).
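As a sketch of raising the NFS mount limit from the console: the option name (NFS.MaxVolumes) carries over from 4.0, and related TCP/IP heap options may also need adjusting on some hosts, so verify against the 4.1 configuration maximums.

# Show the current limit, then raise it to the new 4.1 maximum of 64
esxcfg-advcfg -g /NFS/MaxVolumes
esxcfg-advcfg -s 64 /NFS/MaxVolumes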
  • 37. Common Enhancements for both ESX and ESXi VMkernel TCP/IP Stack Upgrade Upgraded to a version based on BSD 7.1. Result: improved FT logging, vMotion and NFS client performance. Pluggable Storage Architecture (PSA) New naming convention. New filter plugins to support VAAI (vStorage APIs for Array Integration). New PSPs (Path Selection Policies) for ALUA arrays. New PSP from DELL for the EqualLogic arrays.
  • 38. USB pass-through New Features for both ESX/ESXi
  • 39. USB Devices 2 steps: Add USB Controller Add USB Devices
  • 40. USB Devices Only devices listed in the manual are supported. Mostly ISV licence dongles and a few external USB drives. Limited list of devices for now.
  • 41.
  • 42.
  • 43. USB Devices Up to 20 devices per VM. Up to 20 devices per ESX host. A device can only be owned by 1 VM at a given time; no sharing. Supported: vMotion (communication via the management network), DRS. Unsupported: DPM (DPM is not aware of the device and may turn the host off, which may cause loss of data, so disable DRS for this VM so it stays on this host only), Fault Tolerance. Design consideration: take note of the situation when the ESX host is not available (planned or unplanned downtime).
  • 44. MS AD integration New Features for both ESX/ESXi
  • 45. AD Service Provides authentication for all local services vSphere Client Other access based on vSphere API DCUI Tech Support Mode (local and remote) Has nominal AD groups functionality Members of “ESX Admins” AD group have Administrative privilege Administrative privilege includes: Full Administrative role in vSphere Client and vSphere API clients DCUI access Tech Support Mode access (local and remote)
  • 46. The Likewise Agent ESX uses an agent from Likewise to connect to MS AD and to authenticate users with their domain credentials. The agent integrates with the VMkernel to implement the mapping for applications such as the logon process (/bin/login) which uses a pluggable authentication module (PAM). As such, the agent acts as an LDAP client for authorization (join domain) and as a Kerberos client for authentication (verify users). The vMA appliance also uses an agent from Likewise. ESX and vMA use different versions of the Likewise agent to connect to the Domain Controller. ESX uses version 5.3 whereas vMA uses version 5.1. 49
  • 48. Joining AD: Step 2 1. Select “AD” 2. Click “Join Domain” 3. Join the domain. Full name. @123.com
  • 49. AD Service A third method for joining ESX/ESXi hosts and enabling Authentication Services to utilize AD is to configure it through Host Profiles
  • 50.
  • 51. netlogond is the Likewise Site Affinity service - detects optimal AD domain controller, global catalogue and data caches. Launched from /etc/init.d/netlogond script.
  • 52. lsassd is the Likewise Identity & Authentication service. It does authentication, caching and idmap lookups. This daemon depends on the other two daemons running. Launched from the /etc/init.d/lsassd script. Example ps output:
root 18015 1 0 Dec08 ? 00:00:00 /sbin/lsassd --start-as-daemon
root 31944 1 0 Dec08 ? 00:00:00 /sbin/lwiod --start-as-daemon
root 31982 1 0 Dec08 ? 00:00:02 /sbin/netlogond --start-as-daemon
  • 53. ESX Firewall Requirements for AD Certain ports in SC are automatically opened in the Firewall Configuration to facilitate AD. Not applicable to ESXi Before After
  • 54. Time Sync Requirement for AD Time must be in sync between the ESX/ESXi server and the AD server. For the Likewise agent to communicate over Kerberos with the domain controller, the clock of the client must be within the domain controller's maximum clock skew, which is 300 seconds, or 5 minutes, by default. The recommendation would be that they share the same NTP server.
  • 55. vSphere Client Now when assigning permissions to users/groups, the list of users and groups managed by AD can be browsed by selecting the Domain.
  • 56. Info in AD The host should also be visible on the Domain Controller in the AD Computers objects listing. Looking at the ESX Computer Properties shows a Name of RHEL (as it is the Service Console on the ESX) and a Service Pack of ‘Likewise Identity 5.3.0’.
  • 57. Memory Compression New Features for both ESX/ESXi
  • 58. Memory Compression The VMkernel implements a per-VM compression cache to store compressed guest pages. When a guest page (4 KB page) needs to be swapped, the VMkernel will first try to compress the page. If the page can be compressed to 2 KB or less, the page is stored in the per-VM compression cache. Otherwise, the page is swapped out to disk. If a compressed page is accessed again by the guest, the page is decompressed online.
  • 59. Changing the value of cache size
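The cache ceiling can presumably also be set from the console; the option name below (Mem.MemZipMaxPct, a percentage of the VM's memory, default 10) is taken from the 4.1 memory-management paper and should be verified on your build:

# Show the current compression cache limit
esxcfg-advcfg -g /Mem/MemZipMaxPct
# Raise the limit to 15% of guest memory
esxcfg-advcfg -s 15 /Mem/MemZipMaxPct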
  • 60. Virtual Machine Memory Compression Virtual Machine -> Resource Allocation Per-VM statistic showing compressed memory
  • 61. Monitoring Compression 3 new counters introduced to monitor Host level, not VM level.
  • 63. Power consumption chart Per ESX, not per cluster. Needs hardware integration; different HW makers provide different info.
  • 64. Performance Graphs – Power Consumption We can now track the power consumption of VMs in real time. Enabled through Software Settings -> Advanced Settings -> Power -> Power.ChargeVMs
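If the vSphere Client dialog is not convenient, the same experimental flag can likely be toggled from the console; the option path and the 0/1 values here are assumptions to verify against your build:

# Enable experimental per-VM power accounting (0 = off, 1 = on; assumed values)
esxcfg-advcfg -s 1 /Power/ChargeVMs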
  • 65. Host power consumption In some situation, may need to edit /usr/share/sensors/vmware to get support for the host Different HW makers have different API. VM power consumption Experimental. Off by default
  • 66. ESX Features only for ESX (not ESXi)
  • 67. ESX: Service Console firewall Changes in ESX 4.1 ESX 4.1 introduces these additional configuration files located in /etc/vmware/firewall/chains: usercustom.xml userdefault.xml Relationship between the 2 files “user” overwrites. The default files custom.xml and default.xml are overridden by usercustom.xml and userdefault.xml. All configuration is saved in usercustom.xml and userdefault.xml. Copy the original custom.xml and default.xml files. Use them as a template for usercustom.xml and userdefault.xml.
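A minimal sketch of the template step, assuming the original custom.xml and default.xml sit in the same chains directory (verify the source paths on your host before copying):

# Seed the user-editable files from the shipped definitions
cp /etc/vmware/firewall/chains/custom.xml /etc/vmware/firewall/chains/usercustom.xml
cp /etc/vmware/firewall/chains/default.xml /etc/vmware/firewall/chains/userdefault.xml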
  • 68. Cluster HA, FT, DRS & DPM
  • 69. Availability Feature Summary HA and DRS Cluster Limitations High Availability (HA) Diagnostic and Reliability Improvements FT Enhancements vMotionEnhancements Performance Usability Enhanced Feature Compatibility VM-host Affinity (DRS) DPM Enhancements Data Recovery Enhancements
  • 70. DRS: more HA-awareness vSphere 4.1 adds logic to prevent imbalance that may not be good from an HA point of view. Example: 20 small VMs and 2 very large VMs on 2 ESXi hosts, where the 2 large VMs collectively have the same workload as the 20 small ones. vSphere 4.0 may put the 20 small VMs on Host A and the 2 very large VMs on Host B. From an HA point of view, this may result in risks when Host A fails. vSphere 4.1 will try to balance the number of VMs.
  • 71.
  • 72. Increased limits for VMs/host and VMs/cluster
  • 73. Cluster limits for HA and DRS:
  • 75. 320 VMs/host (regardless of # of hosts/cluster)
  • 77.
  • 79.
  • 80. Cluster can only support 320x3 VMs
  • 81.
  • 82. HA Operational Status Just another example 
  • 83. HA: Application Awareness Application Monitoring can restart a VM if the heartbeats for an application it is running are not received Expose APIs for 3rd party app developers Application Monitoring works much the same way that VM Monitoring: If the heartbeats for an application are not received for a specified time via VMware Tools, its VM is restarted. ESXi 4.0 ESXi 4.1
  • 85.
  • 86. No data-loss Guarantee vLockStep: 1 CPU step behind Primary/backup approach A common approach to implementing fault-tolerant servers is the primary/backup approach. The execution of a primary server is replicated by a backup server. Given that the primary and backup servers execute identically, the backup server can take over serving client requests without any interruption or loss of state if the primary server fails
  • 87. New versioning feature FT now has a version number to determine compatibility. The restriction to have identical ESX build # has been lifted; FT now checks its own version number to determine compatibility. Future versions might be compatible with older ones, but possibly not vice-versa. Additional information in the vSphere Client: the FT version is displayed in the host summary tab, along with the # of FT-enabled VMs. For hosts prior to ESX/ESXi 4.1, this tab lists the host build number instead. FT versions are included in vm-support output:
/etc/vmware/ft-vmk-version:
product-version = 4.1.0
build = 235786
ft-version = 2.0.0
  • 88. FT logging improvements FT traffic was bottlenecked to 2 Gbit/s even on 10 Gbit/s pNICs Improved by implementing ZeroCopy feature for FT traffic Tx, too For sending only (Tx) Instead of copying from FT buffer into pNIC/socket buffer just a link to the memory holding the data is transferred Driver accesses data directly- no copy needed
  • 89. FT: unsupported vSphere features Snapshots. Snapshots must be removed or committed before FT can be enabled on a VM. It is not possible to take snapshots of VMs on which FT is enabled. Storage vMotion. Cannot invoke Storage vMotion for FT VM. To migrate the storage, temporarily turn off FT, do Storage vMotion, then turn on FT. Linked clones. Cannot enable FT on a VM that is a linked clone, nor can you create a linked clone from an FT-enabled VM. Back up. Cannot back up an FT VM using VCB, vStorage API for Data Protection, VMware Data Recovery or similar backup products that require the use of a VM snapshot, as performed by ESXi. To back up VM in this manner, first disable FT, then re-enable FT after backup is done. Storage array-based snapshots do not affect FT. Thin Provisioning, NPIV, IPv6, etc
  • 90. FT: performance sample MS Exchange 2007: 1 core handles 2000 Heavy Online user profiles. VM CPU utilisation is only 45%; ESX is only 8%. Based on the previous “generation” Xeon 5500, not 5600, and vSphere 4.0, not 4.1. Opportunity: higher uptime for the customer email system.
  • 91. Integration with HA Improved FT host management Move host out of vCenter DRS able to vMotion FT VMs Warning if HA gets disabled and following operations will be disabled Turn on FT Enable FT Power on a FT VM Test failover Test secondary restart
  • 93. Background Different servers in a datacenter are a common scenario: differences in memory size, CPU generation, or # and type of pNICs. Best practice up to now: separate different hosts into different clusters. Workarounds: creating affinity/anti-affinity rules, or pinning a VM to a single host by disabling DRS on the VM. Disadvantage: too expensive, as each cluster needed to have HA failover capacity. New feature: DRS Groups. Host and VM groups organize ESX hosts and VMs into groups: similar memory, similar usage profile, …
  • 94.
  • 95.
  • 96. Soft Rules Soft Rules DRS will follow a soft rule if possible Will allow actions User-initiated DRS-mandatory HA actions Rules are applied as long as their application does not impact satisfying current VM cpu or memory demand DRS will report a warning if the rule isn’t followed DRS does not produce a move recommendation to follow the rule Soft VM/host affinity rules are treated by DRS as "reasonable effort"
  • 97. Grouping Hosts with different capabilities DRS Groups Manager Defines Groups VM groups Host groups
  • 98. Managing ISV Licensing Example: customer has a 4-node cluster. Oracle DB and Oracle BEA are charged for every host that can run them. vSphere 4.1 introduces “hard partitioning”; both DRS and HA will honour this boundary. (Diagram labels: Rest of VMs, Oracle DB, DMZ VM, Oracle BEA, DMZ LAN, Production LAN)
  • 99. Managing ISV Licensing Hard partitioning: if a host is in a VM-host “must” affinity rule, it is considered a compatible host, and all the others are tagged as incompatible hosts. DRS, DPM and HA are unable to place the VMs on incompatible hosts. Due to the incompatible host designation, the mandatory VM-Host affinity rule is a feature that can be (undeniably) described as hard partitioning: you cannot place and run a VM on an incompatible host. Oracle has not acknowledged this as hard partitioning. Sources: http://frankdenneman.nl/2010/07/vm-to-hosts-affinity-rule/ http://www.latogalabs.com/2010/07/vsphere-41-hidden-gem-host-affinity-rules/
  • 100. Example of setting-up: Step 1 In this example, we are adding the “WinXPsp3” VM to the group. The group name is “Desktop VMs”
  • 101. Example of setting-up: Step 2 Just like we can group VM, we can also group ESX
  • 102. Example of setting-up: Step 3 We have grouped the VMs in the cluster into 2 We have grouped the ESX in the cluster into 2
  • 103. Example of setting-up: Step 4 This is the screen where we do the mapping. VM Group mapped to Host Group
  • 104. Example of setting-up: Step 5 Mapping is done. The Cluster Settings dialog box now display the new rules type.
  • 105. HA/ DRS DRS lists rules Switch on or off Expand to display DRS Groups Rule details Rule policy Involved Groups
  • 106.
  • 107. Enhancement for Anti-affinity rules Now more than 2 VMs in a rule. Each rule can contain several VMs: keep them all together, or separate them across the cluster. For separation, at least 1 host is needed for each VM.
  • 108. DPM Enhancements Scheduling DPM Turning on/off DPM is now a scheduled task DPM can be turned off prior to business hours in anticipation for higher resource demands Disabling DPM It brings hosts out of standby Eliminates risk of ESX hosts being stuck in standby mode while DPM is disabled. Ensures that when DPM is disabled, all hosts are powered on and ready to accommodate load increases.
  • 111. vMotionEnhancements Significantly decreased the overall migration time (time will vary depending on workload) Increased number of concurrent vMotions: ESX host: 4 on a 1 Gbps network and 8 on a 10 Gbps network Datastore: 128 (both VMFS and NFS) Maintenance mode evacuation time is greatly decreased due to above improvements
  • 112. vMotion Re-write of the previous vMotion code Sends memory pages bundled together instead of one after the other Less network/ TCP/IP overhead Destination pre-allocates memory pages Multiple senders/ receivers Not only a single world responsible for each vMotion thus limit based on host CPU Sends list of changed pages instead of bitmaps Performance improvement Throughput improved significantly for single vMotion ESX 3.5 – ~1.0Gbps ESX 4.0 – ~2.6Gbps ESX 4.1 – max 8 Gbps Elapsed reduced by 50%+ on 10GigE tests. Mix of different bandwidth pNICs not supported
  • 113. vMotion Aggressive Resume Destination VM resumes earlier Only workload memory pages have been received Remaining pages transferred in background Disk-Backed Operation Source host creates a circular buffer file on shared storage Destination opens this file and reads out of it Works only on VMFS storage In case of network failure during transfer vMotion falls back to disk based transfer Works together with aggressive resume feature above
  • 114. Enhanced vMotion Compatibility Improvements Preparation for AMD Next Generation without 3DNow! Future AMD CPUs may not support 3DNow! To prevent vMotion incompatibilities, a new EVC mode is introduced.
  • 115. EVC Improvements Better handling of powered-on VMs vCenter server now uses a live VM's CPU feature set to determine if it can be migrated into an EVC cluster Previously, it relied on the host's CPU features A VM could run with a different vCPU than the host it runs on I.e. if it was initially started on an older ESX host and vMotioned to the current one So the VM is compatible to an older CPU and could possibly be migrated to the EVC cluster even if the ESX hosts the VM runs on is not compatible
  • 116. Enhanced vMotionCompatibility Improvements Usability Improvements VM's EVC capability: The VMs tab for hosts and clusters now displays the EVC mode corresponding to the features used by VMs. VM Summary: The Summary tab for a VM lists the EVC mode corresponding to the features used by the VM.
  • 117. EVC (3/3) Earlier Add-Host Error detection Host-specific incompatibilities are now displayed prior to the Add-Host work-flow when adding a host into an EVC cluster Up to now this error occurred after all needed steps were done by the administrator Now it’ll warn earlier
  • 118. Licensing Host-Affinity, Multi-core VM, Licence Reporting Manager
  • 119. Multi-core CPU inside a VM Click this
  • 120. Multi-core CPU inside a VM 2-core, 4-core, 8 core. No 3-core, 5 core, 6 core, etc Type this manually
  • 121. Multi-core CPU inside a VM How to enable (per VM, not batch): Turn off the VM; this cannot be done online. Click Configuration Parameters. Click Add Row and type cpuid.coresPerSocket in the Name column. Type a value (2, 4, or 8) in the Value column. The number of virtual CPUs must be divisible by the number of cores per socket, and the coresPerSocket setting must be a power of two. Note: if enabled, CPU Hot Add is disabled.
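For illustration, the resulting configuration parameters for a 4-vCPU VM presented as two dual-core sockets might look like this (numvcpus is the standard vCPU-count entry; treat the exact pairing as an example only):

numvcpus = "4"
cpuid.coresPerSocket = "2"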
  • 122. Multi-core CPU inside a VM Once enabled, it is not readily shown to the administrator; the VM listing in the vSphere Client does not show cores. It is possible to write scripts that iterate per VM. Sample in-guest tools: CPU-Z, MS Sysinternals.
  • 123. Customers Can Self-Enforce Per VM License Compliance When customers use more than they bought, vCenter raises an alert, but they will still be able to manage additional VMs, so overuse is possible. Customers are responsible for purchasing additional licenses and any back-SNS, so Support & Subscription must be backdated. This is consistent with current vSphere pricing.
  • 124. Thank You I’m sure you are tired too 
  • 125. Useful references http://vsphere-land.com/news/tidbits-on-the-new-vsphere-41-release.html http://www.petri.co.il/virtualization.htm http://www.petri.co.il/vmware-esxi4-console-secret-commands.htm http://www.petri.co.il/vmware-data-recovery-backup-and-restore.htm http://www.delltechcenter.com/page/VMware+Tech http://www.kendrickcoleman.com/index.php?/Tech-Blog/vm-advanced-iso-free-tools-for-advanced-tasks.html http://www.ntpro.nl/blog/archives/1461-Storage-Protocol-Choices-Storage-Best-Practices-for-vSphere.html http://www.virtuallyghetto.com/2010/07/script-automate-vaai-configurations-in.html http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1516821,00.html http://vmware-land.com/esxcfg-help.html http://virtualizationreview.com/blogs/everyday-virtualization/2010/07/esxi-hosts-ad-integrated-security-gotcha.aspx http://www.MS.com/licensing/about-licensing/client-access-license.aspx#tab=2 http://www.MSvolumelicensing.com/userights/ProductPage.aspx?pid=348 http://www.virtuallyghetto.com/2010/07/vsphere-41-is-gift-that-keeps-on-giving.html
  • 126. vSphere Guest API It provides functions that management agents and other software can use to collect data about the state and performance of a VM. The API provides fast access to resource management information, without the need for authentication. The Guest API provides read‐only access. You can read data using the API, but you cannot send control commands. To issue control commands, use the vSphere Web Services SDK. Some information that you can retrieve through the API: Amount of memory reserved for the VM. Amount of memory being used by the VM. Upper limit of memory available to the VM. Number of memory shares assigned to the VM. Maximum speed to which the VM’s CPU is limited. Reserved rate at which the VM is allowed to execute. An idling VM might consume CPU cycles at a much lower rate. Number of CPU shares assigned to the VM. Elapsed time since the VM was last powered on or reset. CPU time consumed by a particular VM. When combined with other measurements, you can estimate how fast the VM’s CPUs are running compared to the host CPUs

Editor's notes

  1. Isn’t cluster supported in 4.0.1? Compared the 2 manuals closely. Design here can mean better design, or you can fix/propose things that you can’t before, or give you more options to take on larger or more complex design. Cost here can mean lower Product cost, Services cost (e.g. reduce effort from partner) or less effort (if internal IT is doing it). Scalability means you can do more, like do more VM per ESX. Performance means can do the same thing but faster. For example, backing up a VM is faster. Memory Compression reduces cost: more VM per ESX means less ESX host, or smaller RAM expense. Scripted install improves security as it reduces risk of variance among installations. ESXi SAN boot improves security as ESXi config are not stored in a hundred places. vSphere 4.1 introduces an FT-specific versioning-control mechanism that allows the Primary and Secondary VMs to run on FT-compatible hosts at different but compatible patch levels. vSphere 4.1 differentiates between events that are logged for a Primary VM and those that are logged for its Secondary VM, and reports why a host might not support FT. In addition, you can disable VMware HA when FT-enabled VMs are deployed in a cluster, allowing for cluster maintenance operations without turning off FT. Compare with 4.0: The VMware HA dashboard in the vSphere Client provides a new detailed window called Cluster Operational Status. This window displays more information about the current VMware HA operational status, including the specific status and errors for each host in the VMware HA cluster.
  2. Hyper-V import: without it, it will be more complex and may require longer down time.ESX 4.1 takes advantage of deep sleep states to further reduce power consumption during idle periods. The vSphere Client has a simple user interface that allows you to choose one of four host power management policies. In addition, you can view the history of host power consumption and power cap information on the vSphere Client Performance tab on newer platforms with integrated power meters. Need screenshot and new machine.Faster vMotion improves management as you spend less time waiting for 10 VMs to complete vMotion as you prepare to do hardware maintenance.In some cases, you are given a fixed window to do your maintenance. And you want the 5 or 15 VMs in that host to vmotion as fast as possible.vSphere 4.1 reduces the amount of overhead memory required, especially when running large VMs on systems with CPUs that provide hardware MMU support (AMD RVI or Intel EPT).vSphere 4.1 includes an AMD Opteron Gen. 3 (no 3DNow!™) EVC mode that prepares clusters for vMotion compatibility with future AMD processors. EVC also provides numerous usability improvements, including the display of EVC modes for VMs, more timely error detection, better error messages, and the reduced need to restart VMsVmware Tools now have CLI, which
  3. VMware Data Recovery is actually available in 4.0.1 too, as it’s compatible. VMFS enhancements: minor. Transparent to users. There have been many algorithm changes between v3.33 and 3.46. The VMFS-3.46 driver uses hardware-accelerated locking and hardware-accelerated Storage vMotion, virtual machine provisioning, and cold migrate functions on such hardware. This improved the performance and scalability of workloads that require the above functions. Personally, there are those who are not 100% convinced of the benefit of iSCSI boot, because it mixes storage and network and can make troubleshooting/support complex. VADP: VSS on Win08. NFS performance improvement. Quantified? NFS Performance Enhancements: networking performance for NFS has been optimized to improve throughput and reduce CPU usage.
  4. Nexus is not released yet. vDS: scalability. vNIC enhancements: E1000 vNIC supports jumbo frames.
  5. You can use Host Profiles to roll out administrator password changes in vSphere 4.1. Enhancements also include improved Cisco Nexus 1000V support and PCI device ordering configurationUnattended Authentication in vSphere Management Assistant (vMA). vMA 4.1 offers improved authentication capability, including integration with AD and commands to configure the connectionUpdate Manager 4.1 immediately sends critical notifications about recalled ESX and related patches. In addition, Update Manager prevents you from installing a recalled patch that you might have already downloaded. This feature also helps you identify hosts where recalled patches might already be installed.The License Reporting Manager provides a centralized interface for all license keys for vSphere 4.1 products in a virtual IT infrastructure and their respective usage. You can view and generate reports on license keys and usage for different time periods with the License Reporting Manager. A historical record of the utilization per license key is maintained in the vCenter database
  6. an 8-vCPU SMP VM is considered wide on an Intel Xeon 55xx system because the processor has only four cores per NUMA node
  7. ESXi was released around 2 years ago. Just sharing my experience as SE. In this short period of 2 years, the discussions that I have with customers or partners have progressed, from “what is ESXi” to “why should we use ESXi” to “we are using or planning to use ESXi”. For a platform software, it is doing very well since it needs to build its ecosystem.
  8. We can say that vSphere 4.1 is the release for ESXi. In this release ESXi takes center stage. 4.1 is our strongest message that we are going toward ESXi as the sole hypervisor. A lot of customers, even some of the largest deployment, have decided to go ESXi going forward. If your customers have not, 4.1 is a good opportunity for you offer a migration services or hardware refresh.As SE, we also know that there are some features that we wish we have in the 4.0 release. For example, while the remote CLI helps, none of the Linux command works as the execution context is the VMA OS, not the ESXi kernel. And in some troubleshooting scenario, customers do need to issue linux command. Another thing we can’t do automatic installation and boot from network.
  9. One of the most popular requests among customers is to improve the deployment and management of ESXi.First in the line is boot From SAN is now fully supported in ESXi 4.1. It was as only experimentally supported in ESXi 4.0. Boot from SAN will be supported for FC, iSCSI, and FCoE. For iSCSI and FCoE, it will depend upon hardware qualification, so please check the HCL and Release Notes when vSphere 4.1 is released.Dependent Hardware iSCSI means the card depends on VMware networking, and iSCSI configuration and management interfaces provided by VMware. So properties like IP, MAC, and other parameters used for the iSCSI sessions are configured from VMware GUI/CLI.http://www.vmware.com/resources/compatibility/info.php?deviceCategory=san&mode=san_introductionFor ESXi text installer we have a screen that warns if the user is trying to install image onto an existing data store. It will not prevent user from installing if he/she desires to do so. For scripted install, unless user specifies an override VMFS flag, scripted install will not proceed with installation when a user tries to install on an existing datastore. We will only support a booting of host on a unique LUN. This LUN *cannot be* shared by other hosts. User is expected to set proper LUN masking to avoid this scenario. If the luns were to be shared it could result in data corruption. ----------- copied from 3rd party site iSCSI SW boot: the only currently supported network card is the Broadcom 57711 10GBe NIC. When booting from software iSCSI the boot firmware on the network adapter logs into an iSCSI target. The firmware than saves the network and iSCSI boot parameters in the iBFT which is stored in the host’s memory. Before you can use iBFT you need to configure the boot order in your server’s BIOS so the iBFT NIC is first before all other devices. You than need to configure the iSCSI configuration and CHAP authentication in the BIOS of the NIC before you can use it to boot ESXi from. The ESXi installation media has special iSCSI initialization scripts that use iBFT to connect to the iSCSI target and present it to the BIOS. Once you select the iSCSI target as your boot device the installer copies the boot image to it. Once the media is removed and the host rebooted the iSCSI target is used to boot and the initialization script runs in first boot mode which configures the networking which afterwards is persistent.
  10. Second features we have implemented is more choice during install. We can now do PXE boot, and we can script it too.Scripted Installation, the equivalent of Kickstart, is now available. The installer can boot over the network, and at that point you can also do an interactive installation, or else set it up to do a scripted installation. Both the installed image and the config file (called “ks.cfg”) can be obtained over the network using a variety of protocols. There is also an ability to specify preinstall, postinstall, and first-boot scripts. For example, the postinstall script can configure all the host settings, and the first boot script could join the host to vCenter. These three types of scripts run either in the context of the Tech Support Mode or in Python. The Tech Support Mode shell is a highly stripped down version of bash.You can start the scripted installation with a CD-ROM drive or over the network by using PXE booting. You cannot use scripted installation to install ESXi to a USB device
  11. The media depot is a network-accessible location that contains the ESXi installation media. You can use HTTP/HTTPS, FTP, or NFS to access the depot. The depot must be populated with the entire contents of the ESXi installation DVD, preserving directory structure.If you are performing a scripted installation, you must point to the media depot in the script by including the install command with the nfs or url option.The following code snippet from an ESXi installation script demonstrates how to format the pointer to the media depot if you are using NFS:install nfs --server=example.com --dir=/nfs3/VMware/ESXi/41
  12. The preboot execution environment (PXE) is an environment to boot computers using a network interface independently of available data storage devices or installed OS. These topics discuss the PXELINUX and gPXE methods of PXE booting the ESXi installer. PXE uses DHCP and Trivial File Transfer Protocol (TFTP) to bootstrap an OS over a network. Network booting with PXE is similar to booting with a DVD, but it requires some network infrastructure and a machine with a PXE-capable network adapter. Once the ESXi installer is booted, it works like a DVD-based installation, except that the location of the ESXi installation media (the contents of the ESXi DVD) must be specified. A host first makes a DHCP request to configure its network adapter and then downloads and executes a kernel and support files. PXE booting the installer provides only the first step to installing ESXi. To complete the installation, you must provide the contents of the ESXi DVD either locally or on a networked server through HTTP/HTTPS, FTP, or NFS. TFTP is a lightweight version of the FTP service, and is typically used only for network booting systems or loading firmware on network devices such as routers. If you do not use gPXE, you might experience issues while booting the ESXi installer on a heavily loaded network. This is because TFTP is not a robust protocol and is sometimes unreliable for transferring large amounts of data. If you use gPXE, only the gpxelinux.0 binary and configuration file are transferred via TFTP. gPXE enables you to use a Web server for transferring the kernel and ramdisk required to boot the ESXi installer. If you use PXELINUX without gPXE, the pxelinux.0 binary, the configuration file, and the kernel and ramdisk are transferred via TFTP. Setting up a new DHCP server is not recommended if your network already has one. If multiple DHCP servers respond to DHCP requests, machines can obtain incorrect or conflicting IP addresses, or can fail to receive the proper boot information. Seek the guidance of a network administrator in your organization before setting up a DHCP server.
  13. Scripted Installation, the equivalent of Kickstart, will be supported on ESXi 4.1. The installer can boot over the network, and at that point you can also do an interactive installation, or else set it up to do a scripted installation. Both the installed image and the config file (called “ks.cfg”) can be obtained over the network using a variety of protocols. There is also an ability to specify preinstall, postinstall, and first-boot scripts. For example, the postinstall script can configure all the host settings, and the first boot script could join the host to vCenter. These three types of scripts run either in the context of the Tech Support Mode shell (which is a highly stripped down version of bash) or in Python.
  14. The firstboot scripts are run as initscripts. All initscripts have a numerical part in their filenames. They are sorted by that numerical part to determine the order in which they are run. So a script with "90.1" would run after a script with "90.0" and before a script with "90.2"
  15. Finally, the Tech Support Mode is fully supported. We support both the local, when you are in front of the server, or remote, when you are using SSH.In ESXi 4.0, Tech Support Mode usage was ambiguous. We stated that you should only use it with guidance from VMware Support, but VMware also issued several KBs telling customers how to use it. Getting into Tech Support Mode was also not very user-friendly.The warning not to use TSM has been removed from the login screen. However, anytime TSM is enabled (either local or remote), a warning banner will appear in vSphere Client for that host. This is meant to reinforce the recommendation that TSM only be used for fixing problems, not on a routine basis.The SysAdminTools URL in the message above will take you to vMA, PowerCLI, CLI, etc.
  16. To enable or disable from the console, it’s pretty straight forward. By default, after you enable TSM (both local and remote), they will automatically become disabled after 10 minutes. This time is configurable, and the timeout can also be disabled entirely. When TSM times out, running sessions are not terminated, allowing you to continue a debugging session. All commands issued in TSM are logged by hostd and sent to syslog, allowing for an incontrovertible audit trail.When lockdown mode is enabled, DCUI access is restricted to the root user (so root can still go in), while access to Tech Support Mode is completely disabled for all users. With lockdown mode enabled, access to the host for management or monitoring using CIM is possible only through vCenter. Direct access to the host using the vSphere Client is not permitted.
  17. As you know, the tech support mode is not for day to day use. So anytime it is enabled, we will flag it.
  18. We can also enable it via the GUI. You select the ESXi you want to manage, then click on the “Configuration” tab. From here, click on the “Security Profile”. Clicking on the properties brings up this dialog box. From here, we can stop and start the relevant services.
  19. Procedure:1 Log in to the host from the vSphere Client.2 From the Configuration tab, select Advanced Settings.3 From the Advanced Settings window, select Annotations.4 Enter a security message.The message is displayed on the direct console Welcome screen.
  20. There is now an ability to totally lock down a host. Lockdown mode in ESXi 4.1 forces all remote access to go through vCenter. So Lockdown mode is only available on ESXi hosts that have been added to vCenter.
  21. The only local access is for root to access the DCUI – this could be used, for example, to turn off lockdown mode in case vCenter is down. However, there is an option to disable DCUI in vCenter. In this case, with Lockdown mode turned on, there is no possible way to manage the host directly – everything must be done through vCenter. If vCenter is down, the only recourse in this case is to reimage the box.Of course, Lockdown Mode can be selectively disable for a host if there is a need to troubleshoot or fix it via TSM, and then enabled again.BTW,
  22. vscsiStats has also been ported and is now available directly in the ESXi console. It is an advanced command, and can be used to identify IO patterns.
  23. Other useful utilities for troubleshooting have been added to TSM
  24. You can add multiple USB devices, such as security dongles and mass storage devices, to a VM that resides on an ESX/ESXi host to which the devices are physically attached. Knowledge of device components and their behavior, VM requirements, feature support, and ways to avoid data loss can help make USB device passthrough from an ESX/ESXi host to a VM successful. When you attach a USB device to a physical host, the device is available only to VMs that reside on that host. Those VMs cannot connect to a device on another host in the datacenter. A USB device is available to only one VM at a time. When you remove a device from a virtual machine, it becomes available to other VMs that reside on the host. USB Arbitrator: Manages connection requests and routes USB device traffic. The arbitrator is installed and enabled by default on ESX/ESXi hosts. It scans the host for USB devices and manages device connection among VMs that reside on the host. It routes device traffic to the correct VM instance for delivery to the guest OS. The arbitrator monitors the USB device and prevents other VMs from using it until you release it from the VM it is connected to. If vCenter polling is delayed, a device that is connected to one virtual machine might appear as though it is available to add to another virtual machine. In such cases, the arbitrator prevents the second VM from accessing the USB device. USB Controller: The USB hardware chip that provides USB function to the USB ports that it manages. The virtual USB controller is the software virtualization of the USB host controller function in the VM. USB controller hardware and modules that support USB 2.0 and USB 1.1 devices must exist on the host. Only one virtual USB controller is available to each VM. The controller supports multiple USB 2.0 and USB 1.1 devices in the virtual computer. The controller must be present before you can add USB devices to the virtual computer. The USB arbitrator can monitor a maximum of 15 USB controllers. Devices connected to controllers numbered 16 or greater are not available to the virtual machine. Before you hot add memory, CPU, or PCI devices, you must remove any USB devices. Hot adding these resources disconnects USB devices, which might result in data loss. Before you suspend a VM, make sure that a data transfer is not in progress. During the suspend/resume process, USB devices behave as if they have been disconnected, then reconnected. Also, if you use vMotion to migrate a VM away from the host that the USB device is attached to, it won't be reconnected when the VM is resumed. For compound devices, the virtualization process filters out the USB hub so that it is not visible to the virtual machine. The remaining USB devices in the compound appear to the VM as separate devices. You can add each device to the same VM or to different VMs if they reside on the same host.
  25. Another feature that was requested a lot is to integrate with MS AD. This further simplify the management of vSphere as we can now be consistent with vCenter.AD integration provides authentication for all local services. This means access via Admin Client, via the console, via remote console are all based on AD.ESX and ESXi should integrate with MS AD for all user authentication. This effectively removes static information from the ESX host and enables the "plug and play" and "stateless appliance" concepts. Customers do not want to manage user accounts on ESX or ESXi because it is additional work to what they would do in a physical environment. Lowers the Opex of managing a VI environment and also competitively positions our platform with Hyper-V which can do this today. Customers don’t want to rely on VC for these functions due to HA of VC.
  26. So how do we do it? One way is to select the ESX that you want to add to AD, and choose the “Configuration” tab. From this page, choose the “authentication service” link. Click on the properties link, the dialog box shown on the next slide is shown.
  27. From the dialog box that pops up, select “AD” from the drop down.Then specify the Domain name.Then click “Join Domain”. The next dialog box will pop up to let you enter the ID which can join a domain. Click on Join Domain button to join the domain. If there is an error, an error message will be prompted. If not, ESXi will join the domain.
  28. I guess a question from customer will be how they can do this automatically, if they have a lot of ESXi and not enough Sys Admin to manage all these things.We have enhanced our host profile. Here is the screen where we can configure the same info in the host profiles.
  29. The idea of memory compression is very straightforward: if the swapped out pages can be compressed and stored in a compression cache located in the main memory, the next access to the page only causes a page decompression which can be an order of magnitude faster than the disk access. With memory compression, only a few uncompressible pages need to be swapped out if the compression cache is not full. This means the number of future synchronous swap-in operations will be reduced. Hence, it may improve application performance significantly when the host is in heavy memory pressure. In ESX 4.1, only the swap candidate pages will be compressed. This means ESX will not proactively compress guest pages when host swapping is not necessary. In other words, memory compression does not affect workload performance when host memory is undercommitted. 3.5.1 Reclaiming Memory Through Compression Figure 8 illustrates how memory compression reclaims host memory compared to host swapping. Assuming ESX needs to reclaim two 4KB physical pages from a VM through host swapping, page A and B are the selected pages (Figure 8a). With host swapping only, these two pages will be directly swapped to disk and two physical pages are reclaimed (Figure 8b). However, with memory compression, each swap candidate page will be compressed and stored using 2KB of space in a per-VM compression cache. Note that page compression would be much faster than the normal page swap out operation which involves a disk I/O. Page compression will fail if the compression ratio is less than 50% and the uncompressible pages will be swapped out. As a result, every successful page compression is accounted for reclaiming 2KB of physical memory. As illustrated in Figure 8c, pages A and B are compressed and stored as half-pages in the compression cache. Although both pages are removed from VM guest memory, the actual reclaimed memory size is one page. If any of the subsequent memory access misses in the VM guest memory, the compression cache will be checked first using the host physical page number. If the page is found in the compression cache, it will be decompressed and push back to the guest memory. This page is then removed from the compression cache. Otherwise, the memory request is sent to the host swap device and the VM is blocked. The per-VM compression cache is accounted for by the VM’s guest memory usage, which means ESX will not allocate additional host physical memory to store the compressed pages. The compression cache is transparent to the guest OS. Its size starts with zero when host memory is undercommitted and grows when VM memory starts to be swapped out. If the compression cache is full, one compressed page must be replaced in order to make room for a new compressed page. An age-based replacement policy is used to choose the target page. The target page will be decompressed and swapped out. ESX will not swap out compressed pages. If the pages belonging to compression cache need to be swapped out under severe memory pressure, the compression cache size is reduced and the affected compressed pages are decompressed and swapped out. The maximum compression cache size is important for maintaining good VM performance. If the upper bound is too small, a lot of replaced compressed pages must be decompressed and swapped out. Any following swap-ins of those pages will hurt VM performance. 
However, since the compression cache is accounted for by the VM's guest memory usage, a very large compression cache may waste VM memory and unnecessarily create VM memory pressure, especially when most compressed pages would not be touched again. In ESX 4.1, the default maximum compression cache size is conservatively set to 10% of the VM's configured memory size. See http://www.vmware.com/files/pdf/techpaper/vsp_41_perf_memory_mgmt.pdf. Note that this paper is based on the ESX 4.0 memory management paper; besides the content new in ESX 4.1 (e.g., memory compression), quite a few places have been updated to reflect the current state of ESX memory management.
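To make the mechanics above concrete, here is a minimal, purely illustrative sketch (not ESX code; the names and the zlib compressor are my own assumptions) of the decisions described: a swap candidate page is kept in the cache only if it compresses to half a page or less, a full cache evicts its oldest entry to the swap device, and a later access either hits the cache or falls back to a swap-in.

    import zlib
    from collections import OrderedDict

    PAGE_SIZE = 4096
    HALF_PAGE = PAGE_SIZE // 2   # a page must compress to <= 50% of its size to be cached

    class CompressionCache:
        # Toy model of a per-VM compression cache (illustrative only).
        def __init__(self, max_slots):
            self.max_slots = max_slots
            self.slots = OrderedDict()   # page number -> compressed bytes, oldest first

        def reclaim(self, page_number, page_bytes, swap_out):
            compressed = zlib.compress(page_bytes)
            if len(compressed) > HALF_PAGE:
                swap_out(page_number, page_bytes)        # ratio worse than 50%: swap instead
                return
            if len(self.slots) >= self.max_slots:        # cache full: age-based replacement
                old_page, old_data = self.slots.popitem(last=False)
                swap_out(old_page, zlib.decompress(old_data))
            self.slots[page_number] = compressed

        def access(self, page_number):
            # Return the page on a cache hit (removing it), or None to fall back to swap-in.
            data = self.slots.pop(page_number, None)
            return zlib.decompress(data) if data is not None else None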
  30. What does the counter _really_ mean as it’s an _average_ of a _rate_?
  31. Esxtop also has a power view, accessed by pressing "p".
  32. (2) The feature of displaying per-VM power consumption is experimental and off by default. It can be turned on with an advanced config option, as the paragraph describes. The per-VM power consumption feature is dependent on the host power consumption feature.
  33. HA and DRS have always been popular features among our customers. I have quite a number of customers who found that HA alone is good enough for their SLA and moved away from MS clustering. In 4.1, we have a number of enhancements to these core features.
  34. Give tips on HA. Types of cluster: prod, DMZ, tier 2, IT cluster, non-prod, desktop; and why the minimum number of hosts is 4. This slide gives a summary of the new enhancements. As customers adopt more and more virtualisation, we are entering the phase where mission-critical workloads are virtualised. With all these enhancements in 4.1, customers may be tempted to create large clusters and put everything there; by large I mean either a large number of nodes or a lot of VMs in a single cluster. Personally, I still prefer the traditional approach, where a cluster is really the building block, so we have multiple clusters, each with a distinct purpose. From the list above, something that I think customers will appreciate is the
  35. In the past, customers reported that they very occasionally saw DRS "get it wrong", in the sense that DRS would move VMs based purely on performance criteria with scant regard for availability. What this means is that it was possible (if somewhat unlikely) for DRS to place 20 VMs on one ESX host and only 8 VMs on another. While that may have been a good idea from a performance standpoint, it could lead to scenarios where DRS itself created an "eggs in one basket" situation, as DRS did not distribute VMs to prevent one ESX host from carrying a much bigger VM count than another. In that scenario, DRS would have to carry out VMotions to free up resources so that HA can power on a VM.
  36. For Application Monitoring, developers write application monitoring agents using the Application Monitoring SDK for the specific applications running in the VM. Support has been added in VMware Tools for an application to report its heartbeat/status. This gets communicated to vCenter as an "AppHeartbeatStatus" (similar to the "GuestHeartbeatStatus"), and HA can respond to it by going red, indicating that the application has died. Thus, Application Monitoring works for those applications that use the new VMware Tools capability along with an application monitoring agent to report application status. To enable Application Monitoring: obtain the SDK from VMware (this is for the ISV, not end customers), then use it to set up customised heartbeats for the applications you want to monitor. A conceptual sketch of the agent pattern follows.
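Purely as an illustration of that agent pattern (the sdk_* and app_is_healthy functions below are hypothetical placeholders, not the actual SDK calls): the agent enables monitoring once and then sends heartbeats only while the application it watches is healthy, so a crashed application causes the heartbeats to stop and HA can react.

    import time

    HEARTBEAT_INTERVAL = 15  # seconds; illustrative value only

    def app_is_healthy():
        # Placeholder: replace with a real probe of the monitored application
        # (e.g. check a process, a TCP port, or an internal status URL).
        return True

    def sdk_enable_monitoring():
        pass  # placeholder for the SDK call that enables application monitoring

    def sdk_mark_active():
        pass  # placeholder for the SDK call that sends one application heartbeat

    def run_agent():
        sdk_enable_monitoring()
        while app_is_healthy():        # heartbeat only while the application is alive
            sdk_mark_active()
            time.sleep(HEARTBEAT_INTERVAL)
        # Once the loop exits, heartbeats lapse and HA treats the application as failed.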
  37. Since the hypervisor has full control over the execution of a VM, including delivery of all inputs, the hypervisor is able to capture all the necessary information about non-deterministic operations on the primary VM and to replay these operations correctly on the backup VM. The tagging scheme does not introduce any significant delay in the replaying VM, since the hypervisor of the recording (primary) VM guarantees that the last log entry of each single instruction emulation or device operation is marked as a go-live point. Since the backup VM cannot be significantly delayed, the primary VM is also not affected by the use of go-live points.
  38. Patches can cause host build numbers to vary between ESX and ESXi installations. To ensure that your hosts are FT compatible, do not mix ESX and ESXi hosts in an FT pair.
  39. FT with vSphere 4.1 still has some incompatibilities: thin provisioning and linked clones; hot-plug devices and USB passthrough; IPv6 (as HA does not support it); vSMP; N-Port ID Virtualization (NPIV); Storage VMotion; serial/parallel ports; physical and remote CD/floppy.
  40. Business opportunity: migrate customers from clustering (running 2 instances) to FT, where we get higher uptime.
  41. #1: If administrators wanted to move an ESX host from one vCenter instance to a new one (for whatever reason), they usually did not bring the ESX host into maintenance mode. But adding the host to the new vCenter Server without removing it from the previous one caused FT failures. Now the administrator gets a warning, which can be followed or ignored (yes/no), when trying to add an ESX host that is managed by a different vCenter. #2: DRS will vMotion FT-enabled VMs if needed and will place them according to DRS groups and other rules; Storage vMotion is still not supported with FT, though. #3: Previously, if an administrator wanted to disable HA he was forced to disable FT first. Now he gets a warning and can decide to override, accepting that FT will not work as expected; following this decision, several FT-related operations are disabled while HA is off.
  42. So how do we do it? We can now create 2 types of group: groups of VMs and groups of ESX hosts. We then map a VM group to an ESX host group.
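A minimal sketch of the idea (illustrative data only; the group and rule names are invented): named VM groups and host groups, tied together by a rule saying the VM group should, or must, run on that host group.

    # Illustrative only: one VM group, one host group, and a rule tying them together.
    vm_groups   = {"prod-oracle-vms": ["ora01", "ora02"]}
    host_groups = {"oracle-licensed-hosts": ["esx01", "esx02"]}

    rules = [{
        "vm_group":   "prod-oracle-vms",
        "host_group": "oracle-licensed-hosts",
        "type":       "should run on",   # soft preference; a "must run on" rule would be hard
    }]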
  43. Can an ESX host belong to multiple groups?
  44. The separation rules can now include more than two VMs. If you create a "separate VMs" rule and include 5 VMs, you will need at least 5 ESX hosts to accommodate the rule, as each of the VMs must run on a separate host. A quick feasibility check is sketched below.
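A trivial, illustrative check of that constraint (not a vCenter feature): an anti-affinity rule is only satisfiable when the cluster has at least as many hosts as the rule has VMs.

    def anti_affinity_feasible(vms_in_rule, hosts_in_cluster):
        # A "separate VMs" rule needs one distinct host per VM in the rule.
        return len(hosts_in_cluster) >= len(vms_in_rule)

    # Example: a 5-VM separation rule in a 4-host cluster cannot be satisfied.
    print(anti_affinity_feasible(["vm1", "vm2", "vm3", "vm4", "vm5"],
                                 ["esx1", "esx2", "esx3", "esx4"]))   # False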
  45. vMotion is not a cluster feature; we can vMotion across clusters. Can we vMotion between 2 clusters with different EVC modes? We can try this in the lab. We should be able to vMotion from 4.0 to 4.1, as we can from 3.5 to 4.1.
  46. This sounds quite complicated but is easy to understand. Assume a VM was powered on in an older EVC mode and migrated (without powering off) to a cluster with a newer mode (and newer features). In this case the VM is "part" of the new EVC mode, but does not use the new features; it still uses the old ones. Previously, if you tried to vMotion this VM to an ESX host with the older EVC mode, vCenter complained that they were not compatible, as the ESX host was not compatible with the current EVC mode the VM is running in. Now vCenter checks which mode the VM itself uses and accepts vMotioning to an older mode, as the VM does not care and is still not using the new features. A simplified sketch of the check follows.
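As a simplified sketch of that decision (illustrative only; the mode names are examples of EVC baselines and this is not vCenter's actual logic): the old check compared the destination host against the cluster's current EVC mode, while the new check compares it against the mode the VM was actually powered on with.

    # EVC baselines ordered from oldest (fewest CPU features) to newest (illustrative subset).
    EVC_ORDER = ["intel-merom", "intel-penryn", "intel-nehalem"]

    def mode_rank(mode):
        return EVC_ORDER.index(mode)

    def can_vmotion_old(vm_powered_on_mode, cluster_mode, dest_host_mode):
        # Old behaviour: the destination host had to support the cluster's current EVC mode.
        return mode_rank(dest_host_mode) >= mode_rank(cluster_mode)

    def can_vmotion_new(vm_powered_on_mode, cluster_mode, dest_host_mode):
        # New behaviour: the destination host only has to support the mode the VM
        # was powered on with, since that is the feature set it actually uses.
        return mode_rank(dest_host_mode) >= mode_rank(vm_powered_on_mode)

    # A VM powered on under Merom, now in a Nehalem-mode cluster, moving to a Merom host:
    print(can_vmotion_old("intel-merom", "intel-nehalem", "intel-merom"))   # False
    print(can_vmotion_new("intel-merom", "intel-nehalem", "intel-merom"))   # True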
  47. Earlier Add-Host Error detection: Host-specific incompatibilities are now displayed prior to the Add-Host work-flow when adding a host into an EVC cluster.
  48. In the vSphere VM Administration Guide, page 92, VMware writes: "You can verify the CPU settings for the VM on the Resource Allocation tab." But in this menu there is no indication of the multi-core configuration, so what do I have to look for? Is it already implemented in the vSphere 4.1 RC? When you configure multicore virtual CPUs for a VM, CPU hot add/remove is disabled. For more information about multicore CPUs, see the vSphere Resource Management Guide; you can also search the VMware KNOVA database for articles about multicore CPUs. http://www.cpuid.com/softwares/cpu-z.html provides a more detailed view within each guest OS. Need to see if we can use Orchestrator or PowerShell to check this.
  49. Need to check the PowerCLI and vSphere API to see whether we can do this programmatically; a low-tech alternative is sketched below.
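As one low-tech alternative while we check the API route, this sketch (a Python example, under the assumption that the multicore setting is stored as the VM advanced parameter cpuid.coresPerSocket in the .vmx file; the file path is hypothetical) reads the .vmx directly and reports the socket/core layout.

    def read_vmx(path):
        # Parse simple key = "value" lines from a .vmx file into a dict.
        settings = {}
        with open(path) as f:
            for line in f:
                if "=" in line:
                    key, _, value = line.partition("=")
                    settings[key.strip()] = value.strip().strip('"')
        return settings

    vmx = read_vmx("/vmfs/volumes/datastore1/myvm/myvm.vmx")   # hypothetical path
    vcpus = int(vmx.get("numvcpus", "1"))
    cores_per_socket = int(vmx.get("cpuid.coresPerSocket", "1"))
    print("%d vCPUs = %d sockets x %d cores" %
          (vcpus, vcpus // cores_per_socket, cores_per_socket))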
  50. Note that "Average Capacity" in the report refers to the average capacity of all license keys for that product. Products (e.g. vSphere Enterprise) can have multiple keys; each key has a capacity and usage associated with it. In the screen above, current capacity is the total capacity for all the keys, and average capacity is the average capacity across the keys. For example, for the product vSphere Enterprise:

key | capacity | usage
xxxx-xxxx-xxxx | 1000 | 500
yyyy-xxxx-xxxx | 2000 | 100

For vSphere Enterprise we would report: Total Capacity = 3000, Total Usage = 600, Average Usage = 300, Average Capacity = 1500.
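A small worked calculation, assuming the two example keys above, shows how the four reported figures are derived (totals are sums across the keys; averages divide by the number of keys).

    # Example keys for one product, as (capacity, usage) pairs matching the table above.
    keys = [(1000, 500), (2000, 100)]

    total_capacity   = sum(capacity for capacity, _ in keys)   # 3000
    total_usage      = sum(usage for _, usage in keys)         # 600
    average_capacity = total_capacity / len(keys)              # 1500
    average_usage    = total_usage / len(keys)                 # 300

    print(total_capacity, total_usage, average_capacity, average_usage)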