1. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 1
Deploying Applications
in Today's Network
Infrastructure
2. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 2
Why did I create this Presentation?
Prepares networking professionals for the fundamentals of deploying applications
in today's server virtualization infrastructure
Gartner sees virtualized workloads becoming software-defined
Infrastructure integration is leading to traditional
silos merging
Virtualization abstracts a high percentage of
workloads & increases portability
Drive toward x86 server standardization
Workloads consume infrastructure and have a
personality defined by:
– Function (e.g., Web App, Database, VDI)
– State (e.g., transaction, publish, share)
– Size (e.g., small, medium, large)
– Availability (e.g., portability & clustering)
– Complexity, security ....
Source: Philip Dawson, Gartner
3. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 3
A Fabric Resource Pool Manager (FRPM)
The FRPM is typically hosted by a top-of-rack
switch or a dedicated management server
Think of the FRPM as an "uber-management
suite," enabling easier component
aggregation/disaggregation
FRPMs may be implemented singly or
in conjunction with one another
Cisco UCS Manager and UCS Central are examples of a
Fabric Resource Pool Manager (FRPM)
Virtualization Drives Hardware Abstraction
4. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 4
Workloads are the use cases of infrastructure
Server virtualization: Are we there yet?
Penetration Has Reached Critical Mass: as of 2012, 58% of all installed x86 server
workloads are running in a VM
Which Workloads Do You or Do You Not Virtualize?
1. Large OLTP DBMS
2. Large application servers
3. Large ERP projects
4. Complex BI/DW workloads
5. Large email instances
6. Commercial issues (support and licensing)
7. Clustered environments
8. I/O-intensive applications
9. Workloads that scale above a single socket
From "Virtual Machines Will Slow in the Enterprise, Grow in the Cloud," 4 March 2011, Gartner
5. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 5
Example Workload: Hosted Virtual Desktops
Understanding the architectures as workloads increase
Have You Thought About …
Typically 4-5 users per core (pre-Nehalem),
7-9 users per core (Nehalem)
I/O - sufficient bandwidth and throughput?
Memory configurations (2GB to 4GB per VM
running Windows)
Server type: rack, blade, stand-alone, etc.
Server density may cause data center
power/cooling issues
Windows 7 images from 15GB to 45GB; with
deduplication technologies, from 2GB to 15GB
Expandability
Often a step function in net-new server and
storage infrastructures
Highly dense zones: space, power, cooling
Latency (<150 ms)
Bandwidth - from 100 kbps up to 5 Mbps
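As a back-of-the-envelope sizing sketch using the per-core densities above (the function is illustrative only; the 16-core host and 256 GB RAM per host figures are assumptions, not from the deck):

```python
import math

def vdi_hosts_needed(users, users_per_core=8, cores_per_host=16,
                     gb_per_vm=3, gb_per_host=256):
    """Estimate host count for a VDI pod by taking the worse of the
    CPU-bound and memory-bound constraints."""
    cpu_bound = math.ceil(users / (users_per_core * cores_per_host))
    mem_bound = math.ceil(users * gb_per_vm / gb_per_host)
    return max(cpu_bound, mem_bound)

# 6,000 knowledge workers at Nehalem-era density (8 users/core):
print(vdi_hosts_needed(6000))                      # memory-bound: 71
# Pre-Nehalem density (4 users/core) flips it to CPU-bound:
print(vdi_hosts_needed(6000, users_per_core=4))    # 94
```

Running both constraints side by side is the point: the binding resource changes as per-core density improves.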
Recommended Reading: Workload Considerations for Virtual Desktop Reference Architectures by
VMware - http://www.vmware.com/files/pdf/VMware-WP-WorkloadConsiderations-WP-EN.pdf
6. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 6
Application Deployment Case Study
Initial POD VDI deployment. Scale to additional PODs
VMware View 5.1 Deployment
Design should be scalable with no significant change in performance or stability
compared to current physical workloads per pod - a "deploy and users will come"
model focusing on a 6K POD deployment
Knowledge Worker Profile
– This is a middle of the range performance profile tier
– Applicable to many generic types of users
– Suitable to run basic corporate application suites
– Linked-clone desktops
– Non-persistent desktop type
7. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 7
VMware View POD Logical Design
Storage is broken down into two NetApp arrays, each of which will service up to
3,000 users using NFS
Diagram: two VDI clusters plus a management cluster. Blocks 1 & 2 and blocks 3 & 4
each have their own VDI storage; compute is on UCS B230 blades with UCS B200 blades
for management, connected through Nexus 5Ks to two NetApp 3270 storage arrays.
Datastore layout per array: 2 x 318 GB server datastore, a 500 GB desktop template
datastore, a 2500 GB user data store per 250 users, and 530 GB desktop datastores
per 250 VMs.
8. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 8
VMware View POD - Logical Architecture Design
Example of a VMware View Pod
A VMware View pod integrates multiple
1,500-user building blocks into a View
Manager installation that you can manage
as one entity
A pod is a unit of organization determined
by VMware View scalability limits. The table
lists the components of a View POD:
– View building blocks: 4
– Each block consists of 2 ESX hosts
– View Connection Servers: 4 (3 active and 1
failover)
– 10Gb fabric and Cisco Nexus 1000V DS
Pod Architecture for 6000 View Desktops
PODs change based on requirements - Consult the VMware View Architecture Planning
9. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 9
VMware View POD - Logical Architecture Design
VMware View POD broken down into management and compute blocks
Management B200 M3 Small Blade Config
Supports small to medium size VMs / Physical
Compute B230 M2 Small Blade Config
Supports small to medium size VMs / Physical
11. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 11
Application Deployment Tips and Tricks
A collection of hints and tips gathered from over three years of UCS/Nexus 1Kv
deployments
What do we see on site?
Majority of deployments (80%+) run a mix of hypervisors and bare metal
– Around 20% run with no bare metal at all
The hypervisor is for the large part (80%) VMware's vSphere ESXi
– ESXi 4.1 Update 2 primarily, with customers moving to ESXi 5.1
– Microsoft's Hyper-V comes second
– XenServer comes third
– OpenStack, including Xen and KVM, is increasing in popularity
Bare-metal deployments consist mostly of Windows Server 2008 R2 and RHEL
– Either the bare-metal deployment is imposed by the application vendor
– Or virtualization isn't fully trusted yet (or is misunderstood?)
12. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 12
Application Deployment Tips and Tricks
A collection of hints and tips gathered from over three years of UCS/Nexus 1Kv
deployments
Boot from SAN has exploded … but can be tricky to implement
– From virtually non-existent in mid-2008 to 90% today
– Valid for all OS: Windows, ESXi and Linux
Fairly limited expertise in “advanced” OS deployments
– ESXi HA cluster design options, Nexus 1000V, how many vNICs per blade, etc.
Misunderstanding of certain networking options in UCS
– The infamous Native VLAN checkbox anyone?
Customers love automation of repetitive tasks … but often don’t know how
– OS deployments, configuration of networking in ESXi
Which sensors and objects should I be monitoring?
Recurring patterns, questions and concerns
13. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 13
How do Servers Communicate?
Converged Network Adapters converge the functionality of network and storage
adapters
Servers have at least two adapters – an FC HBA (Fibre Channel Host Bus Adapter) and an
Ethernet NIC (Network Interface Card) – to connect to the storage network (Fibre
Channel) and the computer network (Ethernet)
14. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 14
Future Proofing for Virtualization
Building an environment which can scale
Cisco UCS adds support for flexible VLAN configurations on Fabric Interconnect
(FI) uplink ports while using End Host Mode. This feature provides support for all
combinations of upstream network configurations:
End Host Mode and Switch Mode
End Host Mode is similar to the hardware
implementation of VMware vSwitches – no
spanning tree, no loops, and it does not appear
as a switch to the external network
Switch Mode means the FIs can act like a
normal switch (use spanning tree, etc.)
I almost always recommend using
End Host Mode (the default mode)
15. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 15
Inter-Fabric Traffic using Cisco UCS
UCS Release 2.0(2m) adds support for Nexus 2232 fabric extender
In order for Cisco UCS to provide the benefits, interoperability and management
simplicity it does, the networking infrastructure is handled in a unique fashion:
UCS rack-mount and blade servers are
connected to a pair of FIs, which handle
switching and management
The rack-mount servers connect to Nexus
2232s, which provide a local 10GE FCoE
connectivity point without expanding management
Not shown in this diagram are the I/O Modules
(IOM) in the back of the UCS chassis. These
extend to the Fabric Interconnects providing
management
16. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 16
Cisco UCS Logical Connectivity
UCS is a Layer 2 system so any routing (L3 decisions) must occur upstream
UCS hardware is designed for low latency environments, such as high performance
computing, and perfect for today’s applications:
All switching occurs at the Fabric Interconnect
and no intra-chassis switching occurs
The only connectivity between the FIs is the
cluster links. Both FIs are active from a
switching perspective, but UCS Manager
(UCSM) is an active/standby clustered
application. This clustering occurs across the
L1 and L2 links. These links do not
carry data traffic
17. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 17
Cisco UCS Fabric Failover
When deploying applications in the network, multipath and pinning configuration
is critical
Fabric Failover is a capability found in Cisco UCS that allows a server to have
highly available paths without NIC teaming drivers or any NIC failover
configuration in the OS, hypervisor, or virtual machine
Fabric Failover provides the servers with a
virtual cable that can be quickly and
automatically moved from one upstream
switch to another
The interface identifier and MAC address remain
the same
Fabric Failover is simple! Perfect for PXE, Linux
and Windows installs. Just check the box!
18. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 18
VMware ESX vNICs for UCS
To minimize errors and ensure uniform service profiles, leverage vNIC templates.
vNIC templates provide a mechanism to define interfaces and their policies. An
interface contains a list of VLANs (or a single VLAN if that's required), whether
CDP is enabled, a QoS policy, which pool the MAC address comes from, the
logical FI routing, and the MTU
Eight vNIC templates for ESX host;
– ESX-Mgmt-A vmnic0 management for the host and Nexus 1Kv
– ESX-Mgmt-B vmnic1 fabric B
– ESX-NFS-A vmnic2 NFS mounts for fabric A - 9000 MTU
– ESX-NFS-B vmnic3 fabric B
– ESX-PROD-A vmnic4 data traffic for fabric A
– ESX-PROD-B vmnic5 fabric B
– ESX-Vmotion-A vmnic6 vMotion for fabric A - 9000 MTU
– ESX-Vmotion-B vmnic7 fabric B
Additional options explored at the end of the session!
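The fabric A/B pairing in the list above is mechanical enough to generate programmatically. A small sketch (the helper function is hypothetical; the names mirror the template list above):

```python
def vnic_templates(roles, prefix="ESX"):
    """Expand each role into a fabric A/B pair of vNIC templates,
    numbering vmnics in order (vmnic0/1 for the first role, etc.)."""
    out = []
    for i, role in enumerate(roles):
        for j, fabric in enumerate(("A", "B")):
            out.append({"name": f"{prefix}-{role}-{fabric}",
                        "vmnic": f"vmnic{2 * i + j}",
                        "fabric": fabric})
    return out

# Reproduces the eight-template layout for an ESX host:
for t in vnic_templates(["Mgmt", "NFS", "PROD", "Vmotion"]):
    print(t["name"], "->", t["vmnic"])
```

Generating the pairs rather than hand-typing them is one way to keep the A/B naming and vmnic ordering consistent across hosts.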
20. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 20
Windows 2K8 vNICs for UCS
To minimize errors and ensure uniform service profiles, leverage vNIC templates.
vNIC templates provide a mechanism to define interfaces and their policies. An
interface contains a list of VLANs (or a single VLAN if that's required), whether
CDP is enabled, a QoS policy, which pool the MAC address comes from, the
logical FI routing, and the MTU
Two vNIC templates for Windows 2008 host
– WIN2K8-PROD-AB windows management interface. Enable Fabric Failover. Pin to fabric A for
management
– WIN2K8-NFS-AB Windows NFS interface. MTU set to 9000. Enable Fabric Failover. Pin to
fabric B for NFS
Use a unique MAC resource pool
21. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 21
World Wide Names (WWPN/WWNN)
Cisco UCS allows users to create custom values for World Wide Names
Used to logically identify resources for storage fabric zoning,
array LUN masking
Similar to MAC addresses for Ethernet
2 types:
– World Wide Node Name (WWNN) – Identifies node
– World Wide Port Name (WWPN) – Identifies a port on a node
Visible in name server and FLOGI tables
8 bytes, representing:
– Format 1, 2, or 5 in the first 2 bytes (e.g., 20:00)
– Vendor-unique OUI in bytes 3 through 5 (e.g., 00:25:B5)
– Assigned serial number in bytes 6 through 8 (e.g., 00:01:02)
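The 8-byte layout above can be sketched as simple string assembly (an illustrative helper, using the slide's own example values for the format prefix, OUI, and serial):

```python
def make_wwpn(fmt_prefix="20:00", oui="00:25:B5", serial=0x000102):
    """Assemble an 8-byte WWPN from the three fields described above:
    2-byte format prefix, 3-byte vendor OUI, 3-byte serial number."""
    s = f"{serial:06X}"                              # 3 bytes -> 6 hex digits
    serial_octets = ":".join(s[i:i + 2] for i in range(0, 6, 2))
    return f"{fmt_prefix}:{oui}:{serial_octets}"

print(make_wwpn())  # 20:00:00:25:B5:00:01:02, the slide's example
```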
22. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 22
Suggested WWNN/WWPN Octet Values
23. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 23
Suggested WWNN/WWPN Best Practice
Always create pools that are multiples of 16 and contain fewer than 128 entries
– This ensures vHBA-A (SAN A) and vHBA-B (SAN B) have the same low-order byte
Counter-example: 233-entry pools
It is much better for both vHBAs to have the same low-order byte and a unique SAN
fabric identifier
– Presence of "0A" or "0B" in the port WWN indicates the SAN fabric
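To see why equally sized pools keep the low-order byte aligned across the two fabrics, a quick sketch (the pool generator is illustrative; the 0A/0B fabric byte placement follows the slide's convention):

```python
def wwpn_pool(fabric, size, prefix="20:00:00:25:B5"):
    """Generate `size` sequential WWPNs with the SAN fabric identifier
    (0A for fabric A, 0B for fabric B) in the second-to-last byte."""
    fab = {"A": "0A", "B": "0B"}[fabric]
    return [f"{prefix}:{fab}:{i:02X}" for i in range(size)]

pool_a = wwpn_pool("A", 16)
pool_b = wwpn_pool("B", 16)
# Same index in each pool -> same low-order byte, differing fabric byte,
# so a server's vHBA-A/vHBA-B addresses pair up cleanly.
for a, b in zip(pool_a, pool_b):
    assert a[-2:] == b[-2:] and a[-5:-3] != b[-5:-3]
print(pool_a[0], "/", pool_b[0])
```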
24. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 24
Port WWN pools
Use Expert setting when creating vHBAs
25. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 25
Suggested MAC Format for UCSM
Cisco UCS allows users to create custom values for MAC addresses
26. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 26
Before we move on …
The Native VLAN checkbox
When defining VLANs on a given vNIC inside a SP, there’s a Native VLAN
column
When are you supposed to check that box?
27. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 27
Native VLAN on a vNIC
When to check it
The Native VLAN checkbox here is link-local only
– It has zero impact on network uplinks or other SPs
Behind the scenes vNICs are trunk (802.1Q) ports
– FCoE VLAN + classical Ethernet VLAN(s)
A vNIC can have one to N VLANs defined on it, but only one can be Native
Native VLAN checked means traffic is sent to the OS with no tag on that VLAN
– Typical with single VLAN vNICs
– The OS just receives traffic on the corresponding interface, no need to define VLAN-based
sub-interfaces
Native VLAN unchecked means the OS must be able to handle 802.1Q tags
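The tagging rule above can be summarized in a tiny decision helper (purely illustrative of the behavior described, not a UCS API):

```python
def frame_to_os(vlan, vnic_vlans, native_vlan=None):
    """How a frame on `vlan` reaches the OS through a vNIC trunking
    `vnic_vlans`: untagged on the native VLAN, 802.1Q-tagged otherwise,
    dropped if the VLAN isn't defined on the vNIC."""
    if vlan not in vnic_vlans:
        return "dropped"
    return "untagged" if vlan == native_vlan else "tagged"

# Single-VLAN vNIC with Native checked: the OS needs no sub-interfaces
print(frame_to_os(100, {100}, native_vlan=100))   # untagged
# Trunk with nothing marked native: the OS must handle 802.1Q tags
print(frame_to_os(200, {100, 200}))               # tagged
```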
28. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 28
Native VLAN examples
Will this work?
This Service Profile is associated to a blade
running ESXi
This won't work! All traffic is
sent tagged to ESXi. A native VLAN
must be defined to handle
management traffic!
29. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 29
Native VLAN examples
How about this one?
This Service Profile is associated to a blade running Windows 2008 R2 (not a
VM!)
This will work. Traffic on the
“backbone” VLAN arrives
untagged and is handled by
“Cisco VIC Ethernet
Interface”
30. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 30
Boot Process
Booting is an involved, hacky, multi-stage affair – fun stuff
Outline of the typical boot process:
Once the motherboard is powered up,
it initializes its own firmware (chipset)
The CPU designated as the bootstrap
processor (BSP) runs all of the
BIOS and kernel initialization code
Pre-Execution Environment (PXE, "pixie") is an
environment to boot computers using
a network interface independently of
data storage devices (like hard disks)
or installed operating systems
31. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 31
Unattended OS Installation and Boot Process
Booting is simplified using UCS over the network or SAN
UCS solves the booting complexity
Create Boot Policy
Complete control of system boot policy
separate from the BIOS settings
– PXE, FC SAN
– Virtual media (CDROM, ISO, USB, floppy)
Control of how to un-provision servers
to factory default when no longer
required
– Called “Scrub Policy”
– Optionally clear BIOS settings
– Optionally wipe local disks
Allows for removing the low-level
configuration state on server
– Easier automation possible
32. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 32
Cisco Server Provisioner automatically installs operating environments for
physical and virtual servers and blades, a process known as bare-metal provisioning
Simple product to install
Easy to use & well documented (with graphical tutorials)
3-step process to provision
1. Prepare the ISO (Windows, Linux, ESX)
2. Use Web UI to create:
Provisioning Role Templates (MAC-Spec Provisioning)
MAC-Independent Provisioning menus
3. Assign templates and values to systems based on requirements
Cisco Server Provisioner
Automated System Provisioning, Recovery, and Cloning
33. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 33
Cisco Server Provisioner
MAC-Independent ("Pull") Provisioning vs. MAC-Specific ("Push") Provisioning
MAC address-specific push provisioning can be used in situations where
users rarely touch the computer systems and rely on a provisioning
dashboard to remotely provision servers and blades
34. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 34
My Cloupia Solution "Demo"
Key Components Of Cloupia Solution
– Cloupia Unified Infrastructure Controller
– Cloupia Network Services Appliance
The Cloupia Unified Infrastructure Controller (CUIC) is a multi-tenant,
multi-hypervisor provisioning and management solution that provides
comprehensive virtual infrastructure control, management and monitoring
via a single pane of glass
The Cloupia Network Services Appliance provides PXE boot capabilities
for bare metal provisioning and acts as a PXE repository
35.
What Can I Do in Cloupia Unified Infrastructure Controller
Adding Physical Accounts
Adding Virtual Accounts
Discovery
Policies/Policy Creation
Virtual Data Center (vDC)
Catalog (self-service catalog)
Adding a Cisco UCSM Account - you can also add other Compute/Network/Storage
platforms, such as a NetApp OnTap account
Discovery covers both virtual and physical resources
36. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 36
CUIC Policies - A policy is a group of rules which determines
where and how a new VM will be provisioned within the
infrastructure, based on the availability of system resources
CUIC needs four policies to be set up by the sysadmin in order to
provision VM(s)
CUIC VDC - An environment that combines
– Infrastructure and virtual resources
– Rules and Policies
– Business Operational Processes
– Cost Model
– Enable/Disable Storage Efficiency
– End User Self Service Option
– Network,
– Storage,
– Computing,
– Service Delivery/System Policy
A CUIC Catalog combines:
– Group and images
– Application category, application type
– Additional options such as credentials,
guest customization, remote access, etc.
– And overall presents as a single 'Menu Item'
to 'Self Service' users in a group(s)
37. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 37
Preparing Server for Applications
Boot from SAN Tasks – You don't need to be a storage guru!
UCS Manager Tasks
– Create a Service Profile Template with x number of vHBAs
– Create a Boot Policy that includes SAN Boot as the first device and link it to the Template
– Create x number of Service Profiles from the Template
– Use Server Pools, or associate servers to the profiles
– Let all servers attempt to boot and sit at the “Non-System Disk” style message that UCS servers return
Switch Tasks
– Zone the server WWPN to a zone that includes the storage array controller’s WWPN
– Zone the second fabric switch as well. Note: For some operating systems (Windows for sure), you need to
zone just a single path during OS installation so consider this step optional
Array Tasks
– On the array, create a LUN and allow the server WWPNs to have access to the LUN
– Present the LUN to the host using a desired LUN number (typically zero, but this step is optional and not
available on all array models)
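The switch and array tasks above reduce to two checks the fabric makes before the server can see its boot LUN. A toy model (the data structures are illustrative; WWPNs follow the pool convention used earlier in the deck):

```python
def can_boot_from_san(server_wwpn, target_wwpn, zones, lun_masks):
    """Predict whether a boot-from-SAN attempt can reach its LUN:
    the server and target WWPNs must share a zone, and the server
    WWPN must be permitted by the array's LUN masking."""
    zoned = any(server_wwpn in z and target_wwpn in z for z in zones)
    masked_in = server_wwpn in lun_masks.get(target_wwpn, set())
    if not zoned:
        return "zoning error"
    if not masked_in:
        return "masking error"
    return "ok"

zones = [{"20:00:00:25:B5:0A:00", "50:00:00:11:22:33:44:55"}]
lun_masks = {"50:00:00:11:22:33:44:55": {"20:00:00:25:B5:0A:00"}}
print(can_boot_from_san("20:00:00:25:B5:0A:00",
                        "50:00:00:11:22:33:44:55", zones, lun_masks))
```

Separating the two failure modes mirrors the troubleshooting flow later in the deck: no target visible usually means a zoning or masking error.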
38. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 38
Boot from SAN
Steps required to configure boot from SAN
Note – If you are installing a new OS on the boot LUN you
might need to add a CDROM drive to the Boot Policy
39. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 39
Tune your BIOS policy
Let the server speak up
Boot from SAN involves several key components working hand in hand
– Correct UCSM boot-from-SAN policy with the right target port WWNs
– Correct SAN zoning and LUN masking are imperative
– SAN array must present a LUN (storage groups, initiator groups, etc.)
During your first trial a component won’t work the way it’s supposed to
UCSM lets you create BIOS policies that you can attach to the Service Profile
Best Practice: for Boot-from-SAN you always want Quiet Boot disabled
40. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 40
Build your boot policy
One path works, but if resiliency matters
UCS can boot from 4 different paths
– You can boot with just a single target boot policy, but not ideal for resiliency
Typically, you’ll want a boot policy that goes like this:
That policy says:
– First try vHBA fc0 pWWN “63” via fc0 Storage Processor A, port A3
– Then try vHBA fc0 pWWN “6B” via fc0 Storage Processor B, port B3
– If those fail, then try fc1 (first pWWN “64” on SP A; then pWWN “6C” on SP B)
Don’t forget to append CD-ROM or PXE after the SAN boot targets
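The four-path order described above, written out as data (a sketch only; the pWWN endings follow the slide's "63"/"6B"/"64"/"6C" shorthand):

```python
# Boot targets are tried strictly in order: both storage processors
# through fc0 first, then both through fc1, then removable media last.
boot_policy = [
    ("fc0", "SP-A port A3", "...63"),
    ("fc0", "SP-B port B3", "...6B"),
    ("fc1", "SP-A", "...64"),
    ("fc1", "SP-B", "...6C"),
    ("cdrom", None, None),    # appended after the SAN targets
]
for vhba, target, pwwn in boot_policy:
    print(vhba, target, pwwn)
```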
41. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 41
Let’s boot the server
Keep an eye out
Associate the boot policy you just defined, then boot the server
With an M81KR adapter, this is what you'll see for each vHBA
If you do not see the array show up
here, there’s probably a zoning or
masking error
42. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 42
Booting from SAN Troubleshooting
Booting from SAN is not necessarily the easiest configuration
UCS removes the complexity of booting from SAN by using service profiles,
templates and associated boot policies
Here the adapter is logging into an array which has a
WWPN of 20:00:00:1F:93:00:12:9E, and
its Service Profile is associated to blade
1/1 in chassis 1, slot 1
First connect to the VIC
firmware
Then list the vNIC IDs
and force the VIC to log
into the SAN fabric
We have successfully logged into the fabric:
there is a successful PLOGI, and the
available LUNs are reported
43. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 43
Preparing Server for Applications
VMware vSphere 5 Auto Deploy and Cisco UCS
43
A new stateless functionality that ships with vSphere 5
– Stateless PXE boot of bare-metal hosts
– Assign a specific configuration to PXE-booted hosts
PXE-booted hosts receive a configuration at boot time
– OS is not installed on disk, it runs from RAM
Configuration is applied through Host Profiles when the host connects to vCenter
Auto Deploy works in tandem with a vCenter Server, a DHCP and a TFTP server
– DHCP and TFTP server not part of Auto Deploy. They have to be configured to point to Auto
Deploy (explained in this slide deck)
Auto Deploy can be installed on a Windows VM or on vCenter Server directly. It also
ships with the vCenter Server appliance
Auto Deploy is registered during installation with a vCenter Server instance
44. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 44
Auto Deploy
Technical Overview: 6 step process
1. The host PXE boots and gets an IP address from DHCP
2. DHCP points the host to the TFTP server via option 66
3. The host downloads a gPXE configuration file from the TFTP server, as specified in option 67
4. The gPXE config file instructs the host to make an HTTP boot request to the Auto Deploy server
5. Auto Deploy queries the rules engine for information about the host
6. An Image Profile and Host Profile are attached to the host based on a rule set;
ESXi is installed into host RAM, added to vCenter, and configured in the cluster
vCenter maintains the Image Profile and Host Profile for each host in its database
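Step 5's rule lookup is essentially first-match-wins over host attributes. A toy sketch (the attribute names and profile names are made up for illustration; the real rules engine is configured via PowerCLI):

```python
def match_profiles(host, rules):
    """First-match rules engine: return the (image_profile, host_profile)
    of the first rule whose pattern is a subset of the host's attributes,
    or None if no rule matches."""
    for pattern, image_profile, host_profile in rules:
        if all(host.get(k) == v for k, v in pattern.items()):
            return image_profile, host_profile
    return None

rules = [
    ({"vendor": "Cisco", "model": "B230"}, "ESXi-5.0-VDI", "vdi-hosts"),
    ({"vendor": "Cisco"}, "ESXi-5.0-std", "mgmt-hosts"),
]
# Most-specific rules go first; a bare vendor match acts as the fallback.
print(match_profiles({"vendor": "Cisco", "model": "B230"}, rules))
print(match_profiles({"vendor": "Cisco", "model": "B200"}, rules))
```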
45. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 45
UCS Manager Configuration Recommendations
1. If you will be installing a lot of systems, know and understand goUCS, CLI
scripting - It dramatically simplifies the setup of several complex objects
2. Always use Policies, Pools, and Templates - I've seen a lot of cases where
manually configured settings for specific service profiles are used.
I always recommend using updating templates.
If you use an actual policy you can quickly see which
elements are using the policy through the "show
policy usage" action under each policy
3. Fixing service profiles where Policies, Pools, and Templates were not used - If
service profiles were created without policies, pools, and templates, you can
add them later. Since the systems being "fixed" are often in production,
you have to be very careful with the process
46. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 46
Use Updating Templates for vNICs or vHBAs
To update or not to update? With vNIC/vHBA templates, that is always the question
When creating a vNIC or vHBA template
always use "updating template" option
unless you want to lock down changes
With updating templates all virtual
interfaces bound to the template will be
updated immediately with any change
This can be very powerful when it comes
to adding new VLANs
Say, for instance, you need to add a new VLAN to your UCS environment. If the
template is updating, all you do is add the new VLAN to the global VLAN list
within UCS and then update your interface template VLAN list
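The updating-template behavior is essentially shared state: every bound vNIC sees the template's VLAN list change immediately. A minimal sketch (the classes are illustrative, not the UCSM object model):

```python
class VnicTemplate:
    """An updating template: bound vNICs share its VLAN set."""
    def __init__(self, name, vlans):
        self.name, self.vlans = name, set(vlans)

class BoundVnic:
    """A vNIC bound to an updating template reads the template's
    VLAN list directly instead of keeping a private copy."""
    def __init__(self, template):
        self.template = template
    @property
    def vlans(self):
        return self.template.vlans

tpl = VnicTemplate("ESX-PROD-A", [410, 411])
nic1, nic2 = BoundVnic(tpl), BoundVnic(tpl)
tpl.vlans.add(412)              # add the new VLAN once, on the template
print(sorted(nic1.vlans), sorted(nic2.vlans))
```

An initial (non-updating) template would instead copy the VLAN set at bind time, which is why changes there do not propagate.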
47. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 47
UCS Manager Configuration
4. When cloning a service profile, the clone WILL NOT have the same MAC
address, UUID, WWNN, and WWPNs as the original one
5. Be careful about switching or modifying vNICs and vHBAs, since the MAC
addresses and WWPNs could change if you don't follow the right process. Do not
delete the current ones and then re-add the templates. This is likely to change the
addresses and could break storage zoning, boot from SAN, and PXE setup
6. Always and ONLY use vNIC and vHBA templates - without them you lose a lot of
control and dramatically increase the complexity of troubleshooting and monitoring
your environment
7. Always use a maintenance policy - I suggest using a user-ack policy against
EVERY service profile and EVERY service profile template. Personally I only
use the user-ack policy
48. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 48
VMware BIOS Settings
Here are the best practices for VMware ESXi on Cisco UCS for deploying
applications in the network
49. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 49
VMware BIOS Settings
Enhanced Intel
SpeedStep cannot
be disabled on the
B230 and B440
Manage BIOS
firmware versions
and settings on a
per-service-profile basis
50. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 50
UCS Manager Configuration Recommendations
8. Always set all blade firmware versions in the policy - I always select all
firmware options and versions, even for hardware that you may not have in the
environment. Sometimes it turns out you have a component the firmware
applies to, and you don't want to leave it at some random firmware version
Secondly, you are likely to
acquire new hardware and you
will have to remember to modify
your firmware policy to include
this new hardware. It takes
seconds to select everything
Note - The newest firmware version is not necessarily at the top or the bottom of the
list. You will need to pay attention to the firmware version values to find the best
choice
51. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 51
Upgrade UCS with Production Applications
Firmware Upgrade Process
- All production applications
must remain unaffected
Preparation should occur prior
to the upgrade - collect the
operational equivalent of an IOS
show tech-support (system
logs) from UCSM
The firmware upgrade process is
broken down into two phases -
upgrading the chassis versus
upgrading the blade firmware
The service profile associated with a
particular blade must be rebooted in
order to effect a firmware upgrade on
that blade
Two firmware binaries can belong
to either domain and so land in a no-
man's land of sorts: the adapter
software on the converged network
adapter, and the CIMC firmware
52. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 52
Upgrade UCS with Production Applications
Activating the Fabric
Interconnects - This is the
only step in the process
where all data connections
in the UCSM domain on a
particular path (A or B) will be
affected
Prepare prior to the upgrade -
where service profiles are
managed by updating service
profile templates, ensure the
upgrade does not push
unintended template updates
Direct Updates - follow the
steps below to upgrade the UCS
infrastructure without
affecting the applications running
on the blades
Make sure multipathing is
set up correctly prior to
the upgrade
54. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 54
Why Nexus 1000v Architecture
Comparison of a standard physical switch, where
network administrators manage the physical switch
and the server administrators manage the servers
connected to that switch
Moving towards a virtualized environment, the
server administrators still manage the physical
ESXi servers and network administrators
manage the switch
Comparison to a Physical Switch / Moving to a Virtual Environment
55. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 55
Why Nexus 1000v Architecture
With the Nexus 1000V, the network administrator
will still manage the VSMs of the Nexus 1000V,
along with the physical switch
VEMs are managed by the network
administrators since the port-profiles
configurations are configured on the VSM. This
allows the server administrators to manage the
ESXi hosts without worrying about the
“networking” portion within the ESXi server
VSM: Virtual Supervisor Module
VEM: Virtual Ethernet Module
56. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 56
Host Connectivity Requirements for Nexus 1000v
Each Physical Host Is Typically on Several Networks
Management to talk to vCenter
Storage iSCSI and NFS
vMOTION for moving VMs
VSM to VEM communication the “backplane”
Virtual machine networks—(why we are all here)
Port channels for physical NICs
– Many configurations possible
– From dual 10G to many 1G
57. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 57
Uplink Port Profiles from VMware ESX vNIC’s
The port-profiles of type "Ethernet" are utilized for the physical NIC interfaces on
the host. There are two things to note for the uplink port-profiles:
– The N1K Control and Packet VLAN is used for communication between the VSM and the VEM, and the
MGMT VLAN is used for the service console of the ESXi servers. Those 2 VLANs need to be
configured as "system vlans". System VLANs are brought up on their ports before talking with
the VSM
– The "channel-group" configuration needs to be configured for mac-pinning, since UCS
blade servers cannot be configured with an LACP port-channel. The recommended
configuration is to use MAC pinning
Typical Nexus 1000v deployment with UCS
Recommended Nexus 1000v deployment with high traffic
58. 58
Uplink Port Profiles from VMware ESX vNICs example
Data Uplinks port-profile
Vmotion Uplinks port-profile
Management/Control/Packet port-profile
NFS Uplinks port-profile
vmnic 0 and 1 are used for mgmt and N1K control and packet
traffic only, and will use the following Port-Profile
port-profile type ethernet system-uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 300,406
channel-group auto mode on mac-pinning
no shutdown
system vlan 300,406
state enabled
vmnic 2 and 3 are used for NFS traffic only, and will use
the following Port-Profile
port-profile type ethernet NFS-uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 402,403
channel-group auto mode on mac-pinning
mtu 9000
no shutdown
system vlan 402
state enabled
vmnic 4 and 5 will be used to carry data production
traffic, and will use the following Port-Profile
vmnic 6 and 7 will be used for vMotion traffic, and
will use the following Port-Profile
port-profile type ethernet data-uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 410-460
channel-group auto mode on mac-pinning
no shutdown
state enabled
port-profile type ethernet Vmotion-uplink
vmware port-group
switchport mode access
switchport access vlan 400
channel-group auto mode on mac-pinning
mtu 9000
no shutdown
system vlan 400
state enabled
59. 59
Uplink Port Profiles from VMware ESX vNICs example
mgmt vethernet port-profile
The service console or mgmt port-profile will be created
for the service console (vmkernel) interface. It is critical that
this port-profile is also configured with a “system vlan”
port-profile type vethernet mgmt
vmware port-group
switchport mode access
switchport access vlan 300
pinning id 1
no shutdown
system vlan 300
state enabled
port-profile type vethernet NFS-1
vmware port-group
switchport mode access
switchport access vlan 402
pinning id 0
no shutdown
system vlan 402
state enabled
port-profile type vethernet NFS-2
vmware port-group
switchport mode access
switchport access vlan 403
pinning id 1
no shutdown
system vlan 403
state enabled
System VLANs may be used for Storage
VLANs (NFS/iSCSI) and vMotion
The “pinning id” command assigns (or pins) a vethernet
interface to a specific uplink port
Control and Packet vethernet port-profile
port-profile type vethernet control-packet
vmware port-group
switchport mode access
switchport access vlan 406
pinning id 0
no shutdown
system vlan 406
state enabled
60. 60
Uplink Port Profiles from VMware ESX vNICs example
System VLANs must also be used in vethernet port profiles for VSM
Management (console) VLAN and Nexus 1000V Control VLAN
vMotion vethernet port-profile
Data vethernet port-profile
The vMotion port-profile will be created for the vMotion (vmkernel) interfaces on each of the ESXi servers
port-profile type vethernet vmotion
vmware port-group
switchport mode access
switchport access vlan 400
pinning id 0
no shutdown
system vlan 400
state enabled
port-profile type vethernet Client-One
vmware port-group
switchport access vlan 410
switchport mode access
no shutdown
state enabled
port-profile type vethernet Client-Two
vmware port-group
switchport access vlan 411
switchport mode access
no shutdown
state enabled
port-profile type vethernet Client-Three
vmware port-group
switchport access vlan 412
switchport mode access
no shutdown
state enabled
port-profile type vethernet Client-Four
vmware port-group
switchport access vlan 413
switchport mode access
no shutdown
state enabled
61. 61
Uplink Port Profiles from VMware ESX vNICs example
Without a VMkernel port, none of these services can be used on the ESX server
A VMkernel port is required on each ESX host where the following services will be
used:
– vMotion
– iSCSI
– NFS
– Fault Tolerance
Choose “VMkernel” for the connection type,
then click Next.
Give the VMkernel port a label (e.g., iSCSI, if it will
purely be used for iSCSI)
Enter an IP address to assign to the VMkernel port.
No routing!!
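The same VMkernel port can also be created from the ESXi command line. A minimal sketch, assuming ESXi 5.x esxcli syntax; the port group name, vSwitch, vmk interface, and IP address below are examples only:

```shell
# Run on the ESXi host (example names and addresses throughout)
# Create a port group for iSCSI on the standard vSwitch
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI --vswitch-name=vSwitch0
# Create the VMkernel interface on that port group
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI
# Assign a static IP address (no gateway: this traffic is not routed)
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.50.10 --netmask=255.255.255.0 --type=static
```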
62. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 62
Introducing Cisco N1Kv Into VMware Environment
Only migrate data port-profiles on the Cisco N1Kv
62
We keep the management vmknic, along
with the N1KV control and packet
interfaces, on a regular vSwitch
Create a vSwitch (or DVS, or N1KV)
for vMotion and make one vNIC
standby so local switching takes
place
The 3rd pair of vNICs is for the N1KV
Provision a 4th pair of interfaces for
future use such as NFS. Set the correct
MTU size
Some networking teams struggle to get Cisco N1Kv into a VMware environment
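The “make one vNIC standby” teaming for the vMotion vSwitch can be sketched from the ESXi command line; a hedged example, assuming a standard vSwitch named vSwitch1 carrying vMotion with vmnic6/vmnic7 as its uplinks (all names are examples):

```shell
# Run on the ESXi host: vmnic6 active, vmnic7 standby, so vMotion traffic
# normally stays on one fabric and local switching takes place
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --active-uplinks=vmnic6 --standby-uplinks=vmnic7
```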
63. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 63
Why I love Nexus 1000v
Deploying Virtual Machines with Nexus 1000V
63
Network admin sets up port profiles in advance based on requirements
– All features are specified that will be needed
– Goes to get coffee or on vacation
Server admin creates VM templates
– Template virtual NICs use port profiles
Server admin clones templates
– Clones bring port profiles along for the ride
Server admin starts up VMs
Nexus 1000V sets up ports from port profiles
– Communicated by VMware on VM startup
Possibly Thousands of VMs!
64. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 64
Nexus 1000v Gotchas
64
1. You cannot change the system VLAN configuration if the port-profile is in use. Use the
show port-profile name <name> usage command to check whether a port-profile is in use
Attempting to remove a system vlan from a port-profile in use: no system vlan 351 This will remove all
system vlans from this port profile. Do you really want to proceed(yes/no)? [yes] ERROR:
Cannot remove system vlans, port-profile currently in use by interface eth7
Workaround - Create another port-profile with the same settings, then change the vmnic
port-profile to the new port-profile
2. Never use the same Nexus 1000v domain id when installing a new Nexus 1000v
environment. Double check to make sure the domain id is unique!!
3. VSM can get migrated onto the same host/storage - prevent this!
This is driven by vCenter anti-affinity rules. In order for this to occur, an ITIL change would have to
be made to disable this policy. There are no alarms on N1kv to detect this; it would have to be
something within the virtualization tool set to ensure the rules that have been defined are being
followed
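The workaround for gotcha 1 can be sketched in NX-OS configuration; the profile name and VLANs below are hypothetical, and the new profile's settings should mirror those of the original profile:

```
! Check whether the profile is in use before touching its system VLANs
show port-profile name system-uplink usage

! Clone the settings into a new profile with the desired system VLANs,
! then move the vmnics over to it from vCenter
port-profile type ethernet system-uplink-new
 vmware port-group
 switchport mode trunk
 switchport trunk allowed vlan 300,406
 channel-group auto mode on mac-pinning
 system vlan 300,406
 no shutdown
 state enabled
```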
65. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 65
Cisco Nexus 1100 Series
Cisco Nexus 1100 Series with Four VSBs: Cisco VSMs, VSGs, NAM, and DCNM
65
Nexus 1100 Manager: Cisco management
experience
Manages a total of 5 virtual service blades
(i.e., 4 VSMs and 1 NAM)
Each VSM can manage up to 64 VEMs
(256 total VEMs)
A dedicated NX-OS appliance for deploying
multiple Virtual Appliances / Virtual Services
It is NOT a general-purpose server for
deploying arbitrary VMs
Cisco Nexus 1100 Series High-Availability Pair
66. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 66
Network Connectivity Options
Cisco Nexus 1100 Series can be connected to the network in five ways
66
Network Connection Option 4
– Option 4 uses the two LOM interfaces for management traffic, two of the four PCI interfaces for
control and packet traffic, and the other two PCI interfaces for data traffic. Each of these pairs of
interfaces should be split between two upstream switches for redundancy
Option 4 is well suited for customers who want to use the Cisco NAM but require
separate data and control networks. Separating the control from the data network
helps ensure that Cisco NAM traffic does not divert cycles from control traffic and
thereby affect connectivity
68. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 68
Reference Architecture Can Accelerate Any Workload
Standard Operating Procedures (SOPs) for operational excellence
69. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 69
The Perfect POD for any workload
69
Reference Architecture Can Accelerate Any Workload
easyJet, like Ryanair, borrows its business model
from United States carrier Southwest Airlines
UCS 5108 Chassis
Wire-once to UCS fabric – fast
scale up / scale down
B440 M2 Large Blade Config
Supports small to XL size VMs /
Physical (req’d for large mission
critical apps and DB hosting)
B230 M2 Small Blade Config
Supports small to medium size
VMs / Physical
Cisco Nexus 7k/5k Core
Core + aggregation back to
enterprise network
UCS 6248XP Fabric
Interconnect
Shared connectivity / uplink to
network, storage, backup
Unified management of UCS
fabric
70. Complete Your Paper
“Session Evaluation”
Give us your feedback and you could win
1 of 2 fabulous prizes in a random draw.
Complete and return your paper
evaluation form to the room attendant
as you leave this session.
Winners will be announced today.
You must be present to win!
Visit them at BOOTH# 100
71. © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Connect 71
Thank you.