2. Transparency in the Eye of the Beholder
With virtualization, VMs have a transparent view of their resources…
3. Transparency in the Eye of the Beholder
…but it is difficult to correlate from a network point of view
4. Server Virtualization Issues
1. vMotion moves VMs across physical ports; the network policy must follow
2. Impossible to view or apply network policy to locally switched traffic
3. Need a shared nomenclature for security policies between network and server admins
(Diagram: Port Group, vCenter, Physical Switch Interface)
6. VMware Virtual Networking
Virtual Switch (vSwitchN)
• Connects to physical adapters (vmnicN)
• 0, 1, 2 or more (up to 32) Gb uplinks
• Up to four 10Gb uplinks
• Port connection types
  • Virtual Machine network
  • VMkernel network: VMotion, iSCSI, NFS; host management (ESXi 4 only)
  • Service console (host management network for ESX 4 only, not ESX 4i)
• Port groups
  • Aggregate/segment virtual switch ports
  • Identified by network labels
  • Support VLAN tagging
(Diagram: Virtual Machine Network connects to a vSwitch with port groups; the vSwitch uplinks through physical adapters (vmnicN) to the physical network. Service Console Network shown for ESX Server 4 only, not 4i.)
7. Virtual Networking for an ESX Server Host – Example
(Diagram: virtual machines, service console ports (vswif; the service console is available with ESX Server 4 only, not 4i) and VMkernel TCP/IP services connect through VLAN port groups 101 and 102 on vSwitches. The vSwitches uplink through physical adapters (100 and 1000 Mbps) to physical switches serving the production network, the test network (VLAN 101, 102) via a trunk port, the host management network, and the VMkernel network (VMotion, iSCSI, NFS).)
10. VN-Link: Virtual Network Link
• Extends the network to the virtualization layer
• Requires innovation within networking equipment
  – Virtual Ethernet interfaces (vEth)
  – Port profiles
  – Virtual interface mobility
• Solution integrated with the hypervisor management solution
11. VN-Link View of the Access Layer
Boundary of network visibility
• VN-Link provides visibility to the individual VMs
• Policy can be configured per VM
• Policy is mobile within the ESX cluster
• VN-Link refers to a literal link between a VM vNIC and a Cisco VN-Link switch
12. Cisco VN-Link: Faster VM Deployment
Cisco VN-Link: Virtual Network Link
• Policy-Based VM Connectivity
• Mobility of Network & Security Properties
• Non-Disruptive Operational Model
VM Connection Policy
• Defined in the network
• Applied in Virtual Center
• Linked to the VM UUID
(Diagram: VMs on two vSphere hosts with Nexus 1000V VEMs; defined policies for WEB Apps, HR, DB and DMZ; vCenter and the Nexus 1000V VSM.)
13. Cisco VN-Link: Richer Network Services
VMs Need to Move
• VMotion
• DRS
• SW upgrade/patch
• Hardware failure
VN-Link Property Mobility
• VMotion for the network
• Ensures VM security
• Maintains connection state
(Diagram: VMs moving between two vSphere hosts with Nexus 1000V VEMs; vCenter and the Nexus 1000V VSM.)
14. Cisco VN-Link: Increased Operational Efficiency
VI Admin Benefits
• Maintains existing VM mgmt
• Reduces deployment time
• Improves scalability
• Reduces operational workload
• Enables VM-level visibility
Network Admin Benefits
• Unifies network mgmt and ops
• Improves operational security
• Enhances VM network features
• Ensures policy persistence
• Enables VM-level visibility
(Diagram: VMs on two vSphere hosts with Nexus 1000V VEMs; vCenter and the Nexus 1000V VSM.)
16. Cisco Nexus 1000V Components
Virtual Supervisor Module (VSM)
• CLI interface into the Nexus 1000V
• Leverages NX-OS
• Controls multiple VEMs as a single network device
Virtual Ethernet Module (VEM)
• Replaces VMware's virtual switch
• Enables advanced switching capability on the hypervisor
• Provides each VM with dedicated "switch ports"
(Diagram: VMs A through G across hosts, connected through VEMs and managed via vCenter Server.)
17. Virtual Supervisor Module Options
VSM - Virtual Appliance
• ESX virtual appliance
• Supports 64 VEMs
• Installable via GUI, OVA or ISO file
Nexus 1010 - Physical Appliance
• Cisco-branded physical server
• Hosts 4 VSM virtual appliances
• Deployed in pairs for redundancy
(Diagram: VMs A through F and vCenter Server.)
18. Flexible Deployment Options
• Any type of physical switch (Cisco & other vendors)
• 1G & 10G NICs
• All types of servers supporting vSphere 4 / ESX 4i
19. Cisco Nexus 1000V Component Communication
Cisco VSMs and vCenter Server
• Communication uses the VMware VIM API over SSL
• The connection is set up on the VSM
• Requires installation of a vCenter plug-in, done automatically by the installer app
• Once established, the Nexus 1000V is created in vCenter
Pod1-VSM# show svs connections
connection VC:
hostname: phx2-dc-pod5-vc
ip address: 10.95.5.158
protocol: vmware-vim https
certificate: default
datacenter name: Phx2-Pod5
DVS uuid: df 11 38 50 0a 95 83 4e-95 69 d6 a7 f4 76 4a 7f
config status: Enabled
operational status: Connected
20. Port Profile: Network Admin View
Pod1-VSM# show port-profile name WebProfile
port-profile WebProfile
  description:
  status: enabled
  capability uplink: no
  system vlans:
  port-group: WebServers
  config attributes:
    switchport mode access
    switchport access vlan 110
    no shutdown
  evaluated config attributes:
    switchport mode access
    switchport access vlan 110
    no shutdown
  assigned interfaces:
    Veth10
Supported commands include: port management, VLAN, PVLAN, port-channel, ACL, NetFlow, port security, QoS
22. Visibility of the VM
Pod1-VSM# sh int virt
--------------------------------------------------------------------------------
Port   Adapter        Owner                 Mod  Host
--------------------------------------------------------------------------------
Veth1  vmk1           VMware VMkernel        3   esx1.pod1.nexus1000v.la
Veth2  vmk1           VMware VMkernel        4   esx2.pod1.nexus1000v.la
Veth3  Net Adapter 1  Nexus1000V-VSM-Pod1    3   esx1.pod1.nexus1000v.la
Veth4  Net Adapter 1  Nexus1000v-Beta        4   esx2.pod1.nexus1000v.la
Veth5  Net Adapter 1  vShield-esx1           3   esx1.pod1.nexus1000v.la
Veth6  Net Adapter 1  vShield Manager        3   esx1.pod1.nexus1000v.la
Veth7  Net Adapter 1  vShield-esx2           4   esx2.pod1.nexus1000v.la
Veth8  Net Adapter 1  WinXP-01               3   esx1.pod1.nexus1000v.la
Veth9  Net Adapter 1  WinXP-02               4   esx2.pod1.nexus1000v.la
23. Visibility of the VM Traffic
Pod1-VSM# sh int veth8
Vethernet8 is up
< ---- SNIP --- >
Port mode is trunk
5 minute input rate 0 bits/second, 0 packets/second
5 minute output rate 40 bits/second, 0 packets/second
Rx
426 Input Packets 125 Unicast Packets
15 Multicast Packets 286 Broadcast Packets
50941 Bytes
Tx
81182 Output Packets 136 Unicast Packets
18 Multicast Packets 81028 Broadcast Packets 81046 Flood Packets
8387936 Bytes
1 Input Packet Drops 0 Output Packet Drops
24. Cisco Nexus 1000V Communication
The Nexus 1000V is a distributed switch, so the VSM needs to program the VEM over the network.
The Nexus 1000V uses the same backplane messaging as the Nexus 7000 or MDS, called AIPC.
There are two ways to extend that connection:
- Over Layer 2, using a control and a packet VLAN
- Over Layer 3, using the Layer 3 control capability
(Diagram: VEM hosts A, B and C connected across the network cloud to the Nexus 1000V VSM.)
25. Layer 2 connectivity of the VSM and VEM
Two virtual interfaces are used to communicate between the VSM and the VEM.
Control Interface
• Extends the usual backplane of the switch over the network
• Carries low-level messages to ensure proper configuration of the VEM
• Maintains a 2-second heartbeat between the VSM and the VEM (timeout 6 seconds)
• Maintains synchronization between primary and secondary VSMs
• Maximum of 7 MB of traffic
Packet Interface
• For control-plane processing like CDP and IGMP snooping, or stat collection like SNMP and NetFlow
• Maximum of 1 MB of traffic
(Diagram: VMs on an L2 cloud, with control and packet VLANs reaching the Nexus 1000V VSM.)
26. Layer 2 connectivity of the VSM and VEM - Best Practices
• The management, packet and control interfaces can use the same VLAN
• The control and packet VLANs need to be configured end to end to allow communication between the VSM and the VEM
• The control VLAN and packet VLAN need to be configured as system VLANs on the uplink port-profile
(Diagram: VMs on an L2 cloud, with control and packet VLANs reaching the Nexus 1000V VSM.)
27. Layer 3 connectivity of the VSM and VEM
If there is no L2 adjacency between the VSM and the VEM, the VSM uses a new svs mode type called layer 3, using either the control interface or the management interface.
The user can specify an IP address for control0 to use a separate network for VEM-VSM communication.
svs-domain
  svs mode L3 interface (control0 | mgmt0)
(Diagram: VMs reaching the Nexus 1000V VSM over an L3 cloud.)
28. Connectivity of the VSM and VEM - Best Practices
• The VSM can use its own VEM as long as you are running release 4.0(4)SV1(2) or later; before that, use the vSwitch to connect the VSM to the network.
• There should always be two VSMs deployed, and those two VSMs should not be on the same host.
35. UCS Virtual Interface Card Overview
The UCS M81KR VIC (Palo) is a converged network adapter designed for both single-OS and VM-based deployments
• Virtualized in hardware
• PCIe compliant
High performance
• 2x 10Gb
• 500K IOPS
The OS/hypervisor sees up to 128 distinct PCIe devices
• Ethernet vNICs and FC vHBAs (user-definable)
VN-Link in hardware - ideal for virtualization environments
• Bypasses the vSwitch to deliver VN-Link in hardware
• Tight integration with VMware vCenter
(Diagram: 10GbE/FCoE uplinks; Eth and FC vNICs numbered 0 through 128 on a PCIe x16 adapter.)
Cisco Inc., Company Confidential - NDA Required
36. Cisco UCS VIC Overview
Multiple Separate Interfaces - Ideal for Certain Workloads
Traditional CNA (2 x 10G ports): the server sees 2 NICs and 2 HBAs
Cisco VIC (2 x 10G ports): the server sees n NICs and m HBAs, where n + m ~= 128
• Ideal for workloads/applications that recommend multiple separate interfaces
• Applicable to both single-OS (e.g. Windows/RHEL) and virtualized (ESX/Hyper-V) environments
• Virtualization achieved using classical PCIe devices (no special OS support necessary)
37. Cisco VIC Offers VN-Link in Hardware
Innovation for Virtual Server Networking
• VN-Link refers to a virtual link between a VM vNIC and a virtual interface on the Fabric Interconnect
• Virtual Network Link (VN-Link) benefits:
  - VM-level network granularity
  - Policy-based configuration of VM interfaces (port profiles)
  - Mobility of network and security properties (they follow the VM during VMotion)
  - Non-disruptive operational model
  - Allows virtual host interfaces to be remotely managed/configured
• VN-Link in hardware offers the best performance
38. Deployment Options for Virtualized Environments
Multiple Options Available, Invisible to the VM
VN-Link in Software: the Nexus 1000V hypervisor switch uplinks connect to Cisco virtual interfaces (VIFs)
VN-Link in Hardware (VM-FEX): each VM connects to a Cisco virtual interface (VIF), passing through the hypervisor switch
VN-Link in Hardware (VMDirectPath): each VM bypasses the hypervisor completely and connects to a Cisco virtual interface (VIF)
39. Optimize IO for Virtualized Environments
Scenario 1: VN-Link in Software
VN-Link in SW = Nexus 1000V
• Each VM vNIC connects to the Nexus 1000V hypervisor switch
• Nexus 1000V switch uplinks connect to multiple distinct Cisco virtual interfaces (VIFs)
Likely use case:
• The customer has already standardized on the Nexus 1000V for advanced network features like ERSPAN & NetFlow
• The customer deployment needs higher scalability with respect to the number of VMs
(Diagram: VMs on the Cisco Nexus 1000V VEM in the hypervisor, uplinked through the Cisco VIC.)
40. Optimize IO for Virtualized Environments
Scenario 2: VN-Link in Hardware
VN-Link in HW
• Each VM vNIC maps to a different virtual interface (VIF) on the Fabric Interconnect
Likely use case:
• The customer benefits from centralized management through UCSM
• The customer needs the higher performance provided by VN-Link in hardware
(Diagram: VMs in the hypervisor connected through the Cisco VIC.)
41. Simplify Management and Facilitate Collaboration
UCS Manager / vCenter Server / Cisco VIC workflow:
1. Set up the connection
2. Create the vDS (switch)
3. Define VM port profiles
4. vDS and VM port profiles become available in vCenter
5. The VM is created and connected to the vDS; the VM port profile is available as a port group
6. The VM port profile is applied to dynamic vNICs (CoS membership (MTU), VLAN membership, pinning group and rate limiting applied here)
Cisco virtualized adapter benefits:
• Tight integration with the hypervisor management tool (e.g. vCenter)
• The network admin sets up network policies, and the server/virtualization admin applies them, which facilitates collaboration between groups
45. Optimize IO for Virtualized Environments
Scenario 3: VN-Link in HW with VM-FEX
46. Optimize IO for Virtualized Environments
Scenario 3: VN-Link in HW with VMDirectPath
VN-Link in HW with VMDirectPath
• Bypasses the hypervisor completely; the VM talks directly to the Cisco virtualized adapter
• Much higher performance (native HW performance)
Likely use case:
• High-performance workloads (e.g. appliances)
• VMotion doesn't work today, and these workloads don't need it
(Diagram: VMs connected directly through the Cisco VIC, bypassing the hypervisor.)
47. Cisco UCS VIC
Deployment Guidelines for VMware vSphere

Deployment Option                                         | Min vSphere Package                            | VMotion Allowed? | Port Policy (aka Port Group) Created In
VMware vSwitch with Cisco VIC for uplinks                 | Any vSphere package                            | Yes              | vCenter
VMware vDS (vNetwork Distributed Switch) with Cisco VIC   | Enterprise Plus                                | Yes              | vCenter
VN-Link in Software (Nexus 1000V) with Cisco VIC uplinks  | Enterprise Plus (also need to buy Nexus 1000V) | Yes              | VSM in Nexus 1000V
VN-Link in Hardware (VM-FEX)                              | Enterprise Plus (no other software needed)     | Yes              | UCS Manager
VN-Link in Hardware (VMDirectPath)                        | Any vSphere package                            | No (in future)   | UCS Manager
Hello everybody. I'm David Pasek. I work as a UCS Engineer for Cisco Services. Today I will be presenting VMware networking with Cisco enhancements, and I would like to show you the different concepts available today. First of all I have to say that I'm not a network expert, but because I am focused on the Cisco Unified Computing System and VMware virtual infrastructure, I have to be somewhat familiar with networking as well. So today we are going to talk about different kinds of networking architectures in VMware Virtual Infrastructure.
Transparency in the Eye of the Beholder
------------------------------------------------
Let's start with general server virtualization benefits. One of the biggest virtualization benefits is abstraction: a transparent view of hardware resources. The ordinary server administrator doesn't need to care about physical infrastructure complexity. For instance, you have just a simple virtual hard disk which looks like a standard SCSI or IDE disk, and you don't need to care about the storage area network details required in a Fibre Channel or iSCSI implementation. Another example of transparency is network connectivity. Someone has to prepare the VMware virtual switches with port groups, and the ordinary server administrator just connects a VMware virtual NIC to a particular VMware port group, which acts as a network segment.
Transparency in the Eye of the Beholder
------------------------------------------------
So, there are many benefits associated with virtualization, such as ease of management and usage simplicity. After these initial successes, however, IT management often starts to be concerned about the growing sprawl of business-critical virtual machines and the lack of visibility required to effectively understand the performance of these IT services.
Server Virtualization Issues
--------------------------------
Server virtualization brings a lot of benefits but also some issues, so let's go through some challenges. When we look at a classic physical datacenter, all servers are statically connected to particular network ports. It's quite different in a virtual environment. The first well-known issue, from the network point of view, is vMotion. Network administrators are really concerned about virtual machine live migrations, because one server may appear on different physical ports, and administrators must be sure that the network policy follows the virtual machine. The second common issue network administrators complain about is the lack of visibility and control: they don't have the same level of visibility and control that they had with physical switches. The third issue is that in a typical virtualized environment you need to share the management server (for instance, VMware vCenter) between network and server administrators. This is not optimal, because network administrators must be trained on other management tools instead of using the native tools they are familiar with.
VMware Networking
------------------------
Let's continue with an overview of standard VMware networking.
VMware Virtual Networking
---------------------------------
The core component of VMware virtual networking is the virtual switch. The virtual switch is VMware's default software switch in VMware ESX Server. VMware supports third-party software switches, like the Cisco Nexus 1000V, but the standard VMware vSwitch is always included. The virtual switch is connected to physical network adapters identified as vmnicN. These NICs act as physical uplinks from the vSwitch. It is possible to have more than one standard VMware vSwitch per ESX host. The vSwitch supports three port connection types. The Virtual Machine network type is used for virtual machine connectivity. The VMkernel network is used for vMotion, iSCSI, NFS, and host management on ESXi. The service console exists only in full ESX (not ESXi) and is used for host management. Another important construct is the port group. A port group aggregates virtual switch ports, and all settings, such as the VLAN tag, security policies, and QoS, are applied to the particular port group. Each port group is identified by a unique network label.
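As an illustrative sketch (not from the original slides), the vSwitch and port-group constructs above map onto the classic ESX service-console commands; the vSwitch name, uplink, port-group name and VLAN ID here are hypothetical examples:

```
# Create a standard vSwitch and attach a physical uplink (vmnic1)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1

# Add a virtual-machine port group and tag it with VLAN 110
esxcfg-vswitch -A "WebServers" vSwitch1
esxcfg-vswitch -v 110 -p "WebServers" vSwitch1

# List all vSwitches, port groups and uplinks to verify
esxcfg-vswitch -l
```

Once the port group exists, the server administrator simply selects "WebServers" in the VM's network adapter settings, which is the transparency described above.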
Virtual Networking for an ESX Server Host - Example
---------------------------------------------------------------
On this slide you can see an example VMware virtual switch deployment. The first vSwitch, on the left side, is isolated: it is not connected to any physical adapter, so virtual machines connected to this vSwitch cannot communicate outside the ESX host. The second vSwitch is connected to two physical adapters in the production network, and an appropriate NIC teaming mechanism can be chosen. The third vSwitch is connected to just one physical adapter in the test network. A virtual machine can have one or more virtual NICs. You can see in the picture a virtual machine connected to two different networks, production and test. A virtual machine can act as a dual-homed server, a router, or a bridge if you wish. On the right side of the picture you can see the VMkernel and service console connection types. vswif is the VMware service console port, which is in fact the management interface.
VMware vNetwork Distributed Switch
--------------------------------------------
The VMware vNetwork Distributed Switch was introduced in vSphere 4.0 back in 2009. It simplifies virtual machine networking by enabling you to set up virtual machine networking for your entire datacenter from a centralized interface. VMware's vNetwork Distributed Switch spans many vSphere hosts and aggregates networking at a centralized cluster level. It abstracts the configuration of individual virtual switches and enables centralized provisioning, administration and monitoring through VMware vCenter Server. The vNetwork Distributed Switch also maintains network runtime state for VMs as they move across multiple hosts, enabling inline monitoring and centralized firewall services. It provides a framework for monitoring and maintaining the security of virtual machines as they move from physical server to physical server, and it enables the use of third-party virtual switches, such as the Cisco Nexus 1000V, to extend familiar physical network features and controls to virtual networks. The vNetwork Distributed Switch can be purchased as a component of the Enterprise Plus edition of VMware vSphere, and vCenter Server is required for the use of this feature.
CISCO Virtual Networking Enhancements = VN-LINK
----------------------------------------------------------------
Ok, so in this part of the presentation I will try to explain what VN-Link is.
VN-Link: Virtual Network Link
------------------------------------
Cisco is working with VMware to allow virtual machine interfaces to be individually identified, configured, monitored, migrated, and diagnosed in a way that is consistent with current network operation models. These features are known as VN-Link. A VN-Link is the virtual link between the vNIC on a virtual machine and a Cisco switch enabled for VN-Link (like the Nexus 1000V or the UCS Fabric Interconnect). This logical mapping is equivalent to connecting a physical cable from a host to a switch port. VN-Link is a general concept which can be achieved in different ways, and two ways are available today. The first solution leverages a VN-Tag-enabled physical switch, like the UCS Fabric Interconnect based on the Nexus 5000, plus a network interface virtualizer like the Cisco Palo adapter. The second solution is to use a distributed modular switch like the Nexus 1000V. We will cover both solutions later in this presentation. VN-Link requires innovation within networking equipment, and the most important innovations are virtual Ethernet interfaces (also known as vEths) and port profiles. Cisco switches that support VN-Link use virtual Ethernet interfaces as opposed to FastEthernet / GigabitEthernet / TenGigabitEthernet physical interfaces. You configure the vEth interface just as you would any physical interface, and that interface is associated with the VM. Port profiles are configurations that can be applied to vEth interfaces. Instead of applying a VLAN, access-group, and so on to an interface, you apply this configuration to the port profile. Once that's complete, all you have to do is apply that port profile to the vEth interface. Let's go through the key VN-Link benefits:
1. VM-level network granularity. What does it actually mean? It means that you can see your virtual machines connected directly to the Fabric Interconnect or an upstream Nexus 1000V switch. When I say directly connected, I mean that the virtual machine vNIC is connected by a virtual link to a virtual Ethernet interface running on top of the Fabric Interconnect or Nexus 1000V, and you don't need to care about the network infrastructure up the road.
2. Policy-based configuration of VM interfaces. You can define port profiles in the Fabric Interconnect which act as configuration templates, and you can apply these configurations to particular virtual machines. You do it by simply linking a port profile to a VMware port group.
3. Mobility of network and security properties. VN-Link implements integration between the physical switch and the virtualization management (for instance, vCenter) so the VM's network and security properties follow it during vMotion live migration.
VN-Link View of the Access Layer
----------------------------------------
VN-Link provides visibility to the individual virtual machines, so network administrators can see where a particular virtual machine is running and what network port it is connected to. Any policy can be configured per VM, so for instance you can set an access control list or quality of service per VM. Each policy is mobile within the particular ESX cluster. And as I already mentioned, VN-Link refers to a literal link between a virtual machine vNIC and a Cisco VN-Link enabled switch.
VN-Link in software: Nexus 1000v
------------------------------------------------------------------
Ok, so let's look in more detail at VN-Link in software, as implemented by the Nexus 1000V.
Cisco Nexus 1000V Components
--------------------------------------
The Nexus 1000V functions as the "virtual access layer" for VMs within the ESX environment. Edge LAN policies (like QoS, vNIC ACLs, and so on) are implemented at this layer in port profiles. The Nexus 1000V series consists of two main types of components that can virtually emulate a 66-slot modular Ethernet switch with redundant supervisor functions. The first component is the Virtual Ethernet Module (VEM), the data plane: this lightweight software component runs inside the hypervisor. It enables advanced networking and security features, performs switching between directly attached virtual machines, provides uplink capabilities to the rest of the network, and effectively replaces the vSwitch. Each hypervisor is embedded with one VEM. The second component is the Virtual Supervisor Module (VSM), the control plane: this standalone, external, physical or virtual appliance is responsible for the configuration, management, monitoring, and diagnostics of the overall Cisco Nexus 1000V system, that is, the combination of the VSM itself and all the VEMs it controls, as well as the integration with VMware vCenter. A single VSM can manage up to 64 VEMs. VSMs can be deployed in an active-standby pair, helping ensure high availability. Best practice is to deploy a VSM pair per ESX cluster, with the VSMs residing outside that cluster.
Virtual Supervisor Module Options
-----------------------------------------
The VSM (Virtual Supervisor Module) is available in two deployment options. The first option is a virtual appliance, which supports up to 64 VEMs. The second option is a physical appliance, the Nexus 1010, which is capable of hosting four VSM virtual appliances. Of course, the physical appliance should be deployed in a pair for redundancy.
Flexible Deployment Options
----------------------------------
The Nexus 1000V is a very flexible solution and doesn't lock you into any particular vendor's products. You can use any type of physical switch, network interface card, and server supporting vSphere 4 hypervisors.
Port Profile: Network Admin View
----------------------------------------
As I have already mentioned, port profiles are collections of interface configuration commands that can be dynamically applied to either physical or virtual interfaces. Any change to a given port profile is propagated immediately to all ports that have been associated with it. On this slide we can see a port profile which defines a collection of attributes, such as switch port mode, VLAN ID, and port state, for the port profile WebProfile. Once enabled, the port profile is dynamically pushed to VMware Virtual Center and shows up as the port group WebServers. This port group can be immediately selected by the VMware administrator. The VMware administrator creates virtual machines and assigns them to port groups as he has always done. The network administrator does not configure the virtual machine interfaces directly. Rather, all configuration settings for virtual machines are made with port profiles (which are configured globally), and it's the VMware administrator who picks which virtual machines are attached to which port profile. Once this happens, the virtual machine is dynamically assigned a unique virtual Ethernet interface and inherits the configuration settings from the chosen port profile. The VMware administrator no longer needs to manage multiple vSwitch configurations, and no longer needs to associate physical NICs to a vSwitch.
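A port profile like the WebProfile shown earlier could be defined on the VSM roughly as follows; this is a minimal sketch, and the exact syntax (for example, the `type vethernet` keyword) varies slightly between Nexus 1000V releases:

```
! Define the port profile on the VSM; VLAN 110 and the port-group
! name "WebServers" mirror the example output in this deck
port-profile type vethernet WebProfile
  vmware port-group WebServers
  switchport mode access
  switchport access vlan 110
  no shutdown
  state enabled
```

The `vmware port-group` command is what makes the profile appear in vCenter under a different name than the profile itself, and `state enabled` is what triggers the push to Virtual Center.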
Port Profile: Server Admin View
----------------------------------------
So, here on this slide we can see how the server administrator can choose the appropriate port group for a particular virtual machine. It's very simple and absolutely consistent with the typical VMware workflow.
Visibility of the VM
----------------------
On this slide you can see virtual machine visibility from the Nexus 1000V point of view. The network administrator can log in to the Nexus 1000V Virtual Supervisor Module as with any other Cisco switch and can use the same or similar commands. In this particular example we list all virtual interfaces, and you can see that the output of this command contains extended information from the virtual environment. You can see where a particular virtual machine is connected. For instance, virtual machine WinXP-01 is connected to Veth8, running in virtual Ethernet module 3, which is placed in esx1. This is a good example of the extended visibility available to network administrators.
Visibility of the VM Traffic
-------------------------------
Another example of extended visibility is that you can see the VM traffic. From the previous slide we know that WinXP-01 is connected (we can also say VN-linked) to Veth8, so we can get more information about this particular virtual Ethernet interface. I hope that these two slides have explained virtual machine visibility and that you understand the significant benefits of VN-Link in the Nexus 1000V.
Cisco Nexus 1000V Communication
-------------------------------------------
The Nexus 1000V is a Cisco virtual distributed switch which substitutes for and extends the VMware virtual distributed switch. The VSM needs to program the VEM over the network. The Nexus 1000V uses the same backplane messaging as the Nexus 7000 or MDS. There are two ways these components can communicate: over Layer 2 or over Layer 3.
Layer 2 connectivity of the VSM and VEM
-------------------------------------------------
The control interface replaces the usual backplane and extends it over the network. It carries low-level messages to ensure proper VEM configuration. It also maintains a 2-second heartbeat between the VSM and the VEM, giving the VSM awareness of VEM availability. The control interface is also responsible for data synchronization between the primary and secondary VSM. So, what is the packet interface for? The packet interface handles control-plane protocols like CDP and IGMP snooping, and statistics collection like SNMP and NetFlow.
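In Layer 2 mode, the control and packet VLANs described above are declared in the svs-domain on the VSM. A minimal sketch, where the domain ID and VLAN numbers are hypothetical:

```
! Layer 2 control transport: dedicated control and packet VLANs
! (domain id 100 and VLANs 260/261 are illustrative values)
svs-domain
  domain id 100
  control vlan 260
  packet vlan 261
  svs mode L2
```

The domain ID distinguishes this Nexus 1000V instance on the wire, so each VSM/VEM pairing in the same Layer 2 domain needs a unique one.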
Layer 2 connectivity of the VSM and VEM - best practices
---------------------------------------------------------------------
Let's go through several best practices. 1. The management, packet and control interfaces can use the same VLAN, so you don't need to waste your VLAN range and you can keep it as simple as possible. 2. The control and packet VLANs need to be configured end to end to allow communication between the VSM and the VEM. If those VLANs are not configured end to end, the VEM will not show up, even if it looks like it is there in vCenter. 3. The control VLAN and packet VLAN need to be configured as system VLANs on the uplink port-profile.
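The third best practice, marking the control and packet VLANs as system VLANs on the uplink profile, could look like this sketch; the profile name and VLAN IDs are hypothetical:

```
! Uplink port profile carrying control/packet VLANs as system VLANs
! so they forward even before the VEM is programmed by the VSM
port-profile type ethernet SystemUplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 260-261
  system vlan 260-261
  no shutdown
  state enabled
```

System VLANs matter here because they solve a chicken-and-egg problem: without them, the VEM would need the VSM to program its uplinks, but it needs working uplinks to reach the VSM in the first place.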
Layer 3 connectivity of the VSM and VEM
-------------------------------------------------
If there is no L2 adjacency between the VSM and the VEM, you can use Layer 3 connectivity between them. The user can specify an IP address for the control0 interface to use a separate network for VEM-VSM communication.
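Putting the slide's `svs mode L3` fragment into context, a minimal sketch of the Layer 3 variant using a dedicated control0 network could be (the IP addressing is hypothetical, and depending on release the VMkernel port profile may additionally need L3 control capability):

```
! Give control0 its own address on a separate control network
interface control0
  ip address 192.168.10.5/24

! Switch the domain to Layer 3 transport over control0
svs-domain
  domain id 100
  svs mode L3 interface control0
```

Using mgmt0 instead of control0 avoids the extra network but mixes control traffic with management traffic.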
Connectivity of the VSM and VEM
----------------------------------------
More best practices. The VEM component should be installed into the ESX host as a plugin. It can be installed manually from the ESX local or remote service console, or automatically by leveraging VMware Update Manager. VMware Update Manager is a significantly easier installation method, and the appropriate VEM is automatically installed when a particular ESX host is added to the distributed virtual switch operated by the VSM.
Virtual vNetwork Comparison /1/
---------------------------------------
On the following slides we will see a feature comparison among different virtual switch technologies. VMware ESX 3.5 is the previous version of the VMware hypervisor. The VMware vSphere Standard Switch is the vSwitch in the current VMware hypervisor, ESX 4 (or 4.1). VMware vSphere vDS is the VMware virtual distributed switch, available with vCenter and ESX Enterprise Plus licenses. And the Cisco Nexus 1000V is the Cisco enhanced virtual distributed switch, available with vCenter, ESX Enterprise Plus and Nexus 1000V licenses. We will share these slides with you, so I'll not explain each feature, but I would like to highlight some Nexus 1000V enhancements. On this slide we can see a comparison of switching features. I would like to comment on the LACP enhancement. The appropriate statement is that VMware doesn't support the dynamic aspect of the standard but does support the static mode of the 802.3ad specification. The Cisco Nexus 1000V, however, supports dynamic LACP.
Virtual vNetwork Comparison /2/
---------------------------------------
On this slide I would like to point out ACL (Access Control List) support on the Nexus 1000v.
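A minimal sketch of what per-VM ACLs on the Nexus 1000v enable (the ACL name, port-profile name, and VLAN are assumptions for illustration):

```
! block inbound Telnet to VMs in this profile, allow everything else
ip access-list block-telnet
  deny tcp any any eq 23
  permit ip any any

port-profile type vethernet web-servers
  switchport mode access
  switchport access vlan 110
  ip port access-group block-telnet in
  no shutdown
  state enabled
```

Because the ACL is part of the port-profile, it follows the VM across hosts during vMotion.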
Virtual vNetwork Comparison /3/
---------------------------------------
The Nexus 1000v supports the standard Cisco IOS-style CLI, so Cisco network administrators do not need to use different tools for network administration. Network admins can also leverage Cisco SPAN for port traffic analysis. You can see that the VMware software switches list Port Mirroring support, so a good question is: what is the difference between SPAN and VMware port mirroring? VMware port mirroring has to be achieved with a little trick: you enable promiscuous mode on a particular port group, which effectively turns the vSwitch into a vHub. Because of this, other virtual machines connected to the same port group can see the Ethernet traffic, so you can install a sniffer in such a virtual machine. Is that really what you want to do in your enterprise environment?
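By contrast, a local SPAN session on the Nexus 1000v mirrors only the selected source to the selected destination, with no promiscuous mode involved (the vEthernet interface numbers below are example values):

```
monitor session 1
  ! mirror both directions of one VM's virtual Ethernet port
  source interface vethernet 5 both
  ! send the copied traffic only to the sniffer VM's vEth
  destination interface vethernet 10
  no shut
```

Only the destination vEth sees the mirrored frames; other VMs on the switch are unaffected.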
Virtual vNetwork Comparison /4/
---------------------------------------
SNMP and syslog server functionality is also available only on the Nexus 1000v.
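A short sketch of wiring the Nexus 1000v into existing monitoring, using standard NX-OS commands (community string and server addresses are illustrative):

```
! read-only SNMP access and a trap receiver for the NMS
snmp-server community n1kv-ro ro
snmp-server host 192.0.2.100 traps version 2c n1kv-ro

! forward messages of severity 6 (informational) and below
! to a central syslog server
logging server 192.0.2.50 6
```

This lets the virtual access layer feed the same NMS and log collectors as the physical switches.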
Virtual vNetwork Comparison /5/
---------------------------------------
Other important and unique enhancements of the Nexus 1000v are active/standby high availability of the control plane and SSH/Telnet remote access. At this point I would like to mention another significant Nexus 1000v benefit: the virtual distributed switch control plane is offloaded from the VMware vCenter server onto a separate software- or hardware-based appliance.
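A minimal sketch of the active/standby VSM pair setup (both VSMs must share the same domain ID; the role commands are standard VSM setup configuration):

```
! on the first VSM
system redundancy role primary

! on the second VSM, configured with the same svs-domain id
system redundancy role secondary
```

After the pair synchronizes, `show system redundancy status` on the active VSM reports the standby state, and a failure of the active VSM triggers a stateful switchover without disrupting VEM forwarding.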
VN-Link in Hardware
-------------------------
OK, so now let's look in more detail at VN-Link in hardware. It is generally about offloading networking from software to hardware, so this is a typical example of hardware-assisted virtualization. There are currently two technologies for offloading networking from software to hardware: the first is VMware Pass Through Switching and the second is VMware DirectPath. Both technologies will be covered in the next slides.
Simplify Management and Facilitate Collaboration – screenshot 1
------------------------------------------------------------
Here I would like to show you some screenshots from a VMware Pass Through Switching demo lab to explain the already discussed integration between UCS Manager and VMware vCenter. In UCS Manager you can see different tabs for different administrators: a "Servers" tab for server administrators, a "LAN" tab for network admins, a "SAN" tab for storage experts, and a "VM" tab for virtual infrastructure administrators. The virtual infrastructure administrator has to define port profiles with some specific policies. In this example we have chosen a specific QoS policy that sets a 1 Mbps traffic limit in the port profile mgmt_limited. The QoS policy details are defined by the network admin, so the virtual infrastructure admin can simply select the specific QoS policy and propagate this port profile into the VMware port group, which is also named mgmt_limited.
Simplify Management and Facilitate Collaboration – screenshot 2
------------------------------------------------------------
And, as usual with VN-Link technology, everything the virtual machine administrator has to do in the VMware virtual infrastructure is absolutely transparent. He can simply choose the appropriate port group on the distributed virtual switch, which is linked with a particular port profile in UCS.
Simplify Management and Facilitate Collaboration – screenshot 3
------------------------------------------------------------
On this screenshot I would like to point out the integration between the VMware virtual distributed switch, the virtual interface in Palo, and the vEth in the Nexus. In UCS Manager you can see on which host a particular virtual machine is running. In our example we can see that VMfbsd1 is running on ESX Host Server 1/1. By the way, if someone initiates a vMotion of this VM to another host, we can track it here in UCS Manager. We can also see the VMware vNIC and the port ID where the vNIC is connected to the VMware Distributed Switch; in this case the vDS port ID is 1985. And finally we can see the remote virtual interface ID (vETH), in this example 2306. So our virtual link between VMfbsd1 and our physical Nexus switch runs between these two virtual interfaces, and we can display all the other network properties directly on the Nexus; in our example we can display the status and counters of this particular vEth. This is just a small example of the tight integration, and I hope it helps you understand what VN-Link is.
END
-----
So that's it. I have tried to cover as much as possible, and I hope this has given you at least a basic overview of what Cisco VN-Link is and what the Cisco enhancements for virtualized datacenters are. Now we can open the Q&A, and if you have any other questions or comments in the near future, don't hesitate to contact us for further information. Thanks for your attention.