Shaun Walsh, VP of marketing at Emulex, presented the "10GbE Top 10" at Interop Las Vegas in May 2011. If you missed it, here are the slides from that discussion.
28. Waves of 10GbE Adoption (timeline: 2010 through 2013 and beyond)
- Wave 1: 10Gb LOM and adapters for blade servers; 50% of IT managers cite virtualization as the driver of 10GbE in blades
- Wave 2: 2–4 socket x86 and Unix rack servers drive 10Gb with modular LOMs and UCNAs; 10Gb NAS and iSCSI drive IP SAN storage for unstructured data
- Wave 3: cloud container servers for web giants and telcos; 1–2 socket servers with NICs and modular LOMs; FCoE storage at 25% of I/O; 10GbE LOM everywhere
- Sources: Dell'Oro 2010 Adapter Forecast; ESG Deployment Paper 2010; IDC WW External Storage Report 2010; IT Brand Pulse 10GbE Survey, April 2010
29. Wave 2 of 10GbE Market Adoption (CY12/13)
- x86 and Unix rack servers: 65% of server shipments; more than 2X the revenue opportunity; Romley from Q4:11 to Q2:11; modular LOM on rack; more cards vs. LOMs
- 10GbE IP storage: Cisco, Juniper, and Brocade announced multi-hop FCoE in the last 90 days; 10GbE NAS, iSCSI, and FCoE convergence
- Web giants: next-generation Internet-scale applications (search, games, and social media); container and high-density servers migrating to 10GbE; growing to 15% of servers
- Wave 2 more than doubles the 10GbE revenue opportunity
31. #1) Switch Architecture: Top of Rack
- Modular unit design, managed by rack
- Servers connect to 1–2 Ethernet switches inside the rack
- Rack is connected to the data center aggregation layer with SR optics or twisted pair
- Blades effectively incorporate the ToR design with an integrated switch
- (Diagram: server cabinet row with fiber uplinks to SAN and Ethernet aggregation)
32. Switch Architecture: End of Row
- Server cabinets lined up side by side
- A rack or cabinet at the end (or middle) of the row houses the network switches
- Fewer switches in the topology
- Managed by row, so fewer switches to manage
- (Diagram: server cabinet row with copper runs to the row switch and fiber uplinks to SAN and Ethernet aggregation)
33. #2) 10Gb LOM on Rack Servers vs. Blades
- Blade servers: typically limited I/O expansion (1–2 slots); much cheaper 10G physical interconnect (backplane Ethernet and integrated switch); leading the 10G transition; earlier 10G LOM
- Rack servers: generally more I/O and memory expansion; more expensive 10G physical interconnect; later 10G LOM due to 10GBASE-T
34. #3) Cables: Today's Data Center
- Typically twisted pair for Gigabit Ethernet: some Cat 6, more likely Cat 5
- Optical cable for Fibre Channel: a mix of older OM1 and OM2 (orange) and newer OM3 (green)
- Fibre Channel may not be wired to every rack
37. 10GBASE-T Will Dominate IT Connectivity (Crehan Research, 2011)
38. New 10GBASE-T Options
- Top of rack: SFP+ is the lowest-cost option today, but doesn't support existing Gigabit Ethernet LOM ports; 10GBASE-T support is emerging, with lower power at 10–30m reach (2.5W) and Energy Efficient Ethernet reducing that to ~1W at idle
- End of row: optical is the only option today; 10GBASE-T support is emerging, with lower power at 10–30m reach (2.5W this year), 100m reach at 3.5W, and Energy Efficient Ethernet reducing that to ~1W at idle
46. Convergence saves 42% on power and cooling (based on 2 LOM, 8 IP, and 4 FC ports on 10 servers). (Diagram: a wall of discrete IP and FC ports consolidated onto converged links.)
47. To Converge or Not To Converge?
- Volume x86 systems: best business case for convergence; systems actually benefit by reducing adapters, switch ports, and cabling
- High-end Unix systems: limited convergence benefit; typically a large number of SAN and LAN ports; a system may go from 24 Ethernet and 24 Fibre Channel ports to 48 converged Ethernet ports; benefits come in later stages of data center convergence, when the Fibre Channel SAN runs fully over the Ethernet physical layer (FCoE)
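The port arithmetic behind the two slides above can be sketched in a few lines. The per-server port mix (2 LOM + 8 IP + 4 FC on 10 servers) and the Unix example (24 + 24 → 48) come from the slides; the two-converged-ports-per-server figure for the x86 case is an assumption for illustration.

```python
# Port counts before and after convergence, per slides 46-47.

def total_ports(servers: int, ports_per_server: int) -> int:
    """Total physical ports (and cables) across a group of servers."""
    return servers * ports_per_server

# Volume x86 case (slide 46): 2 LOM + 8 IP + 4 FC = 14 ports per server.
before = total_ports(10, 2 + 8 + 4)
# Assumed converged layout: 2 converged 10GbE CNA ports per server.
after = total_ports(10, 2)
print(before, after)  # prints: 140 20 -> far fewer adapters, switch ports, cables

# High-end Unix case (slide 47): 24 Ethernet + 24 FC ports become
# 48 converged Ethernet ports -- no reduction, hence limited benefit.
assert 24 + 24 == 48
```

The asymmetry is the whole argument: convergence pays off when it collapses the port count, not when it merely relabels the same number of ports.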
48. #7) Select a 10Gb Adapter
- 10Gb LOM is becoming standard on high-end servers
- Add a second adapter for high availability
- Should support FCoE and iSCSI offload for network convergence
- Compare performance across all protocols
49. Convergence Means Performance
- 40 Gb/s bandwidth; 900K IOPS
- Wider track: more data lanes, no virtualization conflicts, capacity for provisioning
- Faster engine: more transactions, more VMs per CPU, the QoS you demand, performance for storage
50. #8) Bandwidth and Redundancy
- NIC teaming (link aggregation): multiple physical links (NIC ports) combined into one logical link; provides bandwidth aggregation and failover redundancy
- FC multipathing: load balancing over multiple paths; failover redundancy
- (Diagram: redundant paths a–d through core switches; poor density noted)
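As a minimal sketch of the NIC-teaming idea on a Linux host using iproute2 (the interface names eth0/eth1 and the 802.3ad/LACP mode are assumptions for illustration; the connected switch ports must be configured for LACP to match):

```shell
# Create a bond device that aggregates two NIC ports into one logical link.
# Mode 802.3ad (LACP) gives both bandwidth aggregation and failover.
ip link add bond0 type bond mode 802.3ad

# Enslave the physical ports (names are illustrative; links must be down first).
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0

# Bring up the logical link. Traffic is hashed across both members, and a
# failed member fails over without taking down the logical link.
ip link set bond0 up
```

This is the Ethernet-side analogue of FC multipathing: in both cases the host sees one logical path built from several physical ones.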
51. #9) Management Convergence
- A single network framework spanning SAN and LAN
- Network convergence should be future-proof, protecting the SAN and LAN investment
- Protects your configuration, LAN and SAN, and management investments
52. Convergence and IT Management
- Traditional IT management is split into server, storage, and networking domains
- Convergence is the latest technology to perturb IT management (prior disruptions: blades, virtualization, PODs)
- Drives innovation across storage, networking, and server/virtualization teams
53. #10) Deployment Plan
- Upgrade switches as needed to support 10Gb links to servers
- Plan for network convergence with switches that support DCB
- Focus on new servers
- Use a unified 10GbE platform for LOM, blade, and stand-up adapters
55. Flexible Converged Fabric Adapter Technology
- Fibre Channel: single port 16Gb FC (1x16), dual port 8Gb FC (2x8), dual port 16Gb FC (2x16), quad port 8Gb FC (4x8)
- CNAs: single port 40Gb CNA (1x40), dual port 10Gb CNA (2x10), quad port 10Gb CNA (4x10)
56. Emulex at Interop, Booth #743
- First 16Gb Fibre Channel HBA demonstration (Interop, May 9, 2011)
- First 10GBASE-T UCNA demonstration (EMC World, May 9, 2011)
- First 40GbE UCNA demonstration (Interop, May 9, 2011)
57. Wrap Up
- The move to 10GbE involves many phases of market transition and implementation
- 10GBASE-T will drive down 10Gb costs in racks
- Network convergence is coming; best to be ready
- Management across domains will be the most challenging part
- 10GBASE-T is like hanging out with an old friend
58. THANK YOU For more information, please contact Shaun Walsh 949.922.7472 shaun.walsh@emulex.com