
Managing change in the data center network



  1. Managing Change in the Data Center Network
     Larry Hart, Head of WW Marketing
     Robert Winter, Office of the CTO
  2. Entering the Virtual Era
     [Timeline graphic: Mainframe era (1950s–1960s) → Mini-computing (1980s) → PC/Client-Server era (1990s) → Internet era → Virtual era (2010s), each annotated with its leading vendors: IBM, Honeywell, Burroughs, Sperry, Control Data, NCR; DEC, Data General, Wang, Prime, Computervision; Dell, HP, Compaq, Apple, IBM, Acer, Gateway, AST; Dell, Google, Cisco, HP, Apple]
  3. The Journey to Efficiency Builds on a Virtual Foundation
     [Maturity ladder: Server & Storage Consolidation → High Availability → Disaster Recovery → Rapid Provisioning → Dynamic Resource Optimization → Policy-Driven Automation, climbing toward Strategic Agility, Quality of Service, Economic Savings and Data Center Efficiency]
  4. Why DC Management Must Change: Legacy Networks (Physical) vs. Next-Gen DC Networks (Virtual)
     MANAGEMENT CHALLENGES (legacy):
     • Growth means complexity
     • Does not scale with virtualization
     • Cannot keep up with storage growth
     • Made up of discrete devices
     • Multitude of management tools
     MANAGEMENT SOLUTIONS (next-gen):
     • High performance and scalability built in
     • Virtualization-aware components
     • Unified Fabric enabling new levels of storage flexibility
     • Network building blocks that are interactive and scalable
     • Data center orchestration
     TECHNOLOGY CHALLENGES (legacy):
     • Single server with a single application
     • Discrete communications and storage fabrics
     • Low bandwidth in the rack
     • Lossy QoS
     • Lots of ports
     TECHNOLOGY SOLUTIONS (next-gen):
     • Dynamic servers with virtual applications
     • Unified fabrics for communications and storage
     • More bandwidth per port
     • "Near" lossless
     • Fewer ports at higher bandwidth
     Fabric convergence, port proliferation, management sprawl and virtualization are forcing network management changes. Does that mean your legacy network needs to be thrown out? How do we manage this change?
  5. We Listened to IT Professionals
     • Keep it simple
     • More affordable
     • Don't lock me in!
  6. How You Get There Matters: What's Wrong with Some Implementations?
     [Timeline graphic from Today to the Goal of orchestrated data center components (network, compute, storage, hypervisor), contrasting "servers rule the data center" and "networking rules the data center" approaches with enabling open, capable, affordable solutions]
  7. A Differentiated Approach to Imminent Change in the Data Center
     Uncompromised virtual-integrated solutions: Open + Capable + Affordable, innovation without legacy, customer- rather than company-driven
     • PowerConnect, including B-series and J-series
     • Fully integrated solutions
     • Path to advanced networking technologies, building from traditional GbE
     Best-of-breed partnerships:
     • Integrated & interoperable
     • Mutual commitment
     • Joint development
     • Go-to-market alignment
     Flexible delivery:
     • Business-ready configurations
     • Build and transfer
     • Build and operate
     • As-a-service delivery
  8. The Next Step in Efficiency
     Flexible infrastructure orchestrated through unified infrastructure management, spanning compute, storage and networking.
  9. Dell Advanced Infrastructure Manager: Putting It All Together
     Unify management of existing and future infrastructure: Dell PowerEdge servers, Dell EqualLogic and Dell/EMC storage, PowerConnect network.
     Dell Business Ready Configurations for a virtual-ready infrastructure:
     – Dell Blades: dynamic data center building block; simplify remote management of regional datacenters; streamline dynamic infrastructure deployment
     – EqualLogic storage: available as part of a pre-configured solution
  10. Dell Advanced Infrastructure Manager: Faster to Deploy, Easier to Manage
     Respond faster
     – Deploy switches and servers from pallet to production in minutes
     – Change the workloads servers are running in 5 minutes or less
     – Recover services automatically
     Increase IT productivity
     – Rack once, cable once
     – Single console for physical and virtual infrastructure management
     Lower costs
     – Consolidate servers and improve asset utilization
     – Reduce power, cooling and datacenter costs
     Freedom to choose
     – Virtual and/or physical servers
     – Multiple operating systems
     – Open network solutions from Dell and others, servers and storage
  12. What Really Matters . . .
     Management transitions:
     • Management of physical resources to management of virtualized applications
     • Every vendor's tool to heterogeneous management tools
     • Discrete DC silos to orchestrated DC management
     Technology transitions:
     • GbE to 10GbE, at the right economics
     • InfiniBand enterprise clusters to 10GbE
     • Traditional priority to Data Center Bridging
     • Storage transitions: FC to iSCSI, FCoE
     • Multi-layered networks to flat L2 networks (e.g., TRILL)
  13. Emerging Technological Changes in the Data Center: DCB, iSCSI and FCoE
     Robert Winter, Dell OCTO
  14. Why iSCSI in the Data Center?
     iSCSI utilizes your current IT investment to evolve into a next-generation data center:
     • Server: migrate when you are ready, without ripping and replacing
     • Switch: use mature, current technology to converge fabrics
     • Storage: high performance from branch to data center
  15. Data Center Bridging (DCB): Ethernet is a good thing
     DCB provides a number of advantages:
     • Congestion management
     • Bandwidth management
     • More discriminating flow control
     • Self-configuring links
     But we need to answer these two questions:
     1. Does DCB Ethernet benefit iSCSI?
     2. Does FCoE with DCB behave well in congested environments?
  16. Review: DCB, TRILL and a "better" Ethernet
     FCoE and DCB are interconnected, but aren't the same thing. FCoE requires DCB for the best experience; iSCSI doesn't (but can use DCB).
     • 802.1Qbb (IEEE): Per-Priority Flow Control
     • 802.1Qaz (IEEE): Enhanced Transmission Selection
     • 802.1Qau (IEEE): Congestion Notification
     • TRILL (IETF), or IEEE 802.1aq: Ethernet multi-pathing, replacing spanning tree (STP)
     DCB is an improvement over a legacy Ethernet fabric, but it does not provide the same experience as a Fibre Channel fabric.
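To make the bandwidth-management piece of DCB concrete, here is a minimal sketch of how 802.1Qaz-style Enhanced Transmission Selection divides a link among traffic classes. The class names, weights and demands are illustrative assumptions, not values from the deck; the redistribution of an idle class's share to active classes is the behavior ETS is designed to give.

```python
# Sketch of 802.1Qaz-style Enhanced Transmission Selection (ETS):
# each traffic class has a configured bandwidth weight, and bandwidth
# left unused by idle or satisfied classes is redistributed to the
# remaining active classes in proportion to their weights.
# Class names and numbers below are illustrative, not from the slides.

def ets_allocate(link_gbps, weights, demand_gbps):
    """Return per-class bandwidth (Gbps) given ETS weights and offered demand."""
    alloc = {tc: 0.0 for tc in weights}
    remaining = link_gbps
    active = {tc for tc, d in demand_gbps.items() if d > 0}
    while active and remaining > 1e-9:
        total_w = sum(weights[tc] for tc in active)
        satisfied, spare = set(), 0.0
        for tc in active:
            share = remaining * weights[tc] / total_w
            want = demand_gbps[tc] - alloc[tc]
            if want <= share:          # class fully satisfied; donate leftover
                alloc[tc] += want
                spare += share - want
                satisfied.add(tc)
            else:                      # class consumes its whole share
                alloc[tc] += share
        remaining = spare
        active -= satisfied
        if not satisfied:              # no spare bandwidth left to redistribute
            break
    return alloc

# 10GE link: storage (iSCSI/FCoE) weighted 50%, LAN 30%, management 20%.
# Management is idle, so its share flows to the saturated storage class.
alloc = ets_allocate(10.0, {"storage": 50, "lan": 30, "mgmt": 20},
                     {"storage": 9.0, "lan": 3.0, "mgmt": 0.0})
```

With these assumed numbers, LAN gets its full 3 Gbps and storage absorbs the idle management share, ending at 7 Gbps rather than its guaranteed 5.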
  17. Performance: iSCSI, FCoE and FC
     At 10GE, fully offloaded iSCSI stacks up well against FC and FCoE.
     [Bar charts: Throughput (Mbps) and Efficiency (Mbps/%CPU) for 4K, 8K, 64K and 512K reads and writes, comparing iSCSI offload, FCoE and FC. IOMeter, 4 Gb/s targets. Source: iSCSI/FC Performance Analysis, Dell CTO Storage Architecture Lab]
  18. Recovery: iSCSI and FCoE
     Assumption: FCoE may take up to 60 seconds to re-send a dropped packet. With no transport-level retransmission, the initiator re-starts the I/O (e.g. an OS SCSI WRITE command) only after the 60-second I/O timer expires.
     Measured: iSCSI with the TCP fast-retransmit option (RFC 2001) recovers in <= 25 milliseconds. The packet is assumed lost, and re-transmitted, if no ACK is received within 25 ms or three duplicate ACKs (same window size, sequence and ACK numbers, segment length = 0) are received.
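The recovery gap on this slide can be restated as a toy timing model: an FCoE-class initiator that waits out its SCSI I/O timer versus a TCP sender using fast retransmit (RFC 2001). The 60 s and 25 ms values come from the slide; the event model itself, including the assumed RTT, is illustrative.

```python
# Toy model of loss recovery latency: upper-layer SCSI I/O timeout
# (no transport retransmission, as with FCoE in this comparison)
# vs. TCP fast retransmit (retransmit on the 3rd duplicate ACK, or
# when the retransmission timer fires). Timer values are from the
# slide; the assumed RTT is an illustrative number.

def fcoe_recovery_ms(io_timer_s=60):
    # The dropped frame is only re-sent after the SCSI I/O timer expires.
    return io_timer_s * 1000

def iscsi_recovery_ms(dup_acks, rtt_ms=0.5, timer_ms=25):
    # Fast retransmit: the 3rd duplicate ACK (each ~1 RTT after the loss)
    # triggers an immediate retransmission; otherwise the 25 ms timer fires.
    if dup_acks >= 3:
        return 3 * rtt_ms
    return timer_ms

worst_case_ratio = fcoe_recovery_ms() / iscsi_recovery_ms(0)  # 60 s vs 25 ms
```

Even in iSCSI's worst case (timer expiry, no duplicate ACKs), recovery is 2400x faster than waiting out a 60-second I/O timer.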
  19. Flow Control: FC and Ethernet
     FC: proactive, time-independent, credit-based flow control. The transmitter sends a frame only when the receiver has advertised an available buffer credit, so receive buffers can never overflow.
     iSCSI/FCoE with DCB (802.1Qbb): reactive, time-dependent, PAUSE-based flow control. The receiver sends a PAUSE frame when its buffer threshold is crossed, and must still absorb the frames in flight during the high-level, interface and media delays before the sender stops.
  20. Flow Control: iSCSI
     iSCSI congestion testbed: a Dell Windows Server 2008 x64 initiator with a 10GbE CNA (Intel), a PowerConnect 8xxx switch running 802.3x or DCB PFC, and a 10GbE iSCSI RAM-disk array (StarWind + Intel), all on 10G links. Tested with flow control off and with flow control on.
  21. Flow Control: iSCSI
     More I/Os, more MB/s, fewer re-transmits:

                              Flow control ON    Flow control OFF
     iSCSI write I/Os/sec            91,000             84,250
     iSCSI write MB/sec                 355                330
     TCP re-transmits/sec                 1              1,000

     DCB makes iSCSI/TCP more efficient; it provides a TCP "offload".
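The measured numbers on this slide translate into relative gains as follows; this is just arithmetic on the slide's own figures.

```python
# Relative gains from enabling flow control (PFC), computed from the
# measured iSCSI write results on this slide.

on  = {"iops": 91000, "mbps": 355, "retx_per_s": 1}
off = {"iops": 84250, "mbps": 330, "retx_per_s": 1000}

iops_gain_pct = (on["iops"] - off["iops"]) / off["iops"] * 100   # ~8.0%
mbps_gain_pct = (on["mbps"] - off["mbps"]) / off["mbps"] * 100   # ~7.6%
retx_reduction = off["retx_per_s"] / on["retx_per_s"]            # 1000x
```

So flow control buys roughly 8% more IOPS and throughput while cutting TCP retransmissions by three orders of magnitude, which is where the "TCP offload" framing comes from.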
  22. Technology Conclusions
     The questions:
     1. Does DCB Ethernet benefit iSCSI? YES
     2. Does FCoE behave well in congested environments? TBD
     These are important questions with long-overdue answers. Help in characterizing DCB's practical benefits is welcome.
  23. Planning for the Change
     • Evaluate management tools that deliver an "open" approach to data center management (think OS and hardware platforms)
     • Plan for 10GbE as the foundational fabric of your DC
     • Plan for a future with DCB in your network
     • Evaluate the potential benefits iSCSI could bring to your data center
     • Consider new networking providers, since some networking vendors are forcing platform shifts anyway