Data Center Infrastructure Architecture Overview
March 2004




Corporate Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
       800 553-NETS (6387)
Fax: 408 526-4100
THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT
    NOTICE. ALL STATEMENTS, INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE
    BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY
    FOR THEIR APPLICATION OF ANY PRODUCTS.

    THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE
    INFORMATION PACKET THAT SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU
    ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR
    A COPY.

    The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as
    part of UCB’s public domain version of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.

    NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE
    PROVIDED “AS IS” WITH ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED
    OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
    NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE.

    IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL
    DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR
    INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH
    DAMAGES.


    CCIP, CCSP, the Cisco Arrow logo, the Cisco Powered Network mark, Cisco Unity, Follow Me Browsing, FormShare, and StackWise are trademarks of Cisco Systems, Inc.;
    Changing the Way We Work, Live, Play, and Learn, and iQuick Study are service marks of Cisco Systems, Inc.; and Aironet, ASIST, BPX, Catalyst, CCDA, CCDP, CCIE, CCNA,
    CCNP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, the Cisco IOS logo, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo,
    Empowering the Internet Generation, Enterprise/Solver, EtherChannel, EtherSwitch, Fast Step, GigaStack, Internet Quotient, IOS, IP/TV, iQ Expertise, the iQ logo, iQ Net
    Readiness Scorecard, LightStream, MGX, MICA, the Networkers logo, Networking Academy, Network Registrar, Packet, PIX, Post-Routing, Pre-Routing, RateMUX, Registrar,
    ScriptShare, SlideCast, SMARTnet, StrataView Plus, Stratm, SwitchProbe, TeleRouter, The Fastest Way to Increase Your Internet Quotient, TransPath, and VCO are registered
    trademarks of Cisco Systems, Inc. and/or its affiliates in the U.S. and certain other countries.

    All other trademarks mentioned in this document or Web site are the property of their respective owners. The use of the word partner does not imply a partnership relationship
    between Cisco and any other company. (0304R)




    Data Center Infrastructure Architecture Overview
    Copyright © 2004 Cisco Systems, Inc. All rights reserved.




CONTENTS

    Data Center Infrastructure Architecture

        Data Center Architecture

        Hardware and Software Recommendations
            Aggregation Switches
            Service Appliances
            Service Modules
            Access Switches
            Software Recommendations

        Data Center Multi-Layer Design
            Core Layer
            Aggregation and Access Layer
            Service Switches
            Server Farm Availability
            Load-Balanced Servers

        Data Center Protocols and Features
            Layer 2 Protocols
            Layer 3 Protocols
            Security in the Data Center

        Scaling Bandwidth

        Network Management

INDEX




Data Center Infrastructure Architecture

This document is the first in a series of four documents that provide guidance for designing and
implementing a data center infrastructure that ensures high availability, security, and scalability:
                •   Data Center Infrastructure Architecture—Provides background information for designing a secure,
                    scalable, and resilient data center infrastructure.
                •   Data Center Infrastructure Design—Describes major design issues, including routing between the
                    data center and the core, switching within the server farm, and optimizing mainframe connectivity.
                •   HA Connectivity for Servers and Mainframes: NIC Teaming and OSA/OSPF Design—Provides
information about server connectivity with NIC teaming and mainframe connectivity.
                •   Data Center Infrastructure Configuration—Provides configuration procedures and sample listings
                    for implementing the recommended infrastructure architecture.
               This document provides background information for designing a secure, scalable, and resilient data
               center infrastructure. It includes the following sections:
                •   Data Center Architecture
                •   Hardware and Software Recommendations
                •   Data Center Multi-Layer Design
                •   Data Center Protocols and Features
                •   Scaling Bandwidth
                •   Network Management



Data Center Architecture
This section describes the basic architecture for a secure, scalable, and resilient data center infrastructure. The term infrastructure in this design guide refers to the Layer 2 and Layer 3 configurations that provide network connectivity to the server farm, as well as the network devices that provide security and application-related functions. Data centers are composed of devices that provide the following functions:
                •   Ensuring network connectivity, including switches and routers
                •   Providing network and server security, including firewalls and Intrusion Detection Systems (IDSs)
                •   Enhancing availability and scalability of applications, including load balancers, Secure Sockets
                    Layer (SSL) offloaders and caches
In addition, a Network Analysis Module (NAM) is typically used to monitor the functioning of the network and the performance of the server farm.
                      The following are critical requirements when designing the data center infrastructure to meet service level
                      expectations:
                           •   High Availability—Avoiding a single point of failure and achieving fast and predictable
                               convergence times
•   Scalability—Allowing changes and additions without major changes to the infrastructure, easily adding new services, and providing support for hundreds of dual-homed servers
                           •   Simplicity—Providing predictable traffic paths in steady and failover states, with explicitly defined
                               primary and backup traffic paths
•   Security—Preventing flooding, avoiding the exchange of protocol information with rogue devices, and preventing unauthorized access to network devices
The data center infrastructure must provide port density and Layer 2 and Layer 3 connectivity, while supporting security services provided by access control lists (ACLs), firewalls, and intrusion detection systems (IDSs). It must support server farm services such as content switching, caching, and SSL offloading, while integrating with multi-tier server farms, mainframes, and mainframe services (TN3270, load balancing, and SSL offloading).
While the data center infrastructure must be scalable and highly available, it should remain simple to operate and troubleshoot, and it must easily accommodate new demands.








Figure 1        Data Center Architecture

[Figure: enterprise campus core, aggregation layer, access layer, and mainframe, with network devices deployed in redundant pairs; the service devices shown are load balancer, firewall, SSL offloader, cache, network analysis, and IDS sensor.]

                          Figure 1 shows a high-level view of the Cisco Data Center Architecture. As shown, the design follows the
                          proven Cisco multilayer architecture, including core, aggregation, and access layers. Network devices are
                          deployed in redundant pairs to avoid a single point of failure. The examples in this design guide use the
                          Catalyst 6500 with Supervisor 2 in the aggregation layer, Gigabit Ethernet, and Gigabit EtherChannel links.



Hardware and Software Recommendations
This section summarizes the recommended hardware and software for implementing a highly available, secure, and scalable data center infrastructure. It includes the following topics:
                           •    Aggregation Switches
•    Service Appliances
 •    Service Modules
                           •    Access Switches
                           •    Software Recommendations








Aggregation Switches
                       The following are some of the factors to use in choosing the aggregation layer device:
                        •   Forwarding performance
                        •   Density of uplink ports
                        •   Support for 10 Gigabit Ethernet linecards
                        •   Support for 802.1s, 802.1w, Rapid-PVST+
                        •   Support for MPLS-VPNs
                        •   Support for hardware-based NAT
                        •   Support for uRPF in hardware
                        •   QoS characteristics
                        •   Support for load balancing and security services (service modules)
                       At the aggregation layer, Cisco recommends using Catalyst 6500 family switches because the Catalyst 6500
                       chassis supports service modules for load balancing and security, including the following:
                        • Content Service Module (CSM)
                        •   SSL Service Module (SSLSM)
                        •   Firewall Service Module (FWSM)
                        •   Intrusion Detection Service Module (IDSM)
                        •   Network Analysis Module (NAM)
                       The chassis configuration depends on the specific services you want to support at the aggregation layer, the
                       port density of uplinks and appliances, and the need for supervisor redundancy. Load balancing and security
services can also be provided by external service appliances, such as PIX Firewalls, Content Services Switches, Secure Content Accelerators, and Content Engines.
You also typically attach mainframes to the aggregation switches, especially if you configure each connection to the Open Systems Adapter (OSA) card as a Layer 3 link. In addition, you can use the
                       aggregation switches to attach caches for Reverse Proxy Caching. You can also directly attach servers to the
                       aggregation switches if the port density of the server farm doesn’t require using access switches.
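When the OSA connection is configured as a Layer 3 link, it typically terminates on a routed port at the aggregation layer. The following is a minimal sketch, assuming a hypothetical interface and addressing:

    interface GigabitEthernet3/1
     description Layer 3 link to mainframe OSA card (interface and addresses are examples)
     no switchport
     ip address 10.10.50.1 255.255.255.252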


             Note      The Supervisor 2 (Sup2) and Sup720 are both recommended, but this design guide is intended for use
                       with Sup2. Another design guide will describe the use of Sup720, which provides higher performance
and additional functionality in hardware and is the best choice for building a 10-Gigabit Ethernet data
center infrastructure.

                       The Catalyst 6500 is available in several form factors:
• 6503: 3 slots, 3 RUs
 •   6506: 6 slots, 12 RUs
                        •   7606: 6 slots 7 RUs
                        •   6509: 9 slots 15 RUs
                        •   6513: 13 slots, 19 RUs
                       The 6509 and 6513 are typically deployed in the data center because they provide enough slots for access
                       ports and service modules, such as IDS.
The 6500 chassis supports a 32 Gbps shared bus, a 256 Gbps fabric (SFM2), and a 720 Gbps fabric (if using Sup720). With a 6509, the Sup2 connects to slot 1 or 2, and the switch fabric (or the Sup720) connects to slot 5 or slot 6. With a 6513, the Sup2 connects to slot 1 or 2, and the switch fabric (or the Sup720) connects to slot 7 or slot 8.
                          If you use the fabric module (SFM2) with Sup2, each slot in a 6509 receives 16 Gbps of channel attachment.
                          Slots 1-8 in a 6513 receive 8 Gbps and slots 9-13 receive 16 Gbps of channel attachment.
                          If you use Sup720, which has an integrated fabric, each slot in a 6509 receives 40 Gbps of channel
                          attachment. Slots 1-8 in a 6513 receive 20 Gbps, and slots 9-13 receive 40 Gbps of channel attachment.

                          Catalyst 6509 Hardware Configuration
A typical configuration of a Catalyst 6509 in the aggregation layer of a data center looks like this:
                           •   Sup2 with MSFC2
                           •   FWSM (fabric attached at 8 Gbps)
                           •   CSM
                           •   SSLSM (fabric attached at 8 Gbps)
                           •   IDSM-2 (fabric attached at 8 Gbps)
                           •   WS-X6516A-GBIC or WS-X6516-GBIC – 16 Gigabit Ethernet Fiber Ports – Jumbo (9216 B) –
                               (fabric attached at 8 Gbps) for uplink connectivity with the access switches
                           •   WS-X6516A-GBIC or WS-X6516-GBIC – 16 Gigabit Ethernet Fiber Ports – Jumbo (9216 B) –
                               (fabric attached at 8 Gbps) for uplink connectivity with the access switches
                           •   WS-X6516-GE-TX – 16 10/100/1000 BaseT– Jumbo – (fabric attached at 8 Gbps) for servers and
                               caches
If you use a fabric module, it plugs into slot 5 or 6. Because the Sup720 has an integrated fabric, it also occupies slot 5 or 6.


                          Catalyst 6513 Hardware Configuration
A typical configuration of a Catalyst 6513 in the aggregation layer of a data center looks like this:
                           •   Sup2 with MSFC2
                           •   FWSM (fabric attached at 8 Gbps)
                           •   CSM
                           •   SSLSM (fabric attached at 8 Gbps)
                           •   IDSM-2 (fabric attached at 8 Gbps)
                           •   NAM-2 (fabric attached at 8 Gbps)
                           •   WS-X6516A-GBIC or WS-X6516-GBIC – 16 Gigabit Ethernet Fiber Ports – Jumbo (9216 B) –
                               (fabric attached at 8 Gbps) for uplink connectivity with the access switches
                           •   WS-X6516A-GBIC or WS-X6516-GBIC – 16 Gigabit Ethernet Fiber Ports – Jumbo (9216 B) –
                               (fabric attached at 8 Gbps) for uplink connectivity with the access switches
                           •   WS-X6516-GE-TX – 16 10/100/1000 BaseT– Jumbo (9216 B) – (fabric attached at 8 Gbps) for
                               servers and caches
If you use a fabric module, it plugs into slot 7 or 8. Because the Sup720 has an integrated fabric, it also occupies slot 7 or 8.
It is also good practice to use the first 8 slots for service modules because these are fabric attached with a single 8 Gbps channel. Use the remaining slots for Ethernet line cards because these might use both fabric channels.


             Note      When upgrading the system to Sup720 you can keep using the linecards WS-6516-GE-TX,
                       WS-6516-GBIC, WS-6516A-GBIC



Service Appliances
                       Service appliances are external networking devices that include the following:
•   Content Services Switch (CSS 11506): 5 RUs, 40 Gbps of aggregate throughput, 2,000 connections per second per module (maximum 6 modules), 200,000 concurrent connections with 256 MB of DRAM.
                        •   CSS11500 SSL decryption module (for the CSS11500 chassis): Performance numbers per module:
                            1,000 new transactions per second, 20,000 concurrent sessions, 250 Mbps of throughput.
•   PIX Firewalls (PIX 535): 3 RUs, 1.7 Gbps of throughput, 500,000 concurrent connections
                        •   IDS sensors (IDS 4250XL): 1 RU, 1 Gbps (with the XL card)
                        •   Cisco Secure Content Accelerator 2: 1 RU, 800 new transactions per second, 20,000 concurrent
                            sessions, 70 Mbps of bulk transfer
                       The number of ports that these appliances require depends entirely on how many appliances you use and
                       how you configure the Layer 2 and Layer 3 connectivity between the appliances and the infrastructure.


Service Modules
Security and load balancing services in the data center can be provided either with appliances or with Catalyst 6500 linecards. The choice between the two families of devices is driven by considerations of performance, rack space utilization, cabling, and, of course, features that are specific to each device.
                       Service modules are cards that you plug into the Catalyst 6500 to provide firewalling, intrusion detection,
                       content switching, and SSL offloading. Service modules communicate with the network through the
                       Catalyst backplane and can be inserted without the need for additional power or network cables.
Service modules provide better rack space utilization, simplified cabling, better integration between the modules, and higher performance than typical appliances. When using service modules, certain
                       configurations that optimize the convergence time and the reliability of the network are automatic. For
                       example, when you use an external appliance, you need to manually configure portfast or trunkfast on the
                       switch port that connects to the appliance. This configuration is automatic when you use a service module.
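As an illustration, the following is a minimal sketch of the manual portfast/trunkfast configuration that an external appliance would require on its switch port; the interface and trunking details are hypothetical:

    interface GigabitEthernet2/1
     description trunk to external service appliance (example port)
     switchport
     switchport trunk encapsulation dot1q
     switchport mode trunk
     spanning-tree portfast trunk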
As an example of rack space utilization, consider that a PIX 535 firewall takes 3 Rack Units (RUs), while a Firewall Services Module (FWSM) takes one slot in a Catalyst switch, which means that an FWSM inside a Catalyst 6513 takes approximately 1.5 RUs (19 RUs / 13 slots).
                       Another advantage of using service modules as opposed to external appliances is that service modules are
                       VLAN aware, which makes consolidation and virtualization of the infrastructure easier.
                       Each service module provides a different functionality and takes one slot out of the Catalyst 6500. Examples
                       of these modules include the following:
                        •   CSM: 165,000 connections per second, 1,000,000 concurrent connections, 4 Gbps of throughput.







•   FWSM: 8 Gbps fabric attached. Performance numbers: 100,000 connections per second, 5.5 Gbps of throughput, 1,000,000 concurrent connections.
                            •   SSLSM: 8 Gbps fabric attached. Performance numbers: 3000 new transactions per second, 60,000
                                concurrent connections, 300 Mbps of throughput.
                            •   IDSM-2: 8 Gbps fabric attached. Performance: 600 Mbps


Access Switches
This section describes how to select access switches for your data center infrastructure design and describes some of the Cisco Catalyst products that are particularly useful. It includes the following topics:
                            •   Selecting Access Switches
                            •   Catalyst 6500
                            •   Catalyst 4500
                            •   Catalyst 3750


                           Selecting Access Switches
                           The following are some of the factors to consider when choosing access layer switches:
                            •   Forwarding performance
                            •   Oversubscription rates
                            •   Support for 10/100/1000 linecards
                            •   Support for 10 Gigabit Ethernet (for uplink connectivity)
                            •   Support for Jumbo Frames
                            •   Support for 802.1s, 802.1w, Rapid-PVST+
                            •   Support for stateful redundancy with dual supervisors
                            •   Support for VLAN ACLs (used in conjunction with IDS)
                            •   Support for Layer 2 security features such as port security and ARP inspection
                            •   Support for private VLANs
                            •   Support for SPAN and Remote SPAN (used in conjunction with IDS)
                            •   Support for QoS
                            •   Modularity
                            •   Rack space and cabling efficiency
                            •   Power redundancy
Cost considerations often require choosing less expensive server platforms that support only one NIC card. To provide availability for these single-homed servers, you need to use dual supervisors in the access switch. For dual supervisor redundancy to be effective, you need stateful failover at least at Layer 2.
When choosing linecards or other products to use at the access layer, consider how much oversubscription a given application tolerates, as well as support for jumbo frames and the maximum queue size.
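Where jumbo frame support matters, the frame size is set per interface. The following is a minimal sketch, assuming a linecard and software release that support a 9216-byte MTU; the interface and VLAN are examples:

    interface GigabitEthernet4/1
     description server access port with jumbo frames (example)
     switchport
     switchport access vlan 10
     switchport mode access
     mtu 9216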








                     Modular switches support both oversubscribed and non-oversubscribed linecards. Typically, you use
                     oversubscribed linecards as access ports for server attachment and non-oversubscribed linecards for
                     uplink ports or channels between switches. You might need to use non-oversubscribed linecards for the
                     server ports as well, depending on the amount of traffic that you expect a server to generate.
                     Although various platforms can be used as access switches, this design guide uses the Catalyst 6506. Using
                     service modules in an access switch can improve rack space utilization and reduce cabling if you deploy
                     load balancing and security at the access layer.
                     From the data center design perspective, the access layer (front-end switches) must support 802.1s/1w and
                     Rapid PVST+ to take advantage of rapid convergence.
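The spanning-tree mode is set globally on each switch. The following is a minimal sketch of enabling Rapid PVST+ (or, alternatively, MST) on an access switch, assuming a software release that supports it:

    ! Rapid PVST+ (802.1w on a per-VLAN basis)
    spanning-tree mode rapid-pvst
    !
    ! or, for higher scalability, 802.1s/1w multi-instance spanning tree:
    ! spanning-tree mode mst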
The 10/100/1000 technology allows incremental adoption of Gigabit Ethernet in the server farm thanks to the compatibility between Fast Ethernet NIC cards and 10/100/1000 switch linecards. 10 Gigabit Ethernet is becoming the preferred technology for uplinks within the data center and between the data center and the core.
                     Cabling between the servers and the switch can be either fiber or copper. Gigabit over copper can run on the
                     existing Cat 5 cabling used for Fast Ethernet (ANSI/TIA/EIA 568-A, ISO/IEC 11801-1995). Cat 5 cabling
                     was designed for the use of 2 cable pairs, but Gigabit Ethernet uses 4 pairs. Existing Cat 5 wiring
                     infrastructure must be tested to ensure it can effectively support Gigabit rates. New installations of Gigabit
                     Ethernet over copper should use at least Cat 5e cabling or, better, Cat 6.


           Note      For more information on the cabling requirements of 1000BaseT refer to the document “Gigabit Ethernet
Over Copper Cabling” published on www.gigabitsolution.com.


                     Catalyst 6500
                     The Catalyst 6500 supports all the technologies and features required for implementing a highly available,
                     secure, and scalable data center intrastructure. The platform used in this design guide for the access
                     switches is the 6506 because it provides enough slots for access ports and service modules together with
                     efficient rack space utilisation.
                     A typical configuration for the Catalyst 6500 in the access layer is as follows:
                      •   Single or dual supervisors (two supervisors are recommended for single-homed servers)
                      •   IDSM-2
•   Access ports for the servers, 10/100/1000 linecards: WS-X6516-GE-TX – Jumbo (9216 B), fabric attached at 8 Gbps
 •   Gigabit linecard for uplink connectivity: WS-X6516-GBIC or WS-X6516A-GBIC – Jumbo (9216 B), fabric attached at 8 Gbps


           Note      It is possible to attach 1000BaseT GBIC adapters to Optical Gigabit linecards by using the WS-G5483
                     GBIC

If the Catalyst 6506 is upgraded to Sup720, the Sup720 is plugged into slot 5 or slot 6. For this reason, when using Sup2, it is practical to keep either slot empty for a possible upgrade or to insert a fabric module. When upgrading the system to Sup720, you can keep using the WS-X6516-GE-TX, WS-X6516-GBIC, and WS-X6516A-GBIC linecards.








                           Catalyst 4500
The Catalyst 4500, which can also be used as an access switch in the data center, is a modular switch available with the following chassis types:
                            •   4503: 3 slots, 7 RUs
                            •   4506: 6 slots, 10 RUs
•   4507R: 7 slots, 11 RUs (slots 1 and 2 are reserved for the supervisors and do not support linecards)
Only the 4507R supports dual supervisors. A typical configuration with supervisor redundancy and Layer 2 access would be as follows:
•   Dual Sup2-Plus (mainly Layer 2, plus static routing and RIP) or dual Supervisor IV (for Layer 3 routing protocol support with hardware CEF)
                            •   Gigabit copper attachment for servers, which can use one of the following:
•   WS-X4306-GB with copper GBICs (WS-G5483)
                            •   24-port 10/100/1000 WS-X4424-GB-RJ45
                            •   12-port 1000BaseT linecard WS-X4412-2GB-T
                            •   Gigabit fiber attachment for servers, which can use a WS-X4418-GB (this doesn’t support copper
                                GBICs)
•   Gigabit linecard for uplink connectivity: WS-X4306-GB – Jumbo (9198 B)


                Note       Jumbo frames are only supported on non-oversubscribed ports.

When internal redundancy is not required, you don’t need to use a 4507R chassis, and you can use a Supervisor III for Layer 3 routing protocol support and CEF switching in hardware.


                           Catalyst 3750
The Catalyst 3750 is a stackable switch that supports Gigabit Ethernet, such as the 24-port 3750G-24TS with 10/100/1000 ports and 4 SFP ports for uplink connectivity. Several 3750s can be stacked together to logically form a single switch. In this case, you could use 10/100/1000 switches (3750-24T) stacked with an SFP-based switch (3750G-12S) for EtherChannel uplinks.
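A cross-stack EtherChannel uplink from such a stack might look like the following minimal sketch; the interface numbers and channel group are hypothetical:

    interface range GigabitEthernet1/0/25 , GigabitEthernet2/0/25
     description uplinks to the aggregation layer (example ports on two stack members)
     switchport trunk encapsulation dot1q
     switchport mode trunk
     channel-group 1 mode on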


Software Recommendations
Because of continuous improvements in the features that are supported on the access switch platforms described in this design document, it isn't possible to give a single recommendation on the software release you should deploy in your data center.
The choice of the software release depends on the hardware that the switch needs to support and on the stability of a given version of code. In a data center design, you should use a release of code that has been available for a long time, has gone through several rebuilds, and whose newer builds contain only bug fixes.
When using Catalyst family products, you must choose between the Supervisor IOS (Cisco IOS) operating system and the Catalyst OS (CatOS) operating system. These two operating systems have some important differences in the CLI, the features supported, and the hardware supported.






                        This design document uses supervisor IOS on the Catalyst 6500 aggregation switches because it supports
                        Distributed Forwarding Cards, and because it was the first operating system to support the Catalyst service
                        modules. Also, it is simpler to use a single standardized image and a single operating system on all the data
center devices.
                        The following summarizes the features introduced with different releases of the software:
                          •   12.1(8a)E—Support for Sup2 and CSM
                          •   12.1(13)E—Support for Rapid PVST+ and for FWSM, NAM2 with Sup2, and SSLSM with Sup2
                          •   12.1(14)E—Support for IDSM-2 with Sup2
                          •   12.1(19)E—Support for some of the 6500 linecards typically used in data centers and SSHv2
                        This design guide is based on testing with Release 12.1(19)Ea1.



Data Center Multi-Layer Design
                        This section describes the design of the different layers of the data center infrastructure. It includes the
                        following topics:
•   Core Layer
 •   Aggregation and Access Layer
 •   Service Switches
 •   Server Farm Availability
 •   Load-Balanced Servers


Core Layer
                        The core layer in an enterprise network provides connectivity among the campus buildings, the private
                        WAN network, the Internet edge network and the data center network. The main goal of the core layer is to
                        switch traffic at very high speed between the modules of the enterprise network. The configuration of the
                        core devices is typically kept to a minimum, which means pure routing and switching. Enabling additional
functions might degrade the performance of the core devices.
                        There are several possible types of core networks. In previous designs, the core layer used a pure Layer 2
                        design for performance reasons. However, with the availability of Layer 3 switching, a Layer 3 core is as
                        fast as a Layer 2 core. If well designed, a Layer 3 core can be more efficient in terms of convergence time
                        and can be more scalable.
                        For an analysis of the different types of core, refer to the white paper available on www.cisco.com:
                        “Designing High-Performance Campus Intranets with Multilayer Switching” by Geoff Haviland.
The data center described in this design guide connects to the core using Layer 3 links. The data center network is summarized toward the core, and the core injects a default route into the data center network. Some specific applications require injecting host routes (/32) into the core.
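One possible way to implement this, assuming OSPF as the IGP and the aggregation switches acting as ABRs for a dedicated data center area, is sketched below; the process number, area, and prefixes are hypothetical:

    router ospf 10
     ! summarize the server farm subnets toward the core (area 0)
     area 10 range 10.20.0.0 255.255.0.0
     ! configuring area 10 as a stub causes a default route to be injected into the data center area
     area 10 stub
     ! server-facing VLAN interfaces do not need to form adjacencies
     passive-interface Vlan10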


Aggregation and Access Layer
The access layer provides port density to the server farm, while the aggregation layer collects traffic from the access layer and connects the data center to the core. The aggregation layer is also the preferred
                           attachment point for mainframes and the attachment point for caches used in Reverse Proxy Cache mode.
                           Security and application service devices (such as load balancing devices, SSL offloading devices, firewalls
                           and IDS devices) are deployed either at the aggregation or access layer. Service devices deployed at the
aggregation layer are shared among all the servers, while service devices deployed at the access layer
                           provide benefit only to the servers that are directly attached to the specific access switch.
                           The design of the access layer varies depending on whether you use Layer 2 or Layer 3 access. Layer 2
                           access is more efficient for sharing aggregation layer services among the servers. For example, to deploy a
                           firewall that is used by all the servers in the data center, deploy it at the aggregation layer. The easiest
                           implementation is with the firewall Layer 2 adjacent to the servers because the firewall should see both
                           client-to-server and server-to-client traffic.
                           Security and application services are provided by deploying external appliances or service modules. The
                           Cisco preferred architecture for large-scale server farms uses service modules for improved integration and
                           consolidation. A single service module can often replace multiple external appliances with a single linecard.
                           Figure 1 shows the aggregation switches with firewalling, IDS, load balancing, SSL offloading and NAM in
                           the same switch. This configuration needs to be customized for specific network requirements and is not the
                           specific focus of this document. For information about designing data centers with service modules, refer to
                           .


Service Switches
The architecture shown in Figure 1 is characterized by a high density of service modules on each aggregation switch, which limits the number of ports available for uplink connectivity. It is also possible that the code
                           versions required by the service modules may not match the software version already used on the
                           aggregation switches in the data center environment.
Figure 2 illustrates the use of service switches in a data center. Service switches are Catalyst 6500 switches populated with service modules and dual-attached to the aggregation switches. They allow higher port density at the aggregation layer and separate the code requirements of the service modules from those of the aggregation switches.








Figure 2        Data Center Architecture with Service Switches

[Figure: enterprise campus core, aggregation layer with dual-attached service switches, access layer, and mainframe; the service devices shown are load balancer, firewall, SSL offloader, cache, network analysis, and IDS sensor.]

Using service switches is very effective when not all the traffic requires the use of service devices. Traffic that doesn't require these services can take the path to the core directly through the aggregation switches. For example, by installing a Content Switching Module in a service switch, the servers that require load balancing are configured on a “server VLAN” that brings the traffic to the service switches. Servers that don’t require load balancing are configured on a VLAN that is terminated on the aggregation switches.
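A minimal sketch of such a CSM “server VLAN” configuration follows; the slot number, VLANs, addresses, and names are hypothetical:

    module ContentSwitchingModule 4
     vlan 20 server
      ip address 10.20.20.2 255.255.255.0
     !
     serverfarm WEBFARM
      real 10.20.20.10
       inservice
     !
     vserver WEB-VIP
      virtual 10.20.10.100 tcp www
      serverfarm WEBFARM
      inservice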
                        On the other hand, in a server farm, all the servers are typically placed behind one or more Firewall Service
                        Modules (FWSM). Placing an FWSM in a service switch would require all the traffic from the server farm
                        to flow through the service switch and no traffic would use the aggregation switches for direct access to the
                        core. The only benefit of using a service switch with FWSM is an increased number of uplink ports at the
                        aggregation layer. For this reason, it usually makes more sense to place an FWSM directly into an
                        aggregation switch.
                        By using service switches, you can gradually move the servers behind service modules and eventually
                        replace the aggregation switches with the service switches.


Server Farm Availability
Server farms in a data center have different availability requirements depending on whether they host business-critical applications or applications with less stringent availability requirements, such as
                           development applications. You can meet availability requirements by leveraging specific software
                           technologies and network technologies, including the following:
•   Applications can be load-balanced either with a network device or with clustering software
•   Servers can be multi-homed with multiple NIC cards
•   Access switches can provide maximum availability if deployed with dual supervisors


Load-Balanced Servers
Load-balanced servers are located behind a load balancer, such as the CSM. Load-balanced server farms typically include the following kinds of servers:
                            •   Web and application servers
                            •   DNS servers
                            •   LDAP servers
                            •   RADIUS servers
                            •   TN3270 servers
                            •   Streaming servers


                Note       The document at the following URL outlines some of the popular applications of load balancing:
                           http://www.cisco.com/warp/public/cc/pd/cxsr/400/prodlit/sfarm_an.htm

                           Load-balanced server farms benefit from load distribution, application monitoring, and application-layer
                           services, such as session persistence. On the other hand, while the 4 Gbps throughput of a CSM is sufficient
                           in most client-to-server environments, it could be a bottleneck for bulk server-to-server data transfers in
                           large-scale server farms.
                           When the server farm is located behind a load balancer, you may need to choose one of the following
                           options to optimize server-to-server traffic:
                            •   Direct Server Return
                            •   Performing client NAT on the load balancer
                            •   Policy Based Routing
                           The recommendations in this document apply to network design with a CSM and should be deployed before
                           installing the CSM.
                           A key difference between load-balanced servers and non-load balanced servers is the placement of the
                           default gateway. Non-load balanced servers typically have their gateway configured as a Hot Standby
Router Protocol (HSRP) address on the router inside the Catalyst 6500 switch or on the firewall device.
                           Load-balanced servers may use the IP address of the load balancing device as their default gateway.
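For non-load balanced servers, the HSRP default gateway on the aggregation layer might look like the following minimal sketch; the VLAN, addresses, group number, and priority are hypothetical:

    interface Vlan10
     description default gateway for a server VLAN (example)
     ip address 10.10.10.2 255.255.255.0
     standby 1 ip 10.10.10.1
     standby 1 priority 110
     standby 1 preempt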


                           Levels of Server Availability
                           Each enterprise categorizes its server farms based on how critical they are to the operation of the business.
Servers that are used in production and handle sales transactions are often dual-homed and configured for
                           “switch fault tolerance.” This means the servers are attached with two NIC cards to separate switches, as
shown in Figure 1. This allows performing maintenance on one access switch without affecting access to the server.
                      Other servers, such as those used for developing applications, may become inaccessible without
                      immediately affecting the business. You can categorize the level of availability required for different servers
                      as follows:
•   Servers configured with multiple NIC cards, each attached to a different access switch (switch fault tolerance), provide the maximum possible availability. This option is typically reserved for servers hosting business-critical applications.
•   Development servers could also use two NICs that connect to a single access switch that has two supervisors. This configuration of the NIC cards is known as “adapter fault tolerance.” The two NICs should be attached to different linecards.
                        •   Development servers that are less critical to the business can use one NIC connected to a single
                            access switch (which has two supervisors)
                        •   Development servers that are even less critical can use one NIC connected to a single access switch
                            which has a single supervisor
                      The use of access switches with two supervisors provides availability for servers that are attached to a single
                      access switch. The presence of two supervisors makes it possible to perform software upgrades on one
                      supervisor with minimal disruption of the access to the server farm.
                      Adapter fault tolerance means that the server is attached with each NIC card to the same switch but each
                      NIC card is connected to a different linecard in the access switch.
                      Switch fault tolerance and adapter fault tolerance are described in Chapter 3, “HA Connectivity for
                      Servers and Mainframes: NIC Teaming and OSA/OSPF Design.”


                      Multi-Tier Server Farms
                      Today, most web-based applications are built as multi-tier applications. The multi-tier model uses software
                      running as separate processes on the same machine, using interprocess communication, or on different
                      machines with communications over the network. Typically, the following three tiers are used:
                        •   Web-server tier
                        •   Application tier
•   Database tier
                      Multi-tier server farms built with processes running on separate machines can provide improved resiliency
                      and security. Resiliency is improved because a server can be taken out of service while the same function is
                      still provided by another server belonging to the same application tier. Security is improved because an
                      attacker can compromise a web server without gaining access to the application or to the database.
                      Resiliency is achieved by load balancing the network traffic between the tiers, and security is achieved by
                      placing firewalls between the tiers. You can achieve segregation between the tiers by deploying a separate
                      infrastructure made of aggregation and access switches or by using VLANs.
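When using logical segregation, each tier maps to a VLAN on the data center switches; a minimal sketch follows, with hypothetical VLAN numbers and names:

    vlan 110
     name web-tier
    vlan 120
     name application-tier
    vlan 130
     name database-tier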
                      Figure 3 shows the design of multi-tier server farms with physical segregation between the server farm
                      tiers. Side (a) of the figure shows the design with external appliances, while side (b) shows the design
with service modules.








Figure 3        Physical Segregation in a Server Farm with Appliances (a) and Service Modules (b)

[Figure: web servers, application servers, and database servers deployed on physically separate tiers; side (a) shows the design with external appliances and side (b) shows the design with service modules.]

                          The design shown in Figure 4 uses VLANs to segregate the server farms. The left side of the illustration
                          (a) shows the physical topology, while the right side (b) shows the VLAN allocation across the service
                          modules: firewall, load balancer and switch. The firewall is the device that routes between the VLANs,
                          while the load balancer, which is VLAN-aware, also enforces the VLAN segregation between the server
                          farms. Notice that not all the VLANs require load balancing. For example, the database in the example
                          sends traffic directly to the firewall.

Figure 4        Logical Segregation in a Server Farm with VLANs

[Figure: side (a) shows the physical topology; side (b) shows the VLAN allocation for the web, application, and database server farms across the firewall, load balancer, and switch.]

                          The advantage of using physical segregation is performance, because each tier of servers is connected to
                          dedicated hardware. The advantage of using logical segregation with VLANs is the reduced complexity of
                          the server farm. The choice of one model versus the other depends on your specific network performance
                          requirements and traffic patterns.






Data Center Protocols and Features
                        This section provides background information about protocols and features that are helpful when designing
                        a data center network for high availability, security and scalability. It includes the following topics:
                          •   Layer 2 Protocols
                          •   Layer 3 Protocols
                          •   Security in the Data Center


Layer 2 Protocols
                        Data centers are characterized by a wide variety of server hardware and software. Applications may run on
                        various kinds of server hardware and operating systems. These applications may be developed on platforms
                        such as IBM WebSphere, BEA WebLogic, Microsoft .NET, or Oracle 9i, or they may be commercial
                        applications developed by companies such as SAP, Siebel, or Oracle. Most server farms are accessible
                        using a routed IP address, but some use non-routable VLANs.
                        All these varying requirements determine the traffic path that client-to-server and server-to-server traffic
                        takes in the data center. These factors also determine how racks are built, because server farms of the same
                        kind are often mounted in the same rack based on the server hardware type and are connected to an access
                        switch in the same rack. These requirements also determine how many VLANs are present in a server farm,
                        because servers that belong to the same application often share the same VLAN.
                        The access layer in the data center is typically built at Layer 2, which allows better sharing of service
                        devices across multiple servers, and allows the use of Layer 2 clustering, which requires the servers to be
                        Layer 2 adjacent. With Layer 2 access, the default gateway for the servers is configured at the aggregation
                        layer. Between the aggregation and access layers there is a physical loop in the topology that ensures a
                        redundant Layer 2 path in case one of the links from the access to the aggregation fails.
                        Spanning-tree protocol (STP) ensures a logically loop-free topology over a physical topology with loops.
                        Historically, STP (IEEE 802.1d and its Cisco equivalent PVST+) has often been dismissed because of slow
                        convergence and frequent failures that are typically caused by misconfigurations. However, with the
                        introduction of IEEE 802.1w, spanning-tree is very different from the original implementation. For example,
                        with IEEE 802.1w, BPDUs are not relayed, and each switch generates BPDUs after an interval determined
                        by the “hello time.” Also, this protocol is able to actively confirm that a port can safely transition to
                        forwarding without relying on any timer configuration. There is now a real feedback mechanism that takes
                        place between IEEE 802.1w compliant bridges.
                        We currently recommend using Rapid Per VLAN Spanning Tree Plus (Rapid PVST+), which is a combination
                        of 802.1w and PVST+. For higher scalability, you can use 802.1s/1w, also called Multiple Spanning Tree
                        (MST). You get higher scalability with 802.1s because it limits the number of spanning-tree instances, but
                        it is less flexible than PVST+ if you use bridging appliances. The use of 802.1w in both Rapid PVST+ and
                        MST provides faster convergence than traditional STP. We also recommend other Cisco enhancements to
                        STP, such as LoopGuard and Unidirectional Link Detection (UDLD), in both Rapid PVST+ and MST
                        environments.
                        We recommend Rapid PVST+ for its flexibility and speed of convergence. Rapid PVST+ supersedes
                        BackboneFast and UplinkFast, making the configuration easier to deploy than regular PVST+. Rapid PVST+
                        also allows extremely easy migration from PVST+.
                        Interoperability with IEEE 802.1d switches that do not support Rapid PVST+ is ensured by building the
                        “Common Spanning-Tree” (CST) by using VLAN 1. Cisco switches build a CST with IEEE 802.1d
                        switches, and the BPDUs for all the VLANs other than VLAN 1 are tunneled through the 802.1d region.
                        Cisco data centers feature a fully-switched topology, where no hub is present and all links are full-duplex.
                        This delivers great performance benefits as long as flooding is not present. Flooding should occur only
                        during topology changes, when it allows fast convergence of the Layer 2 network. Technologies that are
                        based on flooding introduce performance degradation and are also a security concern. This design guide
                        provides information on how to reduce the likelihood of flooding. Some technologies rely on flooding, but
                        you should use the equivalent unicast-based options that are often provided.
                        Flooding can also be the result of a security attack, which is why port security should be configured on the
                        access ports, together with other well-understood technologies such as PortFast. You complete the
                        Layer 2 configuration with the following configuration steps:
                           Layer 2 configuration with the following configuration steps:

                Step 1      Assign the root and secondary root switches
                Step 2      Configure rootguard on the appropriate links
                Step 3      Configure BPDU guard on the access ports connected to the servers



                        By using these technologies you protect the Layer 2 topology from accidental or malicious changes that
                        could alter the normal functioning of the network.
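                        A minimal configuration sketch of these steps in Cisco IOS syntax follows. The VLAN range, interface
                        names, and port-security limit are hypothetical, and the exact commands available depend on the platform
                        and software release.

                            ! Aggregation switches: enable Rapid PVST+ and set deterministic root placement
                            spanning-tree mode rapid-pvst
                            spanning-tree vlan 10-20 root primary       ! on the primary aggregation switch
                            ! spanning-tree vlan 10-20 root secondary   ! on the peer aggregation switch
                            spanning-tree loopguard default
                            udld aggressive                             ! aggressive UDLD on fiber uplinks
                            !
                            ! Links that should never accept a superior BPDU (for example, toward the access layer)
                            interface GigabitEthernet1/1
                             spanning-tree guard root
                            !
                            ! Access ports connected to servers
                            interface GigabitEthernet2/1
                             switchport
                             switchport mode access
                             switchport access vlan 10
                             switchport port-security
                             switchport port-security maximum 2         ! MAC address limit is an assumption
                             spanning-tree portfast
                             spanning-tree bpduguard enable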
                        The Layer 2 configuration also needs to take into account the presence of dual-attached servers.
                        Dual-attached servers are used for redundancy and increased throughput. The configurations in this design
                        guide ensure compatibility with dual-attached servers.


Layer 3 Protocols
                           The aggregation layer typically provides Layer 3 connectivity from the data center to the core. Depending
                           on the requirements and the design, the boundary between Layer 2 and Layer 3 at the aggregation layer can
                           be the Multilayer Switching Feature Card (MSFC), which is the router card inside the Catalyst supervisor,
                           the firewalls, or the content switching devices. You can achieve routing either with static routes or with
                           routing protocols such as EIGRP and OSPF. This design guide covers routing using EIGRP and OSPF.
                        Network devices, such as content switches and firewalls, often have routing capabilities. Besides supporting
                        the configuration of static routes, they often support RIP and sometimes even OSPF. Having routing
                        capabilities simplifies the network design, but you should be careful not to misuse this functionality. The
                        routing support that a content switch or a firewall provides is not the same as the support that a router
                        provides, simply because the main function of a content switch or a firewall is not routing. Consequently,
                        you might find that some of the options that allow you to control how the topology converges (for example,
                        configuration of priorities) are not available. Moreover, the routing table of these devices may not
                        accommodate as many routes as a dedicated router.
                           The routing capabilities of the MSFC, when used in conjunction with the Catalyst supervisor, provide traffic
                        switching at wire speed in an ASIC. Load balancing between equal-cost routes is also done in hardware.
                        These capabilities are not available in a content switch or a firewall.
                        We recommend using static routing between the firewalls and the MSFC for faster convergence time in case
                        of firewall failures, and dynamic routing between the MSFC and the core routers. You can also use dynamic
                        routing between the firewalls and the MSFC, but this is subject to slower convergence in case of firewall
                        failures. Delays are caused by the process of neighbor establishment, database exchange, running the SPF
                        algorithm and installing the Layer 3 forwarding table in the network processors.
                      Whenever dynamic routing is used, routing protocols with MD5 authentication should be used to prevent the
                      aggregation routers from becoming neighbors with rogue devices. We also recommend tuning the OSPF
                      timers to reduce the convergence time in case of failures of Layer 3 links, routers, firewalls, or LPARs (in a
                      mainframe).
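                        A sketch of this approach on the MSFC follows, assuming hypothetical VLAN interfaces, addresses, key
                        values, and timer settings; tune the values to your own convergence and stability requirements.

                            ! Dynamic routing toward the core with MD5 authentication and tuned timers
                            interface Vlan5
                             description Link toward the core (hypothetical)
                             ip address 10.1.5.2 255.255.255.0
                             ip ospf message-digest-key 1 md5 <key>
                             ip ospf hello-interval 1
                             ip ospf dead-interval 3
                            !
                            router ospf 100
                             area 0 authentication message-digest
                             network 10.1.5.0 0.0.0.255 area 0
                             passive-interface default
                             no passive-interface Vlan5
                            !
                            ! Static routing toward the firewall for the server-farm subnets (hypothetical)
                            ip route 10.20.0.0 255.255.0.0 10.1.10.10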
                      Servers use static routing to respond to client requests. The server configuration typically contains a single
                      default route pointing to a router, a firewall, or a load balancer. The most appropriate device to use as the
                      default gateway for servers depends on the security and performance requirements of the server farm. Of
                      course, the highest performance is delivered by the MSFC.
                        You should configure servers with a default gateway address that is made highly available through the use
                        of gateway redundancy protocols such as HSRP, Virtual Router Redundancy Protocol (VRRP), or the
                        Gateway Load Balancing Protocol (GLBP). You can tune the gateway redundancy protocols for
                        convergence in less than one second, which makes router failures almost unnoticeable to the server farm.


            Note      The software release used to develop this design guide only supports HSRP.
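                        The following sketch shows an HSRP group tuned for subsecond convergence with authentication enabled;
                        the VLAN, addresses, priority, timers, and key string are hypothetical. The peer aggregation switch is
                        configured with the same virtual IP address and a lower priority.

                            interface Vlan10
                             description Server farm default gateway (hypothetical addressing)
                             ip address 10.10.10.2 255.255.255.0
                             standby 1 ip 10.10.10.1
                             standby 1 priority 110
                             standby 1 preempt delay minimum 60
                             standby 1 timers msec 250 msec 750
                             standby 1 authentication <key-string>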

                      Mainframes connect to the infrastructure using one or more OSA cards. If the mainframe uses Enterprise
                      System Connections (ESCON), it can be connected to a router with a Channel Interface Processor
                      (CIP/CPA). The CIP connects to the mainframes at the channel level. By using an ESCON director, multiple
                      hosts can share the same CIP router. Figure 1 shows the attachment for a mainframe with an OSA card.
                      The transport protocol for mainframe applications is IP, for the purpose of this design guide. You can
                      provide clients direct access to the mainframe or you can build a multi-tiered environment so clients can use
                      browsers to run mainframe applications. The network not only provides port density and Layer 3 services,
                      but can also provide the TN3270 service from a CIP/CPA card. The TN3270 can also be part of a
                      multi-tiered architecture, where the end client sends HTTP requests to web servers, which, in turn,
                      communicate with the TN3270 server. You must build the infrastructure to accommodate these requirements
                      as well.
                      You can configure mainframes with static routing just like other servers, and they also support OSPF
                      routing. Unlike most servers, mainframes have several internal instances of Logical Partitions (LPARs)
                      and/or Virtual machines (VMs), each of which contains a separate TCP/IP stack. OSPF routing allows the
                      traffic to gain access to these partitions and/or VMs using a single or multiple OSA cards.
                      You can use gateway redundancy protocols in conjunction with static routing when traffic is sent from a
                      firewall to a router or between routers. When the gateway redundancy protocol is tuned for fast convergence
                      and static routing is used, recovery from router failures is very quick. When deploying gateway redundancy
                      protocols, we recommend enabling authentication to avoid negotiation with unauthorized devices.
                      Routers and firewalls can provide protection against attacks based on source IP spoofing, by means of
                      unicast Reverse Path Forwarding (uRPF). The uRPF feature checks the source IP address of each packet
                      received against the routing table. If the source IP is not appropriate for the interface on which it is received,
                      the packet is dropped. We recommend that uRPF be enabled on the Firewall module in the data center
                      architecture described in this design guide.
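                        The following sketch enables uRPF both on a router interface and, using the PIX-style syntax of the
                        FWSM, on the firewall interfaces; the interface names and addressing are hypothetical.

                            ! On an MSFC or router interface
                            interface Vlan20
                             ip address 10.10.20.2 255.255.255.0
                             ip verify unicast reverse-path
                            !
                            ! On the Firewall Services Module (PIX-style syntax), per interface
                            ip verify reverse-path interface inside
                            ip verify reverse-path interface outside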








Security in the Data Center
                           Describing the details of security in the data center is beyond the scope of this document, but it is important
                           to be aware of it when building the infrastructure. Security in the data center is the result of Layer 2 and
                           Layer 3 configurations (such as routing authentication, uRPF, and so forth) and the use of security
                           technologies such as SSL, IDS, firewalls, and monitoring technologies such as network analysis products.
                           Firewalls provide Layer 4 security services such as Initial Sequence Number randomization, TCP intercept,
                           protection against fragment attacks and opening of specific Layer 4 ports for certain applications (fixups).
                           An SSL offloading device can help provide data confidentiality and non-repudiation, while IDS devices
                           capture malicious activities and block traffic generated by infected servers on the access switches. Network
                           analysis devices measure network performance, port utilization, application response time, QoS, and other
                           network activity.


                Note       Strictly speaking, network analysis devices are not security devices. They are network management
                           devices, but by observing network and application traffic it is sometimes possible to detect malicious
                           activity.

                           Some of the functions provided by the network, such as SYN cookies and SSL, may be available on
                           server operating systems. However, implementing these functions in the network greatly simplifies the
                           management of the server farm because it reduces the number of configuration points for each technology.
                           Instead of configuring SSL on hundreds of servers, you just configure SSL on a pair of SSL offloading
                           devices.
                           Firewalls and SSL devices see the session between client and server and directly communicate with both
                           entities. Other security products, such as IDS devices or NAM devices, only see a replica of the traffic
                           without being on the main traffic path. For these products to be effective, the switching platforms need to
                           support technologies such as VACL capture and Remote SPAN in hardware.
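                           As an illustration, the following sketch copies the traffic of a server VLAN to a sensor port with a local
                           SPAN session; the VLAN number and destination interface are hypothetical, and Remote SPAN or VACL
                           capture can be used instead when the sensor is on a different switch.

                               monitor session 1 source vlan 10 rx
                               monitor session 1 destination interface GigabitEthernet2/16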



Scaling Bandwidth
                           The amount of bandwidth required in the data center depends on several factors, including the application
                           type, the number of servers present in the data center, the type of servers, and the storage technology.
                           The need for network bandwidth in the data center is driven by the large amount of data that is stored in a
                           data center and the need to quickly move this data between servers. The technologies that address these
                           needs include:
                            •   EtherChannels—Either between servers and switches or between switches
                            •   CEF load balancing—Load balancing on equal cost layer 3 routes
                            •   GigabitEthernet attached servers—Upgrading to Gigabit attached servers is made simpler by the
                                adoption of the 10/100/1000 technology
                            •   10 GigabitEthernet—10 GigabitEthernet is being adopted as an uplink technology in the data center
                            •   Fabric switching—Fabric technology in data center switches helps improve throughput in the
                                communication between linecards inside a chassis, which is particularly useful when using service
                                modules in a Catalyst switch.
                           You can increase the bandwidth available to servers by using multiple server NIC cards either in
                           load-balancing mode or in link-aggregation (EtherChannel) mode.







                      EtherChannel allows increasing the aggregate bandwidth at Layer 2 by distributing traffic on multiple links
                      based on a hash of the Layer 2, Layer 3 and Layer 4 information in the frames.
                       EtherChannels are very effective in distributing aggregate traffic over multiple physical links, but they
                       don't provide the full combined bandwidth of the aggregated links to a single flow, because the hashing
                       assigns each flow to a single physical link. For this reason, GigabitEthernet NIC cards are becoming the
                       preferred alternative to Fast EtherChannels, a trend driven by the reduced cost of copper Gigabit NICs
                       compared to fiber NICs. For a similar reason, 10 GigabitEthernet is becoming the preferred alternative to
                       Gigabit EtherChannels for data center uplinks.
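                       A sketch of a Layer 2 EtherChannel between switches follows; the interface names and channel number are
                       hypothetical, and the load-balancing keywords that are available depend on the platform and software
                       release.

                            ! Bundle two uplinks into one logical Layer 2 trunk
                            interface range GigabitEthernet1/1 - 2
                             switchport
                             switchport mode trunk
                             channel-group 1 mode desirable
                            !
                            interface Port-channel1
                             switchport
                             switchport mode trunk
                            !
                            ! Include Layer 4 information in the hash so that flows spread across the links
                            port-channel load-balance src-dst-port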
                      At the aggregation layer, where service modules are deployed, the traffic between the service modules
                      travels on the bus of the Catalyst 6500 several times. This reduces the bandwidth available for
                       server-to-server traffic. The use of the fabric optimizes the communication between fabric-connected
                       linecards and fabric-attached service modules. With the Sup720, the fabric is part of the supervisor itself.
                       With the Sup2, the fabric is available as a separate module.


           Note       Not all service modules are fabric attached. Proper design should ensure the best utilization of the
                      service modules within the Catalyst 6500 chassis.

                      The maximum performance that a Catalyst switch can deliver is achieved by placing the servers Layer 2
                      adjacent to the MSFC interface. Placing service modules, such as firewalls or load balancers, in the path
                      delivers high performance load balancing and security services, but this design doesn’t provide the
                      maximum throughput that the Catalyst fabric can provide.
                       As a result, servers that do not require load balancing should not be placed behind a load balancer, and if
                       they require high-throughput transfers across different VLANs, you might want to place them adjacent to
                       an MSFC interface. The FWSM provides ~5.5 Gbps of throughput and the CSM provides ~4 Gbps of
                       throughput.



Network Management
                      The management of every network device in the data center needs to be secured to avoid unauthorized
                       access. This basic concept applies in general, but it is even more important in this design because the
                       firewall device is deployed as a module inside the Catalyst 6500 chassis. You need to ensure that nobody
                       changes the configuration of the switch to bypass the firewall.
                       You can enforce secure management access by using Access Control Lists (ACLs), Authentication,
                       Authorization, and Accounting (AAA), and Secure Shell (SSH). We recommend using a Catalyst IOS
                       software release greater than 12.1(19)E1a to take advantage of SSHv2.
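                       The following sketch combines these elements; the management subnet, server addresses, and credentials
                       are placeholders, and the AAA method lists should match your own TACACS+ or RADIUS deployment.

                            ! Restrict device management to a hypothetical management subnet
                            access-list 10 permit 10.100.0.0 0.0.0.255
                            !
                            aaa new-model
                            aaa authentication login default group tacacs+ local
                            tacacs-server host 10.100.0.10 key <shared-secret>
                            username admin password <fallback-password>
                            !
                            hostname agg-switch-1
                            ip domain-name example.com
                            crypto key generate rsa
                            ip ssh version 2
                            !
                            line vty 0 4
                             access-class 10 in
                             transport input ssh
                             login authentication default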
                       You should deploy syslog at the informational level, and when available, syslogs should be sent to a
                       server rather than stored in the switch or router buffer. When a reload occurs, syslogs stored in the buffer
                       are lost, which makes their use in troubleshooting difficult. Disable console logging during normal
                       operations.
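                       A minimal syslog sketch, assuming a hypothetical syslog server address, follows:

                            logging 10.100.0.20
                            logging trap informational
                            no logging console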
                      Configuration management is another important aspect of network management in the data center. Proper
                      management of configuration changes can significantly improve data center availability. By periodically
                      retrieving and saving configurations and by auditing the history of configuration changes you can
                      understand the cause of failures and ensure that managed devices comply with the standard configurations.
                       Software management is critical for achieving maximum data center availability. Before upgrading the
                       software to a new release, you should verify that the new image is compatible with the installed hardware.
                       Network management tools can retrieve this information from Cisco Connection Online and compare it with
                       the hardware present in your data center. Only after you are sure that the requirements are met should you
                       distribute the image to all the devices. You can use software management tools to retrieve the bug
                       information associated with device images and compare it to the bug information for the installed hardware
                       to identify the relevant bugs.
                          You can use Cisco Works 2000 Resource Manager Essentials (RME) to perform configuration management,
                          software image management, and inventory management of the Cisco data center devices. This requires
                          configuring the data center devices with the correct SNMP community strings. You can also use RME as the
                          syslog server. We recommend RME version 3.5 with Incremental Device Update v5.0 (for Sup720, FWSM,
                          and NAM support) and v6.0 (for CSM support).
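                       The following sketch restricts the SNMP community strings used by RME to the management stations; the
                       community strings and access list are placeholders and should follow your own security policy.

                            access-list 11 permit 10.100.0.0 0.0.0.255
                            snmp-server community <ro-string> RO 11
                            snmp-server community <rw-string> RW 11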




INDEX

                                                   Catalyst 3750               12
Numerics
                                                   Catalyst 4500               11
10 Gigabit Ethernet            7, 10               Catalyst 6500
802.1      18                                        6506       11
                                                     6509       8
                                                     6513       8
A
                                                     form factors              7
AAA        21                                        rack units           7
access layer                                         service modules                7
    Catalyst 6500 hardware               11        Catalyst OS
    described        13                              SSHv2           21
    Layer 2       17                               CEF load balancing                   20
access ports, BPDU guard                 18        Channel Interface Processor
adapter fault tolerance             16               see CIP/CPA
aggregation layer                                  CIP/CPA
    described        13                              Layer 3 mainframe connectivity                            19
    Layer 3 protocols          18                  Cisco IOS software releases recommended                           12
application monitoring              15             Cisco Works 2000 Resource Manager Essentials
architecture, illustrated           6                see RME
auditing        21                                 client NAT             15
Authentication Authorization and Accounting        configuration management                      21
    see AAA                                        Content Accelerator 2                     9
availability, service classes             15       Content Service Module
                                                     see CSM
                                                   Content Service Switch
B
                                                     see CSS
BackboneFast              18                       convergence
bandwidth, scaling             20                    Layer 2         18
bottlenecks server-to-server                  15   copper cable for Fast Ethernet                     10
BPDUs                                              core layer, described                 12
    described        18                            CSM      7
                                                     performance               9
                                                     throughput               21
C                                                  CSS
Cat 5 cabling          10                            11500 SSL decryption module                           9





    described    9
                                                                        G

                                                                        Gateway Load Balancing Protocol
D                                                                           see GLBP
data confidentiality             20                                     GBICs      8

default gateway                                                         GLBP, not supported               19

    placement options            19
    placement with load balancing                    15
                                                                        H
Direct Server Return              15
dual-attached servers                                                   hardware configuration                 8
    described    18                                                     hardware recommendations                        7
                                                                        high availability
                                                                            server farms        14
E                                                                       Hot Standby Routing Protocol
Enterprise System Connections                                               see HSRP
    see ESCON                                                           HSRP
ESCON                                                                       default gateway availability                    19

    Layer 3 connections               19                                    without load balancing                 15

EtherChannels          20                                               HTTP requests
Ethernet                                                                    with TN3270         19

    types of    10

                                                                        I
F                                                                       IDS
fabric-connected linecards                 21                               devices    20

fabric module                                                               sensors    9

    see SFM2                                                            IDSM
fabric switching           20                                               performance         9

fiber cable for Fast Ethernet                   10                      image management                 21

Firewall Service Module                                                 Incremental Device Update v5.0                           21

    see FWSM                                                            infrastructure
flooding                                                                    defined    5

    security attack         18                                              illustrated     6

full-duplex      18                                                     interoperability
FWSM                                                                        Rapid PVST+             18

    performance        9                                                interprocess communication                      16

    server farms, protecting               14                           Intrusion Detection Service Module
    throughput        21                                                    see IDSM
                                                                        inventory management                  21





J                                                                  N

jumbo frames                 11                                    NAM       7
                                                                   network analysis devices               20
                                                                   Network Analysis Module
L
                                                                       see NAM
Layer 3                                                            network management               21
    protocols           18                                         non-repudiation            20
Layer 4 security services                           20
load balancing
                                                                   O
    CEF     20
    default gateway placement                            15        optical service adapters
    servers        15                                                  see OSAs
    service modules                    7                           OSAs
    VLAN-aware                    17                                   for attaching mainframes                7
Logical Partitions                                                 OSPF
    see LPARs                                                          mainframes        19
logical segregation, illustrated                          17       oversubscription           10
LoopGuard
    recommended                   18
                                                                   P
LPARs         19
                                                                   performance, maximum                  21
                                                                   physical segregation            16
M
                                                                   PIX Firewalls         9
MAC flooding                                                       port density     7
    security attack               18                               PortFast
mainframes                                                             described    18
    attaching with OSAs                         7                  primary root switch             18
    Layer 3 protocols                      19
maximum performance                             21
                                                                   Q
MD5 authentication                         19
MSFC                                                               QoS     20
    aggregation layer                      18
    performance considerations                           21
multilayer architecture, illustrated                           6
                                                                   R
Multilayer Switching Feature Card                                  rack space utilization, improving               9
    see MSFC                                                       rack units, used by Catalyst 6500 models                 7
                                                                   Rapid Per VLAN Spanning Tree Plus






    see RPVST+                                                            service appliances
Rapid PVST+                                                                 segregating server farms                     16
    recommended              18                                                                 service appliances
recommendations                                                             described      9
    Cisco IOS software releases                    12                     service classes           15
    hardware and software                     7                           service modules
Remote SPAN                 20                                              advantages         9
resilient server farms                  16                                  fabric attached              21
reverse proxy caching                   7                                   load balancing and security                       7
RME        21                                                               segregating server farms                     16
root switches                                                               supported by Catalyst 6500                        7
    primary and secondary                     18                          service switches              13
                                                                          session persistence                 15
                                                                          SFM2 with Sup2                 8
S
                                                                          software management                      21
scaling bandwidth                 20                                      software recommendations                       7
secondary root switches                      18                           SSH
Secure Shell                                                                for network management                       21
    see SSH                                                               SSL offloading devices                    20
Secure Sockets Layer                                                      SSL Service Module
    see SSL                                                                 see SSLSM
security                                                                  SSLSM
    data center         20                                                  performance             9
    Layer 2 attacks               18                                      static routing
    Layer 4       20                                                        mainframes          19
    port     18                                                             server farms           19
    service modules               7                                       Sup720
    technologies            20                                              fabric   21
segregation                                                                 integrated fabric                8
    between tiers            16                                             upgrading from Sup2                     9
    logical       17                                                      Supervisor 2
    physical       16                                                       see Sup2
server farms                                                              Supervisor 3         11
    high availability              14                                     switch fault tolerance                   16
    logical segregation                 17                                SYN COOKIEs                   20
    multi-tier         16                                                 syslog servers        21
    physical segregation                 16
    types of servers              15
server-to-server traffic                 17, 21





T

throughput, comparing Sup2 and Sup720              8
TN3270         19



U

UDLD
    described       18
unicast Reverse Path Forwarding
    see uRPF
Unidirectional Link Detection
    see UDLD
upgrading to Sup720                 9
UplinkFast          18
uRPF
    mainframes           19



V

VACL capture             20
virtual machines
    see VMs
Virtual Router Redundancy Protocol
    see VRRP
VLAN-aware
    load balancing            17
    service modules            9
VLANs
    determining number required               17
    segregating tiers          16
VMs       19
VRRP, not supported                 19



W

web-based applications                   16




 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slides
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 

Datacenterarchitecture

CONTENTS

Data Center Infrastructure Architecture
  Data Center Architecture
  Hardware and Software Recommendations
    Aggregation Switches
    Service Appliances
    Service Modules
    Access Switches
    Software Recommendations
  Data Center Multi-Layer Design
    Core Layer
    Aggregation and Access Layer
    Service Switches
    Server Farm Availability
    Load-Balanced Servers
  Data Center Protocols and Features
    Layer 2 Protocols
    Layer 3 Protocols
    Security in the Data Center
  Scaling Bandwidth
  Network Management
Data Center Infrastructure Architecture

This document is the first in a series of four documents that provide guidance for designing and implementing a data center infrastructure that ensures high availability, security, and scalability:
• Data Center Infrastructure Architecture—Provides background information for designing a secure, scalable, and resilient data center infrastructure.
• Data Center Infrastructure Design—Describes major design issues, including routing between the data center and the core, switching within the server farm, and optimizing mainframe connectivity.
• HA Connectivity for Servers and Mainframes: NIC Teaming and OSA/OSPF Design—Provides information about server connectivity with NIC teaming and about mainframe connectivity.
• Data Center Infrastructure Configuration—Provides configuration procedures and sample listings for implementing the recommended infrastructure architecture.

This document provides background information for designing a secure, scalable, and resilient data center infrastructure. It includes the following sections:
• Data Center Architecture
• Hardware and Software Recommendations
• Data Center Multi-Layer Design
• Data Center Protocols and Features
• Scaling Bandwidth
• Network Management

Data Center Architecture

This section describes the basic architecture for a secure, scalable, and resilient data center infrastructure. The term infrastructure in this design guide refers to the Layer 2 and Layer 3 configurations that provide network connectivity to the server farm, as well as the network devices that provide security and application-related functions. Data centers are composed of devices that provide the following functions:
• Ensuring network connectivity, including switches and routers
• Providing network and server security, including firewalls and Intrusion Detection Systems (IDSs)
• Enhancing availability and scalability of applications, including load balancers, Secure Sockets Layer (SSL) offloaders, and caches

In addition, a Network Analysis Module (NAM) is typically used to monitor the functioning of the network and the performance of the server farm.
The following are critical requirements when designing the data center infrastructure to meet service-level expectations:
• High Availability—Avoiding a single point of failure and achieving fast and predictable convergence times
• Scalability—Allowing changes and additions without major changes to the infrastructure, easily adding new services, and providing support for hundreds of dual-homed servers
• Simplicity—Providing predictable traffic paths in steady and failover states, with explicitly defined primary and backup traffic paths
• Security—Preventing flooding, avoiding the exchange of protocol information with rogue devices, and preventing unauthorized access to network devices

The data center infrastructure must provide port density and Layer 2 and Layer 3 connectivity, while supporting security services provided by access control lists (ACLs), firewalls, and intrusion detection systems (IDSs). It must support server farm services such as content switching, caching, and SSL offloading, while integrating with multi-tier server farms, mainframes, and mainframe services (TN3270, load balancing, and SSL offloading). While the data center infrastructure must be scalable and highly available, it should still be simple to operate and troubleshoot, and it must easily accommodate new demands.
Figure 1 Data Center Architecture (diagram: enterprise campus core, aggregation layer, and access layer with mainframe attachment; service devices shown include load balancer, firewall, SSL offloader, cache, network analysis, and IDS sensor)

Figure 1 shows a high-level view of the Cisco Data Center Architecture. As shown, the design follows the proven Cisco multilayer architecture, including core, aggregation, and access layers. Network devices are deployed in redundant pairs to avoid a single point of failure. The examples in this design guide use the Catalyst 6500 with Supervisor 2 in the aggregation layer, Gigabit Ethernet, and Gigabit EtherChannel links.

Hardware and Software Recommendations

This section summarizes the recommended hardware and software for implementing a highly available, secure, and scalable data center infrastructure. It includes the following topics:
• Aggregation Switches
• Service Appliances and Service Modules
• Access Switches
• Software Recommendations
Aggregation Switches

The following are some of the factors to consider when choosing the aggregation layer device:
• Forwarding performance
• Density of uplink ports
• Support for 10 Gigabit Ethernet linecards
• Support for 802.1s, 802.1w, and Rapid PVST+
• Support for MPLS VPNs
• Support for hardware-based NAT
• Support for uRPF in hardware
• QoS characteristics
• Support for load balancing and security services (service modules)

At the aggregation layer, Cisco recommends using Catalyst 6500 family switches because the Catalyst 6500 chassis supports service modules for load balancing and security, including the following:
• Content Switching Module (CSM)
• SSL Services Module (SSLSM)
• Firewall Services Module (FWSM)
• Intrusion Detection Service Module (IDSM)
• Network Analysis Module (NAM)

The chassis configuration depends on the specific services you want to support at the aggregation layer, the port density of uplinks and appliances, and the need for supervisor redundancy. Load balancing and security services can also be provided by external service appliances, such as PIX Firewalls, Content Services Switches, Secure Content Accelerators, and Content Engines.

You also typically attach mainframes to the aggregation switches, especially if you configure each connection to the Open Systems Adapter (OSA) card as a Layer 3 link. In addition, you can use the aggregation switches to attach caches for Reverse Proxy Caching. You can also directly attach servers to the aggregation switches if the port density of the server farm doesn't require using access switches.

Note The Supervisor 2 (Sup2) and Sup720 are both recommended, but this design guide is intended for use with Sup2. Another design guide will describe the use of Sup720, which provides higher performance and additional functionality in hardware and is the best choice for building a 10 Gigabit Ethernet data center infrastructure.

The Catalyst 6500 is available in several form factors:
• 6503: 3 slots, 3 RUs
• 6506: 6 slots, 12 RUs
• 7606: 6 slots, 7 RUs
• 6509: 9 slots, 15 RUs
• 6513: 13 slots, 19 RUs

The 6509 and 6513 are typically deployed in the data center because they provide enough slots for access ports and service modules, such as IDS. The 6500 chassis supports a 32-Gbps shared bus, a 256-Gbps fabric (SFM2), and a 720-Gbps fabric (if using Sup720).
With a 6509, the Sup2 connects to slot 1 or 2 and the switch fabric (or the Sup720) connects to slot 5 or slot 6. With a 6513, the Sup2 connects to slot 1 or 2, and the switch fabric (or the Sup720) connects to slot 7 or slot 8. If you use the fabric module (SFM2) with Sup2, each slot in a 6509 receives 16 Gbps of channel attachment; slots 1-8 in a 6513 receive 8 Gbps and slots 9-13 receive 16 Gbps of channel attachment. If you use Sup720, which has an integrated fabric, each slot in a 6509 receives 40 Gbps of channel attachment; slots 1-8 in a 6513 receive 20 Gbps, and slots 9-13 receive 40 Gbps of channel attachment.

Catalyst 6509 Hardware Configuration

A typical configuration of a Catalyst 6509 in the aggregation layer of a data center looks like this:
• Sup2 with MSFC2
• FWSM (fabric attached at 8 Gbps)
• CSM
• SSLSM (fabric attached at 8 Gbps)
• IDSM-2 (fabric attached at 8 Gbps)
• WS-X6516A-GBIC or WS-X6516-GBIC – 16 Gigabit Ethernet fiber ports – Jumbo (9216 B) – fabric attached at 8 Gbps – for uplink connectivity with the access switches
• WS-X6516A-GBIC or WS-X6516-GBIC – 16 Gigabit Ethernet fiber ports – Jumbo (9216 B) – fabric attached at 8 Gbps – for uplink connectivity with the access switches
• WS-X6516-GE-TX – 16 10/100/1000BaseT ports – Jumbo (9216 B) – fabric attached at 8 Gbps – for servers and caches

If you use a fabric module, it plugs into slot 5 or 6. Because Sup720 has an integrated fabric, it would also plug into slot 5 or 6.

Catalyst 6513 Hardware Configuration

A typical configuration of a Catalyst 6513 in the aggregation layer of a data center looks like this:
• Sup2 with MSFC2
• FWSM (fabric attached at 8 Gbps)
• CSM
• SSLSM (fabric attached at 8 Gbps)
• IDSM-2 (fabric attached at 8 Gbps)
• NAM-2 (fabric attached at 8 Gbps)
• WS-X6516A-GBIC or WS-X6516-GBIC – 16 Gigabit Ethernet fiber ports – Jumbo (9216 B) – fabric attached at 8 Gbps – for uplink connectivity with the access switches
• WS-X6516A-GBIC or WS-X6516-GBIC – 16 Gigabit Ethernet fiber ports – Jumbo (9216 B) – fabric attached at 8 Gbps – for uplink connectivity with the access switches
• WS-X6516-GE-TX – 16 10/100/1000BaseT ports – Jumbo (9216 B) – fabric attached at 8 Gbps – for servers and caches

If you use a fabric module, it plugs into slot 7 or 8. Because Sup720 has an integrated fabric, it would also plug into slot 7 or 8. It is also good practice to use the first 8 slots for service modules because these are fabric attached with a single 8-Gbps channel. Use the remaining slots for Ethernet line cards because these might use both fabric channels.
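The Jumbo (9216 B) notation in the linecard lists above refers to jumbo frame support, which must also be enabled on each port. The following is a minimal sketch of how this is typically done in supervisor IOS by raising the interface MTU; the slot/port number is a hypothetical example, not part of the tested configuration.

! Hypothetical uplink port on a WS-X6516 linecard
interface GigabitEthernet3/1
 description Uplink to access switch
 mtu 9216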
Note When upgrading the system to Sup720, you can keep using the WS-X6516-GE-TX, WS-X6516-GBIC, and WS-X6516A-GBIC linecards.

Service Appliances

Service appliances are external networking devices that include the following:
• Content Services Switch (CSS 11506): 5 RUs, 40 Gbps of aggregate throughput, 2,000 connections per second per module (maximum 6 modules), 200,000 concurrent connections with 256 MB DRAM
• CSS11500 SSL decryption module (for the CSS11500 chassis): per module, 1,000 new transactions per second, 20,000 concurrent sessions, 250 Mbps of throughput
• PIX Firewalls (PIX 535): 3 RUs, 1.7 Gbps of throughput, 500,000 concurrent connections
• IDS sensors (IDS 4250XL): 1 RU, 1 Gbps (with the XL card)
• Cisco Secure Content Accelerator 2: 1 RU, 800 new transactions per second, 20,000 concurrent sessions, 70 Mbps of bulk transfer

The number of ports that these appliances require depends entirely on how many appliances you use and how you configure the Layer 2 and Layer 3 connectivity between the appliances and the infrastructure.

Service Modules

Security and load balancing services in the data center can be provided either with appliances or with Catalyst 6500 linecards. The choice between the two families of devices is driven by considerations of performance, rack space utilization, cabling, and, of course, features that are specific to each device. Service modules are cards that you plug into the Catalyst 6500 to provide firewalling, intrusion detection, content switching, and SSL offloading. Service modules communicate with the network through the Catalyst backplane and can be inserted without the need for additional power or network cables. Service modules provide better rack space utilization, simplified cabling, better integration between the modules, and higher performance than typical appliances.

When using service modules, certain configurations that optimize the convergence time and the reliability of the network are automatic. For example, when you use an external appliance, you need to manually configure PortFast or TrunkFast on the switch port that connects to the appliance. This configuration is automatic when you use a service module. As an example of rack space utilization, consider that a PIX 535 firewall takes 3 rack units (RUs), while a Firewall Services Module (FWSM) takes one slot in a Catalyst switch, which means that an FWSM inside a Catalyst 6513 takes (19 RUs / 13 slots) = 1.4 RUs. Another advantage of using service modules as opposed to external appliances is that service modules are VLAN aware, which makes consolidation and virtualization of the infrastructure easier.

Each service module provides a different function and takes one slot out of the Catalyst 6500. Examples of these modules include the following:
• CSM: 165,000 connections per second, 1,000,000 concurrent connections, 4 Gbps of throughput
• FWSM: 8 Gbps fabric attached. Performance numbers: 100,000 connections per second, 5.5 Gbps of throughput, 1,000,000 concurrent connections
• SSLSM: 8 Gbps fabric attached. Performance numbers: 3,000 new transactions per second, 60,000 concurrent connections, 300 Mbps of throughput
• IDSM-2: 8 Gbps fabric attached. Performance: 600 Mbps

Access Switches

This section describes how to select access switches for your data center infrastructure design and describes some of the Cisco Catalyst products that are particularly useful. It includes the following topics:
• Selecting Access Switches
• Catalyst 6500
• Catalyst 4500
• Catalyst 3750

Selecting Access Switches

The following are some of the factors to consider when choosing access layer switches:
• Forwarding performance
• Oversubscription rates
• Support for 10/100/1000 linecards
• Support for 10 Gigabit Ethernet (for uplink connectivity)
• Support for jumbo frames
• Support for 802.1s, 802.1w, and Rapid PVST+
• Support for stateful redundancy with dual supervisors
• Support for VLAN ACLs (used in conjunction with IDS)
• Support for Layer 2 security features such as port security and ARP inspection
• Support for private VLANs
• Support for SPAN and Remote SPAN (used in conjunction with IDS)
• Support for QoS
• Modularity
• Rack space and cabling efficiency
• Power redundancy

Cost often requires choosing less expensive server platforms that support only one NIC card. To provide availability for these single-homed servers, you need to use dual supervisors in the access switch. For dual supervisor redundancy to be effective, you need stateful failover at least to Layer 2. When choosing linecards or other products to use at the access layer, consider how much oversubscription a given application tolerates. You should also consider support for jumbo frames and the maximum queue size.
Modular switches support both oversubscribed and non-oversubscribed linecards. Typically, you use oversubscribed linecards as access ports for server attachment and non-oversubscribed linecards for uplink ports or channels between switches. You might need to use non-oversubscribed linecards for the server ports as well, depending on the amount of traffic that you expect a server to generate. Although various platforms can be used as access switches, this design guide uses the Catalyst 6506. Using service modules in an access switch can improve rack space utilization and reduce cabling if you deploy load balancing and security at the access layer.

From the data center design perspective, the access layer (front-end switches) must support 802.1s/1w and Rapid PVST+ to take advantage of rapid convergence. The 10/100/1000 technology allows incremental adoption of Gigabit Ethernet in the server farm because of the compatibility between Fast Ethernet NIC cards and 10/100/1000 switch linecards. 10 Gigabit Ethernet is becoming the preferred technology for uplinks within the data center and between the data center and the core.

Cabling between the servers and the switch can be either fiber or copper. Gigabit over copper can run on the existing Cat 5 cabling used for Fast Ethernet (ANSI/TIA/EIA 568-A, ISO/IEC 11801-1995). Cat 5 cabling was designed for the use of two cable pairs, but Gigabit Ethernet uses four pairs. Existing Cat 5 wiring infrastructure must be tested to ensure it can effectively support Gigabit rates. New installations of Gigabit Ethernet over copper should use at least Cat 5e cabling or, better, Cat 6.

Note For more information on the cabling requirements of 1000BaseT, refer to the document "Gigabit Ethernet Over Copper Cabling" published on www.gigabitsolution.com.

Catalyst 6500

The Catalyst 6500 supports all the technologies and features required for implementing a highly available, secure, and scalable data center infrastructure. The platform used in this design guide for the access switches is the 6506 because it provides enough slots for access ports and service modules together with efficient rack space utilization. A typical configuration for the Catalyst 6500 in the access layer is as follows:
• Single or dual supervisors (two supervisors are recommended for single-homed servers)
• IDSM-2
• Access ports for the servers on 10/100/1000 linecards: WS-X6516-GE-TX – Jumbo (9216 B), fabric attached at 8 Gbps
• Gigabit linecard for uplink connectivity: WS-X6516-GBIC or WS-X6516A-GBIC – Jumbo (9216 B), fabric attached at 8 Gbps

Note It is possible to attach 1000BaseT GBIC adapters to optical Gigabit linecards by using the WS-G5483 GBIC.

If the Catalyst 6506 is upgraded to Sup720, the Sup720 will be plugged into slot 5 or slot 6. For this reason, when using Sup2 it is practical to keep either slot empty for a possible upgrade or to insert a fabric module. When upgrading the system to Sup720, you can keep using the WS-X6516-GE-TX, WS-X6516-GBIC, and WS-X6516A-GBIC linecards.
Catalyst 4500

The Catalyst 4500, which can also be used as an access switch in the data center, is a modular switch available with the following chassis types:
• 4503: 3 slots, 7 RUs
• 4506: 6 slots, 10 RUs
• 4507R: 7 slots, 11 RUs (slots 1 and 2 are reserved for the supervisors and do not support linecards)

Only the 4507R supports dual supervisors. A typical configuration with supervisor redundancy and Layer 2 access would be as follows:
• Dual Sup2-plus (mainly Layer 2 plus static routing and RIP) or dual Supervisor IV (for Layer 3 routing protocol support with hardware CEF)
• Gigabit copper attachment for servers, which can use one of the following:
  – WS-X4306-GB with copper GBICs (WS-G5483)
  – 24-port 10/100/1000 WS-X4424-GB-RJ45
  – 12-port 1000BaseT linecard WS-X4412-2GB-T
• Gigabit fiber attachment for servers, which can use a WS-X4418-GB (this doesn't support copper GBICs)
• Gigabit linecard for uplink connectivity: WS-X4306-GB – Jumbo (9198 B)

Note Jumbo frames are only supported on non-oversubscribed ports.

When internal redundancy is not required, you don't need to use a 4507R chassis, and you can use a Supervisor 3 for Layer 3 routing protocol support and CEF switching in hardware.

Catalyst 3750

The Catalyst 3750 is a stackable switch that supports Gigabit Ethernet, such as the 24-port 3750G-24TS with 10/100/1000 ports and 4 SFP ports for uplink connectivity. Several 3750s can be clustered together to logically form a single switch. In this case, you could use 10/100/1000 switches (3750-24T) clustered with an SFP switch (3750G-12S) for EtherChannel uplinks.

Software Recommendations

Because of continuous improvements in the features that are supported on the access switch platforms described in this design document, it isn't possible to give a single recommendation on the software release you should deploy in your data center. The choice of software release depends on the hardware that the switch needs to support and on the stability of a given version of code. In a data center design, you should use a release of code that has been available for a long time, is available with several rebuilds, and whose newer builds contain only bug fixes. When using Catalyst family products, you must choose between the supervisor IOS operating system and the Catalyst operating system (CatOS). These two operating systems have some important differences in the CLI, the features supported, and the hardware supported.
This design document uses supervisor IOS on the Catalyst 6500 aggregation switches because it supports Distributed Forwarding Cards, and because it was the first operating system to support the Catalyst service modules. Also, it is simpler to use a single standardized image and a single operating system on all the data center devices. The following summarizes the features introduced with different releases of the software:
• 12.1(8a)E—Support for Sup2 and CSM
• 12.1(13)E—Support for Rapid PVST+ and for FWSM, NAM-2 with Sup2, and SSLSM with Sup2
• 12.1(14)E—Support for IDSM-2 with Sup2
• 12.1(19)E—Support for some of the 6500 linecards typically used in data centers, and for SSHv2

This design guide is based on testing with Release 12.1(19)E1a.

Data Center Multi-Layer Design

This section describes the design of the different layers of the data center infrastructure. It includes the following topics:
• Core Layer
• Aggregation and Access Layer
• Service Switches
• Server Farm Availability

Core Layer

The core layer in an enterprise network provides connectivity among the campus buildings, the private WAN network, the Internet edge network, and the data center network. The main goal of the core layer is to switch traffic at very high speed between the modules of the enterprise network. The configuration of the core devices is typically kept to a minimum, which means pure routing and switching. Enabling additional functions might bring down the performance of the core devices.

There are several possible types of core networks. In previous designs, the core layer used a pure Layer 2 design for performance reasons. However, with the availability of Layer 3 switching, a Layer 3 core is as fast as a Layer 2 core. If well designed, a Layer 3 core can be more efficient in terms of convergence time and can be more scalable. For an analysis of the different types of core, refer to the white paper available on www.cisco.com: "Designing High-Performance Campus Intranets with Multilayer Switching" by Geoff Haviland.

The data center described in this design guide connects to the core using Layer 3 links. The data center network is summarized toward the core, and the core injects a default route into the data center network. Some specific applications require injecting host routes (/32) into the core.
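As an illustration of this routing model, the following is a minimal EIGRP sketch that summarizes the server farm subnets toward the core; the addressing (10.20.0.0/16 for the server farm, a /30 on the uplink), the autonomous system number, and the interface are assumptions for illustration only. The core routers would, in turn, advertise the default route toward the data center.

! Aggregation switch MSFC (hypothetical addressing and AS number)
router eigrp 10
 network 10.20.0.0 0.0.255.255
 network 10.1.1.0 0.0.0.255
 no auto-summary
!
interface GigabitEthernet1/1
 description Layer 3 uplink to core router
 ip address 10.1.1.2 255.255.255.252
 ! Advertise a single summary for the data center subnets toward the core
 ip summary-address eigrp 10 10.20.0.0 255.255.0.0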
Aggregation and Access Layer

The access layer provides port density to the server farm, while the aggregation layer collects traffic from the access layer and connects the data center to the core. The aggregation layer is also the preferred attachment point for mainframes and the attachment point for caches used in Reverse Proxy Cache mode. Security and application service devices (such as load balancers, SSL offloading devices, firewalls, and IDS devices) are deployed at either the aggregation or the access layer. Service devices deployed at the aggregation layer are shared among all the servers, while service devices deployed at the access layer benefit only the servers that are directly attached to the specific access switch.

The design of the access layer varies depending on whether you use Layer 2 or Layer 3 access. Layer 2 access is more efficient for sharing aggregation layer services among the servers. For example, to deploy a firewall that is used by all the servers in the data center, deploy it at the aggregation layer. The easiest implementation is with the firewall Layer 2 adjacent to the servers, because the firewall should see both client-to-server and server-to-client traffic.

Security and application services are provided by deploying external appliances or service modules. The Cisco preferred architecture for large-scale server farms uses service modules for improved integration and consolidation. A single service module can often replace multiple external appliances with a single linecard. Figure 1 shows the aggregation switches with firewalling, IDS, load balancing, SSL offloading, and NAM in the same switch. This configuration needs to be customized for specific network requirements and is not the specific focus of this document. For information about designing data centers with service modules, refer to the other documents in this series.

Service Switches

The architecture shown in Figure 1 is characterized by a high density of service modules on each aggregation switch, which limits the number of ports available for uplink connectivity. It is also possible that the code versions required by the service modules may not match the software version already used on the aggregation switches in the data center environment. Figure 2 illustrates the use of service switches in a data center. Service switches are Catalyst 6500s populated with service modules and dual-attached to the aggregation switches. When used with service modules, they allow higher port density and separate the code requirements of the service modules from those of the aggregation switches.
Figure 2 Data Center Architecture with Service Switches (diagram: enterprise campus core, aggregation layer, and access layer with mainframe attachment; the service switches house the load balancer, firewall, SSL offloader, cache, network analysis, and IDS sensor)

Using service switches is very effective when not all the traffic requires the use of service devices. Traffic that doesn't can take the path to the core through the aggregation switches. For example, by installing a Content Switching Module in a service switch, the servers that require load balancing are configured on a "server VLAN" that brings the traffic to the service switches. Servers that don't require load balancing are configured on a VLAN that is terminated on the aggregation switches.

On the other hand, in a server farm, all the servers are typically placed behind one or more Firewall Services Modules (FWSMs). Placing an FWSM in a service switch would require all the traffic from the server farm to flow through the service switch, and no traffic would use the aggregation switches for direct access to the core. The only benefit of using a service switch with an FWSM is an increased number of uplink ports at the aggregation layer. For this reason, it usually makes more sense to place an FWSM directly into an aggregation switch. By using service switches, you can gradually move the servers behind service modules and eventually replace the aggregation switches with the service switches.

Server Farm Availability

Server farms in a data center have different availability requirements depending on whether they host business-critical applications or applications with less stringent availability requirements, such as development applications.
You can meet availability requirements by leveraging specific software technologies and network technologies, including the following:
• Applications can be load-balanced either with a network device or with clustering software
• Servers can be multi-homed with multiple NIC cards
• Access switches can provide maximum availability if deployed with dual supervisors

Load-Balanced Servers

Load-balanced servers are located behind a load balancer, such as the CSM. Load-balanced server farms typically include the following kinds of servers:
• Web and application servers
• DNS servers
• LDAP servers
• RADIUS servers
• TN3270 servers
• Streaming servers

Note The document at the following URL outlines some of the popular applications of load balancing: http://www.cisco.com/warp/public/cc/pd/cxsr/400/prodlit/sfarm_an.htm

Load-balanced server farms benefit from load distribution, application monitoring, and application-layer services, such as session persistence. On the other hand, while the 4-Gbps throughput of a CSM is sufficient in most client-to-server environments, it could be a bottleneck for bulk server-to-server data transfers in large-scale server farms. When the server farm is located behind a load balancer, you may need to choose one of the following options to optimize server-to-server traffic:
• Direct Server Return
• Performing client NAT on the load balancer
• Policy Based Routing

The recommendations in this document apply to network designs with a CSM and should be deployed before installing the CSM. A key difference between load-balanced servers and non-load-balanced servers is the placement of the default gateway. Non-load-balanced servers typically have their gateway configured as a Hot Standby Router Protocol (HSRP) address on the router inside the Catalyst 6500 switch or on the firewall device. Load-balanced servers may use the IP address of the load balancing device as their default gateway.

Levels of Server Availability

Each enterprise categorizes its server farms based on how critical they are to the operation of the business. Servers that are used in production and handle sales transactions are often dual-homed and configured for "switch fault tolerance." This means the servers are attached with two NIC cards to separate switches, as shown in Figure 1. This allows performing maintenance on one access switch without affecting access to the server.
Other servers, such as those used for developing applications, may become inaccessible without immediately affecting the business. You can categorize the level of availability required for different servers as follows:
• Servers configured with multiple NIC cards, each attached to a different access switch (switch fault tolerance), provide the maximum possible availability. This option is typically reserved for servers hosting business-critical applications.
• Development servers could also use two NICs that connect to a single access switch that has two supervisors. This configuration of the NIC cards is known as "adapter fault tolerance." The two NICs should be attached to different linecards.
• Development servers that are less critical to the business can use one NIC connected to a single access switch that has two supervisors.
• Development servers that are even less critical can use one NIC connected to a single access switch that has a single supervisor.

The use of access switches with two supervisors provides availability for servers that are attached to a single access switch. The presence of two supervisors makes it possible to perform software upgrades on one supervisor with minimal disruption of access to the server farm. Adapter fault tolerance means that the server is attached with each NIC card to the same switch, but each NIC card is connected to a different linecard in the access switch. Switch fault tolerance and adapter fault tolerance are described in Chapter 3, "HA Connectivity for Servers and Mainframes: NIC Teaming and OSA/OSPF Design."

Multi-Tier Server Farms

Today, most web-based applications are built as multi-tier applications. The multi-tier model uses software running as separate processes on the same machine, using interprocess communication, or on different machines with communications over the network. Typically, the following three tiers are used:
• Web server tier
• Application tier
• Database tier

Multi-tier server farms built with processes running on separate machines can provide improved resiliency and security. Resiliency is improved because a server can be taken out of service while the same function is still provided by another server belonging to the same application tier. Security is improved because an attacker can compromise a web server without gaining access to the application or to the database. Resiliency is achieved by load balancing the network traffic between the tiers, and security is achieved by placing firewalls between the tiers. You can achieve segregation between the tiers by deploying a separate infrastructure made of aggregation and access switches or by using VLANs. Figure 3 shows the design of multi-tier server farms with physical segregation between the server farm tiers. Side (a) of the figure shows the design with external appliances, while side (b) shows the design with service modules.
Figure 3 Physical Segregation in a Server Farm with Appliances (a) and Service Modules (b) (diagram: web servers, application servers, and database servers, with each tier attached to dedicated switches)

The design shown in Figure 4 uses VLANs to segregate the server farms. The left side of the illustration (a) shows the physical topology, while the right side (b) shows the VLAN allocation across the service modules: firewall, load balancer, and switch. The firewall is the device that routes between the VLANs, while the load balancer, which is VLAN-aware, also enforces the VLAN segregation between the server farms. Notice that not all the VLANs require load balancing. For example, the database tier in this example sends traffic directly to the firewall.

Figure 4 Logical Segregation in a Server Farm with VLANs (diagram: web, application, and database servers sharing the same physical infrastructure, segregated by VLANs)

The advantage of using physical segregation is performance, because each tier of servers is connected to dedicated hardware. The advantage of using logical segregation with VLANs is the reduced complexity of the server farm. The choice of one model versus the other depends on your specific network performance requirements and traffic patterns.
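To illustrate the logical segregation model, the following is a minimal sketch of the Catalyst 6500 side of such a design, assuming hypothetical VLAN numbers and an FWSM in slot 3; the firewall policy between the tiers is configured separately on the FWSM itself and is not shown.

! One VLAN per server farm tier (hypothetical VLAN IDs)
vlan 10
 name web-servers
vlan 20
 name application-servers
vlan 30
 name database-servers
!
! Assign the tier VLANs to the Firewall Services Module, which routes between them
firewall vlan-group 1 10,20,30
firewall module 3 vlan-group 1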
Data Center Protocols and Features

This section provides background information about protocols and features that are helpful when designing a data center network for high availability, security, and scalability. It includes the following topics:
• Layer 2 Protocols
• Layer 3 Protocols
• Security in the Data Center

Layer 2 Protocols

Data centers are characterized by a wide variety of server hardware and software. Applications may run on various kinds of server hardware, running different operating systems. These applications may be developed on different platforms, such as IBM WebSphere, BEA WebLogic, Microsoft .NET, or Oracle 9i, or they may be commercial applications developed by companies like SAP, Siebel, or Oracle. Most server farms are accessible using a routed IP address, but some use non-routable VLANs. All these varying requirements determine the traffic path that client-to-server and server-to-server traffic takes in the data center. These factors also determine how racks are built, because server farms of the same kind are often mounted in the same rack based on the server hardware type and are connected to an access switch in the same rack. These requirements also decide how many VLANs are present in a server farm, because servers that belong to the same application often share the same VLAN.

The access layer in the data center is typically built at Layer 2, which allows better sharing of service devices across multiple servers and allows the use of Layer 2 clustering, which requires the servers to be Layer 2 adjacent. With Layer 2 access, the default gateway for the servers is configured at the aggregation layer. Between the aggregation and access layers there is a physical loop in the topology that ensures a redundant Layer 2 path in case one of the links from the access to the aggregation fails. Spanning Tree Protocol (STP) ensures a logically loop-free topology over a physical topology with loops.

Historically, STP (IEEE 802.1d and its Cisco equivalent PVST+) has often been dismissed because of slow convergence and frequent failures that are typically caused by misconfigurations. However, with the introduction of IEEE 802.1w, spanning tree is very different from the original implementation. For example, with IEEE 802.1w, BPDUs are not relayed, and each switch generates BPDUs after an interval determined by the "hello time." Also, this protocol is able to actively confirm that a port can safely transition to forwarding without relying on any timer configuration. There is now a real feedback mechanism that takes place between IEEE 802.1w-compliant bridges.

We currently recommend using Rapid Per VLAN Spanning Tree Plus (Rapid PVST+), which is a combination of 802.1w and PVST+. For higher scalability, you can use 802.1s/1w, also called Multiple Spanning Tree (MST). You get higher scalability with 802.1s because you limit the number of spanning-tree instances, but it is less flexible than PVST+ if you use bridging appliances. The use of 802.1w in both Rapid PVST+ and MST provides faster convergence than traditional STP. We also recommend other Cisco enhancements to STP, such as LoopGuard and Unidirectional Link Detection (UDLD), in both Rapid PVST+ and MST environments. We recommend Rapid PVST+ for its flexibility and speed of convergence. Rapid PVST+ supersedes BackboneFast and UplinkFast, making the configuration easier to deploy than regular PVST+, and it also allows an extremely easy migration from PVST+.
Interoperability with IEEE 802.1d switches that do not support Rapid PVST+ is ensured by building the Common Spanning Tree (CST) on VLAN 1. Cisco switches build a CST with IEEE 802.1d switches, and the BPDUs for all the VLANs other than VLAN 1 are tunneled through the 802.1d region.

Cisco data centers feature a fully-switched topology, where no hub is present and all links are full-duplex. This delivers great performance benefits as long as flooding is not present. Flooding should occur only during topology changes to allow fast convergence of the Layer 2 network. Technologies that are based on flooding introduce performance degradation besides being a security concern. This design guide provides information on how to reduce the likelihood of flooding. Some technologies rely on flooding, but you should use the equivalent unicast-based options that are often provided. Flooding can also be the result of a security attack, which is why port security should be configured on the access ports together with other well-understood technologies, such as PortFast.

You complete the Layer 2 configuration with the following configuration steps:
Step 1 Proper assignment of root and secondary root switches
Step 2 Configuring root guard on the appropriate links
Step 3 Configuring BPDU guard on the access ports connected to the servers

By using these technologies you protect the Layer 2 topology from accidental or malicious changes that could alter the normal functioning of the network. The Layer 2 configuration also needs to take into account the presence of dual-attached servers. Dual-attached servers are used for redundancy and increased throughput. The configurations in this design guide ensure compatibility with dual-attached servers.
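A minimal sketch of these steps follows; the VLAN and interface numbers are hypothetical. The root and secondary root are assigned on the two aggregation switches, root guard is applied on the aggregation links facing the access layer, and PortFast, BPDU guard, and port security are applied on the server-facing access ports.

! Aggregation switch 1: primary root for the server VLAN
spanning-tree vlan 10 root primary
!
! Aggregation switch 2: secondary root for the same VLAN
spanning-tree vlan 10 root secondary
!
! Aggregation downlink to an access switch: never accept a superior BPDU here
interface GigabitEthernet2/1
 spanning-tree guard root
!
! Access switch port connected to a server
interface GigabitEthernet3/1
 switchport
 switchport mode access
 switchport access vlan 10
 spanning-tree portfast
 spanning-tree bpduguard enable
 ! Limit the MAC addresses learned on the port (the maximum shown is an example)
 switchport port-security
 switchport port-security maximum 2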
Layer 3 Protocols

The aggregation layer typically provides Layer 3 connectivity from the data center to the core. Depending on the requirements and the design, the boundary between Layer 2 and Layer 3 at the aggregation layer can be the Multilayer Switching Feature Card (MSFC), which is the router card inside the Catalyst supervisor, the firewalls, or the content switching devices. You can achieve routing either with static routes or with routing protocols such as EIGRP and OSPF. This design guide covers routing using EIGRP and OSPF.

Network devices, such as content switches and firewalls, often have routing capabilities. Besides supporting the configuration of static routes, they often support RIP and sometimes even OSPF. Having routing capabilities facilitates the network design, but you should be careful not to misuse this functionality. The routing support that a content switch or a firewall provides is not the same as the support that a router has, simply because the main function of a content switching product or of a firewall is not routing. Consequently, you might find that some of the options that allow you to control how the topology converges (for example, configuration of priorities) are not available. Moreover, the routing table of these devices may not accommodate as many routes as a dedicated router. The routing capabilities of the MSFC, when used in conjunction with the Catalyst supervisor, provide traffic switching at wire speed in an ASIC. Load balancing between equal-cost routes is also done in hardware. These capabilities are not available in a content switch or a firewall.
We recommend using static routing between the firewalls and the MSFC for a faster convergence time in case of firewall failures, and dynamic routing between the MSFC and the core routers. You can also use dynamic routing between the firewalls and the MSFC, but this is subject to slower convergence in case of firewall failures. Delays are caused by the process of neighbor establishment, database exchange, running the SPF algorithm, and installing the Layer 3 forwarding table in the network processors. Whenever dynamic routing is used, routing protocols with MD5 authentication should be used to prevent the aggregation routers from becoming neighbors with rogue devices. We also recommend tuning the OSPF timers to reduce the convergence time in case of failures of Layer 3 links, routers, firewalls, or LPARs (in a mainframe).
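As a sketch of these recommendations, MD5 authentication and more aggressive hello and dead timers can be applied on the Layer 3 interfaces toward the core; the interface, process number, area, key, and timer values below are illustrative assumptions rather than the validated configuration.

interface GigabitEthernet1/1
 description Layer 3 uplink to core router
 ip address 10.1.1.2 255.255.255.252
 ! Authenticate OSPF neighbors with MD5
 ip ospf message-digest-key 1 md5 s3cr3tkey
 ! Reduce failure detection time (the defaults are 10 and 40 seconds)
 ip ospf hello-interval 1
 ip ospf dead-interval 3
!
router ospf 10
 area 0 authentication message-digest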
Servers use static routing to respond to client requests. The server configuration typically contains a single default route pointing to a router, a firewall, or a load balancer. The most appropriate device to use as the default gateway for servers depends on the security and performance requirements of the server farm. Of course, the highest performance is delivered by the MSFC. You should configure servers with a default gateway address that is made highly available through the use of gateway redundancy protocols such as HSRP, Virtual Router Redundancy Protocol (VRRP), or Gateway Load Balancing Protocol (GLBP). You can tune the gateway redundancy protocols for convergence in less than one second, which makes router failures almost unnoticeable to the server farm.

Note The software release used to develop this design guide only supports HSRP.

Mainframes connect to the infrastructure using one or more OSA cards. If the mainframe uses Enterprise Systems Connection (ESCON), it can be connected to a router with a Channel Interface Processor (CIP/CPA). The CIP connects to the mainframes at the channel level. By using an ESCON director, multiple hosts can share the same CIP router. Figure 1 shows the attachment for a mainframe with an OSA card. For the purpose of this design guide, the transport protocol for mainframe applications is IP. You can provide clients direct access to the mainframe, or you can build a multi-tiered environment so clients can use browsers to run mainframe applications. The network not only provides port density and Layer 3 services, but can also provide the TN3270 service from a CIP/CPA card. The TN3270 service can also be part of a multi-tiered architecture, where the end client sends HTTP requests to web servers, which, in turn, communicate with the TN3270 server. You must build the infrastructure to accommodate these requirements as well.

You can configure mainframes with static routing just like other servers, and they also support OSPF routing. Unlike most servers, mainframes have several internal instances of Logical Partitions (LPARs) and/or virtual machines (VMs), each of which contains a separate TCP/IP stack. OSPF routing allows the traffic to gain access to these partitions and/or VMs using a single OSA card or multiple OSA cards.

You can use gateway redundancy protocols in conjunction with static routing when traffic is sent from a firewall to a router or between routers. When the gateway redundancy protocol is tuned for fast convergence and static routing is used, recovery from router failures is very quick. When deploying gateway redundancy protocols, we recommend enabling authentication to avoid negotiation with unauthorized devices.

Routers and firewalls can provide protection against attacks based on source IP spoofing by means of Unicast Reverse Path Forwarding (uRPF). The uRPF feature checks the source IP address of each packet received against the routing table.
If the source IP address is not appropriate for the interface on which it is received, the packet is dropped. We recommend enabling uRPF on the Firewall Services Module in the data center architecture described in this design guide.
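A minimal sketch of a server default gateway interface on the aggregation MSFC follows, combining sub-second HSRP timers, HSRP authentication, and uRPF; the VLAN, addressing, timers, and authentication string are hypothetical. On the FWSM or PIX, the equivalent anti-spoofing protection is that platform's reverse-path verification feature.

interface Vlan10
 description Default gateway for the web server farm
 ip address 10.20.10.2 255.255.255.0
 ! Drop packets whose source address does not match the routing table (uRPF)
 ip verify unicast reverse-path
 ! Virtual gateway address configured as the default gateway on the servers
 standby 1 ip 10.20.10.1
 standby 1 priority 110
 standby 1 preempt
 ! Sub-second hello and hold timers for fast failover
 standby 1 timers msec 250 msec 750
 ! Authentication string to avoid negotiation with unauthorized devices
 standby 1 authentication dc1key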
Security in the Data Center

Describing the details of security in the data center is beyond the scope of this document, but it is important to be aware of it when building the infrastructure. Security in the data center is the result of Layer 2 and Layer 3 configurations (such as routing authentication, uRPF, and so forth), the use of security technologies such as SSL, IDS, and firewalls, and the use of monitoring technologies such as network analysis products. Firewalls provide Layer 4 security services such as Initial Sequence Number randomization, TCP intercept, protection against fragment attacks, and opening of specific Layer 4 ports for certain applications (fixups). An SSL offloading device can help provide data confidentiality and non-repudiation, while IDS devices capture malicious activities and block traffic generated by infected servers on the access switches. Network analysis devices measure network performance, port utilization, application response time, QoS, and other network activity.

Note Strictly speaking, network analysis devices are not security devices. They are network management devices, but by observing network and application traffic it is sometimes possible to detect malicious activity.

Some of the functions provided by the network, such as SYN cookies and SSL, may be available on server operating systems. However, implementing these functions in the network greatly simplifies the management of the server farm because it reduces the number of configuration points for each technology. Instead of configuring SSL on hundreds of servers, you just configure SSL on a pair of SSL offloading devices. Firewalls and SSL devices see the session between client and server and directly communicate with both entities. Other security products, such as IDS devices or NAM devices, only see a replica of the traffic without being on the main traffic path. For these products to be effective, the switching platforms need to support technologies such as VACL capture and Remote SPAN in hardware.

Scaling Bandwidth

The amount of bandwidth required in the data center depends on several factors, including the application type, the number of servers present in the data center, the type of servers, and the storage technology. The need for network bandwidth in the data center is increased because of the large amount of data that is stored in a data center and the need to quickly move this data between servers. The technologies that address these needs include:
• EtherChannels—Either between servers and switches or between switches
• CEF load balancing—Load balancing on equal-cost Layer 3 routes
• Gigabit Ethernet attached servers—Upgrading to Gigabit-attached servers is made simpler by the adoption of the 10/100/1000 technology
• 10 Gigabit Ethernet—10 Gigabit Ethernet is being adopted as an uplink technology in the data center
• Fabric switching—Fabric technology in data center switches helps improve throughput in the communication between linecards inside a chassis, which is particularly useful when using service modules in a Catalyst switch

You can increase the bandwidth available to servers by using multiple server NIC cards either in load balancing mode or in link-aggregation (EtherChannel) mode. EtherChannel allows increasing the aggregate bandwidth at Layer 2 by distributing traffic on multiple links based on a hash of the Layer 2, Layer 3, and Layer 4 information in the frames. EtherChannels are very effective in distributing aggregate traffic on multiple physical links, but they don't provide the full combined bandwidth of the aggregate links to a single flow, because the hashing assigns each flow to a single physical link.
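A minimal sketch of a Gigabit EtherChannel between two switches follows; the interface numbers, channel group, and hash selection are hypothetical examples.

! Distribute traffic across channel members based on source and destination IP
port-channel load-balance src-dst-ip
!
interface GigabitEthernet1/1
 description Uplink member 1
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode desirable
!
interface GigabitEthernet1/2
 description Uplink member 2
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode desirable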
EtherChannel increases the aggregate bandwidth at Layer 2 by distributing traffic across multiple links based on a hash of the Layer 2, Layer 3, and Layer 4 information in the frames. EtherChannels are very effective at distributing aggregate traffic over multiple physical links, but they do not provide the full combined bandwidth of the bundled links to a single flow, because the hash assigns each flow to a single physical link. For this reason, Gigabit Ethernet NICs are becoming the preferred alternative to Fast EtherChannels on servers, a trend reinforced by the reduced cost of copper Gigabit NICs compared to fiber NICs. For a similar reason, 10 Gigabit Ethernet is becoming the preferred alternative to Gigabit EtherChannels for data center uplinks.

At the aggregation layer, where service modules are deployed, the traffic between the service modules travels on the bus of the Catalyst 6500 several times, which reduces the bandwidth available for server-to-server traffic. The use of the fabric optimizes the communication between fabric-connected linecards and fabric-attached service modules. With the Sup720, the fabric is part of the supervisor itself; with the Sup2, the fabric is available as a separate module.

Note Not all service modules are fabric attached. Proper design should ensure the best utilization of the service modules within the Catalyst 6500 chassis.

The maximum performance that a Catalyst switch can deliver is achieved by placing the servers Layer 2 adjacent to the MSFC interface. Placing service modules, such as firewalls or load balancers, in the path delivers high-performance load balancing and security services, but such a design does not achieve the maximum throughput that the Catalyst fabric can provide. As a result, servers that do not require load balancing should not be placed behind a load balancer, and if they require high-throughput transfers across different VLANs you might want to place them adjacent to an MSFC interface. For reference, the FWSM provides approximately 5.5 Gbps of throughput and the CSM approximately 4 Gbps.

Network Management

Management access to every network device in the data center needs to be secured to prevent unauthorized access. This is true in general, but it is even more important in this design because the firewall is deployed as a module inside the Catalyst 6500 chassis: you need to ensure that nobody changes the configuration of the switch to bypass the firewall. You can secure management access by using Access Control Lists (ACLs), Authentication, Authorization, and Accounting (AAA), and Secure Shell (SSH). We recommend using a Catalyst IOS software release later than 12.1(19)E1a to take advantage of SSHv2.

You should deploy syslog at the informational level and, when available, send syslog messages to a server rather than storing them in the switch or router buffer: when a reload occurs, messages stored in the buffer are lost, which makes troubleshooting difficult. Disable console logging during normal operations.

Configuration management is another important aspect of network management in the data center. Proper management of configuration changes can significantly improve data center availability. By periodically retrieving and saving configurations, and by auditing the history of configuration changes, you can understand the cause of failures and ensure that managed devices comply with the standard configurations.
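A minimal sketch of these management-access and logging recommendations on a Catalyst 6500 running Cisco IOS follows; the addresses, ACL number, server addresses, and credentials are placeholders rather than values from this design:

    ! Management stations allowed to reach the vty lines (example subnet)
    access-list 10 permit 10.10.10.0 0.0.0.255
    !
    ! AAA against TACACS+ with a local fallback account
    aaa new-model
    aaa authentication login default group tacacs+ local
    username admin password <local-fallback-password>
    service password-encryption
    tacacs-server host 10.10.10.50
    tacacs-server key <shared-secret>
    !
    ! SSHv2-only access on the vty lines
    ! (crypto key generate rsa prompts for the key size)
    ip domain-name example.com
    crypto key generate rsa
    ip ssh version 2
    line vty 0 4
     access-class 10 in
     transport input ssh
    !
    ! Syslog to a server at the informational level; no console logging
    logging 10.10.10.60
    logging trap informational
    no logging console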
Software management is critical for achieving maximum data center availability. Before upgrading to a new software release, you should verify that the new image is compatible with the installed hardware. Network management tools can retrieve this information from Cisco Connection Online and compare it with the hardware present in your data center. Only after you are sure that the requirements are met should you distribute the image to all the devices. You can also use software management tools to retrieve the bug information associated with device images and cross-check it against your installed hardware and software to identify the relevant bugs.

You can use Cisco Works 2000 Resource Manager Essentials (RME) to perform configuration management, software image management, and inventory management of the Cisco data center devices. This requires configuring the data center devices with the correct SNMP community strings. You can also use RME as the syslog server. We recommend RME version 3.5 with Incremental Device Update v5.0 (for Sup720, FWSM, and NAM support) and v6.0 (for CSM support).
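As an example of the SNMP configuration that RME relies on, the community strings and the management-station address below are placeholders:

    ! Allow SNMP polling only from the RME server (example address)
    access-list 20 permit host 10.10.10.70
    snmp-server community <read-only-string> RO 20
    snmp-server community <read-write-string> RW 20

The read-only string is sufficient for inventory polling; whether a read-write string is also required depends on how RME is set up to retrieve and deploy configurations.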