White Paper




FCoE Storage Convergence
Across the Data Center
with the Juniper Networks
QFabric System








                       Table of Contents
                       Executive Summary
                       Introduction
                       Access-Layer Convergence Modes
                       Understanding the Layout of a Typical Data Center and Organization of the Data Center Teams
                       Applying the Network Topology to a Typical Data Center
                       Deployment of Server POD-Wide FCoE Transit Switch to FCoE-Enabled FC SAN
                       The Implications of Multiprotocol Data Center Networks
                       Standards That Allow for Server I/O and Access-Layer Convergence
                           Enhancements to Ethernet for Converged Data Center Networks—DCB
                           Enhancements to Fibre Channel for Converged Data Center Networks—FCoE
                       Conclusion
                       About Juniper Networks




                       List of Figures
                       Figure 1: The phases of convergence, from separate networks, to access-layer convergence, to the fully converged network
                       Figure 2: Operation of FCoE transit switch vs. FCoE-FC gateway
                       Figure 3: Typical data center layout and management
                       Figure 4: Large-scale converged access SAN
                       Figure 5: Multiprotocol storage network
                       Figure 6: PFC, ETS, and QCN








                         Executive Summary
                         Since 2011, customers have finally been able to invest in convergence-enabled equipment and begin reaping the benefits
                         of convergence in their data centers. With the first wave of standards now complete—both the IEEE Data Center Bridging
                         (DCB) enhancements to Ethernet and the InterNational Committee for Information Technology Standards (INCITS) T11
                         FC-BB-5 standard for Fibre Channel over Ethernet (FCoE)—enterprises can benefit from server- and access-layer I/O
                         convergence while continuing to leverage their investment in their existing Fibre Channel (FC) backbones.
                         Other Juniper Networks white papers, focusing specifically on the Juniper Networks® QFX3500 top-of-rack switch,
                         already address the general concepts of convergence and the protocols and deployments possible with FCoE transit
                         switches and FCoE-FC gateways. Another white paper covering the end-to-end convergence possibilities resulting from
                         the VN2VN capabilities of FC-BB-6 is also available.
                         This white paper will focus on the ability to deploy a single, simple, large-scale converged access layer that not only
                         supports individual racks or rows of racks but entire server points of delivery (PODs) or halls consisting of thousands
                         of servers. The Juniper Networks QFabric™ family of products offers a revolutionary approach that delivers dramatic
                         improvements in data center performance, operating costs, and business agility for enterprises, high-performance
                         computing systems, and cloud providers. The QFabric family implements a single-tier network in the data center,
                         improving speed, scale, and efficiency by removing legacy barriers and increasing business agility. The QFX3000-G
                         QFabric System can scale up to 6,144 ports across 128 QFX3500 or QFX3600 QFabric Nodes, while the QFX3000-M
                         QFabric System, designed for mid-size deployments, supports up to 768 ports across 16 QFabric Nodes.
                         Convergence using FCoE is proceeding as a steady migration from what could be referred to as single hop, first device,
                         or shallow access convergence, to multihop or deep access convergence, and eventually to end-to-end convergence.
                         Or, looking at it another way, it is proceeding from convergence within a blade server shelf, to convergence in the rack,
                         to convergence across a row of racks, to convergence across a server area, and finally to convergence all the way to
                         storage. Most of the benefits are realized once convergence spans the entire server area.




                         [Figure: three panels showing the FC SAN progressively absorbed into the converged network — Phase 1: Shallow Access Convergence, Phase 2: Deep Access Convergence, Phase 3: End-to-End Convergence]

                         Figure 1: The phases of convergence, from separate networks, to access-layer convergence, to the fully converged network








                       Introduction
                       The network is the critical enabler of all services delivered from the data center. A simple, streamlined, and scalable
                       data center network fabric can deliver greater efficiency and productivity, as well as lower operating costs. Such a
                       network also allows the data center to support much higher levels of business agility and not become a bottleneck
                       that hinders a company from releasing new products or services.
                       To allow businesses to make sound investment decisions, this white paper will look at the following areas to fully
                       clarify the possible scale of convergence based on the solutions and topologies that can be deployed in 2012:
                       1.	Briefly review the different types of convergence-capable solutions and how these product types can be deployed to
                          support convergence at scale.
                       2.	 Look at the typical physical layout and management of the data center and how these relate to convergence at
                           large scale.
                       3.	 Look forward to some of the new product and solution capabilities expected over the next couple of years.


                       Access-Layer Convergence Modes
                       When buying a converged platform, it is possible to deploy products based on three very different modes of operation.
                       Products on the market today may be capable of one or more of these modes, depending on hardware and software
                       configuration and license enablement. A given data center network may have multiple hops and tiers using different
                       hardware and software combinations and permutations. The capabilities can in principle be mixed with other features
                       such as Layer 2 multipathing mechanisms (TRILL, MC-LAG) and fabrics (Juniper Networks QFabric architecture and
                       Virtual Chassis technology).

                       •	 FCoE transit switch—DCB switch with FCoE Initialization Protocol (FIP) snooping (sketched below). Largely managed as a LAN device and acting as a multiplexer from a storage area network (SAN) perspective.
                       •	 FCoE-FC gateway—using N_Port ID Virtualization (NPIV) proxy. Likely to be managed as both a LAN and SAN device, particularly if it’s in the general Ethernet/IP data path.
                       •	 FCoE-FC switch—full Fibre Channel Forwarder (FCF) capability. May be managed as both a LAN and SAN device or just as a SAN device, depending on its location in the network.
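
                       To make the transit switch mode concrete, the following is a minimal Python sketch of the FIP snooping idea: the switch passively learns FCF addresses from FIP discovery advertisements, records VN_Port fabric logins, and then forwards FCoE data frames only between logged-in VN_Port/FCF pairs. The class, event names, and MAC values are illustrative assumptions, not any product's implementation.

                           # Illustrative sketch of FIP snooping on an FCoE transit switch.
                           # A real implementation enforces the FC-BB-5 FIP snooping rules
                           # in hardware ACLs; this models only the control-plane logic.

                           FIP_ETHERTYPE = 0x8914   # FIP control traffic (discovery, login)
                           FCOE_ETHERTYPE = 0x8906  # encapsulated FC data frames

                           class FipSnoopingVlan:
                               """Per-VLAN FIP snooping state: learned FCFs, logged-in VN_Ports."""

                               def __init__(self, vlan_id):
                                   self.vlan_id = vlan_id
                                   self.fcf_macs = set()   # FCFs seen in discovery advertisements
                                   self.sessions = set()   # (vn_port_mac, fcf_mac) pairs after FLOGI

                               def on_fip_frame(self, src_mac, operation, vn_port_mac=None):
                                   """Update state from a snooped FIP control frame."""
                                   if operation == "DISCOVERY_ADVERTISEMENT":
                                       self.fcf_macs.add(src_mac)
                                   elif operation == "FLOGI_ACCEPT" and src_mac in self.fcf_macs:
                                       # Conceptually installs the ACL pair permitting this session.
                                       self.sessions.add((vn_port_mac, src_mac))
                                   elif operation == "LOGO":
                                       self.sessions = {(vn, fcf) for (vn, fcf) in self.sessions
                                                        if vn != vn_port_mac}

                               def permit_fcoe(self, src_mac, dst_mac):
                                   """Forward FCoE data only between a logged-in VN_Port and its
                                   FCF, in either direction; everything else is dropped."""
                                   return ((src_mac, dst_mac) in self.sessions or
                                           (dst_mac, src_mac) in self.sessions)

                           vlan = FipSnoopingVlan(vlan_id=1002)
                           vlan.on_fip_frame("0e:fc:00:ff:ff:01", "DISCOVERY_ADVERTISEMENT")
                           vlan.on_fip_frame("0e:fc:00:ff:ff:01", "FLOGI_ACCEPT",
                                             vn_port_mac="0e:fc:00:01:02:03")
                           assert vlan.permit_fcoe("0e:fc:00:01:02:03", "0e:fc:00:ff:ff:01")
                           assert not vlan.permit_fcoe("0e:fc:00:99:99:99", "0e:fc:00:ff:ff:01")

                       The key property this models is that the transit switch never terminates FC protocol itself; it only constrains the Ethernet paths FCoE traffic may take, which is why it consumes no FC domain ID.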


                         [Figure: FCoE Transit Switch vs. FCoE-FC Gateway. Left: an FCoE transit switch (DCB ports with FIP snooping ACLs) between FCoE servers with CNAs (VN_Ports) and an FC/FCoE switch presenting VF_Ports. Right: an FCoE-FC gateway (NPIV proxy) presenting VF_Ports to the servers over DCB ports and N_Ports toward the F_Ports of an FC switch]

                         Figure 2: Operation of FCoE transit switch vs. FCoE-FC gateway







                         When trying to understand these device capabilities, there are certain details that are often neglected but are critical
                         to designing a converged network. The most important is that, because FCoE carries FC over Ethernet, many behaviors
                         are scoped to Ethernet L2 domains. In the context of this section, it is not the device that is configured for one of the
                         modes just listed but rather the VLAN. This means that a device can operate in multiple modes simultaneously, while
                         at the same time operating in the same mode for multiple logical SAN fabrics on different VLANs. Consider some
                         examples specific to the capabilities of the QFX3500 Switch and QFabric Systems:

                         •	 FCoE transit switch—DCB switch with FIP snooping. Each VLAN can be an independent VN2VF or VN2VN VLAN for different logical FC SAN fabrics.
                         •	 FCoE-FC gateway—using N_Port ID Virtualization (NPIV) proxy. The FC ports can connect to more than one FC SAN fabric and then be mapped as independent gateway functions to different VLANs.

                         There are two key design use cases for these configurations (a brief sketch follows this list):
                         1.	Allowing customers to choose between either physical dual rail/dual SAN in FCoE, or logical dual rail/dual SAN
                            across a common infrastructure
                         2.	Allowing multiple logical SANs to exist within the same physical fabric, leveraging Ethernet quality of service (QoS)
                            as required
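
                         As a brief illustration of the per-VLAN view, the sketch below models a single device whose convergence mode is a property of each VLAN rather than of the box. The VLAN IDs, mode names, and fabric labels are hypothetical.

                             # Hypothetical per-VLAN convergence-mode table for one device,
                             # illustrating that the mode belongs to the VLAN, not the box.

                             MODES = {"FCOE_TRANSIT", "FCOE_FC_GATEWAY"}

                             class ConvergedDevice:
                                 def __init__(self, name):
                                     self.name = name
                                     self.vlan_modes = {}    # vlan_id -> (mode, fabric name)

                                 def set_vlan_mode(self, vlan_id, mode, fabric):
                                     if mode not in MODES:
                                         raise ValueError(f"unknown mode {mode}")
                                     self.vlan_modes[vlan_id] = (mode, fabric)

                             # One QFX3500-class device serving two logical SAN fabrics
                             # (logical dual rail) in different modes at the same time:
                             dev = ConvergedDevice("tor-1")
                             dev.set_vlan_mode(1002, "FCOE_TRANSIT", fabric="SAN-A")     # VN2VF transit
                             dev.set_vlan_mode(1003, "FCOE_FC_GATEWAY", fabric="SAN-B")  # NPIV proxy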


                         Understanding the Layout of a Typical Data Center and Organization of
                         the Data Center Teams
                         Physical instantiation matters, even in a virtual world. Data centers are built by laying out rows of racks or, for many
                         larger data centers, PODs, areas, or halls—each of which contains multiple rows of racks housing the equipment.
                         Different areas of the data center are then allocated for different purposes. For the purposes of this white paper,
                         understanding that regions of the data center are housing servers and other regions are housing storage along with the
                         backbone FC SAN is the most critical separation. Typically, there will also be specific locations where L3 core routers,
                         firewalls, and external metro area network (MAN) and WAN connections are provided. The area housing storage may
                         be subdivided into FC disk and tape racks, while the area housing servers may be subdivided into different server types
                         such as blade, rack-mount Intel-based, RISC Unix-based, mainframe, etc.
                         In addition to understanding the physical layout, it is important to also understand that data centers are often
                         operated by multiple teams with overlapping responsibilities. At the most extreme, there may be teams for desktop
                         support, particularly now with virtual desktop infrastructure (VDI), as well as for applications, servers/operating
                         systems/hypervisors, the Ethernet network, the FC network, network-attached storage (NAS), block storage, tape/
                         backup/archive, and facilities (cabling, power, and cooling).


                         [Figure: floor plan with Server Areas 1 and 2 (Windows/Linux/Intel), Server Areas 3 and 4 (Unix/Mainframe), and Main Distribution Areas (MAN/WAN connectivity), overlaid with team responsibilities — server admin, application admin, SAN/disk/tape admin (SAN A, SAN B, disk, tape), network team, and facilities management]

                         Figure 3: Typical data center layout and management







                       Related to the physical data center, there is also a change in deployment coming in part as a result of the move
                       towards 10GbE and later 40GbE and 100GbE, the specifications of connectivity at these speeds, and the need for less
                       oversubscription within the network as a whole. The implication of all of these conditions is the need to move many
                       deployments towards top-of-rack and sometimes end-of-row rather than end-of-POD or end-of-data center designs.
                       Along with convergence in general, this tends to result in more physical boxes in the overall network, especially when
                       compared to the typical end-of-data center/storage POD-based FC SAN design.


                       Applying the Network Topology to a Typical Data Center
                       While looking at individual product types is important and interesting, it is far more important to look at their role
                       in large-scale data center network deployments. A complex and poorly designed network is just that, and no box—
                       regardless of its mode of operation—will change that. Similarly, a well designed topology with a clear understanding of
                       what makes sense functionally at each layer allows for large converged networks that are deployable and manageable.
                       The practical span or radius of a converged network is no worse than the equivalent FC SAN fabric and, if designed
                       with care, can far exceed the limits of the traditional SAN. As with any SAN deployment, bandwidth, latency, and the
                       maximum number of device hops should be controlled, but an FCoE transit switch does not consume a domain ID,
                       allowing a far larger total device count. An FCoE transit switch, like an FCF but unlike an FCoE-FC gateway, can
                       load-balance at the OX_ID or exchange level. With sophisticated QoS and FIP snooping, there is no loss of manageability
                       for such a device compared to an FCF.
                       Having removed the complexity of gateways and the protocol scaling limits of FCFs, a well-designed large-scale Layer 2
                       network allows for highly scalable deployments. Note, however, that traditional hop count limits should be applied to all
                       switch or link types—for instance, the five link hops or six device hops limit between server and storage still applies, no
                       matter whether the device is an FCoE transit switch, gateway, or FCF.
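
                       Because the hop budget is a pure design-time constraint, it can be checked mechanically. The sketch below assumes a simple ordered path of switches between server and storage and reads "five link hops" as six devices end to end (server, up to four switches, storage); the function and device kinds are illustrative.

                           # Design-time check of the traditional SAN hop budget: at most five
                           # link hops (six devices) between server and storage. Transit
                           # switches count as hops but consume no FC domain IDs.

                           MAX_LINK_HOPS = 5

                           def check_path(switches):
                               """switches: ordered (name, kind) pairs between server and storage,
                               kind in {'transit', 'gateway', 'fcf'}. Returns (ok, links, domain_ids)."""
                               link_hops = len(switches) + 1   # includes server and storage links
                               domain_ids = sum(1 for _, kind in switches if kind == "fcf")
                               return link_hops <= MAX_LINK_HOPS, link_hops, domain_ids

                           # Example: ToR transit -> fabric transit -> gateway -> FCF backbone
                           ok, links, dids = check_path([("tor", "transit"),
                                                         ("fabric", "transit"),
                                                         ("gw", "gateway"),
                                                         ("backbone", "fcf")])
                           # ok is True (5 link hops) and only one FC domain ID is consumed.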
                       The increased scale possible from a well designed converged network compared to a traditional FC SAN is critically
                       important as the move to 10GbE/40GbE is driving deployments from end-of-hall or end-of-row to top-of-rack,
                       naturally increasing the network device count in the data center.
                       Indeed, no matter the fabric of choice, it is now possible to build, deploy, and manage thousands or even tens of
                       thousands of FCoE-connected servers with just a pair of FCFs hosting the FC disk, FC tape, FICON mainframe, and the
                       high-end servers that must be FC attached until they are available with 40GbE converged network adapters (CNAs).
                       Understanding that most data centers have regions for servers and regions for storage, it quickly becomes clear that
                       the optimal converged network design is to deploy a highly scalable L2 Ethernet- and L3 IP DCB-enabled network
                       across the regions of the data center housing servers, and minimize storage enablement to just that required to
                       multiplex the traffic towards those regions of the data center housing the storage and FC backbone.


                       Deployment of Server POD-Wide FCoE Transit Switch to FCoE-Enabled
                       FC SAN
                       As previously noted, this paper focuses on deployments that apply for server access-layer convergence. As such,
                       it is assumed that this access layer is in turn connecting to a Fibre Channel backbone. The term “Fibre Channel
                       backbone” implies a traditional FC SAN of some sort, which is attached to the FC disk and tape, as well as most
                       likely existing FC servers.
                       When examining FCoE and convergence at scale, this physical separation not only shows the limitations of the “FCF
                       everywhere” model, but also demonstrates the inadequacies of the “top-of-rack only” converged access model. Simply
                       put, in a modern data center, it is neither practical nor desirable to have cable runs from every single server rack to the
                       storage racks. A rational, simple design is to have server racks connecting to one side of the fabric and the FCoE-enabled
                       FC SAN backbone connected to the other side. This, of course, is much the same as the way other services and
                       appliances are connected to the fabric, be they routing services to the MAN/WAN, firewall services, and so on.








                         [Figure: server racks attached to a converged access layer, with the FCoE-enabled FC SAN backbones connected on the other side of the fabric]

                         Figure 4: Large-scale converged access SAN



                         The Implications of Multiprotocol Data Center Networks
                         A very common but largely unrecognized (at least by marketing folks) phenomenon is the rise of the multiprotocol
                         storage network. The reality of the modern data center is that there are often different types of storage devices serving
                         different needs. Further, it is increasingly the case that these are deployed with a variety of connectivity protocols—FC,
                         FCoE, iSCSI, Server Message Block (SMB), Network File System (NFS), parallel NFS (pNFS), object-based, and even
                         direct-attached storage (DAS) and distributed storage. Storage devices are no different than servers or clients in that
                         different protocols have different use cases, and “multiprotocolism” is in fact a natural state of affairs. With the rise of
                         server virtualization, the nature of the underlying storage protocol is hidden from the operating system and the
                         application as part of the normal hardware abstraction provided by the hypervisor. This, along with data migration
                         capabilities, gives much needed agility and flexibility in deploying best-in-class storage on a per-use-case basis.
                         At the network level, normal QoS and DCB provide all of the tools necessary for the separation of the various storage
                         traffic types. This allows not just the separation of storage and non-storage traffic, but also the separation of storage
                         traffic of different protocols, allowing for the safe convergence of any combination of storage types needed to meet the
                         needs of a given deployment.
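
                         As an illustration of this separation, the sketch below maps storage protocols to 802.1p priorities and computes the PFC enable bitmask a DCBx exchange would advertise. Priority 3 for FCoE follows common convention; the other assignments are assumptions made for the example.

                             # Hypothetical 802.1p priority plan separating storage traffic
                             # classes on a converged DCB network.

                             priority_plan = {
                                 "fcoe":  {"priority": 3, "lossless": True},   # PFC enabled
                                 "iscsi": {"priority": 4, "lossless": True},   # optional PFC
                                 "nfs":   {"priority": 2, "lossless": False},
                                 "smb":   {"priority": 2, "lossless": False},
                                 "lan":   {"priority": 0, "lossless": False},  # best effort
                             }

                             def pfc_enable_mask(plan):
                                 """Bitmask of priorities needing PFC, as negotiated via DCBx."""
                                 mask = 0
                                 for cls in plan.values():
                                     if cls["lossless"]:
                                         mask |= 1 << cls["priority"]
                                 return mask

                             assert pfc_enable_mask(priority_plan) == 0b00011000  # priorities 3, 4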








                         [Figure: in-server, in-rack, and end-of-row storage, plus big data/data de-duplication storage, all reaching the FC SAN backbones over the converged network]

                         Figure 5: Multiprotocol storage network



                       Standards That Allow for Server I/O and Access-Layer Convergence
                       Enhancements to Ethernet for Converged Data Center Networks—DCB
                       Ethernet, originally developed to handle traffic using a best-effort delivery approach, has mechanisms to support
                       lossless traffic through 802.3X Pause, but these are rarely deployed. When used in a converged network, Pause frames
                       can lead to cross traffic blocking and congestion. Ethernet also has mechanisms to support fine-grained queuing
                       (user priorities), but again, these are rarely deployed within the data center. The next logical step for Ethernet will be
                       to leverage these capabilities and enhance existing standards to meet the needs of convergence and virtualization,
                       propelling Ethernet into the forefront as the preeminent infrastructure for LANs, SANs, and high-performance
                       computing (HPC) clusters.
                       These enhancements benefit Ethernet I/O convergence (remembering that most servers have multiple 1GbE network
                       interface cards not for bandwidth but to support multiple network services), and existing Ethernet- and IP-based
                       storage protocols such as NAS and iSCSI. These enhancements also provide the appropriate platform for supporting
                       FCoE. In the early days when these standards were being developed and before they moved under the auspices of the
                       IEEE, the term Converged Enhanced Ethernet (CEE) was used to identify them.
                       DCB—A Set of IEEE Standards. Ethernet needed a variety of enhancements to support I/O, network convergence,
                       and server virtualization. Server virtualization is covered in other Juniper white papers, even though it is part of the
                       DCB protocol set. With respect to I/O and network convergence, the development of new standards began with the
                       following existing standards:

                       •	 User Priority for Class of Service—802.1p—which already allows identification of eight separate lanes of traffic (used as-is)
                       •	 Ethernet Flow Control (Pause, symmetric, and/or asymmetric flow control)—802.3X—which is leveraged for priority flow control (PFC)
                       •	 MAC Control Frame for PFC—802.3bd—to allow 802.3X to apply to individual user priorities (modified)

                       A number of new standards that leverage these components have been developed and have either been formally
                       approved or are in the final stages of the approval process. These include:







                            -- 	PFC—IEEE 802.1Qbb—which applies traditional 802.3X Pause to individual priorities instead of the port (a behavioral sketch follows this list)
                            -- 	Enhanced Transmission Selection (ETS)—IEEE 802.1Qaz—which is a grouping of priorities and bandwidth allocation to those groups
                            -- 	Quantized Congestion Notification (QCN)—IEEE 802.1Qau—which is a cross-network as opposed to a point-to-point backpressure mechanism
                            -- 	Data Center Bridging Exchange Protocol (DCBx), which is part of the ETS standard for DCB auto-negotiation
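
                         The behavioral difference PFC makes is easy to show in miniature: pause is asserted per priority queue, so a congested lossless class backs off its sender while best-effort classes keep sending (and drop on overflow). The queue model and thresholds below are illustrative, not taken from the standard or any product.

                             # Behavioral sketch of PFC (802.1Qbb): pause per priority queue
                             # instead of per port. Thresholds are illustrative only.

                             PAUSE_THRESHOLD = 80    # frames queued before pausing the peer
                             RESUME_THRESHOLD = 40

                             class PfcReceiveQueue:
                                 def __init__(self, priority, pfc_enabled):
                                     self.priority = priority
                                     self.pfc_enabled = pfc_enabled
                                     self.depth = 0
                                     self.paused = False

                                 def enqueue(self):
                                     """Returns a PFC action to send upstream, 'DROP' when a
                                     lossy queue overflows, or None."""
                                     if self.pfc_enabled:
                                         self.depth += 1
                                         if self.depth >= PAUSE_THRESHOLD and not self.paused:
                                             self.paused = True
                                             return f"PAUSE priority {self.priority}"
                                     else:
                                         if self.depth >= PAUSE_THRESHOLD:
                                             return "DROP"   # best-effort drops, never pauses
                                         self.depth += 1
                                     return None

                                 def dequeue(self):
                                     """Drain one frame; resume (zero-quanta PFC) once the queue
                                     falls back below the resume threshold."""
                                     if self.depth > 0:
                                         self.depth -= 1
                                     if self.paused and self.depth <= RESUME_THRESHOLD:
                                         self.paused = False
                                         return f"RESUME priority {self.priority}"
                                     return None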
                         The final versions of the standards specify the minimum requirements for compliance, detail the maximum external
                         requirements, and also describe in some detail the options for implementing internal behavior, including the downsides
                         of some lower-cost but standards-compliant ways of implementing DCB. It is important to note that these
                         standards are separate from the efforts to solve the L2 multipathing issues, which are not technically necessary to
                         make convergence work. Also, neither these standards nor those around L2 multipathing address a number of other
                         challenges that arise when networks are converged and flattened.
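
                         The ETS behavior shown in Figure 6 (offered versus realized traffic) can be sketched as a weighted allocation in which bandwidth unused by one priority group is redistributed to groups with unserved demand. The greedy redistribution below is a simplifying assumption; real schedulers typically redistribute in proportion to group weights.

                             # Sketch of ETS (802.1Qaz) bandwidth allocation: each priority
                             # group is guaranteed a share of link bandwidth, and unused
                             # bandwidth is redistributed (work conserving).

                             def ets_allocate(link_bw, groups):
                                 """groups: {name: (weight_percent, offered_load)} -> realized."""
                                 realized = {g: min(offered, link_bw * w / 100.0)
                                             for g, (w, offered) in groups.items()}
                                 spare = link_bw - sum(realized.values())
                                 # Hand spare bandwidth to groups with unserved demand.
                                 for g, (w, offered) in groups.items():
                                     if spare <= 0:
                                         break
                                     extra = min(spare, offered - realized[g])
                                     realized[g] += extra
                                     spare -= extra
                                 return realized

                             # 10 Gb/s link; illustrative weights: LAN 20%, storage 50%, HPC 30%.
                             print(ets_allocate(10.0, {"lan": (20, 1.0),
                                                       "storage": (50, 6.0),
                                                       "hpc": (30, 3.0)}))
                             # LAN offers only 1.0 of its 2.0 guarantee, so storage is realized
                             # at 6.0 rather than its guaranteed 5.0.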


                         [Figure: top panel shows PFC on a physical port, with per-priority pause issued on PFC-enabled TX queue/RX buffer pairs while PFC-disabled queues keep sending and drop on overflow; bottom panel shows ETS class groups mapping the eight transmit queues into bandwidth groups, with offered versus realized traffic over time]

                         Figure 6: PFC, ETS, and QCN


                         Enhancements to Fibre Channel for Converged Data Center Networks—FCoE
                         FCoE—the protocol developed within T11. The FCoE protocol was developed by the T11 Technical
                         Committee—a subgroup of the International Committee for Information Technology Standards (INCITS)—as part of
                         the Fibre Channel Backbone 5 (FC-BB-5) project. The standard was passed over to INCITS for public comment and
                         final ratification in 2009, and has since been formally ratified. In 2009, T11 started development work on Fibre Channel
                         Backbone 6 (FC-BB-6), which is intended to address a number of issues not covered in the first standard and to develop
                         a number of new deployment scenarios.
                         FCoE was designed to allow organizations to move to Ethernet-based storage while, at least in theory, minimizing the
                         cost of change. To the storage world, FCoE is, in many ways, just FC with a new physical media type; many of the tools
                         and services remain the same. To the Ethernet world, FCoE is just another upper level protocol riding over Ethernet.
                         The FC-BB-5 standard clearly defines all of the details involved in mapping FC through an Ethernet layer, whether
                         directly or through simplified L2 connectivity. It lays out both the responsibilities of the FCoE-enabled endpoints and FC
                         fabrics as well as the Ethernet layer. Finally, it clearly states the additional security mechanisms that are recommended
                         to maintain the level of security that a physically separate SAN traditionally provides. Overall, apart from the scale-
                         up and scale-down aspects, FC-BB-5 defines everything needed to build and support the products and solutions
                         discussed earlier.
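
                         One concrete example of the detail FC-BB-5 specifies is the Fabric-Provided MAC Address (FPMA): a VN_Port's Ethernet MAC is built by concatenating the 24-bit FC-MAP prefix (default 0E:FC:00) with the 24-bit FC_ID the fabric assigns at fabric login, so the Ethernet address is derived from, and tied to, the FC address. A small sketch:

                             # FPMA construction per FC-BB-5: FC-MAP prefix + 24-bit FC_ID.

                             DEFAULT_FC_MAP = 0x0EFC00

                             def fpma(fc_id, fc_map=DEFAULT_FC_MAP):
                                 """Build a VN_Port MAC from the FC-MAP prefix and FC_ID."""
                                 if not 0 <= fc_id <= 0xFFFFFF:
                                     raise ValueError("FC_ID must be a 24-bit value")
                                 raw = (fc_map << 24) | fc_id
                                 return ":".join(f"{(raw >> s) & 0xFF:02x}"
                                                 for s in range(40, -1, -8))

                             assert fpma(0x010203) == "0e:fc:00:01:02:03"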







                         While the development of FCoE as an industry standard will bring the deployment of unified data center infrastructures
                         closer to reality, FCoE by itself is not enough to complete the necessary convergence. Many additional enhancements
                         to Ethernet and changes to the way networking products are designed and deployed are required to make it a viable,
                         useful, and pragmatic implementation. Many, though not all, of the additional enhancements are provided by the
                         standards developed through the IEEE DCB committee. In theory, the combination of the DCB and FCoE standards
                         allows for full network convergence. In reality, they only solve the problem for relatively small-scale data centers.
                         Applying these techniques to larger deployments means using these protocols purely for server- and access-layer I/O
                         convergence through FCoE transit switches (DCB switches with FIP snooping) and FCoE-FC gateways (using N_Port ID
                         Virtualization to eliminate SAN scaling and heterogeneous support issues).
                         Juniper Networks EX4500 and EX4550 Ethernet Switches, and Juniper Networks QFX3500 Switch, all support an FCoE
                         transit switch mode. The QFX3500 also supports FCoE-FC gateway mode. These products are industry firsts in many ways:
                         1.	The EX4500 and QFX3500 switches are fully standards-based with rich implementations from both a DCB and FC-
                            BB-5 perspective.
                         2.	The EX4500 and QFX3500 are purpose-built FCoE transit switches.
                         3.	 The QFX3500 is a purpose-built FCoE-FC gateway, which includes fungible combined Ethernet/Fibre Channel ports.
                         4.	 The QFX3500 features a single Packet Forwarding Engine (PFE) design.
                         5.	The EX4500 and QFX3500 switches both include feature-rich L3 capabilities.
                         6.	 The QFX3500 supports low latency with cut-through switching.


                         Conclusion
                         The Juniper Networks QFabric System is the first true single-tier fabric built to solve all of the challenges posed
                         by large-scale convergence. The QFX3500 is the first fully FC-BB-5-enabled gateway capable of easily supporting
                         upstream DCB switches, including third-party embedded blade shelf switches. The QFabric System is the only solution
                         today allowing customers to efficiently deploy FCoE convergence at scale.
                         Industry firsts in many ways, EX4500, EX4550, QFX3500, QFX3600, and QFabric switches all support an FCoE transit
                         switch mode, and the QFX3500 and QFabric System also support FCoE-FC gateway mode. They are fully standards-
                         based with rich implementations from both a DCB and FC-BB-5 perspective and feature rich L3 capabilities. The
                         QFX3500 and QFabric System are purpose-built FCoE-FC gateways, which include fungible combined Ethernet/FC
                         ports, a single PFE design, and low latency cut-through switching. Moreover, the QFX3500 Switch, QFX3600 Switch,
                         and QFabric System are the first solutions on the market to support FC-BB-6 FCoE transit switch mode.
                         There are a number of very practical server I/O access-layer convergence topologies that can be used as steps along
                         the path to full network convergence. During 2011 and 2012, further events such as LAN on motherboard (LoM), quad
                         small form-factor pluggable transceiver (QSFP), 40GbE, and the FCoE Direct Discovery Direct Attach model will further
                         bring Ethernet economics to FCoE convergence efforts.


                         About Juniper Networks
                         Juniper Networks is in the business of network innovation. From devices to data centers, from consumers to cloud
                         providers, Juniper Networks delivers the software, silicon and systems that transform the experience and economics
                         of networking. The company serves customers and partners worldwide. Additional information can be found at
                         www.juniper.net.




Corporate and Sales Headquarters                    APAC Headquarters                        EMEA Headquarters                To purchase Juniper Networks solutions,
Juniper Networks, Inc.                              Juniper Networks (Hong Kong)             Juniper Networks Ireland         please contact your Juniper Networks
1194 North Mathilda Avenue                          26/F, Cityplaza One                      Airside Business Park            representative at 1-866-298-6428 or
Sunnyvale, CA 94089 USA                             1111 King’s Road                         Swords, County Dublin, Ireland   authorized reseller.
Phone: 888.JUNIPER (888.586.4737)                   Taikoo Shing, Hong Kong                  Phone: 35.31.8903.600
or 408.745.2000                                     Phone: 852.2332.3636                     EMEA Sales: 00800.4586.4737
Fax: 408.745.2100                                   Fax: 852.2574.7803                       Fax: 35.31.8903.601
www.juniper.net

Copyright 2012 Juniper Networks, Inc. All rights reserved. Juniper Networks, the Juniper Networks logo, Junos,
NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other
countries. All other trademarks, service marks, registered marks, or registered service marks are the property of
their respective owners. Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper
Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.

2000500-001-EN          Oct 2012                       Printed on recycled paper




Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...Neo4j
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherRemote DBA Services
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...apidays
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?Igalia
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoffsammart93
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdflior mazor
 

Kürzlich hochgeladen (20)

Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
HTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation StrategiesHTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation Strategies
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century education
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 

FCOE Storage Convergence Across the Data Center with the Juniper Networks QFabric System

  • 1. White Paper FCoE Storage Convergence Across the Data Center with the Juniper Networks QFabric System Copyright © 2012, Juniper Networks, Inc. 1
  • 2. White Paper - FCoE Storage Convergence Across the Data Center with the Juniper Networks QFabric System Table of Contents Executive Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 Access-Layer Convergence Modes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 Understanding the Layout of a Typical Data Center and Organization of the Data Center Teams . . . . . . . . . . . . . . . . . . . . . . 5 Applying the Network Topology to a Typical Data Center. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 Deployment of Server POD-Wide FCoE Transit Switch to FCoE-Enabled FC SAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 The Implications of Multiprotocol Data Center Networks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 Standards That Allow for Server I/O and Access-Layer Convergence. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 Enhancements to Ethernet for Converged Data Center Networks—DCB. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 Enhancements to Fibre Channel for Converged Data Center Networks—FCoE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 Conclusion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 About Juniper Networks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 List of Figures Figure 1: The phases of convergence, from separate networks, to access layer convergence, to the fully converged network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 Figure 2: Operation FCoE transit switch vs. FCoE-FC gateway. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 Figure 3: Typical data center layout and management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 Figure 4: Large-scale converged access SAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 Figure 5: Multiprotocol storage network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 Figure 6: PFC ETS and QCN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 2 Copyright © 2012, Juniper Networks, Inc.
Executive Summary

Since 2011, customers have finally been able to invest in convergence-enabled equipment and begin reaping the benefits of convergence in their data centers. With the first wave of standards now complete—both the IEEE Data Center Bridging (DCB) enhancements to Ethernet and the InterNational Committee for Information Technology Standards (INCITS) T11 FC-BB-5 standard for Fibre Channel over Ethernet (FCoE)—enterprises can benefit from server- and access-layer I/O convergence while continuing to leverage the investment in their existing Fibre Channel (FC) backbones.

Other Juniper Networks white papers, focusing specifically on the Juniper Networks® QFX3500 top-of-rack switch, already address the general concepts of convergence and the protocols and deployments possible with FCoE transit switches and FCoE-FC gateways. Another white paper covers the end-to-end convergence possibilities resulting from the VN2VN capabilities of FC-BB-6. This white paper focuses on the ability to deploy a single, simple, large-scale converged access layer that supports not only individual racks or rows of racks but entire server points of delivery (PODs) or halls consisting of thousands of servers.

The Juniper Networks QFabric™ family of products offers a revolutionary approach that delivers dramatic improvements in data center performance, operating costs, and business agility for enterprises, high-performance computing systems, and cloud providers. The QFabric family implements a single-tier network in the data center, improving speed, scale, and efficiency by removing legacy barriers and increasing business agility. The QFX3000-G QFabric System can scale up to 6,144 ports across 128 QFX3500 or QFX3600 QFabric Nodes, while the QFX3000-M QFabric System, designed for mid-size deployments, supports up to 768 ports across 16 QFabric Nodes.

Convergence using FCoE is proceeding as a steady migration from what could be called single-hop, first-device, or shallow access convergence, to multihop or deep access convergence, and eventually to end-to-end convergence. Viewed another way, it is proceeding from convergence within a blade server shelf, to convergence in the rack, to convergence across a row of racks, to convergence across a server area, and finally to convergence all the way to storage. Most of the benefits are realized once convergence spans the entire server area.

Figure 1: The phases of convergence, from separate networks, to access layer convergence, to the fully converged network
Introduction

The network is the critical enabler of all services delivered from the data center. A simple, streamlined, and scalable data center network fabric can deliver greater efficiency and productivity, as well as lower operating costs. Such a network also allows the data center to support much higher levels of business agility, rather than becoming a bottleneck that hinders a company from releasing new products or services. To allow businesses to make sound investment decisions, this white paper looks at the following areas to clarify the possible scale of convergence, based upon the solutions and topologies that can be deployed in 2012:

1. Briefly review the different types of convergence-capable solutions and how these product types can be deployed to support convergence at scale.
2. Look at the typical physical layout and management of the data center and how these relate to convergence at large scale.
3. Look forward to some of the new product and solution capabilities expected over the next couple of years.

Access-Layer Convergence Modes

When buying a converged platform, it is possible to deploy products based on three very different modes of operation. Products on the market today may be capable of one or more of these modes, depending on hardware and software configuration and license enablement. A given data center network may have multiple hops and tiers using different hardware and software combinations and permutations. The capabilities can in principle be mixed with other features such as Layer 2 multipathing mechanisms (TRILL, MC-LAG) and fabrics (Juniper Networks QFabric architecture and Virtual Chassis technology). The three modes are listed below; a minimal sketch contrasting the first two follows Figure 2.

• FCoE transit switch—DCB switch with FCoE Initialization Protocol (FIP) snooping. Largely managed as a LAN device and acting as a multiplexer from a storage area network (SAN) perspective.
• FCoE-FC gateway—using N_Port ID Virtualization (NPIV) proxy. Likely to be managed as both a LAN and SAN device, particularly if it's in the general Ethernet/IP data path.
• FCoE-FC switch—full Fibre Channel Forwarder (FCF) capability. May be managed as both a LAN and SAN device or just as a SAN device, depending on its location in the network.

Figure 2: Operation FCoE transit switch vs. FCoE-FC gateway
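To make the transit-switch mode concrete, the following is a minimal, hypothetical Python sketch of the FIP snooping idea: the switch watches FIP login exchanges and installs ACL-like filters so that only FCoE MAC addresses granted a fabric login by the FCF may source FCoE frames on server-facing ports. The class name, the simplified frame model, and the accepted_by_fcf flag are illustrative assumptions for this sketch, not a Juniper or standard API.

# Minimal sketch of FIP snooping on a DCB transit switch (illustrative only;
# the data structures here are assumptions, not product configuration).

FIP_ETHERTYPE = 0x8914   # FIP control protocol, per FC-BB-5
FCOE_ETHERTYPE = 0x8906  # FCoE data frames, per FC-BB-5

class FipSnoopingTransitSwitch:
    def __init__(self):
        # (port, source MAC) pairs permitted to send FCoE after a snooped login.
        self.allowed = set()

    def on_frame(self, port, ethertype, src_mac, accepted_by_fcf=False):
        """Inspect a frame arriving on a server-facing port."""
        if ethertype == FIP_ETHERTYPE:
            # FIP frames (discovery, FLOGI/FDISC, logout) are forwarded toward
            # the FCF; the switch snoops the FCF's accept to learn which
            # VN_Port MAC has been granted a fabric login.
            if accepted_by_fcf:
                self.allowed.add((port, src_mac))
            return "forward-to-fcf"
        if ethertype == FCOE_ETHERTYPE:
            # Data-path enforcement: only MACs with a snooped login may source
            # FCoE frames, preserving FC-like access control even though this
            # device is not itself an FCF and consumes no FC domain ID.
            return "forward" if (port, src_mac) in self.allowed else "drop"
        return "normal-ethernet-forwarding"

switch = FipSnoopingTransitSwitch()
print(switch.on_frame(1, FCOE_ETHERTYPE, "0e:fc:00:01:01:01"))      # drop
switch.on_frame(1, FIP_ETHERTYPE, "0e:fc:00:01:01:01", True)        # login snooped
print(switch.on_frame(1, FCOE_ETHERTYPE, "0e:fc:00:01:01:01"))      # forward

An FCoE-FC gateway goes one step further: it terminates the Ethernet side as VF_Ports and proxies each server login onto its FC N_Port using NPIV, which is why it appears to the FC SAN as an aggregation of hosts rather than as another switch.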
When trying to understand these device capabilities, there are certain details that are often neglected but are critical to designing a converged network. The most important is that FC over Ethernet means that many things are specific to Ethernet Layer 2 domains. In this context, it is not the device that is configured for one of the modes just listed but rather the VLAN. This means that a device can operate in multiple modes simultaneously, while at the same time operating in the same mode for multiple logical SAN fabrics on different VLANs. Looking at some examples specific to the capabilities of the QFX3500 Switch and QFabric Systems:

• FCoE transit switch—DCB switch with FIP snooping. Each VLAN can be an independent VN2VF or VN2VN VLAN for different logical FC SAN fabrics.
• FCoE-FC gateway—using N_Port ID Virtualization (NPIV) proxy. The FC ports can connect to more than one FC SAN fabric and then be mapped as independent gateway functions to different VLANs.

There are two key design use cases for these configurations, illustrated in the sketch after this list:

1. Allowing customers to choose between either physical dual rail/dual SAN in FCoE, or logical dual rail/dual SAN across a common infrastructure
2. Allowing multiple logical SANs to exist within the same physical fabric, leveraging Ethernet quality of service (QoS) as required
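As a hedged illustration of the VLAN-scoped modes above, the sketch below models one hypothetical QFX3500-class device carrying two logical SAN fabrics: one VLAN operating in FCoE-FC gateway mode mapped to SAN A, and another in transit-switch mode multiplexed toward SAN B. The table layout, names, and VLAN numbers are assumptions chosen for clarity, not configuration syntax.

# Hypothetical per-VLAN mode table for one converged access device.
# Mode is a property of the VLAN, not the box, so a single device can serve
# several logical SAN fabrics in different modes at the same time.

from dataclasses import dataclass

@dataclass
class FcoeVlan:
    vlan_id: int
    mode: str    # "transit" (FIP snooping) or "gateway" (NPIV proxy)
    fabric: str  # which logical FC SAN fabric this VLAN belongs to

DEVICE_VLANS = [
    FcoeVlan(vlan_id=1001, mode="gateway", fabric="SAN A"),  # FC ports proxied to SAN A
    FcoeVlan(vlan_id=1002, mode="transit", fabric="SAN B"),  # multiplexed toward SAN B
]

def fabrics_by_mode(vlans):
    """Group logical fabrics by the mode each VLAN runs in."""
    out = {}
    for v in vlans:
        out.setdefault(v.mode, []).append((v.vlan_id, v.fabric))
    return out

print(fabrics_by_mode(DEVICE_VLANS))
# {'gateway': [(1001, 'SAN A')], 'transit': [(1002, 'SAN B')]}

Physical dual rail would place the two VLANs on two separate devices; logical dual rail keeps them on shared hardware, separated by VLAN membership and QoS.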
Understanding the Layout of a Typical Data Center and Organization of the Data Center Teams

Physical instantiation matters, even in a virtual world. Data centers are built by laying out rows of racks or, for many larger data centers, PODs, areas, or halls, each of which contains multiple rows of racks to house the equipment. Different areas of the data center are then allocated for different purposes. For the purposes of this white paper, the most critical separation to understand is that some regions of the data center house servers while other regions house storage along with the backbone FC SAN. Typically, there will also be specific locations where Layer 3 core routers, firewalls, and external metro area network (MAN) and WAN connections are provided. The area housing storage may be subdivided into FC disk and tape racks, while the area housing servers may be subdivided into different server types such as blade, rack-mount Intel-based, RISC Unix-based, mainframe, and so on.

In addition to understanding the physical layout, it is important to understand that data centers are often operated by multiple teams with overlapping responsibilities. At the most extreme, there may be teams for desktop support (particularly now with virtual desktop infrastructure, or VDI), as well as for applications, servers/operating systems/hypervisors, the Ethernet network, the FC network, network-attached storage (NAS), block storage, tape/backup/archive, and facilities (cabling, power, and cooling).

Figure 3: Typical data center layout and management

Related to the physical data center, there is also a change in deployment coming, in part as a result of the move towards 10GbE and later 40GbE and 100GbE, the specifications of connectivity at these speeds, and the need for less oversubscription within the network as a whole. The implication of all of these conditions is the need to move many deployments towards top-of-rack and sometimes end-of-row rather than end-of-POD or end-of-data center designs. Along with convergence in general, this tends to result in more physical boxes in the overall network, especially when compared to the typical end-of-data center/storage POD-based FC SAN design.

Applying the Network Topology to a Typical Data Center

While looking at individual product types is important and interesting, it is far more important to look at their role in large-scale data center network deployments. A complex and poorly designed network is just that, and no box, regardless of its mode of operation, will change that. Similarly, a well-designed topology with a clear understanding of what makes sense functionally at each layer allows for large converged networks that are deployable and manageable. The practical span or radius of a converged network is no worse than the equivalent FC SAN fabric and, if designed with care, can far exceed the limits of the traditional SAN.

As with any SAN deployment, bandwidth, latency, and the maximum number of device hops should be controlled, but an FCoE transit switch does not consume a domain ID, allowing a far larger total device count. An FCoE transit switch, like an FCF but unlike an FCoE-FC gateway, can load-balance at the OX_ID or exchange level. With sophisticated QoS and FIP snooping, there is no loss of manageability for such a device compared to an FCF. Having removed the complexity of gateways and the protocol scaling limits of FCFs, a well-designed large-scale Layer 2 domain allows for highly scalable deployments. Note, however, that traditional hop count limits should be applied to all switch or link types; for instance, the five link hops or six device hops limit between server and storage still applies, no matter whether the device is an FCoE transit switch, gateway, or FCF. A minimal hop-count check is sketched at the end of this section.

The increased scale possible from a well-designed converged network compared to a traditional FC SAN is critically important, as the move to 10GbE/40GbE is driving deployments from end-of-hall or end-of-row to top-of-rack, naturally increasing the network device count in the data center. Indeed, no matter the fabric of choice, it is now possible to build, deploy, and manage thousands or even tens of thousands of FCoE-connected servers with just a pair of FCFs hosting the FC disk, FC tape, FICON mainframe, and the high-end servers that must remain FC-attached until they are available with 40GbE converged network adapters (CNAs).

Understanding that most data centers have regions for servers and regions for storage, it quickly becomes clear that the optimal converged network design is to deploy a highly scalable Layer 2 Ethernet and Layer 3 IP DCB-enabled network across the regions of the data center housing servers, and to minimize storage enablement to just that required to multiplex the traffic toward the regions of the data center housing the storage and FC backbone.
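The hop-count and domain ID rules above lend themselves to a simple design check. The sketch below walks a hypothetical server-to-storage path, verifies the six-device-hop budget cited above, and counts domain IDs only for FCFs, since transit switches and gateways consume none. The path itself is an invented example.

# Hypothetical design check for a converged server-to-storage path.
# Only FCFs consume an FC domain ID; transit switches and gateways do not,
# which is what lets the converged access layer scale its device count.

MAX_DEVICE_HOPS = 6   # traditional server-to-storage device limit cited above

def check_path(devices):
    """devices: ordered list of ("name", "fcf" | "transit" | "gateway")."""
    hops = len(devices)
    domain_ids = sum(1 for _, kind in devices if kind == "fcf")
    return {"device_hops": hops,
            "domain_ids_consumed": domain_ids,
            "within_budget": hops <= MAX_DEVICE_HOPS}

path = [
    ("top-of-rack transit switch", "transit"),
    ("POD aggregation transit switch", "transit"),
    ("FCoE-FC gateway", "gateway"),
    ("FC SAN backbone switch", "fcf"),
]
print(check_path(path))
# {'device_hops': 4, 'domain_ids_consumed': 1, 'within_budget': True}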
Deployment of Server POD-Wide FCoE Transit Switch to FCoE-Enabled FC SAN

As previously noted, this paper focuses on deployments that apply to server access-layer convergence. As such, it is assumed that this access layer is in turn connecting to a Fibre Channel backbone. The term "Fibre Channel backbone" implies a traditional FC SAN of some sort, which is attached to the FC disk and tape, as well as, most likely, existing FC servers.

When examining FCoE and convergence at scale, this physical separation not only shows the limitations of the "FCF everywhere" model, but also demonstrates the inadequacies of the "top-of-rack only" converged access model. Simply put, in a modern data center it is neither practical nor desirable to have cable runs from every single server rack to the storage racks. A rational, simple design is to have server racks connecting to one side of the fabric and the FCoE-enabled FC SAN backbone connected to the other side of the fabric. This, of course, is much the same way other services and appliances are connected to the fabric, be they routing services to the MAN/WAN, firewall services, and so on.
Figure 4: Large-scale converged access SAN

The Implications of Multiprotocol Data Center Networks

A very common but largely unrecognized (at least by marketing folks) phenomenon is the rise of the multiprotocol storage network. The reality of the modern data center is that there are often different types of storage devices serving different needs. Further, it is increasingly the case that these are deployed with a variety of connectivity protocols—FC, FCoE, iSCSI, Server Message Block (SMB), Network File System (NFS), parallel NFS (pNFS), object-based, and even direct-attached storage (DAS) and distributed storage. Storage devices are no different than servers or clients in that different protocols have different use cases, and "multiprotocolism" is in fact a natural state of affairs.

With the rise of server virtualization, the nature of the underlying storage protocol is hidden from the operating system and the application as part of the normal hardware abstraction provided by the hypervisor. This, along with data migration capabilities, gives much-needed agility and flexibility, allowing best-in-class storage to be deployed on a per-use-case basis.

At the network level, normal QoS and DCB provide all of the tools necessary for the separation of the various storage traffic types. This allows not just the separation of storage and non-storage traffic, but also the separation of storage traffic of different protocols, allowing for the safe convergence of any combination of storage types needed to meet the needs of a given deployment. A minimal priority-mapping sketch follows Figure 5.

Figure 5: Multiprotocol storage network
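As a hedged illustration of that separation, the sketch below maps storage and LAN traffic classes to IEEE 802.1p priorities and ETS priority groups with example bandwidth shares. The specific priority values and percentages are illustrative assumptions; FCoE conventionally rides on priority 3 with PFC enabled, while the rest are deployment choices.

# Illustrative mapping of converged traffic classes to 802.1p priorities
# and ETS priority groups. The values are example choices, not a standard.

TRAFFIC_CLASSES = {
    # name         802.1p       lossless (PFC)  ETS group   guaranteed bandwidth %
    "fcoe":      {"priority": 3, "pfc": True,  "ets_group": 1, "bw_pct": 40},
    "iscsi":     {"priority": 4, "pfc": False, "ets_group": 2, "bw_pct": 20},
    "nfs_smb":   {"priority": 2, "pfc": False, "ets_group": 2, "bw_pct": 0},  # shares group 2
    "lan_other": {"priority": 0, "pfc": False, "ets_group": 0, "bw_pct": 40},
}

def group_bandwidth(classes):
    """Sum the guaranteed bandwidth per ETS priority group."""
    totals = {}
    for cfg in classes.values():
        totals[cfg["ets_group"]] = totals.get(cfg["ets_group"], 0) + cfg["bw_pct"]
    assert sum(totals.values()) == 100, "ETS group shares should cover the link"
    return totals

print(group_bandwidth(TRAFFIC_CLASSES))  # {1: 40, 2: 20, 0: 40}

Separating each storage protocol onto its own priority, with PFC enabled only where losslessness is required, is what makes it safe to converge several storage types on one physical fabric.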
Standards That Allow for Server I/O and Access-Layer Convergence

Enhancements to Ethernet for Converged Data Center Networks—DCB

Ethernet, originally developed to handle traffic using a best-effort delivery approach, has mechanisms to support lossless traffic through 802.3X Pause, but these are rarely deployed. When used in a converged network, Pause frames can lead to cross-traffic blocking and congestion. Ethernet also has mechanisms to support fine-grained queuing (user priorities), but again, these are rarely deployed within the data center. The next logical step for Ethernet is to leverage these capabilities and enhance existing standards to meet the needs of convergence and virtualization, propelling Ethernet into the forefront as the preeminent infrastructure for LANs, SANs, and high-performance computing (HPC) clusters.

These enhancements benefit Ethernet I/O convergence (remembering that most servers have multiple 1GbE network interface cards not for bandwidth but to support multiple network services) and existing Ethernet- and IP-based storage protocols such as NAS and iSCSI. They also provide the appropriate platform for supporting FCoE. In the early days, when these standards were being developed and before they moved under the auspices of the IEEE, the term Converged Enhanced Ethernet (CEE) was used to identify them.

DCB—a set of IEEE standards. Ethernet needed a variety of enhancements to support I/O convergence, network convergence, and server virtualization. Server virtualization is covered in other Juniper white papers, even though it is part of the DCB protocol set. With respect to I/O and network convergence, the development of new standards began with the following existing standards:

• User Priority for Class of Service—802.1p—which already allows identification of eight separate lanes of traffic (used as-is)
• Ethernet Flow Control (Pause, symmetric, and/or asymmetric flow control)—802.3X—which is leveraged for priority flow control (PFC)
• MAC Control Frame for PFC—802.3bd—which allows 802.3X to apply to individual user priorities (modified)

A number of new standards that leverage these components have been developed and have either been formally approved or are in the final stages of the approval process. These include:
• PFC—IEEE 802.1Qbb—which applies traditional 802.3X Pause to individual priorities instead of the whole port
• Enhanced Transmission Selection (ETS)—IEEE 802.1Qaz—which is a grouping of priorities and bandwidth allocation to those groups
• Quantized Congestion Notification (QCN)—IEEE 802.1Qau—which is a cross-network, as opposed to point-to-point, backpressure mechanism
• Data Center Bridging Exchange Protocol (DCBx)—which is part of the ETS standard and provides DCB auto-negotiation

The final versions of the standards specify minimum requirements for compliance, detail the maximum in terms of external requirements, and also describe in some detail the options for implementing internal behavior and the downsides of some lower-cost but standards-compliant ways of implementing DCB. It is important to note that these standards are separate from the efforts to solve the Layer 2 multipathing issues, which are not technically necessary to make convergence work. Also, neither these standards nor those around Layer 2 multipathing address a number of other challenges that arise when networks are converged and flattened. A toy model of how PFC and ETS interact follows Figure 6.

Figure 6: PFC ETS and QCN
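A rough intuition for the PFC and ETS interaction: PFC pauses individual priorities when their receive buffers fill, while ETS shares link bandwidth among priority groups. The sketch below is an assumption-laden simplification of 802.1Qbb/802.1Qaz behavior, not switch firmware; the group layout and numbers are invented for illustration.

# Toy model of PFC + ETS scheduling on one egress link (illustrative only).

PFC_ENABLED = {3}   # e.g., the FCoE priority is the lossless one
ETS_GROUPS = {1: {"priorities": {3}, "weight": 40},
              2: {"priorities": {4, 2}, "weight": 20},
              0: {"priorities": {0}, "weight": 40}}

def schedule(link_gbps, offered_gbps_by_priority, paused_priorities=set()):
    """Split link bandwidth across ETS groups, skipping PFC-paused priorities.

    offered_gbps_by_priority: {802.1p priority: offered load in Gb/s}
    paused_priorities: priorities currently stopped by a PFC pause frame
    """
    granted = {}
    spare = link_gbps
    for gid, group in ETS_GROUPS.items():
        share = link_gbps * group["weight"] / 100.0
        demand = sum(offered_gbps_by_priority.get(p, 0.0)
                     for p in group["priorities"] if p not in paused_priorities)
        used = min(share, demand)   # a group only consumes what it can use...
        granted[gid] = used
        spare -= used               # ...and leftover capacity stays spare
    return granted, spare           # real ETS lends the spare to busy groups

result, spare = schedule(10.0, {3: 6.0, 0: 6.0})
print(result, "spare:", spare)
# Group 1 (FCoE) and group 0 are each held to their 4 Gb/s guarantee, and the
# idle group's 2 Gb/s is the spare a work-conserving ETS scheduler would lend.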
Enhancements to Fibre Channel for Converged Data Center Networks—FCoE

FCoE—the protocol developed within T11. The FCoE protocol was developed by the T11 Technical Committee—a subgroup of the International Committee for Information Technology Standards (INCITS)—as part of the Fibre Channel Backbone 5 (FC-BB-5) project. The standard was passed to INCITS for public comment and final ratification in 2009, and has since been formally ratified. In 2009, T11 started development work on Fibre Channel Backbone 6 (FC-BB-6), which is intended to address a number of issues not covered in the first standard and to develop a number of new deployment scenarios.

FCoE was designed to allow organizations to move to Ethernet-based storage while, at least in theory, minimizing the cost of change. To the storage world, FCoE is, in many ways, just FC with a new physical media type; many of the tools and services remain the same. To the Ethernet world, FCoE is just another upper-layer protocol riding over Ethernet.

The FC-BB-5 standard clearly defines all of the details involved in mapping FC through an Ethernet layer, whether directly or through simplified Layer 2 connectivity. It lays out the responsibilities of the FCoE-enabled endpoints and FC fabrics as well as those of the Ethernet layer. Finally, it clearly states the additional security mechanisms that are recommended to maintain the level of security that a physically separate SAN traditionally provides. Overall, apart from the scale-up and scale-down aspects, FC-BB-5 defines everything needed to build and support the products and solutions discussed earlier. A minimal sketch of the frame mapping follows.
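As a hedged illustration of that mapping, the following sketch assembles a simplified FCoE frame: an Ethernet header with EtherType 0x8906 wrapping an FCoE header, the encapsulated FC frame, and an end-of-frame trailer. The general layout follows FC-BB-5, but the byte-level details here are deliberately simplified, and the addresses and payload are invented for the example.

import struct

FCOE_ETHERTYPE = 0x8906

def fcoe_encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw FC frame in a simplified FCoE/Ethernet envelope.

    Real FC-BB-5 encapsulation carries version bits, reserved fields, and
    encoded SOF/EOF ordered sets; this sketch keeps single code bytes for the
    delimiters just to show the layering.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(13) + b"\x2e"   # version/reserved bytes + SOFi3 code
    trailer = b"\x41" + bytes(3)        # EOFn code + reserved padding
    return eth_header + fcoe_header + fc_frame + trailer

# A VN_Port MAC in VN2VF mode is typically a fabric-provided MAC address
# (FPMA): the FC-MAP prefix 0E:FC:00 plus the 24-bit FC ID granted at login.
fpma = bytes.fromhex("0efc00") + bytes.fromhex("010203")   # FC ID 0x010203
frame = fcoe_encapsulate(dst_mac=bytes.fromhex("0efc00aabbcc"),
                         src_mac=fpma,
                         fc_frame=b"\x00" * 36)            # placeholder FC frame
print(len(frame), "bytes on the wire (before the Ethernet FCS)")

The point of the layering is exactly what the standard intends: the FC frame, header, and CRC pass through untouched, so the storage world sees familiar FC semantics while the Ethernet world sees just another EtherType.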
While the development of FCoE as an industry standard brings the deployment of unified data center infrastructures closer to reality, FCoE by itself is not enough to complete the necessary convergence. Many additional enhancements to Ethernet, and changes to the way networking products are designed and deployed, are required to make it a viable, useful, and pragmatic implementation. Many, though not all, of these additional enhancements are provided by the standards developed through the IEEE DCB committee.

In theory, the combination of the DCB and FCoE standards allows for full network convergence. In reality, they only solve the problem for relatively small-scale data centers. Applying these techniques to larger deployments involves using the protocols purely for server- and access-layer I/O convergence, through FCoE transit switches (DCB switches with FIP snooping) and FCoE-FC gateways (using N_Port ID Virtualization to eliminate SAN scaling and heterogeneous support issues). Juniper Networks EX4500 and EX4550 Ethernet Switches and the Juniper Networks QFX3500 Switch all support an FCoE transit switch mode; the QFX3500 also supports FCoE-FC gateway mode. These products are industry firsts in many ways:

1. The EX4500 and QFX3500 switches are fully standards-based, with rich implementations from both a DCB and an FC-BB-5 perspective.
2. The EX4500 and QFX3500 are purpose-built FCoE transit switches.
3. The QFX3500 is a purpose-built FCoE-FC gateway, which includes fungible combined Ethernet/Fibre Channel ports.
4. The QFX3500 features a single Packet Forwarding Engine (PFE) design.
5. The EX4500 and QFX3500 switches both include feature-rich Layer 3 capabilities.
6. The QFX3500 supports low latency with cut-through switching.

Conclusion

The Juniper Networks QFabric System is the first true single-tier fabric switch built to solve all of the challenges posed by large-scale convergence. The QFX3500 is the first fully FC-BB-5-enabled gateway capable of easily supporting upstream DCB switches, including third-party embedded blade shelf switches. The QFabric System is the only solution today that allows customers to efficiently deploy FCoE convergence at scale.

Industry firsts in many ways, the EX4500, EX4550, QFX3500, and QFX3600 switches and the QFabric System all support an FCoE transit switch mode, and the QFX3500 and QFabric System also support FCoE-FC gateway mode. They are fully standards-based, with rich implementations from both a DCB and an FC-BB-5 perspective, and they feature rich Layer 3 capabilities. The QFX3500 and QFabric System are purpose-built FCoE-FC gateways that include fungible combined Ethernet/FC ports, a single PFE design, and low-latency cut-through switching. Moreover, the QFX3500 Switch, QFX3600 Switch, and QFabric System are the first solutions on the market to support FC-BB-6 FCoE transit switch mode.

There are a number of very practical server I/O access-layer convergence topologies that can be used as steps along the path to full network convergence. During 2011 and 2012, further developments such as LAN on motherboard (LoM), the quad small form-factor pluggable (QSFP) transceiver, 40GbE, and the FCoE Direct Discovery Direct Attach model will continue to bring Ethernet economics to FCoE convergence efforts.

About Juniper Networks

Juniper Networks is in the business of network innovation.
From devices to data centers, from consumers to cloud providers, Juniper Networks delivers the software, silicon, and systems that transform the experience and economics of networking. The company serves customers and partners worldwide. Additional information can be found at www.juniper.net.