2. SDN switches – current status
An OpenFlow switch has OpenFlow “agent” control-plane logic on board.
The OpenFlow standard assumes the switch is “free-standing” – it does not really accommodate any other control plane.
3. Hybrid switch – why we need it
Transition mechanism - allows users to shift services to an OpenFlow basis gradually, as opposed to an all-or-nothing approach.
Missing features - provides functionality that OpenFlow does not support (yet) - or at least does not support in the version of OpenFlow actually available (e.g. DiffServ QoS functionality is only supported by OpenFlow 1.3, and even then only partly, but is widely supported by all Marvell switches today).
Correct behavior - supports functionality that OpenFlow COULD support, but that is better delegated to a switch-local implementation for scalability and efficiency reasons (e.g. OpenFlow COULD implement Triple-Play/L2 multicast distribution from the controller, but it is more efficient to let each switch handle this locally for its downstream clients than to do it on the controller for the entire network, especially since that would not provide any new benefit).
4. Hybridization approaches
Ships in the night - each side (OpenFlow and traditional) thinks it is alone, with no cooperation / coordination of their actions.
Integrated approach - allows the two sides to co-operate.
5. Ships in the night approach
The switch imposes traffic separation soon after ingress into two separate domains - OpenFlow and "other". The two common approaches are:
Separation by port (i.e. on some ports ALL traffic is OpenFlow, and on the other ports ALL traffic is non-OpenFlow)
Separation by VLAN (i.e. ALL traffic in some VLAN(s) is sent to OpenFlow, and ALL traffic in all other VLANs is non-OpenFlow)
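A minimal sketch of the two separation schemes, assuming hypothetical port and VLAN numbers (real switches perform this classification in the forwarding hardware, not in software):

    # "Ships in the night" ingress classification - each frame is assigned
    # to exactly one domain right after ingress. Port and VLAN numbers are
    # hypothetical; real switches do this step in the forwarding ASIC.
    from typing import Optional

    OPENFLOW_PORTS = {1, 2, 3}    # port-based: ALL traffic on these ports is OpenFlow
    OPENFLOW_VLANS = {100, 200}   # VLAN-based: ALL traffic in these VLANs is OpenFlow

    def classify_ingress(port: int, vlan: Optional[int]) -> str:
        if port in OPENFLOW_PORTS:
            return "openflow"
        if vlan is not None and vlan in OPENFLOW_VLANS:
            return "openflow"
        return "traditional"

    # An untagged frame on port 7 stays in the traditional domain;
    # any frame entering on port 1 belongs to the OpenFlow domain.
    assert classify_ingress(7, None) == "traditional"
    assert classify_ingress(1, None) == "openflow"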
6. Ships in the night – advantages and disadvantages
Simple to implement
Very hard (or impossible) to share information between nodes connected to OpenFlow ports/VLANs and nodes connected to non-OpenFlow ports/VLANs.
In effect, the user is forced to build two separate networks - a regular one, and a separate overlay OpenFlow network. Any servers needed for network operation (DHCP, DNS, RAS, AAA, ...) may have to be duplicated (or at least given two separate interfaces, one into each network).
In the most common case of VLAN-based separation, users usually can't use VLANs at all; moreover, depending on which mechanism is used to classify incoming traffic into VLANs, security concerns may arise (e.g. if users send VLAN-tagged traffic, they can potentially get into or out of the OpenFlow network, contrary to the desired result - see the sketch after this list).
In general, traffic sent to the OpenFlow side can only get the functionality OpenFlow supports, even if the underlying systems can do better.
Functionality that has to apply to all frames has to be implemented twice - once on the OpenFlow side (assuming that is possible) and once on the "traditional" side.
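To illustrate the VLAN-classification concern above, here is a small sketch (hypothetical VLAN IDs and port-to-VLAN map, not any particular switch's logic) contrasting a classifier that trusts user-supplied VLAN tags with one driven by the administrator's port assignments:

    # The security pitfall of VLAN-based separation: if classification
    # trusts the VLAN tag carried inside the frame (a user-controlled
    # field), a user can tag traffic with an OpenFlow VLAN ID and cross
    # the domain boundary.
    OPENFLOW_VLANS = {100}

    def classify_trusting_tags(frame_vlan_tag: int) -> str:
        # Insecure: the user chooses the tag, hence the domain.
        return "openflow" if frame_vlan_tag in OPENFLOW_VLANS else "traditional"

    def classify_by_port_assignment(ingress_port: int, port_vlan: dict) -> str:
        # Safer: the administrator's port-to-VLAN map decides; frame tags are ignored.
        return "openflow" if port_vlan[ingress_port] in OPENFLOW_VLANS else "traditional"

    # A user on a port assigned to VLAN 10 sends a frame tagged VLAN 100:
    assert classify_trusting_tags(100) == "openflow"                 # escapes the separation
    assert classify_by_port_assignment(7, {7: 10}) == "traditional"  # stays put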
7. Integrated Approach – By-Function Choice
OpenFlow is considered an additional control input to the single, integrated data plane.
Users can decide, function by function, whether it will be configured by:
the OpenFlow mechanism
the traditional mechanism
a combination of both
All traffic is subject to handling controlled by both sides (see the sketch below).
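A minimal sketch of the by-function choice, assuming an invented ownership table and function names (not a real switch API):

    # Integrated approach: each data-plane function is bound to the control
    # side(s) allowed to program it. Function names and bindings are invented
    # for illustration; a real hybrid switch exposes this via its management UI.
    function_owner = {
        "l2_forwarding": "openflow",     # FDB entries programmed via OpenFlow
        "l3_routing":    "traditional",  # routing protocols stay switch-local
        "qos":           "both",         # either side may install QoS rules
    }

    def accept_config(function: str, source: str) -> bool:
        owner = function_owner[function]
        return owner == "both" or owner == source

    assert accept_config("l3_routing", "openflow") is False  # rejected
    assert accept_config("qos", "openflow") is True          # accepted
    assert accept_config("qos", "traditional") is True       # also accepted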
8. Integrated approach – advantages and disadvantages
More complicated to implement
It is a superset - the user may still implement by-port or by-VLAN traffic separation when desired. However, this is now a CHOICE to be made by the system administrator, rather than a forced limit imposed by the switch implementation. Moreover, this is not an all-or-nothing choice: the user can set some ports or VLANs to OpenFlow-only, some to Traditional-only, and some to mixed-mode operation.
It is easier to migrate services to OpenFlow gradually, because when service/functionality X is migrated to OpenFlow, other needed services, still implemented by traditional means, can still apply to the same traffic (e.g. all traffic can still be subject to traditional L3 routing while OpenFlow overrides the L2 forwarding, allowing policy-controlled traffic engineering to set the links used to reach next-hop targets - see the sketch below).
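A sketch of that migration example, with invented table contents: one integrated pipeline in which an OpenFlow-installed entry overrides the traditionally learned L2 FDB entry, while L3 routing remains with the traditional side:

    # Gradual migration: OpenFlow overrides L2 forwarding for traffic
    # engineering, while the same packet remains subject to traditional L3
    # routing. MAC addresses and port names are invented for illustration.
    openflow_l2     = {"00:aa:bb:cc:dd:01": "port3"}  # OF-installed override
    traditional_fdb = {"00:aa:bb:cc:dd:01": "port1"}  # learned by the switch

    def l2_output_port(dst_mac: str) -> str:
        if dst_mac in openflow_l2:        # OpenFlow entry wins if present...
            return openflow_l2[dst_mac]
        return traditional_fdb[dst_mac]   # ...else fall back to the learned FDB

    # OpenFlow steers this MAC out port3 even though learning points at port1.
    assert l2_output_port("00:aa:bb:cc:dd:01") == "port3"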
9. Integrated approach – advantages and disadvantages (continued)
End user has better visibility - if OpenFlow is used as a control input to the general switch control implementation, its operation is naturally visible using the usual means system administrators are already familiar with. For example, in Marvell's hybrid switch the "show Running Configuration" CLI command (and its equivalents in GUI, SNMP and XML) will show both configurations originating from traditional channels and configurations coming from the OpenFlow side (a sketch follows this paragraph). This makes it much easier to understand and debug switch operation.
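As a sketch of that merged view (entry contents invented; this is not Marvell's actual CLI output), a unified running configuration can simply tag each entry with its origin:

    # One configuration store, two origins: the display command renders both
    # traditional and OpenFlow-originated entries in a single listing.
    running_config = [
        {"origin": "traditional", "line": "vlan 10 name office"},
        {"origin": "traditional", "line": "ip route 0.0.0.0/0 10.0.0.1"},
        {"origin": "openflow",    "line": "flow dst-mac 00:aa:bb:cc:dd:01 out port3"},
    ]

    def show_running_configuration() -> None:
        for entry in running_config:
            print(f"[{entry['origin']}] {entry['line']}")

    show_running_configuration()
    # [traditional] vlan 10 name office
    # [traditional] ip route 0.0.0.0/0 10.0.0.1
    # [openflow] flow dst-mac 00:aa:bb:cc:dd:01 out port3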
System administrator is given maximum capability and flexibility - any and all traffic can get any functionality the switch supports. There is nothing "forbidden" or available only to SOME traffic. Given that only the end user really knows what is needed in the field, this is a big advantage, since any assumptions by the switch implementer about how the switch "should" be used become, in reality, limitations on how the switch CAN be used.
It is possible (and easy) to create synergies - OpenFlow can be used to supplement traditional services, and can co-operate with the "traditional" control plane to create best-of-both-worlds combinations. For example, OpenFlow can be applied to a selected subset of the traffic, leaving the simple cases to be handled by traditional means (see the sketch below).
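A sketch of that synergy (the match predicate and action names are invented): OpenFlow rules catch the selected subset, and everything unmatched falls through to the traditional pipeline:

    # Synergy: OpenFlow handles only the selected subset of traffic; the
    # simple (unmatched) case falls through to traditional forwarding.
    openflow_rules = [
        # (match predicate, action) installed by the controller - illustrative only
        (lambda pkt: pkt["src_ip"].startswith("10.1."), "redirect-to-inspection"),
    ]

    def handle(pkt: dict) -> str:
        for match, action in openflow_rules:
            if match(pkt):
                return action              # special case: OpenFlow policy applies
        return "traditional-forwarding"    # simple case: handled switch-locally

    assert handle({"src_ip": "10.1.2.3"}) == "redirect-to-inspection"
    assert handle({"src_ip": "192.168.0.5"}) == "traditional-forwarding"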
10. Hybrid Switch Components and Interfaces
[Diagram: Hybrid Switch Components and Interfaces. An off-board management plane (Web browser via HTML, Telnet/SSH console via ASCII CLI, NMS via SNMP, XML/Netconf tools, OpenFlow C&M app via OF-Config) and an off-board control plane (SDN/OF program on an OF controller, via the OF wire protocol) connect to the on-board or local control plane (Web server, CLI parser, SNMP agent, OF-Config agent, OF agent, plus node-to-node protocols), which drives the on-board logic and the data plane's forwarding pipeline in the network device.]
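A minimal structural sketch of the diagram, with invented class and entry names: every management/control agent is a peer feeding the same on-board logic, which programs a single forwarding pipeline:

    # All control channels (CLI, SNMP, Web, OF-Config, OpenFlow agent) feed
    # one integrated data plane; each programmed entry keeps its origin,
    # which is what makes the merged running-config view possible.
    class ForwardingPipeline:
        def __init__(self):
            self.entries = []          # list of (origin, entry) pairs

        def program(self, entry, origin):
            self.entries.append((origin, entry))

    class Agent:
        def __init__(self, name, pipeline):
            self.name, self.pipeline = name, pipeline

        def apply(self, entry):
            self.pipeline.program(entry, origin=self.name)

    pipeline = ForwardingPipeline()
    cli, snmp, of_agent = (Agent(n, pipeline) for n in ("cli", "snmp", "of-agent"))
    cli.apply("vlan 10 name office")                  # traditional channel
    of_agent.apply("flow dst-mac 00:aa:bb:cc:dd:01")  # OpenFlow channel
    assert [origin for origin, _ in pipeline.entries] == ["cli", "of-agent"]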
Another example illustrates closer co-operation: OpenFlow can control the list of POSSIBLE recipients of a Triple-Play service (by defining all possible clients of some channel as an "ALL" group), while on-board IGMP/MLD snooping controls the actual current subset of those who want to see the channel NOW, pruning the traffic from paths where nobody wants to receive it.
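A sketch of that division of labour, with invented port numbers: the replication set for a channel is the intersection of the OpenFlow-defined "ALL" group and the IGMP/MLD-snooped current membership:

    # OpenFlow defines who MAY receive a channel; IGMP/MLD snooping tracks
    # who wants it NOW; the switch replicates only to the intersection.
    of_all_group = {1, 2, 5, 7}   # OpenFlow "ALL" group: allowed subscriber ports
    igmp_joined  = {2, 7, 9}      # snooping: ports with a current join for the channel

    def replication_ports(allowed: set, joined: set) -> set:
        return allowed & joined

    assert replication_ports(of_all_group, igmp_joined) == {2, 7}
    # Port 9 joined but is not allowed; ports 1 and 5 are allowed but have
    # no current viewer, so the stream is pruned from those paths.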