This document describes AT&T's Central Office Re-architected as a Datacenter (CORD) initiative. CORD modernizes central offices by rebuilding them as cloud-style datacenters using open source software and white-box switches, giving operators datacenter economics, greater automation, and the agility to deploy new services quickly. The CORD architecture uses a leaf-spine fabric of open networking switches and virtualizes network functions using open source components such as OpenStack and ONOS, simplifying the infrastructure and enabling new types of services.
14. CORD Aims to Deliver
• Economies of a datacenter: infrastructure built with a few commodity building blocks, using open source software and white-box switches
• Agility of a cloud provider: software platforms that enable rapid creation of new services
15. Legacy Central Office
[Diagram: Residence/Enterprise → CPE → ONU → OLT → Ethernet aggregation (ETH AGG) → BNG → Backbone / Core Data Center]
Acronyms:
• CPE – Customer Premises Equipment
• ONU – Optical Network Unit
• OLT – Optical Line Termination
• BNG – Broadband Network Gateway
29. Central Office Re-architected as a Datacenter (CORD): CORD Fabric
Saurav Das, Principal System Architect, Open Networking Foundation
30. CORD: Central Office Re-architected as a Datacenter
[Architecture diagram: subscribers connect through a simple CPE, ONT, and GPON access links to PON OLT MACs at the edge of a leaf-spine fabric (spine and leaf switches) built from commodity hardware; metro/core links exit on the other side. An SDN control plane (ONOS) runs applications including vOLT, vSG, and vRouter; the XOS orchestrator manages the NFVI alongside supporting services (DHCP, LDAP, RADIUS). Control and data planes are separated.]
31. Open-Source Multi-Purpose Leaf-Spine Fabric
• Open source, SDN-based, built from bare-metal white-box switches
• Slow I/O at the access links (PON OLT MACs); fast I/O at the metro/core links
• CORD fabric designed to scale up to 16 racks
• ONOS controller cluster: HA, scales to 16 racks, OpenFlow 1.3, topology discovery, configuration, GUI, CLI, troubleshooting, ISSU
• Fabric control application: addressing, ECMP routing, recovery, interoperability, API support
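The ECMP routing the fabric control application provides can be illustrated with a minimal sketch: hash a flow's 5-tuple and pick one of a leaf's uplink ports, so a given flow always traverses the same spine while distinct flows spread across all uplinks. The port names, tuple fields, and hash function below are illustrative assumptions, not CORD's actual implementation.

```python
# Minimal ECMP sketch: map each flow 5-tuple to one of a leaf's
# uplink ports. A flow always hashes to the same uplink, preserving
# packet order within the flow, while different flows spread across
# all spines. (Illustrative only; port names are made up.)
import zlib

UPLINKS = ["spine1-p1", "spine1-p2", "spine2-p1", "spine2-p2"]

def ecmp_uplink(src_ip, dst_ip, proto, src_port, dst_port):
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    # crc32 stands in for the switch ASIC's hardware hash
    return UPLINKS[zlib.crc32(key) % len(UPLINKS)]

flow = ("10.0.0.1", "10.0.1.9", 6, 40000, 80)
assert ecmp_uplink(*flow) == ecmp_uplink(*flow)  # deterministic per flow
```

In hardware the hash and the uplink group live in the switch ASIC's ECMP group table; the controller only programs group membership.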
32. White Box SDN Switch
Leaf switch (Accton 6712):
• 24 x 40G ports downlink to servers via 4 x 10G breakout DAC
• 8 x 40G ports uplink to different spine switches
• ECMP across all uplink ports
• GE management port
Spine switch (Accton 6712):
• 32 x 40G ports downlink to leaf switches (40G QSFP+/DAC)
• GE management port
Leaf/spine switch software stack (open hardware and software, bottom to top): OCP bare-metal hardware → BRCM ASIC (BRCM SDK API) → OF-DPA (OF-DPA API) → Indigo OF agent → OpenFlow 1.3 to controller. OCP software: ONL, ONIE.
Acronyms: OCP – Open Compute Project; ONL – Open Network Linux; ONIE – Open Network Install Environment; BRCM – Broadcom merchant silicon ASICs; OF-DPA – OpenFlow Datapath Abstraction
33. ONOS Project Atrium Stack
[Diagram: ONOS runs the Peering application, the Fabric Control application, and an OF-DPA driver; Quagga BGP speaks E-BGP to external peers over VLANs x, y, and z. The switch stack is the same as on the previous slide: OCP bare-metal hardware, BRCM ASIC (SDK API), OF-DPA (API), Indigo OF agent, OpenFlow 1.3; OCP software: ONL, ONIE.]
34. CORD Fabric Operation
[Diagram: OLTs face residential subscribers on the access side; fabric I/O faces upstream metro routers; vSGs run on fabric-attached servers.]
Traffic types: L2 bridged; IPv4 unicast / MPLS SR; IPv4 multicast / MPLS SR; QinQ / MPLS PW
35. Fabric Chip Pipeline (Broadcom's OF-DPA)*
Flow tables: Ingress Port Table → VLAN Table → VLAN 1 Table → Termination MAC Table → {Multicast Routing Table | Unicast Routing Table | MPLS Table | Bridging Table | MPLS L2 Port Table} → ACL Policy Table
Group tables: L2 Flood Group, L3 ECMP Group, MPLS Label Groups, L3 Group, and L2 Interface Groups, which resolve to physical ports
Traffic types carried: L2 bridged; IPv4 unicast / MPLS SR; IPv4 multicast / MPLS SR; QinQ / MPLS PW
* Simplified view
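The multi-table flow through this pipeline can be sketched as a toy model: each packet passes the VLAN table, then the Termination MAC table decides between L3 routing and L2 bridging, and the chosen table resolves to a group that selects the output. All table contents, MAC addresses, and group names below are invented for illustration; real OF-DPA has many more tables and actions.

```python
# Toy model of the OF-DPA-style multi-table pipeline (greatly
# simplified): tables are consulted in order, and a lookup resolves
# to a group that ultimately selects an output port.
def pipeline(pkt, routes, macs, router_mac="00:00:00:aa:bb:cc"):
    # VLAN table: untagged traffic without an assigned VLAN is dropped
    if pkt.get("vlan") is None:
        return "drop"
    # Termination MAC table: frames addressed to the router MAC go to
    # L3 (unicast routing table); everything else is L2 bridged
    if pkt["eth_dst"] == router_mac:
        group = routes.get(pkt["ipv4_dst"])  # unicast routing table
    else:
        group = macs.get(pkt["eth_dst"])     # bridging table
    return group if group else "controller"  # ACL/table miss: punt

routes = {"10.1.0.5": "L3-ECMP-group-1"}
macs = {"de:ad:be:ef:00:01": "L2-interface-port3"}
pkt = {"vlan": 100, "eth_dst": "00:00:00:aa:bb:cc", "ipv4_dst": "10.1.0.5"}
print(pipeline(pkt, routes, macs))  # → L3-ECMP-group-1
```

The point of the model: forwarding behavior emerges from which tables a packet matches, which is exactly what the fabric control application programs via OpenFlow 1.3.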
37. Field-Trial Specification
• 2 racks, 2 ToRs (leafs) per rack, 2 spines; servers dual-homed to the ToRs
• Each server: 2 x 10GE (bonded), dual-homed to the ToRs, plus 1 x GE management to a GE L2 switch
• 40G uplinks from leaf to spine switches
• Each rack includes OLTs, servers, and OpenStack compute nodes (running OVS, containers, and VMs)
38. Central Office Re-architected as a Datacenter (CORD): Access and Virtualization Walkthrough
Ali Al-Shabibi, Open Networking Lab
39. Outline
• Hardware and software involved
  – CPE, OLT
  – XOS, OpenStack, ONOS, vOLT OpenFlow agent
• Walkthroughs
  – CPE to OLT to vSG
  – Service composition
40. Access Hardware: CPE
• Simple commodity NetGear device
• Flashed with OpenWrt
• Runs OVS as the dataplane switch (OpenFlow capable)
• Runs 802.1X authentication
• Several design options available here:
  – OpenFlow enabled?
  – Runs a DHCP server?
• Actual CPE for the production environment still TBD
41. Access Hardware: OLT
• One-rack-unit GPON OLT MAC
• 48 PON ports (arranged as 12 OLT chips)
• 6 x 40 Gbps Ethernet ports
• NETCONF to configure power settings, fan speed, etc.
• OpenFlow controllable via an external OF agent
• External software bootstraps the firmware
42. Software: PMC vOLT
• Runs either in a container or a VM
• Exposes an OpenFlow interface north to ONOS
• Manages the OLT via OMCI to the south
• Converts OpenFlow messages into OMCI to provision the OLT
• Enables the OLT to pass 802.1X and IGMP packets to ONOS
  – to implement client/ONU authentication; and
  – to implement IGMP snooping
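The translation role described above can be sketched as a small dispatcher: an OpenFlow-style message arrives from ONOS and is mapped to a corresponding OMCI provisioning action toward the OLT. Both the message shapes and the OpenFlow-to-OMCI mapping below are illustrative inventions, not PMC vOLT's real interface.

```python
# Hedged sketch of vOLT as a protocol translator: OpenFlow-style
# instructions from ONOS become OMCI provisioning actions toward
# the OLT. Message names and fields here are made up.
def translate(of_msg):
    if of_msg["type"] == "flow_mod" and of_msg.get("push_vlan"):
        # Subscriber flow install → provision the ONU's VLAN tagging
        return {"omci": "set_vlan_tagging", "onu": of_msg["port"],
                "c_tag": of_msg["push_vlan"]}
    if of_msg["type"] == "packet_out":
        # e.g. an 802.1X (RADIUS) reply forwarded back toward the ONU
        return {"omci": "forward_frame", "onu": of_msg["port"]}
    return {"omci": "noop"}

msg = {"type": "flow_mod", "port": 7, "push_vlan": 17}
print(translate(msg))  # → {'omci': 'set_vlan_tagging', 'onu': 7, 'c_tag': 17}
```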
43. Software: XOS, ONOS, and OpenStack
• XOS orchestrates both ONOS and OpenStack
• OpenStack is used to spawn VMs and containers
• ONOS (via Neutron) creates virtual networks and connects them together, achieving service chaining
44. CPE Boot and Authentication
[Diagram: the CPE (re)boots; 802.1X traffic travels from the simple CPE through the ONT and GPON access link to the PON OLT MACs and into the fabric, where the vOLT ONOS app works with a RADIUS server to authenticate the subscriber. CORD software stack = XOS + ONOS + OpenStack.]
45. Dataplane Configuration
[Diagram: home network (no VLAN) → CPE (default VLAN 0) → OLT (Q-in-Q)]
• The OLT double-tags packets from the customer
  – The C-tag identifies the customer
  – The S-tag identifies the OLT the customer is connected to
• The OLT also meters customer connections
• The OLT maintains group information to handle multicast traffic
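The Q-in-Q scheme above can be sketched in a few lines: the OLT pushes an outer S-tag (identifying the OLT) and an inner C-tag (identifying the subscriber), and the vSG later pops both. The VLAN IDs and the frame representation are hypothetical; real tagging happens in the OLT hardware on Ethernet headers.

```python
# Sketch of the Q-in-Q double tagging described above. The outer
# S-tag identifies the OLT; the inner C-tag identifies the customer.
# VLAN ID values below are hypothetical.
def push_qinq(frame, olt_s_tag, customer_c_tag):
    """Return the frame with [S-tag, C-tag] pushed (outer tag first)."""
    return {**frame, "vlans": [olt_s_tag, customer_c_tag]}

def pop_qinq(frame):
    """Reverse operation at the vSG: strip and return both tags."""
    s_tag, c_tag = frame["vlans"]
    bare = {k: v for k, v in frame.items() if k != "vlans"}
    return bare, s_tag, c_tag

upstream = push_qinq({"payload": "dhcp-discover"}, olt_s_tag=222, customer_c_tag=17)
# The fabric can forward on the S-tag alone; the vSG uses the C-tag
# to select the right per-subscriber state.
```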
46. Spinning Up a vSG
[Diagram: the same fabric view as slide 44. Authentication has been successful; a vSG is now needed for the subscriber, so the CORD software stack (XOS + ONOS + OpenStack) spins one up on a compute node.]
47. Service Composition
• Services operate in their own virtual networks, isolated using VXLAN overlays
• Services can scale both in terms of compute and networking
• Services are designated by their own service IP
  – Work is load-balanced amongst the service's compute nodes
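One way to picture load balancing behind a single service IP, which also plays well with scaling, is a consistent-hash ring: subscribers map deterministically to a compute node, and adding or removing a node remaps only a fraction of them. This is an illustrative sketch, not CORD's actual mechanism; node names are invented.

```python
# Illustrative consistent-hash sketch for spreading subscribers
# across a service's compute nodes behind one service IP. Adding or
# removing a node remaps only a fraction of subscribers.
import hashlib

def _h(s):
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

def pick_node(src_ip, nodes, vnodes=64):
    # Place each node at several points on a hash ring, then walk the
    # ring clockwise from the subscriber's hash to the next point.
    ring = sorted((_h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
    key = _h(src_ip)
    for point, node in ring:
        if key <= point:
            return node
    return ring[0][1]  # wrap around the ring

nodes = ["vsg-node-1", "vsg-node-2", "vsg-node-3"]
assert pick_node("10.7.0.42", nodes) == pick_node("10.7.0.42", nodes)
```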
48. Access vRouter in the CORD Architecture
[Diagram: an SDN-enabled leaf-spine DC fabric of white-box switches, with dedicated vRouter switches connecting the fabric to the metro network.]
49. vRouter Control Plane
[Diagram: metro routers exchange routes with Quagga's routing protocols; Quagga's FIB Push Manager (FPM) pushes the resulting routes to the vRouter app running on ONOS.]
• FPM is a feature of Quagga that enables it to push routes to external entities
• It is based on the Linux netlink protocol
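A consumer of Quagga's FPM stream, such as the vRouter app, reads netlink route messages framed by a small header: version (1 byte), message type (1 byte), and total length (2 bytes, network order, including the header). That layout follows Quagga's fpm.h as I recall it; verify the constants against your Quagga version before relying on them.

```python
# Sketch of de-framing Quagga's FPM stream. Each frame is a 4-byte
# header (version, msg_type, msg_len in network byte order, length
# including the header) followed by a netlink-encoded route message.
# Header layout assumed from Quagga's fpm.h.
import struct

FPM_HDR = struct.Struct("!BBH")  # version, msg_type, msg_len

def parse_fpm(buf):
    """Yield (msg_type, netlink_payload) for each complete frame."""
    while len(buf) >= FPM_HDR.size:
        version, msg_type, msg_len = FPM_HDR.unpack_from(buf)
        if len(buf) < msg_len:
            break  # partial frame: wait for more bytes from the socket
        yield msg_type, buf[FPM_HDR.size:msg_len]
        buf = buf[msg_len:]

# Two back-to-back frames carrying stand-in payloads:
stream = FPM_HDR.pack(1, 1, 4 + 5) + b"route" + FPM_HDR.pack(1, 1, 4 + 3) + b"del"
print([payload for _, payload in parse_fpm(stream)])  # → [b'route', b'del']
```

In a real deployment the payload bytes would be parsed as netlink `RTM_NEWROUTE`/`RTM_DELROUTE` messages and turned into ONOS FIB updates.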
50. Conclusion
• Understanding of the hardware components involved
• Understanding of the end-to-end traffic flow
• CORD is, at heart, one quite large integration project