Michał Dubiel – TBD
Topic of Presentation: OpenContrail software architecture
Language: Polish
Abstract:
OpenContrail is a complete solution for Software Defined Networking (SDN). Its relatively new approach to network virtualization in data centers uses overlay networking to fully decouple the physical infrastructure from the tenants’ logical configurations.
This presentation describes the software architecture of the system and its functional partitioning. Special emphasis is put on the compute node components: the vRouter kernel module and the vRouter Agent. Selected implementation details are also presented in greater detail, along with an analysis of their impact on the overall system’s scalability and performance.
2. Plan
• Cloud operating system
– Why?
• Network virtualization
– Why it is important
– OpenContrail solution
• OpenContrail architecture
– Goals, assumptions
– Functional partitioning
– Components
3. CLOUD OPERATING SYSTEM
• Compute power
• Storage
• Networking
4. Operating System analogy
• Resources in a typical server
– CPU cores
– Memory
– Storage
– Networking
• Resources in a datacenter
– Hardware machines
– Storage appliances
– Networking equipment
10. Observations
• Majority of network endpoints are virtual
• Virtual networks dominate
• Isolation between them has to be provided
• While using the same physical network
• Automatically
11. Solutions
• VLANs
– Default OpenStack approach
– Limited, not flexible
• Overlay networking
– OpenContrail as a Neutron plugin
– Flexible
– Scalable
12. VLANs
• VMs’ interfaces placed on bridges
– One bridge per virtual network
• Difficult to manage
• 4096 VLAN tags limit
– Can be extended using Shortest Path Bridging
• Physical switches have to contain the VN state
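The 4096-tag ceiling follows from the 12-bit VLAN ID field in the 802.1Q header; for comparison, the 24-bit VXLAN Network Identifier used in overlay networking gives a far larger space. A quick check of the arithmetic:

```python
# 802.1Q carries the VLAN ID in a 12-bit field:
vlan_ids = 2 ** 12
print(vlan_ids)  # 4096 (two values are reserved in practice)

# The VXLAN Network Identifier (VNI) is 24 bits wide:
vxlan_vnis = 2 ** 24
print(vxlan_vnis)  # 16777216
```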
13. Overlay networking
• “Old” technology, new for data-centers
• Physical underlay network
– IP fabric
– No state of the virtual networks
• Virtual overlay network
– Holds state of the virtual networks
– Dynamic tunnels (MPLSoGRE, VXLAN, etc.)
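As a concrete illustration of one of these encapsulations, here is a minimal sketch that builds the 8-byte VXLAN header defined in RFC 7348 (the function name is ours; only the header layout comes from the standard):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348).

    Word 1: flags byte 0x08 (VNI valid) followed by 24 reserved bits.
    Word 2: the 24-bit VNI in the top bits, then one reserved byte.
    """
    assert 0 <= vni < 1 << 24
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(5000)
assert len(hdr) == 8
assert int.from_bytes(hdr[4:7], "big") == 5000  # VNI round-trips
```

The 24-bit VNI is what lifts the 4096-network ceiling of plain VLANs.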
14. VM migration example
[Diagram: Servers 1–3 attached to a physical switch, each hosting three VMs (VM1–VM9) spread across virtual networks 1, 2, and 3. VM9 runs on Server 3, so the overlay packet seen on the physical network is addressed S3 | VM9 | Payload.]
15. VM migration example
[Diagram: the same topology after VM9 has migrated from Server 3 to Server 2. Only the overlay state changes; the packet on the physical network is now addressed S2 | VM9 | Payload.]
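The point of the two migration slides is that a migration only rewrites overlay state: the mapping from VM to hosting server. A toy model (the dictionary and names are illustrative, not OpenContrail’s actual data structures):

```python
# Overlay view: each VM is reached through a tunnel to the physical
# server that currently hosts it. (Illustrative structure only.)
vm_location = {"VM7": "S3", "VM8": "S3", "VM9": "S3"}

def outer_destination(vm: str) -> str:
    """Underlay address placed on the outer header of tunneled traffic."""
    return vm_location[vm]

assert outer_destination("VM9") == "S3"  # packet: S3 | VM9 | Payload

# Migrating VM9 to Server 2 touches one mapping; no physical switch
# needs reconfiguration, since the underlay carries no VN state.
vm_location["VM9"] = "S2"
assert outer_destination("VM9") == "S2"  # packet: S2 | VM9 | Payload
```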
16. Overlay networks advantages
• “Knowledge” about the network only in the software (vRouter)
• Any switch works for IP fabric network
– No configuration
– Only speed matters
– Low price
• OpenContrail implementation is standards-based (MPLS, BGP, VXLAN, etc.)
19. “Think globally, act locally”
• The system is physically distributed
– No single point of failure
– Scalability
– Performance
• Logically centralized control and management
– Simplicity
– Ease of use
22. Configuration node components
• Configuration API Server
– Active/Active mode
– Receives REST API calls
– Publishes configuration to the IF-MAP Server
– Receives configuration from other API Servers
• Discovery Service
– Active/Active mode
– A Registry of all OpenContrail services
– Provides a REST API for publishing and querying services
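To make the publish/query idea concrete, here is a minimal in-memory sketch of such a registry (class and method names are ours; the real Discovery Service exposes this functionality over REST):

```python
from collections import defaultdict

class ServiceRegistry:
    """Toy registry: service type -> list of published endpoints."""

    def __init__(self):
        self._services = defaultdict(list)

    def publish(self, service_type: str, endpoint: str) -> None:
        self._services[service_type].append(endpoint)

    def query(self, service_type: str) -> list:
        return list(self._services[service_type])

reg = ServiceRegistry()
reg.publish("xmpp-server", "10.0.0.1:5269")
reg.publish("xmpp-server", "10.0.0.2:5269")
assert reg.query("xmpp-server") == ["10.0.0.1:5269", "10.0.0.2:5269"]
assert reg.query("dns-server") == []  # nothing published yet
```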
23. Configuration node components (2)
• Schema Transformer
– Active/Backup mode
– Receives high-level configuration from IF-MAP Server
– Transforms high-level constructs (e.g. virtual network) into low-level ones (e.g. routing instance)
• IF-MAP Server
– Active/Active mode
– Publishes system configuration to Control nodes and the Schema Transformer
– All configuration comes from the API Server (both high and low level)
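A sketch of the kind of mapping the Schema Transformer performs; the field names and the route-target numbering below are hypothetical, not the real IF-MAP schema:

```python
def transform_virtual_network(vn: dict) -> dict:
    """Derive a low-level routing-instance record from a high-level
    virtual-network record (illustrative field names only)."""
    return {
        "routing-instance": f"{vn['name']}-ri",
        "route-target": f"target:64512:{vn['id']}",
    }

ri = transform_virtual_network({"name": "blue", "id": 7})
assert ri == {"routing-instance": "blue-ri",
              "route-target": "target:64512:7"}
```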
24. Configuration node components (3)
• Service Monitor
– Active/Backup mode
– Monitors service virtual machines (firewall, analyzer, etc.)
– Calls the Nova API to control VMs
• AMQP Server (RabbitMQ)
– Communication between system components
• Persistent storage (Cassandra)
– Receives and stores system configuration from the Configuration API Server
25. Configuration flow (user)
1. User Request
2. Original API Server
3. RabbitMQ
4. All API Servers
5. Local IF-MAP Server
6. Schema Transformer
26. Configuration flow (transformed)
1. Schema Transformer
2. Configuration API Server
3. RabbitMQ
4. All API Servers
5. Local IF-MAP Server
6. Control nodes and DNS
28. Control node components
• Controller
– Active/Active mode
– Receives configuration from IF-MAP Server
– Exchanges XMPP messages with vRouter Agent
– Federates with other control nodes and physical switches via BGP/Netconf
• DNS Service
– Active/Active mode
– Receives configuration from IF-MAP Server
– Exchanges XMPP messages with vRouter Agent
– Front-end only; the backend uses the host’s native ‘named’
29. Compute node
[Diagram: compute node internals. The Nova Scheduler drives Nova compute, which uses the Nova vif driver and Libvirt to manage QEMU/KVM virtual machines attached via TUN/TAP interfaces. The user-space Contrail Agent talks to the Contrail Control node over TCP (XMPP) and to the kernel-space Contrail vRouter via NetLink, /dev/flow, and the pkt interface.]
30. Compute node components
• vRouter Agent
– Communication via XMPP with the Control node
– Installation of forwarding state into vRouter
– ARP, DHCP, DNS proxy
• vRouter
– Packet forwarding
– Applying flow policies
– Encapsulation, decapsulation
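For a feel of the encapsulation step, here is a simplified sketch of an MPLS-over-GRE wrap such as the vRouter performs on the data path (one label, outer IP header omitted; the function is ours, the header layouts come from RFC 2784 and RFC 3032):

```python
import struct

def mpls_over_gre(label: int, inner: bytes, ttl: int = 64) -> bytes:
    """Prepend a basic GRE header and a single MPLS shim to a packet."""
    assert 0 <= label < 1 << 20
    # GRE (RFC 2784): no flags, version 0, protocol 0x8847 (MPLS unicast)
    gre = struct.pack("!HH", 0, 0x8847)
    # MPLS shim (RFC 3032): 20-bit label, TC=0, bottom-of-stack=1, TTL
    shim = struct.pack("!I", (label << 12) | (1 << 8) | ttl)
    return gre + shim + inner

pkt = mpls_over_gre(16, b"inner-ip-packet")
assert pkt[2:4] == b"\x88\x47"                      # GRE says "MPLS"
assert int.from_bytes(pkt[4:8], "big") >> 12 == 16  # label survives
```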
31. Agent <-> vRouter communication
• NetLink
– Routing entry, next-hop, flow, etc. synchronization
– Uses RCU
• /dev/flow
– Shared memory for flow hash tables
• pkt tap device
– Flow discovery (first packet of a flow)
– ARP, DHCP, DNS proxy
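The flow-discovery path can be pictured with a toy flow table keyed by the 5-tuple; the structure below is illustrative, not the actual shared-memory layout behind /dev/flow:

```python
flow_table = {}  # 5-tuple -> cached verdict

def five_tuple(pkt: dict) -> tuple:
    return (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])

def forward(pkt: dict) -> str:
    key = five_tuple(pkt)
    if key not in flow_table:
        # First packet of the flow: punt to the Agent, which evaluates
        # policy and installs the verdict for subsequent packets.
        flow_table[key] = "allow"
        return "punted-to-agent"
    return flow_table[key]  # fast path: no Agent involvement

pkt = {"src": "10.1.1.1", "dst": "10.1.1.2",
       "sport": 12345, "dport": 80, "proto": 6}
assert forward(pkt) == "punted-to-agent"  # flow discovery
assert forward(pkt) == "allow"            # cached entry hit
```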
33. Analytics node components
• API Server
– REST API for querying analytics
• Collector
– Collects analytics information from all system nodes
• Query Engine
– Map-reduce over collected analytics
– Executes queries
• Rules Engine
– Controls which events are collected by the Collector
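A toy map-reduce over collected events, in the spirit of the Query Engine (the real engine runs distributed queries over the analytics store; the event shape here is invented):

```python
from collections import Counter

events = [
    {"node": "compute-1", "level": "error"},
    {"node": "compute-1", "level": "info"},
    {"node": "compute-2", "level": "error"},
]

# map: emit (node, 1) for every error; reduce: sum counts per node
mapped = ((e["node"], 1) for e in events if e["level"] == "error")
errors_per_node = Counter()
for node, n in mapped:
    errors_per_node[node] += n

assert errors_per_node == {"compute-1": 1, "compute-2": 1}
```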
Goal (present the software architecture, encourage people to help develop it)
Topic
Agenda
Time
Why me
We will get to how OpenContrail integrates with OpenStack later
Let’s consider what problem we face here, in a data center managed by OpenStack
- Top of rack
- There are others, e.g. End-of-row
Who knows the simple solution, VLANs?
Migration example
We have reached the heart of the matter: we know how the system should work (from the networking side); now, how do we realize it?