This document discusses load balancing as a service (LBaaS) in OpenStack Havana. It covers:
1. A focus for Havana is supporting multiple load balancing technologies and vendors through LBaaS drivers while maintaining a common tenant API.
2. The proposed architecture separates the LBaaS plugin from the drivers for specific load balancers, so that different load balancing solutions — network services, virtual appliances, and hardware appliances — can be used.
3. Additional topics to be addressed for Havana include evolving the tenant API to support multiple vendors, load balancing across networks through SNAT/DSR, and hierarchical modeling of load balancing configurations.
2. Why should I care?
• Load balancing as a service (LBaaS) is
expected from cloud services targeting critical
applications.
• Load balancers are a crucial part of
– Availability
– Scalability
– Manageability
3. Radware Involvement in
OpenStack
• Radware joined OpenStack in Dec 2011
• Planning of LBaaS for Grizzly and Havana
• Contributor to the Networking/LBaaS
project
4. Agenda
• LBaaS History
• LBaaS in Grizzly
• Focus Areas for Havana
– Multivendor Support
– Tenant API
– Network Topologies
8. Grizzly - Tenant API
• VIP
• Pool
• Pool Member
• Health Monitoring
[Diagram: a VIP fronting a Pool; the Pool contains Pool Members and has a Health Monitor attached; the VIP and the Pool each sit on a Subnet]
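The four tenant objects above and their relationships can be sketched as a minimal data model. This is an illustrative simplification, not the actual Quantum resource schema; all class and field names are assumptions.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the Grizzly LBaaS tenant objects.
# Names are simplified, not the real Quantum schema.

@dataclass
class HealthMonitor:
    type: str          # e.g. "HTTP" or "TCP"
    delay: int         # seconds between probes
    timeout: int
    max_retries: int

@dataclass
class Member:
    address: str
    port: int

@dataclass
class Pool:
    subnet_id: str
    protocol: str
    lb_method: str                              # e.g. "ROUND_ROBIN"
    members: list = field(default_factory=list)
    health_monitors: list = field(default_factory=list)

@dataclass
class Vip:
    subnet_id: str
    address: str
    port: int
    pool: Pool

# A VIP fronts exactly one Pool; Members and Health Monitors hang off the Pool.
pool = Pool(subnet_id="subnet-1", protocol="HTTP", lb_method="ROUND_ROBIN")
pool.members.append(Member("10.0.0.10", 80))
pool.health_monitors.append(HealthMonitor("HTTP", delay=5, timeout=3, max_retries=2))
vip = Vip(subnet_id="subnet-1", address="10.0.0.100", port=80, pool=pool)
```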
9. Grizzly - Implementation
[Diagram: the LBaaS plugin and its callback run in the Quantum Server; the LBaaS Agent runs on the Network Node and manages multiple HA Proxy processes]
10. Notes
• One HA Proxy process per VIP
• VIP and Pool Members must be on the same network/subnet
• NAT only
• The model becomes actionable on the device/instance only when it is completely defined
• Does not support multiple network nodes
• Does not support HA for the service
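The "actionable only when completely defined" note can be sketched as a readiness check: the backend deploys the configuration only once the VIP, Pool, Members, and Health Monitor all exist. This is an illustrative sketch, not the actual Grizzly agent logic.

```python
# Sketch: a model is deployable only when every piece is defined.
# Keys here ("vip", "pool", ...) are illustrative names.

def is_deployable(model):
    return all([model.get("vip"),
                model.get("pool"),
                model.get("members"),
                model.get("health_monitor")])

model = {"vip": "v1", "pool": "p1", "members": ["m1"]}
print(is_deployable(model))     # False: no health monitor yet
model["health_monitor"] = "hm1"
print(is_deployable(model))     # True: the whole model can now be pushed
```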
12. OpenStack/Networking/LBaaS –
Highlights for Havana
• Multiple load balancing technologies and vendors can be
used in parallel
• Service Types as a way to specify the required service (ex:
Platinum, Gold, Silver)
• Solution can be used out of the box with a default open
source load balancer driver
13. Multi Vendor Support
• Vendor/driver selection should be done in the LBaaS plug-in running inside Quantum
– Based on service type
– Based on the decision on how to handle service insertion
• Device provisioning and selection (a.k.a. scheduling) is the responsibility of the driver
– Shared libraries could assist but should not be mandatory (e.g. a scheduling library)
• Should allow different service models
– NS based
– Service VM based
– HW appliance based
– Other
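The selection flow above — the plug-in picks a driver by service type and delegates, with a default open source driver out of the box — can be sketched as follows. All class and method names are illustrative assumptions, not the real plug-in interface.

```python
# Sketch of vendor/driver selection inside the LBaaS plug-in,
# keyed on service type (e.g. Platinum/Gold/Silver).

class LbaasDriver:
    def create_vip(self, plugin, context, vip):
        raise NotImplementedError

class HaproxyNsDriver(LbaasDriver):
    """Stand-in for the default open source driver."""
    def create_vip(self, plugin, context, vip):
        return "haproxy:" + vip["id"]

class VendorHwDriver(LbaasDriver):
    """Stand-in for a vendor HW appliance driver."""
    def create_vip(self, plugin, context, vip):
        return "vendor-hw:" + vip["id"]

class LbaasPlugin:
    def __init__(self):
        # Service type -> driver; the default driver backs the
        # out-of-the-box experience.
        self._drivers = {
            "default": HaproxyNsDriver(),
            "platinum": VendorHwDriver(),
        }

    def create_vip(self, context, vip):
        driver = self._drivers.get(vip.get("service_type", "default"),
                                   self._drivers["default"])
        # Delegate to the driver, passing the plug-in itself so the
        # driver can call back into plug-in-only capabilities.
        return driver.create_vip(self, context, vip)

plugin = LbaasPlugin()
print(plugin.create_vip(None, {"id": "v1", "service_type": "platinum"}))
# prints "vendor-hw:v1"
```

Device provisioning and scheduling would then live entirely inside each driver, with any shared scheduling library being optional.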
14. Proposed Architecture
[Diagram: the LBaaS Plugin sits inside the Quantum Plugin and dispatches to per-vendor drivers, each connected to its own backend:
– HA Proxy NS Driver → HA Proxy NS Agent
– HA Proxy Service VM Driver → HA Proxy Service VM Agent
– Vendor 1 NS Driver → Vendor 1 NS Agent
– Vendor 2 Driver → Vendor 2 LB Fabric Manager
– Vendor 3 HW Appliance Driver → Vendor 3 HW On-Appliance API
Transports between drivers and backends include AMQP, REST, and REST/SOAP]
15. LBaaS Driver
• The Driver API is similar to the LBaaS Plugin API:
the plugin delegates handling of the message to
the driver and passes itself as a parameter
• HA is complex and should be managed by each
vendor according to its needs:
– Allocating QPorts and managing IP address allocation
must be done in the LBaaS Plugin/Driver and not on
an agent - some of the capabilities exist only when
embedded in the Quantum plug-in
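The point about passing the plug-in to the driver can be sketched concretely: port and IP allocation is a plug-in-only capability, so a vendor driver calls back into the plug-in it was handed rather than doing this on an agent. All names here are illustrative, and the "standby port for HA" example is an assumed vendor scheme.

```python
import itertools

# Sketch: the driver receives the plug-in and calls back into it for
# port/IP allocation, which only exists inside the Quantum plug-in.

class QuantumPlugin:
    def __init__(self):
        self._ips = ("10.0.0.%d" % i for i in itertools.count(10))
        self.ports = []

    def create_port(self, context, subnet_id):
        # Only the plug-in can allocate ports/IPs from Quantum's IPAM.
        port = {"subnet_id": subnet_id, "ip": next(self._ips)}
        self.ports.append(port)
        return port

class VendorDriver:
    def create_vip(self, plugin, context, vip):
        # HA scheme is vendor-specific: e.g. allocate a second port
        # for a standby instance. Done here, never in the agent.
        active = plugin.create_port(context, vip["subnet_id"])
        standby = plugin.create_port(context, vip["subnet_id"])
        return active, standby

plugin = QuantumPlugin()
driver = VendorDriver()
active, standby = driver.create_vip(plugin, None, {"subnet_id": "s1"})
```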
16. LBaaS Driver
• Handling async operations
– Message queues between Driver <-> Agent
– Callback threads with an ITC queue
• Connecting physical appliances to the
Quantum network still lacks API
capabilities that would allow, for example,
connecting a VLAN-based appliance to
Quantum via an L2/L3 network gateway
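The async pattern above can be sketched with standard-library queues: the driver enqueues work for the agent (standing in for AMQP) and a callback thread drains an inter-thread (ITC) queue of completions. A minimal sketch with illustrative names:

```python
import queue
import threading

to_agent = queue.Queue()    # driver -> agent (stands in for AMQP)
callbacks = queue.Queue()   # agent -> driver callback thread (ITC queue)
results = []

def agent():
    op = to_agent.get()
    # ...apply the configuration on the device, then report status back
    callbacks.put({"op": op, "status": "ACTIVE"})

def callback_worker():
    msg = callbacks.get()
    results.append(msg)     # the driver would update resource status here

t1 = threading.Thread(target=agent)
t2 = threading.Thread(target=callback_worker)
t1.start(); t2.start()
to_agent.put("create_vip v1")   # the API call returns immediately; work is async
t1.join(); t2.join()
# results now holds the completion reported by the agent
```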
17. Tenant API
• Support multiple vendors at the same time
• How to expose LBaaS vendors' unique
capabilities
• Validate/update the Grizzly Tenant API
18. Remarks on current model
• Health Monitor as a global entity
– The model was derived from vendors who can
reuse a Health Monitor within the boundary of a device
– Managing a Health Monitor over multiple instances
is error prone, since updates should
be done "atomically"
– Options:
• Keep the global Health Monitor definition, but
copy it when it is connected to a Pool
• Manage the Health Monitor on the Pool instead of globally
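The first option ("copy on associate") can be sketched as follows: the global Health Monitor stays a template, and attaching it to a Pool snapshots a private copy, so later edits to the template cannot half-update already-configured devices. Names are illustrative.

```python
import copy

# Global Health Monitor definitions act as templates.
monitors = {"hm-1": {"type": "HTTP", "delay": 5}}

class Pool:
    def __init__(self):
        self.health_monitors = []

    def associate_monitor(self, monitor_id):
        # Deep-copy so the pool owns an independent definition.
        self.health_monitors.append(copy.deepcopy(monitors[monitor_id]))

pool = Pool()
pool.associate_monitor("hm-1")
monitors["hm-1"]["delay"] = 30     # template edited after association
# The pool's copy is unaffected; no atomic multi-instance update needed.
```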
19. Remarks on current model
• Since the model is actionable only when fully
defined, does it make sense to keep managing it
as separate "flat" resources, or should it be
hierarchical under the VIP?
20. Network Topologies
• LB between two networks - the case where the VIP
and the Pool are assigned to different subnets
• Adding SNAT and DSR on top of the current
NAT implementation (an extension to the L3 agent?)
• Can the LB replace the L3 gateway?