This document summarizes a presentation on the design and implementation of a distributed Mobility Management Entity (MME) on OpenStack. Key points:
- The authors implemented a 3-tier architecture for the MME, with a front-end node that maintains the 3GPP interfaces and balances requests, stateless worker nodes that execute the procedures, and a Redis database that stores UE contexts.
- Experimental results showed the distributed MME had higher attach latency than a standalone MME but provided resilience. Scaling tests demonstrated the ability to handle increased load by adding worker nodes.
- Future work includes evaluating different Redis persistence policies and performance in a hybrid cloud environment.
1. Design and Implementation of a Distributed Mobility Management Entity on OpenStack
Gopika Premsankar, Kimmo Ahokas, Sakari Luukkainen
PhD Consortium, CloudCom 2015
December 3, 2015
2. Agenda
• Introduction
  – Motivation and contribution
• Implementation
  – Architecture choices
  – 1:N mapping / 3-tier architecture
  – Testbed
• Results
• Conclusion and future work
5. Motivation and contribution
• How to harness cloud computing benefits?
• New architecture for MME
• Build resilience into the architecture
6. Architecture choices for MME
[Figure: architecture choices. A standalone MME storing the UE context on local storage (1:1 mapping) versus the proposed 3-tier architecture (1:N mapping) with a front end, worker nodes, and a state database that stores the UE context.]
8. Functions of front end
• Maintain 3GPP interfaces
  – How to identify the correct node?
• Balance requests to workers
  – Round-robin balancing (a sketch follows below)
• Initiate creation or deletion of worker nodes
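The slides do not show the front end's dispatch code; the following is a minimal sketch of round-robin balancing in C, assuming the front end keeps an array of worker handles. The worker_pool and pick_worker names are illustrative, not from the paper.

    #include <stddef.h>

    /* Hypothetical worker handle; a real front end would hold the
       worker's transport endpoint here. Illustrative only. */
    struct worker {
        int sockfd;              /* connection to the worker node */
    };

    struct worker_pool {
        struct worker *workers;  /* worker handles */
        size_t count;            /* current number of worker nodes */
        size_t next;             /* index of the next worker to use */
    };

    /* Round-robin selection: each new request goes to the next
       worker in the pool, wrapping around at the end. */
    static struct worker *pick_worker(struct worker_pool *pool)
    {
        if (pool->count == 0)
            return NULL;
        struct worker *w = &pool->workers[pool->next];
        pool->next = (pool->next + 1) % pool->count;
        return w;
    }

Because the pool holds only an array and an index, workers can be added or removed at runtime by updating count, which matches the front end's role of creating and deleting worker nodes.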
9. Functions of worker
• Implements the actual processing logic
• Procedures of interest
  – Attach
  – Detach
• Stateless operation
  – When to store UE context? After the call flow is complete (see the sketch below)
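The slides say the worker writes the UE context only after the call flow completes; below is a minimal sketch of such a write using the hiredis C client. The "ue:<IMSI>" key scheme and the pre-serialized context buffer are assumptions, not details from the paper, and cluster redirection handling is elided.

    #include <stdio.h>
    #include <stddef.h>
    #include <hiredis/hiredis.h>

    /* Store the serialized UE context once the call flow has
       completed, keeping the worker stateless between procedures. */
    static int store_ue_context(redisContext *c, unsigned long long imsi,
                                const void *blob, size_t blob_len)
    {
        char key[32];
        snprintf(key, sizeof(key), "ue:%llu", imsi);

        /* %b sends a binary-safe buffer (pointer + length) */
        redisReply *reply = redisCommand(c, "SET %s %b",
                                         key, blob, blob_len);
        if (reply == NULL)
            return -1;                  /* connection-level error */
        int ok = (reply->type == REDIS_REPLY_STATUS);
        freeReplyObject(reply);
        return ok ? 0 : -1;
    }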
10. State database
• Redis cluster
  – Data sharded across master nodes
  – Very low latency (in-memory data)
  – High availability
• Different configurations possible
  – Tradeoff between persistence of data and latency (see the config sketch below)
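The slide leaves the configurations implicit; for illustration, Redis persistence is tuned in redis.conf with directives like the ones below. Logging every write is the most durable and the slowest, while snapshotting alone is fastest but can lose the most recent UE contexts on a crash. The values shown are common defaults, not the paper's settings.

    # RDB snapshotting: dump to disk if >= 1 key changed in 900 s.
    # Fast, but a crash loses all writes since the last snapshot.
    save 900 1

    # Append-only file: log every write for durability.
    appendonly yes

    # fsync policy: "always" is safest but slowest, "everysec"
    # loses at most ~1 s of writes, "no" defers to the OS.
    appendfsync everysec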
15. Attach latency
                     Average latency   95% confidence interval
  Original MME       8.399 ms          0.563
  Distributed MME    12.782 ms         0.208
• Measured on eNodeB
• Latency = time between sending Attach Request & receiving Attach Accept (a timing sketch follows below)
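The slides do not show the measurement code; a minimal sketch of how the eNodeB emulator could time this interval in C, with send_attach_request and wait_for_attach_accept as hypothetical stubs standing in for its actual message handling:

    #include <stdio.h>
    #include <time.h>

    /* Stubs standing in for the eNodeB's real S1AP/NAS code. */
    static void send_attach_request(void)    { /* send elided */ }
    static void wait_for_attach_accept(void) { /* recv elided */ }

    int main(void)
    {
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        send_attach_request();        /* Attach Request goes out */
        wait_for_attach_accept();     /* block until Attach Accept */
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ms = (end.tv_sec - start.tv_sec) * 1e3 +
                    (end.tv_nsec - start.tv_nsec) / 1e6;
        printf("attach latency: %.3f ms\n", ms);
        return 0;
    }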
16. Impact of placement on attach latency
  Placement configuration of worker & FE               Average latency   95% confidence interval
  On different OpenStack clouds                        12.914 ms         0.222
  On same compute host in same OpenStack cloud         12.368 ms         0.505
  On different compute hosts in same OpenStack cloud   13.065 ms         0.288
18. Time taken to retrieve UE context
• Measured on MME
• On distributed MME – includes time to send a request to & receive a response from the Redis server
• On original MME – time to query local storage
  – uthash hash table for C structures (a lookup sketch follows below)

                     Average latency   95% confidence interval
  Original MME       20.700 µs         0.675
  Distributed MME    1256.724 µs       18.028
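uthash is a header-only hash table for C structures; a minimal sketch of the kind of local UE-context lookup the original MME could use, keyed by IMSI. The struct fields are illustrative, not taken from the paper.

    #include "uthash.h"

    /* Illustrative UE context record. */
    struct ue_context {
        unsigned long long imsi;   /* hash key */
        int mme_ue_s1ap_id;        /* example payload field */
        UT_hash_handle hh;         /* makes this struct hashable */
    };

    static struct ue_context *contexts = NULL;

    static void add_context(struct ue_context *ue)
    {
        /* key is the imsi field; keylen covers its full width */
        HASH_ADD(hh, contexts, imsi, sizeof(unsigned long long), ue);
    }

    static struct ue_context *find_context(unsigned long long imsi)
    {
        struct ue_context *ue = NULL;
        HASH_FIND(hh, contexts, &imsi, sizeof(unsigned long long), ue);
        return ue;
    }

An in-process lookup like this avoids any network round trip, which is consistent with the roughly 60x gap between the two averages in the table above.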
21. Conclusion and future work
• Presented a novel 3-tier architecture for a vMME
• Leverages cloud computing benefits in a vEPC
• Future work
  – Evaluate the effect of Redis persistence policies
  – Evaluate performance in a hybrid cloud environment
25. Components of testbed
• Two OpenStack installations
  – Icehouse release 2014.1.3
  – All services on identical blade servers:
    • 2 compute hosts, 1 controller, 1 networking node
    • NFS shared storage

  CPU               2 x Intel Xeon E5-2665 (2.4 GHz, 64-bit, 8 cores, Hyper-Threading enabled)
  RAM               128 GB DDR3 1600 MHz
  Hard disk space   150 GB
  Networking        10GbE interconnect
26. Software components of testbed
• MME components
– FE, Worker on different VMs
– Redis cluster with 3 master nodes, each on different VM
• eNodeB
  – C program that sends the required messages sequentially
• Collocated S-GW and P-GW
– nwEPC - EPC SAE Gateway
27. Characteristics of VMs
• Small flavor
  – VCPU: 1
  – RAM: 2048 MB
  – Disk space: 10 GB
• Medium flavor for Redis
  – VCPU: 2
  – RAM: 4096 MB
  – Disk space: 20 GB