Yujun Zhang, ZTE Corporation, Carlos Goncalves, NEC
Fault management allows operations teams to monitor, detect, isolate and automatically recover from faults. With an efficient fault management system, countermeasures can negate the effects of deployment faults, avoiding bad user experiences and violations of service-level agreements (SLAs). The OPNFV Doctor project has been developing fault management features that increase the resiliency of cloud-based mobile platforms and provide system integration.
The OPNFV Doctor team continues to improve its framework, making fault management not only more reliable but also fast enough to satisfy telco requirements. The 4G mobile system demonstrated at the OpenStack Summit Barcelona keynote already featured double-digit-millisecond fault notification. The team has identified scalability issues within and between the relevant OpenStack projects, as well as in conjunction with other open-source software. We will share performance figures and show how we continuously profile the framework and red-flag unexpected results (e.g. performance regressions). Finally, we will present solutions to make the overall OpenStack-based fault management framework even faster.
3. Speakers
Carlos Gonçalves
Software Specialist on the 5G Networks team at NEC Laboratories Europe in Heidelberg, Germany. He works in the areas of Network Functions Virtualization and Carrier-Cloud Operation & Management.
Yujun Zhang
NFV System Engineer at ZTE Corporation. He is the current PTL of QTIP in OPNFV and the creator of MitmStack in OpenStack. His main interests are performance testing, analysis and tuning.
4. Doctor project introduction
Doctor is a fault management and maintenance project that develops and realizes the consequent implementation for the OPNFV reference platform.
● Goals
○ build fault management and maintenance framework
■ high availability of Network Services
■ immediate notification of unavailability
○ requirement survey
○ development of missing features
● Scope: NFVI, VIM
5. Role of QTIP in the collaboration
QTIP is the project for "Platform Performance Benchmarking"
● Reveal details behind a simple indicator
● Benchmarking across various testing environments and conditions
6. What to expect to learn
● How you can enable fast fault mitigation from a rich set
of monitoring data sources
● How to speed up delivery of NFVI failure events to the user
● How to leverage a performance profiler to find the
bottleneck
11. Notification strategies: pros and cons
Conservative
+ Cloud resource states are always up to date
- Takes longer to report the alarm out to consumers
Shortcut
+ Faster notification to the consumer
- Cloud resource states could still be out of sync by the time the consumer processes the alarm notification
Consumer: user-side manager; consumer of the interfaces produced by the VIM; VNFM, NFV-O or Orchestrator in ETSI NFV terminology
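The trade-off between the two strategies comes down to the ordering of two operations. The sketch below illustrates this; the names (`update_resource_state`, `notify_consumer`) are hypothetical stand-ins for VIM internals, not real OpenStack APIs:

```python
import time

# Hypothetical stand-ins for VIM internals; not real OpenStack APIs.
def update_resource_state(log, host):
    time.sleep(0.002)                    # simulate a slow state-sync round-trip
    log.append(("state-updated", host))

def notify_consumer(log, host):
    log.append(("alarm-sent", host))

def conservative(log, host):
    # Sync the cloud resource state first, then report the alarm:
    # state is always up to date, but the consumer learns about the
    # fault only after the (slow) sync completes.
    update_resource_state(log, host)
    notify_consumer(log, host)

def shortcut(log, host):
    # Report the alarm immediately and sync state afterwards:
    # faster notification, but state may still be stale when the
    # consumer processes it.
    notify_consumer(log, host)
    update_resource_state(log, host)
```

In both cases the same two operations happen; only their order differs, which is why the shortcut strategy shortens time-to-notification without reducing total work.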
13. Notification times comparison (2/3)
Same deployment + Congress with notification capabilities (draft) and parallel execution driver support (cherry-picked from master)
14. Notification times comparison (3/3)
- The sample inspector outperforms Congress out of the box
- Congress is much richer in features, supporting dynamic user-defined policies and execution of actions on most OpenStack cloud resources.
15. Issues and challenges
● Passed on Pod-A, but poor results on Pod-B
○ Why such a difference?
● Performance degradation when scaling up to more servers
○ What is the bottleneck?
● Distributed services
○ How to collect data from different nodes?
24. Now we know why
What’s behind `nova reset-state`
25. How osprofiler works
The implementation is quite simple: the profiler keeps one stack containing the IDs of all trace points. E.g.:
profiler.start("parent_point") # trace_stack.push(<new_uuid>)
                               # send to collector -> trace_stack[-2:]
profiler.start("child_point")  # trace_stack.push(<new_uuid>)
                               # send to collector -> trace_stack[-2:]
profiler.stop()                # send to collector -> trace_stack[-2:]
                               # trace_stack.pop()
profiler.stop()                # send to collector -> trace_stack[-2:]
                               # trace_stack.pop()
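The stack mechanism above can be mimicked in a few lines of plain Python. This is a toy sketch for illustration only, not the real osprofiler implementation:

```python
import uuid

class MiniProfiler:
    """Toy model of the trace-point stack described above
    (illustrative, not the real osprofiler API)."""

    def __init__(self, collector):
        self.collector = collector       # e.g. a list standing in for the real backend
        self.trace_stack = ["base"]      # base id anchors the whole trace

    def start(self, name):
        self.trace_stack.append(str(uuid.uuid4()))
        # the last two stack entries give the parent id and this point's id
        self.collector.append(("start", name) + tuple(self.trace_stack[-2:]))

    def stop(self):
        self.collector.append(("stop",) + tuple(self.trace_stack[-2:]))
        self.trace_stack.pop()

events = []
p = MiniProfiler(events)
p.start("parent_point")
p.start("child_point")
p.stop()                                 # closes child_point
p.stop()                                 # closes parent_point
```

Because every event carries its parent's id, the collector can rebuild the full call tree afterwards, which is exactly what makes the nested trace points useful.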
26. Supported vs Needed
osprofiler doctor
CINDER
HEAT
KEYSTONE
NOVA
NEUTRON
GLANCE
TROVE
SENLIN
MAGNUM
CEILOMETER
VITRAGE
CONGRESS
AODH
27. Recommended to track by default
All HTTP calls - helps to get information about which HTTP requests were made, the duration of each call (latency of the service), and the projects involved in a request.
All RPC calls - helps to understand the duration of the parts of a request handled by different services of one project. This information is essential for understanding which service produces the bottleneck.
All DB API calls - in some cases a slow DB query can be the bottleneck, so it's quite useful to track how much time a request spends in the DB layer.
All driver calls - in the case of nova, cinder and others we have vendor drivers; tracking the duration of driver calls shows where time is spent in vendor-specific code.
All SQL requests (turned off by default, because it produces a lot of traffic)
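A generic way to capture the per-layer durations listed above is a timing decorator wrapped around each layer's calls. Everything below (`tracked`, `fetch_instance`) is a hypothetical sketch of the idea, not how OpenStack services actually hook osprofiler in:

```python
import functools
import time

profile = []          # collected (layer, call name, duration) samples

def tracked(layer):
    """Record how long a call spends in a given layer (HTTP, RPC, DB, driver)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                return fn(*args, **kwargs)
            finally:
                # record the sample even if the call raises
                profile.append((layer, fn.__name__, time.monotonic() - start))
        return wrapper
    return decorator

@tracked("db")
def fetch_instance(instance_id):
    time.sleep(0.001)                # stand-in for a slow SQL query
    return {"id": instance_id}
```

Aggregating such samples by layer is what lets you say whether a slow request spent its time in HTTP, RPC, the DB layer, or a vendor driver.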
28. Challenges in the doctor use case
Doctor use case (ASYNCHRONOUS)
● Composed of several consecutive steps
● Relies on events for fast notification
● Starts at the monitor and ends at the consumer
● Multi-threaded in the inspector
OSProfiler limitations (SYNCHRONOUS)
● Designed for profiling ONE request
● Event notifications are not tracked
● Must start and end in the same thread
● Multi-threading is not supported
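One conceivable way around the same-thread limitation is to hand the trace context to the child thread explicitly rather than relying on thread-local state. The sketch below illustrates the idea only; it is not a feature osprofiler provides, and the field names are assumptions:

```python
import threading
import uuid

def inspector_worker(ctx, spans):
    # The child thread receives the parent's trace context explicitly,
    # so its span can still be linked into the same trace tree.
    spans.append({"base_id": ctx["base_id"],
                  "parent_id": ctx["trace_id"],
                  "trace_id": str(uuid.uuid4())})

# The parent (e.g. the monitor side) opens the trace...
ctx = {"base_id": str(uuid.uuid4()), "trace_id": str(uuid.uuid4())}
spans = []

# ...and passes the context to a worker thread, as a
# multi-threaded inspector would have to.
t = threading.Thread(target=inspector_worker, args=(ctx, spans))
t.start()
t.join()
```

Keeping the `base_id` constant across threads is what ties all spans of one Doctor notification chain together; only the parent/child ids change per span.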
29. Gaps identified upstream
osprofiler features
[ ] multi-thread support
osprofiler support missing in OpenStack services
[ ] alarming: aodh
[ ] inspector: vitrage
[ ] inspector: congress
31. Roadmap: Doctor-QTIP collaboration
● [doctor] Integrate osprofiler in CI jobs
● [doctor] Propose changes upstream to fill the gaps
○ osprofiler enhancements
○ aodh support
○ congress support
○ vitrage support
● [qtip] Benchmark notification performance
○ Collector backend for profiler data
○ Dashboard for the performance profile of the last build