Using OS performance tools and basic alternatives to troubleshoot production database issues
This document discusses using Linux performance tools such as pidstat and ps, and tracing tools such as perf, systemtap, and dtrace, to troubleshoot complex database problems that may involve the operating system, hardware, or network layers. It walks through examples of diagnosing memory fragmentation, I/O problems, and network congestion, and presents a methodology of reproducing issues, analyzing tool output, identifying root causes, and developing remediations.
1. Using the big guns
Advanced OS performance tools and their more basic alternatives for
tackling production issues
Nikolai Savvinov, Deutsche Bank
2. About me
• worked with databases for 19 years (started with MySQL)
• working with Oracle database since approx. 2005
• Oracle database engineer, Deutsche Bank Moscow/London 2007-now
• specializing in database performance/internals for the last 10 years
• blog: savvinov.com, twitter: @oradiag
3. About this presentation
• 50/50 overview of tools / practical use cases
• All examples based on Oracle Linux 6 (kernel 4.1.12),
Exadata X6-2 EF 3-node cluster
• All opinions are my own and not of my employer (or Oracle corp)
• Caveat emptor
4. Why use OS performance tools?
• Database instruments things that are expected to be slow
• Also, bugs and blind spots
• Database doesn’t exist in a vacuum: there isn’t always a clear-cut separation between the application/database/OS/hardware layers
• Helps to better understand the Unix/storage/network teams when jointly troubleshooting issues
5. What kinds of production database issues can be solved with OS performance tools
• Memory issues
• low-level memory leaks
• memory fragmentation
• NUMA
• swapping
• Kernel-level locking
• I/O issues
• e.g. suboptimal low-level I/O settings
• Filesystem issues
• e.g. too many files in a directory because of a bug leading to excessive tracing
• Network issues
• e.g. poor TCP throughput due to congestion events
7. Basic process-level tools
• pidstat
• -u for CPU (default)
• -d for disk I/O
• -r for memory
• ps
• -e to output all processes
• -o to pick fields you like
• e.g. wchan, state, rss
• most useful when used “ASH-style” in ExaWatcher / OSWatcher
• especially when combined with “OEM-style” visualization
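A minimal sketch of how these could be invoked (PID 1234 and the 5-second interval are placeholders):

  # one sample every 5 seconds for a single process: CPU, disk I/O, memory
  pidstat -u -p 1234 5
  pidstat -d -p 1234 5
  pidstat -r -p 1234 5

  # "ASH-style" snapshot of all processes with hand-picked fields
  ps -eo pid,state,wchan:30,rss,comm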
8. WCHAN
• WCHAN: the outermost system call where the process is waiting
• Zero-overhead stack profiling tool that is always on (w/ OSWatcher)
• Low frequency (every few seconds)
• Sometimes doesn’t go all the way for some reason
• Off CPU only
• Kernel-space only
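A sketch of ad-hoc WCHAN sampling in the same spirit as OSWatcher (interval and field width are arbitrary):

  # append a timestamped WCHAN snapshot of all processes every few seconds
  while true; do
    date
    ps -eo pid,state,wchan:35,comm
    sleep 5
  done >> wchan.log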
9. WCHAN example
• Removed state=‘S’ (interruptible sleep)
• cma_acquire_dev is biggest
• CMA = contiguous memory allocation
• called inside rdma_bind_addr
• in drivers/infiniband/core/cma.c
• RDMA is core IB technology
• Exafusion relies on RDMA
• RDMA is not NUMA-friendly
10. DIY-profilers
• debugger command dumping a stack with a loop around it
• not always stable/safe/predictable
• pstack (Tanel Poder’s “poor man’s profiler”)
• /proc/<pid>/stack (Luca Canali’s kstacksampler)
• oradebug short_stack
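A kstacksampler-style loop as a minimal sketch (PID 1234 is a placeholder; reading /proc/<pid>/stack typically requires root and shows the kernel-space stack only):

  # take 100 kernel stack samples, 0.1 s apart
  for i in $(seq 1 100); do
    echo "--- sample $i ---"
    cat /proc/1234/stack
    sleep 0.1
  done > stacks.txt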
11. Profilers/tracers in Linux
• perf
• systemtap
• dtrace for Oracle Linux
• ftrace
• lttng
https://sourceware.org/systemtap/wiki/SystemtapDtraceComparison
http://www.brendangregg.com/blog/2015-07-08/choosing-a-linux-tracer.html
12. Scope of profiling tools
• The output from a profiler does not always represent wall clock time
• The standard (on-CPU) profiling is only valid for CPU-intensive loads
• Off-CPU profiling can be tricky (e.g. perf requires CONFIG_SCHEDSTATS=y, which is not set in some Linux distributions); DIY profilers or WCHAN can help
• User-space vs kernel-space: a profiler can only be giving you one half
of the picture, and not necessarily the one you need
• Be sure to pick the right profiler for the task
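For example, whether the running kernel has scheduler statistics enabled can be checked as follows (a sketch; the config file location varies between distributions):

  grep CONFIG_SCHEDSTATS /boot/config-$(uname -r)
  # or, where /proc/config.gz is exposed:
  zgrep CONFIG_SCHEDSTATS /proc/config.gz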
13. Safety considerations
• ptrace-based tools (strace, gdb, pstack etc.) are less safe
• systemtap had some teething problems in early days,
considered relatively safe now, but still has issues
• Oracle Linux dtrace, perf and bcc(bpf) are probably the safest
• It’s a good idea to build a large arsenal of tools
• Use UAT as much as possible: even if the issue doesn’t reproduce in its entirety, some aspects of it may still be reproducible
• If the problem can’t be reproduced on UAT, one can try to reproduce it as
isolated mockup activity on production – so if tracing crashes it, no biggie
• Balance of risk: side effects of diagnostics vs issue going undiagnosed
15. Step 1: high-level picture
• Problems started shortly after 18c upgrade
• The main symptom experienced by users was connection delays
• The obvious things to check were listener, cluster database alert and
kernel logs
• On the system level, somewhat elevated sys CPU was noticed
• WCHAN analysis didn’t reveal any interesting waits, but showed extended periods of busy CPU for some of the listeners
• pidstat confirmed there were periods of 100% sys CPU for listener
processes
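A sketch of how this could be confirmed (tnslsnr is the Oracle listener process name; pgrep -d, builds a comma-separated PID list for pidstat):

  # CPU usage of all listener processes, sampled every 5 seconds
  pidstat -u -p $(pgrep -d, tnslsnr) 5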
16. Step 2: getting stacks
• We know what processes we want
• We are interested in on-CPU samples
• We are interested in kernel-space stacks
• Two obvious choices: /proc/<pid>/stack based sampling or
“proper” stack profiling using perf
• Both are sufficiently safe, but perf can cause noticeable overhead, so
we started with kstacksampler, but then also used perf
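A sketch of both approaches (PID 1234 and the 30-second window are placeholders):

  # kstacksampler-style: repeatedly read the kernel stack (see the loop earlier)
  cat /proc/1234/stack

  # "proper" stack profiling: record call graphs for 30 seconds, then summarize
  perf record -g -p 1234 -- sleep 30
  perf report --stdio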
18. Step 4: make sense of the results
• Identify biggest branch(es)
• Identify stack structure, e.g. in this example
• Oracle Network Session layer calls
• Oracle Network Transport calls
• VFS syscalls
• TCP syscalls
• page allocation
• direct compaction
• page migration
• Identify key elements
• Read relevant parts of documentation
• Look at the source code
• Read source comments + git blame/history
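One practical way to spot the biggest branches is to fold the perf samples into a flame graph, e.g. with Brendan Gregg's FlameGraph scripts (a sketch, assuming a perf.data from the previous step and the scripts downloaded locally):

  perf script > out.stacks
  ./stackcollapse-perf.pl out.stacks | ./flamegraph.pl > out.svg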
19. Sidenote: compaction and fragmentation
• Compaction is an algorithm for memory defragmentation
• Normally kernel shouldn’t care if memory is fragmented
• Some pieces do, however (like device drivers)
• Apparently, TCP implementation also relies on contiguous allocations
• A chunk is 2^N pages (of 4 kB each), where N is the order
• When initial allocation attempt fails, there are a number of possible
fallback strategies (depending on GFP flags)
• One common scenario is direct compaction
• While doing compaction, the process will be unresponsive
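Fragmentation and direct-compaction activity can be observed through procfs, e.g. (a sketch; counter names vary slightly between kernel versions):

  # free chunks per memory zone, one column per order (0..10)
  cat /proc/buddyinfo

  # direct compaction counters such as compact_stall / compact_fail
  grep compact /proc/vmstat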
23. Summary
• OS tools can be very useful or even necessary for troubleshooting
complex cluster or database issues
• Much can be done with basic risk-free OS tools like ps
• Some tracing/profiling tools are safer than others
• Low-level OS tools become safer over time, but can still carry risk
• There are ways to minimize the risk
• Weigh the risk of side effects against the risk of not solving the issue
24. Credits
• Brendan Gregg – Linux performance expert
• Tanel Poder, Luca Canali, Frits Hoogland, Andrey Nikolaev,
Alexander Anokhin – pioneered use of OS low-level tools in Oracle
troubleshooting
• Thanks to the UKOUG organizers for the opportunity!
28. Systemtap safety
“In practice, there are several weak points in systemtap and the
underlying kprobes system at the time of writing. Putting probes
indiscriminately into unusually sensitive parts of the kernel (low level
context switching, interrupt dispatching) has reportedly caused crashes
in the past. We are fixing these bugs as they are found, and
constructing a probe point “blacklist”, but it is not complete”
Frank Ch. Eigler, Systemtap tutorial, November 2019
https://sourceware.org/systemtap/tutorial.pdf
29. Bcc (bpf) safety
It is unlikely that the programs should cause the kernel to crash, loop or
become unresponsive because they run in a safe virtual machine in the
kernel.
32. Tcpdump for network performance
• SQL*Net tracing on client/server side doesn’t always reveal problem
• Network-side metrics don’t always reveal problem
• Various pings almost never reveal the problem
• Many TCP performance problems have to do with congestion control
• A variety of tools for analyzing the dumps, e.g. Wireshark
• Can dump to ASCII and use your own tools (see the capture sketch below)
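A capture sketch (the interface name, host name and the default SQL*Net port 1521 are assumptions for illustration):

  # full packets, written to a file for offline analysis in Wireshark etc.
  tcpdump -i eth0 -s 0 -w sqlnet.pcap host standby-host and port 1521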
33. Our case
• Log file sync delays, sometimes spiking to tens of seconds
• Production synchronously replicated to standby via DataGuard
• Synchronicity was essential (Max Availability)
• Pings show nothing
• netops say the network is fine
34. Analysis
• We did a tcpdump capture at both ends
• The capture shows the congestion window (bytes in flight)
• It was shrinking in response to congestion events
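A sketch of extracting bytes in flight from such a capture with tshark (tcp.analysis.bytes_in_flight is Wireshark's computed TCP analysis field; the capture file name matches the earlier sketch):

  tshark -r sqlnet.pcap -T fields \
    -e frame.time_relative -e tcp.analysis.bytes_in_flight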
35. Remediation
• Remediation: netops removed bottlenecks, optimized QoS,
top users working on improving colocality of their estate
• Monitoring/alerting: how do we define thresholds for packet loss?
• Zero-loss networks are expensive
• In a non zero-loss network, what level of packet loss is acceptable?
• Relationship between throughput and packet loss in TCP was approximated
by Mathis in 1997
• throughput ≤ MSS / (latency × √(packet loss))
• Assuming 1 ms latency, this meant 1.5/√(packet loss) MB/s, or 0.25% packet loss for 30 MB/s
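As a worked check of the 0.25% figure (MSS = 1500 bytes is an assumption implied by the 1.5 MB/s factor):

  MSS / latency = 1500 bytes / 1 ms = 1.5 MB/s
  30 MB/s = 1.5 / √p MB/s  ⇒  √p = 0.05  ⇒  p = 0.0025 = 0.25%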
36. Iosnoop
• gives a high-resolution picture of I/O usage (a low-res picture can be obtained from iotop)
• ftrace-based
• reported safe
• observed high performance overhead
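A usage sketch, assuming the iosnoop script from Brendan Gregg's perf-tools is downloaded locally:

  ./iosnoop -ts 10        # all block I/O for 10 seconds, with start/end timestamps
  ./iosnoop -p 1234 10    # I/O issued by one process (PID 1234 is a placeholder)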
37. Our case
• General slowness on one of the nodes during certain periods
• Nothing helpful in AWR/ASH
• ExaWatcher iostat showed I/O spikes on another node
• High “reliable message” waits on that other node
• iotop didn’t reveal the culprit
38. Analysis & remediation
• iosnoop told us the high I/O was from admin filesystem housekeeping
• housekeeping had too much work due to excessive tracing
• processes were slow due to slow writes to trace files
• slowness propagated to another node via inter-node communication (“reliable message”)
• Remediation: excessive tracing patched, housekeeping job optimized, old files moved manually, scheduling clash resolved