2. DPDK: Why is it difficult to debug?
What we envision as a simplified program
What we end up doing
3. DPDK: Where to debug?
• RX to worker stage → since we mostly use RSS for flow id
• Worker → dequeue and enqueue points
• Dequeue packets to TX → identify the port-queue pair against flows
4. DPDK: What to debug?
• How many packets actually survive (enqueued minus dropped) from RX for a given flow?
• On which RX port-queue pair are packets being dropped?
• Which worker has the lowest dequeue rate on a given event queue?
• Which flow has the fewest events enqueued?
• Which port has the highest enqueue rate toward a specific QoS class?
5. Issues: PROCINFO, PDUMP, LTTNG, User PROBE
| Category | DPDK-PROCINFO | DPDK-PDUMP | LTTNG | USER PROBE |
|---|---|---|---|---|
| DPDK | Yes | Yes | No | No |
| User library | No | No | Yes | Yes |
| Selective trace | Limited | No | No | Function points (entry_ or exit_), registers |
| Kernel | No | No | No | Yes |
| Impact on application | No | Yes | Limited | Very limited |
| Impact on OS threads | Yes | Yes | Yes | Yes |
| Selective probe | No | Possible with eBPF filters on packets at rx-tx | No | No |
| Arch & lib independent | No | No | No | Yes |
| Requires separate management thread or process | Yes | Yes | Yes | No |
| Corrupt buffer | No | No | Yes | Need to explore |
8. TOOLS: With STDP
[Diagram: eBPF binaries feed a lookup table and counters, exposed through an API — I. application-specific, II. DPDK]
When: for dynamic debug
How it works:
1. Use DPDK 18.11 (LTS) or above
2. Load eBPF into existing applications
3. Works the same as user-space eBPF
Where:
1. Applications in the field (with const ptr)
2. Applications in a dev-test environment, for dynamic debug
3. When recompiling is not possible, gdb is unavailable, binaries are stripped, or kernel USER_PROBE is disabled
4. When the penalty of writing elaborate if-else debug conditions is too high
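The load step above might look roughly like the following pseudocode sketch. The `rte_bpf_prm` fields, section name, file name, and flags shown are assumptions to be checked against the DPDK `rte_bpf` library documentation, not a definitive implementation:

```
/* Pseudocode: attach an eBPF program at an ethdev RX queue. */
struct rte_bpf_prm prm = {
    .prog_arg = { .type = RTE_BPF_ARG_PTR_MBUF },  /* assumption */
};

/* Load section "filter" from an eBPF ELF object and attach it to
 * port 0 / RX queue 0; JIT-compile if the platform supports it. */
rc = rte_bpf_eth_rx_elf_load(0 /*port*/, 0 /*queue*/, &prm,
                             "debug_probe.o", "filter",
                             RTE_BPF_ETH_F_JIT);
if (rc != 0)
    handle_error(rc);

/* Later, detach without restarting the application. */
rte_bpf_eth_rx_unload(0 /*port*/, 0 /*queue*/);
```

The key property for field debugging is the last call: the probe can be removed at runtime, so no recompile or restart of the application is needed.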
14. Future things to explore
• User-probe-like dynamic tracing.
• CSV import of user event counters into VTune.
• CSV import of user event metadata into VTune.
• Change the eBPF entry function argument from "void *" to "const void *" to prevent data corruption in eBPF byte code.
• More DPDK cases: pipeline, soft-nic, crypto, hqos, CVL metadata.
• More application cases: SPDK, IPsec stack, ADK, VPP, OVS.