In this session I will walk through the technical decisions we made while building QuestDB, an open source, Postgres-compatible time-series database, and how we manage to write more than four million rows per second without blocking or slowing down queries.
I will talk about things such as (zero) garbage collection, instruction vectorization using SIMD, rewriting instead of reusing to shave off microseconds, taking advantage of advances in processors, disks and operating systems (for example io_uring support), and the trade-off between user experience and performance when new features are proposed.
How to design a database that can ingest more than four million events per second
1. How to design a database that can ingest more than four million events per second
Javier Ramirez
Head of Developer Relations
@supercoco9
2. Some things I will talk about
● Accept you are not PostgreSQL. You are not for everyone and cannot do everything
● Make the right assumptions
● Take advantage of modern hardware and operating systems
● Obsess about storage
● Reduce/control your dependencies
● Measure-implement-repeat continuously to improve performance
4. We would like to be known for:
● Performance
○ Better performance with smaller machines
● Developer Experience
● Proudly Open Source (Apache 2.0)
9. Try out query performance on open datasets
https://demo.questdb.io/
10. All benchmarks are lies (but they give us a ballpark)
Ingesting over 1.4 million rows per second (using 5 CPU threads)
https://questdb.io/blog/2021/05/10/questdb-release-6-0-tsbs-benchmark/
While running queries scanning over 4 billion rows per second (16 CPU threads)
https://questdb.io/blog/2022/05/26/query-benchmark-questdb-versus-clickhouse-timescale/
Time-series specialised benchmark
https://github.com/timescale/tsbs
16. Do you have a time-series problem? Write patterns
● You mostly insert data. You rarely update or delete individual rows
● It is likely you write data more frequently than you read data
● Since data keeps growing, you will very likely end up with much more data than your typical operational database would be happy with
● Your data origin might experience bursts or lag, but keeping the correct order of events is important to you
● Both ingestion and querying speed are critical for your business
17. Do you have a time-series problem? Read patterns
● Most of your queries are scoped to a time range
● You typically access recent/fresh data rather than older data
● But still want to keep older data around for occasional analytics
● You often need to resample your data for aggregations/analytics
● You often need to align timestamps from multiple data series
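To make the last two read patterns concrete, here is a minimal sketch of what resampling and timestamp alignment look like in QuestDB SQL, driven from Java over plain JDBC. It assumes a local QuestDB with the default pg-wire settings (port 8812, user admin, password quest), the org.postgresql driver on the classpath, and hypothetical trades and quotes tables.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReadPatterns {
    public static void main(String[] args) throws Exception {
        // QuestDB speaks the PostgreSQL wire protocol, so a standard JDBC driver works
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:8812/qdb", "admin", "quest");
             Statement stmt = conn.createStatement()) {

            // Resample raw ticks into hourly averages (hypothetical "trades" table)
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT ts, avg(price) FROM trades SAMPLE BY 1h")) {
                while (rs.next()) {
                    System.out.println(rs.getTimestamp(1) + " " + rs.getDouble(2));
                }
            }

            // Align each trade with the latest quote at or before its timestamp
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT t.ts, t.price, q.bid FROM trades t ASOF JOIN quotes q")) {
                while (rs.next()) {
                    System.out.println(rs.getTimestamp(1) + " " + rs.getDouble(2)
                            + " " + rs.getDouble(3));
                }
            }
        }
    }
}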
18. We can make many assumptions about the shape of the data and usage patterns
19. Data will most often be queried in a continuous range, and recent data will be preferred =>
Store data physically sorted by "designated timestamp" on disk (deal with out-of-order data)
Store data in partitions, so we can skip a lot of data quickly
Aggressive use of prefetching by the file system
Most queries are not a select *, but aggregations on timestamp + a few columns =>
Columnar storage model. Open only the files for the columns the query needs
Most rows will have some sort of non-unique ID (string or numeric) to scope on =>
Special Symbol type: looks like a String, behaves like a Number. Faster and smaller
Some assumptions when reading data
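From the user's point of view, most of those decisions collapse into the table definition. A minimal sketch, assuming a local QuestDB reachable over pg-wire (defaults: port 8812, admin/quest, the org.postgresql driver on the classpath) and a hypothetical readings table; the SYMBOL column, the designated timestamp and the daily partitioning map directly to the points above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateTimeSeriesTable {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:8812/qdb", "admin", "quest");
             Statement stmt = conn.createStatement()) {
            // SYMBOL: repetitive string IDs stored internally as small integers
            // TIMESTAMP(ts): data is kept physically sorted by ts on disk
            // PARTITION BY DAY: whole daily partitions can be skipped or removed
            stmt.execute(
                "CREATE TABLE IF NOT EXISTS readings (" +
                "  device_id SYMBOL," +
                "  temperature DOUBLE," +
                "  ts TIMESTAMP" +
                ") TIMESTAMP(ts) PARTITION BY DAY");
        }
    }
}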
20. Data will be fast and continuous =>
Keep (configurable) buffers to reduce write operations
Slower reads should not slow down writes =>
Shared CPU/thread pool model, with a separate thread for ingestion by default and the possibility to dedicate threads to parsing or other tasks
Stale data is useful, but longer latencies are fine =>
Allow mounting old partitions on slower/cheaper drives
Old data needs to be removed eventually =>
Allow unmounting/deleting partitions (archiving into object storage is on the roadmap)
Some assumptions when writing data
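The last two write-side assumptions turn into plain partition-management SQL. A hedged sketch (the table name and partition dates are hypothetical, and DETACH PARTITION is only available in more recent QuestDB versions):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PartitionLifecycle {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:8812/qdb", "admin", "quest");
             Statement stmt = conn.createStatement()) {
            // Move an old daily partition out of the table; it can later be
            // re-attached or archived somewhere cheaper
            stmt.execute("ALTER TABLE readings DETACH PARTITION LIST '2022-01-01'");
            // Or remove old data outright once it is no longer needed
            stmt.execute("ALTER TABLE readings DROP PARTITION LIST '2022-01-02'");
        }
    }
}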
21. Queries should allow for reasonably complex filters and aggregations =>
Implement SQL, with pg-wire support for PostgreSQL compatibility
Writes should be fast. Also, some users might already be using other TSDBs =>
Implement the Influx Line Protocol (ILP) for speed and compatibility. Provide client libraries, as ILP is not as widely known
Many integrations might come from IoT or simple devices with bash scripting =>
Implement an HTTP endpoint for querying, importing, and exporting data
Operations teams will want to read QuestDB metrics, not stored data =>
Implement health and metrics endpoints, with Prometheus compatibility
Some assumptions when connecting
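To illustrate the ILP point: the protocol is a line-based text format, so even without a client library a row can be pushed over a raw TCP socket. A minimal sketch, assuming QuestDB's default ILP TCP port 9009 and the hypothetical readings table from before; real code would batch many lines per write and handle errors and reconnects.

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class IlpOverTcp {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 9009);
             OutputStream out = socket.getOutputStream()) {
            // ILP line layout: table,symbol_columns other_columns timestamp(nanoseconds)\n
            long nowNanos = System.currentTimeMillis() * 1_000_000L;
            String line = "readings,device_id=sensor-1 temperature=21.5 " + nowNanos + "\n";
            out.write(line.getBytes(StandardCharsets.UTF_8));
            out.flush();
        }
    }
}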
26. Native unsafe memory, shared across languages and the OS (Java, C/C++, Rust*), accessed via mmap
https://db.cs.cmu.edu/mmap-cidr2022/
* https://github.com/questdb/rust-maven-plugin
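QuestDB's column files are memory-mapped and that native memory is visible to the Java, C/C++ and (experimentally) Rust code. As a simplified illustration using only standard Java (the real engine goes through its own unsafe-memory layer rather than MappedByteBuffer), scanning a fixed-width column file might look like this; the file name and layout are assumptions.

import java.nio.ByteOrder;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapColumnRead {
    public static void main(String[] args) throws Exception {
        Path column = Path.of("price.d"); // hypothetical column file of 8-byte doubles
        try (FileChannel ch = FileChannel.open(column, StandardOpenOption.READ)) {
            // The OS pages data in on demand and prefetches sequential reads for us;
            // a real implementation maps large files in chunks to go past the 2GB limit
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            buf.order(ByteOrder.LITTLE_ENDIAN);
            long rows = ch.size() / Double.BYTES;
            double sum = 0;
            for (long i = 0; i < rows; i++) {
                sum += buf.getDouble((int) (i * Double.BYTES));
            }
            System.out.println("avg = " + sum / rows);
        }
    }
}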
27. SIMD vectorization and our own JIT compiler
Single Instruction, Multiple Data (SIMD): parallelizes/vectorizes operations by applying the same instruction to many values held in wide registers. QuestDB only supports it on Intel and AMD processors.
JIT compiler: compiles SQL filter expressions down to native code
EXPLAIN: helps understand execution plans and vectorization
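EXPLAIN is the quickest way to check whether a filter ends up JIT-compiled and vectorized. A hedged sketch over JDBC, assuming a local instance with a trips table like the one on the public demo; the exact plan text varies between QuestDB versions.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ExplainPlan {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:8812/qdb", "admin", "quest");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "EXPLAIN SELECT count(), max(total_amount) FROM trips " +
                 "WHERE total_amount > 150 AND passenger_count = 1")) {
            // Each row is one line of the execution plan; the filter node shows
            // whether the predicate was JIT-compiled
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}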
29. SELECT count(), max(total_amount), avg(total_amount)
FROM trips
WHERE total_amount > 150 AND passenger_count = 1;
(The trips table has 1.6 billion rows and 24 columns, but we only access 2 of them)
You can try it live at https://demo.questdb.io
30. Re-implement the Java std library
● Java classes work with heap memory. We need off-heap
● Java classes tend to do too many things (they are generic) and perform a lot of type conversions
● This includes IO, logging, atomics, collections… reimplemented with zero GC and native memory
● Zero dependencies (except for testing) in our pom.xml
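As a rough illustration of why the standard collections had to be replaced (this is not QuestDB's actual code): an append-only list of longs can live entirely off-heap, so adding or reading values never allocates on the Java heap and gives the GC nothing to trace. The sketch below uses sun.misc.Unsafe purely for demonstration.

import sun.misc.Unsafe;
import java.lang.reflect.Field;

// Minimal off-heap long "list": no boxing, no on-heap arrays, nothing for the GC to trace
public class OffHeapLongList implements AutoCloseable {
    private static final Unsafe UNSAFE = loadUnsafe();
    private long address;
    private long capacity;
    private long size;

    public OffHeapLongList(long initialCapacity) {
        this.capacity = initialCapacity;
        this.address = UNSAFE.allocateMemory(initialCapacity * Long.BYTES);
    }

    public void add(long value) {
        if (size == capacity) {
            capacity *= 2;
            address = UNSAFE.reallocateMemory(address, capacity * Long.BYTES);
        }
        UNSAFE.putLong(address + size * Long.BYTES, value);
        size++;
    }

    public long get(long index) {
        return UNSAFE.getLong(address + index * Long.BYTES);
    }

    @Override
    public void close() {
        UNSAFE.freeMemory(address);
    }

    private static Unsafe loadUnsafe() {
        try {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            return (Unsafe) f.get(null);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }
}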
31. Down to the nanosecond
Benchmark Mode Cnt Score Error Units
LogBenchmark.testLogOneIntBlocking avgt 2 265.391 ns/op
LogBenchmark.testLogOneInt avgt 2 82.985 ns/op
LogBenchmark.testLogOneIntDisabled avgt 2 0.661 ns/op
Log4jBenchmark.testLogOneInt avgt 2 877.266 ns/op
Log4jBenchmark.testLogOneIntDisabled avgt 2 1.368 ns/op
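Numbers like these are typically produced with JMH in average-time mode. A skeleton of what such a microbenchmark looks like; the class below is a hypothetical stand-in, not QuestDB's actual LogBenchmark.

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import java.util.concurrent.TimeUnit;

// avgt mode reports ns/op, matching the units in the table above
@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class LogOneIntBenchmark {
    private int value;

    @Benchmark
    public int testLogOneInt() {
        // The real benchmark would call the zero-GC logger with a single int here;
        // returning the value prevents the JIT from eliminating the work entirely
        return value++;
    }
}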
33. How would *YOU* efficiently sort a multi-GB unordered CSV file?
34. Improved batch import (3 million rows/second)*
● The file doesn't fit into memory, so we need to rely on disk IO for sorting
● Designed a multi-pass parallel strategy
● Using the new io_uring Linux IO interface to max out disk access concurrency
Before:
A 76GB heavily unordered CSV file would take ~28 minutes to ingest
After:
The same file takes 335 seconds to ingest, at about 3 million rows per second (we also changed the disk type)
https://questdb.io/blog/2022/09/12/importing-3m-rows-with-io-uring
* Remember: all benchmarks are lies
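QuestDB's importer answers the question above with a multi-pass, parallel strategy on top of io_uring in native code. As a much simplified, single-threaded illustration of the underlying out-of-core idea only (not the actual implementation; error handling, header rows and reader cleanup are omitted), a classic external merge sort keyed on the first CSV column looks like this:

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Simplified external merge sort: sort a CSV that does not fit in RAM by its first column
public class ExternalCsvSort {
    static final int CHUNK_LINES = 1_000_000; // lines held in memory per chunk
    static final Comparator<String> BY_FIRST_COLUMN =
            Comparator.comparing(line -> line.substring(0, line.indexOf(',')));

    public static void main(String[] args) throws IOException {
        Path input = Path.of(args[0]);
        Path output = Path.of(args[1]);

        // Pass 1: split the input into individually sorted chunk files on disk
        List<Path> chunks = new ArrayList<>();
        try (BufferedReader in = Files.newBufferedReader(input)) {
            List<String> buffer = new ArrayList<>(CHUNK_LINES);
            String line;
            while ((line = in.readLine()) != null) {
                buffer.add(line);
                if (buffer.size() == CHUNK_LINES) {
                    chunks.add(writeSortedChunk(buffer));
                }
            }
            if (!buffer.isEmpty()) {
                chunks.add(writeSortedChunk(buffer));
            }
        }

        // Pass 2: k-way merge of the sorted chunks into the final, globally sorted file
        PriorityQueue<ChunkCursor> heap = new PriorityQueue<>(
                Comparator.comparing((ChunkCursor c) -> c.current, BY_FIRST_COLUMN));
        for (Path chunk : chunks) {
            ChunkCursor cursor = new ChunkCursor(Files.newBufferedReader(chunk));
            if (cursor.advance()) {
                heap.add(cursor);
            }
        }
        try (BufferedWriter out = Files.newBufferedWriter(output)) {
            while (!heap.isEmpty()) {
                ChunkCursor cursor = heap.poll();
                out.write(cursor.current);
                out.newLine();
                if (cursor.advance()) {
                    heap.add(cursor);
                }
            }
        }
    }

    static Path writeSortedChunk(List<String> buffer) throws IOException {
        buffer.sort(BY_FIRST_COLUMN);
        Path chunk = Files.createTempFile("chunk", ".csv");
        Files.write(chunk, buffer);
        buffer.clear();
        return chunk;
    }

    // Streams one sorted chunk file, exposing its current (smallest remaining) line
    static final class ChunkCursor {
        final BufferedReader reader;
        String current;
        ChunkCursor(BufferedReader reader) { this.reader = reader; }
        boolean advance() throws IOException {
            current = reader.readLine();
            return current != null;
        }
    }
}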
35. Some things we are trying out next for performance
● Compression, and exploring data formats like Arrow/Parquet
● Own ingestion protocol
● Embedding Julia in the database for custom code/UDFs
● Moving some parts to Rust
● Second-level partitioning
● Improved vectorization of some operations (group by multiple columns or by expressions)
● Add specific join optimizations (index nested loop joins, for example)
37. Quick recap
● Accept you are not PostgreSQL. You are not for everyone and cannot do everything
● Make the right assumptions
● Take advantage of modern hardware and operating systems
● Obsess about storage
● Reduce/control your dependencies
● Measure-implement-repeat continuously to improve performance
● All benchmarks are lies, but if you like them take a look at https://questdb.io/blog/tags/engineering/