The new time series kid on the block
Florian Lautenschlager
@flolaut
68,000,000,000* time-correlated data objects.
3
* collected every 10 seconds: 72 metrics x 15 processes x 20 hosts over 1 year
How do you store that amount of data on a laptop computer
and retrieve any point within a few milliseconds?
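A quick sanity check of the footnote's arithmetic: 72 metrics x 15 processes x 20 hosts = 21,600 time series; one year sampled every 10 seconds is about 3,153,600 data objects per series; 21,600 x 3,153,600 ≈ 68.1 billion data objects.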
Well, we tried that approach…
4
■ Store data objects in a classical RDBMS
■ But…
■Slow import of data objects
■Huge amount of hard drive space
■Slow retrieval of time series
■Limited scalability due to RDBMS
■Missing query functions for time series data
!68,000,000,000!
[Data model diagram: a Measurement Series (Name, Start, End) contains many Time Series (Name, Start, End), each of which contains many Data Objects (Timestamp, Value, Metric) plus Attributes such as Host, Process, …]
5
Hence it felt like …
Image Credit: http://www.sail-world.com/
But what to do? Chunks + Compression + Document storage!
6
■ The key ideas that enable the efficient storage of billions of data objects:
■Split time series into equally sized chunks of data objects
■Compress these chunks to reduce the data volume (see the sketch below)
■Store the compressed chunk and the attributes in one record
■ Reasons for success:
■32 GB disk usage to store 68 billion data objects
■Fast retrieval of data objects within a few milliseconds
■Fast navigation on attributes (finding the chunk)
■Everything runs on a laptop computer
■… and many more!
[Time Series Record: Start, End, Chunk[], Size, Metadata, … – 1 million records holding 68,000 data objects each]
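To make the chunking idea concrete, here is a minimal sketch that compresses one chunk of raw timestamp/value pairs with GZIP. It is an illustration only, not Chronix's actual serialization (which uses delta encoding and a binary format):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

class ChunkCompressor {
    // Compresses one chunk of a time series into the byte[] that would be stored
    // in the data field of a record, next to the searchable attributes.
    static byte[] compress(long[] timestamps, double[] values) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(out)) {
            for (int i = 0; i < timestamps.length; i++) {
                gzip.write((timestamps[i] + ";" + values[i] + "\n").getBytes(StandardCharsets.UTF_8));
            }
        }
        return out.toByteArray();
    }
}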
That's all. No secrets, nothing special, and nothing more to say.
 Time Series Database - What’s that? Definitions and typical features.
 Why did we choose Apache Solr and are there alternatives?
 Chronix Architecture that is based on Solr and Lucene.
 What’s needed to speed up Chronix to a firehorse.
What comes next?
Time Series Database: What’s that?
8
■ Definition 1: “A data object d is a tuple of {timestamp, value}, where
the value could be any kind of object.”
■ Definition 2: “A time series T is an arbitrary list of chronologically
ordered data objects of one value type.”
■ Definition 3: “A chunk C is a chronologically ordered part of a time
series.”
■ Definition 4: “A time series database TSDB is a specialized database
for storing and retrieving time series in an efficient and optimized
way.”
[Diagram illustrating the definitions: a data object d = {t,v}; a time series T = {d1, d2, …}; a time series T split into chunks C; a TSDB holding time series T1 and T3 with their chunks C1,1, C1,2, C2,1, C2,2]
A few typical features of a time series database
9
■ Data management
■Round Robin Storages
■Down-sample old time series
■Compression
■Delta-Encoding (see the sketch after this list)
■ Describing Attributes
■Arbitrary number of attributes
■For time series (Country, Host, Customer, …)
■For data objects (Scale, Unit, Type)
■ Performance and Operational
■Rare updates, Inserts are additive
■Fast inserts and retrievals
■Distributed and efficient per node
■No need of ACID, but consistency
■ Time series language and API
■Statistics: Aggregation (min, max, median), …
■Transformations: Time windows, time shifting,
resampling, …
■High level analyses: Outliers, Trends
Check out this good post about the requirements of a time series database:
http://www.xaprb.com/blog/2014/06/08/time-series-database-requirements/
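Delta-encoding, listed under data management above, is easy to illustrate: store only the differences between consecutive timestamps instead of the absolute values, which makes the mostly regular gaps highly compressible. A minimal sketch, not Chronix-specific:

static long[] deltaEncode(long[] timestamps) {
    long[] deltas = new long[timestamps.length];
    long previous = 0;
    for (int i = 0; i < timestamps.length; i++) {
        deltas[i] = timestamps[i] - previous; // the first entry keeps the absolute timestamp
        previous = timestamps[i];
    }
    return deltas;
}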
10
Some time series databases out there.
■RRDTool - http://oss.oetiker.ch/rrdtool/
■Mainly used in traditional monitoring systems
■Graphite – https://github.com/graphite-project
■Uses the concepts of RRDTool and puts some sugar on it
■InfluxDB - https://influxdata.com/time-series-platform/influxdb/
■A distributed time series database with a very handy query language
■OpenTSDB - http://opentsdb.net/
■A scalable time series database that runs on Hadoop and HBase
■Prometheus - https://prometheus.io/
■A monitoring system and time series database
■KairosDB - https://kairosdb.github.io/
■Like OpenTSDB but is based on Apache Cassandra
■… many more! And of course Chronix! - http://chronix.io/
“Hey, there are so many time series databases out there. Why did
you create a new solution?”
11
Our Requirements
■ A fast write and query performance
■ Run the database on a laptop computer
■ Minimal data volume for stored data objects
■ Storing arbitrary attributes
■ A query API for searching on all attributes
■ Large community and an active development
That is what Apache Solr delivers
■ Based on Lucene which is really fast
■ Runs embedded, standalone, distributed
■ Lucene has a built-in compression
■ Schema or schemaless
■ Solr Query Language
■ Backed by Lucidworks and an Apache project
“Our tool has been around for a good few years, and in the beginning there was no time series
database that complied with our requirements. And there isn’t one today!”
(Elasticsearch is an alternative. It is also based on Lucene.)
12
Let's dig deeper into Chronix’ internals.
Image Credit: http://www.taringa.net/posts/ciencia-educacion/12656540/La-Filosofia-del-Dr-House-2.html
Chronix’ architecture enables both efficient storage of time
series and millisecond range queries.
13
[Storage pipeline diagram:
(1) Semantic Compression (optional)
(2) Attributes and Chunks → Record with data:<chunk> and attributes
(3) Basic Compression → Record with data:compressed<chunk> and attributes
(4) Multi-Dimensional Storage → Record Storage
68 billion points stored as 1 million chunks of 68,000 points each, ~96% compression]
The key data type of Chronix is called a record.
It stores a compressed time series chunk and its attributes.
14
record{
data:compressed{<chunk>}
//technical fields
id: 3dce1de0-...-93fb2e806d19
version: 1501692859622883300
start: 1427457011238
end: 1427471159292
//optional attributes
host: prodI5
process: scheduler
group: jmx
metric: heapMemory.Usage.Used
max: 896.571
}
Data:compressed{<chunk of time series data>}
■ Time Series: timestamp, numeric value
■ Traces: calls, exceptions, …
■ Logs: access, method runtimes
■ Complex data: models, test coverage,
anything else…
Optional attributes
■ Arbitrary attributes for the time series
■ Attributes are indexed
■ Make the chunk searchable (see the query example below)
■ Can contain pre-calculated values
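Because the attributes are indexed, records can be found with ordinary Solr queries. A hypothetical example built from the field names of the record above (the exact field types and range syntax depend on the configured schema):

q=metric:*heapMemory* AND host:prodI5 AND start:[1427457011238 TO 1427471159292]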
Chronix provides specialized aggregations, transformations,
and analyses for time series that are commonly used.
15
Aggregations (ag)
■ Min / Max / Average / Sum / Count
■ Percentile
■ Standard Deviation
■ First / Last
■ Range
Analyses (analysis)
■ Trend Analysis
Using a linear regression model
■ Outlier Analysis
Using the IQR
■ Frequency Analysis
Check occurrence within a time range
■ Fast Dynamic Time Warping
Time series similarity search
■ Symbolic Aggregate Approximation
Similarity and pattern search
Transformations (tr)
■ Bottom/Top n-values
■ Moving average
■ Divide / Scale
■ Vectorisation
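These functions are requested via Solr filter queries. The code slide later in the deck passes aggregations with the ag= prefix; the analysis and transformation prefixes shown in parentheses above are assumed to work the same way, but only the ag= form appears in this deck:

query.addFilterQuery("ag=max,min,count,sdiff")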
Only scalar values? One size fits all? No! What about logs,
traces, and others? No problem – Just do it yourself!
16
■ Chronix Kassiopeia (Format)
■Time Series framework that is used by Chronix.
■Time Series Types:
■Numeric: Doubles (the default time series type)
■Thread Dumps: Stack traces (e.g. Java stack traces)
■Strace: Strace dumps (system call, duration, arguments)
public interface TimeSeriesConverter<T> {
/**
* Shall create an object of type T from the given binary time series.
*/
T from(BinaryTimeSeries binaryTimeSeriesChunk, long queryStart, long queryEnd);
/**
* Shall do the conversion of the custom time series T into the binary time series that is
* stored.
*/
BinaryTimeSeries to(T timeSeriesChunk);
}
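A custom converter is plugged into the client in place of the KassiopeiaSimpleConverter used on the code slide later in the deck. MyStraceConverter here is a hypothetical implementation of the interface above:

chronix = new ChronixClient(new MyStraceConverter(),
new ChronixSolrStorage(200, groupBy, reduce))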
Plain
That's the easiest way to play with Chronix: a single instance of
Chronix on a single node with an Apache Solr instance.
17
[Deployment diagram: Your Computer runs Java 8 (JRE) with Solr 6.0.0 (built on Lucene, port 8983) hosting Chronix 0.2 as Solr plugins: Chronix-Query-Handler, Chronix-Response-Writer, Chronix-Retention. A second Java 8 (JRE) process uses the Chronix-Client and talks to Solr over HTTP, exchanging JSON + Binary, Binary + Binary, or JSON + JSON payloads.]
Code-Slide: How to set up Chronix, ask for time series data, and
call some server-side aggregations.
18
■ Create a connection to Solr and set up Chronix
■ Define a range query and stream its results
■ Call some aggregations
solr = new HttpSolrClient("http://localhost:8913/solr/chronix/")
chronix = new ChronixClient(new KassiopeiaSimpleConverter<>(),
new ChronixSolrStorage(200, groupBy, reduce))
query = new SolrQuery("metric:*Load*")
chronix.stream(solr,query)
query.addFilterQuery("ag=max,min,count,sdiff")
stream = chronix.stream(solr,query)

[Callouts on the code: "metric:*Load*" gets all time series whose metric contains Load; ChronixSolrStorage groups chunks on a combination of attributes and reduces them to a time series; sdiff is the signed difference, e.g. First=20, Last=-100 → -80]
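chronix.stream(solr, query) returns a standard java.util.stream.Stream of time series, so the results can be consumed with the usual stream operations. A small sketch continuing the snippet above in the same shorthand style (the element type depends on the configured converter):

// print each reconstructed time series (one per group of chunks)
stream.forEach(System.out::println)

// or collect them for further processing
series = chronix.stream(solr, query).collect(Collectors.toList())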
That’s the four week data that is shipped with the release!
Tune Chronix to a firehorse. Even with defaults it’s blazing fast!
We have tuned Chronix in terms of chunk size and compression
technique to get the ideal default values for you.
21
■ Tuning Dataset
■Three real-world projects
■15 GB of time series data (typical monitoring data)
■About 500 million points in 15k time series
■92 typical queries with different time range and occurrence
■ We have measured:
■Compression rate for several compression techniques (T) and chunk sizes (C).
■Query time for all 92 queries in the mix (range + aggregations)
■ What we want to know: Ideal values for T and C
We have evaluated several compression techniques and chunk
sizes of the time series data to get the best parameter values.
22
T= GZIP +
C = 128 kBytes
Florian Lautenschlager, Michael Philippsen, Andreas Kumlehn, Josef Adersberger
Chronix: Efficient Storage and Query of Operational Time Series
International Conference on Software Maintenance and Evolution 2016 (submitted)
For more details about the tuning check our paper.
Compared to other time series databases, Chronix’ results for
our use case are outstanding. The approach works!
23
■ We have evaluated Chronix with:
■InfluxDB, Graphite, OpenTSDB, and KairosDB
■All databases are configured as single node
■ Storage demand for 15 GB of raw CSV time
series data
■Chronix (237 MB) takes 4 – 84 times less space
■ Query times on imported data
■49 – 91 % faster than the evaluated time
series databases
■ Memory footprint: after start, max during
import, max during query mix
■Graphite is best (926 MB), Chronix (1.5 GB) is
second. The others need 16 to 39 GB
The hard facts. For more details I suggest you read our
research paper about Chronix.
24
Florian Lautenschlager, Michael Philippsen, Andreas Kumlehn, Josef Adersberger
Chronix: Efficient Storage and Query of Operational Time Series
International Conference on Software Maintenance and Evolution 2016 (submitted)
Now it’s your turn.
The whole Chronix Stack. Not yet completely implemented.
Outlook: A powerful way to work with time series. A Chronix
Cloud, a Spark Cluster, and an analysis workbench like Zeppelin.
27
[Stack diagram: a Chronix Cloud (Chronix Nodes) feeds a Spark Cluster (Spark Nodes); on top, a Chronix Spark Context and a Spark SQL Context are used from Java and Scala, with Zeppelin and various other applications serving as the workbench.]
Chronix and Spark: see “Time Series Processing with Apache Spark” – Josef Adersberger, Wed, 3:00 pm
(mail) florian.lautenschlager@qaware.de
(twitter) @flolaut
(twitter) @ChronixDB
(web) www.chronix.io
#lovetimeseries
Bart Simpson
Other interesting related talks:
Real-world Analytics with Solr Cloud and Spark – Johannes Weigend, Wed, 3:00 pm
Time Series Processing with Apache Spark – Josef Adersberger, Wed, 3:00 pm
Code-Slide: Use Spark to process time series data that comes
straight out of Chronix.
29
■ Create a ChronixSparkContext
■ Define a range query and stream its results
■ Play with the data
conf = new SparkConf().setMaster(SPARK_MASTER).setAppName(CHRONIX)
jsc = new JavaSparkContext(conf)
csc = new ChronixSparkContext(jsc)
sqlc = new SQLContext(jsc)
query = new SolrQuery("metric:*Load*")
rdd = csc.queryChronixChunks(query,ZK_HOST,CHRONIX_COLLECTION,
new ChronixSolrCloudStorage());
Dataset<MetricObservation> ds = rdd.toObservationsDataset(sqlc)
rdd.mean()
rdd.max()
rdd.iterator()

[Callouts on the code: the first block sets up Spark, a JavaSparkContext, a ChronixSparkContext, and a SQLContext; the SolrQuery gets all time series whose metric contains Load; toObservationsDataset turns the RDD into a Dataset so Spark SQL features can be used]
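Once the observations are available as a Dataset, Spark SQL can be used on them. A small sketch continuing the snippet above; it assumes Spark 2.x temp-view registration and that MetricObservation exposes host and value columns (check the actual schema):

ds.createOrReplaceTempView("observations")
sqlc.sql("SELECT host, avg(value) FROM observations GROUP BY host").show()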

