Time Series Processing with Solr and Spark
Josef Adersberger (@adersberger), CTO, QAware
Lucene/Solr Revolution 2016 • October 11-14, 2016 • Boston, MA

TIME SERIES 101

WE’RE SURROUNDED BY TIME SERIES
▸ Operational data: Monitoring data, performance metrics, log events, …
▸ Data warehouse: the time dimension
▸ Measured Me: Activity tracking, ECG, …
▸ Sensor telemetry: Sensor data, …
▸ Financial data: Stock charts, …
▸ Climate data: Temperature, …
▸ Web tracking: Clickstreams, …
▸ …

WE’RE SURROUNDED BY TIME SERIES (PT. 2)
▸ Oktoberfest: Visitor and beer consumption trends [chart annotation: “the singularity”]

TIME SERIES: BASIC TERMS
▸ observation
▸ univariate time series
▸ multivariate time series
▸ multi-dimensional time series (time series tensor)
▸ time series set

ILLUSTRATIVE OPERATIONS ON TIME SERIES
▸ Time series => Time series: align, diff, downsampling, outlier
▸ Time series => Scalar: min/max, avg/med, slope, std-dev

OUR USE CASE

Monitoring data analysis of a business-critical, worldwide distributed software system, enabling root cause analysis and anomaly detection.
▸ > 1,000 nodes worldwide
▸ > 10 processes per node
▸ > 20 metrics per process (OS, JVM, app-specific)
▸ Measured every second
= about 6.3 trillion observations per year. Data retention: 5 years.

USE CASE: EXPLORING
▸ Drill-down: host → process → measurements → counters (metrics)
▸ Query time series metadata
▸ Superimpose time series

USE CASE: STATISTICS

USE CASE: ANOMALY DETECTION
Featuring Twitter Anomaly Detection (https://github.com/twitter/AnomalyDetection) and Yahoo EGADS (https://github.com/yahoo/egads).

USE CASE: SQL AND ZEPPELIN

CHRONIX SPARK
https://github.com/ChronixDB/chronix.spark

(Image: http://www.datasciencecentral.com)

AVAILABLE TIME SERIES DATABASES
https://github.com/qaware/big-data-landscape

EASY-TO-USE BIG TIME SERIES DATA STORAGE & PROCESSING ON SPARK

THE CHRONIX STACK (chronix.io)
▸ Big time series database
▸ Scale-out
▸ Storage-efficient
▸ Interactive queries
▸ No separate servers: drop-in to existing Solr and Spark installations
▸ Integrated into the relevant open source ecosystem
[Stack diagram: Chronix Storage, Chronix Server, Chronix Spark and the Chronix Format at the core; analytics frontends Grafana and Zeppelin; collection via Logstash, fluentd, collectd, Prometheus; ingestion bridges for KairosDB, OpenTSDB, InfluxDB, Graphite]

Distributed data & data retrieval
‣ Data sharding
‣ Fast index-based queries
‣ Efficient storage format
Distributed processing
‣ Heavy-lifting distributed processing
‣ Efficient integration of Spark and Solr
Result processing
‣ Post-processing on a smaller set of time series
(Icon credits: Nimal Raj (database), Arthur Shlain (console), alvarobueno (tasklist))

TIME SERIES MODEL
Set of univariate, multi-dimensional, numeric time series.
▸ set … because it’s more flexible and easier to parallelize if operations can input and output multiple time series.
▸ univariate … because multivariate would introduce too much complexity (and we have our set to bundle multiple time series).
▸ multi-dimensional … because the ability to slice & dice the set of time series is very convenient for a lot of use cases.
▸ numeric … because it’s the most common use case.
A single time series is identified by the combination of its non-temporal dimension values (e.g. unit “mem usage” + host “aws42” + process “tomcat”).
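
To illustrate the identity notion, here is a minimal sketch. In Chronix the MetricTimeSeriesKey class (used in joinChunks() later in this deck) plays this role; the class below is a hypothetical stand-in, not the actual API:

import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: a time series identity derived from its
// non-temporal dimension values (e.g. unit + host + process).
public class TimeSeriesIdentity {
    private final TreeMap<String, String> dimensions = new TreeMap<>();

    public TimeSeriesIdentity(Map<String, String> dims) {
        dimensions.putAll(dims); // TreeMap keeps a stable key order
    }

    // Two series with identical dimension values are the same logical series.
    public String asKey() {
        return dimensions.toString(); // e.g. {host=aws42, process=tomcat, unit=mem usage}
    }
}
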
CHRONIX SPARK API: ENTRY POINTS
ChronixRDD
‣ Represents a set of time series
‣ Distributed operations on sets of time series
ChronixSparkContext
‣ Creates ChronixRDDs
‣ Speaks with the Chronix Server (Solr)

CHRONIX SPARK API: DATA MODEL
ChronixRDD (a set of MetricTimeSeries)
+ toDataFrame() -> MetricObservationDataFrame
+ toDataset() -> Dataset<MetricTimeSeries>
+ toObservationsDataset() -> Dataset<MetricObservation>
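
A minimal usage sketch of these conversions, assuming a ChronixRDD named rdd and a Spark SQLContext; only toDataFrame(sqlContext) is shown verbatim in this deck, the other two signatures are assumptions:

// Sketch, not verified against the actual API: switch between the
// time-series view and the flat per-observation view of the same data.
DataFrame df = rdd.toDataFrame(sqlContext);                             // one row per observation
Dataset<MetricTimeSeries> tsSet = rdd.toDataset(sqlContext);            // typed time series (signature assumed)
Dataset<MetricObservation> obs = rdd.toObservationsDataset(sqlContext); // typed observations (signature assumed)
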
SPARK APIs FOR DATA PROCESSING

           RDD      DataFrame   Dataset
typed      yes      no          yes
optimized  medium   highly      highly
mature     yes      yes         medium
SQL        no       yes         no

CHRONIX RDD
‣ The set characteristic: a JavaRDD of MetricTimeSeries
‣ Statistical operations
‣ Filter the set (esp. by dimensions)
METRICTIMESERIES DATA TYPE
‣ Access all timestamps
‣ Access all observations as stream
‣ Access all numeric values
‣ The multi-dimensionality: get/set dimensions (attributes)
// Create Chronix Spark context from a SparkContext / JavaSparkContext
ChronixSparkContext csc = new ChronixSparkContext(sc);

// Read data into ChronixRDD
SolrQuery query = new SolrQuery(
    "metric:\"java.lang:type=Memory/HeapMemoryUsage/used\"");

ChronixRDD rdd = csc.query(query,
    "localhost:9983",  // ZooKeeper host
    "chronix",         // Solr collection for Chronix
    new ChronixSolrCloudStorage());

// Calculate the overall min/max/mean of all time series in the RDD
double min = rdd.min();
double max = rdd.max();
double mean = rdd.mean();

// Query with Spark SQL on the DataFrame view
DataFrame df = rdd.toDataFrame(sqlContext);
DataFrame res = df
    .select("time", "value", "process", "metric")
    .where("process='jenkins-jolokia'")
    .orderBy("time");
res.show();

CHRONIX SPARK INTERNALS

Distributed data & data retrieval
‣ Data sharding (OK)
‣ Fast index-based queries (OK)
‣ Efficient storage format

CHRONIX FORMAT: CHUNKING TIME SERIES
Logical time series and physical time series (chunks) share the same shape:
‣ start: TimeStamp
‣ end: TimeStamp
‣ dimensions: Map<String, String>
‣ observations: byte[]
Chunking: 1 logical time series = n physical time series (chunks)
1 chunk = fixed amount of observations
1 chunk = 1 Solr document
CHRONIX FORMAT: ENCODING OF OBSERVATIONS
Binary encoding of all timestamp/value pairs (observations) with ProtoBuf, incl. binary compression.
Delta encoding leads to more effective binary compression:
▸ … of timestamps (DCC, Date-Delta-Compaction): per chunk, derive periodic distributed timestamps pts from the chunk’s timespan / nbr. of observations; if |pts(x) - rts(x)| < threshold for a real timestamp rts(x), snap rts(x) = pts(x) and store value_to_store = pts(x) - rts(x).
▸ … of values: value_to_store = value(x) - value(x-1).
CHRONIX FORMAT: TUNING CHUNK SIZE AND CODEC
Sweet spot between storage demand and access time: GZIP + 128 kBytes chunks.
Florian Lautenschlager, Michael Philippsen, Andreas Kumlehn, Josef Adersberger: “Chronix: Efficient Storage and Query of Operational Time Series”, International Conference on Software Maintenance and Evolution 2016 (submitted).

CHRONIX FORMAT: STORAGE EFFICIENCY BENCHMARK
[benchmark chart]

CHRONIX FORMAT: PERFORMANCE BENCHMARK
[benchmark chart; axes: query, nbr of queries; unit: seconds]

Distributed processing
‣ Heavy-lifting distributed processing
‣ Efficient integration of Spark and Solr

SPARK AND SOLR BEST PRACTICES: ALIGN PARALLELISM
• Unit of parallelism in Spark: Partition
• Unit of parallelism in Solr: Shard
• 1 Spark Partition = 1 Solr Shard
[Diagram: each ChronixRDD partition reads the SolrDocuments (chunks) of exactly one Solr shard and joins them into TimeSeries]

ALIGN THE PARALLELISM WITHIN CHRONIXRDD

public ChronixRDD queryChronixChunks(
    final SolrQuery query,
    final String zkHost,
    final String collection,
    final ChronixSolrCloudStorage<MetricTimeSeries> chronixStorage)
    throws SolrServerException, IOException {

  // Figure out all Solr shards to query for this collection
  // (uses CloudSolrClient in the background)
  List<String> shards = chronixStorage.getShardList(zkHost, collection);

  // Parallelize the requests to the shards: query each shard in parallel
  // and convert SolrDocuments to MetricTimeSeries
  JavaRDD<MetricTimeSeries> docs = jsc.parallelize(shards, shards.size()).flatMap(
      (FlatMapFunction<String, MetricTimeSeries>) shardUrl -> chronixStorage.streamFromSingleNode(
          new KassiopeiaSimpleConverter(), shardUrl, query)::iterator);
  return new ChronixRDD(docs);
}

SPARK AND SOLR BEST PRACTICES: PUSHDOWN

SolrQuery query = new SolrQuery(
    "<Solr query containing filters and aggregations>");

ChronixRDD rdd = csc.query(query, …

Predicate pushdown
• Pre-filter time series based on their metadata (dimensions, start, end) with Solr.
Aggregation pushdown
• Perform pre-aggregations (min/max/avg/…) at ingestion time and store them as metadata.
• (to come) Perform aggregations on Solr level at query time by enabling Solr to decode observations.
SPARK AND SOLR BEST PRACTICES: EFFICIENT DATA TRANSFER
‣ Reduce volume: pushdown & compression
‣ Use efficient protocols: low-overhead, bulk, stream
‣ Avoid remote transfer: place Spark tasks (each processes 1 partition) on the Solr node with the appropriate shard (to come, by using SolrRDD)
[Data flow: Solr export handler -> CloudSolrStream (bulk of JSON tuples) -> format decoder -> ChronixRDD]

private Stream<MetricTimeSeries> streamWithCloudSolrStream(String zkHost, String collection,
    String shardUrl, SolrQuery query,
    TimeSeriesConverter<MetricTimeSeries> converter) throws IOException {
  Map params = new HashMap();
  params.put("q", query.getQuery());
  params.put("sort", "id asc");
  // Pin the query to one shard
  params.put("shards", extractShardIdFromShardUrl(shardUrl));
  params.put("fl",
      Schema.DATA + ", " + Schema.ID + ", " + Schema.START + ", " + Schema.END +
      ", metric, host, measurement, process, ag, group");
  // Use the export request handler
  params.put("qt", "/export");
  params.put("distrib", false);

  CloudSolrStream solrStream = new CloudSolrStream(zkHost, collection, params);
  solrStream.open();
  SolrTupleStreamingService tupStream = new SolrTupleStreamingService(solrStream, converter);
  // Boilerplate code to stream the response
  return StreamSupport.stream(
      Spliterators.spliteratorUnknownSize(tupStream, Spliterator.SIZED), false);
}

Time series databases should be first-class citizens. Chronix leverages Solr and Spark to be storage efficient and to allow interactive queries for big time series data.

THANK YOU! QUESTIONS?
Mail: josef.adersberger@qaware.de
Twitter: @adersberger
TWITTER.COM/QAWARE - SLIDESHARE.NET/QAWARE

BONUS SLIDES

PERFORMANCE
(Image: codingvoding.tumblr.com)
PREMATURE OPTIMIZATION IS NOT EVIL IF YOU HANDLE BIG DATA (Josef Adersberger)

PERFORMANCE: USING A JAVA PROFILER WITH A LOCAL CLUSTER
PERFORMANCE: HIGH-PERFORMANCE, LOW-OVERHEAD COLLECTIONS

PERFORMANCE
830 MB -> 360 MB (-57%): profiling unveiled wrong Jackson handling inside of SolrClient.

THE SECRETS OF DISTRIBUTED PROCESSING PERFORMANCE
Rule 1: Be as close to the data as possible! (CPU cache > memory > local disk > network)
Rule 2: Reduce data volume as early as possible! (as long as you don’t sacrifice parallelization)
Rule 3: Parallelize as much as possible! (max = #cores * x)

PERFORMANCE: THE RULES APPLIED
‣ Rule 1: Be as close to the data as possible!
  1. Solr caching
  2. Spark in-memory processing with activated RDD compression
  3. Binary protocol between Solr and Spark
‣ Rule 2: Reduce data volume as early as possible!
  ‣ Efficient storage format (Chronix Format)
  ‣ Predicate pushdown to Solr (query)
  ‣ Group-by & aggregation pushdown to Solr (faceting within a query)
‣ Rule 3: Parallelize as much as possible!
  ‣ Scale-out on data level with SolrCloud
  ‣ Scale-out on processing level with Spark

APACHE SPARK 101

CHRONIX SPARK WONDERLAND: ARCHITECTURE
[architecture diagram]

SPARK TERMINOLOGY (1/2)
▸ RDD: Has transformations and actions. Hides data partitioning & distributed computation. References a set of partitions (“output partitions”) - materialized or not - and has dependencies on another RDD (“input partitions”). RDD operations are evaluated as late as possible (when an action is called). Unless it is the root RDD, the partitions of an RDD are held in memory, but they can be persisted on request.
▸ Partitions: (Logical) chunks of data. Default unit and level of parallelism - inside of a partition everything is a sequential operation on records. Has to fit into memory. Can have different representations (in-memory, on disk, off-heap, …)
SPARK TERMINOLOGY (2/2)
▸ Job: A computation which is launched when an action is called on an RDD.
▸ Task: The atomic unit of work (function). Bound to exactly one partition.
▸ Stage: Set of task pipelines which can be executed in parallel on one executor.
▸ Shuffling: When partitions need to be transferred between executors. Shuffle write = outbound partition transfer; shuffle read = inbound partition transfer.
▸ DAG Scheduler: Computes the DAG of stages from the RDD DAG and determines the preferred location for each task.

THE COMPETITORS / ALTERNATIVES: CHRONIX RDD VS. SPARK-TS
▸ Spark-TS provides no specific time series storage; it uses the Spark persistence mechanisms instead. This leads to less efficient storage usage and fewer possibilities for performance optimizations via predicate pushdown.
▸ In contrast to Spark-TS, Chronix does not align all time series values on one vector of timestamps. This leads to greater flexibility in time series aggregation.
▸ Chronix provides multi-dimensional time series, as this is very useful for data warehousing and APM.
▸ Chronix has support for Datasets, as this will be an important Spark API in the near future. But Chronix currently doesn’t support an IndexedRowMatrix for Spark ML.
▸ Chronix is purely written in Java. There is no explicit support for Python and Scala yet.
▸ Chronix does not support a ZonedTime, as this would make it way more complicated.

CHRONIX SPARK INTERNALS

CHRONIXRDD: GET THE CHUNKS FROM SOLR
(See the queryChronixChunks() listing under “ALIGN THE PARALLELISM WITHIN CHRONIXRDD” above.)

BINARY PROTOCOL WITH STANDARD SOLR CLIENT

private Stream<MetricTimeSeries> streamWithHttpSolrClient(String shardUrl,
    SolrQuery query,
    TimeSeriesConverter<MetricTimeSeries> converter) {
  // Use an HttpSolrClient pinned to one shard
  HttpSolrClient solrClient = getSingleNodeSolrClient(shardUrl);
  // Use the binary (request) protocol
  solrClient.setRequestWriter(new BinaryRequestWriter());
  query.set("distrib", false);
  SolrStreamingService<MetricTimeSeries> solrStreamingService =
      new SolrStreamingService<>(converter, query, solrClient, nrOfDocumentPerBatch);
  // Boilerplate code to stream the response
  return StreamSupport.stream(
      Spliterators.spliteratorUnknownSize(solrStreamingService, Spliterator.SIZED), false);
}

EXPORT HANDLER PROTOCOL
(See the streamWithCloudSolrStream() listing under “SPARK AND SOLR BEST PRACTICES: EFFICIENT DATA TRANSFER” above.)

CHRONIXRDD: FROM CHUNKS TO TIME SERIES

public ChronixRDD joinChunks() {
  // Group chunks according to their identity (dimension values)
  JavaPairRDD<MetricTimeSeriesKey, Iterable<MetricTimeSeries>> groupRdd
      = this.groupBy(MetricTimeSeriesKey::new);

  // Join the chunks of each group into one logical time series
  JavaPairRDD<MetricTimeSeriesKey, MetricTimeSeries> joinedRdd
      = groupRdd.mapValues((Function<Iterable<MetricTimeSeries>, MetricTimeSeries>) mtsIt -> {
    MetricTimeSeriesOrdering ordering = new MetricTimeSeriesOrdering();
    List<MetricTimeSeries> orderedChunks = ordering.immutableSortedCopy(mtsIt);
    MetricTimeSeries result = null;
    for (MetricTimeSeries mts : orderedChunks) {
      if (result == null) {
        result = new MetricTimeSeries
            .Builder(mts.getMetric())
            .attributes(mts.attributes()).build();
      }
      result.addAll(mts.getTimestampsAsArray(), mts.getValuesAsArray());
    }
    return result;
  });

  JavaRDD<MetricTimeSeries> resultJavaRdd =
      joinedRdd.map((Tuple2<MetricTimeSeriesKey, MetricTimeSeries> mtTuple) -> mtTuple._2);

  return new ChronixRDD(resultJavaRdd);
}