This document summarizes a talk on using Spark and Cassandra together for interactive analytics. It describes how Evan Chan and his team at Ooyala use both technologies to generate analytics from raw data in Cassandra flexibly and quickly. It outlines their architecture: Spark generates materialized views from Cassandra data, and those cached views then power low-latency queries.
2. Who is this guy?
• Staff Engineer, Compute and Data Services, Ooyala
• Building multiple web-scale real-time systems on top of C*, Kafka, Storm, etc.
• Scala/Akka guy
• Very excited by open source, big data projects
• @evanfchan
3. Agenda
• Cassandra at Ooyala
• What problem are we trying to solve?
• Spark and Shark
• Integrating Cassandra and Spark
• Our Spark/Cassandra Architecture
6. COMPANY OVERVIEW
• Founded in 2007
• Commercially launched in 2009
• 230+ employees in Silicon Valley, LA, NYC, London, Paris, Tokyo, Sydney & Guadalajara
• Global footprint: 200M unique users, 110+ countries, and more than 6,000 websites
• Over 1 billion videos played per month and 2 billion analytic events per day
• 25% of U.S. online viewers watch video powered by Ooyala
7. Trusted Video Partner
(Logo slide: strategic partners and customers.)
8. We are a large Cassandra user
• 12 clusters ranging in size from 3 to 107 nodes
• Total of 28TB of data managed over ~220 nodes
• Powers all of our analytics infrastructure
• Traditional analytics aggregations
• Recommendations and trends
• DSE/C* 1.0.x, 1.1.x, 1.2.6
9. Becoming a big Spark user...
• Started investing in Spark at the beginning of 2013
• Two teams of developers building on Spark
• Actively contributing to the Spark developer community
• Deploying Spark to a large (>100 node) production cluster
• Spark community very active, huge amount of interest
13. Today: Precomputed Aggregates
• Video metrics computed along several high cardinality dimensions
• Very fast lookups, but inflexible, and hard to change
• Most computed aggregates are never read
• What if we need more dynamic queries?
• Top content for mobile users in France
• Engagement curves for users who watched recommendations
• Data mining, trends, machine learning
14. The Static - Dynamic Continuum

100% Precomputation:
• Super fast lookups
• Inflexible, wasteful
• Best for the 80% most common queries

100% Dynamic:
• Always compute results from raw data
• Flexible but slow
15. Where We Want To Be
Partly dynamic:
• Pre-aggregate the most common queries
• Flexible, fast dynamic queries
• Easily generate many materialized views
17. Introduction To Spark
• In-memory distributed computing framework
• Created by the UC Berkeley AMP Lab in 2010
• Targeted problems that MapReduce is bad at:
  – Iterative algorithms (machine learning)
  – Interactive data mining
• More general purpose than Hadoop MR
• Top-level Apache project
• Active contributions from Intel, Yahoo, lots of companies -- huge momentum
19. Throughput: Memory Is King
(Bar chart comparing throughput on a 0-150,000 scale: C* cold cache, C* warm cache, Spark RDD.)
Benchmark setup: 6-node C*/DSE 1.1.9 cluster, Spark 0.7.0.
Spark cached RDD is 10-50x faster than reading from raw Cassandra (caching sketch below).
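To make the fast path concrete, here is a minimal caching sketch. It assumes modern Spark package names; CacheDemo and the parallelized rows are hypothetical stand-ins for a Cassandra-backed RDD.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object CacheDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("cache-demo").setMaster("local[4]"))
    // Stand-in for a fact table read from Cassandra.
    val rows = sc.parallelize(1 to 1000000).map(i => (s"key-$i", i.toLong))
    val cached = rows.persist(StorageLevel.MEMORY_ONLY)
    cached.count() // first action pays the full read cost ("cold cache")
    cached.count() // later actions are served from memory -- the 10-50x fast path
    sc.stop()
  }
}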
20. Developers Love It
• “I wrote my first aggregation job in 30 minutes”
• High-level “distributed collections” API
• No Hadoop cruft
• Full power of Scala, Java, Python
• Interactive REPL shell
21. Spark vs Hadoop Word Count

Spark (Scala):

val file = spark.textFile("hdfs://...")
val counts = file.flatMap(line => line.split(" "))
                 .map(word => (word, 1))
                 .reduceByKey(_ + _)
Hadoop MapReduce (Java):

package org.myorg;

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        Job job = new Job(conf, "wordcount");

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.waitForCompletion(true);
    }
}
22. One Platform To Rule Them All
(Diagram labels: HIVE on Spark; Spark Streaming - discretized stream processing.)
• SQL, Graph, ML, Streaming all in one framework
• Much higher code sharing/reuse
• Easy integration between components
• Fewer platforms == lower TCO
• Integration with Mesos, YARN helps share resources
23. Shark - HIVE On Spark
• 100% HiveQL compatible
• 10-100x faster than HIVE, answers in seconds
• Reuse UDFs, SerDe’s, StorageHandlers
• Can use DSE / CassandraFS for Metastore
28. Spark RDD
• RDD = Resilient Distributed Dataset
• Multiple partitions living on different nodes
• Each partition has records
(Diagram: Partition 1 through Partition 4 spread across Node 1 and Node 2; sketch below.)
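A minimal sketch of this partitioning, assuming a local Spark context (PartitionDemo is a hypothetical name): parallelize a collection into four partitions and report what each one holds.

import org.apache.spark.{SparkConf, SparkContext}

object PartitionDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("partitions").setMaster("local[2]"))
    // Four partitions, as in the diagram; on a real cluster they live on different nodes.
    val rdd = sc.parallelize(1 to 20, numSlices = 4)
    rdd.mapPartitionsWithIndex { (idx, records) =>
      Iterator(s"partition $idx holds ${records.length} records")
    }.collect().foreach(println)
    sc.stop()
  }
}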
29. InputFormat vs RDD

InputFormat:
• Supports Hadoop, HIVE, Spark, Shark
• Have to implement multiple classes (InputFormat, RecordReader, Writable, etc.); clunky API
• Two APIs, and often need to implement both (HIVE needs the older one)

RDD:
• Spark / Shark only
• One class, simple API
• Just one API

• You can easily use InputFormats in Spark using newAPIHadoopRDD() (see the sketch below).
• Writing a custom RDD could have saved us lots of time.
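For reference, a sketch of the newAPIHadoopRDD() route using the Cassandra 1.2-era Hadoop classes (ColumnFamilyInputFormat, ConfigHelper, Thrift slice predicates). The exact configuration calls and value types varied across Cassandra versions, so treat the details as assumptions rather than the production setup.

import java.nio.ByteBuffer
import java.util.SortedMap
import org.apache.cassandra.db.IColumn
import org.apache.cassandra.hadoop.{ColumnFamilyInputFormat, ConfigHelper}
import org.apache.cassandra.thrift.{SlicePredicate, SliceRange}
import org.apache.hadoop.conf.Configuration
import org.apache.spark.{SparkConf, SparkContext}

object InputFormatRead {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("cf-read").setMaster("local[2]"))
    val conf = new Configuration()
    // Point the InputFormat at the cluster and column family.
    ConfigHelper.setInputInitialAddress(conf, "127.0.0.1")
    ConfigHelper.setInputRpcPort(conf, "9160")
    ConfigHelper.setInputColumnFamily(conf, "my_keyspace", "my_cf")
    ConfigHelper.setInputPartitioner(conf, "Murmur3Partitioner")
    // Ask for up to 100 columns of every row.
    val range = new SliceRange(ByteBuffer.wrap(Array.empty[Byte]),
      ByteBuffer.wrap(Array.empty[Byte]), false, 100)
    ConfigHelper.setInputSlicePredicate(conf, new SlicePredicate().setSlice_range(range))
    // One call wires the whole InputFormat/RecordReader machinery into Spark.
    val rows = sc.newAPIHadoopRDD(conf, classOf[ColumnFamilyInputFormat],
      classOf[ByteBuffer], classOf[SortedMap[ByteBuffer, IColumn]])
    println(rows.count())
    sc.stop()
  }
}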
30. Skipping The InputFormat
(Diagram: row keys Rowkey1-Rowkey4 with their row data spread across Node 1 and Node 2.)

sc.parallelize(rowkeys).flatMap(readColumns(_))
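A runnable sketch of that one-liner, with readColumns stubbed out; in the real system it would be a Cassandra client call fetching one row's columns (DirectRead and the stub are hypothetical).

import org.apache.spark.{SparkConf, SparkContext}

object DirectRead {
  // Stub standing in for a direct Cassandra read of one row's columns.
  def readColumns(rowkey: String): Seq[(String, String)] =
    Seq((rowkey, "column-data"))

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("direct-read").setMaster("local[2]"))
    val rowkeys = Seq("Rowkey1", "Rowkey2", "Rowkey3", "Rowkey4")
    // Distribute the row keys; each task fetches its rows directly --
    // no InputFormat, RecordReader, or Writable classes involved.
    sc.parallelize(rowkeys).flatMap(readColumns(_)).collect().foreach(println)
    sc.stop()
  }
}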
32. From Raw Logs To Fast Queries
(Pipeline diagram:)
Raw Logs → Spark (process) → C* columnar storage (OLAP Tables 1, 2, 3) → Spark / Shark → predefined queries and ad-hoc HiveQL
33. Why Cassandra Alone Isn’t Enough
• Over 30 million multi-dimensional fact table rows per day
• Materializing every possible answer isn’t remotely feasible
• Multi-dimensional filtering and grouping alone leads to many billions of possible answers
• Querying fact tables in Cassandra is too slow
  • Reading millions or billions of random rows
  • CQL doesn’t support grouping (see the Spark grouping sketch below)
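To illustrate the grouping gap, here is a minimal sketch of a GROUP BY-style aggregation done in Spark instead of CQL (GroupDemo and the sample rows are hypothetical):

import org.apache.spark.{SparkConf, SparkContext}

object GroupDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("group-demo").setMaster("local[2]"))
    // Stand-in fact rows: (country, plays). In production these come from C*.
    val facts = sc.parallelize(Seq(("FR", 3L), ("US", 5L), ("FR", 7L)))
    // The grouping CQL could not express, done as a distributed aggregation.
    facts.reduceByKey(_ + _).collect().foreach { case (country, plays) =>
      println(s"$country: $plays")
    }
    sc.stop()
  }
}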
34. Our Approach
• Use Cassandra to store the raw fact tables
• Optimize the schema for OLAP workloads
  • Fast full-table reads
  • Easily read fewer columns
• Use Spark for fast random row access and fast distributed computation
35. An OLAP Schema for Cassandra
• Metadata CF: one row per partition (e.g. 2013-04-05T00:00-part-0) listing its columns, e.g. {columns: [“country”, “city”, “plays”]}
• Index CF: row keys like 2013-04-05T00:00Z#id1; each dimension (country, city) points to sections of rows (Section 0 = rows 0-9999, Section 1 = rows 10000-19999, ...)
• Columns CF: the actual column data, stored in chunks keyed uuid-part-0, uuid-part-1, ...
• Optimized for: selective column loading, maximum throughput and compression (toy read-path sketch below)
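A toy model of the read path this schema enables: consult the Metadata CF for a partition's columns, then load only the chunks a query needs. All names and layouts here are illustrative, not the production code.

object OlapReadPath {
  // Metadata CF: partition -> columns present in that partition.
  val metadataCF = Map("2013-04-05T00:00-part-0" -> Seq("country", "city", "plays"))

  // Resolve which column chunks a query must load; everything else is skipped,
  // which is what makes selective column loading cheap.
  def chunksToLoad(partition: String, wanted: Seq[String]): Seq[String] = {
    val available = metadataCF.getOrElse(partition, Seq.empty)
    wanted.filter(available.contains).map(col => s"$partition/$col")
  }

  def main(args: Array[String]): Unit =
    chunksToLoad("2013-04-05T00:00-part-0", Seq("country", "plays")).foreach(println)
}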
38. Fault Tolerance
• Cached dataset lives in the Java heap only - what if the process dies?
• Spark lineage: automatic recomputation from source, but this is expensive!
• Can also replicate the cached dataset to survive single-node failures (see the sketch below)
• Persist materialized views back to C*, then load into cache -- now the recovery path is much faster
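A sketch of the replication option mentioned above, using Spark's built-in 2x-replicated storage level (ReplicatedCache and the view computation are stand-ins):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object ReplicatedCache {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("replicated-cache").setMaster("local[2]"))
    // Stand-in for a materialized view computed from raw fact tables.
    val view = sc.parallelize(1 to 100000).map(i => (i % 100, 1L)).reduceByKey(_ + _)
    // MEMORY_ONLY_2 keeps each cached partition on two nodes, so a single
    // node failure does not force a full lineage recomputation.
    view.persist(StorageLevel.MEMORY_ONLY_2)
    view.count()
    sc.stop()
  }
}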