Slides from a presentation by Monal Daxini at Disney, Glendale, CA about Netflix Open Source Software, Cloud Data Persistence, and Cassandra Best Practices
8. Micro Services
Microservices DO NOT mean better availability
Need Fault Tolerant Architecture
Service Dependency View
Distributed Tracing (Dapper inspired)
17. Encoding PaaS
Master - Worker Pattern
Decoupled by Priority Queues
with message lease
State in Cassandra
18. Oracle >> Cassandra
Data Model & Lack of ACID
Client Cluster Symbiosis
Embrace Eventual Consistency
Data Migration
Shadow Write / Reads
19. Object To Cassandra Mapping
/**
 * @author mdaxini
 */
@CColumnFamily(name = "Sequence", shared = true)
@Audited(columnFamily = "sequence_audit")
public class SequenceBean {
    @CId(name = "id")
    private String sequenceName;

    @CColumn(name = "sequenceValue")
    private Long sequenceValue;

    @CColumn(name = "updated")
    @TemporalAutoUpdate
    @JsonProperty("updated")
    private Date updated;
}
20. Object To Cassandra Mapping
@JsonAutoDetect(JsonMethod.NONE)
@JsonIgnoreProperties(ignoreUnknown = true)
@CColumnFamily(name = "task")
public class Job {
    @CId
    private JobKey jobKey;
}

public final class TaskKey {
    @CId(order = 0)
    private Long packageId;

    @CId(order = 1)
    private UUID taskId;
}
21. Priority-Scheduling Queue
Evolution:
One SQS Queue per priority range
Store and forward (rate-adaptive) to SQS Queue
Rule-based priority, leases, RDBMS-based with prefetch
22. Encoding PaaS Farm
One command deployment and upgrade
Self Serve
Homogeneous View of Windows and Linux
Pioneered Ubuntu - production since 2011
23. Innovate Fast
Build for Pragmatic Scale
Innovate for Business
Standardize Later*
25. Platform Big Data/Caching & Services
Cassandra: Astyanax, Priam, CassJMeter
Hadoop Platform As a Service: Genie, Lipstick, Inviso*
Caching
Adapted from a slide by @stonse
26. CDE Charter
Dynomite*
Redis
ElasticSearch
Spark*
Solr*
Cassandra (1.2.x >> 2.0.x)
Priam
Astyanax
Skynet*
* Under Construction
30. Use RandomPartitioner
Have at least 3 replicas (quorum)
Same number of replicas - simpler operations
create keyspace oracle
  with placement_strategy = 'NetworkTopologyStrategy'
  and strategy_options = {us-west-2 : 3, us-east : 3};
31. Move to CQL3 from Thrift
Codifies best practices
Leverage Collections (albeit restricted cardinality)
Use Key Caching
As a default turn off Row Caching
Rename all composite columns in one ALTER TABLE statement.
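As an illustration of these points (table and column names are hypothetical, in C* 1.2/2.0-era CQL3 syntax), a table with a collection and key caching enabled might look like:

```sql
-- Hypothetical CQL3 table: a set collection, key cache on, row cache off
CREATE TABLE video_metadata (
    video_id uuid PRIMARY KEY,
    title text,
    tags set<text>               -- collections work, but cardinality is limited
) WITH caching = 'keys_only';
```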
32. Watch length of column names
Use "COMPACT STORAGE" wisely
Cannot use collections - depends on CompositeType
Non-compact storage uses 2 bytes per internal cell, but is preferred.
* Image courtesy of the DataStax blog
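A sketch of the trade-off (table name hypothetical): COMPACT STORAGE maps to a classic Thrift-style layout and saves the per-cell overhead, but rules out collections and most later schema changes:

```sql
CREATE TABLE playback_events (
    session_id uuid,
    event_time timestamp,
    payload blob,                -- only one non-key column allowed here
    PRIMARY KEY (session_id, event_time)
) WITH COMPACT STORAGE;
```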
34. Prefer CL_ONE
Data typically replicates within 500 ms across the region
If using quorum reads and writes, set read_repair_chance to 0.0 or a very low value.
Make sure repairs are run often
Eventual consistency does not mean hopeful consistency
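For example (table name hypothetical), read repair can be dialed down as a table option, since quorum reads and writes already guarantee replica overlap:

```sql
ALTER TABLE playback_events WITH read_repair_chance = 0.0;
```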
35. Avoid secondary indexes for high-cardinality values
In most cases we set gc_grace_seconds = 10 days
Avoid hot rows
detect using node level latency metrics
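gc_grace_seconds is a per-table option; 10 days expressed in seconds (table name hypothetical):

```sql
ALTER TABLE playback_events WITH gc_grace_seconds = 864000;  -- 10 days
```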
36. Avoid heavy rows
Avoid too-wide rows (< 100K columns)
Don’t use C* as a Queue
Tombstones will bite you
39. Guesstimate and then validate sstable_size_in_mb
Hint: based on write rate and size
160 MB for LeveledCompactionStrategy
SizeTieredCompactionStrategy (the C* default) - 50 MB
40. Atomic batches
no isolation; atomic only for rows within a partition key
no automatic rollback
Lightweight transactions
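A minimal sketch of both features (table and column names hypothetical):

```sql
-- Atomic batch: all statements eventually apply together,
-- but there is no isolation and no rollback
BEGIN BATCH
    INSERT INTO users (id, name) VALUES (1, 'alice');
    INSERT INTO users_by_name (name, id) VALUES ('alice', 1);
APPLY BATCH;

-- Lightweight transaction (C* 2.0+): Paxos-backed compare-and-set
INSERT INTO users (id, name) VALUES (2, 'bob') IF NOT EXISTS;
```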
42. If your C* clusters footprint is significant
must have good automation
at least a C* semi-expert
Use cstar_perf to validate your initial clusters
We don’t use vnodes
On each node, size the disk to 2x the expected data - ephemeral SSDs, no EBS
43. Monitoring and alerting
Read/write latency - coordinator & node level
Compaction stats
Heap Usage
Network
Max & Min Row sizes
44. Fixed tokens, double the cluster to expand
Important to size the cluster for app needs initially
benefits of fixed tokens outweighs vnodes
Take backups of all the nodes to allow for eventual consistency on restores
Note: the commitlog by default fsyncs only every 10 seconds
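The cassandra.yaml settings behind that note (values shown are the stock defaults):

```yaml
# Periodic sync: acknowledged writes can sit unsynced for up to 10 s
# on a single node; replication covers the gap on a node crash
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
```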
45. Run repairs before GCGraceSeconds expires
Throttle compactions and repairs
Repairs can take a long time
run a primary range and a Keyspace at a time to avoid performance impact.
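A typical invocation (keyspace name hypothetical); -pr limits the repair to the node's primary range so the same range is not repaired redundantly on every replica:

```shell
nodetool repair -pr my_keyspace
```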
46. Schema disagreements - pick the nodes with the older date and restart them one at a time.
nodetool resetlocalschema is not persistent on 1.2
Recycle nodes in AWS to prevent staleness
Expanding to new region
Launch nodes in the new region without bootstrapping
Change Keyspace replication
Run nodetool rebuild on nodes in new region.
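A sketch of the region-expansion steps above (keyspace and data-center names are hypothetical):

```sql
-- After launching non-bootstrapped nodes in the new region:
ALTER KEYSPACE my_keyspace WITH replication =
    {'class': 'NetworkTopologyStrategy', 'us-east': 3, 'us-west-2': 3};
-- then, on each node in the new region:
--   nodetool rebuild us-east
```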
47. More Info
http://techblog.netflix.com/
http://netflix.github.io/
http://slideshare.net/netflix
https://www.youtube.com/user/NetflixOpenSource
https://www.youtube.com/user/NetflixIR $$$