Building Scalable and Extendable Data Pipeline for Call of Duty Games (Yaroslav Tkachenko, Activision) Kafka Summit NYC 2019

What’s easier than building a data pipeline nowadays? You add a few Apache Kafka clusters and a way to ingest data (probably over HTTP), design a way to route your data streams, add a few stream processors and consumers, integrate with a data warehouse… wait, this looks like a lot of things, doesn’t it? And you probably want to make it highly scalable and available too. Join this session to learn best practices for building a data pipeline, drawn from my experience at Activision/Demonware. I’ll share the lessons learned about scaling pipelines, not only in terms of volume, but also in terms of supporting more games and more use cases. You’ll also hear about message schemas and envelopes, Apache Kafka organization, topic naming conventions, routing, reliable and scalable producers and the ingestion layer, as well as stream processing.

  1. Building Scalable and Extendable Data Pipeline for Call of Duty Games: Lessons Learned. Yaroslav Tkachenko, Software Architect at Central Data Platform, Activision
  2. Game Telemetry ETL
  3. Game Telemetry ETL: Data Pipeline, Data Services, Other
  4. 1+ PB data lake size (AWS S3)
  5. 500+ topics in the biggest cluster (Apache Kafka)
  6. 10k-100k+ messages per second (Apache Kafka)
  7. Scaling the data pipeline even further, in order of increasing complexity: Volume (industry best practices), Games (using previous experience), Use-cases (completely unpredictable)
  8. Scaling the pipeline in terms of Volume
  9. Producers and Consumers
  10. Scaling producers: • Asynchronous / non-blocking writes (default) • Compression and batching • Sampling • Throttling • Acks? 0, 1, -1 (see the producer configuration sketch below)
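  As a rough illustration of the settings above, here is a minimal producer sketch using the plain Java kafka-clients API; the broker address, topic and values are illustrative rather than Activision's actual configuration, and sampling/throttling live in application code, so they are omitted.

      // ProducerConfigExample.java - illustrative values only
      import java.util.Properties;
      import org.apache.kafka.clients.producer.KafkaProducer;
      import org.apache.kafka.clients.producer.ProducerConfig;
      import org.apache.kafka.clients.producer.ProducerRecord;

      public class ProducerConfigExample {

          public static void main(String[] args) {
              Properties props = new Properties();
              props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
              props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                        "org.apache.kafka.common.serialization.StringSerializer");
              props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                        "org.apache.kafka.common.serialization.ByteArraySerializer");
              // Compression and batching: compress whole batches and give them time to fill up.
              props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
              props.put(ProducerConfig.BATCH_SIZE_CONFIG, 65536);
              props.put(ProducerConfig.LINGER_MS_CONFIG, 50);
              // Acks trade-off: "0" fire-and-forget, "1" leader only, "all" (-1) full ISR.
              props.put(ProducerConfig.ACKS_CONFIG, "1");

              try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
                  // send() is asynchronous / non-blocking by default; the callback reports the result.
                  producer.send(
                      new ProducerRecord<>("prod.glutton.1234.telemetry_match_event-v1",
                                           "{\"event\":\"match_end\"}".getBytes()),
                      (metadata, exception) -> {
                          if (exception != null) {
                              exception.printStackTrace();
                          }
                      });
              }
          }
      }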
  11. Proxy
  12. Each approach has pros and cons. Direct writes: • Simple • Low-latency connection • Number of TCP connections per broker starts to look scary • Really hard to do maintenance on Kafka clusters. Proxy: • Flexible • Easier to manage Kafka clusters • Possible to do basic enrichment (see the toy proxy sketch below)
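  A toy version of the proxy approach, assuming the JDK's built-in com.sun.net.httpserver and a byte-array producer; a real ingestion proxy is a full production service, this only illustrates terminating many client connections in one place and fanning the data into Kafka.

      // IngestProxy.java - toy sketch of an HTTP ingestion proxy in front of Kafka
      import com.sun.net.httpserver.HttpServer;
      import java.net.InetSocketAddress;
      import java.util.Properties;
      import org.apache.kafka.clients.producer.KafkaProducer;
      import org.apache.kafka.clients.producer.ProducerRecord;

      public class IngestProxy {

          public static void main(String[] args) throws Exception {
              Properties props = new Properties();
              props.put("bootstrap.servers", "kafka:9092");
              props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
              props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
              KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props);

              // Game clients POST telemetry here over HTTP instead of talking to the brokers directly.
              HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
              server.createContext("/ingest", exchange -> {
                  byte[] body = exchange.getRequestBody().readAllBytes();
                  // Basic enrichment or validation could happen here before producing.
                  producer.send(new ProducerRecord<>("prod.glutton.1234.telemetry_match_event-v1", body));
                  exchange.sendResponseHeaders(204, -1);  // 204 No Content, empty response body
                  exchange.close();
              });
              server.start();
          }
      }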
  13. Simple rule for high-performance producers? Just write to Kafka, nothing else [1]. ([1] Not even auth?)
  14. Scaling consumers is usually pretty trivial: just increase the number of partitions. Unless… you can’t. What then?
  15. (Architecture diagram: Message Queue → Archiver → Block Storage, with microbatch metadata going to a Work Queue that drives the Populator.)
  16. Even if you can add more partitions • Still can have bottlenecks within a partition (large messages) • In case of reprocessing, it’s really hard to quickly add A LOT of new partitions AND remove them afterwards • Also, the number of partitions is not infinite • … but there are other ways to solve the challenges above ;-)
  17. It’s not always about tuning. Sometimes we need more than one cluster. Different workloads require different topologies.
  18. ● Ingestion (HTTP Proxy), long retention, high SLA ● Lots of consumers, medium retention, ACL ● Stream processing, short retention, more partitions
  19. You can’t be sure about any improvements without load testing. Not only for a cluster, but producers and consumers too.
  20. Scaling and extending the pipeline in terms of Games and Use-cases
  21. We need to keep the number of topics and partitions low • More topics means more operational burden • Number of partitions in a fixed cluster is not infinite • Scaling Kafka is hard, autoscaling is very hard
  22. Topic naming convention: $env.$source.$title.$category-$version, e.g. prod.glutton.1234.telemetry_match_event-v1, where “glutton” is the $source (producer) and “1234” is the $title (unique game id, “CoD WW2 on PSN”). A small helper for this convention is sketched below.
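  The convention is easy to capture in a tiny helper; the TopicName record and the regex below are hypothetical, only meant to show how the pieces of $env.$source.$title.$category-$version line up.

      // TopicName.java - hypothetical helper for the naming convention
      import java.util.regex.Matcher;
      import java.util.regex.Pattern;

      public record TopicName(String env, String source, long titleId, String category, int version) {

          private static final Pattern FORMAT = Pattern.compile(
              "(?<env>[a-z]+)\\.(?<source>[a-z_]+)\\.(?<title>\\d+)\\.(?<category>[a-z_]+)-v(?<version>\\d+)");

          // Renders e.g. new TopicName("prod", "glutton", 1234, "telemetry_match_event", 1)
          // as "prod.glutton.1234.telemetry_match_event-v1".
          public String render() {
              return String.format("%s.%s.%d.%s-v%d", env, source, titleId, category, version);
          }

          public static TopicName parse(String topic) {
              Matcher m = FORMAT.matcher(topic);
              if (!m.matches()) {
                  throw new IllegalArgumentException("Topic does not follow the convention: " + topic);
              }
              return new TopicName(m.group("env"), m.group("source"),
                                   Long.parseLong(m.group("title")), m.group("category"),
                                   Integer.parseInt(m.group("version")));
          }
      }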
  23. A proper solution was invented decades ago. Think about databases.
  24. A messaging system IS a form of a database. Data topic = Database + Table. Data topic = Namespace + Data type.
  25. Compare telemetry.matches, user.logins, marketplace.purchases with prod.glutton.1234.telemetry_match_event-v1, dev.user_login_records.4321.all-v1, prod.marketplace.5678.purchase_event-v1
  26. Each approach has pros and cons • Metadata simplifies monitoring • Metadata helps consumers to consume precisely what they want • These dynamic fields can and will change • Very efficient utilization of topics and partitions • It’s very hard to enforce any constraints with a topic name
  27. After removing necessary metadata from the topic names, stream processing becomes mandatory.
  28. Stream processing becomes mandatory: Measuring → Validating → Enriching → Filtering & routing (a Kafka Streams sketch of these stages follows below)
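  A rough Kafka Streams sketch of the four stages, assuming values are raw envelope bytes; the topic names, the metric printout and the routeFor rule are placeholders for illustration, not Activision's actual topology.

      // RoutingTopology.java - sketch of the "operational" stream processing stages
      import java.util.Properties;
      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.streams.KafkaStreams;
      import org.apache.kafka.streams.StreamsBuilder;
      import org.apache.kafka.streams.StreamsConfig;
      import org.apache.kafka.streams.kstream.KStream;

      public class RoutingTopology {

          public static void main(String[] args) {
              StreamsBuilder builder = new StreamsBuilder();

              KStream<String, byte[]> raw = builder.stream("ingest.raw");  // hypothetical input topic

              raw
                  // Measuring: meter everything that flows through (a real metrics sink would go here).
                  .peek((key, value) -> System.out.println("bytes=" + (value == null ? 0 : value.length)))
                  // Validating: drop anything that does not even look like a message.
                  .filter((key, value) -> value != null && value.length > 0)
                  // Enriching: placeholder for e.g. stamping ingestion time into the envelope.
                  .mapValues(value -> value)
                  // Filtering & routing: choose the destination topic per record, e.g. by the
                  // data type carried inside the message envelope ("telemetry.matches", ...).
                  .to((key, value, ctx) -> routeFor(value));

              Properties props = new Properties();
              props.put(StreamsConfig.APPLICATION_ID_CONFIG, "operational-etl");
              props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
              props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
              props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.ByteArray().getClass());
              new KafkaStreams(builder.build(), props).start();
          }

          // Hypothetical routing rule; in reality the target would come from the envelope and schema registry.
          private static String routeFor(byte[] value) {
              return "telemetry.matches";
          }
      }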
  29. Having a single message schema for a topic is more than just a nice-to-have.
  30. Number of supported message formats: 8
  31. (Diagram: a single stream processor has to handle JSON, Protobuf, Avro, custom and several other unknown formats.)
  32. Custom deserialization:
      // Application.java
      props.put("value.deserializer", "com.example.CustomDeserializer");

      // CustomDeserializer.java
      public class CustomDeserializer implements Deserializer<???> {
          @Override
          public ??? deserialize(String topic, byte[] data) {
              ???
          }
      }
      The ??? placeholders are the slide’s point: which type do you deserialize into when there are 8 formats? One possible answer is sketched below.
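  Once the unified envelope (slide 34) is in place, the question marks can resolve to a single type: every Kafka value is an envelope, and the inner payload stays opaque until a consumer looks up its schema. A minimal sketch, assuming MessageEnvelope is the class generated from the proto2 definition shown later in the deck:

      // EnvelopeDeserializer.java - sketch only; assumes the protoc-generated MessageEnvelope class
      import com.google.protobuf.InvalidProtocolBufferException;
      import org.apache.kafka.common.errors.SerializationException;
      import org.apache.kafka.common.serialization.Deserializer;

      public class EnvelopeDeserializer implements Deserializer<MessageEnvelope> {

          @Override
          public MessageEnvelope deserialize(String topic, byte[] data) {
              if (data == null) {
                  return null;
              }
              try {
                  // Only the envelope is decoded here; the inner `message` bytes remain
                  // opaque until the consumer resolves their schema (see the Schema Registry slide).
                  return MessageEnvelope.parseFrom(data);
              } catch (InvalidProtocolBufferException e) {
                  throw new SerializationException("Not a valid MessageEnvelope on topic " + topic, e);
              }
          }
      }

  A consumer would then point value.deserializer at this class, mirroring the props.put call on the slide (the com.example package is an assumption).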
  33. Message envelope anatomy: a Message consists of a Header / Metadata part (ID, env, timestamp, source, game, ...) and a Body / Payload part that carries the Event itself.
  34. Unified message envelope:
      syntax = "proto2";
      message MessageEnvelope {
          optional bytes      message_id     = 1;
          optional uint64     created_at     = 2;
          optional uint64     ingested_at    = 3;
          optional string     source         = 4;
          optional uint64     title_id       = 5;
          optional string     env            = 6;
          optional UserInfo   resource_owner = 7;
          optional SchemaInfo schema_info    = 8;
          optional string     message_name   = 9;
          optional bytes      message        = 100;
      }
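  On the producing side, wrapping a payload into this envelope could look roughly like the sketch below. The builder method names assume the standard protobuf-java codegen for the definition above; UserInfo, SchemaInfo and the schema lookup are omitted.

      // EnvelopeFactory.java - sketch only; assumes MessageEnvelope is generated by protoc from the proto2 definition above
      import com.google.protobuf.ByteString;
      import java.util.UUID;

      public class EnvelopeFactory {

          public static byte[] wrap(byte[] payload, String source, long titleId, String env, String messageName) {
              MessageEnvelope envelope = MessageEnvelope.newBuilder()
                  .setMessageId(ByteString.copyFromUtf8(UUID.randomUUID().toString()))
                  .setCreatedAt(System.currentTimeMillis())
                  .setSource(source)            // e.g. "glutton"
                  .setTitleId(titleId)          // e.g. 1234, the unique game id
                  .setEnv(env)                  // e.g. "prod"
                  .setMessageName(messageName)  // e.g. "telemetry_match_event"
                  .setMessage(ByteString.copyFrom(payload))
                  .build();
              return envelope.toByteArray();
          }
      }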
  35. Schema Registry in Activision • Custom Python application with Cassandra as a data store • Single source of truth for all producers and consumers • Supports immutability, versioning and basic validation • Schema Registry clients use a lot of caching at different layers (a generic caching-client sketch follows below)
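  The deck does not show the registry's API, but the client-side caching idea can be sketched generically; the base URL, endpoint layout and lookup key below are made up for illustration and are not Activision's actual interface.

      // CachedSchemaClient.java - generic sketch of a caching schema-registry client
      import java.net.URI;
      import java.net.http.HttpClient;
      import java.net.http.HttpRequest;
      import java.net.http.HttpResponse;
      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      public class CachedSchemaClient {

          private final HttpClient http = HttpClient.newHttpClient();
          private final String baseUrl;
          // Schemas are immutable and versioned, so once fetched they can be cached indefinitely.
          private final Map<String, String> cache = new ConcurrentHashMap<>();

          public CachedSchemaClient(String baseUrl) {
              this.baseUrl = baseUrl;
          }

          public String schemaFor(String messageName, int version) {
              return cache.computeIfAbsent(messageName + ":" + version, key -> fetch(messageName, version));
          }

          private String fetch(String messageName, int version) {
              try {
                  HttpRequest request = HttpRequest.newBuilder()
                      .uri(URI.create(baseUrl + "/schemas/" + messageName + "/versions/" + version))
                      .GET()
                      .build();
                  return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
              } catch (Exception e) {
                  throw new RuntimeException("Failed to fetch schema " + messageName + " v" + version, e);
              }
          }
      }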
  36. Summary • Kafka tuning and best practices matter • Invest in good SDKs for producing and consuming data • Unified message envelope and topic names make adding a new game almost effortless • “Operational” stream processing makes it possible; make sure you can support ad hoc filtering and routing of data • Topic names should express data types, not producer or consumer metadata • Schema Registry is a must-have
  37. Next Steps: Realtime Streaming ETL • Kafka all the way! • Kafka Streams for all ETL steps* • Kafka Connect for all downstream integrations • Spark for file consolidation (* except aggregations)
  38. Thanks! @sap1ens
