Gnocchi v4
past and present...
gord[at]live.ca
@gord_chung
v4 features
□ simplified scheduling
□ less pandas, more numpy
□ Redis incoming driver
□ In-memory incoming Ceph driver
□ Other general features:
■ http://gnocchi.xyz/releasenotes/4.0.html
■ http://gnocchi.xyz/releasenotes/unreleased.html
scheduling
incoming data sharded
into sacks to allow simple
division of work across
metricd workers
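A minimal sketch of the sack idea, with an assumed sack count and illustrative worker math (not Gnocchi's exact implementation):

```python
import uuid

NUM_SACKS = 128  # assumed here; Gnocchi makes the sack count configurable

def sack_for_metric(metric_id: uuid.UUID) -> int:
    # Deterministically map a metric to a sack so any worker can
    # compute the mapping without coordinating with the others.
    return metric_id.int % NUM_SACKS

def sacks_for_worker(worker_index: int, total_workers: int):
    # Each metricd worker claims a disjoint slice of the sacks,
    # dividing the processing backlog evenly.
    return [s for s in range(NUM_SACKS) if s % total_workers == worker_index]

print(sacks_for_worker(2, 18)[:4])  # [2, 20, 38, 56]
```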
numpy
old
pandas - a monolithic, all-in-one data analysis toolkit
new
NumPy - a lightweight, high-performance, N-dimensional array library (and a bit more)
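For instance, a per-window mean can be computed with plain vectorised NumPy rather than a pandas resample; a hedged sketch (Gnocchi's actual aggregation code differs):

```python
import numpy as np

# 60 raw measures at 1-minute spacing: epoch-second timestamps + values
timestamps = np.arange(0, 3600, 60)
values = np.random.random(60)

GRANULARITY = 300  # roll up into 5-minute windows

# Floor each timestamp to its window, then reduce each contiguous
# bucket with ufuncs -- no per-row Python loop, no pandas objects.
buckets = (timestamps // GRANULARITY) * GRANULARITY
starts = np.unique(buckets, return_index=True)[1]
counts = np.diff(np.append(starts, len(values)))
means = np.add.reduceat(values, starts) / counts
print(list(zip(buckets[starts], means)))
```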
in-memory
the memory is mightier.
leverage Redis driver or
LevelDB/RocksDB
internals for Ceph
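A rough sketch of the in-memory idea using redis-py; the key layout and serialization here are hypothetical, not Gnocchi's actual driver:

```python
import json
import time
import redis  # pip install redis

r = redis.Redis()
NUM_SACKS = 128  # same illustrative sack count as the scheduling sketch

def push_measures(metric_id: str, measures):
    # Buffer unprocessed measures in memory, keyed by sack, instead of
    # writing many tiny objects to the aggregate store.
    sack = int(metric_id.replace("-", ""), 16) % NUM_SACKS
    key = "incoming:%d:%s" % (sack, metric_id)
    r.rpush(key, *(json.dumps(m) for m in measures))

push_measures("00000000-0000-0000-0000-000000000001",
              [{"timestamp": time.time(), "value": 42.0}])
```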
benchmarks
back with another one of those block rockin’ beats
environment
v2 & v3
node1
- OpenStack controller node
- Ceph Monitor Service
- Redis (coordination)
node2
- OpenStack Compute Node
- Ceph OSD node (10 OSDs + SSD
Journal)
- 18 metricd (24 in v2)
node3
- Gnocchi API (32 workers)
- Ceph OSD node (10 OSDs + SSD
Journal)
- 18 metricd (24 in v2)
node4
- OpenStack Compute Node
- Ceph OSD node (10 OSDs + SSD
Journal)
- PostgreSQL
- 18 metricd (24 in v2)
v4.x
node1
- OpenStack controller node
- Ceph Monitor Service
- Redis
- MySQL
node2
- OpenStack Compute Node
- Ceph OSD node (10 OSDs + SSD
Journal)
node3
- OpenStack Compute Node
- Ceph OSD node (10 OSDs + SSD
Journal)
- Gnocchi API (32 workers)
- 18 metricd
all nodes are physical servers:
- 24 CPUs (48 hyperthreaded)
- 256GB memory
- 10K RPM disks
- 1Gb network
- CentOS 7.1
fewer services and less hardware when running v4: all Gnocchi services run on a single node.
all tests use Ceph as the storage driver for aggregates.
data generated using the benchmark tool in the Gnocchi client (modified to use threads). 4 clients w/ 12 threads running simultaneously.
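Each benchmark client behaves roughly like the sketch below; the endpoint, token, and metric UUIDs are placeholders, and the real runs used the modified gnocchiclient benchmark tool rather than raw requests:

```python
import concurrent.futures
import datetime
import requests

GNOCCHI = "http://localhost:8041"    # placeholder endpoint
HEADERS = {"X-Auth-Token": "admin"}  # placeholder token

def flood(metric_id):
    # POST 60 individual (unbatched) measures for one metric,
    # as in test case 1 below.
    start = datetime.datetime(2017, 8, 1)
    for i in range(60):
        ts = (start + datetime.timedelta(minutes=i)).isoformat()
        requests.post("%s/v1/metric/%s/measures" % (GNOCCHI, metric_id),
                      json=[{"timestamp": ts, "value": float(i)}],
                      headers=HEADERS)

metric_ids = ["00000000-0000-0000-0000-%012d" % i for i in range(20)]  # placeholders

# one "client" = 12 threads; the benchmarks ran 4 such clients at once
with concurrent.futures.ThreadPoolExecutor(max_workers=12) as pool:
    list(pool.map(flood, metric_ids))
```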
write throughput
total datapoints written per second (higher is better)
write throughput
number of requests made per second (higher is better)
test case 1
1K resources, 20 metrics
each. flood Gnocchi with
60 individual points per
metric. 1.2M calls/run.
run it a few times.
post time
time to POST 1.2M individual measures for 20K metrics to Gnocchi.
v3.1 had an anomaly that caused degradation over time.
processing time
v4 tests use 18 metricd; v3 tests use 54 metricd.
time to aggregate all measures according to policy (lower is better)
v4-only comparison
processing time
processing time
number of recorded, unprocessed measures over a single run.
poor scheduling logic resulted in inefficient handling of many tiny objects in v3.
processing time
number of recorded, unprocessed measures over a single run.
backlog size depends on both the API's ability to write data and metricd's ability to process it.
test case 2
1K resources, 20 metrics
each. flood Gnocchi with
60 batched points per
metric. 20K calls/run. run
it a few times.
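Batched writes go through Gnocchi's batch endpoint, with one request carrying all 60 points for a metric; endpoint, token, and UUID below are placeholders:

```python
import requests

GNOCCHI = "http://localhost:8041"    # placeholder endpoint
HEADERS = {"X-Auth-Token": "admin"}  # placeholder token
METRIC = "5e3fcbe2-7aab-475d-b42c-a440aa42e5ad"  # placeholder UUID

# one call posts all 60 points, so 1.2M points need only 20K requests
payload = {
    METRIC: [{"timestamp": "2017-08-01T00:%02d:00" % m, "value": float(m)}
             for m in range(60)],
}
requests.post(GNOCCHI + "/v1/batch/metrics/measures",
              json=payload, headers=HEADERS)
```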
processing time
v4 tests use 18 metricd for 3x8 aggregates/metric; v2 and v3 tests use 72 and 54 metricd, respectively.
time to aggregate all measures according to policy (lower is better)
aggregation time
time to aggregate 60 measures of a metric into 3x8 aggregates (lower is better)
average time reflects a combination of scheduling efficiency, computation efficiency, and IO performance.
test case 3
500 resources, 20 metrics
each. flood Gnocchi with
720 batched points per
metric. 10K calls/run. run
it a few times.
processing time
v4 tests use 18 metricd for 3x8 aggregates/metric; v2 and v3 tests use 72 metricd.
time to aggregate all measures according to policy (lower is better)
aggregation time
time to aggregate 720 measures of a metric into 3x8 aggregates (lower is better)
computation efficiency improved for larger series: ~3x improvement for 60 points and ~6x improvement for 720 points.
some more numbers
peep this...
processing time
time to aggregate a metric with varying unbatched measure sizes (lower is better)
numbers represent optimal performance; the benchmark was taken under zero load.
query time
time to retrieve a single time series using curl and the client (lower is better)
client overhead is attributed to, but not limited to, formatting.
no significant performance difference vs v3.
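The raw-API side of this test is a single GET against the measures endpoint (placeholders again); the client adds parsing and formatting on top of this call:

```python
import requests

GNOCCHI = "http://localhost:8041"  # placeholder endpoint
METRIC = "5e3fcbe2-7aab-475d-b42c-a440aa42e5ad"  # placeholder UUID

resp = requests.get("%s/v1/metric/%s/measures" % (GNOCCHI, METRIC),
                    params={"aggregation": "mean"},
                    headers={"X-Auth-Token": "admin"})  # placeholder token
print(resp.json()[:3])  # each entry is [timestamp, granularity, value]
```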
default configurations
time to aggregate all measures according to the default ‘medium’ policy (lower is better)
v3 tests use 54 metricd. v4 tests use 18 metricd.
- v3 medium policy:
  - minute/hourly/daily rollups
  - 8 aggregates each
- v4 medium policy:
  - minute/hourly rollups
  - 6 aggregates each
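For reference, an archive policy along the lines of the v4 ‘medium’ policy above could be created like this; the name, endpoint, and timespans are illustrative, not the actual defaults:

```python
import requests

GNOCCHI = "http://localhost:8041"  # placeholder endpoint

# minute and hourly rollups with six aggregation methods each,
# approximating the v4 medium policy described above
policy = {
    "name": "medium-example",
    "aggregation_methods": ["mean", "min", "max", "sum", "std", "count"],
    "definition": [
        {"granularity": "1 minute", "timespan": "7 days"},   # illustrative
        {"granularity": "1 hour", "timespan": "365 days"},
    ],
}
requests.post(GNOCCHI + "/v1/archive_policy", json=policy,
              headers={"X-Auth-Token": "admin"})  # placeholder token
```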
thanks!
Any questions?
You can find me at
@gord_chung
gord[at]live.ca
Credits
Special thanks to all the people who
made and released these awesome
resources for free:
□ Presentation template by
SlidesCarnival
