Artimon
Mathias Herberts - @herberts

Apache Flume (incubating) User Meetup, Hadoop World 2011 NYC Edition
Arkéa Real Time Information Monitoring
Scalable metrics collection and analysis framework




▪ Collects metrics called 'variable instances'
▪ Dynamic discovery, (almost) no configuration needed
▪ Rich analysis library
▪ Fits IT and business needs
▪ Adapts to third party metrics
▪ Uses Flume and Kafka for transport
What's in a variable instance?

          name{label0=value0,label1=value1,...}


▪ name is the name of the variable
   linux.proc.diskstats.reads.ms
   hadoop.jobtracker.maps_completed

▪ Labels are text strings; they characterize a variable instance
   Some labels are set automatically: dc, rack, module, context, uuid, ...
   Others are user defined

▪ Variable instances are typed
   INTEGER, DOUBLE, BOOLEAN, STRING

▪ Variable instance values are timestamped
▪ Variable instance values are Thrift objects
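
As a rough sketch of the model above (illustrative only; the actual values are Thrift objects whose schema is not reproduced here), a variable instance can be pictured as:

   // Illustrative Groovy sketch only: real Artimon values are Thrift-generated classes.
   enum VarType { INTEGER, DOUBLE, BOOLEAN, STRING }

   class VariableInstance {
       String name                 // e.g. 'linux.proc.diskstats.reads.ms'
       Map<String,String> labels   // e.g. [dc: 'dc1', rack: 'r42', context: 'hadoop']
       VarType type                // INTEGER, DOUBLE, BOOLEAN or STRING
       long timestamp              // every value is timestamped (ms since the epoch)
       Object value                // the typed value itself
   }

   def v = new VariableInstance(
       name: 'hadoop.jobtracker.maps_completed',
       labels: [context: 'hadoop', module: 'jobtracker'],
       type: VarType.INTEGER,
       timestamp: System.currentTimeMillis(),
       value: 42L)
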
Exporting metrics


▪ Metrics are exported via a Thrift service
▪ Each MonitoringContext (context=...) exposes a service
▪ MCs register their dynamic port in ZooKeeper
   /zk/artimon/contexts/xxx/ip:port:uuid

▪ MonitoringContext wrapped in a BookKeeper class (usage sketch after the interface below)
   public interface ArtimonBookKeeper {
     public void setIntegerVar(String name, final Map<String,String> labels, long value);
     public long addToIntegerVar(String name, final Map<String,String> labels, long delta);
     public Long getIntegerVar(String name, final Map<String,String> labels);
     public void removeIntegerVar(String name, final Map<String,String> labels);

     public void setDoubleVar(String name, final Map<String,String> labels, double value);
     public double addToDoubleVar(String name, final Map<String,String> labels, double delta);
     public Double getDoubleVar(String name, final Map<String,String> labels);
     public void removeDoubleVar(String name, final Map<String,String> labels);

     public void setStringVar(String name, final Map<String,String> labels, String value);
     public String getStringVar(String name, final Map<String,String> labels);
     public void removeStringVar(String name, final Map<String,String> labels);

     public void setBooleanVar(String name, final Map<String,String> labels, boolean value);
     public Boolean getBooleanVar(String name, final Map<String,String> labels);
     public void removeBooleanVar(String name, final Map<String,String> labels);
   }
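
A minimal usage sketch against the interface above; how an application actually obtains its BookKeeper instance is not shown in this deck, so the factory call is a hypothetical placeholder:

   // Hypothetical bootstrap: the real way to obtain a BookKeeper is not documented here.
   ArtimonBookKeeper bk = ArtimonBookKeeperFactory.getBookKeeper('webapp')

   def labels = [service: 'checkout', instance: 'node1']            // user-defined labels

   bk.setIntegerVar('webapp.sessions.active', labels, 123L)         // gauge-style value
   bk.addToIntegerVar('webapp.requests.count', labels, 1L)          // counter increment
   bk.setDoubleVar('webapp.request.latency.ms', labels, 12.7d)      // floating point value

   Long requests = bk.getIntegerVar('webapp.requests.count', labels) // latest known value, may be null
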
Exporting metrics


▪ Thrift service returns the latest values of known instances
▪ ZooKeeper not mandatory, can use a fixed port
▪ Artimon written in Java
▪ Checklist for porting to other languages
   ▪ Thrift support

   ▪ Optional ZooKeeper support
Collecting Metrics


▪ Flume launched on every machine
▪ 'artimon' source
   artimon(hosts, contexts, vars[, polling_interval])
   e.g. artimon("self", "*", "~.*")

   ▪ Watches ZooKeeper for contexts to poll

   ▪ Periodically collects latest values

▪ 'artimonProxy' decorator
   artimonProxy([[port],[ttl]])

   ▪ Exposes all collected metrics via a local port (No ZooKeeper, no loop)
Collecting Metrics


▪ Simulated flow using flume.flow event attribute
   artimon(...) | artimonProxy(...) value("flume.flow", "artimon")...

▪ Events batched and gzipped
   ... value("flume.flow", "artimon") batch(100,100) gzip() ...

▪ Kafka sink
   kafkasink(topic, propname=value...)

    ... gzip() < failChain("{ lazyOpen => { stubbornAppend => %s } }",
                  "kafkasink("flume-artimon","zk.connect=quorum:2181/zk/kafka/prod")")
                ? diskFailover("-kafka-flume-artimon")
                  insistentAppend stubbornAppend insistentOpen
                  failChain("{ lazyOpen => { stubbornAppend => %s } }",
                  "kafkasink("flume-artimon","zk.connect=quorum:2181/zk/kafka/prod")") >;

    ~ kafkaDFOChain (a DFO-style chain around the Kafka sink)
Consuming Metrics


▪ Kafka source
   kafkasource(topic, propname=value...)

▪ Custom BytesWritableEscapedSeqFileEventSink
   bwseqfile(filename[, idle[, maxage]])
   bwseqfile("hdfs://nn/hdfs/data/artimon/%Y/%m/%d/flume-artimon");

   ▪ N archivers in a single Kafka consumer group (same groupid)
   ▪ Metrics stored in HDFS as serialized Thrift in BytesWritables
   ▪ Can add archivers if metrics flow increases
   ▪ Ability to manipulate those metrics using Pig
Consuming Metrics


▪ In-Memory history data (VarHistoryMemStore, VHMS)
  artimonVHMSDecorator(nthreads[0],
                       bucketspan[60000],
                       bucketcount[60],
                       gc_grace_period[600000],
                       port[27847],
                       gc_period[60000],
                       get_limit[100000]) null;

  ▪ Each VHMS in its own Kafka consumer group (each gets all metrics)
  ▪ Multiple VHMS with different granularities
      60x1', 48x5', 96x15', 72x24h
  ▪ Filter to ignore some metrics for some VHMS
     artimonFilter("!~linux.proc.pid.*")
Why Kafka?


▪ Initially used tsink/rpcSource
   ▪ No ZooKeeper use for Flume (avoid flapping)
   ▪ Collector load balancing using DNS
   ▪ Worked fine for some time...

▪ But as metrics volume was increasing...
   ▪ DNS load balancing not ideal (herd effect when restarting collectors)
   ▪ Flume's push architecture got in the way
      Slowdowns not considered failures
      Had to add mechanisms for dropping metrics when congested
Why Kafka?


▪ Kafka to the rescue! Source/sink coded in less than a day
   ▪ Acts as a buffer between metrics producers and consumers
   ▪ ZooKeeper based discovery and load balancing
   ▪ Easily scalable, just add brokers

▪ Performance has increased
   ▪ Producers now push their metrics in less than 2s
   ▪ VHMS/Archivers consume at their pace with no producer slowdown
       => 1.3M metrics in ~10s


▪ Ability to go back in time when restarting a VHMS
▪ Flume still valuable, notably for DFO (collect metrics during NP)
▪ Artimon [pull] Flume [push] Kafka [pull] Flume
Analyzing Metrics


▪ Groovy library
   ▪ Talks to a VHMS to retrieve time series
   ▪ Manipulates time series, individually or in bulk

▪ Groovy scripts for monitoring
   ▪ Use the Artimon library

   ▪ IT Monitoring
   ▪ BAM (Business Activity Monitoring)

▪ Ability to generate alerts

   ▪ Each alert is an Artimon metric (archived for SLA compliance)
    ▪ Propagate to Nagios; Kafka in the works (CEP for alert manager)
Analyzing Metrics


▪ Bulk time series manipulation
   ▪ Equivalence classes based on labels (same values, same class)
   ▪ Apply ops (+ - / * closure) to 2 variables based on equivalence classes

          import static com.arkea.artimon.groovy.LibArtimon.*

          vhmssrc=export['vhms.60']

          dfvars = fetch(vhmssrc,'~^linux.df.bytes.(free|capacity)$',[:],60000,-30000)

          dfvars = select(sel_isfinite(), dfvars)

          free = select(dfvars, '=linux.df.bytes.free', [:])
          capacity = select(sel_gt(0), select(dfvars, '=linux.df.bytes.capacity', [:]))

          usage = sort(apply(op_div(), free, capacity, [], 'freespace'))

          used50 = select(sel_lt(0.50), usage)
          used75 = select(sel_lt(0.25), usage)
          used90 = select(sel_lt(0.10), usage)
          used95 = select(sel_lt(0.05), usage)

          println 'Volumes occupied > 50%: ' + used50.size()
          println 'Volumes occupied > 75%: ' + used75.size()
          println 'Volumes occupied > 90%: ' + used90.size()
          println 'Volumes occupied > 95%: ' + used95.size()

          println 'Total volumes: ' + usage.size()


                        Same script can handle any number of volumes, dynamically
Analyzing Metrics


▪ Map paradigm
  ▪ Apply a Groovy closure on n consecutive values of a time series
     map(closure, vars, nticks, name)
     Predefined map_delta(), map_rate(), map_{min,max,mean}()
     map(map_delta(), vars, 2, '+:delta')

▪ Reduce paradigm
  ▪ Apply a Groovy closure on equivalence classes
  ▪ Generate one time series for each equivalence class
     reduceby(closure, vars, bylabels, name, relabels)
     Predefined red_sum(), red_{min,max,mean,sd}()
     reduceby(red_mean(), temps, ['dc','rack'], '+:rackavg',[:])
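
A short sketch chaining the two paradigms with the signatures listed above; the metric name and label names are made up for the example, and the fetch arguments simply reuse the values from the earlier disk-usage script:

   import static com.arkea.artimon.groovy.LibArtimon.*

   vhmssrc = export['vhms.60']

   // Fetch per-host network byte counters (made-up metric name).
   bytes = fetch(vhmssrc, '~^linux.net.bytes.rx$', [:], 60000, -30000)

   // Map: convert the monotonic counters into per-tick rates (2 consecutive values per window).
   rates = map(map_rate(), bytes, 2, '+:rate')

   // Reduce: one series per (dc, rack) equivalence class, summing the per-host rates.
   perRack = reduceby(red_sum(), rates, ['dc', 'rack'], '+:rackrate', [:])
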
Analyzing Metrics


▪ A whole lot more
   getvars      selectbylabels   relabel
   fetch        partition        fillprevious
   find         top              fillnext
   findlabels   bottom           fillvalue
   display      outliers         map
   makevar      dropOutliers     reduceby
   nticks       resample         settype
   timespan     normalize        triggerAlert
   lasttick     standardize      clearAlert
   values       sort             CDF
   targets      scalar           PDF
   getlabels    ntrim            Percentile
   dump         timetrim         sparkline
   select       apply            ...
Third Party Metrics


▪ JMX Agent
        ▪ Expose any JMX metrics as Artimon metrics (polling sketch after the sample readings below)
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-artimon-0}    525762846
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-artimon-0}    511880426
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-artimon-0}    492037666
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-artimon-0}    436896839
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-artimon-0}    333034505
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-syslog-0}    163186980
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-syslog-0}    163047011
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-syslog-0}    162916713
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-syslog-0}    162704303
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-syslog-0}    162565421
jmx.kafka.network.socketserverstats:numfetchrequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     8835417
jmx.kafka.network.socketserverstats:numfetchrequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     8794654
jmx.kafka.network.socketserverstats:numfetchrequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     8793525
jmx.kafka.network.socketserverstats:numfetchrequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     8741181
jmx.kafka.network.socketserverstats:numfetchrequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     8019699
jmx.kafka.network.socketserverstats:numproducerequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     51999885
jmx.kafka.network.socketserverstats:numproducerequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     51991203
jmx.kafka.network.socketserverstats:numproducerequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     51986318
jmx.kafka.network.socketserverstats:numproducerequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     51980976
jmx.kafka.network.socketserverstats:numproducerequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     48008009
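
As a sketch of what the agent does (the JMX endpoint, MBean name and attribute below are assumptions, not the agent's actual configuration), any JMX attribute can be read and rendered as an Artimon-style metric line:

   import javax.management.ObjectName
   import javax.management.remote.JMXConnectorFactory
   import javax.management.remote.JMXServiceURL

   // Assumed local JMX endpoint and MBean coordinates; the real agent discovers its targets itself.
   def url = new JMXServiceURL('service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi')
   def connector = JMXConnectorFactory.connect(url)
   try {
       def mbsc = connector.MBeanServerConnection
       def mbean = new ObjectName('kafka:type=kafka.logs.flume-artimon-0')
       def offset = mbsc.getAttribute(mbean, 'CurrentOffset')
       // Derive the metric name and labels from the MBean domain and type.
       println "jmx.kafka.log.logstats:currentoffset{jmx.domain=kafka,jmx.type=kafka.logs.flume-artimon-0} ${offset}"
   } finally {
       connector.close()
   }
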
Third Party Metrics


▪ Flume artimonReader source
   artimonReader(context, periodicity, file0[, fileX])

   ▪ Periodically reads files containing text representation of metrics
       [timestamp] name{labels} value


   ▪ Exposes those metrics via the standard mechanism
    ▪ Simply create scripts which write those files and add them to crontab (see the sketch after the sample lines below)
   ▪ Successfully used for NAS, Samba, MQSeries, SNMP, MySQL, ...

       1319718601000   mysql.bytes_received{db=mysql-roller} 296493399
       1319718601000   mysql.bytes_sent{db=mysql-roller} 3655368849
       1319718601000   mysql.com_admin_commands{db=mysql-roller} 673028
       1319718601000   mysql.com_alter_db{db=mysql-roller} 0
       1319718601000   mysql.com_alter_table{db=mysql-roller} 0
       1319718601000   mysql.com_analyze{db=mysql-roller} 0
       1319718601000   mysql.com_backup_table{db=mysql-roller} 0
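
A minimal cron-friendly Groovy sketch of such a script; the metric name, label and output path are illustrative, and the target directory is assumed to exist:

   // Write one '[timestamp] name{labels} value' line into a file polled by artimonReader.
   long now = System.currentTimeMillis()
   long openFiles = new File('/proc/sys/fs/file-nr').text.tokenize()[0] as long
   def host = InetAddress.localHost.hostName

   new File('/var/tmp/artimon/custom.metrics').withWriter { w ->
       w << "${now} linux.fs.open_files{host=${host}} ${openFiles}\n"
   }
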
PostMortem Analysis


▪ Extract specific metrics from HDFS
   ▪ Simple Pig script

▪ Load extracted metrics into a local VHMS
▪ Interact with VHMS using Groovy
   ▪ Existing scripts can be run directly if parameterized correctly

▪ Interesting use cases
   ▪ Did we respect our SLAs? Would the new SLAs be respected too?
   ▪ What happened pre/post incident?
   ▪ Would a modified alert condition have triggered an alert?
Should we OpenSource this?




  http://www.arkea.com/



         @herberts
