3. shameless plug
Infoblox is working on some cool stuff...
- DNS, DHCP, IPAM, NCCM
- IPv6 Center of Excellence
- IF-MAP / DNSSEC
- Hiring (sales, services, support, engineering)
6. overview
What will we cover:
- What is Graphite?
- What data to capture
- Chart interpretation
7. but why
I worked at a place with major scale fail
- boxed product vs. service
- hundreds of servers in multiple datacenters
- manual processes, shell scripts
- no insight into the app or infrastructure
- n-tier architecture
- on-call duties
- needed therapy, got it, didn't help
8. what is graphite
- Scalable real-time graphing system
- 3 main components:
- Web front-end, graphite-web
- Processing back-end, carbon
- Database library, whisper
- Python-based*
* It's good to learn other languages
9. what is graphite
Setup / Documentation:
- Easy to setup
- Decent documentation
- API and CLI access
10. what is graphite
What does it capture?
- Numeric time-series data...
point some.data.path
value 3.2
timestamp 1337690041 (epoch)
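That path / value / timestamp triple maps directly onto Carbon's plaintext protocol: one line per data point, sent over TCP (port 2003 by default). A minimal sketch in Python — the host and metric path here are placeholder examples:

```python
import socket
import time

def format_metric(path, value, timestamp=None):
    """Build one line of Carbon's plaintext protocol:
    'some.data.path 3.2 1337690041\\n' (path, value, epoch timestamp)."""
    if timestamp is None:
        timestamp = int(time.time())
    return f"{path} {value} {timestamp}\n"

def send_metric(path, value, host="localhost", port=2003):
    # Carbon's line receiver listens on TCP 2003 by default.
    line = format_metric(path, value)
    with socket.create_connection((host, port), timeout=2) as sock:
        sock.sendall(line.encode("ascii"))
```

For anything high-volume you'd batch lines or use Carbon's pickle listener instead of opening a connection per point.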
11. what is graphite
How much data?
- configurable
- precision
- retention period
- aggregation
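All three knobs live in Carbon's config. A sketch of a storage-schemas.conf entry — the pattern and the retention periods are made-up examples: raw 10-second points for 6 hours, 1-minute points for a week, 10-minute points for 5 years.

```ini
# storage-schemas.conf -- precision and retention per metric pattern,
# matched top to bottom against the metric path
[app_metrics]
pattern = ^app\.
retentions = 10s:6h,1m:7d,10m:5y

[default]
pattern = .*
retentions = 1m:30d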
13. what is graphite
Notes / gotchas:
- Scales horizontally
- Heavy on disk I/O
- Fault tolerance
- Data loss
- Trade-off: precision vs. storage space / I/O
14. what data to capture
...so what information should we capture?
...how detailed do we get?
...and does it have historical relevance?
...are just a few key metrics enough?
16. what data to capture
Thoughts on maximum vs. minimum:
- What information do you need to capture?
- Application Data (yes!)
- System Data: cpu, disk-io, mem usage
- Network: Connections? Latency? Packet loss?
- Fine-grained vs summary and aggregate?
17. what data to capture
In your app:
- function / method / calculation time
- template / content generation
- database query execution
- Internal and 3rd-party API calls
- queue sizes, processing times
- A/B testing?
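One low-friction way to capture function and query timings is a decorator that reports elapsed milliseconds under a metric path. A sketch — the metric name and the `report` callback are placeholders; in practice the callback would write to Carbon:

```python
import functools
import time

def timed(metric_path, report):
    """Decorator: measure wall-clock time of each call and report it,
    in milliseconds, under the given Graphite metric path."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ms = (time.monotonic() - start) * 1000.0
                report(metric_path, elapsed_ms)
        return wrapper
    return decorator

# Example: collect samples in a list instead of a live Carbon socket.
samples = []

@timed("app.db.query_time_ms", lambda path, ms: samples.append((path, ms)))
def run_query():
    time.sleep(0.01)  # stand-in for real work
    return 42
```

The `finally` block means failed calls still get timed, which is exactly when you want the data.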
18. what data to capture
From the systems:
- cpu
- disk usage
- io (disk, network interface)
- memory / paging / swap
- file handles
- log entries
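On Linux, most of these numbers can be scraped straight out of /proc. A sketch for load average, with the parser split out so it can be exercised without a live /proc:

```python
def parse_loadavg(text):
    """Parse the 1/5/15-minute load averages from /proc/loadavg content,
    e.g. '0.42 0.30 0.25 1/123 4567'."""
    one, five, fifteen = (float(x) for x in text.split()[:3])
    return one, five, fifteen

def read_loadavg(path="/proc/loadavg"):
    # Linux-only: /proc/loadavg is a single whitespace-delimited line.
    with open(path) as f:
        return parse_loadavg(f.read())
```

The same read-and-parse pattern extends to /proc/meminfo, /proc/diskstats, and friends; pair it with the sender above and a cron job or loop for a bare-bones collector.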
19. what data to capture
At the network level:
- connection count
- socket state
- qos levels
- firewall stats
- cdn / cache response
- 3rd party status
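Connection counts and socket states are also sitting in /proc on Linux: column 4 ("st") of /proc/net/tcp is a hex state code from the kernel headers (01 = ESTABLISHED, 06 = TIME_WAIT, 0A = LISTEN). A sketch that tallies states — only a few codes are mapped here:

```python
from collections import Counter

# Kernel TCP state codes (hex) as they appear in /proc/net/tcp.
TCP_STATES = {"01": "established", "06": "time_wait", "0A": "listen"}

def count_socket_states(lines):
    """Tally TCP socket states from /proc/net/tcp-style rows,
    yielding one count per state, e.g. {'established': 12, 'listen': 3}."""
    counts = Counter()
    for line in lines:
        fields = line.split()
        if len(fields) > 3 and ":" in fields[1]:  # skip the header row
            counts[TCP_STATES.get(fields[3], "other")] += 1
    return dict(counts)

def read_socket_states(path="/proc/net/tcp"):
    with open(path) as f:
        return count_socket_states(f.readlines())
```

Each state count becomes its own metric path (net.tcp.established, net.tcp.time_wait, ...), which makes a TIME_WAIT pileup obvious on a chart.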
20. chart interpretation
...it's like reading tea leaves...
...domains of knowledge leave gaps...
...that's not my job...
...can't see the forest for the trees...
21. chart interpretation
So what are we looking for:
- normality *
- deviations
- jitters
- historical performance
- double rainbows
* not present per Cal's keynote
22. chart interpretation
Because at 3am when you get paged...
Wouldn't it be great to correlate the site going
down... due to swapping... because of high
memory usage... thanks to that code that got
pushed... that had that change to how you
processed row results from a large database
query.
23. chart interpretation
Or that change window that just happened...
Where the security folks made some config
changes to one of the firewalls... that is now
blocking your outbound API calls... just from
some app servers in one of the datacenters...
24. chart interpretation
What about that new kernel that fixes a
memory leak...
Can you compare side by side, and with
historical context, what that looks like?
What about a physical machine vs a virtual
one?
25. chart interpretation
Do we need to retune our load-balancers, app
servers, or database replication?
Does higher site traffic over the past few
weeks show signs of strain?
Did that cache layer we added help any?
Is historical data choking once-fast pages?
27. some final thoughts
- come full circle: stats are back in
- this is one solution, there are others (statsd)
- part of a larger tool bag
- implement before big changes
- establish a reference / baseline
- suitable for dev, qa, and production
- make implementing data capture easy