This document describes how to build a web analytics service using node.js, Amazon DynamoDB, and Amazon Elastic MapReduce (EMR). Node.js servers collect minute-level analytics data and write it to DynamoDB. EMR runs Hadoop jobs that roll up the minute-level data into hourly, daily, and monthly aggregates which are also stored in DynamoDB. The system can process billions of data points per month from major websites and provide analytics data at different granularities to applications through a RESTful API.
2. Who Am I?
•Jonathan Keebler @keebler
•Built the video player for all CTV properties
– Worked on news sites like CTV, TSN, CP24
•CTO, Founder of ScribbleLive
•Bootstrapped a high-scalability startup
– Credit card limit wasn't that high; had to find cheap ways to handle the load of top-tier news sites
3. What is ScribbleLive?
•Leading provider of real-time engagement management solutions
•We enable real-time publication and syndication of digital content
•Our platform is transforming how the world's largest brands and media companies approach communication and content creation, creating true real-time engagement
5. Today
•Learn to build your own analytics service
– Seriously, we’re going to do it
•node.js on Amazon EC2: web servers
•Amazon DynamoDB: database
•Hadoop/Hive on Amazon Elastic MapReduce (EMR): roll-up data
6. Why would we do this?
•ScribbleLive tracks "engagement minutes" (EMs) across all customer sites
– e.g., ESPN.com, CNN.com, Reuters.com
– EM = 1 minute of a user watching a webpage
– 2.5B per month, 120M+ per hour
•Big analytics providers couldn't do it
– Didn't have the features
– Too inaccurate
7. How are we going to do this?
[Architecture diagram: Visitors → Elastic Load Balancing → a pool of node.js instances → DynamoDB]
8. DynamoDB: data structure
•Separate tables by timeframe
– Minute (written by node.js directly)
– Hour (EMR from minute data)
– Day (EMR from hour data)
– Month (EMR from day data)
•Structure (see the sketch below)
– Hash key: Item (page id)
– Range key: Time (rounded to minute, hour, or day)
– Attributes: { Hits: 1 }
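As a rough sketch, the minute table could be created like this with the AWS SDK for JavaScript (v2). Only the key schema comes from the slide above; the table name, region, and throughput numbers are assumptions. The Hour, Day, and Month tables would share the same schema.

var AWS = require('aws-sdk');
var dynamodb = new AWS.DynamoDB({ region: 'us-east-1' });

// Minute-level table: hash key = Item (page id),
// range key = Time (unix timestamp rounded down to the minute).
dynamodb.createTable({
  TableName: 'MetricsMinute',               // assumed name
  KeySchema: [
    { AttributeName: 'Item', KeyType: 'HASH' },
    { AttributeName: 'Time', KeyType: 'RANGE' }
  ],
  AttributeDefinitions: [
    { AttributeName: 'Item', AttributeType: 'S' },
    { AttributeName: 'Time', AttributeType: 'N' }
  ],
  ProvisionedThroughput: { ReadCapacityUnits: 10, WriteCapacityUnits: 10 }
}, function (err) {
  if (err) console.error(err);
});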
10. Elastic Load Balancing: Load balancing
•1 load balancer
•Cookies keep a unique user on the same instance
•Auto-scaling (see the sketch below)
– CPU > 50% or network-in > 50M bytes triggers new servers, which come online and are added to Elastic Load Balancing
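One way to wire that trigger up programmatically, as a sketch with the AWS SDK for JavaScript (v2); the auto-scaling group, policy, and alarm names are made up:

var AWS = require('aws-sdk');
var autoscaling = new AWS.AutoScaling({ region: 'us-east-1' });
var cloudwatch = new AWS.CloudWatch({ region: 'us-east-1' });

// Scale out by one instance whenever the alarm fires.
autoscaling.putScalingPolicy({
  AutoScalingGroupName: 'analytics-nodes',   // hypothetical group
  PolicyName: 'scale-out-one',
  AdjustmentType: 'ChangeInCapacity',
  ScalingAdjustment: 1
}, function (err, data) {
  if (err) return console.error(err);
  // Fire the policy when average CPU exceeds 50% over 5 minutes.
  cloudwatch.putMetricAlarm({
    AlarmName: 'analytics-cpu-high',
    Namespace: 'AWS/EC2',
    MetricName: 'CPUUtilization',
    Dimensions: [{ Name: 'AutoScalingGroupName', Value: 'analytics-nodes' }],
    Statistic: 'Average',
    Period: 300,
    EvaluationPeriods: 1,
    Threshold: 50,
    ComparisonOperator: 'GreaterThanThreshold',
    AlarmActions: [data.PolicyARN]
  }, function (err) { if (err) console.error(err); });
});

A second alarm on the NetworkIn metric would cover the network-in condition the same way.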
11. node.js: Overview of code
•Accepts GET /?item={ID}&uid={UserID}
•Dictionary/Array of how many GETs per item in this minute
– Hits[Minute]["{ID}"]++
– Example: Hits["1/1/2014 1:23:00"]["abcd"]++
•Dictionary/Array of users already counted per Item:Minute (prevents double-counting)
•At the end of each minute, write the data back to DynamoDB (see the sketch below)
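A minimal sketch of that counter logic in plain node.js; the hits and seen names are illustrative, and flushing/expiring old minutes is left out:

var http = require('http');
var url = require('url');

var hits = {};   // hits[minute][itemId] = count
var seen = {};   // seen[minute]["itemId:userId"] = true (dedupe)

http.createServer(function (req, res) {
  var q = url.parse(req.url, true).query;        // GET /?item={ID}&uid={UserID}
  var minute = Math.floor(Date.now() / 60000) * 60; // unix time, floored to the minute
  hits[minute] = hits[minute] || {};
  seen[minute] = seen[minute] || {};
  var key = q.item + ':' + q.uid;
  if (!seen[minute][key]) {                      // count each user once per item per minute
    seen[minute][key] = true;
    hits[minute][q.item] = (hits[minute][q.item] || 0) + 1;
  }
  res.writeHead(204);
  res.end();
}).listen(80);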
12. node.js: Bulk writing to DynamoDB
•Writing all data back immediately in a loop = BAD!
– Throughput would spike in that ~1 second
– Would have to provision a higher throughput limit
– More $$$$
•Instead, divide the number of pending writes by 60 seconds to get the number of writes per second you should do, and spread them out evenly (see the sketch below)
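A sketch of that pacing, assuming the hits structure from the previous slide; writeFn stands in for the per-item DynamoDB call on the next slide:

// Flush one minute's counts at a steady rate instead of a burst.
function flushMinute(minute, counts, writeFn) {
  var ids = Object.keys(counts);
  var perSecond = Math.ceil(ids.length / 60); // writes/second over the minute
  var i = 0;
  var timer = setInterval(function () {
    for (var n = 0; n < perSecond && i < ids.length; n++, i++) {
      writeFn(ids[i], minute, counts[ids[i]]);
    }
    if (i >= ids.length) clearInterval(timer);
  }, 1000);
}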
13. node.js: Bulk writing to DynamoDB
•Call to DynamoDB per item (sketched below):
– update: (atomic) add X to {ID}:{Minute}
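With the AWS SDK for JavaScript (v2), that per-item update could look like the following; the table name and callback handling are assumptions. Because ADD is atomic, concurrent node.js instances writing the same item:minute don't clobber each other.

var AWS = require('aws-sdk');
var dynamodb = new AWS.DynamoDB({ region: 'us-east-1' });

// writeFn from the pacing sketch above: atomically add `count`
// to the Hits attribute of the {ID}:{Minute} row.
function writeHits(itemId, minute, count, done) {
  dynamodb.updateItem({
    TableName: 'MetricsMinute',              // assumed name
    Key: {
      Item: { S: itemId },
      Time: { N: String(minute) }
    },
    AttributeUpdates: {
      Hits: { Action: 'ADD', Value: { N: String(count) } } // atomic add
    }
  }, done);
}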
14. Hadoop: What we map and reduce
•To go from minute to hourly data
– Round every minute down to the nearest hour: floor( Minute / 3600 ) * 3600
– Sum the # of “Hits” from each data point
•Just look at the past 24 hours to save time
•Do the same for hourly to daily, daily to monthly
15. Hadoop: Hive scripts
INSERT OVERWRITE TABLE MetricsHourly
SELECT
  Item,
  floor( Time / 3600 ) * 3600 AS Time,
  SUM(Hits) AS Hits,
  from_unixtime( floor( Time / 3600 ) * 3600 ) AS TimeFriendly
FROM Metrics
WHERE Time >= floor( unix_timestamp() / 86400 ) * 86400 - ( 86400 * 1 )
GROUP BY Item, floor( Time / 3600 ) * 3600;
17. Hadoop: Setting Up EMR
• “Start an Interactive Hive Session”
• Run a cron job every 15 minutes to check whether the Hive job is complete (see the sketch below)
• If complete, download the newest Hive script and restart the job
• Amazon CloudWatch alarms if jobs take longer than 12 hours
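A hypothetical version of that cron-driven checker in node.js (AWS SDK v2): it polls the cluster's most recent step and resubmits the Hive script when the last run has finished. The cluster id and S3 paths are placeholders, and the jar/args follow the classic EMR hive-script pattern rather than anything confirmed by the slides.

var AWS = require('aws-sdk');
var emr = new AWS.EMR({ region: 'us-east-1' });
var CLUSTER = 'j-XXXXXXXXXXXXX'; // placeholder job flow id

// Run from cron every 15 minutes.
emr.listSteps({ ClusterId: CLUSTER }, function (err, data) {
  if (err) return console.error(err);
  var last = data.Steps[0]; // most recent step
  if (!last || last.Status.State === 'COMPLETED') {
    emr.addJobFlowSteps({
      JobFlowId: CLUSTER,
      Steps: [{
        Name: 'rollup-' + Date.now(),
        ActionOnFailure: 'CONTINUE',
        HadoopJarStep: {
          Jar: 's3://elasticmapreduce/libs/script-runner/script-runner.jar',
          Args: [
            's3://elasticmapreduce/libs/hive/hive-script',
            '--run-hive-script', '--args',
            '-f', 's3://my-bucket/scripts/rollup.q' // newest Hive script
          ]
        }
      }]
    }, function (err) { if (err) console.error(err); });
  }
});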
19. Application API
•RESTful API in the language of your choice
•Calls to DynamoDB (sketched below):
– query: Hash = {ID} with Range between {Time A} and {Time B}
•Since the MapReduce roll-up can take up to a day to run, reconstruct hourly data from minute data for the most recent 24 hours
– e.g., for hourly data covering the last 2 days, take 24 hourly data points from yesterday and 24*60 minute data points from today (convert them to hourly points in code)
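A sketch of that read path with the AWS SDK for JavaScript (v2); table and attribute names carry over the assumptions from the earlier slides. getHits does the hash+range query, and minutesToHours does the in-code roll-up for the most recent day.

var AWS = require('aws-sdk');
var dynamodb = new AWS.DynamoDB({ region: 'us-east-1' });

// Fetch Hits for one item between two unix timestamps.
function getHits(table, itemId, timeA, timeB, done) {
  dynamodb.query({
    TableName: table, // 'MetricsMinute', 'MetricsHourly', etc.
    KeyConditions: {
      Item: { ComparisonOperator: 'EQ',
              AttributeValueList: [{ S: itemId }] },
      Time: { ComparisonOperator: 'BETWEEN',
              AttributeValueList: [{ N: String(timeA) }, { N: String(timeB) }] }
    }
  }, done);
}

// Roll today's minute points up to hours in application code.
function minutesToHours(minutePoints) { // [{ Time: seconds, Hits: n }, ...]
  var hours = {};
  minutePoints.forEach(function (p) {
    var h = Math.floor(p.Time / 3600) * 3600;
    hours[h] = (hours[h] || 0) + p.Hits;
  });
  return hours; // { hourTimestamp: totalHits }
}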