Learn how you can use Amazon ElastiCache to easily deploy a Memcached- or Redis-compatible, in-memory caching system to speed up your application performance. We show you how to use Amazon ElastiCache to improve your application latency and reduce the load on your database servers. We'll also show you how to build a caching layer that is easy to manage and scale as your application grows. During this session, we go over various scenarios and use cases that can benefit from caching, and discuss the features provided by Amazon ElastiCache.
2. Speakers
• Omer Zaki
– Senior Product Manager, AWS
– omerz@amazon.com
• Nick Dor
– Senior Director of Engineering, GREE International, Inc.
• James Kenigsberg
– Chief Technology Officer, 2U, Inc.
3. What is a Cache?
• Specialized data store that keeps frequently
accessed data in memory
• Memory is orders of magnitude faster than disk
4. Why Use a Cache?
• “Latency is the mother of interactivity”*
• Handle hot data, handle spikes
• Reduce load on backend
• For a majority of web applications, workloads are read
heavy
– Often as high as 80-90% reads vs. writes
* http://highscalability.com/blog/2009/7/25/latency-is-everywhere-and-it-costs-you-sales-how-to-crush-it.html
5. Caches, caches, caches
• Types – browser cache, proxy cache, server cache,
database cache, file system cache
• Characteristics – persistence, scalability, data
model, warming
• Architecture – side cache, read through, write back
• Options – Memcached, Redis, etc.
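The "side cache" (cache-aside) architecture mentioned above can be sketched in a few lines. This is an illustrative sketch only, using plain dicts in place of a real cache and database; the key names are hypothetical.

```python
# Cache-aside sketch: the application checks the cache first and
# falls back to the database on a miss, populating the cache for
# later reads. The dicts below stand in for memcached and an RDBMS.

database = {"user:1": {"name": "Alice"}}
cache = {}

def get_user(key):
    value = cache.get(key)       # 1. try the cache
    if value is None:
        value = database[key]    # 2. miss: read from the database
        cache[key] = value       # 3. populate the cache for next time
    return value

get_user("user:1")   # miss: hits the database, fills the cache
get_user("user:1")   # hit: served from memory
```

In a read-through architecture the cache itself would load from the database; with cache-aside, the application owns that logic.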
6. Memcached
• Free, open-source, high-performance, in-memory
key-value store
• Developed for LiveJournal in 2003
• Used by many of the world's top websites
– YouTube, Facebook, Twitter, Pinterest, Tumblr, …
7. Memcached: Architecture
• App servers talk to memcached through a client library over persistent TCP sessions; each node can handle a large number of TCP sessions
• Simple API: value = get(key); set(key, value, expiry); add(key, value, expiry); replace(key, value, expiry)
• The client library decides which memcached server owns a key: server = server_list[key mod n]
• No communication between servers
• The app still reads from and writes to the database; cache updates happen alongside app reads
Source: http://architects.dzone.com/news/notes-memcached
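The client-side sharding rule server = server_list[key mod n] can be sketched as follows. This is a hedged illustration, not a real client library; the node names are hypothetical placeholders.

```python
import zlib

# Client-side sharding sketch: the client library, not the servers,
# decides which memcached node owns a key, by hashing the key to a
# stable integer and taking it modulo the number of servers.

server_list = ["cache-1:11211", "cache-2:11211", "cache-3:11211"]

def server_for(key):
    n = len(server_list)
    return server_list[zlib.crc32(key.encode()) % n]

# The same key always maps to the same server, so the memcached
# nodes never need to communicate with each other.
server_for("user:42")
```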
8. Redis
• High speed, in-memory, key-value data store
• Data structure support – strings, lists, sets, sorted sets
• Asynchronous replication
• Optional durability (persistence via snapshot or append-only
file)
• Pub/sub functionality
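Redis's sorted sets are what make it a natural fit for leaderboards. The sketch below only simulates the semantics of ZINCRBY and ZREVRANGE with a plain dict; a real application would issue these commands through a Redis client.

```python
# Illustrative simulation of a Redis sorted set used as a leaderboard.
# scores maps member -> score, like a sorted set's internal state.

scores = {}

def zincrby(member, amount):
    # Like ZINCRBY: add to a member's score, creating it if absent.
    scores[member] = scores.get(member, 0) + amount

def zrevrange(start, stop):
    # Like ZREVRANGE: members ordered by score, highest first.
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[start:stop + 1]

zincrby("alice", 50)
zincrby("bob", 30)
zincrby("alice", 10)
zrevrange(0, 1)   # top two players: ["alice", "bob"]
```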
10. Amazon ElastiCache
• Web service that lets you easily create and use
cache clusters in the cloud
• Memcached, Redis compatible
• Managed, scalable, secure
• Pay-as-you-go and flexible, so you can add capacity
when you need it
16. GREE International
• 2004 – GREE is founded in Japan
• 2011 – establishes office in US
– Hosting games in traditional datacenters
– 2 weeks to procure and provision new servers + 1 week to set up application
– ITIL practices (Dev / Ops separation)
• 2012 – acquires Funzio
– AWS hosted
– Quick provisioning of servers (minutes) / but still manual setup (days)
– Hybrid hosting environment
• 2013 – consolidates in AWS
– Migrated games from traditional datacenter to AWS
– Automated application setup
– DevOps practices
(c) GREE
17. GREE Games
• All Mobile, all Free-to-Play
– iOS & Android smart phones
– Big focus on tablets
• Role Playing Games (RPG+)
– Multi-million dollar franchise, top-grossing titles
– Some of the oldest games on the App Store
• Hardcore
– Deeper, more intense gameplay mechanics
• Real-Time Strategy (RTS)
– Fast action, small unit management
• Casino & Casual Games
– Familiar games, wider audience, casual play
(c) GREE
18. Some Scale
• Over 60 ELB endpoints hosted in AWS
– Games, shared services, analytics infrastructure
• 1200 Amazon EC2 instances
• 400 Amazon ElastiCache nodes
• 260 Amazon RDS database servers
• 1TB daily logs from app servers
• Millions of monthly active users
(c) GREE
20. Caching Strategy
• Game architecture predates stable NoSQL
– We wanted similar performance at scale
– Keep combined average internal response times below 500ms
• Memcache Authoritative
– Still use an RDBMS; potential data loss is limited
• Allows for cheaper/simpler DB layer
– Always do full row replacements (i.e., no current_row_value + 1)
(c) GREE
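The full-row replacement rule above can be sketched as follows. This is a hypothetical illustration with a dict standing in for memcache; the key and field names are invented, and the row shape follows the key/version/timestamp/JSON-blob layout described later in the deck.

```python
import json
import time

cache = {}

# Full-row replacement sketch: read the whole row, modify it in
# application code, then write the whole row back. Never send a
# relative update like current_row_value + 1 to the store.

def save_row(key, fields, version):
    cache[key] = {
        "version": version,
        "ts": time.time(),
        "blob": json.dumps(fields),   # row = key, version, timestamp, JSON blob
    }

def add_gold(key, amount):
    row = cache[key]
    fields = json.loads(row["blob"])
    fields["gold"] += amount                   # modify in the app...
    save_row(key, fields, row["version"] + 1)  # ...then replace the full row

save_row("player:7", {"gold": 100}, version=1)
add_gold("player:7", 25)
```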
21. Elastic Load Balancing – Data Flow
• Reads
– ELB → App → Cache
• Writes (Synchronous)
– ELB → App → Cache → DB
– ELB → App → Cache → Batch → DB
– Standard write-through
– No blind writes; always fetch the current version
• Writes (Asynchronous)
– Batch → DB
– Batch writes to RDS every 30 seconds
(c) GREE
22. Batch Processor
• 80% of game write traffic is asynchronous
• Ex: Player items (loot) after multiple quests
– 10 items in 30 sec; app server sends 10 writes downstream
– Batch processor sends last record with final item count to DB
• Greatly reduced writes on DB
– Shard at table and DB server level for larger games
(c) GREE
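The coalescing described above can be sketched as a last-write-wins buffer. This is an illustrative sketch under the stated assumptions (many writes per key within a 30-second window, only the final record reaches the database); the key and record shapes are hypothetical.

```python
# Batch processor sketch: writes for the same key overwrite each
# other in a pending buffer, and flush() sends only the latest
# record per key downstream to the database.

pending = {}    # key -> latest record (last write wins)
db_writes = []  # stand-in for the actual DB write path

def enqueue(key, record):
    pending[key] = record   # overwrite any earlier record for this key

def flush():                # called every ~30 seconds in production
    for key, record in pending.items():
        db_writes.append((key, record))
    pending.clear()

# 10 loot updates in one window collapse to a single DB write
# carrying the final item count.
for count in range(1, 11):
    enqueue("player:7:items", {"item_count": count})
flush()
```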
23. Memcache Writes - Key Facts
• App handles memcache key hashing & sharding
– DB rows are usually just a key, version, timestamp & JSON blob
– Look familiar?
• NEVER do blind writes
– Always fetch current value in MC, perform operation, then write
• If version collision, then simply fail
– Extremely rare; application will retry for some calls
(c) GREE
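The no-blind-writes rule can be sketched as a versioned write. This is a hedged, single-process illustration with a dict in place of memcache; in production the version check matters because another app server may write between the fetch and the store.

```python
# Versioned write sketch: always fetch the current value first,
# perform the operation, and bump the version; if the stored version
# moved in the meantime, fail instead of overwriting.

cache = {"player:7": {"version": 3, "gold": 100}}

def versioned_write(key, mutate):
    current = dict(cache[key])       # 1. fetch current value, never write blind
    expected = current["version"]
    updated = mutate(current)        # 2. perform the operation
    updated["version"] = expected + 1
    if cache[key]["version"] != expected:
        return False                 # 3. version collision -> simply fail
    cache[key] = updated
    return True

versioned_write("player:7", lambda row: {**row, "gold": row["gold"] + 25})
```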
24. Memcache Writes – High Concurrency
• Player vs. Player Events (World Domination)
– These have much higher concurrency
– Match-making, battles/results, leaderboards
• Here we do relative updates at MC layer
– Yes, we contradict ourselves here a little
• If we get a version collision/failure
– App server reloads MC value and tries again, up to 5 times
– Usually on 2nd or 3rd try we succeed
– This happens VERY fast in the code
(c) GREE
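The retry behavior above can be sketched as a bounded loop. This is illustrative only: the colliding_tries counter stands in for another app server racing us, so the first two attempts fail and the third succeeds.

```python
# Retry-on-collision sketch: reload the memcache value and retry,
# up to 5 times, matching the behavior described above.

colliding_tries = 2   # pretend the first two attempts hit a collision

def try_update():
    global colliding_tries
    if colliding_tries > 0:
        colliding_tries -= 1
        return False              # version collision: reload and retry
    return True

def update_with_retry(max_tries=5):
    for attempt in range(1, max_tries + 1):
        # (reload the current memcache value here, then retry the write)
        if try_update():
            return attempt        # usually succeeds on the 2nd or 3rd try
    raise RuntimeError("still colliding after 5 tries")

result = update_with_retry()      # succeeds on the 3rd attempt here
```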
25. Failure Scenarios
• Memcache node fails
– Go straight to the database; versioning is key here
• Hashing compartmentalizes impact
– During failure, only players assigned to that node are affected
– Usually only a small performance drop
• Node comes back online…
– Cache is refilled organically
– DB load for that subset of operations decreases over time
(c) GREE
26. Why Amazon ElastiCache?
• Fairly stable
– Fails less regularly than Amazon EC2
• Automatic node replacement
– Same node name/DNS
• Good performance
– Highest performance with larger instances (network layer)
• Configuration endpoint
– Application can dynamically add/remove nodes
– Automatically rebalance hashes to accommodate new nodes
– No more manual memcache migrations – YAY!
(c) GREE
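The rebalancing point above is easy to demonstrate: with naive modulo hashing, growing the node list remaps most keys, which is why automatic rehashing via the configuration endpoint beats manual migrations. This is an illustrative sketch with hypothetical node names.

```python
import zlib

# Show how many keys change owners when one node is added to a
# modulo-hashed cluster: roughly three quarters of them.

def owner(key, nodes):
    return nodes[zlib.crc32(key.encode()) % len(nodes)]

keys = ["player:%d" % i for i in range(1000)]
before = ["node-a", "node-b", "node-c"]
after = before + ["node-d"]

moved = sum(owner(k, before) != owner(k, after) for k in keys)
# moved is large (most keys remap), so the client must rebalance
# its hash space whenever nodes are added or removed.
```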
27. Newer Games - Architecture
• MUCH more modern in terms of arch/tech
• Shift towards real-time games
– Longer play sessions; higher player engagement
– Will impact our caching model – fewer, larger pools
• Streaming, queuing
– Go, nsqd
• Moving (finally) to memcached
– Had used old memcache libraries for a long time
(c) GREE
28. Future Trends in Caching at GREE
• Check and Set tokens (CAS)
– A sort of internal versioning in memcached
– Ensures data is latest before updating
– Atomic transactions
• Investigate real NoSQL implementation
• Redis - Promising
– Need to see how I/O performance goes when hitting disk
(c) GREE
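Memcached's CAS tokens, mentioned above, can be simulated in a few lines. This is a hedged sketch, not a real memcached client: gets() returns a value plus a token, and cas() stores only if that token is still current, i.e. nobody wrote in between.

```python
# CAS token simulation: every write bumps the key's token, and a
# cas() with a stale token is rejected.

store = {}   # key -> (value, cas_token)

def set_(key, value):
    _, token = store.get(key, (None, 0))
    store[key] = (value, token + 1)     # every write bumps the token

def gets(key):
    return store[key]                   # (value, cas_token)

def cas(key, value, token):
    if store[key][1] != token:
        return False                    # token stale: someone wrote first
    store[key] = (value, token + 1)
    return True

set_("counter", 10)
value, token = gets("counter")
set_("counter", 99)                     # a concurrent writer sneaks in
ok = cas("counter", value + 1, token)   # rejected: our token is stale
```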
34. 2U Online Campus
Far more than what you think of as a
Learning Management System...
...the 2U Online Campus represents the
single hub for students’ asynchronous
study, live class sessions, and dynamic
social tools to create a rich, online
student community.
35. “No man ever steps in the
same river twice, for it’s not
the same river and he’s not
the same man.”
- Heraclitus
40. 2010
• Amazon to the rescue!
• Release of Amazon RDS for databases
• Release of Elastic Load Balancing for load balancing
• Caching helps students communicate!
• Memcache
• No file redundancy
50. Please give us your feedback on this presentation (DAT207)
As a thank you, we will select prize winners daily for completed surveys!
Want more caching? Attend "Amazon ElastiCache Architecture and Design Patterns"
Friday @ 11:30am – 12:30pm, Lido 3006