Redis to the Rescue?
O’Reilly MySQL Conference
2011-04-13
Who?
• Tim Lossen / @tlossen
• Berlin, Germany
• backend developer at wooga
Redis Intro
Case 1: Monster World
Case 2: Happy Hospital
Discussion
What?
• key-value-store
• in-memory database
• “data structure server”
Data Types
• strings (integers)
• lists
• hashes
• sets
• sorted sets
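For orientation, rough Ruby analogues of the five types (illustration only, no Redis involved; the example values are invented):

```ruby
require 'set'

string     = "41"                           # strings, often used as counters (INCR)
list       = ["a", "b", "c"]                # lists: LPUSH / RPOP
hash       = { "level" => 4, "xp" => 241 }  # hashes: HSET / HINCRBY
set        = Set.new(["p7", "p8"])          # sets: SADD / SISMEMBER
sorted_set = [["bob", 95], ["alice", 120]]  # sorted sets: members ordered by score (ZADD / ZRANGE)
```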
(photo: flickr.com/photos/atzu/2645776918)
Performance
• same for reads / writes
• 50 K ops/second
  - regular notebook, EC2 instance
• 200 K ops/second
  - intel core i7 X980 (3.33 GHz)
Durability
• snapshots
• append-only log
Other Features
• master-slave replication
• virtual memory
• ...
Redis Intro
Case 1: Monster World
Case 2: Happy Hospital
Discussion
Daily Active Users
(chart: April through October 2010)
Challenge
• traffic growing rapidly
• bottleneck: write throughput
  - EBS volumes pretty slow
• MySQL already sharded
  - 4 x 2 = 8 shards
Idea
• move write-intensive data to Redis
• first candidate: inventory
  - integer fields
  - frequently changing
Solution
• inventory = Redis hash
  - atomic increment / decrement!
• on-demand migration of users
  - with batch roll-up
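The atomicity this slide leans on comes from Redis's HINCRBY command. A toy in-memory stand-in (plain Ruby, no Redis server; the field names are made up) shows the semantics:

```ruby
# Toy stand-in for a Redis hash with HINCRBY semantics (no real Redis here).
class ToyRedisHash
  def initialize
    @fields = Hash.new(0)   # missing fields start at 0, as in Redis
  end

  # HINCRBY: add `delta` to `field`, return the new value.
  def hincrby(field, delta)
    @fields[field] += delta
  end

  def hget(field)
    @fields[field]
  end
end

inventory = ToyRedisHash.new
inventory.hincrby("gold", 100)   # player earns gold
inventory.hincrby("gold", -30)   # player spends gold
inventory.hget("gold")           # => 70
```

Against a real server, the same update is a single HINCRBY call, so two concurrent writers can never lose an update the way a read-modify-write against MySQL can.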
Results
• quick win
  - implemented in 2 weeks
  - 10% less load on MySQL servers
• decision: move over more data
But ...
• “honeymoon soon over”
• growing memory usage (fragmentation)
  - servers need periodic “refresh”
  - replication dance
Current Status
• hybrid setup
  - 4 MySQL master-slave pairs
  - 4 Redis master-slave pairs
• evaluating other alternatives
  - Riak, Membase
Redis Intro
Case 1: Monster World
Case 2: Happy Hospital
Discussion
Challenge
• expected peak load:
  - 16000 concurrent users
  - 4000 requests/second
  - mostly writes
“Memory is the new Disk,
 Disk is the new Tape.”
              ⎯ Jim Gray
Idea
• use Redis as main database
  - excellent (write) performance
  - virtual memory for capacity
• no sharding = simple operations
Data Model
• user = single Redis hash
  - each entity stored in hash field (serialized to JSON)
• custom Ruby mapping layer (“Remodel”)
(diagram: one Redis hash per user, entities as JSON fields)

1220032045:
  u1      → {"level": 4, "xp": 241}
  u1_pets → ["p7", "p8"]
  p7      → {"pet_type": "Cat"}
  p8      → {"pet_type": "Dog"}

1234599660:
  u1      → {"level": 1, "xp": 22}
  u1_pets → ["p3"]
  ...
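A minimal sketch of this data model, with a plain Ruby Hash standing in for the per-user Redis hash (this is not the actual Remodel code; key and field names follow the example user on the slides):

```ruby
require 'json'

# In production this would be HSET / HGET against the Redis key "1220032045";
# here a plain Hash (field => JSON string) stands in for that Redis hash.
user = {}

# Each entity is one hash field, serialized to JSON:
user["u1"]      = JSON.generate({ "level" => 4, "xp" => 241 })
user["u1_pets"] = JSON.generate(["p7", "p8"])
user["p7"]      = JSON.generate({ "pet_type" => "Cat" })

# Reading an entity back is a field lookup plus a JSON parse:
pet_ids = JSON.parse(user["u1_pets"])   # => ["p7", "p8"]
cat     = JSON.parse(user["p7"])        # => {"pet_type"=>"Cat"}
```

Keeping one user's entities together in a single hash means a whole user can be fetched or swapped in one operation, which is what makes the virtual-memory plan on the next slides plausible.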
Virtual Memory
• server: 24 GB RAM, 500 GB disk
  - can only keep “hot data” in RAM
• 380 GB swap file
  - 50 million pages, 8 KB each
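The swap-file size follows directly from the page count and page size (the ~380 GB figure matches the binary, GiB, reading):

```ruby
pages      = 50_000_000      # 50 million pages
page_bytes = 8 * 1024        # 8 KB each

swap_gib = (pages * page_bytes) / (1024.0 ** 3)
# ≈ 381 GiB, i.e. the ~380 GB swap file quoted above
```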
Dec 2010: Crisis
• memory usage growing fast
• cannot take snapshots any more
  - cannot start new slaves
• random crashes
Analysis
• Redis virtual memory not compatible with:
  - persistence
  - replication
• need to implement our own!
Workaround
• “dumper” process
  - tracks active users
  - every 10 minutes, writes them into YAML file
(diagram: ruby → redis → disk)
Workaround
• in case of Redis crash
  - start with empty database
  - restore users on demand from YAML files
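A sketch of this dumper workaround (class and method names are hypothetical, not wooga's actual code): track active users, periodically dump their state to YAML, and restore individual users on demand after a crash.

```ruby
require 'yaml'
require 'set'

class Dumper
  def initialize
    @active = Set.new
  end

  # Called on every request, so we know whose data changed.
  def touch(user_id)
    @active << user_id
  end

  # Every 10 minutes: serialize the active users' data to YAML, reset the set.
  # `store` maps user_id => user data (in production: read back from Redis).
  def dump(store)
    snapshot = @active.each_with_object({}) { |id, h| h[id] = store[id] }
    @active.clear
    YAML.dump(snapshot)
  end
end

# After a crash, Redis starts empty; each user is refilled lazily
# from the last YAML dump the first time they show up.
def restore(yaml, user_id)
  YAML.safe_load(yaml)[user_id]
end
```

The trade-off is explicit: up to 10 minutes of changes can be lost, which was acceptable here until a durable Redis backend arrived.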
Real Solution
• Redis “diskstore”
  - keeps all data on disk
  - swaps data into memory as needed
• under development (expected Q2)
Results
• average response time: 10 ms
• peak traffic:
  - 1500 requests/second
  - 15000 Redis ops/second
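The two peak figures above imply roughly ten Redis operations per game request, a quick sanity check on the data model:

```ruby
requests_per_second  = 1_500
redis_ops_per_second = 15_000

ops_per_request = redis_ops_per_second / requests_per_second
# => 10 Redis operations per request at peak
```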
Current Status
• very happy with setup
  - simple, robust, fast
  - easy to operate
• still lots of spare capacity
Redis Intro
Case 1: Monster World
Case 2: Happy Hospital
Discussion
Advantages
• order-of-magnitude performance improvement
  - removes main bottleneck
  - enables simple architecture
Disadvantages
• main challenge: durability
  - diskstore very promising
• no ad-hoc queries
  - think hard about data model
  - hybrid approach?
Conclusion
• Ruby + Redis = killer combo
Q&A
redis.io
github.com/tlossen/remodel
wooga.com/jobs
O’Reilly MySQL Conference 2011, Santa Clara