Obie's crash-course intro to using Redis as a complement to a traditional relational database in Ruby-based applications. Contains links to essential open-source libraries that will help you get the most out of Redis.
6. Clues that you should consider using Redis
• SQL database seems overkill
• update-only data (only inserted)
• non-transactional needs
• no relations with other tables
11. (Raw contents of an RDB snapshot file: binary-serialized keys such as Attendee:27:events, Attendee:10:name, Conference:1:notes, Attendee:11:followers, and AttendeeRegisteredEvent:65 hashes carrying attendee_id/conference_id fields, with Unix-timestamp scores from the sorted sets.)
12. ################################ SNAPSHOTTING
#################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and
# the given number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving entirely by commenting out all the
# "save" lines.
save 900 1
save 300 10
save 60 10000
18. Nest
Object-oriented Keys for Redis
github.com/soveran/nest
• Creates a Redis connection by default
• Calls to_s for key representation
• Really simple code / hack it for your needs
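The core idea behind Nest can be sketched in a few lines of plain Ruby. This is an illustrative stand-in, not the gem's actual source: a key object is just a String subclass whose [] operator appends ":<segment>", calling to_s on whatever it is given.

```ruby
# Illustrative sketch of Nest-style key building (not the gem's real code).
# A key is a String subclass; [] appends ":<segment>" and returns a new key,
# so keys compose naturally into the "Attendee:27:events" shapes seen earlier.
class Key < String
  def [](segment)
    Key.new("#{self}:#{segment}")  # interpolation calls to_s on the segment
  end
end

attendee = Key.new("Attendee")
attendee[27][:events]  # => "Attendee:27:events"
```

The real gem additionally holds a Redis connection and forwards commands, so you can write things like `attendee[27][:events].smembers`.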
34. Testing Tips
• Don't bother mocking out Redis
• Select a different database number so you don't clobber anything
• Redis.current.flushdb is your friend
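A minimal sketch of how those tips might look in an RSpec setup. The database number 15 and the spec_helper.rb placement are illustrative choices, not from the talk:

```ruby
# spec_helper.rb (illustrative) -- point tests at a spare database number
# and wipe it between examples instead of stubbing Redis out.
require "redis"

RSpec.configure do |config|
  config.before(:suite) do
    # Database 15 is an arbitrary choice; any number your app doesn't use works.
    Redis.current = Redis.new(db: 15)
  end

  config.before(:each) do
    Redis.current.flushdb  # start every example from an empty database
  end
end
```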
Redis is an extremely fast, atomic key-value store. It allows the storage of strings, sets, sorted sets, lists and hashes. Redis keeps all the data in RAM, much like Memcached; unlike Memcached, however, Redis writes to disk, giving it persistence.
To some degree these are the criteria for any NoSQL solution, so why pick Redis?
Redis is written in very CPU-efficient C and does all its operations in memory. (Paging to virtual memory on disk is no longer supported in the latest version.)

redis-benchmark -q -n 100000

Low-end boxes routinely get 30k to 40k operations per second, and high-end server configurations can routinely achieve over 100k operations per second. Redis has been benchmarked at more than 60000 concurrent connections, and was still able to sustain 50000 q/s under those conditions.

In many real-world scenarios, Redis throughput is limited by the network well before it is limited by the CPU. Being single-threaded, Redis favors fast CPUs with large caches rather than many cores.
Atomic operations can be traded for speed. To really understand the potential of the two systems as caching systems, another aspect must be taken into account: Redis can run complex atomic operations against hashes, lists, sets and sorted sets at the same speed it can run GET and SET operations (you can verify this with redis-benchmark). The atomic operations and richer data structures let you do much more in a single operation, avoiding check-and-set round trips most of the time. Once we drop the limitation of using only GET and SET and uncover the full power of the Redis semantics, there is a lot you can do with Redis to serve many concurrent users on very little hardware.
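To make the check-and-set point concrete, here is a small pure-Ruby analogy (not redis-rb code): incrementing a counter via read-modify-write takes three separable steps that can interleave across clients, while an atomic INCR-style operation is a single indivisible step.

```ruby
# A toy store contrasting GET+SET with an atomic INCR-style operation.
class ToyStore
  def initialize
    @data = Hash.new(0)
    @lock = Mutex.new
  end

  # Racy pattern: read, compute, write -- another client can slip in between.
  def get(key)
    @data[key]
  end

  def set(key, value)
    @data[key] = value
  end

  # Atomic pattern: one indivisible read-modify-write, the way Redis runs
  # INCR, SADD, ZINCRBY etc. inside its single-threaded event loop.
  def incr(key)
    @lock.synchronize { @data[key] += 1 }
  end
end

store = ToyStore.new
100.times { store.incr("hits") }            # always ends at exactly 100
store.set("naive", store.get("naive") + 1)  # fine alone, racy with many clients
```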
Durability refers to the quality of a database with regard to not losing your data.

If you're using Redis as more than just a replacement for memcache, as we're going to discuss later in this talk, then it's important to understand its durability parameters.

Redis operates in memory and optionally persists its data to disk. If the data weren't persisted to disk, you'd lose all of it when restarting the server or in case of an outage.

Redis has two different ways of persisting data.
Redis snapshotting is the simplest Redis persistence mode.

Redis can produce point-in-time snapshots of the dataset when specific conditions are met; for instance, if the previous snapshot was created more than 2 minutes ago and there are already at least 100 new writes, a new snapshot is created.

Snapshots are produced as a single binary .rdb file that contains the whole dataset.
The durability of snapshotting is limited to what the user specified as save points.

If the dataset is saved every 15 minutes, then in the event of a Redis instance crash or a more catastrophic event, up to 15 minutes of writes can be lost. From the point of view of Redis transactions, snapshotting guarantees that either a MULTI/EXEC transaction is fully written into a snapshot or it is not present at all (as already said, RDB files represent exact point-in-time images of the dataset).

The RDB file cannot get corrupted, because it is produced by a child process in an append-only way, starting from the image of data in Redis memory. A new RDB snapshot is created as a temporary file, and is renamed into the destination file using the atomic rename(2) system call once the child process has successfully generated it (and only after it has been synced to disk with the fsync system call).

Redis snapshotting does NOT provide good durability guarantees if losing up to a few minutes of data is unacceptable in case of incidents, so its usage is limited to applications and environments where losing recent data is not very important.
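The write-to-temp-file-then-atomic-rename pattern described above is worth knowing in its own right. A minimal Ruby sketch of the technique (illustrative, not Redis source; file names are made up):

```ruby
require "tmpdir"

# Write a snapshot safely: build it under a temporary name, fsync it,
# then atomically rename it over the destination. Readers never see a
# half-written file -- only the old snapshot or the complete new one.
def save_snapshot(dir, data)
  dest = File.join(dir, "dump.rdb")
  tmp  = File.join(dir, "temp-#{Process.pid}.rdb")
  File.open(tmp, "wb") do |f|
    f.write(data)
    f.fsync              # make sure the bytes hit the disk before the rename
  end
  File.rename(tmp, dest) # atomic on POSIX filesystems (rename(2))
  dest
end

Dir.mktmpdir do |dir|
  save_snapshot(dir, "v1")
  save_snapshot(dir, "v2")              # replaces the old snapshot atomically
  File.read(File.join(dir, "dump.rdb")) # => "v2"
end
```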
The Append Only File, usually called simply AOF, is the main Redis persistence option. The way it works is extremely simple: every time a write operation that modifies the dataset in memory is performed, the operation gets logged. The log is produced in exactly the same format clients use to communicate with Redis, so the AOF can even be piped via netcat to another instance, or easily parsed if needed. At restart, Redis re-plays all the operations to reconstruct the dataset.

AOF rewrites are generated using only sequential I/O operations, so the whole dump process is efficient even with rotational disks (no random I/O is performed). This is also true for RDB snapshot generation. The complete lack of random I/O accesses is a rare feature among databases, and is possible mostly because Redis serves read operations from memory, so data on disk does not need to be organized for a random access pattern, only for sequential loading on restart.
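That client wire format is RESP: each command is an array marker `*<argc>` followed by a `$<length>` header and payload per argument, all CRLF-terminated. A hand-rolled sketch of the encoding (real clients handle more reply types; the key and value here are just examples):

```ruby
# Encode a Redis command in RESP -- the same format clients send over the
# wire and the AOF stores on disk.
def resp_encode(*args)
  out = "*#{args.length}\r\n"
  args.each do |arg|
    s = arg.to_s
    out << "$#{s.bytesize}\r\n#{s}\r\n"
  end
  out
end

resp_encode("SET", "Attendee:10:name", "Jorge")
# => "*3\r\n$3\r\nSET\r\n$16\r\nAttendee:10:name\r\n$5\r\nJorge\r\n"
```

This is why an AOF is both replayable and human-inspectable: it is literally a transcript of client commands.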
What Redis implements when appendfsync is set to always is usually called group commit. Instead of issuing an fsync call for every write operation performed, Redis groups these commits into a single write+fsync performed before replying to the group of clients that issued write operations during the latest event-loop iteration.

In practical terms, this means you can have hundreds of clients performing write operations at the same time: the fsync operations are factored together, so even in this mode Redis should be able to sustain on the order of a thousand concurrent transactions per second, while a rotational disk alone can only sustain 100-200 write operations per second.
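The batching idea can be sketched as follows: collect the writes that arrive during one event-loop iteration, then pay for a single write+fsync for the whole batch. This is an illustrative toy, not Redis's actual event loop:

```ruby
require "tmpdir"

# Toy group commit: buffer log entries, then persist the whole batch with
# one write and one fsync instead of one fsync per entry.
class GroupCommitLog
  def initialize(path)
    @file   = File.open(path, "ab")
    @buffer = []
  end

  def append(entry)   # called once per client write; cheap, no disk I/O yet
    @buffer << entry
  end

  def flush!          # called once per event-loop iteration
    @file.write(@buffer.join)
    @file.fsync       # a single fsync covers every buffered write
    @buffer.clear
  end
end

Dir.mktmpdir do |dir|
  log = GroupCommitLog.new(File.join(dir, "appendonly.aof"))
  3.times { |i| log.append("write #{i}\n") }
  log.flush!          # one fsync for all three writes
end
```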
Let's see what it looks like to do some basic operations in Redis using the Ruby driver.
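Something like the following. The calls mirror redis-rb's API (with the real gem you would write `redis = Redis.new`); FakeRedis here is a tiny in-memory stand-in, invented so the sketch runs without a server, and the key names echo the ones from the RDB dump:

```ruby
require "set"

# In-memory stand-in for a Redis connection; method names follow redis-rb.
class FakeRedis
  def initialize
    @strings = {}
    @sets    = Hash.new { |h, k| h[k] = Set.new }
    @hashes  = Hash.new { |h, k| h[k] = {} }
  end

  def set(key, value);   @strings[key] = value.to_s; "OK";           end
  def get(key);          @strings[key];                              end
  def sadd(key, member); @sets[key].add?(member.to_s) ? 1 : 0;       end
  def smembers(key);     @sets[key].to_a;                            end
  def hset(key, f, v);   @hashes[key][f.to_s] = v.to_s; 1;           end
  def hgetall(key);      @hashes[key];                               end
end

redis = FakeRedis.new
redis.set("Attendee:10:name", "Jorge H. Cuadrado")        # strings
redis.get("Attendee:10:name")                             # => "Jorge H. Cuadrado"
redis.sadd("Attendee:11:followers", 1)                    # sets
redis.smembers("Attendee:11:followers")                   # => ["1"]
redis.hset("AttendeeRegisteredEvent:65", :attendee_id, 1) # hashes
redis.hgetall("AttendeeRegisteredEvent:65")               # => {"attendee_id"=>"1"}
```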
conference name
Event class and subclasses save arbitrary details in hashes.
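A sketch of that pattern, with class and field names guessed from the keys visible in the RDB dump (e.g. AttendeeRegisteredEvent:65 with attendee_id and conference_id fields); the STORE hash stands in for Redis HSET/HGETALL calls:

```ruby
# Each event instance is stored as a hash under "<ClassName>:<id>", so
# subclasses can add arbitrary fields without any schema changes.
class Event
  STORE = Hash.new { |h, k| h[k] = {} }  # in-memory stand-in for Redis hashes

  def initialize(id, details = {})
    @key = "#{self.class.name}:#{id}"
    details.each { |field, value| STORE[@key][field.to_s] = value.to_s } # HSET
  end

  def details
    STORE[@key]                           # HGETALL
  end
end

class AttendeeRegisteredEvent < Event; end

event = AttendeeRegisteredEvent.new(65, attendee_id: 1, conference_id: 8)
event.details  # => {"attendee_id"=>"1", "conference_id"=>"8"}
```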
Graphing relationships between (User) objects with Redis sets

attendee.rb
followings.rb
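The follower graph maps naturally onto two Redis sets per attendee. A stand-in sketch using Ruby's Set for SADD/SMEMBERS; the method names here are illustrative, not the talk's actual attendee.rb code:

```ruby
require "set"

# Stand-in for Redis sets: "Attendee:<id>:followers" and
# "Attendee:<id>:following" each hold a set of attendee ids.
SETS = Hash.new { |h, k| h[k] = Set.new }

def follow(follower_id, followed_id)
  SETS["Attendee:#{followed_id}:followers"] << follower_id  # SADD
  SETS["Attendee:#{follower_id}:following"] << followed_id  # SADD
end

def followers(id)
  SETS["Attendee:#{id}:followers"].to_a                     # SMEMBERS
end

# A mutual follow is just membership in both directions
# (set intersections like this are SINTER territory in Redis).
def friends?(a, b)
  SETS["Attendee:#{a}:following"].include?(b) &&
    SETS["Attendee:#{b}:following"].include?(a)
end

follow(1, 11)
follow(11, 1)
followers(11)   # => [1]
friends?(1, 11) # => true
```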
Time-ordered activity feeds with Redis sorted sets

event.rb and subclasses
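The feed idea: ZADD each event id with its Unix timestamp as the score, then read newest-first with ZREVRANGE. A stand-in sketch where a hash of id => score mimics one sorted set; the event ids are made up, though the timestamps echo the RDB dump:

```ruby
# One sorted set per attendee: "Attendee:<id>:events" maps event ids to
# Unix-timestamp scores -- exactly the shape visible in the RDB dump.
FEED = Hash.new { |h, k| h[k] = {} }

def record_event(attendee_id, event_id, timestamp)
  FEED["Attendee:#{attendee_id}:events"][event_id] = timestamp  # ZADD
end

def recent_events(attendee_id, count)
  FEED["Attendee:#{attendee_id}:events"]
    .sort_by { |_id, score| -score }  # ZREVRANGE: highest score (newest) first
    .first(count)
    .map { |id, _score| id }
end

record_event(4, 29, 1335314963)
record_event(4, 31, 1335314968)
record_event(4, 68, 1335335712)
recent_events(4, 2)  # => [68, 31]
```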
zunionstore...
following_events method in followings.rb

FollowingsController
http://localhost:3000/attendees/1/following

zinterstore...
Attendee#events
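ZUNIONSTORE merges several sorted sets into a destination set; a following feed is the union of the event feeds of everyone the attendee follows, with timestamp scores preserved. A stand-in sketch of those semantics (with redis-rb you would call the client's zunionstore method instead; the keys and ids below are illustrative):

```ruby
# Stand-in for sorted sets: key => { member => score }.
ZSETS = Hash.new { |h, k| h[k] = {} }

# ZUNIONSTORE dest, keys -- merge members, keeping the max score on ties
# (Redis defaults to summing; max suits timestamps better here).
def zunionstore(dest, keys)
  merged = {}
  keys.each do |key|
    ZSETS[key].each do |member, score|
      merged[member] = [merged[member], score].compact.max
    end
  end
  ZSETS[dest] = merged
  merged.size
end

ZSETS["Attendee:11:events"] = { 13 => 1335365085, 21 => 1335365137 }
ZSETS["Attendee:16:events"] = { 40 => 1335365082 }

# Feed of everything the people attendee 1 follows have done:
zunionstore("Attendee:1:following_events",
            ["Attendee:11:events", "Attendee:16:events"])  # => 3
```

ZINTERSTORE works the same way but keeps only members present in every input set, which is what Attendee#events uses to intersect feeds.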