Redis is designed to be a blazingly fast in-memory key-value database that trades some durability for speed. And for the most part, it lives up to the billing. Setting up and using Redis has always been a breeze. Its success and popularity have made it a powerful tool in the software engineer's arsenal; with out-of-the-box support for most common data types and their associated operations, it looks like a perfect solution for almost every persistence problem. And, as with everything popular, people have used it without considering its fitness for the problem at hand, often to great success.

Redis is famed for operation latencies measured in sub-milliseconds, and the key to this is how it is designed. It stores data in memory, and if your first-year computer science class is anything to go by, you will know this provides far higher read and write throughput than disk. The data structures we reach for in everyday programming, arrays, hashes, and sets, Redis supports natively, along with more esoteric ones like HyperLogLogs and, via modules, Bloom filters. It is also efficient at manipulating them, because it uses in-memory representations optimized for exactly this, without worrying about how to persist them to durable storage.

Redis is also single-threaded. Though a performant single-threaded system might sound counterintuitive, the design has some distinct performance advantages, and Redis exploits them brilliantly to ensure consistency at no cost to performance. Redis's single thread scales remarkably well in terms of I/O concurrency: it uses an I/O multiplexing mechanism (epoll, kqueue, or select, depending on the platform) and a concise event loop written by the author. Since all commands are serialized through that loop, there is no synchronization to be done. It might look like the CPU would become a bottleneck with this design, but in practice you will often hit a network bottleneck well before the CPU cannot keep up. A positive side effect of this design is that the atomicity of every operation comes at no extra cost. Redis also uses its own terse wire protocol, RESP, which keeps parsing overhead minimal. Couple the isolated event loop with that protocol, and you have a blazing-fast in-memory data store that, in theory, scales indefinitely.
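To make the data-structure story concrete, here is a minimal sketch using the popular redis-py client (pip install redis); the connection details and key names are illustrative assumptions, not anything prescribed above:

```python
import redis

# Hypothetical local instance; host, port, and key names are illustrative.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

r.rpush("queue:jobs", "job-1", "job-2")                    # list
r.hset("user:42", mapping={"name": "ama", "visits": "1"})  # hash
r.sadd("tags", "fast", "in-memory")                        # set
r.pfadd("uniques:today", "alice", "bob")                   # HyperLogLog

# Because the event loop serializes all commands, INCR is atomic with no
# client-side locking: concurrent clients cannot lose updates.
r.incr("page:views")
print(r.pfcount("uniques:today"))  # approximate distinct count -> 2
```

Every one of those calls is a single round trip over RESP, handled in turn by the same event loop.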

These facts only hold while the size of your payloads and the number of connections remain relatively small; the advantages go out the window as load parameters grow, and the threshold is, unfortunately, rather low: a large number of connections combined with bigger payloads will get you there quickly. A modern microservice architecture will easily have over 100 running instances at even medium scale, and since most instances employ some pooling mechanism so as not to pay a connection cost on each request, a single Redis instance ends up doing real work just maintaining those connections, not to mention serving the requests that come through them. To improve performance at medium to high loads, some projects such as KeyDB, Snap's multithreaded drop-in replacement for Redis, employ multiple threads and a bit of magic to sustain higher workloads; KeyDB has been touted as providing 5x the performance of Redis. Another solution I have seen used is a proxy that multiplexes over multiple Redis instances. One such proxy is twemproxy, developed at Twitter. Twemproxy, or nutcracker as it is informally known, is itself single-threaded and employs key hashing to spread keys across shards of multiple Redis instances, giving you proper multiplexing. While this may look susceptible to the original problems of a single-threaded application, in practice each shard runs its own single-threaded Redis process, so the system as a whole behaves like a multi-threaded one. Of course, it is still susceptible to hot keys. These might look like ideal solutions, but standing up new infrastructure as an intermediary service introduces a new failure point, which is neither trivial nor ideal. When done right, though, there are a lot of net positives.
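As a rough illustration of the key-hashing idea behind twemproxy (not its actual implementation; twemproxy runs as a separate proxy process and supports configurable hash and distribution functions such as ketama), a client-side sketch might look like this, with the shard addresses being assumptions:

```python
import zlib

import redis

# Hypothetical shard addresses; a real deployment would sit behind twemproxy.
shards = [
    redis.Redis(host="localhost", port=6379),
    redis.Redis(host="localhost", port=6380),
    redis.Redis(host="localhost", port=6381),
]

def shard_for(key: str) -> redis.Redis:
    # A stable hash of the key decides which instance owns it. Note that a
    # hot key still maps to a single shard, so hot keys remain a problem.
    return shards[zlib.crc32(key.encode()) % len(shards)]

shard_for("user:42").set("user:42", "ama")
print(shard_for("user:42").get("user:42"))
```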

Another major concern with Redis is durability. Out of the box, Redis does not persist data to disk, only in memory. This means that when a server goes down, so does all your data. Durability is serious business, and when it becomes a priority, this is where Redis starts to go backwards. Redis was never planned to provide durability beyond RAM; it is telling that disk persistence did not even appear until v0.04. Redis supports two persistence modes.

The first is called snapshotting, or RDB. When snapshotting is enabled, Redis periodically writes your entire in-memory dataset to disk. This is good for point-in-time recovery, but it also means you lose everything written between the last snapshot and the failure. For a moderately busy server, there are bound to be significant changes between snapshots, and losing them is rarely acceptable. To combat this, some teams set the snapshot interval as low as possible to minimize the window of potential data loss. This can be a bad idea when your dataset is considerably large: writing a 1 GB file to disk every 60 seconds is a recipe for disaster. What happens in the background is that Redis forks a child process, serializes the in-memory dataset into the on-disk format, writes it to a temporary file, and renames that file atomically upon completion. Even though the overhead of the fork is zero in theory when the OS supports copy-on-write, you still need to turn on overcommit_memory. This is because if the parent keeps taking writes while the child serializes, copy-on-write duplicates every modified memory page; in the worst case the snapshot requires as much extra memory as the entire dataset. In snapshotting mode, you must therefore do everything to ensure your dataset does not exceed half the RAM allocated to Redis, otherwise your Redis server will implode with an OOM. Snapshotting is the default persistence mode because it is simple and safe for small datasets. When your dataset starts to grow considerably, think twice about snapshots.
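For illustration, snapshotting can be inspected and tuned at runtime through redis-py; the schedule below is an assumed example, not a recommendation:

```python
import redis

r = redis.Redis(decode_responses=True)

# "900 1 300 10" means: snapshot after 900s if at least 1 key changed,
# or after 300s if at least 10 keys changed.
r.config_set("save", "900 1 300 10")
print(r.config_get("save"))

r.bgsave()           # fork a child and write the RDB file in the background
print(r.lastsave())  # timestamp of the last successful snapshot

# That fork is why vm.overcommit_memory = 1 matters at the OS level.
```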

The second, more durable persistence mode is the Append Only File (AOF), introduced in Redis 1.1 to address the drawbacks of snapshotting. In this mode, every write command is appended to a log file, very much like the WAL of a conventional RDBMS; you can rebuild the entire dataset by replaying the file. Sequential writes to a file are certainly faster than random access, but they are still significantly slower than writing to memory, and this goes against the essence of what Redis is: an in-memory data store. If you are going to write each command to a file, why not use a datastore designed for that in the first place? With AOF, Redis will call fsync at a point in time that can be configured in one of three ways. appendfsync always calls fsync on every command. This is very safe, but with this option you might as well throw Redis out the window, because performance becomes inferior to every database designed to fsync on write. If you use this mode without a Battery Backed Write Cache (BBWC) RAID controller, you will get fucked. Have fun figuring out what went wrong. appendfsync everysec calls fsync every second, which means you lose at most a second of data. This might sound reasonable, but it is not without drawbacks: if you have an update-intensive workload like a counter that changes many times per second, you end up with a needlessly huge AOF file for data whose actual footprint is tiny. That said, it is the easier choice and has been the default since Redis 2.4. appendfsync no delegates fsync to the operating system. This is the fastest and least safe of the three; Linux will typically flush dirty pages every 30 seconds under this configuration, but the exact behavior depends on kernel tuning.
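As a quick sketch, the AOF settings discussed above can be toggled at runtime with redis-py, though in production you would normally set them in redis.conf:

```python
import redis

r = redis.Redis(decode_responses=True)

r.config_set("appendonly", "yes")        # enable the append-only file
r.config_set("appendfsync", "everysec")  # fsync once per second (the default)

# The other policies described above:
#   "always" -> fsync on every write: safest, slowest
#   "no"     -> leave flushing to the kernel: fastest, least safe
print(r.config_get("appendfsync"))
```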

When I talk to other developers about systems design problems, most are quick to suggest Redis as a buffer for when consumers cannot keep up with producers, leaning heavily on Redis's high write throughput. The solution usually goes like this: use Redis as a buffering layer between the two entities. While this design generally works, I often wonder whether people truly understand the tradeoffs and risks involved. The lack of proper durability makes the band of problems Redis genuinely fits a narrow one; within that band, though, it offers tremendous advantages. If you are going to drop Redis into your stack, be sure it is absolutely what you need, or steer clear of it. Otherwise, you and your data are going to get burnt.
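For completeness, the buffering pattern in question usually amounts to something like the following sketch (the key name, timeout, and payloads are assumptions); note that anything still sitting in the list when a crash meets an undersized persistence configuration is simply gone:

```python
import redis

r = redis.Redis(decode_responses=True)

def produce(event: str) -> None:
    # Fast in-memory append; producers never block on slow consumers.
    r.lpush("buffer:events", event)

def consume(timeout: int = 5) -> str | None:
    # Block up to `timeout` seconds waiting for work.
    # BRPOP returns a (key, value) tuple, or None on timeout.
    item = r.brpop("buffer:events", timeout=timeout)
    return item[1] if item else None

produce("signup:alice")
print(consume())  # -> "signup:alice"
```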