Handling Simple Messaging with Redis

There is no shortage of message brokers and queueing solutions available to software engineers. Full-featured enterprise solutions like RabbitMQ offer developers tremendous flexibility in meeting their messaging needs. While plenty of use cases require these advanced features, a solution like RabbitMQ, even without a support license, comes with significant costs: running a RabbitMQ server adds operational overhead, and it requires developers to fully understand the available feature set in order to make good engineering decisions.

Proprietary cloud solutions like AWS SQS come with their own tradeoffs. SQS offers an excellent feature set and reduces operational overhead by delegating service administration to Amazon. However, these proprietary solutions introduce vendor lock-in, which dramatically increases the complexity of migrating your system to a different cloud provider or to a private cloud.

For simple message passing to one or more recipients, Redis is an excellent message broker. Let’s take a closer look at three possible use cases to illustrate this.

Simple Message Queueing

Consider the following situation: Service A, a public-facing API server, needs to pass messages to Service B, which integrates certain API data with a suite of third-party business tools. Order is important here; we want to ensure that all customer data is as correct and up to date as possible. To meet this need, we can make use of an intuitive Redis data type: the list.

Lists are ordered collections of string elements. We can treat a list as a queue with two commands: LPOP and RPUSH. LPOP removes the first element from the list and returns it; RPUSH appends a new element to the end of the list. To pass messages, have Service A RPUSH each message onto the list and have Service B LPOP messages off of it.
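As a rough sketch with the redis-py client (the connection settings, queue key, and JSON encoding here are illustrative assumptions, not prescriptions):

```python
import json
from typing import Optional

import redis

# Hypothetical queue key; the real name would depend on your services.
QUEUE_KEY = "service_b:inbox"

r = redis.Redis(host="localhost", port=6379, db=0)


def send_message(payload: dict) -> None:
    """Service A: append a message to the end of the queue."""
    r.rpush(QUEUE_KEY, json.dumps(payload))


def receive_message() -> Optional[dict]:
    """Service B: remove and return the oldest message, or None if the queue is empty."""
    raw = r.lpop(QUEUE_KEY)
    return json.loads(raw) if raw is not None else None
```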

Lots of open source solutions already exist for polling Redis lists, and the underlying implementation is fairly simple. In Python, we can build a generator that polls the queue for new data indefinitely. If you favor off-the-shelf solutions (as you probably should), RQ is a very mature project.
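Here is one way such a generator might look, again with redis-py; BLPOP blocks until a message arrives, which avoids busy-waiting. The queue key is the same hypothetical one as above.

```python
import json
from typing import Iterator

import redis


def consume(queue_key: str = "service_b:inbox") -> Iterator[dict]:
    """Yield messages from the queue indefinitely, blocking while it is empty."""
    r = redis.Redis()
    while True:
        # BLPOP returns a (key, value) pair; timeout=0 means wait forever.
        _key, raw = r.blpop(queue_key, timeout=0)
        yield json.loads(raw)


# Usage in Service B (handler name is hypothetical):
# for message in consume():
#     sync_to_third_party_tools(message)
```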

PubSub for Messaging Multiple Recipients

Let’s complicate this picture a little. Now, in addition to passing messages to Service B, Service A must pass identical messages to Service C, which transforms the data and sends it to a data warehouse for further processing and analysis. Here we run into a limitation of using lists to pass messages across services: once one consumer pops a message off the list, it’s gone, and no other service can receive it.

One potential solution is to use two separate lists, but this adds complexity and doesn’t scale well as more consumers appear. Fortunately, Redis supports the PubSub messaging paradigm, which allows multiple subscribers to listen on a given channel. Simply have each service subscribe to whichever channel(s) are relevant to it, and use the PUBLISH command to distribute messages to every subscriber.
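As a sketch (the channel name, payload, and handler are illustrative assumptions), the publisher and each subscriber might look like this with redis-py:

```python
import json

import redis

r = redis.Redis()

# Service A: publish once; every currently connected subscriber receives a copy.
r.publish("api_events", json.dumps({"customer_id": 42, "action": "updated"}))

# Services B and C: each process runs its own subscriber loop.
pubsub = r.pubsub()
pubsub.subscribe("api_events")
for message in pubsub.listen():
    # listen() also yields subscribe confirmations; only handle real messages.
    if message["type"] == "message":
        payload = json.loads(message["data"])
        handle(payload)  # hypothetical handler for each service's own logic
```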

PubSub is a well-established and powerful messaging pattern, but it’s important to remember that these messages won’t persist after delivery to subscribers; a service that isn’t listening at the moment of publication misses the message entirely. If persistence is a requirement, you can check out Redis’ new stream data type, which I’ll cover in detail in a later post.
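For a quick sense of the difference (a minimal sketch with redis-py; the stream name is an illustrative assumption), streams retain entries so they can be read again later:

```python
import redis

r = redis.Redis()

# Append a message to a stream; unlike PubSub, it remains stored in Redis.
r.xadd("api_events_stream", {"customer_id": "42", "action": "updated"})

# Read entries from the beginning of the stream; consumers can re-read later.
for stream_name, messages in r.xread({"api_events_stream": "0"}):
    for message_id, fields in messages:
        print(message_id, fields)
```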

Other Benefits and Downsides

It can often be a nontrivial task to get two services communicating locally, especially if you’re working outside of a containerized and orchestrated environment. Redis is simple to run locally and doesn’t require any special configuration to get working. Redis is also very fast: while raw performance isn’t critical for the use cases outlined above, its speed makes it a reasonable choice for more performance-sensitive messaging workloads as well.

That said, there are definitely use cases for which Redis is the inferior choice. Redis has no built-in support for delayed message delivery, and its single-threaded nature makes it a poor fit if you need massively concurrent message processing.

For simple message passing, consider Redis. It’s performant, easy to use, and FOSS, and it keeps operational overhead low without incurring vendor lock-in.

