Using redis for job queues in Perl

There are a lot of different ways to implement job queues, in all kinds of programming languages. One very common approach is to use a SQL database backend with a table. Each row contains a job to run, and several job runners connect to the database, try to claim a row, and do the job. When a runner has finished the job, it deletes the row or writes it to a failure table. However, all you are really doing in this case is using a SQL database as a shared storage system.
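To make the polling pattern concrete, here is a minimal sketch of such a worker in Perl with DBI. The `jobs` table, its columns, and the `process_job()` helper are made up for illustration; a real setup would also lock rows so two workers cannot claim the same job.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical polling worker against a "jobs" table.
my $dbh = DBI->connect('dbi:mysql:database=queue', 'worker', 'secret',
                       { RaiseError => 1, AutoCommit => 1 });

while (1) {
    my ($id, $payload) = $dbh->selectrow_array(
        'SELECT id, payload FROM jobs ORDER BY id LIMIT 1');
    if (defined $id) {
        # Claim the row by deleting it; only the worker whose DELETE
        # actually removed a row gets to run the job.
        my $claimed = $dbh->do('DELETE FROM jobs WHERE id = ?', undef, $id);
        process_job($payload) if $claimed > 0;
    }
    else {
        sleep 5;    # nothing to do - poll again in a few seconds
    }
}
```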

There are also some performance drawbacks to this approach. All your workers do is spin in a permanent while loop: run a SELECT to see whether a new job is available, then sleep for a few seconds. Wouldn’t it be better to have a system which reacts when a job is entered? Redis provides the concept of lists, which is perfect for this job. You can issue a blocking call which waits until something arrives, and your job runner can start processing the job right away.

Here is something I am using at my place, and it works reasonably well. The ppm massbuild system uses this kind of queue to manage the build clients. Every client runs this script to fetch a CPAN module from a job queue and tries to build it. Easy and somewhat lightweight – and more scalable than using a MySQL database for the job.

Have a look at the code snippet. I’ve thrown in more comments than usual so that you get an idea of what’s going on in the code.
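The embedded snippet has not survived in this copy of the post, so here is a sketch of what such a worker looks like with the Redis CPAN module. The queue names (`jobs`, `jobs:failed`) and the `build_module()` helper are assumptions, not the original code.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Redis;

my $redis = Redis->new(server => 'localhost:6379');

while (1) {
    # BRPOP blocks until an element arrives on the list - no polling,
    # no sleep loop. A timeout of 0 means "wait forever". In list
    # context it returns the list name and the popped value.
    my ($queue, $module) = $redis->brpop('jobs', 0);

    print "got job: $module\n";
    my $ok = eval { build_module($module); 1 };

    # Push failures onto a separate list so they can be inspected later.
    $redis->lpush('jobs:failed', $module) unless $ok;
}
```

A producer only needs a single LPUSH to hand a job to the workers: `$redis->lpush('jobs', 'Some::CPAN::Module');`.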

7 thoughts on “Using redis for job queues in Perl”

    1. Gearman provides a queue system – I wanted to show how a worker system can be implemented using redis. There is no pro or con here – just some code to use it with redis.

      From my personal experience I like the redis way as it seems to be very lightweight and is very fast. It has few dependencies and fits well into the world of web services.

      Gearman might solve the same problems – as I said, I wanted to show some redis/perl code as a recipe.

      1. Thanks for the explanation. Redis certainly has some very interesting properties and it’s nice to see a complete example of one of them!

  1. Yeah, or use Postgres: have a table of queue items, a set of stored procedures to submit, retrieve and re-post those queue items, and in the post SP, execute a NOTIFY with the queue item id.

    When the client starts up have it retrieve any unhandled queue items, and once those are handled, execute a LISTEN for the notifications, and use some asynchronous mechanism to handle the event of new notifications. Upon receiving a notification, retrieve the matching queue item and deal with it.

    In Oracle use advanced queueing, in MS SQL use CREATE QUEUE, SEND and RECEIVE, in SQLite use basically the schema above, except use UDP datagrams instead of LISTEN/NOTIFY.
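The LISTEN/NOTIFY flow described in the comment above can be sketched in Perl with DBD::Pg. The channel name `new_job`, the `queue_items` table, and the `handle_item()` helper are assumptions made for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;
use IO::Select;

my $dbh = DBI->connect('dbi:Pg:dbname=queue', 'worker', '',
                       { RaiseError => 1, AutoCommit => 1 });

# First drain any items that arrived while we were not listening.
handle_item($_->[0]) for @{ $dbh->selectall_arrayref(
    'SELECT id FROM queue_items WHERE handled = false') };

$dbh->do('LISTEN new_job');

# Wait on the connection's socket instead of polling the table.
my $sel = IO::Select->new($dbh->{pg_socket});
while (1) {
    $sel->can_read;                       # block until the socket is readable
    while (my $notify = $dbh->pg_notifies) {
        my ($name, $pid, $payload) = @$notify;
        handle_item($payload);            # payload carries the queue item id
    }
}
```

On the producer side, the "post" stored procedure would insert the row and then run `NOTIFY new_job, '<id>'` so listening workers wake up immediately.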

  2. Thanks for the comment. Yep, postgres has some serious capabilities in this field, too. However – for the postgres approach I would need to code stuff in DBI, which is probably not the most lightweight way of accessing things from the Perl world. But hey – if it solves your problems.

    I wanted to show the combination of redis and Perl – there you go :)

  3. For work, I’ve been writing some prototype FIFO queues based on Redis. Using two servers, the following bit of software (using the “naive” version of the software, Redis, and a lot of client processes) did on the order of 800k-1M transactions per second on a globally weakly ordered, locally strictly ordered cluster of Redis instances (many per machine).


  4. Correction: I meant to write “globally weakly ordered, locally strictly ordered queue running distributed on a cluster of Redis instances”.

