There are a lot of ways to implement job queues, in all kinds of programming languages. A very common approach is to use a SQL database as the backend: every row in a table represents a job to run, and several job runners connect to the database, each trying to fetch a row and execute the job. When a runner finishes a job, it deletes the row or moves it to a failure table. In effect, all you are doing in this case is using a SQL database as a shared storage system.
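As a minimal sketch of that table-based approach, here is what a single fetch step might look like with an in-memory SQLite database. The table and column names (`jobs`, `payload`) are made up for illustration, and real multi-worker setups would additionally need row locking (e.g. `SELECT ... FOR UPDATE` in MySQL) so two runners never grab the same row:

```python
import sqlite3

# In-memory database standing in for the shared SQL backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("INSERT INTO jobs (payload) VALUES ('build Foo-Bar-1.0')")
conn.commit()

def fetch_job(conn):
    """Grab one job row and delete it, as a worker would."""
    row = conn.execute("SELECT id, payload FROM jobs LIMIT 1").fetchone()
    if row is None:
        return None  # nothing to do -- the worker would sleep and retry
    conn.execute("DELETE FROM jobs WHERE id = ?", (row[0],))
    conn.commit()
    return row[1]

print(fetch_job(conn))  # -> build Foo-Bar-1.0
print(fetch_job(conn))  # -> None
```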
There are also some performance drawbacks to this approach. All your workers are doing is running a permanent loop: issue a SELECT to see whether a new job is available, then sleep for a few seconds. Wouldn't it be better to have a system that reacts the moment a job is entered? Redis provides lists, which are perfect for this job: you can issue a blocking call that waits until something arrives, and your job runner can start processing immediately.
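With the redis-py client, the blocking pattern could be sketched like this. The queue key name `job_queue` and the JSON payload format are my own choices for illustration, not anything prescribed by Redis:

```python
import json

def encode_job(module):
    """Serialize a job; this payload format is just an example."""
    return json.dumps({"module": module})

def decode_job(raw):
    return json.loads(raw)["module"]

def run_worker(r):
    """Block until a job arrives, then process it -- no polling loop."""
    while True:
        # BLPOP blocks until an element is pushed onto the list;
        # it returns a (key, value) pair.
        _key, raw = r.blpop("job_queue")
        module = decode_job(raw)
        print("processing", module)

if __name__ == "__main__":
    import redis  # third-party client: pip install redis
    r = redis.Redis()  # assumes a Redis server on localhost
    # A producer enqueues work with a plain RPUSH:
    r.rpush("job_queue", encode_job("Foo-Bar"))
    run_worker(r)
```

The worker never sleeps and never issues an empty read: BLPOP parks the connection on the server side until a producer pushes something.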
Here is something I am using myself, and it works reasonably well. The ppm massbuild system uses this kind of queue to manage its build clients: every client runs a script that fetches a CPAN module from the job queue and tries to build it. Easy, fairly lightweight, and more scalable than using a MySQL database for the job.
Have a look at the code snippet. I've thrown in some more comments than usual so that you get an idea of what's going on in the code.
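The overall shape of such a build client can be sketched as follows. Everything here is a placeholder standing in for the real massbuild code: `pop` would be the blocking queue read, `build` the actual PPM build step, and `report_failure` whatever records the failure (e.g. an RPUSH onto a failure list):

```python
def build_one(pop, build, report_failure):
    """Fetch one module name from the queue and try to build it.

    The three callables are placeholders: a blocking queue read
    (e.g. Redis BLPOP), the build step, and the failure handler.
    Returns True on success, False if the build failed.
    """
    module = pop()  # blocks until a module name is available
    try:
        build(module)
    except Exception as err:
        report_failure(module, err)
        return False
    return True

# Tiny usage example with in-memory stand-ins for the queue:
jobs = ["Foo-Bar"]
failures = []
build_one(jobs.pop, lambda m: None, lambda m, e: failures.append(m))
```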