At work, I regularly use a computer cluster where jobs can be queued with the Sun Grid Engine system. It’s convenient because jobs are queued, then allocated as resources become freed up, and so individual users don’t have to worry about treading on each other’s toes.
I also have a local machine where I run smaller jobs, and wondered if there was something similar to queue my jobs, so I could just push a bunch of commands and have them run automatically as soon as the previous one finishes. After some shopping around online I found a great tool, called task spooler.
After using it for some time, I’m very satisfied with how it works and the design choices that went into it.
Using it is really easy. Typing tsp (on some systems it’s installed as ts) brings up the current queue. To see the live output of the currently-running job, type tsp -t, and press ctrl-c to quit.
New jobs are queued behind older ones linearly by default, but you can make a job run only if another one completes successfully (exit code 0): tsp -d makes the new job depend on the previous one, and tsp -D (job id) makes it depend on a specific job.
What I really like, though, is the support for multiple cores. Let’s say you have a 12-core machine and want to run jobs that will take up 4 cores each; then you could have a maximum of 3 jobs running at the same time. Queued jobs will be started depending on both queue order and the number of cores that are available. To do this, set the environment variable TS_SLOTS to the number of cores, e.g. in your bash profile (it’s read when the task spooler server starts; tsp -S changes it at runtime), and then when you queue new jobs, specify the number of “slots” required, e.g. tsp -N 4.
There are several other features, but these are the ones that I use most often. It’s really been a boost to my productivity!