Let’s explain the situation: you have a lot of analyses to compute on your SMP computer (the same analysis on different files, many different analyses, or the same analysis many times, for simulation purposes, for instance). And, for various reasons, you cannot use or don’t have access to a cluster with many nodes… First, forget about launching jobs one by one, and also forget about launching all the jobs in a single command (your server may go down)!
You can perform such tasks by writing csh scripts, which will control your jobs and fill the job queue. Maybe you can also install a queue manager on your server. Looking around the web, I didn’t find a “simple” way to solve this problem, so I wrote a simple Perl script which manages a job queue list. This script is based on the Proc::Simple module (available here). The full documentation of the script is available here. Basically, you can choose between three analysis modes: many different analyses, the same analysis on different input data files, or the same analysis many times. You also have to specify the number of simultaneous jobs to run (i.e. the number of available CPUs), and the script will manage your job queue list. I’m not sure it’s the best way, but it works for me.
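To illustrate the idea (not the script itself, which is written in Perl on top of Proc::Simple), here is a minimal Python sketch of the same principle: you give it a list of commands and a maximum number of simultaneous jobs, and it keeps that many running until the queue is empty. The function names (`run_job`, `run_queue`) are mine, chosen for the example.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_job(cmd):
    """Run one shell command and return its exit code."""
    return subprocess.call(cmd, shell=True)

def run_queue(commands, max_jobs):
    """Run all commands, at most max_jobs at a time.

    The thread pool acts as the job queue manager: as soon as one
    job finishes, the next command in the list is started, so the
    number of concurrent jobs never exceeds max_jobs (i.e. the
    number of available CPUs).
    """
    with ThreadPoolExecutor(max_workers=max_jobs) as pool:
        return list(pool.map(run_job, commands))

if __name__ == "__main__":
    # Example: the same analysis on different input data files,
    # with only 2 jobs running at any one time.
    jobs = ["echo analysing file%d.dat" % i for i in range(6)]
    exit_codes = run_queue(jobs, max_jobs=2)
    print(exit_codes)
```

The real script does more (three analysis modes, logging, etc.), but the core loop is the same: feed a fixed-size pool from the queue instead of forking everything at once.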
I hope it will be useful, and feel free to comment on this article.