| Job Specification | PBS/Torque | Slurm |
| Script directive | #PBS | #SBATCH |
| Queue | -q <queue> | -p <partition> |
| Node count | -l nodes=<count> | -N <min[-max]> |
| Cores (CPUs) per node | -l ppn=<count> | --ntasks-per-node=<count> |
| Memory size | -l mem=16384mb | --mem=16g OR --mem-per-cpu=2g |
| Wall clock limit | -l walltime=<hh:mm:ss> | -t <days-hh:mm:ss> |
| Standard output file | -o <file_name> | -o <file_name> |
| Standard error file | -e <file_name> | -e <file_name> |
| Combine stdout/err | -j oe | (omit -e and use only -o; Slurm combines them by default) |
| Direct output to directory | -o <directory> | -o "directory/slurm-%j.out" |
| Event notification | -m abe | --mail-type=[BEGIN, END, FAIL, REQUEUE, or ALL] |
| Email address | -M <address> | --mail-user=<address> |
| Job name | -N <name> | --job-name=<name> |
| Job dependency | -W depend=afterok:<jobid> | --dependency=afterok:<jobid> |
| Node preference | … | --nodelist=<nodes> AND/OR --exclude=<nodes> |
| Max jobs pool | -A [m16,m32,..,m512] | --qos=[max16jobs,max32jobs,..,max512jobs] |
| Account to charge | -W group_list=<account> | --account=<account> |
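
For reference, the rows above can be combined into a complete Slurm batch script. This is a minimal sketch only: the partition, QOS, account, e-mail address, output directory, and program name are illustrative placeholders, not site defaults. Each line's comment names the closest PBS/Torque equivalent from the table.

```bash
#!/bin/bash
#SBATCH --job-name=example_job            # PBS: -N example_job
#SBATCH -p compute                        # PBS: -q compute (partition name assumed)
#SBATCH -N 2                              # PBS: -l nodes=2
#SBATCH --ntasks-per-node=16              # PBS: -l ppn=16
#SBATCH --mem=16g                         # PBS: -l mem=16384mb
#SBATCH -t 0-12:00:00                     # PBS: -l walltime=12:00:00
#SBATCH -o results/slurm-%j.out           # PBS: -o results/ (directory must already exist)
#SBATCH --mail-type=END,FAIL              # PBS: -m ae
#SBATCH --mail-user=user@example.com      # PBS: -M user@example.com (placeholder address)
#SBATCH --qos=max16jobs                   # PBS: -A m16 (QOS name taken from the table)
#SBATCH --account=myproject               # PBS: -W group_list=myproject (placeholder account)

srun ./my_program                         # placeholder executable
```

Submit the script with `sbatch job.sh` in place of the PBS/Torque `qsub job.pbs`.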