The examples below assume you are submitting the job from the same directory your program is located in; otherwise, you need to give the full path.
<WRAP center round important 80%>
All jobs will run on the cluster's default partition unless a different one is requested with the //--partition// option (as in the GPU example below).
</WRAP>

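All of the scripts below are submitted in the same way with //sbatch//; as a quick sketch (the file name //my_job.sh// is just a placeholder for whichever script you saved):

<code bash>
# submit the job script; SLURM prints the assigned job ID
sbatch my_job.sh

# check the state of your pending and running jobs
squeue -u $USER
</code>
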
==== Running an OpenMP job on one node ====

In this script we request a single task (1 process) that will use 4 cores (4 threads); //OMP_NUM_THREADS// is set from the number of CPUs SLURM assigns to the task.

<code bash>
#!/bin/bash
#SBATCH --job-name=my_job_name
#SBATCH --error=my_job_name-%j.err
#SBATCH --output=my_job_name-%j.out
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=100

# use every CPU assigned to the task as an OpenMP thread
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

./my_program   # placeholder: replace with your OpenMP executable
</code>
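Optionally, a line like the following can be added to the script just before launching the program, so that the output file records the thread count and the node that was used:

<code bash>
# optional: log the OpenMP thread count and the node the job ran on
echo "Using $OMP_NUM_THREADS OpenMP threads on $(hostname)"
</code>
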
==== Running a 4-process MPI job on one node ====
In this script we request 4 tasks, all of them on the same node.
<code bash>
#!/bin/bash
#SBATCH --job-name=my_job_name
#SBATCH --error=my_job_name-%j.err
#SBATCH --output=my_job_name-%j.out
#SBATCH --ntasks=4
#SBATCH --ntasks-per-node=4

module load openmpi

srun ./my_program   # placeholder: replace with your MPI executable
</code>
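If you want to verify that all 4 tasks really end up on the same node, a quick check (a sketch based on the standard //SLURM_PROCID// variable set by //srun//) is:

<code bash>
# each task prints its MPI rank (SLURM_PROCID) and the node it runs on
srun bash -c 'echo "task $SLURM_PROCID runs on $(hostname)"'
</code>
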
==== Running a 32-process MPI job on two nodes ====
In this script we request 32 tasks, 16 of them on each node (i.e. we are requesting 2 nodes). We also load the openmpi module.

<code bash>
#!/bin/bash
#SBATCH --ntasks=32
#SBATCH --ntasks-per-node=16

module load openmpi

srun ./my_program   # placeholder: replace with your MPI executable
</code>
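As a side note, the same two-node request can also be expressed by fixing the node count explicitly; the following header lines are a sketch of an equivalent request:

<code bash>
# equivalent request: 2 nodes with 16 tasks each, i.e. 32 tasks in total
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16
</code>
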
==== Running a 16-process MPI job using 16 GPUs on one node ====
In this script we request 16 GPUs and 16 tasks, all of them on the same node. Moreover, we request the job to be enqueued in the //longrun// partition.
We also load the cuda and openmpi modules.
Remember that SLURM will decide which GPUs to reserve for you, thus it is your program's duty to select the correct device IDs, otherwise the GPUs cannot be accessed. The list of the reserved device IDs is in the [[https://devblogs.nvidia.com/parallelforall/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/|CUDA_VISIBLE_DEVICES]] environment variable.
<code bash>
#!/bin/bash
#SBATCH --job-name=gpu-test
#SBATCH --error=gpu-test-%j.err
#SBATCH --output=gpu-test-%j.out
#SBATCH --ntasks=16
#SBATCH --ntasks-per-node=16
#SBATCH --partition=longrun
# request 16 GPUs (the exact gres specification may differ on your cluster)
#SBATCH --gres=gpu:16

module load cuda
module load openmpi

srun ./my_program   # placeholder: replace with your CUDA+MPI executable
</code>
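To see which device IDs were actually reserved for each task, a quick check (a sketch based only on the //SLURM_PROCID// and //CUDA_VISIBLE_DEVICES// variables) can be run from the same job script before the real program:

<code bash>
# each task prints its rank, its node and the GPUs made visible to it by SLURM
srun bash -c 'echo "task $SLURM_PROCID on $(hostname): CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"'
</code>
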

===== Additional Examples =====

You can find additional generic examples here:

[[https://...]]

[[http://...]]

===== Meaning of the most common options =====