Differences

This shows you the differences between two versions of the page.

Previous revision: users:slurm [2016/04/04 16:04] ecalore
Current revision: users:slurm [2016/11/07 13:56] (current) – [Running 16 MPI job using 16 GPUs on one node] ecalore
Line 68:
  #SBATCH --error=my_job_name-%j.err
  #SBATCH --output=my_job_name-%j.out
- #SBATCH --ntasks 4
+ #SBATCH --ntasks=4
  #SBATCH --ntasks-per-node=4
  
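The hunk above comes from a single-node MPI example: 4 tasks, all placed on one node; the only substantive change is the switch from ''--ntasks 4'' to the ''--ntasks=4'' spelling. For context, a minimal complete script consistent with these directives might look like the following sketch (the job name, the time limit, and the ''srun ./my_program'' launch line are assumptions, not taken from this page):

<code bash>
#!/bin/bash
#SBATCH --job-name=my_job_name       # assumed; matches the log file names below
#SBATCH --error=my_job_name-%j.err   # stderr; %j expands to the job ID
#SBATCH --output=my_job_name-%j.out  # stdout
#SBATCH --ntasks=4                   # 4 MPI ranks in total
#SBATCH --ntasks-per-node=4          # all 4 ranks on a single node
#SBATCH --time=00:10:00              # assumed wall-clock limit

# srun starts one copy of the program per allocated task
srun ./my_program
</code>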
Line 82:
  <code bash>
  #!/bin/bash
- #SBATCH --ntasks 32
+ #SBATCH --ntasks=32
  #SBATCH --ntasks-per-node=16
  
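This hunk is from a multi-node MPI example: 32 ranks at 16 per node, so Slurm allocates two nodes. A sketch of a complete script under the same assumptions (time limit and program name are placeholders):

<code bash>
#!/bin/bash
#SBATCH --ntasks=32            # 32 MPI ranks in total
#SBATCH --ntasks-per-node=16   # 16 ranks per node, so Slurm allocates 2 nodes
#SBATCH --time=00:10:00        # assumed wall-clock limit

# srun starts 32 copies of the program, 16 on each of the two nodes
srun ./my_program
</code>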
Line 101:
  #SBATCH --error=gpu-test-%j.err
  #SBATCH --output=gpu-test-%j.out
- #SBATCH --ntasks 16
+ #SBATCH --ntasks=16
  #SBATCH --ntasks-per-node=16
  #SBATCH --partition=longrun
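This hunk belongs to the section "Running 16 MPI job using 16 GPUs on one node" named in the revision note above. A complete script would also need to request the GPUs; in the sketch below, the ''--gres=gpu:16'' line, the time limit, and the binary name are assumptions, since the GPU request is not part of this hunk:

<code bash>
#!/bin/bash
#SBATCH --error=gpu-test-%j.err   # stderr; %j expands to the job ID
#SBATCH --output=gpu-test-%j.out  # stdout
#SBATCH --ntasks=16               # 16 MPI ranks, intended as one per GPU
#SBATCH --ntasks-per-node=16      # all ranks on a single node
#SBATCH --partition=longrun
#SBATCH --gres=gpu:16             # assumed: request the node's 16 GPUs
#SBATCH --time=01:00:00           # assumed wall-clock limit

# one rank per GPU; the binary name is a placeholder
srun ./gpu-test
</code>
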
Line 117:
  [[https://www.hpc2n.umu.se/batchsystem/examples_scripts|High Performance Computing Center North]]
+
  [[http://www.ceci-hpc.be/slurm_faq.html|Consortium des Équipements de Calcul Intensif]]