Crane has four types of GPUs available in the gpu partition. The type of GPU is configured as a SLURM feature, so you can specify a type of GPU in your job resource requirements if necessary.
| Description | SLURM feature | Available Hardware |
| --- | --- | --- |
| Tesla M2070, non-IB | gpu_m2070 | 3 nodes - 2 GPUs per node |
| Tesla K20, with IB | gpu_k20 | 3 nodes - 3 GPUs per node |
| Tesla K40, with IB | gpu_k40 | 5 nodes - 4 K40M GPUs per node; 1 node - 2 K40C GPUs per node |
| Tesla P100, with OPA | gpu_p100 | 2 nodes - 2 GPUs per node |
To run your job on the next available GPU regardless of type, add the following options to your srun or sbatch command:
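A minimal form of those options might look like the following (the partition name `gpu` comes from this document; `--gres=gpu` requests a single GPU of any type):

```shell
--partition=gpu --gres=gpu
```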
To run on a specific type of GPU, you can constrain your job to require a feature. To run on K40 GPUs for example:
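For instance, constraining the job to the `gpu_k40` feature from the table above could be sketched as (combined with the partition and gres options):

```shell
--partition=gpu --gres=gpu --constraint=gpu_k40
```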
You may request multiple GPUs by changing the `--gres` value to `--gres=gpu:2`. Note that this value is per node. For example, `--nodes=2 --gres=gpu:2` will request 2 nodes with 2 GPUs each, for a total of 4 GPUs.
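The multi-GPU example above, written as sbatch directives (a sketch; only the `--nodes`, `--gres`, and `--partition` values come from this document):

```shell
#SBATCH --partition=gpu
#SBATCH --nodes=2
#SBATCH --gres=gpu:2
```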
Compilation of CUDA or OpenACC jobs must be performed on the GPU nodes. Therefore, you must run an interactive job to compile. An example command to compile in the gpu partition could be:
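One plausible form of such a command is shown below; the resource flags are an assumption chosen to match the 6-core/6GB description that follows:

```shell
srun --partition=gpu --gres=gpu --ntasks-per-node=6 --mem-per-cpu=1024 --pty $SHELL
```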
The above command will start a shell on a GPU node with 6 cores and 6 GB of RAM, suitable for compiling a GPU job. It is also useful if you want to run a test GPU job interactively.
CUDA and OpenACC submissions must run on GPU nodes. OpenACC submissions additionally require loading the PGI compiler module (which is currently also required at compile time).
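A minimal sbatch script for a CUDA submission might look like the following sketch; the job name, time limit, module name, and binary `./cuda-app` are placeholders, not taken from this document:

```shell
#!/bin/sh
#SBATCH --time=00:30:00          # placeholder time limit
#SBATCH --partition=gpu          # GPU partition from this document
#SBATCH --gres=gpu               # request one GPU of any type
#SBATCH --job-name=cuda-example  # placeholder job name

module load cuda                 # module name is an assumption
./cuda-app                       # placeholder compiled CUDA binary
```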