# SLURM at IFB Core

## Partitions and resource limits

To request access to on-demand partitions, such as the bigmem node or the gpu nodes, please submit a request on the community support website, specifying your login and project name.

⚠️ The values below can change. To check the current situation:

```bash
scontrol show partition
sacctmgr list qos format=Name,Priority,MaxTRESPU%20
```

| Partition | Time limit  | Max resources / user | Purpose                                     |
|-----------|-------------|----------------------|---------------------------------------------|
| fast      | <= 24 hours | cpu=300, mem=1500GB  | Default - Regular jobs                      |
| long      | <= 30 days  | cpu=300, mem=1500GB  | Long jobs                                   |
| bigmem    | <= 60 days  | mem=4000GB           | On demand - For jobs requiring a lot of RAM |
| gpu       | <= 3 days   | cpu=300, mem=1500GB  | On demand - Access to GPU cards             |
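
For example, a batch script can target one of these partitions with standard SLURM directives. This is a minimal sketch with placeholder resource values and a placeholder command, not an IFB-specific template:

```bash
#!/bin/bash
#SBATCH --job-name=example_job    # placeholder job name
#SBATCH --partition=long          # fast (default), long, bigmem or gpu
#SBATCH --time=2-00:00:00         # requested walltime, must fit the partition limit
#SBATCH --cpus-per-task=8         # placeholder CPU request
#SBATCH --mem=32GB                # placeholder memory request

srun my_command                   # placeholder command
```

Submit it with `sbatch example_job.sh`; remember that bigmem and gpu require the access request described above.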

## Default values

| Parameter         | Default value |
|-------------------|---------------|
| `--mem`           | 2GB           |
| `--cpus-per-task` | 1             |
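
When a job needs more than these defaults, the values can be overridden on the command line or in the batch script. The figures below are arbitrary examples, not recommendations:

```bash
# Ask for 4 CPUs and 16 GB of RAM instead of the 1 CPU / 2 GB defaults
srun --cpus-per-task=4 --mem=16GB my_command

# Equivalent directives inside a batch script
#SBATCH --cpus-per-task=4
#SBATCH --mem=16GB
```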

## IFB Core Cluster computing nodes

⚠️ The values below can change. To check the current situation:

```bash
sinfo -Ne --format "%.15N %.4c %.7z %.7m" -S c,m,N | uniq
```

(Last update: 20/11/2023)

### CPU nodes

| Nbr | CPU (hyper-threads) | RAM (GB) |
|-----|---------------------|----------|
| 10  | 30                  | 128      |
| 1   | 38                  | 256      |
| 66  | 54                  | 256      |
| 16  | 254                 | 2041     |
| 1   | 124                 | 3096     |

(Last update: 04/05/2022)

### GPU nodes

| Nbr | CPU | RAM (GB) | GPU | Type                    | Disk /tmp |
|-----|-----|----------|-----|-------------------------|-----------|
| 3   | 62  | 515      | 2   | NVIDIA Ampere A100 40GB | 4 TB      |

The GPU cards have been partitioned into isolated GPU instances with Multi-Instance GPU (MIG).

#### GPU Instance Profiles

⚠️ The values below can change. To check the current situation:

```bash
sinfo -Ne -p gpu --format "%.15N %.4c %.7m %G"
```

| Profile Name | GPU Memory | GPU Compute Slices | Number of Instances Available |
|--------------|------------|--------------------|-------------------------------|
| 1g.5gb       | 5GB        | 1                  | 14                            |
| 3g.20gb      | 20GB       | 3                  | 2                             |
| 7g.40gb      | 40GB       | 7                  | 3                             |
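
A GPU job typically selects the gpu partition and asks for one of these MIG instances through a generic resource (GRES) request. The exact GRES string depends on how the cluster exposes the MIG profiles, so `gpu:1g.5gb:1` below is an assumption based on the profile names above; check the `sinfo` output or the support team for the authoritative syntax. A minimal sketch:

```bash
#!/bin/bash
#SBATCH --partition=gpu            # on-demand partition, access must be requested
#SBATCH --gres=gpu:1g.5gb:1        # assumed GRES name for one 1g.5gb MIG instance
#SBATCH --cpus-per-task=8          # placeholder CPU request
#SBATCH --mem=32GB                 # placeholder memory request
#SBATCH --time=1-00:00:00          # within the 3-day limit of the gpu partition

srun nvidia-smi                    # lists the MIG device visible to the job
```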