
Cluster description

Hardware

68 compute nodes (28 cores, 256 GB RAM each)
DELL C6320 (17 chassis of 4 nodes each), 2x Intel Xeon E5-2695v3 (2.3 GHz, 14 cores), 256 GB RAM

1 "fat memory" compute node (64 cores, 3 TB RAM)
DELL R930, 4x Intel Xeon E7-8860v3 (2.2 GHz, 16 cores), 3 TB RAM

Storage based on MooseFS: 200 TB (locally replicated)
5x Dell R730xd, Intel Xeon E5-2630v4, 64 GB RAM, 2x 200 GB SSD, 12x 8 TB 7200 rpm HDD

1 login and 1 admin node
DELL R630, 2x Intel Xeon E5-2620v4 (2.1 GHz, 8 cores), 128 GB RAM

Local network: 10 Gbit/s Ethernet
DELL S6000 switch, 10/40 Gbit/s Ethernet

Internet access: 1 Gbit/s

Total of 1,968 cores (68 × 28 + 64) and about 20 TB of RAM (68 × 256 GB + 3 TB)

[Rack diagram: admin node, storage nodes (R730xd), compute nodes cpu-node-01 to cpu-node-48, cpu-node-49 to cpu-node-68, and the fat node cpu-node-69, linked by two 10/40 Gbit/s Ethernet switches]

Software layer

The cluster is managed by Slurm (version 17.11.7).
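
Jobs are submitted to Slurm as batch scripts. The sketch below is a minimal example; the job name and resource values are illustrative, not this cluster's actual policy, and the partitions actually available should be checked with `sinfo`.

```bash
#!/bin/bash
# Minimal Slurm batch script (sketch; job name and resource values
# are illustrative, not this cluster's policy).
#SBATCH --job-name=example
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=01:00:00

# The actual work: here, simply report which node ran the job.
srun hostname
```

Submit with `sbatch example.sh` and monitor with `squeue -u $USER`.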

Scientific software and tools are available through Environment Modules and are mainly based on Conda packages or Singularity images.
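
A typical session might look like the sketch below; the module name `blast/2.9.0` and the image name `tool.sif` are hypothetical, used only for illustration.

```bash
# List the software exposed through Environment Modules.
module avail

# Load a Conda-backed tool into the current shell (hypothetical name).
module load blast/2.9.0
blastn -version

# Tools shipped as Singularity images are run through the image
# (hypothetical image and command names).
singularity exec tool.sif tool --help
```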

Operating system: CentOS on the cluster nodes, with Ubuntu on some surrounding machines

Around the cluster itself, management relies on Nagios Core for monitoring, and on Proxmox VE and VMware ESX for virtualization.

Deployment and configuration are powered by Ansible and GitLab Community Edition.

[Diagram: orchestration overview]
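
As an illustration of that workflow, a deployment run from the admin node might look like the following; the repository URL, inventory, and playbook names are hypothetical.

```bash
# Fetch the configuration repository from GitLab CE (hypothetical URL).
git clone https://gitlab.example.org/admin/cluster-config.git
cd cluster-config

# Preview the changes Ansible would make, then apply them
# (hypothetical inventory and playbook names).
ansible-playbook -i inventory.yml site.yml --check
ansible-playbook -i inventory.yml site.yml
```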