Partitions

Our cluster has been partitioned according to its hardware capabilities.

CPU partitions

Here are brief descriptions of the sorts of jobs that should run on each CPU partition:

  • generic: small jobs requiring at most 32 vCPUs and/or 123 GB of RAM.
  • hpc: medium jobs requiring a high-speed interconnect or high CPU/memory (> 32 vCPUs or > 123 GB RAM per job).
  • hydra: large jobs requiring more than 377 GB of RAM.

Please note that hydra is dedicated to running large jobs while the hpc partition is specifically designed for large MPI jobs. Therefore, if a small job is submitted to hydra or hpc, we reserve the right to block it or redirect it automatically to the generic partition.
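
As an illustration, a minimal job-script header for a small job that fits the generic partition is sketched below; the resource figures are placeholders rather than recommendations, and only the partition name and its limits come from this page.

#!/bin/bash
# Minimal sketch: stays within the generic limits (at most 32 vCPUs and 123 GB of RAM).
#SBATCH --partition=generic
#SBATCH --cpus-per-task=8
#SBATCH --mem=16G
#SBATCH --time=01:00:00

# ... your commands go here ...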

Generic

Size: 120 nodes of 3 different sizes
Processor: AMD EPYC 7702
Memory: 8, 32, or 128 GB per node (40 nodes of each)
Compute Cores: 2, 8, or 32 vCPUs per node (40 nodes of each)

HPC

Size: 18 nodes
Processor: 2 x Intel Xeon Gold 6126 per node
Memory: 384 GB per node
Compute Cores: 48 vCPUs per node
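
Since hpc is intended for large MPI jobs, a job script for this partition would typically request whole nodes. The sketch below assumes a hypothetical executable mpi_program and uses placeholder task counts based on the 48 vCPUs per node listed above.

#!/bin/bash
# Minimal MPI sketch: two full hpc nodes, one task per vCPU (placeholder values).
#SBATCH --partition=hpc
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48

srun ./mpi_program   # mpi_program is a placeholder for your own MPI executable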

Hydra

Size: 6 nodes
Processor: 4 x Intel Xeon CPU E7-4850 v4 per node
Memory: 3 TB per node
Compute Cores: 128 vCores per node

GPU partitions

Here are descriptions of the sorts of jobs that should run on each GPU partition:

  • vesta: jobs requiring GPUs (Nvidia K80 cards).
  • volta: jobs requiring GPUs (Nvidia V100 cards).

Vesta nodes have older Nvidia cards that are being gradually decommissioned. V100 cards can theoretically be around 3.5 and 5.2 times faster than the K80 cards for single- and double-precision calculations, respectively.
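
If your job needs a GPU, submit it to one of these partitions together with an explicit GPU request. The minimal job-script sketch below targets the volta partition; the --gres syntax is standard Slurm, but the GRES name ("gpu") and the resource figures are assumptions rather than values taken from this page.

#!/bin/bash
# Minimal sketch: request one GPU on the volta partition.
# The GRES name "gpu" and all resource figures are assumptions.
#SBATCH --partition=volta
#SBATCH --gres=gpu:1
#SBATCH --cpus-per-task=4
#SBATCH --mem=32G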

Vesta

Size: 5 nodes
Processor: 2 x Intel Xeon E5 processors per node
GPU: 8 x NVIDIA Tesla K80 (16 GPU devices) per node
Memory: Up to 48 GB of system memory per GPU device

Volta

Size: 6 nodes
Processor: 2 x Intel Xeon Gold processors per node
GPU: NVIDIA Tesla V100 (8 GPU devices) per node
Interconnect: NVLink within a node; InfiniBand/NVLink between nodes
Configuration 1: 2 nodes with 16 GB V100 cards (16 GPUs total); 48 GB of system memory per GPU device
Configuration 2: 4 nodes with 32 GB V100 cards (32 GPUs total); 96 GB of system memory per GPU device

Partition selection

You can switch to a specific partition by loading one of the partition modules. For example, the following command selects the generic partition.

module load generic

Alternatively, you can add the --partition parameter to your sbatch command or job script.
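
For example, either of the following has the same effect as loading the generic partition module (job.sh is a placeholder for your own job script):

sbatch --partition=generic job.sh

or, inside the job script itself:

#SBATCH --partition=generic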

No partition is selected by default, so if you do not load a partition module and do not specify the partition explicitly as an sbatch parameter, your job will be rejected. You can see the list of available partitions by listing all available modules with module avail or module av; partitions are listed in the section titled /sapps/etc/modules/start. The following command displays the partitions that you can access.

sacctmgr show assoc format=partition,account%20 user=<username>

Unless extra partitions are explicitly requested, new users get access only to the generic partition. If you need access to a partition that is not listed for your account, please ask the account owner or the technical contact to send S3IT a request.

After loading a partition, you can also use the module av command to see the list of software available on that partition.
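
For example, assuming your account has access to the hpc partition:

module load hpc
module av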


Last update: March 21, 2022