MPI

MPI is a library of routines that can be used to create parallel programs. The MPI standard was developed to overcome the interoperability problems caused by vendor-specific parallel programming constructs. It is a library that supplies commonly available operating-system services to create parallel processes and exchange information among these processes.
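
For illustration, once an MPI module is loaded (see the examples below), a typical workflow is to compile the program with the MPI compiler wrapper and start it with the MPI launcher. This is only a sketch: hello.c and the process count are placeholders for your own program and job size.

# Compile an MPI program with the compiler wrapper provided by the module
mpicc hello.c -o hello

# Start 4 parallel processes that exchange information through MPI
mpirun -np 4 ./hello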

MPI is designed to allow users to create programs that can run efficiently on most parallel architectures. The design process included vendors (such as IBM, Intel, TMC, Cray, Convex, etc.), parallel library authors (involved in the development of PVM, Linda, etc.), and application specialists. The final version of the draft standard became available in May 1994.

There are various flavours of MPI available on the cluster; they all come with limited support. Currently the following are installed:

  • OpenMPI
  • IntelMPI

OpenMPI, IntelMPI and cluster modules

By default, Ethernet-optimized versions of MPI are loaded when an MPI module is loaded without setting a hardware constraint. This is because Ethernet is available on all nodes in the cluster, whereas Infiniband is only available on HPC-nodes and a subset of VESTA-nodes.

  • Ethernet MPI: OpenMPI or IntelMPI
  • Infiniband MPI: OpenMPI
  • Infiniband MPI + CUDA: OpenMPI

Examples of loading the various Ethernet MPI variants:

# OpenMPI
module load openmpi

# IntelMPI
module load intel-oneapi-mpi

If one wants to use a version of MPI that is optimized for Infiniband:

module load infiniband

# OpenMPI
module load openmpi

For Infiniband with CUDA, you can choose the following:

module load multigpu

# OpenMPI
module load openmpi
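
To verify which MPI implementation a loaded module actually provides, you can inspect the tools it places on your PATH. A quick check (the exact output depends on the implementation and version):

# Show which compiler wrapper is currently active
which mpicc

# Print the MPI implementation and its version
mpirun --version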

Take note that the commands to load specific hardware resources (module load infiniband or module load multigpu) automatically constrain the resources on which the computation will be allowed to run: the MPI version is optimized for that hardware and should not run on other nodes.
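
Putting this together, a complete sequence for an Infiniband run could look as follows. This is only a sketch: hello.c and the process count are placeholders, and how the run is actually submitted to the compute nodes depends on the cluster's job scheduler, which is not covered on this page.

# Select the Infiniband-optimized stack (this also constrains the eligible nodes)
module load infiniband
module load openmpi

# Compile and start the program with 8 MPI processes
mpicc hello.c -o hello
mpirun -np 8 ./hello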


Last update: March 7, 2023