

Singularity

Singularity is a free and open-source container solution created out of necessity for scientific and application-driven workloads. Singularity defines the "Mobility of Compute" concept as the ability to organize, create, and maintain a workflow with confidence that the workflow can be executed on different hosts, operating systems (as long as they are Linux), and service providers. Being able to contain the entire software stack, from data files to libraries, and portably move it from system to system is true mobility.

Below are clarifications on various aspects of using Singularity.

Bound directories

The following directories are automatically bound and available from within each container:

  • /home
  • /scratch
  • /data
  • /shares
  • /sctmp
  • /apps

This is achieved by setting the SINGULARITY_BINDPATH environment variable for all users. The current setting is sufficient for the majority of cases.
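For example, you can inspect the current value of the variable on the login node:

echo $SINGULARITY_BINDPATH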

Warning

We strongly recommend keeping the current settings. You can specify additional bind points if necessary, but you should not redefine this list.
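If you do need an extra bind point, append it to the existing list rather than replacing it; a sketch with placeholder paths:

export SINGULARITY_BINDPATH="$SINGULARITY_BINDPATH,/path/on/host:/path/in/container"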

Default User

The default user within each container is the same as the one you have on the cluster (i.e., your UZH Active Directory account).
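You can verify this from inside a container, for example (assuming whoami is available in the image):

singularity exec <CONTAINER> whoami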

Building singularity containers

Building a container may entail actions that are not permitted on the login or compute nodes. However, this typically applies to containers built from custom definition files. If you are using an image from Docker Hub, you should be able to build it on ScienceCluster. We recommend using an interactive session because building often requires more memory than is available to users on the login nodes.

For example, you can create a rocker container with R and tidyverse packages as follows.

srun --pty -n 1 -c 2 --time=00:30:00 --mem=7G bash -l
module load singularityce
singularity build /data/$USER/tidyverse.sif docker://rocker/tidyverse

The first command opens an interactive session. The second loads the singularityce module. The final command builds the container and saves it as /data/$USER/tidyverse.sif. Please note that the container file tidyverse.sif will be in your data directory; if you save containers in your home directory, you will exceed your quota very quickly.

If you need a specific version (tag) rather than the latest one, you can add it after a colon.

singularity build /data/$USER/tidyverse.sif docker://rocker/tidyverse:4.3.1

If the building process fails due to insufficient permissions, or if you need to use a custom definition file, you will have to build the container elsewhere, e.g., on a ScienceCloud node, and then transfer it to the cluster.
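As a rough sketch, a custom definition file and the corresponding build and transfer steps might look as follows. The file name, package list, and host name below are placeholders, and building from a definition file typically requires root privileges on the build machine.

# my_container.def -- hypothetical definition file
Bootstrap: docker
From: ubuntu:22.04

%post
    apt-get update && apt-get install -y python3

%runscript
    python3 "$@"

sudo singularity build my_container.sif my_container.def
scp my_container.sif <username>@<cluster-login-node>:/data/<username>/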

One way to test the container is with the shell command.

singularity shell /data/$USER/tidyverse.sif

This command starts a shell inside the container and you should be able to launch R from that shell.
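For example, from the prompt inside the container you can check that R is available:

R --version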

Running jobs with singularity containers

Submitting jobs that use containers works in the same way as for normal jobs. An example of an sbatch script calling singularity:

#!/usr/bin/bash -l
#SBATCH --time=00:30:00
#SBATCH --mem=7G
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
singularity exec <CONTAINER> <COMMAND>
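For instance, with the tidyverse container built earlier, the last line might read as follows (myscript.R is a placeholder for your own R script):

singularity exec /data/$USER/tidyverse.sif Rscript myscript.R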

Alternatively, if you have already edited the /singularity script in your container, you can run the container directly, which by default will execute that script.

#!/usr/bin/bash -l
#SBATCH --time=01:00:00
#SBATCH --mem=7G
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
singularity run <CONTAINER>
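That script is typically generated from the %runscript section of a definition file when the container is built; a minimal hypothetical example (analysis.py is a placeholder):

%runscript
    echo "Starting analysis with arguments: $@"
    exec python3 /opt/analysis.py "$@"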

Directory binding

You can make additional directories available in the container using the --bind option. For example, if you want your appX_input directory to be available as /input within the container, follow this example:

singularity exec --bind ./appX_input:/input <CONTAINER> <COMMAND>
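Multiple bind points can be specified as a comma-separated list, for example (appX_output is a placeholder):

singularity exec --bind ./appX_input:/input,./appX_output:/output <CONTAINER> <COMMAND>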

Running MPI Jobs

Out-of-the-box compatibility is reported for OpenMPI (v2.1) and Singularity. An example of calling an MPI-enabled application is as follows:

#!/usr/bin/bash -l
#SBATCH -n 32
#SBATCH -N 2
#SBATCH --time=00:10:00
#SBATCH --mem=15G
singularity exec mpi-hello-world.sif ./mpi_hello_world

Important

You have to load the infiniband module before submitting such scripts.
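For example, assuming the script above is saved as mpi_job.sh (a placeholder name):

module load infiniband
sbatch mpi_job.sh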

Running GPU-accelerated jobs

Singularity supports GPUs. An example of calling a GPU-enabled application is as follows:

#!/usr/bin/bash -l
#SBATCH --time=00:10:00
#SBATCH --gpus=2
#SBATCH --mem=3000
singularity exec --nv tensorflow-gpu.sif python ./tensorflow_app.py

Note

The --nv flag is necessary to enable Nvidia GPU support.

Running Tensorflow jobs in a Singularity container

You can build a container from an officially supported Tensorflow image as follows.

srun --pty -n 1 -c 2 --time=00:30:00 --gpus=1 --mem=7G bash -l
module load singularityce
singularity build /data/$USER/tf_gpu.sif docker://tensorflow/tensorflow:latest-gpu

You can test it by starting a shell session in the container and running nvidia-smi as well as the Tensorflow GPU validation commands.

singularity shell --nv /data/$USER/tf_gpu.sif
nvidia-smi
python -c 'import tensorflow as tf; \
   print("Built with CUDA:", tf.test.is_built_with_cuda()); \
   print("Num GPUs Available:", len(tf.config.list_physical_devices("GPU"))); \
   print("TF version:", tf.__version__)'

Additional resources

If you are new to containers, and particularly to Singularity, you can find more information here.