
Containers (Singularity) Tutorial

This tutorial demonstrates how to use Singularity to create a container-based software environment for use on ScienceCluster (or elsewhere).

Info

Singularity's cache can quickly use up all the storage space in your home folder. Consider changing the Singularity cache directory to your /data directory by modifying your .bashrc file. You can do this by running the following command from a login node: echo "export SINGULARITY_CACHEDIR=/data/$USER/singularity" >> ~/.bashrc.
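After adding this line, you can apply it to your current shell (new logins will pick it up automatically) and verify the setting:

source ~/.bashrc
echo $SINGULARITY_CACHEDIR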

Workflow Overview

Before beginning, it's helpful to understand the basic parts of a Singularity workflow.

First, you need to create a container. You can either obtain a prebuilt container from a container registry such as Docker Hub or build one from your own definition file. In the first case, you can usually create the container directly on ScienceCluster. However, if you need customizations such as installing additional applications, combining multiple frameworks, or adding custom configuration, you will need to build a SIF image elsewhere (e.g., on a ScienceCloud VM), transfer the image to ScienceCluster, and expand it there (sandboxing).

Once your container is ready, you need to prepare your code to run from the Singularity environment. This step involves augmenting your Slurm submission script to use the Singularity container.

Building from a container registry

Container registries such as Docker Hub offer a wide range of prebuilt containers. These do not need to be built on ScienceCluster; they only need to be converted to the appropriate Singularity format, which can typically be done directly on ScienceCluster. Popular containers available through Docker Hub include rocker/tidyverse and tensorflow/tensorflow, both used as examples in this tutorial.

Once you've found the software that you want, copy the relevant container ID from the provided docker pull ... command, prepend docker:// to it, and use it in the singularity build command as shown below. Using docker://rocker/tidyverse will pull the latest available version of the software installed in the container. You can view the available versions of tidyverse via the Tags subpage. For example, if you specifically need R 4.3.1, copy the ID from the corresponding docker pull ... command; the full string to use in the singularity build command is then docker://rocker/tidyverse:4.3.1.

For example, you can create a rocker container with R and the tidyverse packages as follows. We recommend using an interactive session, as building often requires more memory than is available to users on the login nodes.

srun --pty -n 1 -c 2 --time=00:30:00 --mem=7G bash -l
module load singularityce
singularity build --sandbox /data/$USER/tidyverse docker://rocker/tidyverse

The first command opens an interactive session, the second loads the Singularity module, and the final command builds the sandboxed container in /data/$USER/tidyverse. Note that the container directory tidyverse will be in your data directory; if you save containers in your home directory, you will exceed your quota very quickly.

The --sandbox flag means that the container will be built as a directory tree rather than a single file. Sandboxed containers should be preferred over "single file", or more precisely unexpanded, containers such as SIF files. For security reasons, Singularity has to expand (sandbox) all unexpanded containers, and this process can be quite slow. If you run multiple jobs using an unexpanded container, Singularity will create a sandbox version for each job, which not only makes the jobs run slower but also puts considerable strain on the cluster file systems.

If you need a specific version (tag) rather than the latest one, you can add it after the colon.

singularity build --sandbox /data/$USER/tidyverse docker://rocker/tidyverse:4.3.1

If the building process fails due to insufficient permissions, you may have to build the container as a sif file, transfer it to ScienceCluster, and expand it as described below.
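A minimal sketch of that fallback, assuming you build on a ScienceCloud VM (or another machine where the build succeeds) and then transfer the resulting tidyverse.sif to ScienceCluster:

# on the build machine
singularity build tidyverse.sif docker://rocker/tidyverse:4.3.1
# after transferring tidyverse.sif to ScienceCluster, expand it there
singularity build --sandbox /data/$USER/tidyverse tidyverse.sif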

Building a custom image

Creating a custom image from a definition file

If you need to add software to a pre-existing Singularity Image File, you'll need to use a custom definition file that "bootstraps" a Singularity or Docker image. "Bootstrapping" an image means using it as a starting point for building a custom image with additional software or configuration. Because this requires superuser (sudo) privileges, it needs to be done either on a ScienceCloud VM (using a source VM image that has Singularity preinstalled; e.g., Singularity on Ubuntu 20.04 (2023-11-06)) or on your own computer. The installation directions can be found here. After the *.sif file has been created, you need to transfer it to ScienceCluster and expand it for production use.

Once you have Singularity available via a ScienceCloud VM or via your own laptop, you'll need to write a definition file. The definition file is a plain text file that specifies the starting image that you'll bootstrap as well as additional commands that add more software to the container.

An example Singularity definition file might look like the following:

Bootstrap: docker
From: tensorflow/tensorflow

%post
    pip install pandas

This example uses a TensorFlow container from Docker Hub and includes a %post section to install the pandas package. In general, the %post section lets you define commands that augment the container. The pip program is available because it was already installed in the TensorFlow container.

To find out what software is available in a container, you can either research the existing Docker Hub information on the container or use the singularity shell command to explore the container interactively. The singularity shell command can be used either directly on a Singularity image file or on a Singularity sandbox directory (described in the section below).
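For example, a brief interactive inspection of the TensorFlow image might look as follows (tensorflow_latest.sif is the file name that singularity pull produces by default for this container):

singularity pull docker://tensorflow/tensorflow    # creates tensorflow_latest.sif
singularity shell tensorflow_latest.sif
# inside the container:
pip --version
pip list
exit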

A more complex example of a Singularity definition file might look like the following:

Bootstrap: docker
From: rocker/tidyverse:4.0.3

%post
    apt-get update && . /etc/environment
    wget sourceforge.net/projects/mcmc-jags/files/JAGS/4.x/Source/JAGS-4.3.0.tar.gz -O jags.tar.gz
    tar -xf jags.tar.gz
    cd JAGS* && ./configure && make -j4 && make install
    cd ~
    apt-get update && . /etc/environment
    wget sourceforge.net/projects/jags-wiener/files/JAGS-WIENER-MODULE-1.1.tar.gz -O jagswiener.tar.gz
    tar -xf jagswiener.tar.gz
    cd JAGS-WIENER-MODULE-1.1 && ./configure && make -j4 && make install
    R -e "install.packages('runjags')"

Notice that this example uses many operating system commands to prepare and install system-level packages, for example apt-get update and make install. You can use these commands because the rocker/tidyverse:4.0.3 container is built on Ubuntu 20.04. To determine this, you can pull the prebuilt image from Docker Hub using singularity pull docker://rocker/tidyverse:4.0.3, then use singularity shell tidyverse_4.0.3.sif to open a command line directly within the container. From that command line, run lsb_release -a to find the operating system version.
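Put together, the check described above looks like this:

singularity pull docker://rocker/tidyverse:4.0.3    # downloads tidyverse_4.0.3.sif
singularity shell tidyverse_4.0.3.sif
# inside the container:
lsb_release -a
exit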

Once you've saved your definition file as a text file (e.g., using the file name recipe.def), you can then try to build a Singularity Image File from it using:

sudo singularity build tensorflow.sif recipe.def

In this example, the output *.sif file will be named tensorflow.sif, but you can choose any name you like.

Note

The process of creating a Singularity Image File from a definition file will often take a significant amount of trial and error. Be patient and persistent. Use singularity shell on each of the image files you create to open your environment and confirm whether you can load your software/packages of interest. Do not run your code from the singularity shell; instead, see the final section below on how to augment your cluster submission script to run your code.

Once you've created a Singularity Image File, you should transfer it to ScienceCluster.
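For example, with scp from the machine where you built the image (the login node address and target path below are placeholders; substitute your own):

scp tensorflow.sif <username>@<sciencecluster-login-node>:/data/<username>/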

Unpacking the image

When the file is located on ScienceCluster, you should create a "sandbox" directory from it. A "sandbox" directory is an "unpacked" Singularity Image File. When you use a Singularity Image File without sandboxing it, every run has to "unpack" the files to access and run them; if you create a sandbox directory from the image file, you unpack the Singularity Image File once (and only once) and avoid unpacking it again during future uses of the software.

To unpack a Singularity Image File named tensorflow.sif into a directory named tensorflow, the command is:

singularity build --sandbox tensorflow tensorflow.sif

Once this command has been run, which might take anywhere from several seconds to several minutes, you'll end up with a directory titled tensorflow in the place where you ran this command. At this point, you're ready to use (and re-use) your prepared Singularity container environment.
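As a quick sanity check, you can execute a small command in the sandboxed container, for instance (assuming the container provides Python with TensorFlow, as this one does):

singularity exec tensorflow python -c "import tensorflow as tf; print(tf.__version__)"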

Prepare your code to run from the Singularity environment

The final step in a Singularity workflow is preparing your code to use the environment. This involves editing the Slurm submission script. Take, for example, the following submission script (more examples on this documentation page):

#!/bin/bash
#SBATCH --time=00:10:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=3000
#SBATCH --gpus=1
module load anaconda3
source activate tf
python examplecode.py

This code assumes that the tf environment has already been created, as it uses the source activate tf line to activate that environment. Then, it runs the examplecode.py script using python. To change this submission script workflow to use a Singularity container environment, consider the following submission script:

#!/bin/bash
#SBATCH --time=00:10:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=3000
#SBATCH --gpus=1
module load singularityce
singularity exec --nv tensorflow python examplecode.py

The principal edits to this script are:

  • The module load anaconda3 and source activate tf lines have been replaced by module load singularityce: Anaconda is no longer used as the environment manager; Singularity is used instead.
  • Instead of simply using python examplecode.py to run the script of interest, there is now an extended singularity exec command. This command executes an arbitrary command of interest using the specified Singularity container environment. The flags to this command are crucial to understand:
    • The --nv flag allows the container to access the NVIDIA drivers so that it can take advantage of available GPUs (see here).
    • The tensorflow argument specifies the sandbox directory created from the Singularity Image File.

Once you've augmented your Slurm Submission script, your code is ready to be submitted using a standard submission workflow (i.e., via sbatch after loading modules of interest).
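For example, if the script above is saved as submit.sh (a name chosen here for illustration):

sbatch submit.sh
squeue -u $USER    # check the job's status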

Using an interactive session to explore a Singularity Container

As mentioned previously, it's often helpful to explore a Singularity container's environment interactively rather than via a submission script—especially when creating the Singularity Image file. To do so, first request and receive an appropriate interactive session on a ScienceCluster node, which will ensure you don't use too many resources on a login node.
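For example, a GPU-capable interactive session could be requested as follows (the resource values are placeholders; adjust them to your needs):

srun --pty -n 1 -c 2 --time=01:00:00 --mem=8G --gpus=1 bash -l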

Once you've received an interactive session that meets your computational needs, and after you've loaded the required modules (e.g., module load gpu), you can then request a Singularity shell prompt from the Command Line that will open inside of the container environment. For example:

singularity shell --nv tensorflow

will open a shell in the sandboxed tensorflow Singularity container.
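From that shell you can confirm that the GPU and the installed software are visible, for example (assuming a TensorFlow 2.x container):

# inside the container:
nvidia-smi
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
exit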

Singularity Cache Cleaning

If you commonly build Singularity Image Files on ScienceCluster, you may notice that your /home folder is filling up with data. This is likely due to the Singularity cache being filled with reference files from the building processes. To clean the cache, simply run:

singularity cache clean
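To inspect what is currently cached before cleaning, you can also run:

singularity cache list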