Using Singularity¶
This guide provides general instructions for using Singularity to create and manage container-based software environments.
Overview¶
Singularity is a free and open-source container solution created out of necessity for scientific and application-driven workloads. Singularity, and its fork Apptainer, define the "Mobility of Compute" concept as the ability to organize, create, and maintain a workflow and be confident that the workflow can be executed on different hosts, operating systems (as long as they are Linux-based), and service providers. Being able to contain the entire software stack, from data files to libraries, and portably move it from system to system is true mobility.
Tip
ScienceCluster users can load Singularity with:
module load singularityce
For detailed usage instructions, see the ScienceCluster Singularity guide.
ScienceCloud users can access Singularity by creating an instance from one of the latest Singularity images. The latest images have a *** prefix in their names - for example, ***Singularity on Ubuntu 24.04 (2024-11-21).
Other users can find installation instructions in the Official Singularity User Guide.
Building a Singularity container¶
Singularity allows you to create portable containers using two main approaches:
- Build directly from a container registry (e.g., Docker Hub).
- Build a custom image using a definition file.
The table below compares these two approaches; detailed instructions for each follow.
| Feature | From Container Registry | From Definition File |
|---|---|---|
| Source Image | Docker Hub or another container registry | Docker image or another Singularity image |
| Customization | Limited (cannot modify during build) | Full customization via `%post`, `%environment`, etc. |
| Use Case | Quick start with standard tools | Add software, configurations, or build custom environments |
| Build Command | `singularity build <output.sif> docker://<image>` | `sudo singularity build <output.sif> <definition.def>` |
Building from a container registry¶
Container registries such as Docker Hub offer a wide range of prebuilt images you can use with Singularity.
Once you've found the software that you want, you can build the `.sif` file from a Docker image using this syntax:
singularity build <output.sif> docker://<image>:<tag>
If you omit the `<tag>`, it defaults to `latest`, the most recent available version.
Example 1: R and Tidyverse with Rocker container¶
For example, to build a Rocker container with the most recent version of R and the tidyverse packages:
singularity build tidyverse.sif docker://rocker/tidyverse
If you need a specific version—for instance, R 4.3.1—you can specify the tag accordingly:
singularity build tidyverse.sif docker://rocker/tidyverse:4.3.1
You can browse available versions (tags) on the rocker/tidyverse Tags page.
To explore the container interactively, open a shell inside it:
singularity shell tidyverse.sif
This command starts a shell inside the container, where you can launch R.
Example 2: CPU-only and GPU-enabled TensorFlow container¶
To build a TensorFlow container from Docker Hub, you'd write:
singularity build tf.sif docker://tensorflow/tensorflow
To build a GPU-enabled TensorFlow container, use a tag with the `-gpu` suffix in the container name:
singularity build tf-gpu.sif docker://tensorflow/tensorflow:2.15.0-gpu
Note
The `latest-gpu` tag of the official TensorFlow Docker image does not currently support GPU acceleration as expected. As a workaround, we recommend using the `tensorflow:2.15.0-gpu` version, which recognizes and utilizes available GPU resources.
Creating a custom image from a definition file¶
If you need to add software to a pre-existing Singularity Image File, you'll need to use a custom definition file to 'bootstrap' from a Singularity or Docker image. "Bootstrapping" an image means using an existing image as the starting point for building a custom image with additional software or configuration. The definition file (`.def`) is a plain-text file that specifies the starting image you'll bootstrap from, as well as additional commands that install more software in the container.
Once you've saved your definition file as a text file with a `.def` extension, you can build a Singularity Image File (`.sif`) from it using the following command:
sudo singularity build <output.sif> <definition.def>
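As a general sketch, a definition file can contain several optional sections beyond `%post`. The section names below are standard Singularity sections; the base image and the commands inside them are illustrative assumptions, not requirements:

```
Bootstrap: docker
From: ubuntu:22.04

%post
    # Runs inside the container at build time: install software here
    apt-get update && apt-get install -y --no-install-recommends curl

%environment
    # Environment variables set for every run of the container
    export LC_ALL=C

%labels
    # Arbitrary metadata, shown by `singularity inspect`
    Author example-user

%runscript
    # Default command executed by `singularity run <image.sif>`
    echo "Container is ready"
```

The examples that follow only need `%post`, but `%environment` and `%runscript` are often useful for reproducible setups.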
Example 1: TensorFlow container + `pandas` installation¶
Create the CPU-only Singularity definition file (e.g. `tensorflow-pandas.def`):
Bootstrap: docker
From: tensorflow/tensorflow
%post
pip install pandas
This example uses a TensorFlow container from Docker Hub and includes a `%post` section to install the `pandas` package. In general, the `%post` section lets you define commands that augment the container. The `pip` program is available because it was already installed in the TensorFlow container.
To create an image called `tensorflow.sif` based on the definition file `tensorflow-pandas.def`, you would run:
sudo singularity build tensorflow.sif tensorflow-pandas.def
This will generate a file named `tensorflow.sif`, but you can choose any name you prefer for the output file.
You can test the container and the version of the installed `pandas` library:
singularity exec tensorflow.sif python -c "import pandas as pd; print(pd.__version__)"
You should see the installed `pandas` version printed to the terminal.
Example 2: GPU-enabled TensorFlow container + `pandas` installation¶
An equivalent GPU-capable Singularity definition file (e.g. `tensorflow-pandas-gpu.def`) might look like this:
Bootstrap: docker
From: tensorflow/tensorflow:2.15.0-gpu
%post
pip install pandas
Build the GPU-enabled Singularity container:
sudo singularity build tensorflow-gpu.sif tensorflow-pandas-gpu.def
Test the container and the `pandas` version from a shell:
singularity exec --nv tensorflow-gpu.sif python -c "import pandas as pd; print(pd.__version__)"
The `--nv` flag is required to enable access to NVIDIA GPU resources from within the container. You can also check which GPU devices are recognized inside the container environment by running:
singularity shell --nv tensorflow-gpu.sif
nvidia-smi
python -c 'import tensorflow as tf; \
print("Built with CUDA:", tf.test.is_built_with_cuda()); \
print("Num GPUs Available:", len(tf.config.list_physical_devices("GPU"))); \
print("TF version:", tf.__version__)'
Example 3: R and Tidyverse with Rocker container + custom software installation¶
A more complex example of a Singularity definition file (e.g. `tidyverse-jags.def`) might look like the following:
Bootstrap: docker
From: rocker/tidyverse:4.0.3
%post
apt-get update && . /etc/environment
wget sourceforge.net/projects/mcmc-jags/files/JAGS/4.x/Source/JAGS-4.3.0.tar.gz -O jags.tar.gz
tar -xf jags.tar.gz
cd JAGS* && ./configure && make -j4 && make install
cd ~
apt-get update && . /etc/environment
wget sourceforge.net/projects/jags-wiener/files/JAGS-WIENER-MODULE-1.1.tar.gz -O jagswiener.tar.gz
tar -xf jagswiener.tar.gz
cd JAGS-WIENER-MODULE-1.1 && ./configure && make -j4 && make install
R -e "install.packages('runjags')"
Build the container:
sudo singularity build tidyverse-jags.sif tidyverse-jags.def
Notice that this example uses several operating-system commands, such as `apt-get update` and `make install`, to prepare and install system-level packages. These commands are available because the `rocker/tidyverse:4.0.3` container uses Ubuntu 20.04 as its underlying operating system.
To confirm the base OS of a container, open a shell inside the container:
singularity shell tidyverse-jags.sif
Once inside the container, run the following command to check the operating system details:
lsb_release -a
This will output information such as the distribution name, version, and codename (e.g., Ubuntu 20.04), helping you verify which system-level package manager and commands are appropriate to use in your definition file.
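If `lsb_release` happens not to be installed in a given base image, most Linux distributions also ship an `/etc/os-release` file that can be read with standard tools. A minimal sketch, to be run inside the container shell:

```shell
# Print the distribution name and version from /etc/os-release;
# these fields are defined by the freedesktop os-release standard.
grep -E '^(NAME|VERSION)=' /etc/os-release
```

This works in most minimal base images where `lsb_release` is absent.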
Note
The process of creating a Singularity Image File from a definition file will often take a significant amount of trial and error. Be patient and persistent. Explore each image file you create interactively with `singularity shell` to open your environment and confirm that you can load your software/packages of interest.
Exploring a Singularity container using an interactive shell¶
It is often helpful to explore a Singularity container's environment interactively, especially when creating or testing a Singularity Image File (`.sif`). For example, this allows you to see what software is available inside the container. You can also refer to the container's information on Docker Hub if it was built from a Docker image.
To launch an interactive shell within the container, run:
singularity shell <container>
For GPU-enabled containers, add the `--nv` flag to allow the container to access GPU resources if available:
singularity shell --nv <container>
This command opens a shell inside the `<container>`.
Running commands inside a Singularity container¶
To run a specific command inside a container:
singularity exec <container> <command>
For example, to execute the command `python script.py` inside the `tensorflow.sif` container:
singularity exec tensorflow.sif python script.py
Directory binding¶
You can make additional directories available in the container using the `--bind` option as follows:
singularity exec --bind <host_directory>:<container_directory> <container> <command>
For example, if you want the `project_data/` directory on your host machine to be available (mounted as `/input`) within the container, use:
singularity exec --bind project_data/:/input tensorflow.sif bash
This command starts a bash shell inside the container and ensures that the `/input` directory inside the container mirrors the contents of `project_data/` on your host machine.
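If you need more than one directory, `--bind` also accepts a comma-separated list of bind specifications. The host paths below are hypothetical placeholders:

```
singularity exec --bind project_data/:/input,results/:/output tensorflow.sif bash
```

Each `host:container` pair is mounted independently, so `/input` and `/output` inside the container map to `project_data/` and `results/` on the host.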
Singularity cache¶
If you frequently build Singularity Image Files, your `$HOME` folder may fill up because the Singularity cache stores reference files from the build process.
Cleaning the cache¶
To free up space by clearing cached data, run:
singularity cache clean
Cleaning the cache does not delete built images.
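Before cleaning, you can inspect what the cache currently holds with the companion subcommand:

```
singularity cache list
```

This prints a summary of the cached container files and blobs along with their total size, which helps you judge whether cleaning is worthwhile.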
For more details, refer to the official documentation:
📘 Singularity cache clean - User Guide
Default cache directory¶
By default, Singularity stores cached files in the following directory:
$HOME/.singularity/cache
To verify your current cache directory, use:
echo $SINGULARITY_CACHEDIR
If the output is empty, the variable is unset and Singularity falls back to the default location above.
Using a custom cache directory¶
To avoid filling up your `$HOME` space, you can set a custom cache directory pointing to any other high-capacity location provided by your system:
export SINGULARITY_CACHEDIR=custom_directory/
To make this change permanent, so that it is set each time you log in, add it to your `.bashrc`:
echo "export SINGULARITY_CACHEDIR=custom_directory/" >> ~/.bashrc
Then reload the `.bashrc` settings:
source ~/.bashrc
Then confirm the new cache path:
echo $SINGULARITY_CACHEDIR