Using Singularity on the ScienceCluster¶
Singularity is one of the supported environment management tools on ScienceCluster and can be loaded as a module. To make it available in your session, run:
module load singularityce
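To confirm that the module has been loaded and the singularity command is available, you can check the version:
singularity --version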
Info
Please refer to the generic guide Using Singularity for instructions about using Singularity.
Default user¶
The default user within each container is the same as your cluster user account (i.e., your UZH Active Directory user account).
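You can verify this from within any container (here <container> stands for your image, as in the examples below):
singularity exec <container> whoami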
Bound directories¶
The following directories are automatically bound and available from within each container:
/home
/scratch
/data
/shares
/sctmp
/apps
This is achieved by setting the SINGULARITY_BINDPATH environment variable for all users. The default configuration is sufficient for the majority of cases.
Warning
We strongly recommend keeping the current settings. You may specify additional bind points if necessary, but you should not redefine the default list.
For guidance on binding additional directories, see the directory binding section of the generic guide Using Singularity.
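For example, to make an additional directory available inside a container without redefining the default list, you can pass it with the --bind option on the command line (the paths below are illustrative):
singularity exec --bind /data/$USER/extra:/mnt/extra <container> ls /mnt/extra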
Cache location¶
Singularity's cache can quickly use up all the storage space in your home folder. Consider changing the Singularity cache directory to another location with more storage capacity, e.g., /data or /scratch, by modifying your ~/.bashrc file.
echo "export SINGULARITY_CACHEDIR=/data/$USER/" >> ~/.bashrc
source ~/.bashrc
These commands set a custom cache location of /data/$USER/ and make the change permanent, so that it is applied each time you log in.
For further options, including how to clean Singularity's cache in order to reduce storage space usage, refer to the custom cache setup section.
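If the cache does grow too large, the cache subcommands let you inspect and clear it:
singularity cache list    # show cached images and the space they use
singularity cache clean   # remove cached items (asks for confirmation)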
Workflow overview¶
On ScienceCluster, the basic parts of a Singularity workflow are:
- Creating the container image. You can either obtain a container image from a container registry or build it from your own definition file. In the first case, you can usually convert the image directly on ScienceCluster into the appropriate Singularity format (e.g., a .sif file) using the singularity build command; make sure to use an interactive session, as this process often requires more memory than is available to users on the login nodes (see the sketch after this list). However, if your build process requires superuser (sudo) privileges or involves custom steps such as installing additional applications, you will need to build the image elsewhere (e.g., on a ScienceCloud instance) and then transfer it to the cluster. Guidance on how to create or obtain containers is provided in the Building a Singularity container section of the general guide Using Singularity.
- Preparing your code to run with Singularity. Once your container image is ready, you will need to prepare your code to run from the Singularity environment. This step involves adjusting your Slurm submission script to execute your code within the container. Guidance on how to do this is provided in the section below.
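A minimal sketch of the conversion step from the first item, assuming an image pulled from a registry; the resource values and the docker://python:3.11 image are illustrative:
# Request a short interactive session (adjust time and memory to your needs)
srun --pty --time=01:00:00 --mem=8G --cpus-per-task=2 bash -l
# Load Singularity and convert a registry image into a .sif file
module load singularityce
singularity build python_3.11.sif docker://python:3.11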
Running jobs with Singularity containers¶
Submitting jobs that use Singularity containers is done in the same way as for normal jobs. An example of an sbatch script calling Singularity:
#!/usr/bin/bash -l
#SBATCH --time=00:30:00
#SBATCH --mem=7G
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
singularity exec <container> <command>
Alternatively, if you have already edited the /singularity runscript in your container, you can run the container directly, which by default will execute that script.
#!/usr/bin/bash -l
#SBATCH --time=01:00:00
#SBATCH --mem=7G
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
singularity run <container>
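To check what a container will execute when run this way, you can print its runscript first:
singularity inspect --runscript <container>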
Tip
It is often helpful to explore a Singularity container's environment interactively rather than through a submission script. If you want to inspect or test the container environment, request an interactive session on a ScienceCluster node, load any required modules (e.g., module load gpu), and then launch a Singularity shell. For detailed steps, see this section of the generic guide Using Singularity.
However, if you want to run your actual code, we do not recommend running it directly within the Singularity shell. Instead, update your Slurm submission script as described earlier.
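A minimal sketch of such an interactive inspection, assuming a GPU-enabled container; the resource values are illustrative:
# Request an interactive session with a GPU
srun --pty --time=00:30:00 --gpus=1 --mem=8G bash -l
# Load the required modules and open a shell inside the container
module load singularityce gpu
singularity shell --nv <container>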
Running MPI jobs¶
Out-of-the-box compatibility is reported for OpenMPI (v2.1) and Singularity. An example of calling an MPI-enabled application is as follows:
#!/usr/bin/bash -l
#SBATCH -n 32
#SBATCH -N 2
#SBATCH --time=00:10:00
#SBATCH --mem=15G
# Launch one container instance per MPI rank via the Slurm launcher
srun singularity exec mpi-hello-world.sif ./mpi_hello_world
Important
You have to load the infiniband module before submitting such scripts.
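For example, assuming the script above is saved as mpi_job.slurm (a hypothetical file name):
module load infiniband
sbatch mpi_job.slurm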
Running GPU-accelerated jobs¶
Singularity supports GPUs. An example of calling a GPU-enabled application is as follows:
#!/usr/bin/bash -l
#SBATCH --time=00:10:00
#SBATCH --gpus=2
#SBATCH --mem=3000
singularity exec --nv tensorflow-gpu.sif python ./tensorflow_app.py
Note
The --nv flag is necessary to enable NVIDIA GPU support.
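To verify that the GPUs are actually visible inside the container, you can run a quick check with the same image before launching your application:
singularity exec --nv tensorflow-gpu.sif nvidia-smi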