Singularity is a free and open-source container solution created out of necessity for scientific and application-driven workloads. Singularity defines the "Mobility of Compute" concept as the ability to organize, create, and maintain a workflow and be confident that it can be executed on different hosts, operating systems (as long as they run Linux), and service providers. Being able to contain the entire software stack, from data files to libraries, and portably move it from system to system is true mobility.
Below are clarifications on various aspects of using Singularity.
The following directories are automatically bound and available from within each container:
The default user within each container is the same as the one you have on the cluster (i.e., your UZH Active Directory account). Due to permissions on ScienceCluster, you will need to use the `-u` argument with many of your Singularity commands, as demonstrated below.
It is not permitted to create Singularity container files on the login or compute nodes. You must therefore first copy or pull a ready-to-use container file to the login node before you build your container environment.
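For example, pulling a ready-made image from a public registry might look like the following (the `lolcow` image path is only a placeholder for illustration; substitute the registry path of the container you actually need):

```bash
# Load the Singularity module on the login node
module load singularityce

# Pull a ready-to-use image from a public registry
# (docker://sylabsio/lolcow is a placeholder; use your own image path)
singularity pull docker://sylabsio/lolcow
```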
In addition, you should expand the container image file before running anything in the container. This is especially important if you plan to submit multiple jobs that use the same container: Singularity can only run expanded containers under a regular user account, so if you do not expand the image yourself, Singularity will perform the expansion automatically and save the resulting files in the temporary directory on the compute node. Because automatically expanded containers are not re-used, each job invocation creates a separate expanded image, which eventually fills the temporary directory partition and causes job failures.
An image can be expanded using the following command:
```bash
module load singularityce
singularity build --sandbox my_container_dir my_container_file.simg
```
- `my_container_dir` is the target directory to which the image should be expanded.
- `my_container_file.simg` is the container image file to be expanded. (Your file may have a different extension; e.g., `.sif`.)
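Once expanded, later Singularity commands can reference the sandbox directory instead of the image file; for example, a sketch using the placeholder names above:

```bash
# Run a command inside the expanded sandbox directory
# rather than the .simg/.sif image file
srun singularity exec -u my_container_dir <COMMAND>
```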
Submitting jobs that use containers is done in the same way as normal jobs. An example of an sbatch script calling Singularity:
```bash
#!/bin/bash
#SBATCH --time=01:00:00
#SBATCH --mem=256G
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1

srun singularity exec -u <CONTAINER> <COMMAND>
```
Alternatively, if you've already edited the `/singularity` script in your container, you can run the container directly, which by default will execute that script.
```bash
#!/bin/bash
#SBATCH --time=01:00:00
#SBATCH --mem=256G
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1

srun singularity run -u <CONTAINER>
```
You can make additional directories available in the container using the `--bind` option. For example, if you want to have your `appX_input` directory available as `/input` within the container, you should follow this example:

```bash
srun singularity exec -u --bind ./appX_input:/input <CONTAINER> <COMMAND>
```
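Multiple bind mounts can be given as a comma-separated list in a single `--bind` option; a sketch using illustrative directory names:

```bash
# Bind an input and an output directory in one --bind option
# (appX_input/appX_output are example names; adjust to your own paths)
srun singularity exec -u --bind ./appX_input:/input,./appX_output:/output <CONTAINER> <COMMAND>
```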
Running MPI Jobs
Out-of-the-box compatibility is reported for OpenMPI (v2.1) and Singularity. An example of calling an MPI-enabled application is as follows:
```bash
#!/bin/bash
#SBATCH -n 32
#SBATCH -N 2
#SBATCH --time=00:10:00
#SBATCH --mem=20G

module load infiniband
srun singularity exec -u mpi-hello-world ./mpi_hello_world
```
Singularity supports GPUs. An example of calling a GPU-enabled application is as follows:
```bash
#!/bin/bash
#SBATCH --time=00:10:00
#SBATCH --gres gpu:4
#SBATCH --mem 3000

module load a100
srun singularity exec -u --nv tensorflow-gpu python ./tensorflow_app.py
```
The `--nv` flag is necessary to enable the experimental Nvidia support.
To run TensorFlow from an officially supported container registry, use the following lines of code to pull the container file and then build the sandbox directory.
We recommend that you store your Singularity image files and expanded sandbox directories in the `/data` directory; i.e., run `cd ~/data` to move to your `/data` directory before running the code below.
```bash
module load singularityce
singularity pull docker://tensorflow/tensorflow:latest-gpu
singularity build --sandbox tensorflow_latest-gpu tensorflow_latest-gpu.sif
```
You should then have a `tensorflow_latest-gpu.sif` file and a `tensorflow_latest-gpu` sandbox directory.
You can then run the following lines of code to (1) request an interactive session where you will have access to a GPU and (2) receive a Bash command line input using the Singularity container environment that you just created.
```bash
# Request an interactive session on vesta with a GPU
module load t4
srun --pty -n 1 -c 4 --time=01:00:00 --gres gpu:1 --mem=8G bash -l

# Request a Bash command line within / using the container environment
singularity shell --nv tensorflow_latest-gpu
```
Once you've opened the Bash command line in the Singularity container, your command line prompt will change to `Singularity>`. You can then run `python` to open an interactive Python instance and begin running code with TensorFlow (i.e., `import tensorflow as tf`).
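To quickly confirm that TensorFlow can see the GPU from inside the container, a one-liner such as the following can be run at the `Singularity>` prompt (this assumes the container's `python` has TensorFlow installed, as the official image does):

```bash
# List the GPUs visible to TensorFlow; an empty list means no GPU is
# available (e.g., --nv was omitted or no GPU was allocated to the job)
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```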
If you are new to containers, and particularly to Singularity, you can find more information here.