
Change Log

Updates to ScienceCloud and ScienceCluster are available on this page. For the Supercomputer service, please see the Alps (Eiger) User Guide.




  • ScienceCluster:
    • Upgraded Slurm to version 24.05
    • Newly released modules: cmake 3.29.6, cudnn (with cuda 12.4.1), gcc 14.1.0 (with cuda 12.4.1), openmpi 5.0.3, and fftw 3.3.10 and hdf5 1.14.3 (both recompiled with openmpi 5.0.3)



  • ScienceCluster:

    • A default value of 1 MB is now set for the Slurm memory parameters. This prevents allocation of an entire node when no memory parameter is specified in sbatch/srun commands.

      Important: adjust your Slurm submission scripts so that one of the memory parameters (--mem, --mem-per-cpu, or --mem-per-gpu) is specified; otherwise the default value of 1 MB will be used.

    • Newly released modules: python 3.12.3, anaconda3 2024.02-1, mamba 24.3.0-0, cuda 12.4.1, cudnn, jupyter 2024.06 (4.2.1), RStudio Server 2024.06 (R 4.4)

    • Removed unused and old modules: boost 1.78.0, boost 1.83.0, python 3.11.0, mamba 4.14.0-0, mamba 22.9.0-2, mamba 23.1.0-1, nvhpc 22.7, rclone 1.62.2
    • Hardware changes: the large memory nodes have been replaced, and several medium-memory CPU nodes have been added.
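
      The memory-default change above can be illustrated with a minimal submission-script fragment; the job name, resource values, and command below are placeholders, not ScienceCluster-specific recommendations:

      ```shell
      #!/bin/bash
      #SBATCH --job-name=example       # placeholder job name
      #SBATCH --cpus-per-task=1
      #SBATCH --time=00:10:00
      #SBATCH --mem=4G                 # set one of --mem, --mem-per-cpu, or
                                       # --mem-per-gpu, or the job gets only
                                       # the 1 MB default

      srun hostname
      ```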



  • ScienceCluster:
    • System upgrades for improved stability and security
    • singularityce 4.1.0 is now the default version



  • ScienceCloud:
    • Access to the cloud console has been restricted to UZH-only networks (wired, WiFi, and VPN). The cloud console now drops connection attempts from outside the UZH network.



  • ScienceCluster:
    • Users can now use the command "ls -lh" to print the size of each folder and file. This makes it quick to find large folders, including hidden ones (e.g. ".local"), when investigating over-quota issues.
    • Upgraded the NVIDIA driver
    • Newly released modules: cuda 12.3.0, openmpi 5.0.1, boost 1.84.0, mamba 23.11.0-0, rclone 1.65.1, singularityce 4.1.0
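
    When hunting for over-quota culprits, the listing described above can be complemented with `du`, which sums the contents of each directory. A generic shell sketch — the demo directory below is a hypothetical stand-in for a real data directory such as /data/$USER:

    ```shell
    # Demo directory standing in for a user's data directory (hypothetical):
    demo=$(mktemp -d)
    mkdir -p "$demo/.local"
    printf 'x%.0s' $(seq 1 2048) > "$demo/.local/cache.bin"
    echo hello > "$demo/notes.txt"

    # Print the aggregate size of every entry, hidden ones included,
    # sorted smallest to largest -- large folders end up at the bottom.
    (cd "$demo" && du -sh .[!.]* * 2>/dev/null | sort -h)
    ```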




  • ScienceCluster:
    • Updates to modules: python 3.12, anaconda3 2023.09-0, mamba 23.3.1-1, nvhpc 23.9, singularityce 4.0.2, cuda 12.2.1, nccl 2.19.3-1, cudnn, gcc 13.2.0, openmpi 4.1.6


  • ScienceCluster:
    • Matlab R2023b module released
  • ScienceCloud:
    • Images updated; see here for the list of active images
      • MATLAB and Gaussian users should use a "***Singularity..." image and then follow the MATLAB and/or Gaussian instructions


  • ScienceCluster:
    • OnDemand (for ScienceApps) upgraded for improved stability and security
    • Upgraded kernel version and firmware on compute nodes for improved stability and security
    • Users can now take advantage of all GPU nodes via the lowprio partition
    • To fix long-standing issues, some changes will be made to the user environments. We therefore recommend performing the following actions.

      Important: this should be done when no jobs are running; pending jobs are fine.

      • 1 - Move $HOME/.local to your data directory and symlink it. First, check whether the destination already exists:
        stat /data/$USER/.local
      • 2 - If it shows "stat: cannot stat ..." then run:
        mv $HOME/.local /data/$USER
        ln -s /data/$USER/.local $HOME/.local
      • 3 - Otherwise, you will need to merge the two directories. Since each case is unique, we cannot provide generic instructions; please contact Science IT with any specific questions.
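
      Steps 1–3 above can be sketched as a single guarded script. For illustration it operates on throwaway directories created with mktemp; on the cluster, FAKE_HOME and FAKE_DATA would be $HOME and /data/$USER:

      ```shell
      # Hypothetical stand-ins for $HOME and /data/$USER:
      FAKE_HOME=$(mktemp -d)
      FAKE_DATA=$(mktemp -d)
      mkdir -p "$FAKE_HOME/.local/share"

      # Step 1: check whether the destination already exists.
      if stat "$FAKE_DATA/.local" >/dev/null 2>&1; then
          # Step 3: destination exists -- the directories must be merged
          # manually (contact Science IT for help).
          echo "merge required: $FAKE_DATA/.local already exists"
      else
          # Step 2: move .local to the data directory and symlink it back.
          mv "$FAKE_HOME/.local" "$FAKE_DATA/"
          ln -s "$FAKE_DATA/.local" "$FAKE_HOME/.local"
      fi
      ```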