Change Log
Updates to ScienceCloud and ScienceCluster are available on this page. For the Supercomputer service, please see the Alps (Eiger) User Guide.
2025
January
2025-01-16
- ScienceCluster:
- scrontab has been enabled for scheduling Slurm jobs, and regular crontab for users has been deactivated. See, for example, this external guide for documentation on how to use scrontab.
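As a hedged illustration (not an official site example), scrontab entries are edited with `scrontab -e` and combine `#SCRON` lines carrying the usual Slurm options with crontab-style schedule lines; the script path and resource values below are placeholders:

```bash
# Run a small job every night at 02:00; resources are set via #SCRON directives
#SCRON --job-name=nightly-cleanup
#SCRON --time=00:15:00
#SCRON --mem=1G
0 2 * * * /data/$USER/scripts/cleanup.sh
```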
2024
November
2024-11-06
- ScienceCluster:
- Users can get the updated list of modules and avoid module load errors by removing the lmod cache. We therefore suggest running `rm .cache/lmod/*` from your home directory; a short sketch follows this list.
- Newly released modules: VS Code 1.93 (code-server 4.93.1), jupyter 2024-10-22, RStudio Server with R 4.4.2, mamba 24.9.0-0, rclone 1.68.1, singularityce 4.2.1, boost 1.86.0, perl 5.40.0, cmake 3.30.5, gsl 2.8, python 3.13.0, nvhpc 24.9, cuda 12.6.2, nccl 2.22.3-1, cudnn 9.5.1.17-12, gcc 14.2.0, openmpi 5.0.5, hdf5 1.14.5, intel-oneapi-compilers 2025.0.0, intel-oneapi-mkl 2025.0.0, intel-oneapi-tbb 2022.0.0, intel-oneapi-mpi 2021.14
- Removed unused and old modules: Matlab 2021b, VS Code Server 3.10.2, anaconda3 2022.05, 2022.10, mamba 23.3.1-1, perl 5.34.1, rclone 1.59.1, singularity 3.10.2, 3.11.3, 4.0.2, 4.1.0, cmake 3.26.3
- Upgrade: NVIDIA driver
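For instance, a minimal sequence to refresh the cache and confirm that the new modules are visible might look like the following sketch (it assumes Lmod's default per-user cache location under your home directory):

```bash
# Remove the per-user Lmod cache so the module list is rebuilt
rm ~/.cache/lmod/*

# List the modules now available; the newly released modules should appear here
module avail
```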
August
2024-08-22
- ScienceCluster:
- (ScienceApps) Upgraded VSCode. Added support for interactive GPU sessions.
July
2024-07-03
- ScienceCluster:
- Upgraded Slurm to version 24.05
- Newly released modules: cmake 3.29.6, cudnn 9.2.0.82-12 (with cuda 12.4.1), gcc 14.1.0 (with cuda 12.4.1), openmpi 5.0.3, fftw 3.3.10 and hdf5 1.14.3, both recompiled with openmpi 5.0.3
June
2024-06-05
- ScienceCluster:
- A default value of 1MB is now set for the Slurm memory parameters. This prevents whole-node allocation when no memory parameter is specified in sbatch/srun commands. Important: adjust your Slurm submission scripts so that one of the memory parameters (`--mem`, `--mem-per-cpu`, `--mem-per-gpu`) is specified; otherwise the default value of 1MB will be used (a minimal sketch follows this list).
- Newly released modules: python 3.12.3, anaconda3 2024.02-1, mamba 24.3.0-0, cuda 12.4.1, cudnn 8.9.7.29-12, jupyter 2024.06 (4.2.1), RStudio Server 2024.06 (R 4.4)
- Removed unused and old modules: boost 1.78.0, boost 1.83.0, python 3.11.0, mamba 4.14.0-0, mamba 22.9.0-2, mamba 23.1.0-1, nvhpc 22.7, rclone 1.62.2
- Hardware changes: the large memory nodes were replaced and several medium-memory CPU nodes were added.
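As a minimal sketch of the adjustment described above (the job name, time limit, and memory value are placeholders, not recommended settings), an sbatch script that sets one of the memory parameters explicitly could look like this:

```bash
#!/bin/bash
#SBATCH --job-name=example        # placeholder job name
#SBATCH --time=00:10:00           # placeholder time limit
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G                  # explicit memory request; without it the 1MB default applies

srun hostname
```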
May
2024-05-08
- ScienceCluster:
- System upgrades for improved stability and security
- singularityce 4.1.0 is now the default version
March
2024-03-06
- ScienceCloud:
- Access to the cloud console (cloud.s3it.uzh.ch) has been restricted to UZH-only networks (wired, WiFi, and VPN). The cloud console now drops external non-UZH connection attempts.
February
2024-02-07
- ScienceCluster:
- Users can now use the command `ls -lh` to print the content size for each folder/file, which makes it easy to quickly find large folders in case of over-quota issues, including hidden ones (e.g. `.local`). A generic du-based alternative is sketched after this list.
- Upgrade: NVIDIA driver
- Newly released modules: cuda 12.3.0, openmpi 5.0.1, boost 1.84.0, mamba 23.11.0-0, rclone 1.65.1, singularityce 4.1.0
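If needed, a generic alternative (not the cluster-specific command mentioned above) is to sum directory sizes with du; this sketch assumes GNU du/sort and includes hidden directories such as .local:

```bash
# Report the size of every entry in the home directory, hidden ones included,
# sorted from smallest to largest (GNU coreutils assumed)
du -sh ~/.[!.]* ~/* 2>/dev/null | sort -h
```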
2023
December
2023-12-04
- ScienceCluster:
- Updates to modules: python 3.12, anaconda3 2023.09-0, mamba 23.3.1-1, nvhpc 23.9, singularityce 4.0.2, cuda 12.2.1, nccl 2.19.3-1, cudnn 8.9.5.30-12.2, cudnn 8.9.5.30-11.8, gcc 13.2.0, openmpi 4.1.6
November
2023-11-09
- ScienceCluster:
- Matlab R2023b module released
- ScienceCloud:
2023-11-01
- ScienceCluster:
- OnDemand (for ScienceApps) upgraded for improved stability and security
- Upgraded kernel version and firmware on compute nodes for improved stability and security
- Users can now take advantage of all GPU nodes via the `lowprio` partition (a hedged submission sketch follows this list)
- To fix long-standing issues, some changes will be made to the user environments. Because of that, we recommend performing the following actions. Important: this should be done when no jobs are running; pending jobs are fine.
- 1 - Move `$HOME/.local` to your data directory and symlink it. But first, check whether the destination already exists: `stat /data/$USER/.local`
- 2 - If it shows "stat: cannot stat ..." then run: `mv $HOME/.local /data/$USER` followed by `ln -s /data/$USER/.local $HOME/.local`
- 3 - Otherwise, you'll need to merge the two directories. Since each user's case is unique, we are unable to provide specific instructions. Please contact Science IT if you have any questions.
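As a hedged sketch of using the lowprio partition mentioned above (assuming it is selected with Slurm's standard `--partition` flag; the GPU count, time limit, and memory value are placeholders, so check the cluster documentation for the exact options):

```bash
#!/bin/bash
#SBATCH --partition=lowprio   # assumed partition name from the note above
#SBATCH --gpus=1              # placeholder GPU request
#SBATCH --time=01:00:00       # placeholder time limit
#SBATCH --mem=8G              # placeholder memory request

srun nvidia-smi
```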