VS Code on ScienceCluster¶
Whether you prefer VS Code, Cursor, PyCharm, or Zed, ScienceCluster offers multiple ways to bring your favorite IDE into your high-performance computing workflow.
Performance-First Development¶
To ensure your development environment is responsive, we recommend these simple optimizations:
- Dedicated Compute Nodes: For intensive, interactive work, use a compute node. This gives you exclusive access to the CPU, memory, and optional GPU resources that you need for high-performance tasks.
- Smart File Watching: Exclude large folders from VS Code's background scanner. Open Settings, search for `exclude`, and add filepath patterns for folders that may contain many files, such as `venv`, `mydata`, or `.git`.
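For example, watcher and search exclusions can be set directly in VS Code's `settings.json` via the `files.watcherExclude` and `search.exclude` settings (the folder names below are placeholders; adjust them to your own project):

```jsonc
{
  // Keep the file watcher away from bulky folders
  "files.watcherExclude": {
    "**/venv/**": true,
    "**/mydata/**": true,
    "**/.git/objects/**": true
  },
  // Also keep them out of full-text search
  "search.exclude": {
    "**/venv/**": true,
    "**/mydata/**": true
  }
}
```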
Fair Usage for a Better Experience
The ScienceCluster is a shared community resource. Login nodes are provided for lightweight file management, quick edits, and job submissions; please run your development, interactive, and production workloads on compute nodes. Certain software and hardware restrictions are in place on login nodes to ensure the ScienceCluster's shared entry point remains responsive for you and your colleagues.
Connection Methods¶
Science IT supports several methods for connecting VS Code and similar IDEs to compute nodes.
ScienceApps Code Server¶
ScienceApps provides a Code Server app to launch VS Code in the browser.
This is the fastest and easiest way to run VS Code on ScienceCluster.
ScienceApps Remote Desktop Environments¶
Another option from ScienceApps is to use the MATE or Xfce Desktop Environments (currently in Beta), which offer Linux remote desktops in the browser via VNC. VS Code comes pre-installed with these apps from version 24.04-2025c or newer. It's also possible to install additional software via customized Apptainer containers. See our page on Remote Desktop Environments for more info.
Connecting to a Compute Node (Advanced)¶
Support Disclaimer
Due to the wide variety of possible client-side configurations and plugins, we cannot provide detailed technical support for this connection method.
The most user-involved method is to connect directly to compute nodes (which ensures the shared login nodes are not swamped with daemons and file watchers). This approach can be used for VS Code as well as other IDEs, such as PyCharm. This method requires that the user has configured passwordless (public key) authentication.
One way to do so is as follows:
1. Add this text to your local computer's SSH config (`~/.ssh/config`). Edit the file for your `<shortname>` and the path to your SSH key, as noted in the `!!` comments below.

    ```
    Host cluster_jh
        HostName cluster.s3it.uzh.ch
        User <shortname>                # !! your ScienceCluster shortname
        IdentityFile ~/.ssh/id_ed25519  # !! the SSH key used to connect to the cluster
        ControlMaster auto
        ControlPath ~/.ssh/master-%r@%h:%p
        ControlPersist yes

    Host cluster_node
        ProxyCommand ssh cluster_jh "nc \$(squeue --me --name=tunnel --states=R -h -O NodeList,Comment)"
        StrictHostKeyChecking no
        UserKnownHostsFile /dev/null
        IdentitiesOnly yes
        PreferredAuthentications publickey
    ```
2. Create a file called `tunnel.sbatch` on the cluster filesystem with the following content. This minimal example script requests 4 hours of compute time, 4 CPUs, and 16 GB of total system memory; update these values as needed. Add `#SBATCH --gpus=1` to request a single GPU device. Specific GPU types can be requested using, e.g., `#SBATCH --gpus=H100:1` (this requests 1 GPU of type H100). The standard guidelines for modifying Slurm scripts when submitting GPU jobs apply; for details, see section GPU jobs.

    ```bash
    #!/bin/bash
    #SBATCH --output="tunnel.log"
    #SBATCH --job-name="tunnel"
    #SBATCH --time=4:00:00
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=16G

    # Find an open port
    PORT=$(python -c 'import socket; s=socket.socket(socket.AF_INET, socket.SOCK_STREAM); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')
    scontrol update JobId="$SLURM_JOB_ID" Comment="$PORT"
    echo "Tunnel active on port: $PORT"

    # Generate the temporary host key pair for this job
    HOST_KEY="${TMPDIR:-/tmp}/tmp_ed25519_${SLURM_JOB_ID}"
    echo "Generating temporary host key: ${HOST_KEY}"
    ssh-keygen -t ed25519 -f "$HOST_KEY" -N "" -q
    chmod 600 "$HOST_KEY"

    # Start the sshd server on the available port
    /usr/sbin/sshd -D \
        -p "${PORT}" \
        -f /dev/null \
        -h "${HOST_KEY}" \
        -E "${HOME}/tunnel_sshd.log" \
        -o "PidFile=/dev/null" \
        -o "StrictModes=no" \
        -o "UsePAM=no" \
        -o "PrintLastLog=no" \
        -o "AllowTcpForwarding=yes" \
        -o "AllowStreamLocalForwarding=yes" \
        -o "GatewayPorts=yes" \
        -o "Subsystem sftp internal-sftp"
    ```
3. Submit the job to the Slurm queue (e.g., with `sbatch tunnel.sbatch`). Another way to request a specific GPU is to specify the number of GPU devices in `tunnel.sbatch` (e.g., `#SBATCH --gpus=1`) and then load the desired GPU type module (e.g., `l4`, `a100`, `h100`, `h200`) before submitting the job.
4. Test the tunnel connection by running `ssh -v cluster_node` from a terminal on your machine. If everything is configured correctly, you will be brought directly to the compute node. Enter `exit` to close this connection; this step is only a test, and the connection does not need to remain active when you connect through a VS Code remote session.
5. Start a remote connection to the host `cluster_node` from VS Code or another IDE.
6. In the integrated terminal of VS Code (or another IDE), your prompt will show `<username>@<hostname>`, where `<hostname>` is the hostname of the allocated compute node (`cluster_node`), indicating that a remote SSH connection has been established. All software modules (e.g., `apptainer` or `miniforge3`) are available.

    a. If you are using Apptainer, load the module with `module load apptainer`. You can then continue with your usual Apptainer workflow (e.g., running or managing containers).

    b. If you are using Conda, load the module with `module load miniforge3`. You can then proceed with your usual Conda workflow (e.g., creating, activating, or managing environments).
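To see how the `ProxyCommand` in step 1 finds the tunnel: `squeue --me --name=tunnel --states=R -h -O NodeList,Comment` prints the allocated node name and the port number that the job stored in its comment field, and `nc` then connects to that host/port pair. A minimal Python sketch of that parsing (the helper name and the sample `squeue` line are hypothetical, for illustration only):

```python
def parse_tunnel_target(squeue_output: str) -> tuple[str, int]:
    """Split squeue's 'NodeList Comment' output into (host, port)."""
    host, port = squeue_output.split()
    return host, int(port)

# Hypothetical squeue output for a running tunnel job:
host, port = parse_tunnel_target("node42 51234")
```

`nc host port` then relays the SSH traffic from your machine, through the login node, to the `sshd` listening on the compute node.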
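The port-finding one-liner in `tunnel.sbatch` works by binding a socket to port 0, which asks the kernel to pick any currently free port. Written out for readability (a sketch of the same logic, not part of the script itself):

```python
import socket

def find_open_port() -> int:
    """Ask the kernel for a free TCP port by binding to port 0."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("", 0))               # port 0 means "any free port"
    port = s.getsockname()[1]     # the port the kernel actually assigned
    s.close()
    return port

port = find_open_port()
```

Note there is a small race window: another process could claim the port between this check and `sshd` binding it, in which case `sshd` fails to start and the job log will show the error.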
Info
Some users have noticed that opening a folder with many files can make the remote connection unstable. If you encounter this issue, open only folders that contain few files.