VS Code on ScienceCluster¶
VS Code is a popular IDE among researchers. While it supports remote connections for developing code on servers, its remote components run resource-heavy background processes (such as file watchers), so users need to exercise caution when working on shared systems.
Running VS Code with a remote connection to the login nodes is not supported; doing so puts pressure on the shared filesystem and may negatively affect all users. We provide support for the following options for using VS Code on the ScienceCluster.
Important
VS Code and similar IDEs with remote connections use resource-heavy file watchers that may degrade network filesystem performance for all users. Please be considerate and update your settings to exclude folders with many files: (1) open Settings, (2) search for "exclude", and (3) add patterns for folders that contain large numbers of files, such as `venv` and `.git`.
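For example, a minimal sketch of the kind of patterns you might add to your `settings.json` (the exact folder names depend on your project layout):

```json
{
    "files.watcherExclude": {
        "**/.git/**": true,
        "**/venv/**": true,
        "**/.venv/**": true,
        "**/node_modules/**": true
    }
}
```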
ScienceApps Code Server¶
ScienceApps provides a Code Server app to launch a lightweight version of VS Code in the browser.
This is the fastest and easiest way to run VS Code on ScienceCluster, but as Code Server may not offer the full set of features, we provide two other solutions below.
ScienceApps Remote Desktop Environments¶
Another option from ScienceApps, coming soon, is to use the MATE or Xfce Desktop Environments, which offer full Linux environments in the browser via VNC. From these apps, ScienceCluster users can run a more complete version of VS Code, including additional features and extensions. It is also possible to install additional software via customized Apptainer containers. See our page on Remote Desktop Environments for more info.
Note
We will release the new desktop environments with VS Code as soon as possible after the cluster migration. Stay tuned!
Connecting to a Compute Node¶
The most user-involved method is to connect directly to compute nodes (which ensures the shared login nodes are not swamped with daemons and file watchers). This approach can be used for VS Code as well as other IDEs, such as PyCharm.
The process works as follows:
1.  Add this text to your local computer's SSH config (`.ssh/config`). Adjust the identity files (i.e., the private parts of your SSH keys, as noted in the `!!` comments below) as necessary.

    ```
    Host cluster cluster_node
        User <yourusername>

    Host cluster
        # !! the SSH key used to connect to the cluster
        IdentityFile ~/.ssh/id_ed25519
        HostName cluster.s3it.uzh.ch
        ControlMaster auto
        ControlPath ~/.ssh/master-%r@%h:%p
        ControlPersist yes

    Host cluster_node
        # !! your SSH key created on the cluster; see step 3 below
        IdentityFile ~/.ssh/id_rsa
        ProxyCommand ssh cluster 'nc $(squeue --me --name=tunnel --states=R -h -O NodeList,Comment)'
        StrictHostKeyChecking no
        IdentitiesOnly yes
        PreferredAuthentications publickey
    ```
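    Once saved, you can verify the multiplexed connection to the login node from your local machine (a quick sanity check using the host alias defined above):

    ```bash
    ssh cluster echo ok    # opens the connection and keeps the control master alive
    ssh -O check cluster   # reports whether the ControlMaster socket is active
    ```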
2.  Create a file called `tunnel.sbatch` on the cluster filesystem with the following content. Adjust the Slurm values and the modules to be loaded as needed.
    ```bash
    #!/bin/bash
    #SBATCH --output="tunnel.log"
    #SBATCH --job-name="tunnel"
    #SBATCH --time=0:30:00
    #SBATCH --cpus-per-task=2
    #SBATCH --gpus=1
    #SBATCH --mem-per-cpu=4G

    module load gpu
    module load miniforge3

    # Load specific environments for your research workflow
    conda activate gpuenv

    # Find an open port
    PORT=$(python -c 'import socket; s=socket.socket(socket.AF_INET, socket.SOCK_STREAM); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')
    scontrol update JobId="$SLURM_JOB_ID" Comment="$PORT"

    # Start the sshd server on the available port
    echo "Starting sshd on port $PORT"
    /usr/sbin/sshd -D \
        -p ${PORT} \
        -f /dev/null \
        -h ${HOME}/.ssh/id_ed25519 \
        -E ${HOME}/tunnel_sshd.log \
        -o "PidFile=/dev/null" \
        -o "Subsystem sftp /usr/lib/openssh/sftp-server"
    ```
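    The script publishes the chosen port in the Slurm job's comment field (via `scontrol update`), which is exactly what the `ProxyCommand` in step 1 reads. Once the job is running, you can look up the node and port yourself from a login node:

    ```bash
    squeue --me --name=tunnel --states=R -h -O NodeList,Comment
    ```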
3.  Create a key on the cluster if you haven't yet done so. Append that key to your `authorized_keys` file on the cluster for passwordless connections between cluster nodes.
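    A minimal sketch, assuming an `ed25519` key at the default path (the same file that `tunnel.sbatch` uses as the `sshd` host key); run it on a login node:

    ```bash
    # create the key only if it does not exist yet
    ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
    # allow passwordless connections between cluster nodes
    cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys
    ```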
4.  Submit the job to the Slurm queue. You can adjust the requested resources at this point (e.g., by adjusting specific flags, such as the memory request `--mem=128G`).

    ```bash
    # Submit via:
    sbatch tunnel.sbatch
    ```
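    After submitting, you can watch the queue until the job reaches the running state (`R`); if the tunnel does not come up, check `tunnel.log` and `tunnel_sshd.log` in your home directory:

    ```bash
    squeue --me --name=tunnel
    ```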
5.  When the job is running, start a remote session to `cluster_node` from your local computer.
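    You can first confirm the tunnel on the command line and then connect your IDE (in VS Code, for example, via the Remote-SSH extension, choosing the `cluster_node` host):

    ```bash
    # should print the name of the compute node running the "tunnel" job
    ssh cluster_node hostname
    ```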