User Guide¶
ScienceApps allows you to interactively run and manage ScienceCluster sessions from the browser.
Interactive Apps¶
To create a new interactive session, click "Interactive Apps" on the top menu bar then select which App you would like to start.
The following interactive apps are available, where you can analyze data, develop algorithms, and create models:
Desktop Environments¶
Learn more about custom containers.
Beta Interactive Apps¶
The following apps are available in beta and therefore come with only limited support:
- Apache Spark
- Code Server to run VS Code
- TensorBoard
Launching a Session¶
Once you have selected the App, complete the web form to create your session:
- Version: Application version
- Hours: The number of hours your interactive session should be available. You can always delete your interactive session at any point to stop the allocation. The maximum duration for a single session is one week (168 hours).
- Cores: Number of vCPUs to allocate for your session.
- RAM (system memory): Amount of memory to allocate for your session.
- GPU: (not available for all ScienceApps) Allows you to request a GPU, either the first available or one of a specific type. Note: GPUs have their own memory, all of which is allocated to the session, independent of the above RAM setting.
- Project (Slurm account): (Optional) In most cases, this field should be left blank. If you are a member of multiple research groups, and the cost contribution needs to be assigned to a non-default project, you would then specify the name of the Science IT project that will fund your cost contribution.
- Partition: (GPU jobs only). Select the ScienceCluster partition that you want to use. This value is only applied when a GPU is requested. See lowprio for more info.
- Email notifications: Check the box "Receive email on all job state changes" if you want to receive email notifications when your job starts, fails, or ends.
My Interactive Sessions¶
This gives an overview of currently running interactive Apps. Here you can do the following:
- Connect to the web interface of an existing session
- View and manage queued sessions
- Delete running sessions to release the allocated resources
Files¶
You can interact with the filesystem through the web browser.
- /home/$USER is where you store your configuration files, datasets, and output files (limited in size).
- /scratch/$USER is for temporary storage of potentially big data sets. Data that is not accessed or modified in more than 30 days will be automatically removed.
- /shares/<PROJECT> is scalable group storage with a cost contribution.
A full description of the ScienceCluster filesystem is available here. Reminder: Backing up or archiving your files to protect against data loss is the responsibility of the user.
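Because files in scratch are removed automatically, you may want to check which of your files are affected before the cleanup runs. A minimal sketch using standard tools (the 30-day threshold mirrors the policy above; `2>/dev/null` hides permission errors from directories you cannot read):

```shell
# List files in your scratch area that have not been modified for
# more than 30 days (candidates for automatic removal)
find "/scratch/$USER" -type f -mtime +30 2>/dev/null
```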
Job Interaction¶
Here you can view and manage your current cluster jobs (active or in the queue).
With the Job Composer you can create jobs based on templates.
Advanced Topics¶
Cluster Shells¶
Start an interactive SSH shell on the frontend node of the cluster, equivalent to an interactive session as described in this article. You can use this tool to create custom Jupyter kernels as described in the Custom Kernels in Jupyter section.
Info
For the best experience, please use Chrome or Firefox.
Custom Kernels in Jupyter¶
To use custom packages and dependencies in Jupyter on ScienceCluster, you can create your own kernel using either uv (for Python-only environments) or conda (for environments that include non-Python dependencies).
Both tools require you to access the ScienceCluster from a terminal. First, ssh to the cluster from a terminal, or open an interactive terminal under the Cluster shells section.
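For example, from a local terminal (the hostname shown is an assumption; use the login hostname given in the cluster documentation, and replace "shortname" with your UZH shortname):

```shell
# Connect to a ScienceCluster login node from your local machine
# (hostname is an example and may differ for your setup)
ssh shortname@cluster.s3it.uzh.ch
```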
Once a custom kernel is registered, it becomes available in ScienceApps (Jupyter). While it’s possible to install additional packages from within a Jupyter notebook, we recommend doing this from the terminal instead for stability and reproducibility.
uv¶
If your dependencies are Python-only, i.e. available from PyPI and installable via pip, then you can use uv to create a custom kernel that can be used in ScienceApps (Jupyter).
First create a virtual environment with uv, following these steps. Then install ipykernel using these steps.
# Activate the uv virtual environment
source myenv/bin/activate
# Install ipykernel into the environment
uv pip install ipykernel
# Register the environment as a Jupyter kernel named "myenv"
uv run ipython kernel install --user --name myenv
Info
If you plan to use a GPU in ScienceApps, you should create your environment on a GPU compute node to ensure compatibility. See this example for details.
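A minimal sketch of requesting an interactive shell on a GPU compute node before creating the environment (the resource values are illustrative assumptions; adjust them to your needs and project):

```shell
# Request an interactive shell on a GPU compute node
# (resource values are examples, not requirements)
srun --pty --ntasks=1 --cpus-per-task=2 --mem=8G --gpus=1 --time=1:00:00 bash -l
```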
conda¶
For environments with non-Python dependencies, you will first need to create a Conda virtual environment. For more information about using Conda, see here.
# Load conda via miniforge3
module load miniforge3
# Create and activate your environment
conda create --name myenv
source activate myenv
# Add packages to your environment, for example, numpy
conda install --name myenv numpy
# Install the tools to add a custom kernel
conda install --name myenv ipykernel
# Add your environment to the kernel list
ipython kernel install --user --name myenv
Warning
Only use pip inside a Conda environment when the package isn't available via conda install. Install pip packages after all conda packages. Also, ensure your Conda environment uses a compatible Python version.
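Put concretely, the recommended ordering looks like this (the package names are illustrative; the pip package stands in for anything not available via conda install):

```shell
# Install all conda packages first
conda install --name myenv numpy pandas
# Then activate the environment and add pip-only packages last
source activate myenv
pip install some-pypi-only-package   # hypothetical pip-only package
```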
Remove a kernel¶
In case you need to remove a kernel, you can use the following command from within the same environment:
jupyter kernelspec remove myenv
TensorBoard¶
TensorBoard can be used to monitor the status of a TensorFlow model either in real time or after a workflow has completed.
For example, if you run the code from this page on the cluster, you'll create a logs folder within your current working directory. At that point (or at any point thereafter) you can begin a Tensorboard ScienceApps session by providing the absolute path to the logs folder via the "Log Directory" input (e.g., /scratch/$USER/logs if the logs directory is located in scratch).
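To find the absolute path to paste into the "Log Directory" field, you can resolve it from the working directory where you ran the training code, for example:

```shell
# Print the absolute path of the logs folder created by the training run
# (run this from the working directory that contains logs/)
readlink -f logs
```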
Code Server¶
Code Server launches VS Code sessions, allowing you to develop directly on a compute node.
Note
If VS Code in ScienceApps does not satisfy your needs, the alternative is to install VS Code on your own machine along with the necessary extensions, connect to a login node using Remote-SSH, and then, from the VS Code terminal, start an interactive session on a compute node, e.g. a GPU compute node.