Commit 2a7a350d authored by Noah Löwer
Barnard/cluster update (so far) applied to quickstart

parent a8981e22
Merge requests: !938 (Automated merge from preview to main), !936 (Update to Five-Cluster-Operation)
# Quick Start
This page guides new users through the steps needed to submit a High Performance
Computing (HPC) job:

* Applying for the ZIH HPC login
* Accessing the ZIH HPC systems
* Transferring code/data to ZIH HPC systems
* Accessing software
* Running a parallel HPC job

## Introductory Instructions

The ZIH HPC systems are Linux systems (as are most HPC systems), so some basic Linux
knowledge is needed. Being familiar with this [collection](https://hpc-wiki.info/hpc/Shell)
of the most important Linux commands is helpful.
To work on the ZIH HPC systems and to follow the instructions on this page as well as other
compendium pages, it is important to be familiar with the
[basic terminology](https://hpc-wiki.info/hpc/HPC-Dictionary) in HPC such as
[SSH](https://hpc-wiki.info/hpc/SSH), [cluster](https://hpc-wiki.info/hpc/HPC-Dictionary#Cluster),
[login node](https://hpc-wiki.info/hpc/HPC-Dictionary#Login_Node), and
[compute node](https://hpc-wiki.info/hpc/HPC-Dictionary#Backend_Node).
If you are new to HPC, we recommend visiting the introductory article about HPC at
[https://hpc-wiki.info/hpc/Getting_Started](https://hpc-wiki.info/hpc/Getting_Started).
Throughout the compendium, `marie@login` indicates working on the ZIH HPC command line and
`marie@local` indicates working on your local machine's command line. `marie` stands in for your
username.

## Obtaining Access

A ZIH HPC login is needed to use the systems. It is different from the ZIH login (which
members of TU Dresden have), but has the same credentials. Apply for it via the
[HPC login application form](https://selfservice.zih.tu-dresden.de/index.php/hpclogin/noLogin).
Since HPC resources are structured in projects, there are two possibilities to work on the ZIH HPC systems:
* Creating a [new project](../application/project_request_form.md)
* Joining an existing project: e.g. new researchers in an existing project, students in projects for
  teaching purposes. The details will be provided to you by the project administrator.
An HPC project on the ZIH HPC systems includes a project directory, a project group, project members
(at least an admin and a manager), and resource quotas for compute time (CPU/GPU hours) and storage.
It is essential to grant appropriate file permissions so that newly added users can access the
project's data.

## Accessing ZIH HPC Systems

ZIH provides five homogeneous compute systems, called clusters. These can only be accessed
within the TU Dresden campus networks. Access from outside is possible by establishing a
[VPN connection](https://tu-dresden.de/zih/dienste/service-katalog/arbeitsumgebung/zugang_datennetz/vpn#section-4).

Each of these clusters can be accessed in the three ways described below, depending on the user's
needs and previous knowledge:

* [JupyterHub](../access/jupyterhub.md): browser-based connection, easiest way for beginners
* [SSH connection](../access/ssh_login.md) (command line/terminal/console): "classical" connection,

Next, the mentioned access methods are described step by step.

### JupyterHub

1. Access JupyterHub at [https://taurus.hrsk.tu-dresden.de/jupyter](https://taurus.hrsk.tu-dresden.de/jupyter) (not yet available for Barnard).
1. Start by clicking on the button `Start My Server` and you will see two Spawner Options,
   `Simple` and `Advanced`.
1. The `Simple` view offers a minimal selection of parameters to choose from. The `Advanced`
   view provides a detailed choice of parameters. Then click `Spawn`.
![Spawning](misc/jupyterhub-spawning.jpg)
1. Once it loads, you can choose between opening a `Notebook`, `Console` or `Other`.
   Note that you will now be working in your home directory as opposed to a specific workspace
   (see the [Data Transfer and Data Management](#data-transfer-and-data-management) section below for more details).

!!! caution "Stopping session on JupyterHub"
    Once you are done with your work on the ZIH HPC systems, explicitly stop the session by logging
    out via `File` &#8594; `Log Out` &#8594; `Stop My Server`.
    Alternatively, choose `File` &#8594; `Hub Control Panel` &#8594; `Stop My Server`.

Explore the [JupyterHub](../access/jupyterhub.md) page for more information.
The more "classical" way to work with HPC is based on the command line. After following
the instructions below, you will be on one of the login nodes.
This is the starting point for many tasks such as launching jobs and managing data.

!!! hint "Using SSH key pair"
    We recommend creating an SSH key pair by following the
    [instructions here](../access/ssh_login.md#before-your-first-connection).
    Using an SSH key pair is beneficial for security reasons, although it is not strictly
    necessary to work with ZIH HPC systems.

=== "Windows 10 and higher/Mac/Linux users"
    1. Open a terminal/shell/console and type in

       ```console
       marie@local$ ssh marie@login2.barnard.hpc.tu-dresden.de
       ```
    1. After typing in your password, you will see something like the following image.

For more information explore the [access compendium page](../access/ssh_login.md).
[Configuring default parameters](../access/ssh_login.md#configuring-default-parameters-for-ssh)
makes connecting more comfortable.
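For example, an entry in your local `~/.ssh/config` saves typing. A minimal sketch (the `barnard` host alias and the key path are illustrative assumptions, not ZIH defaults):

```
Host barnard
    HostName login2.barnard.hpc.tu-dresden.de
    User marie
    IdentityFile ~/.ssh/id_ed25519
```

With this entry in place, `ssh barnard` on your local machine is equivalent to the full command above.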

## Data Transfer and Data Management

First, it is shown how to create a workspace, then how to transfer data within and to/from the ZIH
HPC systems. Also remember to set the file permissions when collaborating with other researchers.

### Create a Workspace

There are different places for storing your data on ZIH HPC systems, called [Filesystems](../data_lifecycle/file_systems.md).
You need to create a [workspace](../data_lifecycle/workspaces.md) for your data on one of these
(see the example below).

The filesystems have different [properties](../data_lifecycle/file_systems.md) (available space,
storage time limit, permission rights). Therefore, choose the one that fits your project best.
To start, we recommend the Lustre filesystem **horse**.

!!! example "Creating a workspace on Lustre filesystem horse"

    The following command creates a workspace

    ```console
    marie@login$ ws_allocate -F horse -r 7 -m marie.testuser@tu-dresden.de -n test-workspace -d 90
    Info: creating workspace.
    /data/horse/ws/marie-test-workspace
    remaining extensions  : 10
    remaining time in days: 90
    ```
    To explain:

    - `ws_allocate` - command to allocate
    - `-F horse` - on the horse filesystem
    - `-r 7 -m marie.testuser@tu-dresden.de` - send a reminder to `marie.testuser@tu-dresden.de` 7 days before expiration
    - `-n test-workspace` - workspace name
    - `-d 90` - a lifetime of 90 days
The path to this workspace is `/data/horse/ws/marie-test-workspace`. You will need it when
transferring data or running jobs.

Find more [information on workspaces in the compendium](../data_lifecycle/workspaces.md).
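Besides `ws_allocate`, the workspace toolkit provides companion commands for listing, extending and releasing workspaces. A brief sketch for the workspace created above (check the workspaces page for the exact behavior on ZIH systems):

```console
marie@login$ ws_list                                # list your workspaces with paths and remaining lifetimes
marie@login$ ws_extend -F horse test-workspace 100  # reset the lifetime to 100 days, consuming one extension
marie@login$ ws_release -F horse test-workspace     # release the workspace once the data is safe elsewhere
```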

### Transferring Data *Within* ZIH HPC Systems

The approach depends on the data volume: up to 100 MB or above.
Use the command `cp` to copy the file `example.R` from your ZIH home directory to a workspace:

```console
marie@login$ cp /home/marie/example.R /data/horse/ws/marie-test-workspace
```

Analogously, use the command `mv` to move a file.
???+ example "`dtcp`/`dtmv` for medium to large data (above 100 MB)"

    Use the command `dtcp` to copy the directory `/walrus/ws/large-dataset` from one
    filesystem location to another:

    ```console
    marie@login$ dtcp -r /walrus/ws/large-dataset /data/horse/ws/marie-test-workspace/data
    ```

    Analogously, use the command `dtmv` to move a file or folder.

More details on the [datamover](../data_transfer/datamover.md) are available in the data
transfer section.

### Transferring Data *To/From* ZIH HPC Systems

???+ example "`scp` for transferring data to ZIH HPC systems"

    Copy the file `example.R` from your local machine to a workspace on the ZIH systems:

    ```console
    marie@local$ scp /home/marie/Documents/example.R marie@export.hpc.tu-dresden.de:/data/horse/ws/your_workspace/
    Password:
    example.R                                    100%  312    32.2KB/s   00:00
    ```

    Note, the target path contains `export.hpc.tu-dresden.de`, which is one of the
    so-called [export nodes](../data_transfer/export_nodes.md) that allow for data transfer from/to the outside.
???+ example "`scp` to transfer data from ZIH HPC systems to local machine"

    Copy the file `results.csv` from a workspace on the ZIH HPC systems to your local machine:

    ```console
    marie@local$ scp marie@export.hpc.tu-dresden.de:/data/horse/ws/marie-test-workspace/results.csv /home/marie/Documents/
    ```

    Feel free to explore further [examples](http://bropages.org/scp) of the `scp` command
    and possibilities of the [export nodes](../data_transfer/export_nodes.md).
!!! caution "Terabytes of data"

    If you are planning to move terabytes or even more from an outside machine into ZIH systems,
    please contact the ZIH [HPC support](mailto:hpc-support@tu-dresden.de) in advance.

### Permission Rights

permissions for write access for the group (`chmod g+w`).

```console
marie@login$ ls -la /data/horse/ws/marie-training-data/dataset.csv  # list file permissions
-rw-r--r-- 1 marie p_number_crunch 0 12. Jan 15:11 /data/horse/ws/marie-training-data/dataset.csv
marie@login$ chmod g+w /data/horse/ws/marie-training-data/dataset.csv  # add write permissions
marie@login$ ls -la /data/horse/ws/marie-training-data/dataset.csv  # list file permissions again
-rw-rw-r-- 1 marie p_number_crunch 0 12. Jan 15:11 /data/horse/ws/marie-training-data/dataset.csv
```
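As a generic Linux aside (not specific to ZIH paths), the symbolic `g+w` change above is equivalent to switching the numeric mode from `644` to `664`. A self-contained sketch using a temporary file:

```shell
tmpdir=$(mktemp -d)                      # scratch directory for the demo
touch "$tmpdir/dataset.csv"
chmod 644 "$tmpdir/dataset.csv"          # owner: rw, group: r, others: r
stat -c '%A' "$tmpdir/dataset.csv"       # prints -rw-r--r--
chmod g+w "$tmpdir/dataset.csv"          # add write permission for the group (same as chmod 664)
stat -c '%A' "$tmpdir/dataset.csv"       # prints -rw-rw-r--
rm -r "$tmpdir"                          # clean up
```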

??? hint "GUI-based data management"

## Software Environment

The [software](../software/overview.md) on the ZIH HPC systems is not installed system-wide,
but is provided within so-called [modules](../software/modules.md).
In order to use specific software you need to "load" the respective module.
This modifies the current environment (so only for the current user in the current session)
such that the software becomes available.

!!! note
    Different clusters (HPC systems) have different software installed, or might have different
    versions of the same software available. See [software](../software/overview.md) for more details.
Use the command `module spider <software>` to check all versions of a software package that are
available on the specific system you are currently on:
```console
marie@login$ module spider Python
```
We now see the list of available Python versions.
To get information on a specific module, use `module spider <software>/<version>`:
```console
marie@login$ module spider Python/3.9.5
```
e.g. `numpy`, `tensorflow` or `pytorch`.
Those modules may provide much better performance than the packages found on PyPI
(installed via `pip`), which have to work on any system, while our installation is optimized for
each ZIH system to make the best use of the specific CPUs and GPUs found there.
However, the Python package ecosystem (like others) is very heterogeneous and dynamic,
with daily updates.
The central update cycle for software on ZIH HPC systems is approximately every six months,
so the software installed as modules might be a bit older.
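If you need a newer PyPI release than a module provides, a common pattern is a Python virtual environment on top of the module-provided interpreter. A sketch, assuming an interactive session where a Python module is already loaded (the environment path is illustrative; on ZIH systems a workspace is the better location):

```shell
python3 -m venv "$HOME/venvs/project-env"   # create an isolated environment
. "$HOME/venvs/project-env/bin/activate"    # activate it for the current shell session
python -m pip install --upgrade numpy       # install a newer release than the module ships
```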

!!! warning
On HPC systems, computational work and resource requirements are encapsulated into so-called jobs.
Since all computational resources are shared with other users, these resources need to be
allocated. For managing these allocations a so-called job scheduler or batch system is used;
on ZIH systems this is [Slurm](https://slurm.schedmd.com/quickstart.html).
It is possible to run a job [interactively](../jobs_and_resources/slurm.md#interactive-jobs)
(real-time execution) or to submit it as a [batch job](../jobs_and_resources/slurm.md#batch-jobs)
(scheduled execution).
For beginners, we highly advise running jobs interactively. To do so, use the `srun` command
on any of the ZIH HPC clusters (systems).

For this `srun` command, it is possible to define options like the number of tasks (`--ntasks`),
the number of CPUs per task (`--cpus-per-task`),
the amount of time you would like to keep this interactive session open (`--time`), the memory per
CPU (`--mem-per-cpu`) and many others.
See the [Slurm documentation](../jobs_and_resources/slurm.md#interactive-jobs) for more details.
```console
marie@login$ srun --ntasks=1 --cpus-per-task=4 --time=1:00:00 --mem-per-cpu=1700 --pty bash -l  # allocate 4 cores for the interactive job
marie@compute$ module load Python  # load necessary packages
marie@compute$ cd /data/horse/ws/marie-test-workspace/  # go to your created workspace
marie@compute$ python test.py  # execute your file
Hello, World!
```
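The same resources can be requested as a batch job: the `srun` options move into `#SBATCH` lines of a job script that is submitted with `sbatch`. A minimal sketch (the file name, output file and script body are illustrative; it reuses the workspace path from above):

```bash
#!/bin/bash
#SBATCH --ntasks=1                  # same options as on the srun command line above
#SBATCH --cpus-per-task=4
#SBATCH --time=1:00:00
#SBATCH --mem-per-cpu=1700
#SBATCH --output=test-job.out       # illustrative output file

module load Python                  # recreate the software environment inside the job
cd /data/horse/ws/marie-test-workspace/
python test.py
```

Submit it with `sbatch job.sh` from a login node and check its state with `squeue`; see the [Slurm documentation](../jobs_and_resources/slurm.md#batch-jobs) for details.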