Commit 6ea61302 authored by LocNgu

add admonition for longer examples
marie@login$ source $FOAM_BASH
marie@login$ # source $FOAM_CSH
```
???+ example "Example for OpenFOAM job script:"

    ```bash
    #!/bin/bash
    #SBATCH --time=12:00:00                  # walltime
    #SBATCH --ntasks=60                      # number of processor cores (i.e. tasks)
    #SBATCH --mem-per-cpu=500M               # memory per CPU core
    #SBATCH --job-name="Test"                # job name
    #SBATCH --mail-user=marie@tu-dresden.de  # email address (only tu-dresden)
    #SBATCH --mail-type=ALL

    OUTFILE="Output"
    module load OpenFOAM
    source $FOAM_BASH
    cd /scratch/ws/1/marie-example-workspace  # work directory using workspace
    srun pimpleFoam -parallel > "$OUTFILE"
    ```
## Ansys CFX
Ansys CFX is a powerful finite-volume-based program package for modeling general fluid flow in
complex geometries. The main components of the CFX package are the flow solver cfx5solve, the
geometry and mesh generator cfx5pre, and the post-processor cfx5post.
???+ example "Example for CFX job script:"

    ```bash
    #!/bin/bash
    #SBATCH --time=12:00                     # walltime
    #SBATCH --ntasks=4                       # number of processor cores (i.e. tasks)
    #SBATCH --mem-per-cpu=1900M              # memory per CPU core
    #SBATCH --mail-user=marie@tu-dresden.de  # email address (only tu-dresden)
    #SBATCH --mail-type=ALL

    module load ANSYS
    cd /scratch/ws/1/marie-example-workspace  # work directory using workspace
    cfx-parallel.sh -double -def StaticMixer.def
    ```
## Ansys Fluent
???+ example "Fluent needs the host names and can be run in parallel like this:"

    ```bash
    #!/bin/bash
    #SBATCH --time=12:00                     # walltime
    #SBATCH --ntasks=4                       # number of processor cores (i.e. tasks)
    #SBATCH --mem-per-cpu=1900M              # memory per CPU core
    #SBATCH --mail-user=marie@tu-dresden.de  # email address (only tu-dresden)
    #SBATCH --mail-type=ALL

    module load ANSYS

    nodeset -e $SLURM_JOB_NODELIST | xargs -n1 > hostsfile_job_$SLURM_JOBID.txt

    fluent 2ddp -t$SLURM_NTASKS -g -mpi=intel -pinfiniband -cnf=hostsfile_job_$SLURM_JOBID.txt < input.in
    ```
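The `nodeset -e $SLURM_JOB_NODELIST | xargs -n1` pipeline expands Slurm's compact nodelist notation into one hostname per line, the format the `-cnf` hosts file expects. As a minimal sketch of what that expansion does (plain bash, single numeric range only; the `taurusi` prefix and the file name are made up for illustration):

```bash
# Sketch: expand a compact nodelist like "taurusi[6001-6003]" by hand,
# without nodeset. Assumes exactly one numeric range in brackets.
nodelist="taurusi[6001-6003]"
prefix=${nodelist%%\[*}                      # -> "taurusi"
range=${nodelist##*\[}; range=${range%\]}    # -> "6001-6003"
lo=${range%-*}; hi=${range#*-}
for i in $(seq "$lo" "$hi"); do
    echo "${prefix}${i}"
done > hostsfile_example.txt
cat hostsfile_example.txt
```

Real nodelists can contain several ranges and zero-padded indices, which is why the job script uses `nodeset` instead of shell string handling.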
To use fluent interactively, please try:
```console
marie@login$ fluent &
```
## STAR-CCM+
!!! note

    You have to use your own license in order to run STAR-CCM+ on ZIH systems, so you have to
    specify the parameters `-licpath` and `-podkey`, see the example below.
Our installation provides a script `create_rankfile -f CCM` that generates a host list from the
Slurm job environment that can be passed to `starccm+`, enabling it to run across multiple nodes.
???+ example

    ```bash
    #!/bin/bash
    #SBATCH --time=12:00                     # walltime
    #SBATCH --ntasks=32                      # number of processor cores (i.e. tasks)
    #SBATCH --mem-per-cpu=2500M              # memory per CPU core
    #SBATCH --mail-user=marie@tu-dresden.de  # email address (only tu-dresden)
    #SBATCH --mail-type=ALL

    module load STAR-CCM+

    LICPATH="port@host"
    PODKEY="your podkey"
    INPUT_FILE="your_simulation.sim"
    starccm+ -collab -rsh ssh -cpubind off -np $SLURM_NTASKS -on $(/sw/taurus/tools/slurmtools/default/bin/create_rankfile -f CCM) -batch -power -licpath $LICPATH -podkey $PODKEY $INPUT_FILE
    ```