The detailed Abaqus documentation can be found at
abaqus **TODO LINK MISSING** (only accessible from within the
TU Dresden campus net).

**Example - Thanks to Benjamin Groeger, Inst. f. Leichtbau und Kunststofftechnik**

1. Prepare an Abaqus input file (here the input example from Benjamin)
Rot-modell-BenjaminGroeger.inp **TODO LINK** (%ATTACHURL%/Rot-modell-BenjaminGroeger.inp)
2. Prepare a batch script on taurus like this
```
#!/bin/bash
### Thanks to Benjamin Groeger, Institut fuer Leichtbau und Kunststofftechnik, 38748
### runs on taurus and needs ca. 20 sec with 4 CPUs
### generates the files:
### yyyy.com
### yyyy.dat
### yyyy.msg
### yyyy.odb
### yyyy.prt
### yyyy.sim
### yyyy.sta

#SBATCH --nodes=1                ### with >1 node abaqus needs a node list
#SBATCH --ntasks-per-node=4
#SBATCH --mem=500                ### memory (sum)
#SBATCH --time=00:04:00
### give a name, whatever you want
#SBATCH --job-name=yyyy
### you get emails when the job finishes or fails
### set your correct email
#SBATCH --mail-type=END,FAIL
#SBATCH --mail-user=xxxxx.yyyyyy@mailbox.tu-dresden.de
### set your project
#SBATCH -A p_xxxxxxx

### Abaqus has its own MPI
unset SLURM_GTIDS

### load and start
module load ABAQUS/2019
abaqus interactive input=Rot-modell-BenjaminGroeger.inp job=yyyy cpus=4 mp_mode=mpi
```
3. Start the batch script (the name of our script is
"batch-Rot-modell-BenjaminGroeger")
```
sbatch batch-Rot-modell-BenjaminGroeger   ---> you will get a job number = JobID (for example 3130522)
```
4. Check the status of the job
```
squeue -u your_login   --> in column "ST" (Status) you will find R=Running or PD=Pending (waiting for resources)
```
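Once the job no longer shows up in `squeue`, it has finished. A minimal sketch for inspecting or cancelling it with standard Slurm tools (3130522 is the example JobID from step 3):

```
# Show accounting data for the finished job
sacct -j 3130522 --format=JobID,JobName,State,Elapsed,MaxRSS
# Cancel a pending or running job if something went wrong
scancel 3130522
```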
## ANSYS

modules **TODO LINK** (RuntimeEnvironment). To list the available versions and load a
particular ANSYS version, type

```
module avail ANSYS
...
module load ANSYS/VERSION
```
In general, HPC systems are not designed for interactive GUI work.

the SSH connection. For OpenSSH this option is '-X' and it is worthwhile
to enable compression of all data via '-C'.

```
# Connect to taurus, e.g. ssh -CX
module load ANSYS/VERSION
runwb2
```
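Instead of running the GUI on the login node, you can also start it inside an interactive Slurm allocation. A minimal sketch, assuming the installed Slurm version supports X11 forwarding via `--x11` (this flag is an assumption, check the local documentation):

```
# Request an interactive shell with X11 forwarding (assumes Slurm --x11 support)
srun --ntasks=1 --time=01:00:00 --pty --x11 bash
module load ANSYS/VERSION
runwb2
```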
together with DCV works as follows:

- Follow the instructions within virtual
desktops **TODO LINK** (Compendium.VirtualDesktops)

```
module load ANSYS
```

```
unset SLURM_GTIDS
```

- Note the hints w.r.t. GPU support on the DCV side

```
runwb2
```
### Using Workbench in Batch Mode

via DCV **TODO LINK** (DesktopCloudVisualization), so we recommend you simply edit
the XML file directly with a text editor of your choice. It is located
under:

`$HOME/.mw/Application Data/Ansys/v181/SolveHandlers.xml`
(mind the space in the path). You might have to adjust the ANSYS version
(v181) in the path. In this file, you can find the parameter

`<MaxNumberProcessors>2</MaxNumberProcessors>`

that you can simply change to something like 16 or 24. For now, you
should stay within single-node boundaries, because multi-node

match your used `--cpus-per-task` parameter in your sbatch script.
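Since the file is plain XML, the change can also be scripted. A minimal sketch (the value 16 and the v181 path are examples, adjust them to your installation):

```
# Raise the processor limit in SolveHandlers.xml (the path contains a space, so quote it)
sed -i 's|<MaxNumberProcessors>2</MaxNumberProcessors>|<MaxNumberProcessors>16</MaxNumberProcessors>|' \
    "$HOME/.mw/Application Data/Ansys/v181/SolveHandlers.xml"
```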
## COMSOL Multiphysics

"[COMSOL Multiphysics](http://www.comsol.com) (formerly FEMLAB) is a
finite element analysis, solver and simulation software package for
various physics and engineering applications, especially coupled
phenomena, or multiphysics."
([Wikipedia](http://en.wikipedia.org/wiki/COMSOL_Multiphysics))
Comsol may be used remotely on ZIH machines or locally on the desktop,
using the ZIH license server.
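For non-interactive runs, COMSOL's batch mode can be combined with a Slurm script. A minimal sketch, assuming a COMSOL module is installed (the module name, thread count, and file names are placeholders; check `module avail COMSOL` first):

```
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=1900M
#SBATCH --time=01:00:00

# Module name is an assumption; adjust to the locally installed version
module load COMSOL
# -np sets the number of threads; input/output file names are placeholders
comsol batch -np 4 -inputfile model.mph -outputfile model_solved.mph
```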
are installed on all machines.
To run the MPI version on Taurus or Venus you need a batch file (submit
with `sbatch <filename>`) like:
```
#!/bin/bash
#SBATCH --time=01:00:00        # walltime
#SBATCH --ntasks=16            # number of processor cores (i.e. tasks)
#SBATCH --mem-per-cpu=1900M    # memory per CPU core

module load ls-dyna
srun mpp-dyna i=neon_refined01_30ms.k memory=120000000
```
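Submission and monitoring then work as in the Abaqus example above (the script name is a placeholder):

```
sbatch my-lsdyna-batchfile   # placeholder name for the batch file above
squeue -u your_login         # R=Running, PD=Pending
```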