Commit 88f81b64 authored by Michael Müller

Merge branch 'containers' into 'preview'

Containers: Moves pages to mkdocs and fix checks

See merge request zih/hpc-compendium/hpc-compendium!117

parents 0cd2e4b0 d2a847de
# Singularity Example Definitions
## Basic example
A usual workflow to create a Singularity definition consists of the following steps:
- Start from a base image
- Install dependencies
    - Package manager
    - Other sources
- Build & install own binaries
- Provide entrypoints & metadata
An example doing all this:
```Bash
Bootstrap: docker
From: alpine
%post
. /.singularity.d/env/10-docker*.sh
apk add g++ gcc make wget cmake
wget https://github.com/fmtlib/fmt/archive/5.3.0.tar.gz
tar -xf 5.3.0.tar.gz
mkdir build && cd build
cmake ../fmt-5.3.0 -DFMT_TEST=OFF
make -j$(nproc) install
cd ..
rm -r fmt-5.3.0*
cat << EOF > hello.cpp
#include <fmt/format.h>
int main(int argc, char** argv){
if(argc == 1) fmt::print("No arguments passed!\n");
else fmt::print("Hello {}!\n", argv[1]);
}
EOF
g++ hello.cpp -o hello -lfmt
mv hello /usr/bin/hello
%runscript
hello "$@"
%labels
Author Alexander Grund
Version 1.0.0
%help
Display a greeting using the fmt library
Usage:
./hello
```
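For example, assuming the definition above is saved as `hello.def` (a placeholder name), building and using it might look like this:

```Bash
# Building requires root (or the VM workflow described further below)
sudo singularity build hello.sif hello.def

# The %runscript forwards all arguments to the installed binary
./hello.sif World           # prints "Hello World!"
singularity run hello.sif   # prints "No arguments passed!"

# The %help section can be displayed with run-help
singularity run-help hello.sif
```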
## CUDA + CuDNN + OpenMPI
- The chosen CUDA version depends on the installed driver of the host
- OpenMPI needs PMI for SLURM integration
- OpenMPI needs CUDA for GPU copy-support
- OpenMPI needs ibverbs libs for InfiniBand
- `openmpi-mca-params.conf` is required to avoid warnings on fork (OK on Taurus)
- Environment variables `SLURM_VERSION` and `OPENMPI_VERSION` can be set to choose different
  versions when building the container
```Bash
Bootstrap: docker
From: nvidia/cuda-ppc64le:10.1-cudnn7-devel-ubuntu18.04
%labels
Author ZIH
Requires CUDA driver 418.39+.
%post
. /.singularity.d/env/10-docker*.sh
apt-get update
apt-get install -y cuda-compat-10.1
apt-get install -y libibverbs-dev ibverbs-utils
# Install basic development tools
apt-get install -y gcc g++ make wget python
apt-get autoremove; apt-get clean
cd /tmp
: ${SLURM_VERSION:=17-02-11-1}
wget https://github.com/SchedMD/slurm/archive/slurm-${SLURM_VERSION}.tar.gz
tar -xf slurm-${SLURM_VERSION}.tar.gz
cd slurm-slurm-${SLURM_VERSION}
./configure --prefix=/usr/ --sysconfdir=/etc/slurm --localstatedir=/var --disable-debug
make -C contribs/pmi2 -j$(nproc) install
cd ..
rm -rf slurm-*
: ${OPENMPI_VERSION:=3.1.4}
wget https://download.open-mpi.org/release/open-mpi/v${OPENMPI_VERSION%.*}/openmpi-${OPENMPI_VERSION}.tar.gz
tar -xf openmpi-${OPENMPI_VERSION}.tar.gz
cd openmpi-${OPENMPI_VERSION}/
./configure --prefix=/usr/ --with-pmi --with-verbs --with-cuda
make -j$(nproc) install
echo "mpi_warn_on_fork = 0" >> /usr/etc/openmpi-mca-params.conf
echo "btl_openib_warn_default_gid_prefix = 0" >> /usr/etc/openmpi-mca-params.conf
cd ..
rm -rf openmpi-*
```
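A possible usage sketch for this definition; the file and application names are placeholders and the exact SLURM options depend on your job setup:

```Bash
# Build on a machine with root rights and the matching (ppc64le) architecture,
# or via the VM tooling described further below
sudo singularity build mpi-cuda.sif mpi-cuda.def

# Check that Open MPI inside the container was built with CUDA support
singularity exec --nv mpi-cuda.sif ompi_info | grep -i cuda

# Example launch under SLURM (application name is a placeholder)
srun -n 2 singularity exec --nv mpi-cuda.sif ./my_mpi_app
```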
# Singularity Recipe Hints
## GUI (X11) applications
Running GUI applications inside a singularity container is possible out of the box. Check the
following definition:
```Bash
Bootstrap: docker
From: centos:7
%post
yum install -y xeyes
```
This image may be run with
```Bash
singularity exec xeyes.sif xeyes
```
This works because all the magic is done by Singularity already, like setting `$DISPLAY` to the
outside display and mounting `$HOME` so `$HOME/.Xauthority` (the X11 authentication cookie) is
found. When you are using `--contain` or `--no-home` you have to set that cookie yourself or
mount/copy it inside the container. Similarly, with `--cleanenv` you have to set `$DISPLAY`, e.g. via
```Bash
export SINGULARITYENV_DISPLAY=$DISPLAY
```
When you run a container as root (via `sudo`) you may need to allow root for your local display
port: `xhost +local:root`
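As an illustration, a possible way to run the container with `--contain` while providing the cookie and display yourself (assuming the cookie is at the default location `$HOME/.Xauthority`):

```Bash
# Bind the X11 cookie into the contained home and pass DISPLAY explicitly
SINGULARITYENV_DISPLAY=$DISPLAY \
    singularity exec --contain -B "$HOME/.Xauthority" xeyes.sif xeyes
```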
### Hardware acceleration
If you want hardware acceleration you **may** need [VirtualGL](https://virtualgl.org). An example
definition file is as follows:
```Bash
Bootstrap: docker
From: centos:7
%post
yum install -y glx-utils # for glxgears example app
yum install -y curl
VIRTUALGL_VERSION=2.6.2 # Replace by required (e.g. latest) version
curl -sSL https://downloads.sourceforge.net/project/virtualgl/"${VIRTUALGL_VERSION}"/VirtualGL-"${VIRTUALGL_VERSION}".x86_64.rpm -o VirtualGL-"${VIRTUALGL_VERSION}".x86_64.rpm
yum install -y VirtualGL*.rpm
/opt/VirtualGL/bin/vglserver_config -config +s +f -t
rm VirtualGL-*.rpm
# Install video drivers AFTER VirtualGL to avoid them being overwritten
yum install -y mesa-dri-drivers # for e.g. intel integrated GPU drivers. Replace by your driver
```
You can now run the application with vglrun:
```Bash
singularity exec vgl.sif vglrun glxgears
```
**Attention:** Using VirtualGL may not be required at all and could even decrease performance. To
check, install e.g. glxgears as above and your graphics driver (or use the VirtualGL image from
above) and disable vsync:
```Bash
vblank_mode=0 singularity exec vgl.sif glxgears
```
Compare the FPS output with the glxgears prefixed by vglrun (see above) to see which produces more
FPS (or runs at all).
**NVIDIA GPUs** need the `--nv` parameter for the singularity command:
```Bash
singularity exec --nv vgl.sif glxgears
```
# Singularity on Power9 / ml partition
Building Singularity containers from a recipe on Taurus is normally not possible due to the
requirement of root (administrator) rights, see [Containers](containers.md). For obvious reasons
users on Taurus cannot be granted root permissions.
The solution is to build your container on your local Linux machine by executing something like
```Bash
sudo singularity build myContainer.sif myDefinition.def
```
Then you can copy the resulting myContainer.sif to Taurus and execute it there.
This does **not** work on the ml partition as it uses the Power9 architecture which your laptop
likely doesn't.
For this we provide a Virtual Machine (VM) on the ml partition which allows users to gain root
permissions in an isolated environment. The workflow to use this manually is described at
[another page](Cloud.md) but is quite cumbersome.
To make this easier two programs are provided: `buildSingularityImage` and `startInVM` which do what
they say. The latter is for more advanced use cases so you should be fine using
*buildSingularityImage*, see the following section.
**IMPORTANT:** You need to have your default SSH key without a password for the scripts to work as
entering a password through the scripts is not supported.
**The recommended workflow** is to create and test a definition file locally. You usually start from
a base Docker container. Those typically exist for different architectures but with a common name
(e.g. 'ubuntu:18.04'). Singularity automatically uses the correct Docker container for your current
architecture when building. So in most cases you can write your definition file, build it and test
it locally, then move it to Taurus and build it on Power9 without any further changes. However,
sometimes Docker containers for different architectures have different suffixes, in which case you'd
need to change that when moving to Taurus.
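To illustrate the point about architecture-specific image names, using the CUDA base image from the example above (file names are placeholders):

```Bash
# Same definition, different base image name depending on the target architecture:
#   From: nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04          # x86_64 (local laptop)
#   From: nvidia/cuda-ppc64le:10.1-cudnn7-devel-ubuntu18.04  # ppc64le (ml partition)

# Typical flow: build and test locally, then rebuild (with the adjusted base
# image if necessary) on Taurus, see the next section
sudo singularity build test.sif myDefinition.def                 # local machine
buildSingularityImage --arch=power9 test.sif myDefinition.def    # on Taurus
```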
## Building a Singularity container in a job
To build a singularity container on Taurus simply run:
```Bash
buildSingularityImage --arch=power9 myContainer.sif myDefinition.def
```
This command will submit a batch job and immediately return. Note that while "power9" is currently
the only supported architecture, the parameter is still required. If you want it to block while the
image is built and see live output, use the parameter `--interactive`:
```Bash
buildSingularityImage --arch=power9 --interactive myContainer.sif myDefinition.def
```
There are more options available which can be shown by running `buildSingularityImage --help`. All
have reasonable defaults. The most important ones are:
- `--time <time>`: Set a higher job time if the default time is not
enough to build your image and your job is cancelled before completing. The format is the same
as for SLURM.
- `--tmp-size=<size in GB>`: Set a size used for the temporary
location of the Singularity container. Basically the size of the extracted container.
- `--output=<file>`: Path to a file used for (log) output generated
while building your container.
- Various singularity options are passed through. E.g.
`--notest, --force, --update`. See, e.g., `singularity --help` for details.
For **advanced users** it is also possible to manually request a job with a VM (`srun -p ml
--cloud=kvm ...`) and then use this script to build a Singularity container from within the job. In
this case the `--arch` and other SLURM related parameters are not required. The advantage of using
this script is that it automates the waiting for the VM and mounting of host directories into it
(can also be done with `startInVM`) and creates a temporary directory usable with Singularity inside
the VM controlled by the `--tmp-size` parameter.
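A sketch combining some of these options; the values are examples only and the `--pty bash` part of the `srun` call is an assumption for getting an interactive shell:

```Bash
# Give the build job more time, a larger temporary location and a log file
buildSingularityImage --arch=power9 --time 04:00:00 --tmp-size=20 \
    --output=build.log myContainer.sif myDefinition.def

# Advanced: allocate the VM job yourself, then build from within it
# (inside such a job the --arch parameter is not required)
srun -p ml --cloud=kvm --pty bash
buildSingularityImage myContainer.sif myDefinition.def
```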
## Filesystem
**Read here if you have problems like "File not found".**
As the build starts in a VM you may not have access to all your files. It is usually bad practice
to refer to local files from inside a definition file anyway as this reduces reproducibility.
However common directories are available by default. For others, care must be taken. In short:
- `/home/$USER` and `/scratch/$USER` are available and should be used
- `/scratch/<group>` also works for all groups the user is in
- `/projects/<group>` works similarly, but is read-only! So don't use this to store your generated
  container directly; rather move it there afterwards
- `/tmp` is the VM-local temporary directory. All files put here will be lost!
If the current directory is inside (or equal to) one of the above (except `/tmp`), then relative paths
for container and definition work as the script changes to the VM equivalent of the current
directory. Otherwise you need to use absolute paths. Using `~` in place of `$HOME` does work too.
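A short example of these path rules (directory names are placeholders):

```Bash
# Inside one of the accessible directories, relative paths work
cd /scratch/$USER/containers
buildSingularityImage --arch=power9 myContainer.sif myDefinition.def

# From anywhere else, use absolute paths (~ is also accepted in place of $HOME)
buildSingularityImage --arch=power9 \
    /scratch/$USER/containers/myContainer.sif ~/definitions/myDefinition.def
```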
Under the hood, the filesystem of Taurus is mounted via SSHFS at `/host_data`, so if you need any
other files they can be found there.
There is also a new SSH key named "kvm" which is created by the scripts and authorized inside the VM
to allow for password-less access to SSHFS. This is stored at `~/.ssh/kvm` and regenerated if it
does not exist. It is also added to `~/.ssh/authorized_keys`. Note that removing the key file does
not remove it from `authorized_keys`, so remove it manually if you need to. It can be easily
identified by the comment on the key. However, removing this key is **NOT** recommended, as it
needs to be re-generated on every script run.
## Starting a Job in a VM
Especially when developing a Singularity definition file it might be useful to get a shell directly
on a VM. To do so simply run:
```Bash
startInVM --arch=power9
```
This will execute an `srun` command with the `--cloud=kvm` parameter, wait till the VM is ready,
mount all folders (just like `buildSingularityImage`, see the Filesystem section above) and come
back with a bash inside the VM. Inside that you are root, so you can directly execute `singularity
build` commands.
As usual more options can be shown by running `startInVM --help`, the most important one being
`--time`.
There are two special use cases for this script:

1. Execute an arbitrary command inside the VM instead of getting a bash by appending the command to
   the script. Example: `startInVM --arch=power9 singularity build ~/myContainer.sif ~/myDefinition.def`
1. Use the script in a job manually allocated via srun/sbatch. This will work the same as when
   running outside a job but will **not** start a new job. This is useful for using it inside batch
   scripts, when you already have an allocation or need special arguments for the job system. Again
   you can run an arbitrary command by passing it to the script.
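A sketch of the second use case, calling `startInVM` from a batch script where the allocation already exists; the exact `#SBATCH` options are assumptions based on the `srun` example above:

```Bash
#!/bin/bash
#SBATCH -p ml
#SBATCH --cloud=kvm
#SBATCH --time=02:00:00

# Inside this allocation startInVM does not start a new job; it waits for the
# VM, mounts the host directories and runs the given command as root in the VM
startInVM singularity build ~/myContainer.sif ~/myDefinition.def
```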
# Use of Containers
[Containerization](https://www.ibm.com/cloud/learn/containerization) means encapsulating or packaging
up software code and all its dependencies so that it runs uniformly and consistently on any
infrastructure. On Taurus, [Singularity](https://sylabs.io/) is used as the standard container
solution. Singularity enables users to have full control of their environment. This means that you
don't have to ask the HPC support to install anything for you; you can put it in a Singularity
container and run it! As opposed to Docker (the most famous container solution), Singularity is much
more suited to being used in an HPC environment and more efficient in many cases. Docker containers
can easily be used in Singularity. Information about the use of Singularity on Taurus can be found
[here]**todo link**.
In some cases using Singularity requires a Linux machine with root privileges (e.g. using the ml
partition), the same architecture and a compatible kernel. For many reasons, users on Taurus cannot
be granted root permissions. A solution is a Virtual Machine (VM) on the ml partition which allows
users to gain root permissions in an isolated environment. There are two main options on how to work
with VMs on Taurus:
1. [VM tools]**todo link**. Automated tooling for using virtual machines;
1. [Manual method]**todo link**. It requires more operations but gives you more flexibility and reliability.
Additional information: examples of definition files for Singularity containers ([here]**todo
link**) and some hints ([here]**todo link**).
## Singularity
Useful links: [Containers]**todo link**, [Custom EasyBuild Environment]**todo link**, [Virtual
machine on Taurus]**todo link**
If you wish to containerize your workflow/applications, you can use Singularity containers on
Taurus. As opposed to Docker, this solution is much more suited to being used in an HPC environment.
Existing Docker containers can easily be converted.
ZIH wiki sites:
- [Example Definitions](SingularityExampleDefinitions.md)
- [Building Singularity images on Taurus](VMTools.md)
- [Hints on Advanced usage](SingularityRecipeHints.md)
It is available on Taurus without loading any module.
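For example, you can check the installed version directly on a node:

```Bash
singularity --version
```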
### Local installation
One advantage of containers is that you can create one on a local machine (e.g. your laptop) and
move it to the HPC system to execute it there. This requires a local installation of singularity.
The easiest way to do so is:
1. Check if go is installed by executing `go version`. If it is **not**:
```Bash
wget https://storage.googleapis.com/golang/getgo/installer_linux && chmod +x \
    installer_linux && ./installer_linux && source $HOME/.bash_profile
```
1. Follow the instructions to [install Singularity](https://github.com/sylabs/singularity/blob/master/INSTALL.md#clone-the-repo).
   First, clone the repo:
```Bash
mkdir -p ${GOPATH}/src/github.com/sylabs && cd ${GOPATH}/src/github.com/sylabs && \
    git clone https://github.com/sylabs/singularity.git && cd singularity
```
Check out the version you want (see the [GitHub releases page](https://github.com/sylabs/singularity/releases)
for available releases), e.g.
```Bash
git checkout v3.2.1
```
Build and install
```Bash
cd ${GOPATH}/src/github.com/sylabs/singularity && ./mconfig && cd ./builddir && \
    make && sudo make install
```
### Container creation
Since creating a new container requires access to system-level tools and thus root privileges, it is
not possible for users to generate new custom containers on Taurus directly. You can, however,
import an existing container from, e.g., Docker.
In case you wish to create a new container, you can do so on your own local machine where you have
the necessary privileges and then simply copy your container file to Taurus and use it there.
This does not work on our **ml** partition, as it uses the Power9 architecture, which is different
from the x86 architecture in common computers/laptops. For that you can use the
[VM Tools](VMTools.md).
#### Creating a container
Creating a container is done by writing a definition file and passing it to
```Bash
singularity build myContainer.sif myDefinition.def
```
NOTE: This must be done on a machine (or [VM](Cloud.md)) with root rights.
A definition file contains a bootstrap
[header](https://sylabs.io/guides/3.2/user-guide/definition_files.html#header)
where you choose the base and
[sections](https://sylabs.io/guides/3.2/user-guide/definition_files.html#sections)
where you install your software.
The most common approach is to start from an existing docker image from DockerHub. For example, to
start from an [Ubuntu image](https://hub.docker.com/_/ubuntu) copy the following into a new file
called `ubuntu.def` (or any other filename of your choosing):
```Bash
Bootstrap: docker
From: ubuntu:trusty

%runscript
    echo "This is what happens when you run the container..."

%post
    apt-get update
    apt-get install -y g++
```
Then you can call:
```Bash
singularity build ubuntu.sif ubuntu.def
```
And it will install Ubuntu with g++ inside your container, according to your definition file.
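A quick check that the build produced what the definition describes:

```Bash
# g++ was installed in %post, the message comes from %runscript
singularity exec ubuntu.sif g++ --version
singularity run ubuntu.sif
```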
More bootstrap options are available. The following example, for instance, bootstraps a basic CentOS
7 image.
```Bash
BootStrap: yum
OSVersion: 7
MirrorURL: http://mirror.centos.org/centos-%{OSVERSION}/%{OSVERSION}/os/$basearch/
Include: yum
%runscript
echo "This is what happens when you run the container..."
%post
echo "Hello from inside the container"
yum -y install vim-minimal
```
More examples of definition files can be found at
https://github.com/singularityware/singularity/tree/master/examples
#### Importing a docker container
You can import an image directly from the Docker repository (Docker Hub):
```Bash
singularity build my-container.sif docker://ubuntu:latest
```
As opposed to bootstrapping a container, importing from Docker does **not require root privileges**
and therefore works on Taurus directly.
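A couple more examples of importing images without root privileges; the image names are just common public tags:

```Bash
# Build a .sif directly from a public Docker Hub image
singularity build tensorflow.sif docker://tensorflow/tensorflow:latest-gpu

# `pull` works as well and derives the file name from the tag
singularity pull docker://python:3.8-slim
```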
Creating a singularity container directly from a local docker image is possible but not recommended.
Steps:
```Bash
# Start a docker registry
$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
# Push local docker container to it
$ docker tag alpine localhost:5000/alpine
$ docker push localhost:5000/alpine
# Create def file for singularity like this...
$ cat example.def
Bootstrap: docker
Registry: http://localhost:5000
From: alpine
# Build singularity container
$ singularity build --nohttps alpine.sif example.def
```
#### Starting from a Dockerfile
As singularity definition files and Dockerfiles are very similar you can start creating a definition
file from an existing Dockerfile by "translating" each section.
There are tools to automate this. One of them is
[spython](https://github.com/singularityhub/singularity-cli), which can be installed with `pip`
(add `--user` if you don't want to install it system-wide):
`pip3 install -U spython`
With this you can simply issue the following command to convert a
Dockerfile in the current folder into a singularity definition file:
`spython recipe Dockerfile myDefinition.def`
Now please **verify** your generated definition and adjust where required!
There are some notable differences between Singularity definitions and Dockerfiles:

1. Command chains in Dockerfiles (`apt-get update && apt-get install foo`) must be split into
   separate commands (`apt-get update; apt-get install foo`). Otherwise a failing command before the
   ampersand is considered "checked" and does not fail the build.
1. The environment variables section in Singularity is only set on execution of the final image, not
   during the build as with Docker. So `ENV` sections from Docker must be translated to an entry in
   the `%environment` section and **additionally** set in the `%runscript` section if the variable
   is used there.
1. `VOLUME` sections from Docker cannot be represented in Singularity containers. Use the runtime
   option `-B` to bind folders manually.
1. `CMD` and `ENTRYPOINT` from Docker do not have a direct representation in Singularity. The
   closest is to check if any arguments are given in the `%runscript` section and call the command
   from `ENTRYPOINT` with those; if none are given, call `ENTRYPOINT` with the arguments of `CMD`:

    ```Bash
    if [ $# -gt 0 ]; then
        <ENTRYPOINT> "$@"
    else
        <ENTRYPOINT> <CMD>
    fi
    ```
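To illustrate points 2 and 4, a possible translation of a Docker `ENV`/`ENTRYPOINT`/`CMD` trio into a definition file (names and values are made up):

```Bash
# Dockerfile:
#   ENV MODE=fast
#   ENTRYPOINT ["run_myapp"]
#   CMD ["--mode", "fast"]
#
# Possible Singularity translation:
%environment
    export MODE=fast

%runscript
    if [ $# -gt 0 ]; then
        run_myapp "$@"
    else
        run_myapp --mode fast
    fi
```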
### Using the containers
#### Entering a shell in your container
A read-only shell can be entered as follows:
```Bash
singularity shell my-container.sif
```
**IMPORTANT:** In contrast to, for instance, Docker, this will mount various folders from the host
system including $HOME. This may lead to problems with, e.g., Python that stores local packages in
the home folder, which may not work inside the container. It also makes reproducibility harder. It
is therefore recommended to use `--contain/-c` to not bind $HOME (and others like `/tmp`)
automatically and instead set up your binds manually via `-B` parameter. Example:
```Bash
singularity shell --contain -B /scratch,/my/folder-on-host:/folder-in-container my-container.sif
```
You can write into those folders by default. If this is not desired, add an `:ro` for read-only to
the bind specification (e.g. `-B /scratch:/scratch:ro`). Note that we already defined bind paths
for `/scratch`, `/projects` and `/sw` in our global `singularity.conf`, so you needn't use the `-B`
parameter for those.
If you wish, for instance, to install additional packages, you have to use the `-w` parameter to
enter your container with it being writable. This, again, must be done on a system where you have
the necessary privileges, otherwise you can only edit files that your user has the permissions for.
E.g.:
```Bash
singularity shell -w my-container.sif
Singularity.my-container.sif> yum install htop
```
The `-w` parameter should only be used to make permanent changes to your container, not for your
productive runs (it can only be used writable by one user at a time). You should write your output
to the usual Taurus file systems like `/scratch`.
#### Running a command inside the container
While the "shell" command can be useful for tests and setup, you can also launch your applications
inside the container directly using "exec":
```Bash
singularity exec my-container.img /opt/myapplication/bin/run_myapp
```
This can be useful if you wish to create a wrapper script that transparently calls a containerized
application for you. E.g.:
```Bash
#!/bin/bash
X=`which singularity 2>/dev/null`
if [ "z$X" = "z" ] ; then
  echo "Singularity not found. Is the module loaded?"
  exit 1
fi
singularity exec /scratch/p_myproject/my-container.sif /opt/myapplication/run_myapp "$@"
```
The better approach however is to use `singularity run`, which executes whatever was set in the
`%runscript` section of the definition file with the arguments you pass to it.
Example: Build the following definition file into an image:
```Bash
Bootstrap: docker
From: ubuntu:trusty

%post
    apt-get install -y g++
    echo '#include <iostream>' > main.cpp
    echo 'int main(int argc, char** argv){ std::cout << argc << " args for " << argv[0] << std::endl; }' >> main.cpp
    g++ main.cpp -o myCoolApp
    mv myCoolApp /usr/local/bin/myCoolApp

%runscript
    myCoolApp "$@"
```
```Bash
singularity build my-container.sif example.def
```
Then you can run your application via
```Bash
singularity run my-container.sif first_arg 2nd_arg
```
Alternatively you can execute the container directly which is
equivalent:
```Bash
./my-container.sif first_arg 2nd_arg
```
With this you can even masquerade an application with a singularity container as if it was an actual
program by naming the container just like the binary:
```Bash
mv my-container.sif myCoolApp
```
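After renaming, the container behaves like an ordinary executable, continuing the example above:

```Bash
./myCoolApp first_arg 2nd_arg   # equivalent to `singularity run my-container.sif first_arg 2nd_arg`
```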
### Use-cases
One common use-case for containers is that you need an operating system with a newer GLIBC version
than what is available on Taurus. E.g., the bullx Linux on Taurus used to be based on RHEL 6 with a
rather dated GLIBC version 2.12, so some binary-distributed applications didn't work on it anymore.
You can use one of our pre-made CentOS 7 container images (`/scratch/singularity/centos7.img`) to
circumvent this problem. Example:
```Bash
$ singularity exec /scratch/singularity/centos7.img ldd --version
ldd (GNU libc) 2.17
```
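The same idea works for your own binary-distributed application (the path is a placeholder):

```Bash
# Run an application that needs GLIBC 2.17 through the pre-made CentOS 7 image
singularity exec /scratch/singularity/centos7.img /path/to/my/binary --help
```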
mkdocs.yml navigation (excerpt):

- Modules: software/modules.md
- JupyterHub: software/JupyterHub.md
- JupyterHub for Teaching: software/JupyterHubForTeaching.md
- Containers:
    - Singularity: software/containers.md
    - Singularity Recipe Hints: software/SingularityRecipeHints.md
    - Singularity Example Definitions: software/SingularityExampleDefinitions.md
- Custom Easy Build Modules: software/CustomEasyBuildEnvironment.md
- Get started with HPC-DA: software/GetStartedWithHPCDA.md
- Mathematics: software/Mathematics.md