[Containerization](https://www.ibm.com/cloud/learn/containerization) means encapsulating or
packaging up software code and all its dependencies so that it runs uniformly and consistently on
any infrastructure. On ZIH systems, [Singularity](https://sylabs.io/) is used as the standard
container solution. Singularity enables users to have full control of their environment. This
means that you don't have to ask the HPC support to install anything for you - you can put it in a
Singularity container and run! As opposed to Docker (the most famous container solution),
Singularity is much more suited to being used in an HPC environment and more efficient in many
cases. Docker containers can easily be used in Singularity. Information about the use of
Singularity on ZIH systems can be found on this page.

In some cases using Singularity requires a Linux machine with root privileges (e.g. using the
partition `ml`), the same architecture and a compatible kernel. For many reasons, users on ZIH
systems cannot be granted root permissions. A solution is a Virtual Machine (VM) on the partition
`ml` which allows users to gain root permissions in an isolated environment. There are two main
options on how to work with Virtual Machines on ZIH systems:

1. [VM tools](virtual_machines_tools.md): Automated tools for using virtual machines;
1. [Manual method](virtual_machines.md): It requires more operations but gives you more
   flexibility and reliability.

## Singularity

If you wish to containerize your workflow and/or applications, you can use Singularity containers
on ZIH systems. As opposed to Docker, this solution is much more suited to being used in an HPC
environment.

!!! note

    It is not possible for users to generate new custom containers on ZIH systems directly,
    because creating a new container requires root privileges.

However, new containers can be created on your local workstation and moved to ZIH systems for
execution. Follow the instructions for [locally installing Singularity](#local-installation) and
[container creation](#container-creation). Moreover, existing Docker containers can easily be
converted, as documented in the section [Import a Docker Container](#import-a-docker-container).

If you are already familiar with Singularity, you might be more interested in our
[Singularity recipes and hints](singularity_recipe_hints.md).

### Local Installation

The local installation of Singularity comprises two steps: Make `go` available and then follow the
instructions from the official documentation to install Singularity.

1. Check if `go` is installed by executing `go version`. If it is **not**:

    ```console
    marie@local$ wget https://storage.googleapis.com/golang/getgo/installer_linux && chmod +x installer_linux && ./installer_linux && source $HOME/.bash_profile
    ```

1. Follow the instructions to
   [install Singularity](https://github.com/sylabs/singularity/blob/master/INSTALL.md#clone-the-repo)
   from the official documentation:

    Clone the repository

    ```console
    marie@local$ mkdir -p ${GOPATH}/src/github.com/sylabs
    marie@local$ cd ${GOPATH}/src/github.com/sylabs
    marie@local$ git clone https://github.com/sylabs/singularity.git
    marie@local$ cd singularity
    ```

    Checkout the version you want (see the
    [GitHub releases page](https://github.com/sylabs/singularity/releases) for available
    releases), e.g.

    ```console
    marie@local$ git checkout v3.2.1
    ```

    Build and install

    ```console
    marie@local$ cd ${GOPATH}/src/github.com/sylabs/singularity
    marie@local$ ./mconfig && cd ./builddir && make
    marie@local$ sudo make install
    ```

### Container Creation

!!! note

    It is not possible for users to generate new custom containers on ZIH systems directly,
    because creating a new container requires root privileges.

There are two possibilities:

1. Create a new container on your local workstation (where you have the necessary privileges), and
   then copy the container file to ZIH systems for execution.
1. Import an existing container from, e.g., Docker.

Both methods are outlined in the following.

#### New Custom Container

You can create a new custom container on your workstation, if you have root rights.

!!! attention "Respect the micro-architectures"

    You cannot create containers for the partition `ml`, as it is based on the Power9
    micro-architecture, which is different from the x86 architecture in common computers/laptops.
    For that you can use the [VM Tools](virtual_machines_tools.md).

Creating a container is done by writing a **definition file** and passing it to

```console
marie@local$ singularity build myContainer.sif myDefinition.def
```

A definition file contains a bootstrap
[header](https://sylabs.io/guides/3.2/user-guide/definition_files.html#header) where you choose
the base image, and sections where you install your software.

The most common approach is to start from an existing Docker image from DockerHub. For example, to
start from an [Ubuntu image](https://hub.docker.com/_/ubuntu), copy the following into a new file
called `ubuntu.def` (or any other filename of your choice):

```bash
Bootstrap: docker
From: ubuntu:trusty

%runscript
    echo "This is what happens when you run the container..."

%post
    apt-get install g++
```

Then you can call

```console
marie@local$ singularity build ubuntu.sif ubuntu.def
```

And it will install Ubuntu with g++ inside your container, according to your definition file.

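You can quickly check that the compiler is indeed available inside the image, assuming the build
above succeeded, by running a command in the container (the `exec` command is explained in more
detail below):

```console
marie@local$ singularity exec ubuntu.sif g++ --version
```
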
More bootstrap options are available. The following example, for instance, bootstraps a basic
CentOS 7 image:

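A minimal sketch of such a definition, assuming the standard `yum` bootstrap keywords (the mirror
URL is a placeholder and may need to be adjusted to a current CentOS 7 mirror):

```bash
Bootstrap: yum
OSVersion: 7
# Placeholder mirror; point this to a reachable CentOS 7 repository
MirrorURL: http://mirror.centos.org/centos-%{OSVERSION}/%{OSVERSION}/os/$basearch/
Include: yum
```
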
More examples of definition files can be found at
https://github.com/singularityware/singularity/tree/master/examples.
 
 
#### Import a Docker Container

!!! hint

    As opposed to bootstrapping a container, importing from Docker does **not require root
    privileges** and therefore works on ZIH systems directly.

You can import an image directly from the Docker repository (Docker Hub):

```console
marie@local$ singularity build my-container.sif docker://ubuntu:latest
```

Creating a Singularity container directly from a local Docker image is possible but not
recommended. The steps are:

```console
# Start a docker registry
$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
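# [...] push the local image to this registry and reference it in example.def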
$ singularity build --nohttps alpine.sif example.def
```

#### Start from a Dockerfile

As Singularity definition files and Dockerfiles are very similar, you can start creating a
definition file from an existing Dockerfile by "translating" each section.

There are tools to automate this. One of them is
[spython](https://github.com/singularityhub/singularity-cli) which can be installed with `pip`
(add `--user` if you don't want to install it system-wide):

```console
marie@local$ pip3 install -U spython
```

With this you can simply issue the following command to convert a Dockerfile in the current folder
into a Singularity definition file:

```console
marie@local$ spython recipe Dockerfile myDefinition.def
```

Please **verify** your generated definition and adjust where required!

There are some notable differences between Singularity definitions and Dockerfiles:

1. Command chains in Dockerfiles (`apt-get update && apt-get install foo`) must be split into
   separate commands (`apt-get update; apt-get install foo`). Otherwise a failing command before
   the ampersand is considered "checked" and does not fail the build.
1. The environment variables section in Singularity is only set on execution of the final image,
   not during the build as with Docker. So `ENV` sections from Docker must be translated to an
   entry in the `%environment` section and **additionally** set in the `%runscript` section if the
   variable is used there (see the sketch after this list).
1. `VOLUME` sections from Docker cannot be represented in Singularity containers. Use the runtime
   option `-B` to bind folders manually.
1. `CMD` and `ENTRYPOINT` from Docker do not have a direct representation in Singularity.
   The closest is to check if any arguments are given in the `%runscript` section and call the
   command from `ENTRYPOINT` with those, if none are given call `ENTRYPOINT` with the arguments of
   `CMD`:

    ```bash
    if [ $# -gt 0 ]; then
        <ENTRYPOINT> "$@"
    else
        <ENTRYPOINT> <CMD>
    fi
    ```

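For the environment variable rule above, a Docker line like `ENV MYVAR=value` (variable name and
value are placeholders) could be translated roughly as follows:

```bash
%environment
    export MYVAR=value

%runscript
    # Set it again here because the runscript uses it
    export MYVAR=value
    echo "MYVAR is $MYVAR"
```
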
### Use the Containers

#### Enter a Shell in Your Container

A read-only shell can be entered as follows:

```console
marie@login$ singularity shell my-container.sif
```

!!! note

    In contrast to, for instance, Docker, this will mount various folders from the host system
    including `$HOME`. This may lead to problems with, e.g., Python that stores local packages in
    the home folder, which may not work inside the container. It also makes reproducibility
    harder. It is therefore recommended to use `--contain/-c` to not bind `$HOME` (and others like
    `/tmp`) automatically and instead set up your binds manually via the `-B` parameter. Example:

```console
marie@login$ singularity shell --contain -B /scratch,/my/folder-on-host:/folder-in-container my-container.sif
```

You can write into those folders by default. If this is not desired, add an `:ro` for read-only to
the bind specification (e.g. `-B /scratch:/scratch:ro`). Note that we already defined bind paths
for `/scratch`, `/projects` and `/sw` in our global `singularity.conf`, so you needn't use the `-B`
parameter for those.

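For example, a shell with a host folder mounted read-only could be started like this (the paths
are placeholders):

```console
marie@login$ singularity shell --contain -B /my/data-on-host:/data:ro my-container.sif
```
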
If you wish to install additional packages, you have to use the `-w` parameter to enter your
container with it being writable. This, again, must be done on a system where you have the
necessary privileges, otherwise you can only edit files that your user has the permissions for.
E.g.:

```console
marie@local$ singularity shell -w my-container.sif
Singularity.my-container.sif> yum install htop
```

The `-w` parameter should only be used to make permanent changes to your container, not for your
productive runs (it can only be used writable by one user at the same time). You should write your
output to the usual ZIH filesystems like `/scratch`.

#### Run a Command Inside the Container

While the `shell` command can be useful for tests and setup, you can also launch your applications
inside the container directly using `exec`:

```console
marie@login$ singularity exec my-container.img /opt/myapplication/bin/run_myapp
```

This can be useful if you wish to create a wrapper script that transparently calls a containerized
application for you. E.g.:

```bash
#!/bin/bash

X=`which singularity 2>/dev/null`
if [ "z$X" = "z" ] ; then
    echo "Singularity not found. Is the module loaded?"
    exit 1
fi

singularity exec /scratch/p_myproject/my-container.sif /opt/myapplication/run_myapp "$@"
```

The better approach is to use `singularity run`, which executes whatever was set in the
`%runscript` section of the definition file with the arguments you pass to it. Example: Build the
following definition file into an image:

```bash
Bootstrap: docker
From: ubuntu:trusty
# [...]
```

Build it via `singularity build my-container.sif example.def`.

Then you can run your application via

```console
singularity run my-container.sif first_arg 2nd_arg
```

Alternatively you can execute the container directly, which is equivalent:

```console
./my-container.sif first_arg 2nd_arg
```

With this you can even masquerade an application with a Singularity container as if it was an
actual program by naming the container just like the binary:

```console
mv my-container.sif myCoolAp
```

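The renamed container can then be called just like a regular program, with the arguments again
being passed to the `%runscript` section:

```console
./myCoolAp first_arg 2nd_arg
```
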
### Use-Cases

One common use-case for containers is that you need an operating system with a newer
[glibc](https://www.gnu.org/software/libc/) version than what is available on ZIH systems. E.g.,
the bullx Linux on ZIH systems used to be based on RHEL 6 with a rather dated glibc version 2.12,
so some binary-distributed applications didn't work on that anymore. You can use one of our
pre-made CentOS 7 container images (`/scratch/singularity/centos7.img`) to circumvent this
problem. Example:

```console
marie@login$ singularity exec /scratch/singularity/centos7.img ldd --version
ldd (GNU libc) 2.17
```