Dockerizing the project is challenging because of two things:
1. the classpath setup done in [start_component.sh](implementation/start_component.sh)
2. the GPIO pins of the Raspberry Pi must be accessible from inside the container for the lights
## Dockerfile walkthrough
The Dockerfile is used to build the container.
It holds instructions on which files to copy and commands to run.
I chose a multi-stage approach.
This means that all dependencies are first resolved and the files compiled in a dedicated build container.
The results are then copied to a second container that serves as the runtime environment.
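Condensed into skeleton form, the pattern looks like this (a sketch assembled from the snippets discussed below; the `COPY --from=build` line and the `target` path are assumptions based on Maven's default output directory):

```Dockerfile
# --- Build stage: resolve dependencies and compile ---
FROM maven:3-jdk-11-slim as build
WORKDIR /root/app
COPY pom.xml .
RUN mvn dependency:resolve
COPY src src
RUN mvn compile

# --- Runtime stage: only the JRE and the build output ---
FROM openjdk:11-jre
WORKDIR /root/app
# Copy the compiled output from the build stage
# (path assumed: Maven writes to target/ by default)
COPY --from=build /root/app/target target
```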
...
```Dockerfile
FROM maven:3-jdk-11-slim as build
```
The first line defines which container image we use for our build stage.
The `as build` names the stage.
Because the project uses Maven as its dependency manager and Java 11, we use the [Apache Maven container](https://hub.docker.com/_/maven) with the tag (a tag is like a version) `3-jdk-11-slim`.
```Dockerfile
WORKDIR /root/app
```
...
When we run a command inside the container it is executed in this context.
```Dockerfile
COPY pom.xml .
```
This copies the `pom.xml` file from our project into the container.
Because we set a working directory, `.` actually means `/root/app`, so the file ends up at `/root/app/pom.xml` inside the container.
```Dockerfile
RUN mvn dependency:resolve
```
This downloads all dependencies defined in the `pom.xml`.
By default they are saved to `$HOME/.m2`; because we are the user `root` inside the container, that is `/root/.m2`.
Copying the `pom.xml` and resolving the dependencies before copying the sources lets Docker cache this layer, so the dependencies are not downloaded again every time a source file changes.
```Dockerfile
...
```
This builds the classpath and saves it to `classpath.txt` in the working directory.
```Dockerfile
COPY src src
```
This copies the `src` folder from our project directory to the current working directory inside the container.
```Dockerfile
RUN mvn compile
```
Here we finally build all our class files.
Our build stage is done.
Next comes the container definition for our final container.
### Runtime stage
```Dockerfile
FROM openjdk:11-jre
```
For this container, we use the [OpenJDK container](https://hub.docker.com/_/openjdk) as the base. Because everything is already compiled, we only need the `jre` variant.
```Dockerfile
WORKDIR /root/app
```
...
This sets the entrypoint of the container, which is executed when the container is started.
The first part of the command exports the `CLASSPATH` that needs to be set to resolve all the used packages inside the Java files.
It would probably be cleaner to use the `ENV` instruction in the Dockerfile, but setting an environment variable based on command output is [currently not possible](https://github.com/moby/moby/issues/29110).
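As a sketch, the workaround looks something like this (the exact command and paths in the real Dockerfile may differ; `target/classes` is an assumption based on Maven's default output directory):

```Dockerfile
# Export the classpath at container start, because ENV cannot
# capture command output at image build time.
ENTRYPOINT ["/bin/sh", "-c", "export CLASSPATH=$(cat classpath.txt):target/classes && ./start_component.sh"]
```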
## Building the container
To build the container change the working directory to the `basyx.lichterkette` subdirectory of this repository and run the following command:
```console
# docker build -t basyx-lichterkette .
```
The tag `basyx-lichterkette` can be anything.
If you change it, be sure to replace the tag name in any following command in this documentation.
It is also possible to use `docker-compose` to build the container.
TODO - docker-compose.yaml
```console
# docker-compose build
```
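Since the `docker-compose.yaml` is still a TODO, here is a minimal sketch of what it could contain (the service name and layout are assumptions):

```yaml
version: "3"
services:
  basyx-lichterkette:
    # Build the image from the Dockerfile in this directory
    build: .
    image: basyx-lichterkette
```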
## Running the container
### Components without GPIO access
Starting the container for any component that does not need access to the GPIO pins of the Raspberry Pi is straightforward:
```console
# docker run -d -e COMPONENT=registry basyx-lichterkette
```
- `-d` detaches the container; if you want to run it in the foreground, omit it
- `-e COMPONENT=registry` sets the environment variable `COMPONENT`. **This determines which component is started, so be sure to set it to the correct component!**
### Components with GPIO access
For the other components, Docker needs access to the system resources of the Raspberry Pi.
The easiest way to do this (and the only one I found that works) is to use the following command:
```console
# docker run -d -e COMPONENT=lights -v "/usr/bin/raspi-gpio:/bin/raspi-gpio" --privileged basyx-lichterkette
```
- `-d` same as above
- `-e COMPONENT=lights` same as above
- `-v "/usr/bin/raspi-gpio:/bin/raspi-gpio"` mounts the `raspi-gpio` executable inside the container. **You have to have `raspi-gpio` installed on the host system!**
- `--privileged` ensures access to host devices (e.g. GPIO)