
Getting Started ‐ Environment Setup


This part of the documentation focuses on explaining how to get started working with specific technologies, applications and programming languages. Particularly, the section focuses on how to set up a docker-compose and Dockerfile for working with applications and setting up environments, such as database management systems, meant to run on the TDC-E.

Any application that is run on the TDC-E needs to be ported to the TDC-E.

The page discusses the following topics:

Setting up Node-RED is an installation guide, while the other topics focus on creating an environment for an application you have developed. MySQL and MQTT are special services that are set up by adapting pre-existing Docker images (see sections 5 and 6).

Creating Environments usually involves the following four steps:

1. Creating an image

2. Building and Pushing the image to a registry

3. Pulling the image to the device

4. Providing a container for the image

Step one involves creating an image, which contains the code that is executed in the container. Whatever your local code needs to run, the Docker image needs as well; the image is a copy of that code together with everything required to run it. After the image is created and built, best practice is to push it to a registry where it is accessible for downloading, or pulling. You can also keep the image local and not upload it anywhere if you prefer other methods of sharing.

For your TDC-E device to be able to run the code, it needs to have the created image in its storage. This is what the third step, pulling the image, accomplishes. There are, of course, other methods of sharing your code with the device. As the final step, a container, which is the environment your Docker image runs in, has to be created. The container is given different parameters, such as which image to run, how to run it, with which privileges to run it, whether to restart it, what storage resources to share with it, etc.
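
As a rough sketch of these four steps with the plain Docker CLI (the registry address and image tag below are placeholders; the exact procedure for the TDC-E is described in Getting Started - Build and Compose):

# 1. Create (build) the image from the Dockerfile in the current directory
docker build -t my-registry.example.com/my-app:1.0.0 .

# 2. Push the built image to a registry
docker push my-registry.example.com/my-app:1.0.0

# 3. Pull the image onto the TDC-E
docker pull my-registry.example.com/my-app:1.0.0

# 4. Provide a container for the image
docker run -d --restart always --name my-app my-registry.example.com/my-app:1.0.0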

After these four steps, your environment should be set up, and the application that you developed should be running smoothly.

Keep in mind that there are two prerequisites to setting up any kind of environment on the TDC-E. Those are the following:

  • The TDC needs a network connection
  • Docker Hub needs to be available

Section one of this article describes setting up Node-RED on your device. Sections 2 to 4 discuss creating a Dockerfile, since the code for different environments varies, while the process of building, pushing and pulling the created image is usually the same. For help with building, pushing and pulling images, refer to Getting Started - Build and Compose. Lastly, the MySQL section will show the docker-compose.yml file, as its parameters need to be set correctly.

1. Setting up Node-RED

1.1. Installation Guide

To install Node-RED on your device, go to SICK WebDash and select Portainer from the four available options. The image below shows how your current workspace should look.

portainer

Copy the lines of code below and proceed to the next step. This stack is a docker-compose file written in YAML notation; it is used to bind an image to a container on the TDC-E. The image, nodered/node-red, is pre-built. For additional information about the file, see the File Breakdown section.

version: "2"
services:
  node-red:
    image: nodered/node-red
    user: root
    environment:
      - TZ=Europe/Zagreb
    ports:
      - "1880:1880"
    volumes:
      - node-red-data:/data
      - /dev:/dev
      - /mnt/data:/mnt/data
      - /sys/class/gpio:/sys/class/gpio
    privileged: true
    restart: always
 
volumes:
  node-red-data:

To add a new stack to Portainer, select the primary environment and pick the Stacks option. Then select Add Stack. Here you can write your stack in the Web Editor, upload it from your computer, or use a Git repository to form the docker-compose file. Paste the copied code into the Web Editor.

Additionally, you can add environment variables and enable access control to restrict the management of this resource to administrators or to a set of users and/or teams. In the example below, the name of the stack is set to node-red and the code has been pasted into the editor.

stack-upload

Once set up, select Deploy the stack to start the upload process. This may take a few minutes. Once the upload is completed, you can click on your stack to view or edit its options. Deploying the stack binds the container to the image specified in it.

To start Node-RED, go to http://192.168.0.100:1880/. Node-RED should now be available on your device.

1.2. File Breakdown

In this section, a detailed breakdown of the most important parts of the docker compose file is given.

volumes:
  node-red-data:

In the volumes section, we are defining a new volume called "node-red-data". This volume has a file system path on the host (TDC-E OS). Usually, this path is located under /mnt/data/docker/volumes.
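
If you want to confirm where the volume data ends up on the device, the Docker CLI on the TDC-E can report it (a sketch; depending on the stack name, the volume may appear with a prefix such as node-red_node-red-data):

# List the volumes known to the Docker engine
docker volume ls

# Show the volume details, including its mount point on the host
docker volume inspect node-red_node-red-data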

This volume is then referenced in the services section and will be explained in more detail in the following paragraphs.

services:
  node-red:
    image: nodered/node-red
    user: root
    environment:
      - TZ=Europe/Zagreb
    ports:
      - "1880:1880"
    volumes:
      - node-red-data:/data
      - /dev:/dev
      - /mnt/data:/mnt/data
      - /sys/class/gpio:/sys/class/gpio
    privileged: true
    restart: always

The services section lists containers to be run once the stack is deployed. In our case, we are deploying one container called node-red.

This container has a base image that will be downloaded from Docker Hub when the stack is deployed. The image is identified by the name nodered/node-red, and the container is run as the root user. The most important sections are ports and volumes.

In the ports section, we define that the internal (container) port 1880 is going to be accessible on the host (TDC-E) with the same port number.

Under volumes, we are linking the host file system (the TDC-E) with the guest file system (the node-red container that we will create). The first line, node-red-data, references the volume we created in the volumes section described above. It is mapped to the /data location inside the container.

Other lines, such as - /dev:/dev, serve the same purpose: they link /dev on the host to /dev within the container, so everything available under /dev on the host is also available to the node-red container for internal use. The /dev mapping is a good example because that is where the TDC-E exposes its serial devices.
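
Once the stack is deployed, the mappings can be checked from the TDC-E's terminal. A small sketch (the container name depends on the stack name, and curl is assumed to be available):

# Confirm the container is running and port 1880 is published on the host
docker ps --filter "name=node-red"

# Check that the Node-RED editor answers on the mapped port
curl -I http://192.168.0.100:1880/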

2. Setting up Python Application Environment

To set up a Python application environment so that the created Python program can be safely deployed to the TDC-E, a Dockerfile and a .yml file need to be created before the image can be pulled to the TDC.

2.1. Creating Environment

To create an image from your developed application, we first need to create a Dockerfile. The Dockerfile is used to build an image that the TDC-E will be able to read, run and process once a container is created for it. For now, let us focus on the image itself. Create a file simply named Dockerfile, without an extension. Make sure the Dockerfile is in the same folder as the application you want to turn into an image, so that no problems arise while creating the image.

For this example, the AIN code found here will be used.

Paste the following code into the file:

FROM arm32v7/debian AS build

# Run installs
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    apt-utils \
    nano \
    python3 \
    python3-pip \
    iproute2 \
    python3-dev \
    default-libmysqlclient-dev \
    libssl-dev \
    pkg-config \
    build-essential

# pip install needed libraries with cert
RUN pip3 install requests --break-system-packages

# Create a directory for the app
WORKDIR /app

# Copy the necessary Python files
COPY readDIO.py ./
COPY toggleState.py ./
COPY direct.py ./

# Set the entry point command
CMD ["python3", "readDIO.py"]

NOTE: If your organization uses certificates to access services like pip, provide that information inside the Dockerfile. An example is listed below.

# internal certificate set
COPY *.crt /etc/ssl/certs/
RUN update-ca-certificates

2.2. Dockerfile Breakdown

The Dockerfile for the application is the following:

FROM arm32v7/debian AS build

# Run installs
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    apt-utils \
    nano \
    python3 \
    python3-pip \
    iproute2 \
    python3-dev \
    default-libmysqlclient-dev \
    libssl-dev \
    pkg-config \
    build-essential

# pip install needed libraries with cert
RUN pip3 install requests --break-system-packages

# Create a directory for the app
WORKDIR /app

# Copy the necessary Python files
COPY readDIO.py ./
COPY toggleState.py ./
COPY direct.py ./

# Set the entry point command
CMD ["python3", "readDIO.py"]

The image takes arm32v7/debian as its base. It then proceeds to install the requirements for the application. Here, a somewhat larger environment is created with additional tools, as Debian-based Python images are easily extensible and provide a convenient working environment if something goes awry. After these installations, the needed pip libraries are installed (in this case, the only needed library is requests).

A working directory named /app is created. Afterwards, the Python scripts readDIO.py, toggleState.py and direct.py are copied into this folder so they can be used while the program runs. Lastly, the Dockerfile specifies that, once the image is bound to a container, it will run readDIO.py with python3.
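
Because the image is based on arm32v7/debian, it has to be built for the ARMv7 architecture. If you build on an x86 development machine, one possible approach (assuming Docker Buildx with QEMU emulation is available; the tag readdio-app:1.0.0 is a placeholder) is shown below; otherwise follow Getting Started - Build and Compose.

# Cross-build the image for the TDC-E's ARMv7 architecture and load it locally
docker buildx build --platform linux/arm/v7 -t readdio-app:1.0.0 --load .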

After the Dockerfile is created, the image needs to be built, pushed and pulled to the TDC-E, and a container needs to be created for the image. For help with the process, refer to Getting Started - Build and Compose.

3. Setting up C# Application Environment

To set up a working environment to run C# programs in, a Dockerfile and a .yml file need to be created before the image can be pulled to the TDC.

3.1. Creating Environment

To create an image from your developed application, we first need to create a Dockerfile. The Dockerfile is used to build an image that the TDC-E will be able to read, run and process once a container is created for it. For now, let us focus on the image itself. Create a file simply named Dockerfile, without an extension. Make sure the Dockerfile is in the same folder as the application you want to turn into an image, so that no problems arise while creating the image.

For this example, the DIO code found here will be used.

Paste the following code into the Dockerfile:

FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /app

# Copy everything
COPY . ./

# Publish the application
RUN dotnet publish "ReadAIN.csproj" -c Release -o /app

# Set the working directory for the final image
WORKDIR /app

# Set the entry point
ENTRYPOINT ["dotnet", "ReadAIN.dll"]

After the Dockerfile is created, the image needs to be built, pushed and pulled to the TDC-E, and a container needs to be created for the image. For help with the process, refer to Getting Started - Build and Compose.

3.2. Dockerfile Breakdown

The Dockerfile for the application is the following:

FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /app

# Copy everything
COPY . ./

# Publish the application
RUN dotnet publish "ReadAIN.csproj" -c Release -o /app

# Set the working directory for the final image
WORKDIR /app

# Set the entry point
ENTRYPOINT ["dotnet", "ReadAIN.dll"]

The file takes mcr.microsoft.com/dotnet/sdk:7.0 as the build image and sets the working directory to /app. It then copies all files into that folder before publishing the application with the dotnet publish command. The working directory is once again set to /app, and dotnet is used to run the application ReadAIN.dll.
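
If you prefer to transfer the image without a registry, it can be exported to a .tar archive and uploaded to the TDC-E through Portainer (a sketch; readain-app:1.0.0 is a placeholder tag):

# Build the image and export it to a tar archive
docker build -t readain-app:1.0.0 .
docker save -o readain-app.tar readain-app:1.0.0

# On the TDC-E, the archive can be imported again with
docker load -i readain-app.tar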

4. Setting up Go Application Environment

To set up a Go application environment so that the created Go program can be safely deployed to the TDC-E, a Dockerfile and a .yml file need to be created before the image can be pulled to the TDC.

4.1. Creating Environment

To create an image from your developed application, we first need to create a Dockerfile. The Dockerfile is used to build an image that the TDC-E will be able to read, run and process once a container is created for it. For now, let us focus on the image itself. Create a file simply named Dockerfile, without an extension. Make sure the Dockerfile is in the same folder as the application you want to turn into an image, so that no problems arise while creating the image.

For this example, the DIO code found here will be used.

Paste the following code into the file:

FROM meinside/alpine-golang:1.16.6-armv7

WORKDIR /app

COPY . .

RUN go mod download

COPY *.go ./

RUN CGO_ENABLED=0 GOOS=linux GOARCH=arm GOARM=7 go build -o diogo main.go

ENTRYPOINT ["/app/diogo"]

After the Dockerfile is created, the image needs to be built, pushed and pulled to the TDC-E, and a container needs to be created for the image. For help with the process, refer to Getting Started - Build and Compose.

4.2. Dockerfile Breakdown

The Dockerfile for the application is the following:

FROM meinside/alpine-golang:1.16.6-armv7

WORKDIR /app

COPY . .

RUN go mod download

COPY *.go ./

RUN CGO_ENABLED=0 GOOS=linux GOARCH=arm GOARM=7 go build -o diogo main.go

ENTRYPOINT ["/app/diogo"]

For its build image, the file uses meinside/alpine-golang:1.16.6-armv7, an image by meinside that can be found at the following link. The work directory is /app, and all files are copied into that directory; this is where you will find the application if you enter the Docker container. The Dockerfile then specifies that all Go modules should be downloaded so the program can run properly. Everything with the .go extension is copied into the folder, and the application is then built, specifying the correct environment for the build. A diogo binary is created, which is the program that will run in the container. This is done with the following statement:

RUN CGO_ENABLED=0 GOOS=linux GOARCH=arm GOARM=7 go build -o diogo main.go

The Go programming language can build applications simply with the go build command, which is why it is good practice to build the binary and then send it to the TDC-E. However, since in this case the application was developed on Windows, while the TDC-E runs its own TDC OS on the ARMv7 architecture, a plain build would produce a Windows executable, which is unsuitable for a Linux environment. To fix this, the Go operating system (GOOS) is set to linux, the Go architecture (GOARCH) is set to arm, and the Go ARM version (GOARM) is set to 7.
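
The same cross-compilation can be reproduced outside Docker to inspect the result (a sketch; assumes a local Go toolchain and the file utility in a Linux or WSL shell):

# Cross-compile for the TDC-E's ARMv7 Linux environment
CGO_ENABLED=0 GOOS=linux GOARCH=arm GOARM=7 go build -o diogo main.go

# Confirm the output is a 32-bit ARM ELF binary rather than a Windows executable
file diogo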

In the end, the entry point of the program is specified, naming the application name. This ensures that the container will run the file specified in the entry point once the image is assigned to a container.

5. Setting up MySQL Environment

5.1. Creating Environment

A MySQL environment is created for the purpose of storing and retrieving data to and from SQL tables. To set up the database environment, an existing MySQL Docker image is used and then modified so it can serve specific applications that require added functionality. The original image is created by x11tete11x and is located here.

5.1.1. Creating an Image

For help with building an image, refer to this link.

The image for the MySQL environment has already been created by the original author, though it needs a few adjustments to ensure compatibility with multiple programs. We also want to import the required databases and tables without having to create them manually inside the TDC-E's MySQL environment. Thus, we build a new image from a Dockerfile that is based on the pre-existing one.

For help with the process, a mysql directory with needed files has been created. Link to Application Files

Firstly, we create a new init.sh file. In this file, we specify the databases and data tables we want created as we deploy our image and container to the TDC-E, removing the need to add them manually by traversing the TDC OS later. The code below shows the structure of the databases and tables used in some of the examples described in this documentation. If another database or table is needed, it can be specified here before repeating the four steps to creating an application environment.

#!/bin/bash

sleep 30;

mysql -e "CREATE DATABASE IF NOT EXISTS gpslocation;";
mysql -e "CREATE DATABASE IF NOT EXISTS diobase;"
mysql -e "CREATE DATABASE IF NOT EXISTS analog_base;"
mysql -e "CREATE TABLE IF NOT EXISTS gpslocation.gpsdata (id integer PRIMARY KEY AUTO_INCREMENT, latitude double, longitude double, time datetime, altitude double, speedKnots double, speedMph double, speedKmh double, course integer, fix integer, numberOfSatellites integer, gpsFixAvailable boolean, hdop double);"
mysql -e "CREATE TABLE IF NOT EXISTS diobase.dios (id integer PRIMARY KEY AUTO_INCREMENT, duration varchar(20), whattime timestamp);"
mysql -e "CREATE TABLE IF NOT EXISTS analog_base.analogi (id integer PRIMARY KEY AUTO_INCREMENT, whattime timestamp, value float);"

Now, we create a file named Dockerfile. In it, paste the following content:

FROM x11tete11x/arm32v7-mysql

COPY init.sh .

RUN sed -i '$ d' /entrypoint.sh
RUN echo './init.sh &' >> /entrypoint.sh
RUN echo 'exec "$@"' >> /entrypoint.sh

After the Dockerfile is created, the image needs to be built, pushed and pulled to the TDC-E. For help with the process, refer to Getting Started - Build and Compose.

5.1.2. Providing a Container for the Image

To assign the created image to a container, a docker-compose.yml file is created. The structure of the file is as follows:

version: "2"

services:
  mysql-db:
    image: registry-name:1.0.0
    container_name: mysqldb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: TDC_arch2023
      MYSQL_USER: root
      MYSQL_PASSWORD: TDC_arch2023
      TZ: Europe/Zagreb
    ports:
      - "3306:3306"
    command: ['mysqld', '--character-set-server=utf8mb4', '--collation-server=utf8mb4_unicode_ci','--default-time-zone=+04:00']

For help with starting the container, refer to this section.

Alternatively, the container can be created and started with the following command:

docker compose up [OPTIONS] [SERVICE...]

Run this command in the TDC-E's terminal in the same folder as your docker-compose.yml file. The container and, by extension, your application, should now start running.
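
For example, if the docker-compose.yml was copied to /mnt/data/mysql on the device (a hypothetical location), the container can be started in the background and its logs followed like this:

cd /mnt/data/mysql
docker compose up -d
docker compose logs -f mysql-db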

5.2. File Breakdown

In this section, a detailed breakdown of the Dockerfile and docker-compose.yml for MySQL is provided.

5.2.1. Dockerfile

The Dockerfile for the image is the following:

FROM x11tete11x/arm32v7-mysql

COPY init.sh .

RUN sed -i '$ d' /entrypoint.sh
RUN echo './init.sh &' >> /entrypoint.sh
RUN echo 'exec "$@"' >> /entrypoint.sh

The Dockerfile is created by importing the aforementioned image by x11tete11x and copying an init.sh file from the same directory as the Dockerfile. The sed command removes the last line of the base image's entrypoint.sh (the exec "$@" line); the init.sh script is then appended so it runs in the background, and exec "$@" is re-added at the end of the file so that the container does not terminate before the databases are added.
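
Assuming the base image's entrypoint.sh originally ends with exec "$@", the tail of the modified /entrypoint.sh would look roughly like this after the build:

# ...original initialisation logic of the base image...
./init.sh &    # create the databases and tables in the background
exec "$@"      # hand control over to mysqld so the container keeps running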

5.2.2. Docker-compose.yml

The docker-compose.yml in question is the following:

version: "2"

services:
  mysql-db:
    image: registry-name:1.0.0
    container_name: mysqldb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: TDC_arch2023
      MYSQL_USER: root
      MYSQL_PASSWORD: TDC_arch2023
      TZ: Europe/Zagreb
    ports:
      - "3306:3306"
    command: ['mysqld', '--character-set-server=utf8mb4', '--collation-server=utf8mb4_unicode_ci','--default-time-zone=+04:00']

Firstly, the docker-compose.yml file specifies the version of the file, which is 2. It's important to note that the Portainer service does not, at the moment, allow .yml versions beyond 2, which is why the latest version has not been specified.

The service that the docker-compose.yml sets up is mysql-db. It uses the previously created image, sets the container name to mysqldb and always restarts. The assigned port is 3306, which is the standard MySql port. The image is pulled from the registry where it has been pushed to. It further sets up the environment by adding a user (root) and password (TDC_arch2023). Since the image operates on a different time zone, a new time zone for Europe/Zagreb is set as that is the time zone that was used when the image was developed.

Finally, it runs the command command: ['mysqld', '--character-set-server=utf8mb4', '--collation-server=utf8mb4_unicode_ci','--default-time-zone=+04:00']. The first entry, mysqld, starts the MySQL server. The second option sets the character set to utf8mb4 and the third sets the collation to utf8mb4_unicode_ci. Lastly, if the time zone needs to be changed, a parameter specifying the offset can be added (in this case, +04:00).
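
To check that these options took effect, the running container can be queried from the TDC-E's terminal (a sketch using the credentials and container name from the compose file above):

docker exec mysqldb mysql -uroot -pTDC_arch2023 \
  -e "SHOW VARIABLES LIKE 'character_set_server'; SHOW VARIABLES LIKE 'collation_server';"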

5.3. Accessing Databases

After starting the container, the database is running on port 3306 and can communicate with programs and services. Databases can be accessed using the following parameters (where schema is optional):

  • Hostname = 192.168.0.100
  • Port = 3306
  • Username = root
  • Password = TDC_arch2023
  • [Schema] = [gpslocation | diobase | analog_base]

The container will be run on each restart of the device. For working with the database inside the container, the following commands are useful:

  • mysql -h localhost -u [user] -p - accessing mysql; will be prompted to input password
  • mysqldump -u [user name] -p [options] [database_name] [tablename] > [dumpfile.sql] - creating a mysql dump file for exporting the database
  • docker cp [container]:[source-path] [destination-path] - copying a file from one path to another; useful for moving files between the container and the host
  • mysql -u username -p database_name < file.sql - importing mysql file
  • show databases; - list all available databases inside mysql
  • use [database]; - use a specific database inside mysql
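
As a combined example, the commands below export the diobase database from the container to the TDC-E file system and import it into another MySQL instance (a sketch; the file paths are placeholders):

# Dump the database to a file inside the mysqldb container
docker exec mysqldb sh -c 'mysqldump -u root -pTDC_arch2023 diobase > /tmp/diobase.sql'

# Copy the dump from the container to the TDC-E file system
docker cp mysqldb:/tmp/diobase.sql /mnt/data/diobase.sql

# Import the dump into another MySQL instance
mysql -u root -p diobase < /mnt/data/diobase.sql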

6. Setting up Mosquitto Environment

6.1. Creating Environment

MQTT is a lightweight messaging protocol often used for sensors and mobile devices; it copes well with unreliable, high-latency networks thanks to its quality-of-service options, and an MQTT broker routes the messages between clients. In this section, we demonstrate setting up the Mosquitto MQTT broker.

For the core setup, only the correct Mosquitto image is needed. The official Eclipse Mosquitto image is published for amd64, arm32v6, arm64v8, i386, ppc64le and s390x, while the TDC-E runs on ARM32v7, so we need another image to match the required architecture. In this example, we use the jakezp/arm32v7-mqtt image found here. This image by jakezp matches the required architecture, and if no other setup is needed, you can simply upload it to the TDC-E and start using it.

If no other parameter setting is needed, go to Providing a Container for the Image as no further action is needed after the image is uploaded to the TDC-E via Portainer. But if parameters like the username, password, inflight bytes, keep alive duration, QoS or message size limit are required, the image will first need to be edited. To that end, a new image is created.

NOTE: In the application files, you will find a mosquitto.conf file. This file is the standard configuration file for the service and contains all possible configurations of the broker. To set a parameter, simply uncomment it and set a value of the appropriate format.

6.1.1. Creating an Image

For help with building an image, refer to this link. The build.bat file that is located in the docker folder of the application files also serves as a script that builds the image, then creates a .tar file that can be uploaded to the Portainer.

The author's image is already operational and can run your MQTT service, but if we want to set specific parameters for the Mosquitto broker, we need to change the mosquitto.conf file that comes with the service. In this example, we set a username and password for the MQTT service. As the service requires multiple files, go to the link below to obtain the needed files.

Link to Application Files

To set the username and password, and to disable anonymous connection to the broker, the following two lines in the mosquitto.conf, present in the linked files, were uncommented:

allow_anonymous false
password_file /mosquitto/config/password_file

The first line disallows anonymous connections, while the second line provides the path to the password file in which a username and password will be stored; that file is located at /mosquitto/config/password_file. We now want this configuration to end up in the broker that will run on the TDC-E as a service. To do this, we create a new Dockerfile, which is used to build a new Docker image.

The Dockerfile that sets the username and password is shown below.

FROM jakezp/arm32v7-mqtt

COPY mosquitto.conf /mosquitto/config/mosquitto.conf

# Create the password file and hash the password
RUN echo "user1:password" > /tmp/password_file && \
    mosquitto_passwd -U /tmp/password_file && \
    cp /tmp/password_file /mosquitto/config/password_file && \
    cp /tmp/password_file /mosquitto/config/local_password_file && \
    rm /tmp/password_file

# Copy the password file back to the local filesystem
VOLUME /mosquitto/config

CMD ["mosquitto", "-c", "/mosquitto/config/mosquitto.conf"]

6.1.2. Providing a Container for the Image

To assign the image to a container, a docker-compose.yml file is created. The structure of the file is as follows:

version: '2'

services:
  mosquitto:
    image: mosquitto-confed
    restart: always
    container_name: mosquitto
    ports:
      - "1883:1883"
      - "9001:9001"

For help with starting the container, refer to this section.

6.2. File Breakdown

In this section, a detailed breakdown of the Dockerfile and docker-compose.yml will be provided.

6.2.1. Dockerfile

The Dockerfile for the image is the following:

FROM jakezp/arm32v7-mqtt

COPY mosquitto.conf /mosquitto/config/mosquitto.conf

# Create the password file and hash the password
RUN echo "user1:password" > /tmp/password_file && \
    mosquitto_passwd -U /tmp/password_file && \
    cp /tmp/password_file /mosquitto/config/password_file && \
    cp /tmp/password_file /mosquitto/config/local_password_file && \
    rm /tmp/password_file

# Copy the password file back to the local filesystem
VOLUME /mosquitto/config

CMD ["mosquitto", "-c", "/mosquitto/config/mosquitto.conf"]

This file uses the base image we specified earlier, jakezp/arm32v7-mqtt, and copies the modified mosquitto.conf into the image. The echo command then writes the pair user1:password (username user1, password password) to /tmp/password_file, a temporary file used to store this information. Next, mosquitto_passwd -U is run on that file, which replaces the plaintext password with a hashed, unreadable one. The hashed password file is copied to /mosquitto/config/password_file, the location referenced in mosquitto.conf, and to /mosquitto/config/local_password_file as a backup inside the container, before the temporary file is removed.

The VOLUME instruction then exposes /mosquitto/config as a Docker volume, so the configuration and password files remain accessible outside the container. Finally, the mosquitto command is run with the -c flag, which starts the broker with the given configuration file; the path of the configuration file is specified as the last parameter of the CMD instruction.

6.2.2. Docker-compose.yml

The docker-compose.yml in question is the following:

version: '2'

services:
  mosquitto:
    image: mosquitto-confed
    restart: always
    container_name: mosquitto
    ports:
      - "1883:1883"
      - "9001:9001"

Firstly, the docker-compose.yml file specifies the version of the file, which is 2. It's important to note that the Portainer service does not, at the moment, allow .yml versions beyond 2, which is why the latest version has not been specified.

The service that will be installed is called mosquitto. The image it is created from is mosquitto-confed, which is produced by the Dockerfile specified above. The service always restarts, the name of the container is set to mosquitto, and the ports exposed are 1883:1883 and 9001:9001. Two ports are exposed because 1883 is used for MQTT, while 9001 is used for WebSockets.
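
Once the stack is running, the broker can be tested from any machine with the Mosquitto clients installed, using the credentials configured in the Dockerfile above (a sketch; adjust the host address and credentials to your setup):

# Subscribe to a test topic on the TDC-E's broker
mosquitto_sub -h 192.168.0.100 -p 1883 -t test/topic -u user1 -P password

# In a second terminal, publish a message to the same topic
mosquitto_pub -h 192.168.0.100 -p 1883 -t test/topic -u user1 -P password -m "hello TDC-E"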