
Getting Started ‐ Build and Compose


In this section, activities regarding image and container creation for running an application in the Docker environment will be described. On this page, there are two sections:

  • Building an Image
  • Creating a Container

The Building an Image section details the commands used to build your Docker image, while the Creating a Container section covers creating a container in which the image will run.

1. Image Building

To run any application or environment with Docker, an image of your working environment is required, so that the resulting container has all the tools it needs to run. Now that you've written a Dockerfile, the next logical step is to build it, producing a self-contained image of your application that will run on Docker.
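As an illustration, a minimal Dockerfile for a simple Python application might look like the sketch below (the application file app.py is a hypothetical name, not part of this guide):

# Minimal example Dockerfile, assuming a Python application in app.py
FROM python:3.11-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]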

1.1. Building the Image

We've created an application and written a Dockerfile. The image should now be built from the Dockerfile and either pushed to a registry or saved locally, so that it can be pulled from another device. To build the image, make sure you are positioned inside the directory containing your Dockerfile, then open a terminal and run the following line:

docker build -t registry-name:1.0.0 .

This command builds a Docker image from the directory you are positioned in, following the instructions in the Dockerfile. In this example, the image is also tagged registry-name:1.0.0, which names the repository registry-name, with the tag 1.0.0. Build time depends on the size of the base image, the services that need to be installed, and any other required actions. If an error occurs, it is most likely due to the environment setup in the Dockerfile or a service that can't be reached.
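If the build completes without errors, the new image should appear in your local image list, which you can check with:

docker images registry-name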

After the image is built, one can either push it to a registry or save it as a .tar archive (tar is a UNIX utility that collects many files into a single archive file).

1.2. Pushing the Image

Pushing is used when you want the image stored in a registry, from which it can easily be accessed by anyone who is authenticated and authorized. To push the image to a registry, use the following command:

docker push registry-name:1.0.0

This will push the registry-name:1.0.0 image to your registry, which is identified by the image name. Make sure the image name used in docker build and docker push is the same.
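Note that most registries require authentication before accepting a push. As a sketch, assuming a private registry at the hypothetical address registry.example.com, you would log in, tag the image with the registry host, and then push:

docker login registry.example.com
docker tag registry-name:1.0.0 registry.example.com/registry-name:1.0.0
docker push registry.example.com/registry-name:1.0.0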

1.3. Creating a .tar file

When not pushing to a registry, one can instead create a .tar archive which can be uploaded to Portainer to get the image onto your TDC-E device. To do so, type the following command into the terminal:

docker save -o myapp.tar registry-name:1.0.0

This will save the image registry-name:1.0.0 as myapp.tar. The file is now ready to be transferred.
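The archive can be copied to the TDC-E with any file-transfer tool; as a sketch, assuming SSH access and the hypothetical hostname tdc-e:

scp myapp.tar user@tdc-e:/tmp/

As an alternative to the Portainer upload described below, the image can also be restored from the archive directly on the device with:

docker load -i /tmp/myapp.tar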

1.4. Pulling the Image

To provide an image to the Docker container, the image needs to be stored locally on the TDC-E. Two ways of completing this step will be discussed: pulling the image from the registry, and uploading a .tar file to Portainer.

1.4.1. Pulling Image from Terminal

An easy way to pull the image is to log into the TDC-E and run the following line:

docker pull registry-name:1.0.0

This line will pull the Docker image from the specified registry onto your TDC-E, meaning the image is now stored locally on your device and ready for execution. Wait until the image is pulled, then proceed to creating a container for it.
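As with the build step, you can confirm that the image is now stored locally by listing the images on the device:

docker image ls registry-name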

1.4.2. Uploading file to Portainer

If you created a .tar archive, you can upload it directly to the TDC-E via Portainer's Images sidebar menu. The environment you should be looking at is shown below:

[Screenshot: the Images view in Portainer]

Select the Import option and choose the .tar file created with the docker save command.

[Screenshot: importing the .tar file in Portainer]

Wait until the file is uploaded. The image should now appear in the list of images on the TDC-E. Proceed to creating a container for it.

2. Creating a Container

For the image to be able to run in the Docker environment, a container that the image will be bound to needs to be created. In the next steps, a docker-compose.yml file is created and then described in detail.

2.1. Creating a docker-compose.yml File

To assign the created image to a container, a docker-compose.yml file is created. The structure of the file is as follows:

version: "2"
services:
  dio-py:
    image: registry-name:1.0.0
    volumes:
      - /sys/class/gpio/:/sys/class/gpio/
    network_mode: "bridge"
    privileged: true
    restart: always

There are two ways to create and start the container. The first is via Portainer, which follows the same format as setting up the Node-RED environment described here, but using the docker-compose.yml file that is written in this section.

The second way of creating and starting the container is by using the following line:

docker compose up [OPTIONS] [SERVICE...]

Run this command in the TDC-E's terminal, in the same folder as your docker-compose.yml file. The container, and by extension your application, should now start running.
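In practice you will often want the container to keep running in the background; docker compose supports a detached mode, and the service's output can be inspected afterwards. For the dio-py service defined above:

docker compose up -d
docker compose logs -f dio-py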

2.2. File Breakdown

The docker-compose.yml in question is the following:

version: "2"
services:
  dio-py:
    image: registry-name:1.0.0
    volumes:
      - /sys/class/gpio/:/sys/class/gpio/
    network_mode: "bridge"
    privileged: true
    restart: always

Firstly, the docker-compose.yml file specifies the Compose file format version, which is 2. It's important to note that Portainer does not, at the moment, support Compose file versions beyond 2, which is why a newer version has not been specified.

The service to be run is dio-py. For it to run, an image needs to be specified; it uses the previously created registry-name:1.0.0 image. A volume is then attached to the container, as GPIO values need to be read: /sys/class/gpio/ on the host machine is mapped to /sys/class/gpio/ in the container. The network_mode is set to bridge, which connects the container to the host through Docker's default bridge network, isolated from the outside network. Finally, the container runs privileged and is set to always restart, so the application works continuously.
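To verify that the volume mapping works as intended, you can open a shell command in the running container and check that the GPIO path is visible (the container name shown by docker ps may differ from the service name):

docker ps
docker exec -it <container-name> ls /sys/class/gpio/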