This directory contains the Dockerfiles used to generate both the `base` image and the `runtime` image. The `base` image is also pre-built on my dockerhub as `tano297/bonnetal:base`, because building PyTorch from source takes a while and it makes no sense for everybody to have to do it. I will now explain how to use the `runtime` image to use bonnetal. These images are also a good place to check for missing dependencies in your system that I may have forgotten to mention. If something works in the docker container but doesn't work on your system, please try to solve your system issue before opening a github issue.

Once you have a working image, you should be able to use the entirety of bonnetal inside of it, both the training part in Python and the inference part in C++. The `base` image has PyTorch compiled from scratch, ROS Melodic, TensorRT 5.1.2, and CUDA 10.1, with Ubuntu 18.04 as a base. CUDA 10.1 requires the 418 driver, so if you don't have this driver and don't want to install it, you can build the `base` image from its own Dockerfile.
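If you just want the pre-built `base` image, you can fetch it directly from dockerhub (this assumes docker is already installed, as explained further below):

```sh
$ docker pull tano297/bonnetal:base
$ docker images | grep bonnetal   # verify the image is now present locally
```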
- If your user ID is not 1000, you may have problems between your host account and the `developer` account that I create inside the container, especially when sharing files between the host computer and docker. You can check this with `echo $UID`, and change `runtime/Dockerfile` accordingly in every place the number 1000 appears.
- The `-v /home/$USER:/home/$USER` part of the `docker run` commands allows you to access your home directory inside the container at the same path you have it on your own system. This location, `/home/$USER`, is parallel to the docker user's `/home/developer`. If your user name is `developer`, you will need to modify this in order to get the `bonnetal` folder structure in the home directory. Also, from a security standpoint, making your entire home accessible inside the container may not be optimal, so feel free to change this accordingly. If you have permission problems with `/home/$USER` inside the image, refer to the first point (check the user and group IDs of the permissions with `ls -al`; they should say `developer` if you did it right).
- Docker images are quite large in storage, so you may want to change the default storage location to a larger disk, or even to an external one (not recommended). This is a nice tutorial: link
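The user-ID caveat above can be sketched as follows; the `sed` one-liner and the Dockerfile path are my own suggestion (run it from the repo's root directory, and double-check the result before building):

```sh
# Print the IDs your host account uses; the container's "developer" user is created as 1000:1000
echo "host UID:GID = $(id -u):$(id -g)"

# If they differ from 1000, patch the hard-coded IDs in the runtime Dockerfile before building
if [ -f docker/runtime/Dockerfile ]; then
  sed -i "s/\b1000\b/$(id -u)/g" docker/runtime/Dockerfile
fi
```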
From our experience with Bonnet, as useful as Docker is, it can also be quite nitpicky, so if you find other caveats, submit them as a PR so I can add them here.
To use these docker images with the GPU you will need to download and install `nvidia-docker` from here. First you will need docker. Then, if you have Ubuntu 14.04/16.04/18.04 or Debian Jessie/Stretch, you can use the provided `install-nvidia-docker.sh`:
```sh
$ ./install-nvidia-docker.sh
```
This procedure downloads the `base` image from my dockerhub and then builds the `runtime` image on top of it.
```sh
# THIS SHOULD BE DONE STANDING IN THE REPO's ROOT DIRECTORY
$ nvidia-docker build -t tano297/bonnetal:runtime -f docker/runtime/Dockerfile .
$ nvidia-docker run -ti --rm -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v $HOME/.Xauthority:/home/developer/.Xauthority -v /home/$USER:/home/$USER --net=host --pid=host --ipc=host tano297/bonnetal:runtime /bin/bash
$ nvidia-smi # check that everything went well
```
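The `docker run` line above forwards your X11 socket and `.Xauthority` so GUI tools can draw on your screen. If applications inside the container still complain that they cannot open the display, you may additionally need to let local containers talk to your X server; `xhost` is the standard X11 utility for this, but note that it loosens X security on your machine:

```sh
$ xhost +local:root   # allow local root (the container's processes) to use the X server
$ xhost -local:root   # revoke the permission again when you are done
```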
This procedure builds both the `base` and `runtime` images. It can be useful if any of the dependencies are not compatible with your system (such as CUDA 10.1), but IT TAKES A WHILE to build.
```sh
# THIS SHOULD BE DONE STANDING IN THE REPO's ROOT DIRECTORY
$ nvidia-docker build -t tano297/bonnetal:base -f docker/base/Dockerfile .
$ nvidia-docker build -t tano297/bonnetal:runtime -f docker/runtime/Dockerfile .
$ nvidia-docker run -ti --rm -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v $HOME/.Xauthority:/home/developer/.Xauthority -v /home/$USER:/home/$USER --net=host --pid=host --ipc=host tano297/bonnetal:runtime /bin/bash
$ nvidia-smi # check that everything went well
```
If you want to try the framework out on a computer with no GPU, you can do basically all the same using good ol' docker, which you can install with:
```sh
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
```
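Note that `docker-ce` lives in Docker's own apt repository, not in Ubuntu's default one, so on a fresh system you may first need to add that repository. A sketch following Docker's official install instructions for Ubuntu (adapt the distribution name if you are on Debian):

```sh
# Add Docker's GPG key and apt repository, then refresh the package index
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt-get update
```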
This procedure downloads the `base` image from my dockerhub and then builds the `runtime` image on top of it.
```sh
# THIS SHOULD BE DONE STANDING IN THE REPO's ROOT DIRECTORY
$ docker build -t tano297/bonnetal:runtime -f docker/runtime/Dockerfile .
$ docker run -ti --rm -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v $HOME/.Xauthority:/home/developer/.Xauthority -v /home/$USER:/home/$USER --net=host --pid=host --ipc=host tano297/bonnetal:runtime /bin/bash
$ python3 -c "import torch; print(torch.__version__)" # check that everything went well (nvidia-smi will not work without a GPU)
```
This procedure builds both the `base` and `runtime` images. It can be useful if any of the dependencies are not compatible with your system (such as CUDA 10.1), but IT TAKES A WHILE to build.
```sh
# THIS SHOULD BE DONE STANDING IN THE REPO's ROOT DIRECTORY
$ docker build -t tano297/bonnetal:base -f docker/base/Dockerfile .
$ docker build -t tano297/bonnetal:runtime -f docker/runtime/Dockerfile .
$ docker run -ti --rm -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v $HOME/.Xauthority:/home/developer/.Xauthority -v /home/$USER:/home/$USER --net=host --pid=host --ipc=host tano297/bonnetal:runtime /bin/bash
$ python3 -c "import torch; print(torch.__version__)" # check that everything went well (nvidia-smi will not work without a GPU)
```