The LibTorch inference example for yolort; both GPU and CPU are supported.

Dependencies:
- Ubuntu 18.04
- LibTorch 1.8.0 / 1.9.0
- TorchVision 0.9.0 / 0.10.0
- OpenCV 3.4+
- CUDA 10.2 [Optional]
We don't impose strict restrictions on the versions of CUDA or Ubuntu.
- First, set up the environment variables.

  ```bash
  export TORCH_PATH=$(dirname $(python -c "import torch; print(torch.__file__)"))
  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$TORCH_PATH/lib/
  ```
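  As a sanity check, the same paths can be computed from Python. This is a small illustrative sketch, not part of the build:

  ```python
  # Reproduces what the export line above computes: TORCH_PATH is the directory
  # of the installed torch package, and its lib/ subdirectory (the one appended
  # to LD_LIBRARY_PATH) holds LibTorch's shared libraries.
  import os
  import torch

  torch_path = os.path.dirname(torch.__file__)
  lib_dir = os.path.join(torch_path, "lib")
  print(torch_path)  # the value exported as $TORCH_PATH
  print(lib_dir)     # the directory appended to $LD_LIBRARY_PATH
  ```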
- Don't forget to compile TorchVision using the following script.

  ```bash
  git clone https://github.com/pytorch/vision.git
  cd vision
  git checkout release/0.9  # Double check the version of TorchVision currently in use
  mkdir build && cd build
  cmake .. -DTorch_DIR=$TORCH_PATH/share/cmake/Torch  # Set `-DWITH_CUDA=ON` if you're using GPU
  make -j4
  sudo make install
  ```
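  The LibTorch and TorchVision versions listed in the dependencies come in matching pairs (1.8.x with the `release/0.9` branch, 1.9.x with `release/0.10`). A tiny hypothetical helper sketching that mapping, to pick the branch for `git checkout`:

  ```python
  # Hypothetical helper: maps a LibTorch version to the TorchVision release
  # branch to check out, based on the version pairs in the dependency list.
  def torchvision_branch(torch_version: str) -> str:
      pairs = {"1.8": "release/0.9", "1.9": "release/0.10"}
      major_minor = ".".join(torch_version.split(".")[:2])
      return pairs[major_minor]

  print(torchvision_branch("1.8.0"))  # release/0.9
  print(torchvision_branch("1.9.0"))  # release/0.10
  ```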
- Generate the TorchScript model.

  Unlike ultralytics's `torch.jit.trace` mechanism, we use `torch.jit.script` to trace the YOLOv5 models, which contain the whole pre-processing (especially the `letterbox` ops) and post-processing (especially the `nms` ops) procedures, so you don't need to rewrite the C++ pre-processing and post-processing code by hand.

  ```python
  import torch

  from yolort.models import yolov5n

  model = yolov5n(pretrained=True)
  model.eval()
  traced_model = torch.jit.script(model)
  traced_model.save("yolov5n.torchscript.pt")
  ```
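  To see why `torch.jit.script` is used instead of `torch.jit.trace` here, consider a toy module (unrelated to yolort) with a data-dependent branch. Tracing records only the execution path taken by the example input, while scripting compiles the control flow itself, which is what makes embedding ops like `nms` possible:

  ```python
  import torch

  class Toy(torch.nn.Module):
      # A data-dependent branch, like the variable-length loops inside NMS:
      # trace would bake in whichever path the example input happened to take;
      # script keeps both branches.
      def forward(self, x: torch.Tensor) -> torch.Tensor:
          if x.sum() > 0:
              return x * 2
          return x - 1

  scripted = torch.jit.script(Toy())
  print(scripted(torch.tensor([1.0])).item())   # 2.0, first branch
  print(scripted(torch.tensor([-1.0])).item())  # -2.0, second branch preserved
  ```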
- Then compile the source code.

  ```bash
  cd deployment/libtorch
  mkdir build && cd build
  cmake .. -DTorch_DIR=$TORCH_PATH/share/cmake/Torch
  make
  ```
- Now you can infer your own images.

  ```bash
  ./yolo_inference [--input_source ../../../test/assets/zidane.jpg] \
                   [--checkpoint ../yolov5n.torchscript.pt] \
                   [--labelmap ../../../notebooks/assets/coco.names] \
                   [--gpu]  # GPU switch, which is optional and set to False by default
  ```
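  The `--labelmap` file is plain text with one class name per line, in COCO category order. A minimal sketch of how such a file is parsed (the sample content here is illustrative, not the full 80-class list):

  ```python
  # Parse a labelmap in the coco.names format: one class name per line.
  sample = "person\nbicycle\ncar\n"
  labels = [line.strip() for line in sample.splitlines() if line.strip()]
  print(labels)  # ['person', 'bicycle', 'car']
  ```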