Hi,
I want to create a Docker image with PyTorch, Torchvision, ONNX Runtime GPU, and torch_tensorrt so I can use TensorRT on the NVIDIA Jetson Orin Nano with JetPack 6.1.
Starting from your latest Docker image, dustynv/l4t-pytorch:r36.4.0, I have managed to install:
torch (2.5.0)
torchvision (0.20.0)
onnxruntime-gpu (1.20.0)
for CUDA 12.6 and Python 3.10.
I ran some tests and I am able to run both Torch models and ONNX models, on the CPU and on the GPU.
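For reference, this is the kind of quick check I used to confirm GPU availability for both runtimes (a minimal sketch; it degrades gracefully when a library is not installed, so it can run outside the container too):

```python
def runtime_report():
    """Report GPU availability for PyTorch and ONNX Runtime.

    Returns a dict; an entry is None when the library is not installed,
    otherwise a bool saying whether CUDA is usable.
    """
    report = {"torch_cuda": None, "ort_cuda": None}
    try:
        import torch
        report["torch_cuda"] = torch.cuda.is_available()
    except ImportError:
        pass  # torch not installed in this environment
    try:
        import onnxruntime as ort
        report["ort_cuda"] = "CUDAExecutionProvider" in ort.get_available_providers()
    except ImportError:
        pass  # onnxruntime not installed in this environment
    return report

print(runtime_report())
```

Inside the r36.4.0-based image, both entries come back True for me.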
However, I was not able to install torch_tensorrt by following and adapting your code from this Dockerfile for JetPack 4.6.
Do you have any insights on installing torch_tensorrt on JetPack 6.1?
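For context, this is roughly the build invocation I was attempting; the torch_tensorrt package name and the `--base` flag are my assumptions from reading the jetson-containers repo, so please correct me if the tooling expects something else:

```shell
# Hypothetical build via the jetson-containers tooling (package name
# and flags assumed from the repo layout, not verified on JetPack 6.1).
git clone https://github.com/dusty-nv/jetson-containers
cd jetson-containers
./install.sh
jetson-containers build --base=dustynv/l4t-pytorch:r36.4.0 torch_tensorrt
```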
Moreover, I saw that torch2trt is already installed in the dustynv/l4t-pytorch:r36.4.0 Docker image. I do not really understand the difference between torch2trt and torch_tensorrt. It looks like torch2trt targets edge devices, but it has not been updated in 8 months. Which one is better to use on the NVIDIA Jetson Orin Nano with JetPack 6.1, and which one gives the lowest inference latency?
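As far as I understand, the two projects expose different entry points: torch2trt wraps a traced module call, while torch_tensorrt compiles a module through its own frontend. A minimal side-by-side sketch (imports are guarded, so the calls only happen where the libraries actually exist, e.g. inside the container; `model` and `example_input` are placeholders):

```python
def convert_with_torch2trt(model, example_input):
    """Convert a PyTorch module with NVIDIA-AI-IOT/torch2trt.

    Returns None when torch2trt is not installed.
    """
    try:
        from torch2trt import torch2trt
    except ImportError:
        return None
    # torch2trt takes the module and a list of example inputs.
    return torch2trt(model, [example_input])


def convert_with_torch_tensorrt(model, example_input):
    """Compile a PyTorch module with pytorch/TensorRT (torch_tensorrt).

    Returns None when torch_tensorrt is not installed.
    """
    try:
        import torch_tensorrt
    except ImportError:
        return None
    # torch_tensorrt.compile accepts example inputs via the `inputs` kwarg.
    return torch_tensorrt.compile(model, inputs=[example_input])
```

I have not benchmarked the two against each other on the Orin Nano, which is why I am asking which one you would recommend.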
Thanks for sharing your Docker images!