
No installation available of torch_tensorrt for Jet Pack 6.1 #737

Open
alexoterno opened this issue Dec 4, 2024 · 0 comments
alexoterno commented Dec 4, 2024

Hi,
I want to create a Docker image with PyTorch, Torchvision, ONNX Runtime GPU, and torch_tensorrt to use TensorRT on the Nvidia Jetson Orin Nano with JetPack 6.1.
Starting from your latest Docker image, dustynv/l4t-pytorch:r36.4.0, I have managed to install:

  • torch (2.5.0)
  • torchvision (0.20.0)
  • onnxruntime-gpu (1.20.0)

all for CUDA 12.6 and Python 3.10.
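For reference, the build I ended up with looks roughly like the sketch below. This is a minimal illustration, not the exact Dockerfile: it assumes torch and torchvision already ship in the base image, and that onnxruntime-gpu comes from a JetPack-compatible wheel (on Jetson this is usually not the plain PyPI package).

```dockerfile
# Sketch: extend the l4t-pytorch base image, which already includes
# torch 2.5.0 and torchvision 0.20.0 for CUDA 12.6 / Python 3.10.
FROM dustynv/l4t-pytorch:r36.4.0

# onnxruntime-gpu 1.20.0; on Jetson this typically needs a
# JetPack-specific wheel rather than the default PyPI build.
RUN pip3 install --no-cache-dir onnxruntime-gpu==1.20.0

# torch_tensorrt would be installed here once a JetPack 6.1
# compatible build is available (the subject of this issue).
```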

I ran some tests and I'm able to run torch models and ONNX models on both the CPU and the GPU.

However, I was not able to install torch_tensorrt by following and adapting your code from this Dockerfile for JetPack 4.6.
Do you have any insights on installing torch_tensorrt on JetPack 6.1?

Moreover, I saw that torch2trt is already installed in the dustynv/l4t-pytorch:r36.4.0 Docker image. I don't really understand the difference between torch2trt and torch_tensorrt. It looks like torch2trt is aimed at edge devices, but it has not been updated in 8 months. Which one is better suited to the Nvidia Jetson Orin Nano with JetPack 6.1, and which one has the lowest inference latency?

Thanks for sharing your Docker images.
