diff --git a/docs/en/guides/nvidia-jetson.md b/docs/en/guides/nvidia-jetson.md
index a194c243df5..1af1ec71468 100644
--- a/docs/en/guides/nvidia-jetson.md
+++ b/docs/en/guides/nvidia-jetson.md
@@ -2,12 +2,17 @@
comments: true
description: Learn to deploy Ultralytics YOLO11 on NVIDIA Jetson devices with our detailed guide. Explore performance benchmarks and maximize AI capabilities.
keywords: Ultralytics, YOLO11, NVIDIA Jetson, JetPack, AI deployment, performance benchmarks, embedded systems, deep learning, TensorRT, computer vision
+benchmark_version: 8.3.51
---
# Quick Start Guide: NVIDIA Jetson with Ultralytics YOLO11
This comprehensive guide provides a detailed walkthrough for deploying Ultralytics YOLO11 on [NVIDIA Jetson](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/) devices. Additionally, it showcases performance benchmarks to demonstrate the capabilities of YOLO11 on these small and powerful devices.
+!!! tip "New product support"
+
+ We have updated this guide with the latest [NVIDIA Jetson Orin Nano Super Developer Kit](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/nano-super-developer-kit), which delivers up to 67 TOPS of AI performance, a 1.7X improvement over its predecessor, to seamlessly run the most popular AI models.
+
-
+#### NVIDIA Jetson Orin Nano Super Developer Kit
+
+<!-- Benchmark comparison chart: NVIDIA Jetson Orin Nano Super Developer Kit -->
+
+Benchmarked with Ultralytics {{ benchmark_version }}
+
-### Detailed Comparison Table
+#### NVIDIA Jetson Orin NX 16GB
+
+<!-- Benchmark comparison chart: NVIDIA Jetson Orin NX 16GB -->
+
+Benchmarked with Ultralytics {{ benchmark_version }}
+
+### Detailed Comparison Tables
The tables below represent the benchmark results for five different models (YOLO11n, YOLO11s, YOLO11m, YOLO11l, YOLO11x) across eleven different formats (PyTorch, TorchScript, ONNX, OpenVINO, TensorRT, TF SavedModel, TF GraphDef, TF Lite, PaddlePaddle, MNN, NCNN), giving us the status, size, mAP50-95(B) metric, and inference time for each combination.
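
These results can be regenerated on your own Jetson device with the Ultralytics benchmark API. The snippet below is a minimal sketch: the `coco8.yaml` dataset and 640 image size mirror the standard Ultralytics benchmark examples and are assumptions here, not necessarily the exact configuration behind the published figures.

```python
from ultralytics import YOLO

# Load a YOLO11n PyTorch model (swap in yolo11s/m/l/x.pt for the other rows)
model = YOLO("yolo11n.pt")

# Benchmark speed and accuracy across all supported export formats
results = model.benchmark(data="coco8.yaml", imgsz=640)
```
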
+#### NVIDIA Jetson Orin Nano Super Developer Kit
+
+!!! performance
+
+ === "YOLO11n"
+
+ | Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
+ |-----------------|--------|-------------------|-------------|------------------------|
+ | PyTorch | ✅ | 5.4 | 0.6176 | 21.3 |
+ | TorchScript | ✅ | 10.5 | 0.6100 | 13.40 |
+ | ONNX | ✅ | 10.2 | 0.6100 | 7.94 |
+ | OpenVINO | ✅ | 10.4 | 0.6091 | 57.36 |
+ | TensorRT (FP32) | ✅ | 11.9 | 0.6082 | 7.60 |
+ | TensorRT (FP16) | ✅ | 8.3 | 0.6096 | 4.91 |
+ | TensorRT (INT8) | ✅ | 5.6 | 0.3180 | 3.91 |
+ | TF SavedModel | ✅ | 25.8 | 0.6082 | 223.98 |
+ | TF GraphDef | ✅ | 10.3 | 0.6082 | 289.95 |
+ | TF Lite | ✅ | 10.3 | 0.6082 | 328.29 |
+ | PaddlePaddle | ✅ | 20.4 | 0.6082 | 530.46 |
+ | MNN | ✅ | 10.1 | 0.6120 | 74.75 |
+ | NCNN | ✅ | 10.2 | 0.6106 | 46.12 |
+
+ === "YOLO11s"
+
+ | Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
+ |-----------------|--------|-------------------|-------------|------------------------|
+ | PyTorch | ✅ | 18.4 | 0.7526 | 22.00 |
+ | TorchScript | ✅ | 36.5 | 0.7400 | 21.35 |
+ | ONNX | ✅ | 36.3 | 0.7400 | 13.91 |
+ | OpenVINO | ✅ | 36.4 | 0.7391 | 126.95 |
+ | TensorRT (FP32) | ✅ | 38.0 | 0.7400 | 13.29 |
+ | TensorRT (FP16) | ✅ | 21.3 | 0.7431 | 7.30 |
+ | TensorRT (INT8) | ✅ | 12.2 | 0.3243 | 5.25 |
+ | TF SavedModel | ✅ | 91.1 | 0.7400 | 406.73 |
+ | TF GraphDef | ✅ | 36.4 | 0.7400 | 629.80 |
+ | TF Lite | ✅ | 36.4 | 0.7400 | 953.98 |
+ | PaddlePaddle | ✅ | 72.5 | 0.7400 | 1311.67 |
+ | MNN | ✅ | 36.2 | 0.7392 | 187.66 |
+ | NCNN | ✅ | 36.2 | 0.7403 | 122.02 |
+
+ === "YOLO11m"
+
+ | Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
+ |-----------------|--------|-------------------|-------------|------------------------|
+ | PyTorch | ✅ | 38.8 | 0.7598 | 33.00 |
+ | TorchScript | ✅ | 77.3 | 0.7643 | 48.17 |
+ | ONNX | ✅ | 76.9 | 0.7641 | 29.31 |
+ | OpenVINO | ✅ | 77.1 | 0.7642 | 313.49 |
+ | TensorRT (FP32) | ✅ | 78.7 | 0.7641 | 28.21 |
+ | TensorRT (FP16) | ✅ | 41.8 | 0.7653 | 13.99 |
+ | TensorRT (INT8) | ✅ | 23.2 | 0.4194 | 9.58 |
+ | TF SavedModel | ✅ | 192.7 | 0.7643 | 802.30 |
+ | TF GraphDef | ✅ | 77.0 | 0.7643 | 1335.42 |
+ | TF Lite | ✅ | 77.0 | 0.7643 | 2842.42 |
+ | PaddlePaddle | ✅ | 153.8 | 0.7643 | 3644.29 |
+ | MNN | ✅ | 76.8 | 0.7648 | 503.90 |
+ | NCNN | ✅ | 76.8 | 0.7674 | 298.78 |
+
+ === "YOLO11l"
+
+ | Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
+ |-----------------|--------|-------------------|-------------|------------------------|
+ | PyTorch | ✅ | 49.0 | 0.7475 | 43.00 |
+ | TorchScript | ✅ | 97.6 | 0.7250 | 62.94 |
+ | ONNX | ✅ | 97.0 | 0.7250 | 36.33 |
+ | OpenVINO | ✅ | 97.3 | 0.7226 | 387.72 |
+ | TensorRT (FP32) | ✅ | 99.1 | 0.7250 | 35.59 |
+ | TensorRT (FP16) | ✅ | 52.0 | 0.7265 | 17.57 |
+ | TensorRT (INT8) | ✅ | 31.0 | 0.4033 | 12.37 |
+ | TF SavedModel | ✅ | 243.3 | 0.7250 | 1116.20 |
+ | TF GraphDef | ✅ | 97.2 | 0.7250 | 1603.32 |
+ | TF Lite | ✅ | 97.2 | 0.7250 | 3607.51 |
+ | PaddlePaddle | ✅ | 193.9 | 0.7250 | 4890.90 |
+ | MNN | ✅ | 96.9 | 0.7222 | 619.04 |
+ | NCNN | ✅ | 96.9 | 0.7252 | 352.85 |
+
+ === "YOLO11x"
+
+ | Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
+ |-----------------|--------|-------------------|-------------|------------------------|
+ | PyTorch | ✅ | 109.3 | 0.8288 | 81.00 |
+ | TorchScript | ✅ | 218.1 | 0.8308 | 113.49 |
+ | ONNX | ✅ | 217.5 | 0.8308 | 75.20 |
+ | OpenVINO | ✅ | 217.8 | 0.8285 | 508.12 |
+ | TensorRT (FP32) | ✅ | 219.5 | 0.8307 | 67.32 |
+ | TensorRT (FP16) | ✅ | 112.2 | 0.8248 | 32.94 |
+ | TensorRT (INT8) | ✅ | 61.7 | 0.4854 | 20.72 |
+ | TF SavedModel | ✅ | 545.0 | 0.8308 | 1048.8 |
+ | TF GraphDef | ✅ | 217.8 | 0.8308 | 2961.8 |
+ | TF Lite | ✅ | 217.8 | 0.8308 | 7898.8 |
+ | PaddlePaddle | ✅ | 434.8 | 0.8308 | 9903.68 |
+ | MNN | ✅ | 217.3 | 0.8308 | 1242.97 |
+ | NCNN | ✅ | 217.3 | 0.8304 | 850.05 |
+
+ Benchmarked with Ultralytics {{ benchmark_version }}
+
+#### NVIDIA Jetson Orin NX 16GB
+
!!! performance
=== "YOLO11n"
| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|-----------------|--------|-------------------|-------------|------------------------|
- | PyTorch | ✅ | 5.4 | 0.6176 | 19.80 |
- | TorchScript | ✅ | 10.5 | 0.6100 | 13.30 |
- | ONNX | ✅ | 10.2 | 0.6082 | 67.92 |
- | OpenVINO | ✅ | 10.4 | 0.6082 | 118.21 |
- | TensorRT (FP32) | ✅ | 14.1 | 0.6100 | 7.94 |
- | TensorRT (FP16) | ✅ | 8.3 | 0.6082 | 4.80 |
- | TensorRT (INT8) | ✅ | 6.6 | 0.3256 | 4.17 |
- | TF SavedModel | ✅ | 25.8 | 0.6082 | 185.88 |
- | TF GraphDef | ✅ | 10.3 | 0.6082 | 256.66 |
- | TF Lite | ✅ | 10.3 | 0.6082 | 284.64 |
- | PaddlePaddle | ✅ | 20.4 | 0.6082 | 477.41 |
- | NCNN | ✅ | 10.2 | 0.6106 | 32.18 |
+ | PyTorch | ✅ | 5.4 | 0.6176 | 19.50 |
+ | TorchScript | ✅ | 10.5 | 0.6100 | 13.03 |
+ | ONNX | ✅ | 10.2 | 0.6100 | 8.44 |
+ | OpenVINO | ✅ | 10.4 | 0.6091 | 40.83 |
+ | TensorRT (FP32) | ✅ | 11.9 | 0.6100 | 8.05 |
+ | TensorRT (FP16) | ✅ | 8.2 | 0.6096 | 4.85 |
+ | TensorRT (INT8) | ✅ | 5.5 | 0.3180 | 4.37 |
+ | TF SavedModel | ✅ | 25.8 | 0.6082 | 185.39 |
+ | TF GraphDef | ✅ | 10.3 | 0.6082 | 244.85 |
+ | TF Lite | ✅ | 10.3 | 0.6082 | 289.77 |
+ | PaddlePaddle | ✅ | 20.4 | 0.6082 | 476.52 |
+ | MNN | ✅ | 10.1 | 0.6120 | 53.37 |
+ | NCNN | ✅ | 10.2 | 0.6106 | 33.55 |
=== "YOLO11s"
| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|-----------------|--------|-------------------|-------------|------------------------|
- | PyTorch | ✅ | 18.4 | 0.7526 | 20.20 |
- | TorchScript | ✅ | 36.5 | 0.7416 | 23.42 |
- | ONNX | ✅ | 36.3 | 0.7416 | 162.01 |
- | OpenVINO | ✅ | 36.4 | 0.7416 | 159.61 |
- | TensorRT (FP32) | ✅ | 40.3 | 0.7416 | 13.93 |
- | TensorRT (FP16) | ✅ | 21.7 | 0.7416 | 7.47 |
- | TensorRT (INT8) | ✅ | 13.6 | 0.3179 | 5.66 |
- | TF SavedModel | ✅ | 91.1 | 0.7416 | 316.46 |
- | TF GraphDef | ✅ | 36.4 | 0.7416 | 506.71 |
- | TF Lite | ✅ | 36.4 | 0.7416 | 842.97 |
- | PaddlePaddle | ✅ | 72.5 | 0.7416 | 1172.57 |
- | NCNN | ✅ | 36.2 | 0.7419 | 66.00 |
+ | PyTorch | ✅ | 18.4 | 0.7526 | 19.00 |
+ | TorchScript | ✅ | 36.5 | 0.7400 | 22.90 |
+ | ONNX | ✅ | 36.3 | 0.7400 | 14.44 |
+ | OpenVINO | ✅ | 36.4 | 0.7391 | 88.70 |
+ | TensorRT (FP32) | ✅ | 37.9 | 0.7400 | 14.13 |
+ | TensorRT (FP16) | ✅ | 21.6 | 0.7406 | 7.55 |
+ | TensorRT (INT8) | ✅ | 12.2 | 0.3243 | 5.63 |
+ | TF SavedModel | ✅ | 91.1 | 0.7400 | 317.61 |
+ | TF GraphDef | ✅ | 36.4 | 0.7400 | 515.99 |
+ | TF Lite | ✅ | 36.4 | 0.7400 | 838.85 |
+ | PaddlePaddle | ✅ | 72.5 | 0.7400 | 1170.07 |
+ | MNN | ✅ | 36.2 | 0.7413 | 125.23 |
+ | NCNN | ✅ | 36.2 | 0.7403 | 68.13 |
=== "YOLO11m"
| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|-----------------|--------|-------------------|-------------|------------------------|
- | PyTorch | ✅ | 38.8 | 0.7595 | 36.70 |
- | TorchScript | ✅ | 77.3 | 0.7643 | 50.95 |
- | ONNX | ✅ | 76.9 | 0.7643 | 416.34 |
- | OpenVINO | ✅ | 77.1 | 0.7643 | 370.99 |
- | TensorRT (FP32) | ✅ | 81.5 | 0.7640 | 30.49 |
- | TensorRT (FP16) | ✅ | 42.2 | 0.7658 | 14.93 |
- | TensorRT (INT8) | ✅ | 24.3 | 0.4118 | 10.32 |
- | TF SavedModel | ✅ | 192.7 | 0.7643 | 597.08 |
- | TF GraphDef | ✅ | 77.0 | 0.7643 | 1016.12 |
- | TF Lite | ✅ | 77.0 | 0.7643 | 2494.60 |
- | PaddlePaddle | ✅ | 153.8 | 0.7643 | 3218.99 |
- | NCNN | ✅ | 76.8 | 0.7691 | 192.77 |
+ | PyTorch | ✅ | 38.8 | 0.7598 | 36.50 |
+ | TorchScript | ✅ | 77.3 | 0.7643 | 52.55 |
+ | ONNX | ✅ | 76.9 | 0.7640 | 31.16 |
+ | OpenVINO | ✅ | 77.1 | 0.7642 | 208.57 |
+ | TensorRT (FP32) | ✅ | 78.7 | 0.7640 | 30.72 |
+ | TensorRT (FP16) | ✅ | 41.5 | 0.7651 | 14.45 |
+ | TensorRT (INT8) | ✅ | 23.3 | 0.4194 | 10.19 |
+ | TF SavedModel | ✅ | 192.7 | 0.7643 | 590.11 |
+ | TF GraphDef | ✅ | 77.0 | 0.7643 | 998.57 |
+ | TF Lite | ✅ | 77.0 | 0.7643 | 2486.11 |
+ | PaddlePaddle | ✅ | 153.8 | 0.7643 | 3236.09 |
+ | MNN | ✅ | 76.8 | 0.7661 | 335.78 |
+ | NCNN | ✅ | 76.8 | 0.7674 | 188.43 |
=== "YOLO11l"
| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|-----------------|--------|-------------------|-------------|------------------------|
- | PyTorch | ✅ | 49.0 | 0.7475 | 47.6 |
- | TorchScript | ✅ | 97.6 | 0.7250 | 66.36 |
- | ONNX | ✅ | 97.0 | 0.7250 | 532.58 |
- | OpenVINO | ✅ | 97.3 | 0.7250 | 477.55 |
- | TensorRT (FP32) | ✅ | 101.6 | 0.7250 | 38.71 |
- | TensorRT (FP16) | ✅ | 52.6 | 0.7265 | 19.35 |
- | TensorRT (INT8) | ✅ | 31.6 | 0.3856 | 13.50 |
- | TF SavedModel | ✅ | 243.3 | 0.7250 | 895.24 |
- | TF GraphDef | ✅ | 97.2 | 0.7250 | 1301.19 |
- | TF Lite | ✅ | 97.2 | 0.7250 | 3202.93 |
- | PaddlePaddle | ✅ | 193.9 | 0.7250 | 4206.98 |
- | NCNN | ✅ | 96.9 | 0.7252 | 225.75 |
+ | PyTorch | ✅ | 49.0 | 0.7475 | 46.6 |
+ | TorchScript | ✅ | 97.6 | 0.7250 | 66.54 |
+ | ONNX | ✅ | 97.0 | 0.7250 | 39.55 |
+ | OpenVINO | ✅ | 97.3 | 0.7226 | 262.44 |
+ | TensorRT (FP32) | ✅ | 99.2 | 0.7250 | 38.68 |
+ | TensorRT (FP16) | ✅ | 51.9 | 0.7265 | 18.53 |
+ | TensorRT (INT8) | ✅ | 30.9 | 0.4033 | 13.36 |
+ | TF SavedModel | ✅ | 243.3 | 0.7250 | 850.25 |
+ | TF GraphDef | ✅ | 97.2 | 0.7250 | 1324.60 |
+ | TF Lite | ✅ | 97.2 | 0.7250 | 3191.24 |
+ | PaddlePaddle | ✅ | 193.9 | 0.7250 | 4204.97 |
+ | MNN | ✅ | 96.9 | 0.7225 | 414.41 |
+ | NCNN | ✅ | 96.9 | 0.7252 | 237.74 |
=== "YOLO11x"
| Format | Status | Size on disk (MB) | mAP50-95(B) | Inference time (ms/im) |
|-----------------|--------|-------------------|-------------|------------------------|
- | PyTorch | ✅ | 109.3 | 0.8288 | 85.60 |
- | TorchScript | ✅ | 218.1 | 0.8308 | 121.67 |
- | ONNX | ✅ | 217.5 | 0.8308 | 1073.14 |
- | OpenVINO | ✅ | 217.8 | 0.8308 | 955.60 |
- | TensorRT (FP32) | ✅ | 221.6 | 0.8307 | 75.84 |
- | TensorRT (FP16) | ✅ | 113.1 | 0.8295 | 35.75 |
- | TensorRT (INT8) | ✅ | 62.2 | 0.4783 | 22.23 |
- | TF SavedModel | ✅ | 545.0 | 0.8308 | 1497.40 |
- | TF GraphDef | ✅ | 217.8 | 0.8308 | 2552.42 |
- | TF Lite | ✅ | 217.8 | 0.8308 | 7044.58 |
- | PaddlePaddle | ✅ | 434.9 | 0.8308 | 8386.73 |
- | NCNN | ✅ | 217.3 | 0.8304 | 486.36 |
+ | PyTorch | ✅ | 109.3 | 0.8288 | 86.00 |
+ | TorchScript | ✅ | 218.1 | 0.8308 | 122.43 |
+ | ONNX | ✅ | 217.5 | 0.8307 | 77.50 |
+ | OpenVINO | ✅ | 217.8 | 0.8285 | 508.12 |
+ | TensorRT (FP32) | ✅ | 219.5 | 0.8307 | 76.44 |
+ | TensorRT (FP16) | ✅ | 112.0 | 0.8309 | 35.99 |
+ | TensorRT (INT8) | ✅ | 61.6 | 0.4854 | 22.32 |
+ | TF SavedModel | ✅ | 545.0 | 0.8308 | 1470.06 |
+ | TF GraphDef | ✅ | 217.8 | 0.8308 | 2549.78 |
+ | TF Lite | ✅ | 217.8 | 0.8308 | 7025.44 |
+ | PaddlePaddle | ✅ | 434.8 | 0.8308 | 8364.89 |
+ | MNN | ✅ | 217.3 | 0.8289 | 827.13 |
+ | NCNN | ✅ | 217.3 | 0.8304 | 490.29 |
+
+ Benchmarked with Ultralytics {{ benchmark_version }}
[Explore more benchmarking efforts by Seeed Studio](https://www.seeedstudio.com/blog/2023/03/30/yolov8-performance-benchmarks-on-nvidia-jetson-devices) running on different versions of NVIDIA Jetson hardware.
@@ -500,7 +631,7 @@ Deploying Ultralytics YOLO11 on NVIDIA Jetson devices is a straightforward proce
### What performance benchmarks can I expect from YOLO11 models on NVIDIA Jetson devices?
-YOLO11 models have been benchmarked on various NVIDIA Jetson devices showing significant performance improvements. For example, the TensorRT format delivers the best inference performance. The table in the [Detailed Comparison Table](#detailed-comparison-table) section provides a comprehensive view of performance metrics like mAP50-95 and inference time across different model formats.
+YOLO11 models have been benchmarked on various NVIDIA Jetson devices, showing significant performance improvements. For example, the TensorRT format delivers the best inference performance. The tables in the [Detailed Comparison Tables](#detailed-comparison-tables) section provide a comprehensive view of performance metrics like mAP50-95 and inference time across different model formats.
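
As a quick illustration of that TensorRT path, the sketch below exports a YOLO11n model to a TensorRT engine and runs inference with it. FP16 (`half=True`) is shown because it roughly halves latency in the tables above at near-identical mAP50-95; INT8 would additionally require a calibration dataset. This is a minimal example, not the full deployment workflow.

```python
from ultralytics import YOLO

# Load a YOLO11n PyTorch model
model = YOLO("yolo11n.pt")

# Export to a TensorRT engine with FP16 precision (creates 'yolo11n.engine')
model.export(format="engine", half=True)

# Load the exported engine and run inference
trt_model = YOLO("yolo11n.engine")
results = trt_model("https://ultralytics.com/images/bus.jpg")
```
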
### Why should I use TensorRT for deploying YOLO11 on NVIDIA Jetson?