From f80d0d75c4d942b8252fdee521f64a7a76733ed5 Mon Sep 17 00:00:00 2001
From: Mohammed Yasin <32206511+Y-T-G@users.noreply.github.com>
Date: Fri, 25 Oct 2024 19:48:28 +0800
Subject: [PATCH] Fix inaccurate example in Export docs (#17161)

---
 docs/en/modes/export.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/en/modes/export.md b/docs/en/modes/export.md
index 4be5bd5b90a..776d8264455 100644
--- a/docs/en/modes/export.md
+++ b/docs/en/modes/export.md
@@ -136,13 +136,13 @@ INT8 quantization is an excellent way to compress the model and speed up inferen
         from ultralytics import YOLO
 
         model = YOLO("yolo11n.pt")  # Load a model
-        model.export(format="onnx", int8=True)
+        model.export(format="engine", int8=True)
         ```
 
     === "CLI"
 
         ```bash
-        yolo export model=yolo11n.pt format=onnx int8=True  # export model with INT8 quantization
+        yolo export model=yolo11n.pt format=engine int8=True  # export TensorRT model with INT8 quantization
         ```
 
 INT8 quantization can be applied to various formats, such as TensorRT and CoreML. More details can be found in the [Export section](../modes/export.md).
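
For context, the corrected workflow this patch documents might look like the following end-to-end sketch: export the PyTorch checkpoint to a TensorRT engine with INT8 quantization, then load the generated `.engine` file for inference. The `data` argument naming a calibration dataset is an assumption drawn from the Ultralytics TensorRT INT8 export documentation rather than from this patch, and `"coco8.yaml"` is just an illustrative dataset config; TensorRT export also requires an NVIDIA GPU with the TensorRT libraries installed.

```python
from ultralytics import YOLO

# Load the PyTorch checkpoint
model = YOLO("yolo11n.pt")

# Export to a TensorRT engine with INT8 quantization.
# INT8 calibration needs representative images; "coco8.yaml" is an assumed
# example here, swap in a dataset config that matches your deployment data.
model.export(format="engine", int8=True, data="coco8.yaml")

# Load the exported engine and run inference with it
trt_model = YOLO("yolo11n.engine")
results = trt_model("https://ultralytics.com/images/bus.jpg")
```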