Fix inaccurate example in Export docs (ultralytics#17161)
Y-T-G authored Oct 25, 2024
1 parent 98aa4bb commit f80d0d7
Showing 1 changed file with 2 additions and 2 deletions: docs/en/modes/export.md
@@ -136,13 +136,13 @@ INT8 quantization is an excellent way to compress the model and speed up inferen
         from ultralytics import YOLO
 
         model = YOLO("yolo11n.pt")  # Load a model
-        model.export(format="onnx", int8=True)
+        model.export(format="engine", int8=True)
         ```
 
     === "CLI"
 
         ```bash
-        yolo export model=yolo11n.pt format=onnx int8=True  # export model with INT8 quantization
+        yolo export model=yolo11n.pt format=engine int8=True  # export TensorRT model with INT8 quantization
         ```
 
 INT8 quantization can be applied to various formats, such as TensorRT and CoreML. More details can be found in the [Export section](../modes/export.md).
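For context, the INT8 quantization the corrected docs refer to maps floating-point values onto 8-bit integers via a scale and zero point. The sketch below is a minimal, illustrative affine quantizer in plain Python; it is not Ultralytics' or TensorRT's implementation, and the function names are hypothetical:

```python
def quantize_int8(values, qmin=-128, qmax=127):
    """Affine-quantize a list of floats to int8 using a scale and zero point."""
    lo, hi = min(values), max(values)
    # Spread the observed float range across the int8 range; avoid a zero scale.
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point


def dequantize(q, scale, zero_point):
    """Recover approximate float values from int8 codes."""
    return [(x - zero_point) * scale for x in q]


vals = [0.0, 0.5, 1.0, -0.25]
q, s, z = quantize_int8(vals)
approx = dequantize(q, s, z)
```

Each float is recovered only to within one quantization step (`scale`), which is the accuracy/size trade-off INT8 export makes; real exporters choose the scale from calibration data rather than a single batch's min/max.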
