Addressing comments on content
miniMaddy committed Feb 12, 2024
1 parent 0f1babd commit fc6f6dc
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions chapters/en/Unit 6 - Basic CV Tasks/object_detection.mdx
@@ -11,7 +11,7 @@ Object detection is the task of identifying and locating specific objects within

### Classification vs Localization

-Classification distinguishes objects based on unique attributes, while localization determines an object's location within an image. Object detection combines both approaches, encapsulating identified entities with bounding boxes and assigning corresponding class labels. Imagine recognizing different fruit types and pinpointing their exact locations in a single image. That's object detection at play!
+Classification distinguishes objects based on unique attributes, while localization determines an object's location within an image. Object detection combines both approaches, locating entities and assigning corresponding class labels. Imagine recognizing different fruit types and pinpointing their exact locations in a single image. That's object detection at play!

## Use Cases

@@ -31,9 +31,9 @@ bounding_boxes = pipe(image)
## How to Evaluate an Object Detection Model?
You have now seen how to use an object detection model, but how can you evaluate it? As demonstrated in the previous section, object detection is primarily a supervised learning task. This means that the dataset is composed of images and their corresponding bounding boxes, which serve as the ground truth. A few metrics can be used to evaluate your model. The most common ones are:

-- **The Intersection over Union (IoU) or Jaccard index** measures the overlap between predicted and actual bounding boxes as a percentage ranging from 0% to 100%. Higher IoU percentages indicate better alignments, i.e., improved accuracy. Useful when assessing tracker performance under changing conditions, e.g., following wild animals during migration.
+- **The Intersection over Union (IoU) or Jaccard index** measures the overlap between predicted and reference labels as a percentage ranging from 0% to 100%. Higher IoU percentages indicate better alignments, i.e., improved accuracy. Useful when assessing tracker performance under changing conditions, e.g., following wild animals during migration.

-- **Mean Average Precision (mAP)** estimates object detection efficiency using both precision (correct prediction ratio) and recall (true positive identification ability). Calculated across varying IoU thresholds, mAP functions as a holistic assessment tool for object detection algorithms. Helpful when measuring sentiment analysis models' effectiveness, e.g., processing extensive customer reviews.
+- **Mean Average Precision (mAP)** estimates object detection efficiency using both precision (correct prediction ratio) and recall (true positive identification ability). Calculated across varying IoU thresholds, mAP functions as a holistic assessment tool for object detection algorithms. Helpful for determining the model's performance in localization and detection in challenging conditions like finding irregular surface defects that vary in size and shape in a manufactured part.

## Conclusion and Future Work

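The second hunk's header references `bounding_boxes = pipe(image)`, the tail of a code block that is collapsed in this diff. Below is a minimal sketch of how such an object-detection pipeline is typically used with the `transformers` library; the `facebook/detr-resnet-50` checkpoint and the sample image URL are illustrative choices, not necessarily the ones used in the course file.

```python
from transformers import pipeline
from PIL import Image
import requests

# Any RGB image works; this COCO validation image is only an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The "object-detection" pipeline returns one dict per detected object,
# with "score", "label", and "box" (xmin, ymin, xmax, ymax) keys.
pipe = pipeline("object-detection", model="facebook/detr-resnet-50")
bounding_boxes = pipe(image)

for detection in bounding_boxes:
    print(detection["label"], round(detection["score"], 3), detection["box"])
```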

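The IoU metric discussed in the changed bullet above can be made concrete with a small worked example. The sketch below computes IoU for axis-aligned boxes in `(xmin, ymin, xmax, ymax)` format; the box coordinates are made up for illustration.

```python
def iou(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    intersection = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - intersection

    return intersection / union if union > 0 else 0.0

# A predicted box vs. a reference (ground-truth) box.
predicted = (50, 50, 150, 150)
reference = (60, 60, 170, 160)
print(f"IoU: {iou(predicted, reference):.1%}")  # ~62.8%
```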

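Full mAP averages precision over recall levels, IoU thresholds, and classes, which is more than a short snippet can show. As a building block, the sketch below greedily matches predictions to ground-truth boxes at a single IoU threshold and reports precision and recall for one class; it reuses the `iou` helper from the previous sketch, and the greedy matching strategy is a simplifying assumption rather than the exact procedure of any particular benchmark.

```python
def precision_recall(predictions, ground_truths, iou_threshold=0.5):
    """Precision and recall for one image and one class.

    predictions: list of (box, score) tuples; highest-confidence boxes are matched first
    ground_truths: list of reference boxes
    """
    matched = set()
    true_positives = 0

    for box, _score in sorted(predictions, key=lambda p: p[1], reverse=True):
        # Find the best still-unmatched ground-truth box for this prediction.
        best_iou, best_idx = 0.0, None
        for idx, gt_box in enumerate(ground_truths):
            if idx in matched:
                continue
            overlap = iou(box, gt_box)
            if overlap > best_iou:
                best_iou, best_idx = overlap, idx
        if best_idx is not None and best_iou >= iou_threshold:
            true_positives += 1
            matched.add(best_idx)

    false_positives = len(predictions) - true_positives    # detections with no match
    false_negatives = len(ground_truths) - true_positives  # missed objects
    precision = true_positives / len(predictions) if predictions else 0.0
    recall = true_positives / len(ground_truths) if ground_truths else 0.0
    return precision, recall

# Example with made-up boxes: one correct detection, one false positive, one missed object.
preds = [((50, 50, 150, 150), 0.9), ((300, 300, 340, 340), 0.4)]
gts = [(60, 60, 170, 160), (200, 200, 260, 260)]
print(precision_recall(preds, gts))  # (0.5, 0.5)
```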