fix docs and sample
negvet committed Aug 20, 2024
1 parent 738eeb9 commit 0099855
Showing 4 changed files with 75 additions and 23 deletions.
3 changes: 2 additions & 1 deletion CHANGELOG.md
@@ -17,10 +17,11 @@
* Refactor OpenVINO imports by @goodsong81 in https://github.com/openvinotoolkit/openvino_xai/pull/45
* Support OV IR / ONNX model file for Explainer by @goodsong81 in https://github.com/openvinotoolkit/openvino_xai/pull/47
* Try CNN -> ViT assumption for IR insertion by @goodsong81 in https://github.com/openvinotoolkit/openvino_xai/pull/48
-* Enable AISE: Adaptive Input Sampling for Explanation of Black-box Models by @negvet in https://github.com/openvinotoolkit/openvino_xai/pull/49
+* Enable AISE for classification: Adaptive Input Sampling for Explanation of Black-box Models by @negvet in https://github.com/openvinotoolkit/openvino_xai/pull/49
* Upgrade OpenVINO to 2024.3.0 by @goodsong81 in https://github.com/openvinotoolkit/openvino_xai/pull/52
* Add saliency map visualization with explanation.plot() by @GalyaZalesskaya in https://github.com/openvinotoolkit/openvino_xai/pull/53
* Enable flexible naming for saved saliency maps and include confidence scores by @GalyaZalesskaya in https://github.com/openvinotoolkit/openvino_xai/pull/51
+* Enable AISE for detection: Adaptive Input Sampling for Explanation of Black-box Models by @negvet in https://github.com/openvinotoolkit/openvino_xai/pull/55

### Known Issues

9 changes: 5 additions & 4 deletions README.md
@@ -71,10 +71,11 @@ At the moment, *Image Classification* and *Object Detection* tasks are supported
|-----------------|----------------------|-----------|---------------------|-------|
| Computer Vision | Image Classification | White-Box | ReciproCAM | [arxiv](https://arxiv.org/abs/2209.14074) / [src](openvino_xai/methods/white_box/recipro_cam.py) |
| | | | VITReciproCAM | [arxiv](https://arxiv.org/abs/2310.02588) / [src](openvino_xai/methods/white_box/recipro_cam.py) |
| | | | ActivationMap | experimental / [src](openvino_xai/methods/white_box/activation_map.py) |
-| | | Black-Box | AISE | [src](openvino_xai/methods/black_box/aise.py) |
-| | | | RISE | [arxiv](https://arxiv.org/abs/1806.07421v3) / [src](openvino_xai/methods/black_box/rise.py) |
-| | Object Detection | White-Box | ClassProbabilityMap | experimental / [src](openvino_xai/methods/white_box/det_class_probability_map.py) |
-| | | | ActivationMap | experimental / [src](openvino_xai/methods/white_box/activation_map.py) |
+| | | Black-Box | AISEClassification | [src](openvino_xai/methods/black_box/aise.py) |
+| | | | RISE | [arxiv](https://arxiv.org/abs/1806.07421v3) / [src](openvino_xai/methods/black_box/rise.py) |
+| | Object Detection | White-Box | ClassProbabilityMap | experimental / [src](openvino_xai/methods/white_box/det_class_probability_map.py) |
+| | | Black-Box | AISEDetection | [src](openvino_xai/methods/black_box/aise.py) |

### Supported explainable models

2 changes: 1 addition & 1 deletion docs/source/user-guide.md
@@ -252,7 +252,7 @@ explanation.save("output_path", "name_")
Black-box mode does not update the model (treating the model as a black box).
Black-box approaches are based on the perturbation of the input data and measurement of the model's output change.

-For black-box mode we support 2 algorithms: **AISE** (by default) and [**RISE**](https://arxiv.org/abs/1806.07421). AISE is more effective for generating saliency maps for a few specific classes. RISE - to generate maps for all classes at once.
+For the black-box mode, two algorithms are supported: **AISE** (the default) and [**RISE**](https://arxiv.org/abs/1806.07421). AISE is more effective for generating saliency maps for a few specific classes, while RISE is better suited to generating maps for all classes at once. AISE is supported for both classification and detection tasks.
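The perturbation-and-measure idea behind these black-box methods can be illustrated with a toy occlusion sketch. This is not the actual AISE or RISE algorithm — the `toy_model`, patch size, and image here are made up purely to show how masking input regions and recording the score drop yields a saliency map:

```python
import numpy as np

def toy_model(img: np.ndarray) -> float:
    # stand-in "model": score is the mean intensity of the top-left quadrant
    return float(img[:8, :8].mean())

def occlusion_saliency(img: np.ndarray, patch: int = 4) -> np.ndarray:
    base = toy_model(img)
    h, w = img.shape
    sal = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = img.copy()
            masked[i:i + patch, j:j + patch] = 0  # perturb one patch
            # larger score drop -> patch mattered more to the prediction
            sal[i // patch, j // patch] = base - toy_model(masked)
    return sal

img = np.ones((16, 16))
sal = occlusion_saliency(img)
# only patches overlapping the top-left quadrant produce a score drop
```

AISE and RISE differ in how they choose and weight the perturbation masks, but both reduce to this model-agnostic loop, which is why no access to the model internals is needed.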

Pros:
- **Flexible** - can be applied to any custom model.
84 changes: 67 additions & 17 deletions examples/run_detection.py
@@ -24,39 +24,41 @@ def get_argument_parser():

 def preprocess_fn(x: np.ndarray) -> np.ndarray:
     # TODO: make sure it is correct
-    # x = cv2.resize(src=x, dsize=(416, 416))  # OTX YOLOX
-    x = cv2.resize(src=x, dsize=(992, 736))  # OTX ATSS
+    x = cv2.resize(src=x, dsize=(416, 416))  # OTX YOLOX
+    # x = cv2.resize(src=x, dsize=(992, 736))  # OTX ATSS
     x = x.transpose((2, 0, 1))
     x = np.expand_dims(x, 0)
     return x
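As an aside, the layout conversion done by `preprocess_fn` can be checked in isolation. A minimal sketch with a made-up dummy image, omitting the `cv2.resize` step so only the HWC-to-NCHW reshaping is shown:

```python
import numpy as np

x = np.zeros((416, 416, 3), dtype=np.float32)  # H, W, C (decoded image layout)
x = x.transpose((2, 0, 1))  # -> C, H, W
x = np.expand_dims(x, 0)    # -> N, C, H, W (batch of one)
print(x.shape)  # (1, 3, 416, 416)
```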


-def main(argv):
+def postprocess_fn(x) -> np.ndarray:
+    """Returns boxes, scores, labels."""
+    return x["boxes"][0][:, :4], x["boxes"][0][:, 4], x["labels"][0]
+
+
+def explain_white_box(args):
     """
     White-box scenario.
-    Insertion of the XAI branch into the Model API wrapper, thus Model API wrapper has additional 'saliency_map' output.
+    Insertion of the XAI branch into the model, thus model has additional 'saliency_map' output.
     """

-    parser = get_argument_parser()
-    args = parser.parse_args(argv)

# Create ov.Model
model: ov.Model
model = ov.Core().read_model(args.model_path)

     # OTX YOLOX
-    # cls_head_output_node_names = [
-    #     "/bbox_head/multi_level_conv_cls.0/Conv/WithoutBiases",
-    #     "/bbox_head/multi_level_conv_cls.1/Conv/WithoutBiases",
-    #     "/bbox_head/multi_level_conv_cls.2/Conv/WithoutBiases",
-    # ]
-    # OTX ATSS
     cls_head_output_node_names = [
-        "/bbox_head/atss_cls_1/Conv/WithoutBiases",
-        "/bbox_head/atss_cls_2/Conv/WithoutBiases",
-        "/bbox_head/atss_cls_3/Conv/WithoutBiases",
-        "/bbox_head/atss_cls_4/Conv/WithoutBiases",
+        "/bbox_head/multi_level_conv_cls.0/Conv/WithoutBiases",
+        "/bbox_head/multi_level_conv_cls.1/Conv/WithoutBiases",
+        "/bbox_head/multi_level_conv_cls.2/Conv/WithoutBiases",
     ]
+    # # OTX ATSS
+    # cls_head_output_node_names = [
+    #     "/bbox_head/atss_cls_1/Conv/WithoutBiases",
+    #     "/bbox_head/atss_cls_2/Conv/WithoutBiases",
+    #     "/bbox_head/atss_cls_3/Conv/WithoutBiases",
+    #     "/bbox_head/atss_cls_4/Conv/WithoutBiases",
+    # ]

# Create explainer object
explainer = xai.Explainer(
@@ -75,6 +77,7 @@ def main(argv):
explanation = explainer(
image,
targets=[0, 1, 2], # target classes to explain
+        overlay=True,
)

logger.info(
Expand All @@ -88,5 +91,52 @@ def main(argv):
explanation.save(output, Path(args.image_path).stem)


+def explain_black_box(args):
+    """
+    Black-box scenario.
+    """
+
+    # Create ov.Model
+    model: ov.Model
+    model = ov.Core().read_model(args.model_path)
+
+    # Create explainer object
+    explainer = xai.Explainer(
+        model=model,
+        task=xai.Task.DETECTION,
+        preprocess_fn=preprocess_fn,
+        postprocess_fn=postprocess_fn,
+        explain_mode=ExplainMode.BLACKBOX,  # defaults to AUTO
+    )
+
+    # Prepare input image and explanation parameters, can be different for each explain call
+    image = cv2.imread(args.image_path)
+
+    # Generate explanation
+    explanation = explainer(
+        image,
+        targets=[0],  # target boxes to explain
+        overlay=True,
+    )
+
+    logger.info(
+        f"Generated {len(explanation.saliency_map)} detection "
+        f"saliency maps of layout {explanation.layout} with shape {explanation.shape}."
+    )
+
+    # Save saliency maps for visual inspection
+    if args.output is not None:
+        output = Path(args.output) / "detection_black_box"
+        explanation.save(output, Path(args.image_path).stem)
+
+
+def main(argv):
+    parser = get_argument_parser()
+    args = parser.parse_args(argv)
+
+    explain_white_box(args)
+    explain_black_box(args)

if __name__ == "__main__":
main(sys.argv[1:])
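The `postprocess_fn` introduced in this commit adapts the raw detector output for the explainer by slicing boxes, scores, and labels apart. Its behavior can be sketched with a toy output dictionary — the box coordinates, scores, and label ids below are made-up illustrative values, not real model output:

```python
import numpy as np

# Toy stand-in for the raw detector output: a batch of one image with two
# boxes, each stored as [x1, y1, x2, y2, score], plus per-box label ids.
raw = {
    "boxes": np.array([[[10.0, 20.0, 110.0, 120.0, 0.9],
                        [30.0, 40.0, 130.0, 140.0, 0.8]]]),
    "labels": np.array([[1, 3]]),
}

boxes = raw["boxes"][0][:, :4]   # (num_boxes, 4) corner coordinates
scores = raw["boxes"][0][:, 4]   # (num_boxes,) confidence scores
labels = raw["labels"][0]        # (num_boxes,) class ids
print(boxes.shape, scores.tolist(), labels.tolist())
```

Splitting the output this way gives the black-box explainer the per-box scores it needs to measure how each input perturbation changes the detections.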
