mercedes-benz/automotive_regression_xai


automotive_regression_xai

Table of Contents

  1. About The Project
  2. Structure of the Repository
  3. Package Installation
  4. Usage
  5. Contributing
  6. License
  7. Contact

Provider Information

The source code has been tested solely for our own use cases, which might differ from yours. This project is actively maintained, and contributions are welcome.

About The Project

The automotive_regression_xai Python package has been developed as a reference to demonstrate explainable AI (XAI) functions and operations, providing deeper insights into model behavior and decision-making processes. It showcases how these techniques can be used to support AI applications within the automotive industry.

(back to top)

Structure of the Repository

automotive_regression_xai/
└── src/
    ├── xai_classes/
    │   ├── xai_ale.py
    │   ├── xai_ifi.py
    │   ├── xai_lime.py
    │   ├── xai_pfi.py
    │   └── xai_shap.py
    ├── automotive_regression_xai.py
    ├── sample_dataset.py
    └── test.py

(back to top)

Package Installation

  1. Clone the source code:

    git clone <repository_url>
    cd automotive_regression_xai
  2. Install Python dependencies in your Python environment:

    python3 -m pip install -e .
  3. Alternatively, build the package and install it on the system:

    python3 -m pip install --upgrade build
    python3 -m build
    pip install dist/*.tar.gz
  4. Verify that the installation was successful:

    pip list | grep automotive_regression_xai

(back to top)

Usage

ALE (Accumulated Local Effects):

Analyzes feature impacts by showing average changes in predictions as features vary, considering interactions while avoiding collinearity issues.
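The repository's ALE implementation lives in xai_ale.py; as a generic illustration of the idea (not the package's actual code), a minimal first-order ALE can be computed by hand: split a feature into quantile bins, average the prediction change across each bin, accumulate, and center. The dataset, model, and bin count below are illustrative placeholders.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=300, n_features=3, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

def ale_1d(model, X, feature, n_bins=10):
    """First-order ALE for one feature: accumulate the average local
    prediction difference across quantile bins, then center."""
    x = X[:, feature]
    # Quantile-based edges so each bin holds a similar number of points.
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    effects = np.zeros(n_bins)
    for b in range(n_bins):
        lo, hi = edges[b], edges[b + 1]
        mask = (x >= lo) & ((x <= hi) if b == n_bins - 1 else (x < hi))
        if not mask.any():
            continue
        X_lo, X_hi = X[mask].copy(), X[mask].copy()
        X_lo[:, feature] = lo   # evaluate at the lower bin edge
        X_hi[:, feature] = hi   # evaluate at the upper bin edge
        effects[b] = (model.predict(X_hi) - model.predict(X_lo)).mean()
    ale = np.cumsum(effects)
    return edges, ale - ale.mean()   # center the accumulated effect

edges, ale = ale_1d(model, X, feature=0)
```

Because only within-bin differences are used, correlated features do not produce the unrealistic extrapolations that partial dependence plots can suffer from.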

PFI (Permutation Feature Importance):

Measures feature importance by permuting feature values and observing changes in model performance.
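The repository's PFI class is in xai_pfi.py; the same idea is available off the shelf in scikit-learn. A sketch with an illustrative dataset and model:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

X, y = make_regression(n_samples=200, n_features=4, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Shuffle each feature column in turn and measure the drop in the score.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean,
                                    result.importances_std)):
    print(f"feature {i}: importance {mean:.3f} +/- {std:.3f}")
```

A large drop after permuting a feature means the model relied on it; a near-zero drop means the feature was largely ignored.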

IFI (Integrated Feature Importance):

Combines multiple importance measures for more holistic feature evaluation.
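The exact combination recipe lives in xai_ifi.py. As a generic illustration only (not the repository's actual formula), one way to integrate measures is to normalize each importance vector and average them:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

X, y = make_regression(n_samples=200, n_features=4, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

def normalize(v):
    v = np.clip(v, 0, None)              # ignore negative importances
    return v / v.sum() if v.sum() else v

pfi = permutation_importance(model, X, y, n_repeats=5, random_state=0)
impurity = model.feature_importances_    # tree impurity-based importance
# Average two normalized measures into one "integrated" ranking.
integrated = (normalize(pfi.importances_mean) + normalize(impurity)) / 2
```

Averaging complementary measures smooths out the biases of any single one (e.g. impurity importance favoring high-cardinality features).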

LIME (Local Interpretable Model-agnostic Explanations):

Explains individual predictions by approximating the model locally with interpretable surrogates like linear models.
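The repository's implementation is in xai_lime.py; the core LIME idea can be sketched without the lime package itself: perturb the instance, weight the perturbations by proximity, and fit a weighted linear surrogate whose coefficients serve as local attributions. All names and parameters below are illustrative.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=200, n_features=4, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

def lime_style_explanation(model, x, scale=0.5, n_samples=500, seed=0):
    """Fit a weighted linear surrogate around instance x."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(0, scale, size=(n_samples, x.size))  # perturbations
    preds = model.predict(Z)
    # Exponential kernel: nearby perturbations count more.
    weights = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2 / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_               # local feature attributions

coefs = lime_style_explanation(model, X[0])
```

The surrogate is only valid near x; its coefficients describe the model's local slope, not its global behavior.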

SHAP (SHapley Additive exPlanations):

Assigns feature importance based on cooperative game theory, ensuring consistency and interpretability across all predictions.
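The repository's SHAP wrapper is in xai_shap.py; for a small number of features the exact Shapley values can be computed by brute-force coalition enumeration, which makes the game-theoretic definition concrete. Absent features are replaced by background means here (an interventional choice); model and data are illustrative.

```python
import numpy as np
from itertools import combinations
from math import factorial
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=200, n_features=3, random_state=0)
model = LinearRegression().fit(X, y)
background = X.mean(axis=0)      # reference values for "absent" features

def value(model, x, subset, background):
    """Prediction with features outside `subset` set to background values."""
    z = background.copy()
    z[list(subset)] = x[list(subset)]
    return model.predict(z.reshape(1, -1))[0]

def shapley_values(model, x, background):
    n = x.size
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley kernel weight for a coalition of size |S|.
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (value(model, x, S + (i,), background)
                               - value(model, x, S, background))
    return phi

phi = shapley_values(model, X[0], background)
```

By the efficiency axiom, the attributions sum exactly to the prediction minus the baseline prediction; in practice the shap library's approximations are used, since enumeration is exponential in the number of features.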

Example

Here's an example of how you might use these methods:
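No code accompanies this sentence in the README. A purely hypothetical sketch follows; the class and method names are invented for illustration and are NOT the package's documented API (the real entry points live in src/automotive_regression_xai.py and src/xai_classes/):

```python
# Hypothetical usage only -- names below are illustrative, not the real API.
from automotive_regression_xai import AutomotiveRegressionXai  # hypothetical

xai = AutomotiveRegressionXai(model=my_model, data=my_data)  # hypothetical
xai.run_shap()   # hypothetical method; see xai_classes/xai_shap.py
xai.run_lime()   # hypothetical method; see xai_classes/xai_lime.py
```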

Contributing

The instructions on how to contribute can be found in the file CONTRIBUTING.md in this repository.

(back to top)

License

The code is published under the MIT license. Further information on that can be found in the LICENSE.md file in this repository.

(back to top)
