Knowing What and Why? - Explaining Image Classifier Predictions

Description

As Computer Vision systems take on increasingly high-stakes responsibilities, it is becoming clear that we must provide not only predictions but also explanations of what influenced a model's decision. In this post, I compared and benchmarked the most commonly used libraries for explaining model predictions in the field of Image Classification - Eli5, LIME, and SHAP. I investigated the algorithms they leverage and compared the efficiency and quality of the explanations they provide.

Hit the ground running

Via Conda

# setup conda environment & install all required packages
conda env create -f environment.yml
# activate conda environment
conda activate ExplainingImageClassifiers

Via Virtualenv

# set up python environment
apt-get install python3-venv
python3 -m venv .env
# activate python environment
source .env/bin/activate
# install all required packages
pip install -r requirements.txt

Download COCO Dataset

cd 01_coco_res_net
sh get_coco_dataset_sample.sh

ELI5

Full code example

The first library we will look into is Eli5 - a simple but reliable tool designed to visualize, inspect, and debug ML models. Among other things, the library can interpret the predictions of Image Classifiers written in Keras. To do so, Eli5 leverages the Gradient-weighted Class Activation Mapping (Grad-CAM) algorithm. It is worth noting that this is not a general-purpose approach: it applies only to CNN-based models.

Figure 1. Explanations provided by ELI5
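
Below is a minimal sketch of how such a Grad-CAM explanation can be generated with Eli5. The pretrained Keras CNN (MobileNetV2) and the image file cat.jpg are illustrative assumptions, not the exact setup used in the notebook.

import numpy as np
import eli5
from keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input
from keras.preprocessing import image

# any Keras CNN classifier works; MobileNetV2 is just a stand-in
model = MobileNetV2(weights='imagenet')

# load a single image and turn it into a (1, 224, 224, 3) batch
img = image.load_img('cat.jpg', target_size=(224, 224))
doc = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# renders the Grad-CAM heatmap on top of the image (in a Jupyter notebook)
eli5.show_prediction(model, doc)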

LIME

Full code example

Why Should I Trust You?: Explaining the Predictions of Any Classifier is the article that underlies an entire branch of research aimed at explaining ML models. The ideas presented in this paper became the foundation of the most popular interpretation library - Local Interpretable Model-agnostic Explanations (LIME). The algorithm is completely different from Grad-CAM: it tries to understand the model by perturbing the input data and observing how these changes affect the predictions.

Figure 2. Explanations provided by LIME
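
A minimal sketch of the same idea with LIME: the model, the image path, and the sampling parameters are illustrative assumptions. LIME only needs a black-box prediction function, which is what makes it model-agnostic.

import numpy as np
from keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input
from keras.preprocessing import image
from lime import lime_image
from skimage.segmentation import mark_boundaries

model = MobileNetV2(weights='imagenet')
img = image.img_to_array(image.load_img('cat.jpg', target_size=(224, 224)))

# LIME only needs a function that maps a batch of images to class probabilities
def predict_fn(images):
    return model.predict(preprocess_input(np.array(images)))

explainer = lime_image.LimeImageExplainer()
# perturb the image num_samples times and fit a local surrogate model
explanation = explainer.explain_instance(
    img, predict_fn, top_labels=5, hide_color=0, num_samples=1000
)
# keep the 5 superpixels that contributed most to the top predicted class
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(temp / 255.0, mask)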

SHAP

Full code example

SHapley Additive exPlanations (SHAP) and LIME are quite similar - both are additive, model-agnostic methods for explaining individual predictions. SHAP, however, explains a model's prediction for a given input by computing the contribution of each feature to that prediction. To achieve this, SHAP uses Shapley values, which originate in game theory.

Figure 3. Explanations provided by SHAP
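
Finally, a minimal SHAP sketch using GradientExplainer; the random background batch and the image loading are again illustrative assumptions rather than the exact setup used in the notebook (in practice, a small sample of real training images makes a better background than noise).

import numpy as np
import shap
from keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input
from keras.preprocessing import image

model = MobileNetV2(weights='imagenet')

# image to explain, shape (1, 224, 224, 3)
img = image.img_to_array(image.load_img('cat.jpg', target_size=(224, 224)))
to_explain = preprocess_input(np.expand_dims(img, axis=0))

# background samples approximate the "missing feature" baseline
background = preprocess_input(np.random.rand(10, 224, 224, 3).astype(np.float32) * 255)

explainer = shap.GradientExplainer(model, background)
# per-pixel contributions of the input to each predicted class
shap_values = explainer.shap_values(to_explain)
shap.image_plot(shap_values, to_explain)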