This repository contains Jupyter notebooks implementing the code samples from the book *Interpretable AI: Building Explainable Machine Learning Systems* (Manning Publications). Note that the book covers far more content than you will find in these notebooks.
These notebooks use Python 3.7, scikit-learn 0.21.3, and PyTorch 1.4.0. You can install Conda on your operating system by following the instructions on the Conda website. Once it is installed, you can create the conda environment from the environment.yml file as follows.
$> conda env create -f packages/environment.yml
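For reference, a minimal environment file pinning the versions listed above might look like the sketch below. This is illustrative only; the actual file at packages/environment.yml is authoritative and will include additional dependencies used by the notebooks.

```yaml
# Illustrative sketch only -- see packages/environment.yml for the real file.
name: interpretable-ai
channels:
  - pytorch
  - defaults
dependencies:
  - python=3.7
  - scikit-learn=0.21.3
  - pytorch=1.4.0
  - jupyter
```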
The environment name is interpretable-ai, and it can be activated as follows.
$> conda activate interpretable-ai
You are now ready to run all the code in the book in Jupyter. From the repository directory on your machine, run the following command to start the Jupyter web application.
$> jupyter notebook
The Conda package/environment management system has limitations: it does not always work as expected across operating systems, versions of the same operating system, or hardware. If you encounter issues while creating the conda environment described in the previous section, you can use Docker instead. Docker can be installed on your operating system by following the instructions on the Docker website. Once it is installed, build the Docker image by running the following command from the repository directory on your machine.
$> docker build . -t interpretable-ai
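To give a rough idea of what this build step involves, a minimal Dockerfile along the following lines would produce a comparable image. The base image, environment name, and Jupyter flags here are assumptions for illustration; the Dockerfile shipped in the repository is authoritative.

```dockerfile
# Illustrative sketch only -- see the repository's Dockerfile for the real build.
# Assumes a Miniconda base image and the environment file described above.
FROM continuumio/miniconda3

WORKDIR /interpretable-ai
COPY . .

# Create the interpretable-ai conda environment from the environment file
RUN conda env create -f packages/environment.yml

# Expose the Jupyter port and start the notebook server inside the environment
EXPOSE 8888
CMD ["conda", "run", "-n", "interpretable-ai", \
     "jupyter", "notebook", "--ip=0.0.0.0", "--port=8888", "--no-browser", "--allow-root"]
```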
Note that the interpretable-ai tag is used for the Docker image. If the docker build command runs successfully, Docker should print the identifier of the image that was built. You can also view the details of the built image by running the following command.
$> docker images
Run the following command to start a container from the built image and launch the Jupyter web application.
$> docker run -p 8888:8888 interpretable-ai:latest
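The -p 8888:8888 option publishes the container's Jupyter port on the host, so once the server is up you can open the URL that Jupyter prints (it includes an access token) or browse to http://localhost:8888.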
- Chapter 2: White-Box Models
- Chapter 3: Model Agnostic Methods - Global Interpretability
  - Tree Ensembles and Global Interpretability
    - Tree Ensembles (Random Forest)
    - Partial Dependence Plots (PDPs)
    - Feature Interactions
  - Data
  - Models
- Chapter 4: Model Agnostic Methods - Local Interpretability
  - Deep Neural Networks and Local Interpretability
    - Deep Neural Networks (DNNs)
    - Local Interpretable Model-agnostic Explanations (LIME)
    - SHapley Additive exPlanations (SHAP)
    - Anchors
    - Illustration of Activation Functions
  - Data
  - Models
- Chapter 5: Saliency Mapping
  - Convolutional Neural Networks and Visual Attribution
    - Convolutional Neural Networks (CNNs)
    - Visual Attribution Methods
      - Vanilla backpropagation
      - Guided backpropagation
      - Integrated gradients
      - SmoothGrad
      - Grad-CAM
      - Guided Grad-CAM
  - Data
  - Models
- Chapter 6: Understanding Layers and Units
  - Setup: Refer to the README for instructions on how to set up the network dissection framework
  - Results: Refer to the README to download the network dissection results for certain pre-trained models
  - Network Dissection
  - Visualize Network Dissection Results
- Chapter 7: Understanding Semantic Similarity
- Chapter 8: Fairness and Mitigating Bias
- Chapter 9: Path to Explainable AI
- Appendix: PyTorch