How to use the Image explainer #244

Open · SelComputas opened this issue May 25, 2020 · 3 comments
Labels
documentation (Improvements or additions to documentation), enhancement (New feature or request)

Comments

@SelComputas

SelComputas commented May 25, 2020

Describe the bug

I can't work out how to run my CNN model through this tool in order to learn which pixels are the essential ones for its predictions.

To Reproduce
Steps to reproduce the behavior:

  1. Follow your "Getting Started"
  2. Open "notebooks/explain-multiclass-classification-local.ipynb"
  3. Look at the image;
    [screenshot: diagram in the notebook listing the available explainers, including an "Image Kernel"]
  4. Try to understand how one is supposed to use the Image Kernel mentioned in the image.

Expected behavior
I expect a clear and concise way of using your tool to obtain an explanation of what my CNN model is looking at when making predictions.


Desktop (please complete the following information):

  • OS: Windows
  • Browser: Chrome
  • Version

Additional context
I feel like I am just missing the part of your README files where you explain how to use the Image Explainer mentioned in the image above.

I should also note that I have both a .pb (TensorFlow) model and an .onnx model.

@imatiach-msft
Collaborator

imatiach-msft commented May 26, 2020

@SelComputas There is currently an image explainer in the contrib package azureml-contrib-explain-model, but I wouldn't recommend it, as there is no visualization dashboard for it yet. We have the https://github.com/interpretml/interpret-text repository for text interpretability, and logically we would like to add a repository for images eventually, but unfortunately we don't have one yet. Sorry, we should update that image in the notebooks, as it isn't accurate anymore: we open-sourced parts of a previous package as this package, which has been made an extension to the interpret package.

@imatiach-msft added the documentation (Improvements or additions to documentation) and enhancement (New feature or request) labels on May 26, 2020
@pk2005

pk2005 commented Apr 25, 2021

Hi! Is there any update on this? I have a similar issue and wanted to use InterpretML for my CNN. Thanks!

@imatiach-msft
Collaborator

imatiach-msft commented Apr 26, 2021

@pk2005 Currently we don't have image support yet, sorry. However, there is a hierarchical image explainer available in the SHAP GitHub repository (https://github.com/slundberg/shap) which works well for images. If you are using PyTorch, Captum also has great image explainers: https://github.com/pytorch/captum. I'm not sure what is available for TensorFlow; with a quick search I found this example using integrated gradients, though there may be better toolkits available:
https://www.tensorflow.org/tutorials/interpretability/integrated_gradients
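
To make the SHAP route concrete, here is a minimal sketch of the hierarchical (Partition) image explainer, assuming a recent shap release with the unified shap.Explainer API and shap.maskers.Image. The toy predict function, X, and class_names are placeholders for your own CNN and data, not anything prescribed by SHAP:

```python
import numpy as np
import shap

# Toy stand-in for a CNN: any callable mapping a batch of HxWxC images
# to per-class scores will do; substitute your own model here.
rng = np.random.default_rng(0)
W = rng.normal(size=(32 * 32 * 3, 10))

def predict(images):
    return images.reshape(len(images), -1) @ W

X = rng.uniform(size=(4, 32, 32, 3))
class_names = [f"class_{i}" for i in range(10)]

# The Image masker is what makes the explainer perturb contiguous image
# regions hierarchically rather than individual pixels.
masker = shap.maskers.Image("blur(16,16)", X[0].shape)
explainer = shap.Explainer(predict, masker, output_names=class_names)

# Higher max_evals -> finer-grained attributions at higher cost.
shap_values = explainer(X[:1], max_evals=300, batch_size=50)
shap.image_plot(shap_values)  # overlays per-pixel attributions on the input
```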

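For the Captum route, a minimal PyTorch sketch using IntegratedGradients; the tiny Sequential network below is just a stand-in for a trained CNN:

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Tiny stand-in CNN; substitute your trained model.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

images = torch.rand(1, 3, 32, 32)
ig = IntegratedGradients(model)
# Attribute w.r.t. the predicted class; output shape matches the input,
# giving one attribution value per pixel/channel.
target = model(images).argmax(dim=1)
attributions = ig.attribute(images, target=target, n_steps=50)
print(attributions.shape)  # torch.Size([1, 3, 32, 32])
```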
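
And for TensorFlow, the tutorial linked above implements integrated gradients by hand; the core idea fits in a short function like the following sketch (the function name and defaults are mine, not from the tutorial):

```python
import tensorflow as tf

def integrated_gradients(model, image, target_class, baseline=None, steps=50):
    """Approximate integrated gradients for one HxWxC image tensor."""
    if baseline is None:
        baseline = tf.zeros_like(image)  # black-image baseline
    # Interpolate along the straight path from baseline to input.
    alphas = tf.linspace(0.0, 1.0, steps + 1)
    path = baseline[None] + alphas[:, None, None, None] * (image - baseline)[None]
    with tf.GradientTape() as tape:
        tape.watch(path)
        scores = model(path)[:, target_class]
    grads = tape.gradient(scores, path)
    # Trapezoidal approximation of the path integral of the gradients.
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    return (image - baseline) * avg_grads

# Demo with a toy model; swap in your own loaded Keras / .pb CNN.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
img = tf.random.uniform((32, 32, 3))
attr = integrated_gradients(model, img, target_class=0)
print(attr.shape)  # (32, 32, 3), one attribution per input pixel/channel
```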