Multimodal input binary classifier with Saliency #723
Comments
Hi @NullCodex, did you try …
Yes, but since the label is not a float, it says it does not require grad.
The label data is, for instance, …
I see. Maybe to narrow down the issue, did you try other Captum algorithms, like IntegratedGradients?
I am pretty sure it's because Captum doesn't support grads with integers. I tried other algorithms as well, and none of them work. Are there any working examples where …
Also, are …
Captum relies on PyTorch's built-in functions to compute the gradient, and it is true that constants do not require a grad.
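To illustrate this point in plain PyTorch (no Captum involved): an integer tensor cannot be marked as requiring gradients, while a float embedding of the same label can. The sizes below are illustrative.

```python
import torch

# Integer tensors cannot carry gradients in PyTorch:
label = torch.tensor([4])              # dtype is torch.int64
try:
    label.requires_grad_(True)
except RuntimeError as e:
    print(e)  # only floating point (and complex) tensors can require gradients

# A float embedding of the same label can:
embed = torch.nn.Embedding(10, 10)     # sizes are illustrative
label_vec = embed(label)               # float tensor
print(label_vec.requires_grad)         # True (grads flow through the embedding weights)
```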
The entire class is below:
Right now, I am hacking it so that I change the class to take only the img in …
What about changing your forward so that it takes …
Hope this helps
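A minimal sketch of this suggestion, with hypothetical layer sizes (a single-channel 32x32 image plus a 10-dim label embedding): embed the integer label outside of forward and pass the float embedding in, so that both inputs can require gradients. The class and its names are illustrative, not the poster's actual model.

```python
import torch
import torch.nn as nn

# Hypothetical model: forward() takes the float label embedding, not the int label.
class MultimodalNet(nn.Module):
    def __init__(self, num_labels=10, emb_dim=10):
        super().__init__()
        self.embed = nn.Embedding(num_labels, emb_dim)
        self.fc = nn.Linear(32 * 32 + emb_dim, 1)

    def forward(self, img, label_vec):
        # img: (N, 1, 32, 32) float; label_vec: (N, emb_dim) float
        x = torch.cat([img.flatten(1), label_vec], dim=1)
        return torch.sigmoid(self.fc(x))

model = MultimodalNet()
img = torch.randn(1, 1, 32, 32, requires_grad=True)
# Embed the label up front; detach so it becomes a leaf tensor that can require grad.
label_vec = model.embed(torch.tensor([4])).detach().requires_grad_(True)
out = model(img, label_vec)  # both inputs are now differentiable
```

With a forward like this, `img` and `label_vec` can both be handed to an attribution method as a tuple of inputs.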
Regarding LRP, the current implementation indeed does not readily define a rule for all torch.nn modules; however, it should be straightforward to define one (see #712). Hope this helps
With using …
I guess the error message is a bit misleading here. Should we change it to something like
f"Module of type {type(layer)} has no rule defined and no default rule exists for this module type. Please, set a rule explicitly for this module and assure that it is appropriate for this type of layer."
(captum/captum/attr/_core/lrp.py, line 283 at 7300cd8)
Thanks for the good suggestion, @nanohanno! Do you want to create a PR for this? I can gladly help get it landed, or I can update the message myself if needed.
Hi @NullCodex, yes, that sounds reasonable: the first 1024 values are attributions of the 32x32 pixels, and the last 10 values are for the embedding of the labels.
@bilalsal how should I provide an explanation of the remaining grad, since there are still 10 values?
The remaining values represent the attribution of the label embedding: each value (e.g., the grad for the 1st channel in the 10-dimensional embedding) indicates how strongly the respective channel in the label embedding contributed to the output. This info could be insightful if you want to shed light on how the learned label embedding behaves. I recommend further experimentation and visualization of the remaining grads to get a closer idea of them and whether they can help your analysis. Hope this helps
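Under those assumptions (1024 pixel attributions followed by 10 embedding attributions), splitting the flattened attribution vector is straightforward; the tensor below is a random stand-in for the actual Saliency output.

```python
import torch

attr = torch.randn(1034)                 # stand-in for the flattened attribution output
img_attr = attr[:1024].reshape(32, 32)   # per-pixel attributions for the 32x32 image
label_attr = attr[1024:]                 # one value per label-embedding channel
print(img_attr.shape, label_attr.shape)  # torch.Size([32, 32]) torch.Size([10])
```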
I am happy to create a PR for it. |
Summary: This PR adds information to an error message that is triggered when neither an explicit LRP rule is defined nor a default rule exists for a module. The current message is quite short and does not tell the user how to solve the problem, as discussed in issue #723. Feel free to propose a different wording.
Pull Request resolved: #727
Reviewed By: aobo-y
Differential Revision: D30045186
Pulled By: vivekmig
fbshipit-source-id: 2dc7f7a29d014da12ebdc409d4abc676d5cbbc34
❓ Questions and Help
Hi Everyone,
Question:
How can I apply saliency to a dataset composed of categorical and image data?
I am somewhat of a beginner with PyTorch, and the available resources are just not clicking with my use case. The ultimate goal is to plot the saliency of a model, but I am stuck on calculating the gradient. Any help or guidance would be much appreciated.
What I've reviewed:
Multimodal_VQA_Captum_Insights tutorial
BERT tutorials
(These resources all use very different data structures (images/sentences) and are confusing for a beginner to translate to a simpler image/categorical dataset.)
My issue
Model
Categorical
The categorical data is just a normal label, such as 4.
What I tried
where input is just a (1, 32, 32) image. I set target=None since it's a binary classifier.
Failure output
One of the differentiated Tensors does not require grad
Since the label is not a float, it does not require grad. Is there a way I can use the saliency method to capture the grads?
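For reference, here is the gradient computation that saliency boils down to, sketched in plain autograd with a hypothetical stand-in model (layer sizes and names are assumptions, not the actual model): gradients of the output with respect to each float input. The integer label must first be mapped to a float tensor, e.g. via nn.Embedding, before it can carry gradients.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the binary classifier: a (1, 32, 32) image plus
# a 10-dim label embedding, concatenated into a single linear layer.
embed = nn.Embedding(10, 10)
fc = nn.Linear(32 * 32 + 10, 1)

def forward(img, label_vec):
    return torch.sigmoid(fc(torch.cat([img.flatten(1), label_vec], dim=1)))

img = torch.randn(1, 1, 32, 32, requires_grad=True)
label_vec = embed(torch.tensor([4])).detach().requires_grad_(True)

# Saliency by hand: |d output / d input| for each differentiable input.
forward(img, label_vec).sum().backward()
img_saliency = img.grad.abs()          # (1, 1, 32, 32) per-pixel saliency
label_saliency = label_vec.grad.abs()  # (1, 10) per-embedding-channel saliency
```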