
[Explainability Evaluation] - Characterization score, and other ways to combine Fid+/- scores #6188

Merged: 17 commits merged into pyg-team:master on Dec 15, 2022

Conversation

seemanne
Contributor

@seemanne seemanne commented Dec 8, 2022

This PR implements the features from #5960

The tests and the math part are done. I'm somewhat lost with the Sphinx documentation and very much open to suggestions for the docstrings.

As #5962 also requires calculating an AUC and probably a weighted harmonic mean for F_1-type scores, these parts have been implemented as separate helper functions.
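For readers following along, a minimal sketch of what such helpers might look like (the names, signatures, and tensor layout here are assumptions for illustration, not the PR's actual code):

```python
import torch
from torch import Tensor


def weighted_harmonic_mean(weights: Tensor, values: Tensor) -> Tensor:
    # Weighted harmonic mean along dim 0:
    #   H = sum(w_i) / sum(w_i / x_i)
    # Broadcasting lets `values` hold one column per explanation.
    return weights.sum(dim=0) / (weights / values).sum(dim=0)


def auc(x: Tensor, y: Tensor) -> Tensor:
    # Area under the curve y(x) via the trapezoidal rule;
    # assumes x is sorted in ascending order.
    return torch.trapezoid(y, x)
```

With equal weights, `weighted_harmonic_mean(torch.tensor([1.0, 1.0]), torch.tensor([0.5, 1.0]))` reduces to the plain harmonic mean, 2 / (1/0.5 + 1/1.0) = 2/3.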

Contributor

@RexYing RexYing left a comment


Thanks! A few comments on clarity.

def get_characterization_score(weights: Tensor,
                               fidelities: Tensor) -> 'Tensor':
    r"""Returns the componentwise characterization score of
    fidelities[0] and fidelities[1]."""
Contributor


It would be nice to add a bit more detail here: do the two rows correspond to fid+ and fid-? Which one is which?
And do the columns correspond to num nodes (explanations)?

Contributor Author


I have added an explanation of which row corresponds to what.
As for the columns, I'm unsure, but shouldn't they be the fidelities of different explanations? As far as I understand it, the two fidelities are each computed as an average over all nodes and serve as a way to gauge the "explanatory power" of different explanations, or am I mistaken here?

torch_geometric/explain/utils/metrics.py (outdated; resolved)
torch_geometric/explain/utils/metrics.py (outdated; resolved)
@codecov

codecov bot commented Dec 9, 2022

Codecov Report

Merging #6188 (c0eb004) into master (01482d4) will increase coverage by 0.01%.
The diff coverage is 100.00%.

❗ Current head c0eb004 differs from pull request most recent head dab8c14. Consider uploading reports for the commit dab8c14 to get more accurate results

@@            Coverage Diff             @@
##           master    #6188      +/-   ##
==========================================
+ Coverage   84.50%   84.51%   +0.01%     
==========================================
  Files         374      376       +2     
  Lines       20889    20905      +16     
==========================================
+ Hits        17652    17668      +16     
  Misses       3237     3237              
Impacted Files                                 Coverage   Δ
torch_geometric/explain/__init__.py            100.00%    <100.00%> (ø)
torch_geometric/explain/metric/__init__.py     100.00%    <100.00%> (ø)
torch_geometric/explain/metric/fidelity.py     100.00%    <100.00%> (ø)


Contributor

@BlazStojanovic BlazStojanovic left a comment


This is great @seemanne, thank you! I'm not sure about the docstrings, but once we get confirmation that they compile, we can merge this 👍🏻

torch_geometric/explain/utils/metrics.py Outdated Show resolved Hide resolved
@seemanne
Contributor Author

@BlazStojanovic The docs compile now, I've added a section about these evaluation metrics in explain.rst. Marked the branch as ready.

@seemanne seemanne marked this pull request as ready for review December 11, 2022 13:27
@seemanne seemanne changed the title from "DRAFT: [Explainability Evaluation] - Characterization score, and other ways to combine Fid+/- scores" to "[Explainability Evaluation] - Characterization score, and other ways to combine Fid+/- scores" Dec 11, 2022
Contributor

@BlazStojanovic BlazStojanovic left a comment


Looks great! We can merge this @rusty1s

@rusty1s
Member

rusty1s commented Dec 12, 2022

Super, will take a final look tomorrow.

@rusty1s rusty1s enabled auto-merge (squash) December 15, 2022 14:41
@rusty1s rusty1s merged commit bc55ff1 into pyg-team:master Dec 15, 2022