
Implement partial auroc metric #3790

Merged

Conversation

0xRampey
Contributor

@0xRampey 0xRampey commented Oct 2, 2020

What does this PR do?

Working implementation for #3675
Some notes:

  • Wrote very basic tests; I could use some advice on more extensive testing.
  • Commented out the @auc_decorator because the area had to be corrected after calculation. See the inline comments.
  • The code could likely be optimized further to reduce trips between the GPU and CPU; planning to do that once more extensive tests are in place.
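
For context on the "corrected after calculation" note above: when the AUC is integrated only up to a cutoff max_fpr, the raw partial area is conventionally rescaled onto [0.5, 1.0] with the McClish (1989) standardization, which is also what sklearn.metrics.roc_auc_score applies when max_fpr is set. A minimal sketch of that rescaling (the function name is mine, not from this PR):

```python
def mcclish_standardize(partial_auc, max_fpr):
    """Rescale a raw partial AUC, integrated over fpr in [0, max_fpr],
    onto [0.5, 1.0] so that chance maps to 0.5 and a perfect score to 1.0."""
    min_area = 0.5 * max_fpr ** 2  # area under the chance line tpr = fpr
    max_area = max_fpr             # area under the perfect curve tpr = 1
    return 0.5 * (1 + (partial_auc - min_area) / (max_area - min_area))
```

For example, a chance-level classifier has raw partial area 0.5 * max_fpr**2 and standardizes to 0.5, while a perfect classifier has raw partial area max_fpr and standardizes to 1.0.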

Before submitting

  • Was this discussed/approved via a GitHub issue? (no need for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together? Otherwise, we ask you to create a separate PR for every change.
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?
  • Did you verify new and existing tests pass locally with your changes?
  • If you made a notable change (that affects users), did you update the CHANGELOG?

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Ofc ;)

Resources

@pep8speaks

pep8speaks commented Oct 2, 2020

Hello @prampey! Thanks for updating this PR.

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2020-12-29 17:47:04 UTC

@mergify mergify bot requested a review from a team October 2, 2020 11:46
@Borda Borda added feature Is an improvement or enhancement Metrics labels Oct 2, 2020
@edenlightning edenlightning modified the milestones: 0.9.x, 1.0 Oct 4, 2020
@williamFalcon williamFalcon modified the milestones: 1.0, 1.1 Oct 5, 2020
@mergify
Contributor

mergify bot commented Oct 6, 2020

This pull request is now in conflict... :(

@0xRampey
Contributor Author

0xRampey commented Oct 7, 2020

I have added tests across the range (0, 1] for max_fpr. However, the tests are failing because the torch and sklearn results differ by about 1e-3 in float precision. Here's an example:

E       AssertionError: assert False                                                                                                                            
E        +  where False = <built-in method allclose of type object at 0x7fa60c1a3a80>(tensor(0.5120, device='cuda:0'), tensor(0.5122, device='cuda:0'))         
E        +    where <built-in method allclose of type object at 0x7fa60c1a3a80> = torch.allclose                                                                

This error appeared after I changed the test input from
pred = torch.randint(n_cls_pred, (300,), device=device) to pred = torch.rand((300,), device=device).
The AUC metric is usually applied to prediction probabilities rather than predicted labels, hence the need to generate random prediction values between 0 and 1.

I'd appreciate any help on taking this forward :)
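
One way to sanity-check a small discrepancy like this is to compare both libraries against a plain-Python reference. Here is a sketch of a standardized partial AUROC on probability scores; the helper is hypothetical (not the PR's code) but follows the same trapezoidal-rule plus McClish-standardization scheme that sklearn's roc_auc_score(max_fpr=...) uses:

```python
def partial_auroc(y_true, y_score, max_fpr=1.0):
    """Trapezoidal AUROC restricted to fpr <= max_fpr, McClish-standardized
    onto [0.5, 1.0] when max_fpr < 1. Assumes binary labels 0/1."""
    # Build the ROC curve by sweeping thresholds from high to low scores.
    pairs = sorted(zip(y_score, y_true), key=lambda p: -p[0])
    pos = sum(1 for y in y_true if y)
    neg = len(y_true) - pos
    tps = fps = 0
    fpr, tpr = [0.0], [0.0]
    prev_score = None
    for score, label in pairs:
        if prev_score is not None and score != prev_score:
            fpr.append(fps / neg)
            tpr.append(tps / pos)
        if label:
            tps += 1
        else:
            fps += 1
        prev_score = score
    fpr.append(1.0)
    tpr.append(1.0)
    # Integrate with the trapezoidal rule, clipping the curve at max_fpr.
    area = 0.0
    for i in range(1, len(fpr)):
        x0, y0, x1, y1 = fpr[i - 1], tpr[i - 1], fpr[i], tpr[i]
        if x0 >= max_fpr:
            break
        if x1 > max_fpr:  # interpolate the final partial segment
            y1 = y0 + (y1 - y0) * (max_fpr - x0) / (x1 - x0)
            x1 = max_fpr
        area += (x1 - x0) * (y0 + y1) / 2.0
    if max_fpr >= 1.0:
        return area
    min_area = 0.5 * max_fpr ** 2  # chance-level partial area
    return 0.5 * (1 + (area - min_area) / (max_fpr - min_area))
```

Checking both library outputs against a reference like this, with an explicit atol in torch.allclose, can help distinguish a genuine algorithmic difference from accumulated float32 rounding.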

@mergify
Contributor

mergify bot commented Oct 11, 2020

This pull request is now in conflict... :(

@Borda Borda modified the milestones: 1.1, 1.0.x Oct 20, 2020
@edenlightning edenlightning modified the milestones: 1.0.3, 1.1 Oct 20, 2020
@ddrevicky
Contributor

Hey @prampey, the problem was the reorder=True parameter to auc. It uses torch.argsort, which is not a stable sort, so tpr/fpr pairs with tied values can be swapped, and the AUC is then computed over a mis-ordered curve, which yields the wrong value. I tried setting it to False and your tests pass locally :). Good job with the PR.

I think you just need to update the docstring and CHANGELOG now.
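
For anyone hitting the same issue: the trapezoidal rule silently produces the wrong area whenever the (fpr, tpr) points are not monotonically ordered in x, which is exactly what an unstable sort can cause for tied values. A minimal illustration with generic points (not this PR's data):

```python
def trapezoid_auc(points):
    """Signed trapezoidal area under a piecewise-linear curve given as
    (x, y) points, taken in the order provided."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0  # negative when x1 < x0
    return area

# The same four ROC points, correctly ordered vs. with two points switched
# (as an unstable sort of tied thresholds might leave them).
curve = [(0.0, 0.0), (0.2, 0.5), (0.5, 1.0), (1.0, 1.0)]
swapped = [(0.0, 0.0), (0.5, 1.0), (0.2, 0.5), (1.0, 1.0)]
```

trapezoid_auc(curve) gives 0.775, while trapezoid_auc(swapped) gives 0.625: same points, different areas. Keeping the natural threshold-sweep order (reorder=False) or using a stable sort avoids the silent error.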

@mergify mergify bot requested a review from a team October 27, 2020 07:18
@SkafteNicki SkafteNicki linked an issue Nov 2, 2020 that may be closed by this pull request
@SkafteNicki
Member

Hi @prampey, how is it going here?

@0xRampey
Contributor Author

Hi @SkafteNicki , haven't had an opportunity to work on this again. Will hopefully put in some time this week :)

@0xRampey
Contributor Author

Hey @SkafteNicki , sorry I've been hung up with my job and sending college applications. 🙈
Everything seems to be working fine; I added a few more values to test the whole range (0, 1] for max_fpr.

SkafteNicki
SkafteNicki previously approved these changes Dec 14, 2020
Member

@SkafteNicki SkafteNicki left a comment

LGTM

@SkafteNicki SkafteNicki added the ready PRs ready to be merged label Dec 14, 2020
@Borda Borda changed the base branch from master to release/1.2-dev December 14, 2020 17:35
@Borda Borda dismissed SkafteNicki’s stale review December 14, 2020 17:35

The base branch was changed.

justusschock
justusschock previously approved these changes Dec 15, 2020
Contributor

@rohitgr7 rohitgr7 left a comment

just some comments.

pytorch_lightning/metrics/functional/classification.py: 4 review threads (outdated, resolved)
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
@Borda Borda removed the ready PRs ready to be merged label Dec 23, 2020
pytorch_lightning/metrics/functional/classification.py: review thread (outdated, resolved)
tests/metrics/functional/test_classification.py: review thread (outdated, resolved)
SkafteNicki and others added 2 commits December 29, 2020 18:36
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
@SkafteNicki SkafteNicki merged commit 2094633 into Lightning-AI:release/1.2-dev Dec 29, 2020
@0xRampey 0xRampey deleted the metrics/partial_auroc branch December 30, 2020 07:30
Labels
feature Is an improvement or enhancement
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[Metrics] Partial AUROC
10 participants