Add new metric d_lambda
#855
Conversation
for more information, see https://pre-commit.ci
…rics into feature/799_DLambda_metric
Note to the reviewers: Working on adding tests and updating doc examples. I will make this PR ready for review very soon.
Hi @stancld @SkafteNicki, I have finished implementing the metric. If you could review the code and suggest a testing strategy, that would be a great help.
for more information, see https://pre-commit.ci
…rics into feature/799_DLambda_metric
For testing, what about comparing to the implementation in https://github.com/andrewekhalel/sewar?
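For intuition about what such a comparison test would check, here is a plain-Python sketch of the D_lambda formula using a simplified *global* UQI (no sliding window, unlike real implementations; all names here are illustrative, not this PR's code):

```python
def uqi_global(x, y):
    """Global Universal Quality Index between two flattened bands.

    Simplified: computed over the whole band instead of a sliding window.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))


def d_lambda(ms, fused, p=1):
    """Spectral Distortion Index: compares the inter-band UQI of the
    multispectral input against that of the fused output."""
    bands = len(ms)
    total = 0.0
    for i in range(bands):
        for j in range(bands):
            if i != j:
                total += abs(uqi_global(fused[i], fused[j]) - uqi_global(ms[i], ms[j])) ** p
    return (total / (bands * (bands - 1))) ** (1.0 / p)


# Identical inputs -> zero spectral distortion:
ms = [[1.0, 2.0, 3.0, 4.0], [2.0, 1.0, 4.0, 3.0]]
print(d_lambda(ms, ms))  # 0.0
```

Note that UQI is invariant to a uniform rescaling of both inputs, so `fused = 0.75 * ms` also yields a D_lambda of (numerically) zero; that invariance is relevant to the near-zero doctest value discussed later in this thread.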
Also remember:
- Add an entry to `CHANGELOG.md`.
- Add a reference in the docs.
Thank you for the review @SkafteNicki :) |
Co-authored-by: Nicki Skafte Detlefsen <skaftenicki@gmail.com>
for more information, see https://pre-commit.ci
@ankitaS11 in that case, we only have the option to write a similar implementation here:
Or maybe we can adjust the implementation of UQI so that both Gaussian and uniform kernels can be used?
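If UQI grew a kernel option, the two window types could be built roughly like this (a sketch only, not the actual torchmetrics API; the `sigma=1.5` default is an assumption borrowed from the usual SSIM setup):

```python
import math


def uniform_kernel(size):
    # Box window: every tap has the same weight; the kernel sums to 1.
    w = 1.0 / (size * size)
    return [[w] * size for _ in range(size)]


def gaussian_kernel(size, sigma=1.5):
    # Separable 2D Gaussian window, normalized so the kernel sums to 1.
    center = (size - 1) / 2
    g = [math.exp(-((i - center) ** 2) / (2 * sigma ** 2)) for i in range(size)]
    s = sum(g)
    g = [v / s for v in g]
    return [[a * b for b in g] for a in g]
```

A hypothetical `kernel` argument on the UQI function could then select between the two, leaving the rest of the sliding-window computation unchanged.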
Codecov Report

```diff
@@           Coverage Diff            @@
##           master     #855    +/-  ##
=======================================
  Coverage      95%      95%
=======================================
  Files         167      169      +2
  Lines        6960     7024     +64
=======================================
+ Hits         6601     6664     +63
- Misses        359      360      +1
```
Hi @ankitaS11,
LGTM! :]
cc: @SkafteNicki @justusschock Just one thing: don't we want to use `preds`, `target` naming instead of `ms`, `fused`, and just reference these names in the docstrings?
@stancld, right. Totally missed that. I think we should do that for consistency :)
```python
ms: Tensor,
fused: Tensor,
```
Suggested change:

```diff
-ms: Tensor,
-fused: Tensor,
+preds: Tensor,
+target: Tensor,
```
@ankitaS11 Please use `preds`, `target` naming for variables to stay consistent with the rest of the torchmetrics package :]
I have a question regarding this, @stancld: since this metric (`D_LAMBDA`) is a no-reference image metric, we don't have a `target` per se... Do you still think that renaming to `preds`/`target` is a good idea?
I believe it's a little confusing, but I also agree that we should stay consistent, so I don't have a strong opinion here; if you say so, I'm happy to rename it for now. Maybe in the future we can think of more generic names for all the metrics in the torchmetrics API. Just sharing my thoughts :)
@ankitaS11 I would lean towards having `preds`, `target` for the function call, with a more detailed description referencing `ms`, `fused` in the accompanying docstring :]
Co-authored-by: Daniel Stancl <46073029+stancld@users.noreply.github.com>
Doc tests are failing; can someone help with my comment?
```python
>>> import torch
>>> _ = torch.manual_seed(42)
>>> ms = torch.rand([16, 3, 16, 16])
>>> fused = ms * 0.75
>>> ms, target = _d_lambda_update(ms, fused)
>>> _d_lambda_compute(ms, fused)
tensor(3.4769e-08)
```
Is there any way to skip the doc tests? In `uqi`, a similar test was passing because we had a constant distortion factor of 0.75 (and it's a full-reference metric, so it gives the same result even without `torch.manual_seed`). `torch.manual_seed` is probably not solving the problem here, as the test fails every time with a different expected value.
Let's use the `manual_seed`.
@Borda, I just checked the CI; it seems to be failing for multiple environments with different values (`Got: some random value`) even after using `manual_seed`.
E.g.: https://github.com/PyTorchLightning/metrics/runs/5382467199?check_suite_focus=true, https://github.com/PyTorchLightning/metrics/runs/5382472779?check_suite_focus=true
@ankitaS11 In the previous scenario, it seems that because `ms` and `fused` were too similar, the value of this metric was really close to 0, and there was a discrepancy at the 8th decimal place (the first non-zero decimal) across some OS/torch versions. I changed `fused = ms * 0.75` to `fused = torch.rand([16, 3, 16, 16])` so that the two tensors won't be too similar. It looks like everything works fine now :]
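The underlying pitfall is generic: when a metric's true value is approximately 0, an exact-match doctest output is pure floating-point noise. A tolerance check in the style of `torch.allclose` (sketched here in plain Python) shows how small the disputed value really is:

```python
def allclose_scalar(a, b, rtol=1e-5, atol=1e-8):
    # Same tolerance rule used by torch.allclose / numpy.isclose:
    # |a - b| <= atol + rtol * |b|
    return abs(a - b) <= atol + rtol * abs(b)


# The flaky doctest printed ~3.4769e-08: rounding noise around zero.
noise = 3.4769e-08
assert not allclose_scalar(noise, 0.0)           # even the default atol=1e-8 rejects it
assert allclose_scalar(noise, 0.0, atol=1e-6)    # a looser atol absorbs it
```

Choosing doctest inputs whose expected value sits well away from zero, as done here with two independent random tensors, sidesteps the issue entirely.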
Hi, I accidentally messed up a few things in my fork and deleted it :( Sorry, I will create a new one immediately; I hope that's okay.
What does this PR do?
Adds New Image Metric - D_Lambda (Spectral Distortion Index)
Part of #799
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃