
Add new metric d_lambda #855

Closed
wants to merge 33 commits into from

Conversation

ankitaS11
Contributor

@ankitaS11 ankitaS11 commented Feb 22, 2022

What does this PR do?

Adds New Image Metric - D_Lambda (Spectral Distortion Index)

Part of #799
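
For reference, the Spectral Distortion Index (D_lambda) from Alparone et al. is commonly written as follows, where Q is the Universal Quality Index (UQI), L is the number of spectral bands, G-hat the fused image, and G the original multispectral image. This is a sketch of the paper's formula, not necessarily this PR's exact code:

```latex
D_\lambda = \sqrt[p]{\frac{1}{L(L-1)} \sum_{l=1}^{L} \sum_{\substack{r=1 \\ r \neq l}}^{L} \left| Q(\hat{G}_l, \hat{G}_r) - Q(G_l, G_r) \right|^p}
```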

Before submitting

  • Was this discussed/approved via a GitHub issue? (not needed for typo and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure to update the docs?
  • Did you write any new necessary tests?

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

@ankitaS11 ankitaS11 marked this pull request as draft February 22, 2022 18:27
@ankitaS11
Contributor Author

Note to the reviewers: Working on adding tests and updating doc examples. I will make this PR ready for review very soon.

@stancld stancld mentioned this pull request Feb 25, 2022
@ankitaS11
Contributor Author

Hi @stancld @SkafteNicki, I have finished implementing d_lambda, but I need help with testing it because:

  1. UQI is used inside d_lambda, and sewar uses a uniform filter in its UQI while we use a Gaussian filter (the paper mentions a Gaussian distribution).
  2. I couldn't find any reference implementation of d_lambda that matches the formula in the paper.

If you could review the code and suggest a testing strategy, that would be a great help.
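
To illustrate the filter mismatch described in point 1 (an illustration using scipy, not this PR's code): local statistics computed with a uniform (box) window generally differ from those computed with a Gaussian window, so a UQI built on one filter cannot be expected to numerically match a UQI built on the other.

```python
# Illustration only: local means from a box window vs. a Gaussian window
# disagree, so the downstream UQI values disagree as well.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

rng = np.random.default_rng(0)
img = rng.random((32, 32))

mu_uniform = uniform_filter(img, size=11)   # box window, as in sewar's UQI
mu_gauss = gaussian_filter(img, sigma=1.5)  # Gaussian window, as in this PR

# The two local-mean maps differ everywhere noise varies.
print(np.abs(mu_uniform - mu_gauss).max())
```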

Member

@SkafteNicki SkafteNicki left a comment

For testing, what about comparing to the implementation in https://github.com/andrewekhalel/sewar?

Also remember:

  • Add an entry to Changelog.md.
  • Add a reference in the docs.

torchmetrics/image/d_lambda.py (review threads resolved)
torchmetrics/functional/image/d_lambda.py (review threads resolved)
@ankitaS11
Contributor Author

For testing, what about comparing to the implementation in https://github.com/andrewekhalel/sewar?

Also remember:

  • Add an entry to Changelog.md.
  • Add a reference in the docs.

Thank you for the review @SkafteNicki :)
sewar uses a uniform filter in its UQI, while we have used a Gaussian filter (as the reference research paper mentions a Gaussian distribution), so I think we cannot compare our implementation with sewar directly.

ankitaS11 and others added 2 commits February 25, 2022 19:57
Co-authored-by: Nicki Skafte Detlefsen <skaftenicki@gmail.com>
@SkafteNicki
Member

@ankitaS11 in that case, our only option is to write a similar implementation here:
https://github.com/PyTorchLightning/metrics/blob/master/tests/helpers/non_sklearn_metrics.py
written in numpy, which we can compare against. It is not optimal, but that is just how it is with some of the newer metrics proposed in research papers.
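
Such a numpy reference could look roughly like this. This is a sketch with a simplified, global (windowless) UQI; the names `_uqi_global` and `d_lambda_ref` are hypothetical, and a real test helper would need a windowed UQI to match the metric exactly:

```python
import numpy as np

def _uqi_global(x: np.ndarray, y: np.ndarray) -> float:
    """Universal Quality Index over the whole image (no sliding window)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2)))

def d_lambda_ref(ms: np.ndarray, fused: np.ndarray, p: int = 1) -> float:
    """Spectral Distortion Index over band pairs; arrays are (bands, H, W)."""
    n_bands = ms.shape[0]
    diffs = []
    for i in range(n_bands):
        for j in range(n_bands):
            if i == j:
                continue
            # Compare inter-band similarity in the fused vs. original image.
            diffs.append(abs(_uqi_global(fused[i], fused[j]) - _uqi_global(ms[i], ms[j])) ** p)
    return float((sum(diffs) / (n_bands * (n_bands - 1))) ** (1 / p))

rng = np.random.default_rng(42)
ms = rng.random((3, 16, 16))
fused = rng.random((3, 16, 16))
print(d_lambda_ref(ms, fused))
```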

@stancld
Contributor

stancld commented Feb 25, 2022

@ankitaS11 in that case, our only option is to write a similar implementation here: https://github.com/PyTorchLightning/metrics/blob/master/tests/helpers/non_sklearn_metrics.py written in numpy, which we can compare against. It is not optimal, but that is just how it is with some of the newer metrics proposed in research papers.

Or maybe we can adjust the implementation of UQI so that both Gaussian and uniform kernel can be used?

@SkafteNicki SkafteNicki added this to the v0.8 milestone Mar 1, 2022
@codecov

codecov bot commented Mar 1, 2022

Codecov Report

Merging #855 (b3e466b) into master (3aba057) will increase coverage by 0%.
The diff coverage is 98%.

@@          Coverage Diff          @@
##           master   #855   +/-   ##
=====================================
  Coverage      95%    95%           
=====================================
  Files         167    169    +2     
  Lines        6960   7024   +64     
=====================================
+ Hits         6601   6664   +63     
- Misses        359    360    +1     

@SkafteNicki
Member

Hi @ankitaS11,
Core part of the code looks good to me. I added some remaining things, trying to get it ready for merging.
Thanks :]

Contributor

@stancld stancld left a comment

LGTM! :]

cc: @SkafteNicki @justusschock Just one thing: don't we want to use preds, target naming instead of ms, fused, and just reference these names in the docstrings?

@justusschock
Copy link
Member

@stancld , right. Totally missed that. I think we should do that for consistency :)

Comment on lines +97 to +98
ms: Tensor,
fused: Tensor,
Contributor

Suggested change
ms: Tensor,
fused: Tensor,
preds: Tensor,
target: Tensor,

@ankitaS11 Please use preds, target naming for the variables to stay consistent with the rest of the torchmetrics package :]

Contributor Author

I have a question regarding this, @stancld: since this metric (D_LAMBDA) is a no-reference image metric, we don't have a target per se. Do you still think that renaming it to preds/target is a good idea?

I believe it's a little confusing, but I also agree that we should stay consistent, so I don't have a strong opinion here; if you prefer, I'm happy to rename it for now. Maybe in the future we can think of more generic names for all the metrics in the torchmetrics API. Just sharing my thoughts :)

cc: @SkafteNicki @justusschock

Contributor

@ankitaS11 I would lean towards having preds, target for the function call, with a more detailed description referencing ms, fused in the accompanying docstring :]

ankitaS11 and others added 2 commits March 1, 2022 23:47
Co-authored-by: Daniel Stancl <46073029+stancld@users.noreply.github.com>
Contributor Author

@ankitaS11 ankitaS11 left a comment

Doc tests are failing; can someone help with my comment?

Comment on lines 66 to 72
>>> import torch
>>> _ = torch.manual_seed(42)
>>> ms = torch.rand([16, 3, 16, 16])
>>> fused = ms * 0.75
>>> ms, target = _d_lambda_update(ms, fused)
>>> _d_lambda_compute(ms, fused)
tensor(3.4769e-08)
Contributor Author

Is there any way to skip the doc tests? In UQI, a similar test was passing because we had a constant distortion factor of 0.75 (and since it is a full-reference metric, it gives the same result even without torch.manual_seed). torch.manual_seed is probably not solving the problem here, as the test fails every time with a different expected value.

Member

Let's use the manual_seed.

Contributor Author

@Borda, I just checked the CI; it seems to be failing for multiple environments, each showing a different value after "Got:", even after using manual_seed.

Eg: https://github.com/PyTorchLightning/metrics/runs/5382467199?check_suite_focus=true, https://github.com/PyTorchLightning/metrics/runs/5382472779?check_suite_focus=true

Contributor

@stancld stancld Mar 3, 2022

@ankitaS11 In the previous scenario, it seems that because ms and fused were too similar, the value of this metric was really close to 0, and there was a discrepancy at the 8th decimal place (the first non-zero digit) across some OS/torch versions. I changed fused = ms * 0.75 to fused = torch.rand([16, 3, 16, 16]) so that the two tensors won't be too similar. It looks like everything works fine now :]
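
The amended doctest setup described above would look like the following; the printed metric value is omitted here, since the exact float was the point of contention:

```python
import torch

# Setup after the fix: `fused` is drawn independently rather than being
# a scaled copy of `ms`, so the two tensors differ substantially and the
# metric is no longer vanishingly close to zero.
_ = torch.manual_seed(42)
ms = torch.rand([16, 3, 16, 16])
fused = torch.rand([16, 3, 16, 16])

# For two independent uniform samples, the mean absolute difference is
# about 1/3, far from the near-zero separation of `ms` vs `ms * 0.75`.
print((ms - fused).abs().mean().item())
```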

@mergify mergify bot added the ready label Mar 3, 2022
@stancld stancld mentioned this pull request Mar 4, 2022
@ankitaS11
Contributor Author

Hi, I accidentally messed up a few things in my fork and deleted it :( Sorry, I will create a new one immediately. I hope that's okay.

@ankitaS11 ankitaS11 mentioned this pull request Mar 6, 2022
@Borda Borda added this to the v0.8 milestone May 5, 2022
This pull request was closed.
5 participants