Revert MAP implementation to pycocotools backend #1832

Merged — 33 commits merged into master from new_map on Jul 3, 2023
Conversation

SkafteNicki
Member

@SkafteNicki SkafteNicki commented Jun 8, 2023

What does this PR do?

Fixes #1024
Fixes #1164

Reverts the MeanAveragePrecision implementation back to its v0.6 version for two primary reasons:

  • Our current implementation is orders of magnitude slower, and even with the improvements in #1389 (Improve Mean Average Precision performance) it is still slower
  • Nobody on the primary team is actually able to maintain the metric, because we do not have experience with object detection. It is therefore better to rely on pycocotools for the major calculations while we "just" provide an interface (see the sketch below)

The consequence is that pycocotools will be (re-)introduced as a required dependency for the metric.
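For context, the computation the metric delegates to pycocotools boils down to roughly the following (a minimal sketch; coco_gt and coco_dt are assumed to be pre-built COCO objects holding the targets and detections):

from pycocotools.cocoeval import COCOeval

# coco_gt / coco_dt: COCO objects built from the ground-truth and detection dicts (assumed)
coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()    # match detections to ground truth per image and category
coco_eval.accumulate()  # aggregate over IoU thresholds, areas and max detections
coco_eval.summarize()   # print the 12 standard COCO metrics
map_value = coco_eval.stats[0]  # mAP averaged over IoU=0.50:0.95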


Benchmark results below (measured on my laptop)

| Time (s) | This PR | Master | PR #1389 (improves current implementation) |
| --- | --- | --- | --- |
| Total time in init | 0.0007 | 0.0010 | 0.0012 |
| Total time in update | 0.2445 | 0.2472 | 0.2769 |
| Total time in compute | 10.5286 | 324.6941 | 220.8867 |

As has been stated many times, the old implementation is orders of magnitude faster than our current one.

Benchmark code
import time

import torch

try:
    from torchmetrics.detection import MeanAveragePrecision
except ImportError:  # older torchmetrics versions expose the metric as MAP
    from torchmetrics.detection import MAP

    MeanAveragePrecision = MAP

total_time = {}

class UpdateTime:
    """Context manager that accumulates wall-clock time under a given name."""

    def __init__(self, name) -> None:
        self._name = name

    def __enter__(self):
        self._start_time = time.perf_counter()

    def __exit__(self, exc_type, exc_val, exc_tb):
        end_time = time.perf_counter()
        total_time[self._name] = total_time.get(self._name, 0.0) + (end_time - self._start_time)
        return False  # do not suppress exceptions raised inside the block

def generate(n):
    """Generate n random samples with boxes in xyxy format, labels and scores."""
    boxes = torch.rand(n, 4) * 1000
    boxes[:, 2:] += boxes[:, :2]  # ensure x2 >= x1 and y2 >= y1
    labels = torch.randint(0, 10, (n,))
    scores = torch.rand(n)
    return {"boxes": boxes, "labels": labels, "scores": scores}

with UpdateTime("init"):
    metric = MeanAveragePrecision()

for _batch_idx in range(100):
    with UpdateTime("update"):
        detections = [generate(100) for _ in range(10)]
        targets = [generate(10) for _ in range(10)]
        metric.update(detections, targets)

with UpdateTime("compute"):
    metric.compute()

for name, elapsed in total_time.items():  # "elapsed" avoids shadowing the time module
    print(f"Total time in {name}: {elapsed}")
Before submitting
  • Was this discussed/agreed via a Github issue? (no need for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure to update the docs?
  • Did you write any new necessary tests?
PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃


📚 Documentation preview 📚: https://torchmetrics--1832.org.readthedocs.build/en/1832/

@SkafteNicki SkafteNicki added bug / fix Something isn't working enhancement New feature or request test / CI testing or CI labels Jun 8, 2023
@SkafteNicki SkafteNicki added this to the future milestone Jun 8, 2023
@Borda
Member

Borda commented Jun 8, 2023

can we rather preserve both, so we'd have a MAPCoco?

@SkafteNicki
Member Author

can we rather preserve both, so we'd have a MAPCoco?

Yes, we could do that, but then I would prefer we just add a backend="pycocotools"/"pytorch" argument to the class. That said, I am in the process of improving the testing, and if the old implementation does not pass, I would still propose to remove it.
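A rough sketch of how such an argument could look (hypothetical; the names and defaults here are illustrative only, not the actual torchmetrics API):

from torchmetrics import Metric

class MeanAveragePrecision(Metric):
    def __init__(self, backend: str = "pycocotools", **kwargs) -> None:
        super().__init__(**kwargs)
        if backend not in ("pycocotools", "pytorch"):
            raise ValueError(f"Unknown backend: {backend}")
        self.backend = backend  # compute() would dispatch on this flag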

@Borda
Member

Borda commented Jun 8, 2023

I would prefer we just add a backend="pycocotools"/"pytorch" argument to the class

Yes, that is also a good option. I just feel we have invested a lot of effort in our own implementation, so dropping it... :( Also, this would unblock users who expect exact COCO behavior (even if there are some known bugs).

@SkafteNicki
Member Author

I would prefer we just add a backend="pycocotools"/"pytorch" argument to the class

Yes, that is also a good option. I just feel we have invested a lot of effort in our own implementation, so dropping it... :( Also, this would unblock users who expect exact COCO behavior (even if there are some known bugs).

We should not keep it just because we have invested time in it; that is just the sunk cost fallacy. We should keep it because we intend (with help) to make it stable.

@Borda Borda requested a review from a team June 30, 2023 17:49
@mergify mergify bot added the ready label Jul 1, 2023
@Borda Borda requested a review from maximsch2 July 3, 2023 07:40
@Borda Borda enabled auto-merge (squash) July 3, 2023 07:40
@ddelange

ddelange commented Jul 3, 2023

hi 👋 does this PR include breaking API changes?

can we still from torchmetrics.detection.mean_ap import MeanAveragePrecision and the API will be the same as before, just with the pycocotools backend?

@Borda
Member

Borda commented Jul 3, 2023

can we still from torchmetrics.detection.mean_ap import MeanAveragePrecision and the API will be the same as before, just with the pycocotools backend?

yes, it shall be compatible 🦩
cc: @SkafteNicki

@SkafteNicki
Member Author

hi 👋 does this PR include breaking API changes?

can we still from torchmetrics.detection.mean_ap import MeanAveragePrecision and the API will be the same as before, just with the pycocotools backend?

@ddelange, no breaking changes :]

@Borda Borda merged commit 5cc05c3 into master Jul 3, 2023
60 checks passed
@Borda Borda deleted the new_map branch July 3, 2023 17:44
@adamjstewart
Contributor

no breaking changes :]

One breaking change :]

The following code:

class Foo(LightningModule):
    def __init__(self):
        super().__init__()
        self.metric = MeanAveragePrecision()

    def on_validation_epoch_end(self):
        self.log_dict(self.metric.compute())

used to work in 0.11.4. However, this PR adds a new classes key to the metrics dict containing a tensor of all observed classes. log_dict now complains about this entry:

ValueError: `self.log(classes, tensor([0, 1], dtype=torch.int32))` was called, but the tensor must have a single element. You can try doing `self.log(classes, tensor([0, 1], dtype=torch.int32).mean())`

Here, tensor([0, 1]) is the classes value. The current workaround is to delete this key from the dict before logging, but it would be good to do this automatically.
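For reference, the workaround looks like this (a sketch based on the example above):

def on_validation_epoch_end(self):
    metrics = self.metric.compute()
    metrics.pop("classes")  # drop the non-scalar tensor that log_dict rejects
    self.log_dict(metrics)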

@ddelange

ddelange commented Jul 6, 2023

That's not this PR, it's a breaking change that was introduced in v1.0.0.rc0 ref ac64e63

@adamjstewart
Contributor

Good catch, thanks!

@ddelange

ddelange commented Jul 6, 2023

No worries, I happened to know because I hit it in a recent PR during a particularly representative case of dephell.

@SkafteNicki
Member Author

SkafteNicki commented Jul 7, 2023

@adamjstewart @ddelange
Yeah, I was probably too quick to say "no breaking changes" :/
Would you prefer that we introduce an argument include_classes (or something similar) when initializing the class, to enable/disable the inclusion of the classes key?
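If such an argument were added, usage could look like this (hypothetical; the flag does not exist yet):

metric = MeanAveragePrecision(include_classes=False)  # hypothetical flag: drop "classes" from the compute() output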

For now you can also get around by a simple subclass:

from torchmetrics.detection.mean_ap import MeanAveragePrecision as MAP

class MeanAveragePrecision(MAP):
    def compute(self):
        res = super().compute()
        res.pop("classes")  # remove the entry that log_dict cannot handle
        return res

@adamjstewart
Contributor

adamjstewart commented Jul 7, 2023

An argument would be good. I would vote for it to default to False so that the above example works out-of-the-box. I can submit a PR if you'd like.

@SkafteNicki
Member Author

An argument would be good. I would vote for it to default to False so that the above example works out-of-the-box. I can submit a PR if you'd like.

Please feel free to open a PR. I would prefer the default to be True, so that we do not break backwards compatibility once again.

@adamjstewart
Contributor

adamjstewart commented Jul 8, 2023

If it defaults to True, then the above code will continue to not work. The purpose of a PR would be to restore backwards compatibility, not to double down on the change.
