
[Enhancement] Support dynamic threshold range in eval_hmean #962

Merged · 2 commits · Apr 22, 2022

Conversation

gaotongxiao (Collaborator)

Motivation

The current implementation of eval_hmean only exposes a parameter for the minimum score above which boundaries are kept for evaluation. However, it always evaluates the filtered boundaries on a fixed set of threshold values [0.3, 0.4, ..., 0.9] regardless of the minimum score, which is an inflexible and illogical design. There is also no entry in the config to customize this behavior.

Modification

This PR replaces score_thr with min_score_thr, max_score_thr, and step so that users can configure the search space through these parameters. IcdarDataset.evaluate() is modified to allow all three parameters to be customized through the config file.

For example, by adding the following snippet to the config, one can evaluate the model's output on a list of boundary score thresholds [0.1, 0.2, 0.3, 0.4, 0.5] and take the best score among them.

evaluation = dict(
    interval=100,
    metric='hmean-iou',
    min_score_thr=0.1,
    max_score_thr=0.5,
    step=0.1)
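Under the new parameters, the swept threshold list could be derived roughly as below. This is a hypothetical sketch (the helper name and the rounding details are assumptions, not MMOCR's actual implementation), showing how min_score_thr, max_score_thr, and step map onto the list of evaluated thresholds:

```python
def score_thresholds(min_score_thr=0.3, max_score_thr=0.9, step=0.1):
    """Hypothetical helper: build the list of candidate score thresholds
    swept during hmean-iou evaluation, inclusive of max_score_thr.

    The defaults reproduce the previously hard-coded [0.3, 0.4, ..., 0.9].
    """
    # Compute the step count first and round each value to dodge
    # floating-point drift (0.1 + 0.2 != 0.3 in binary floats).
    n = int(round((max_score_thr - min_score_thr) / step)) + 1
    return [round(min_score_thr + i * step, 10) for i in range(n)]

# The config snippet above (min 0.1, max 0.5, step 0.1) would yield:
print(score_thresholds(0.1, 0.5, 0.1))  # [0.1, 0.2, 0.3, 0.4, 0.5]
```

With the defaults, the function reproduces the old fixed sweep, so existing configs that never set these keys keep their previous behavior.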

BC-breaking (Optional)

None

@Mountchicken (Collaborator) left a comment


  • eval_hmean is also used in TextDetDataset. We'd better modify it too.
  • BTW, one thing that confuses me: I didn't see these score_thr parameters being passed to eval_hmean anywhere in the config files. Do users have to modify the source code to achieve this?

@gaotongxiao (Collaborator, Author) commented Apr 22, 2022

  1. Sounds good.
  2. In fact, all the extra keys in the evaluation field of the config are passed to dataset.evaluate; we just need to make sure dataset.evaluate parses the parameters correctly. See
    eval_cfg = cfg.get('evaluation', {})
    and https://github.com/open-mmlab/mmcv/blob/3a3514a54d59655e4a9ef88b72bb9757783da9db/mmcv/runner/hooks/evaluation.py#L363-L364
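The plumbing described above can be sketched as follows. This is an illustrative toy (the class and key names are simplified assumptions, not MMCV's real EvalHook): the hook pops the keys it consumes itself, and everything left in the evaluation dict is forwarded verbatim as keyword arguments to dataset.evaluate:

```python
class FakeDataset:
    def evaluate(self, results, metric='hmean-iou', **eval_kwargs):
        # A real dataset would compute metrics here; we just echo the
        # forwarded keyword arguments to show the plumbing.
        return dict(metric=metric, **eval_kwargs)

class ToyEvalHook:
    """Simplified stand-in for MMCV's EvalHook (illustration only)."""

    def __init__(self, dataset, **eval_cfg):
        self.dataset = dataset
        # Keys the hook consumes itself are popped off...
        self.interval = eval_cfg.pop('interval', 1)
        # ...and whatever remains is passed straight to dataset.evaluate().
        self.eval_kwargs = eval_cfg

    def evaluate(self, results):
        return self.dataset.evaluate(results, **self.eval_kwargs)

# The `evaluation` dict from the config snippet above:
eval_cfg = dict(interval=100, metric='hmean-iou',
                min_score_thr=0.1, max_score_thr=0.5, step=0.1)
hook = ToyEvalHook(FakeDataset(), **eval_cfg)
print(hook.evaluate(results=[]))
# {'metric': 'hmean-iou', 'min_score_thr': 0.1, 'max_score_thr': 0.5, 'step': 0.1}
```

This is why no change to the runner is needed: as long as IcdarDataset.evaluate accepts the new keyword arguments, setting them in the config is enough.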

@gaotongxiao gaotongxiao merged commit 888f700 into open-mmlab:main Apr 22, 2022
@gaotongxiao gaotongxiao deleted the fix_hmean_iou branch April 22, 2022 09:07
gaotongxiao added a commit to gaotongxiao/mmocr that referenced this pull request Jul 15, 2022
…ab#962)

* [Enhancement] Support dynamic threshold range in eval_hmean

* upgrade textdetdataset, add deprecate warning