
decouple batch_size to det_batch_size, rec_batch_size and kie_batch_size in MMOCRInferencer #1801

Merged
merged 5 commits into open-mmlab:dev-1.x from hugotong6425:bs_MMOCRInferencer on Mar 24, 2023

Conversation

hugotong6425
Contributor

@hugotong6425 hugotong6425 commented Mar 23, 2023

Motivation

In response to #1799.

Modification

The MMOCRInferencer.__call__ method is modified to provide the flexibility of setting different values for det_batch_size, rec_batch_size, kie_batch_size, and chunk_size. Each of these four parameters overrides batch_size for its own stage when it is not None.

To maintain backward compatibility, the default behavior of MMOCRInferencer.__call__ will remain unchanged.
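
A minimal sketch of the override semantics described above (a hypothetical standalone helper for illustration, not the exact merged code):

```python
from typing import Optional, Tuple


def resolve_batch_sizes(
    batch_size: int = 1,
    det_batch_size: Optional[int] = None,
    rec_batch_size: Optional[int] = None,
    kie_batch_size: Optional[int] = None,
) -> Tuple[int, int, int]:
    """Fall back to the shared batch_size for any stage-specific value
    that is None, preserving the old single-argument behavior."""
    det = det_batch_size if det_batch_size is not None else batch_size
    rec = rec_batch_size if rec_batch_size is not None else batch_size
    kie = kie_batch_size if kie_batch_size is not None else batch_size
    return det, rec, kie


# Default call: every stage inherits batch_size, as before.
assert resolve_batch_sizes(batch_size=4) == (4, 4, 4)
# Decoupled call: recognition gets its own, larger batch.
assert resolve_batch_sizes(batch_size=4, rec_batch_size=16) == (4, 16, 4)
```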

Checklist

Before PR:

  • I have read and followed the workflow indicated in the CONTRIBUTING.md to create this PR.
  • Pre-commit or linting tools indicated in CONTRIBUTING.md are used to fix the potential lint issues.
  • Bug fixes are covered by unit tests, the case that causes the bug should be added in the unit tests.
  • New functionalities are covered by complete unit tests. If not, please add more unit tests to ensure the correctness.
  • The documentation has been modified accordingly, including docstring or example tutorials.

After PR:

  • If the modification has potential influence on downstream or other related projects, this PR should be tested with some of those projects.
  • CLA has been signed and all committers have signed the CLA in this PR.

@CLAassistant

CLAassistant commented Mar 23, 2023

CLA assistant check
All committers have signed the CLA.

@gaotongxiao gaotongxiao changed the base branch from 1.x to dev-1.x March 23, 2023 03:42
@hugotong6425 hugotong6425 changed the title Bs mmocr inferencer decouple batch_size to det_batch_size, rec_batch_size, kie_batch_size and chunk_size in MMOCRInferencer Mar 23, 2023
@codecov

codecov bot commented Mar 23, 2023

Codecov Report

Patch coverage: 35.71% and project coverage change: -0.07% ⚠️

Comparison is base (1a379f2) 89.36% compared to head (1d63475) 89.29%.

❗ Current head 1d63475 differs from the pull request's most recent head da24d25. Consider uploading reports for the commit da24d25 to get more accurate results.

Additional details and impacted files
@@             Coverage Diff             @@
##           dev-1.x    #1801      +/-   ##
===========================================
- Coverage    89.36%   89.29%   -0.07%     
===========================================
  Files          192      192              
  Lines        11279    11291      +12     
  Branches      1596     1602       +6     
===========================================
+ Hits         10079    10082       +3     
- Misses         886      889       +3     
- Partials       314      320       +6     
| Flag      | Coverage Δ                  |
| --------- | --------------------------- |
| unittests | 89.29% <35.71%> (-0.07%) ⬇️ |

Flags with carried forward coverage won't be shown.

| Impacted Files                             | Coverage Δ                  |
| ------------------------------------------ | --------------------------- |
| mmocr/apis/inferencers/mmocr_inferencer.py | 85.62% <35.71%> (-5.16%) ⬇️ |



Collaborator

@gaotongxiao gaotongxiao left a comment


Thanks for your contribution! The PR overall looks good, and it would be even better if you could polish the documentation to make others aware of this change by appending instructions after line 462:

**MMOCRInferencer.\_\_call\_\_()**
| Arguments | Type | Default | Description |
| -------------------- | ----------------------- | ------------ | ------------------------------------------------------------------------------------------------ |
| `inputs` | str/list/tuple/np.array | **required** | It can be a path to an image/a folder, an np array or a list/tuple (with img paths or np arrays) |
| `return_datasamples` | bool | False | Whether to return results as DataSamples. If False, the results will be packed into a dict. |
| `batch_size` | int | 1 | Inference batch size. |
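
For illustration, a hypothetical call using the decoupled arguments this PR adds might look as follows (model names and the input path are placeholders; kie_batch_size would apply only when a KIE model is loaded):

```python
from mmocr.apis import MMOCRInferencer

# Model aliases follow MMOCR's metafiles; weights are fetched automatically.
infer = MMOCRInferencer(det='DBNet', rec='CRNN')

results = infer(
    'demo/images/',       # placeholder folder of input images
    batch_size=4,         # shared default for every stage
    det_batch_size=8,     # override: detection runs 8 images at a time
    rec_batch_size=32,    # override: recognition crops are cheap to batch
)
```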

Comment on lines 261 to 262
chunk_size (int): For chunking inputs. Defaults to None.
If it is None, batch_size will be used as chunk_size.
Collaborator


It's hard to tell the conceptual difference between chunk_size and batch_size when they are presented together. I think keeping only batch_size may be a better choice; simplicity always goes first.

Comment on lines 111 to 113
det_batch_size: int = None,
rec_batch_size: int = None,
kie_batch_size: int = None,
Collaborator


Suggested change
det_batch_size: int = None,
rec_batch_size: int = None,
kie_batch_size: int = None,
det_batch_size: Optional[int] = None,
rec_batch_size: Optional[int] = None,
kie_batch_size: Optional[int] = None,
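
One small follow-up this suggestion implies: `Optional` must be imported from `typing` at the top of mmocr/apis/inferencers/mmocr_inferencer.py, if it is not already:

```python
from typing import Optional
```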

Comment on lines 120 to 124
det_batch_size (int): Batch size for text detection model.
Defaults to None. Overwrite batch_size if it is not None.
rec_batch_size (int): Batch size for text recognition model.
Defaults to None. Overwrite batch_size if it is not None.
kie_batch_size (int): Batch size for KIE model.
Collaborator


Suggested change
det_batch_size (int): Batch size for text detection model.
Defaults to None. Overwrite batch_size if it is not None.
rec_batch_size (int): Batch size for text recognition model.
Defaults to None. Overwrite batch_size if it is not None.
kie_batch_size (int): Batch size for KIE model.
det_batch_size (int, optional): Batch size for text detection model.
Defaults to None. Overwrite batch_size if it is not None.
rec_batch_size (int, optional): Batch size for text recognition model.
Defaults to None. Overwrite batch_size if it is not None.
kie_batch_size (int, optional): Batch size for KIE model.

@hugotong6425 hugotong6425 changed the title decouple batch_size to det_batch_size, rec_batch_size, kie_batch_size and chunk_size in MMOCRInferencer [WIP] decouple batch_size to det_batch_size, rec_batch_size and kie_batch_size in MMOCRInferencer Mar 23, 2023
@hugotong6425 hugotong6425 changed the title [WIP] decouple batch_size to det_batch_size, rec_batch_size and kie_batch_size in MMOCRInferencer decouple batch_size to det_batch_size, rec_batch_size and kie_batch_size in MMOCRInferencer Mar 24, 2023
@hugotong6425
Contributor Author

@gaotongxiao Thanks for the feedback! I updated the code and documentation.

@gaotongxiao
Collaborator

Thanks! LGTM now.

@gaotongxiao gaotongxiao merged commit c886936 into open-mmlab:dev-1.x Mar 24, 2023
@hugotong6425 hugotong6425 deleted the bs_MMOCRInferencer branch March 24, 2023 03:32
@OpenMMLab-Assistant-004

Hi @hugotong6425! First of all, we want to express our gratitude for your significant PR in the mmocr project. Your contribution is highly appreciated, and we are grateful for your efforts in helping improve this open-source project during your personal time. We believe that many developers will benefit from your PR.

We would also like to invite you to join our Special Interest Group (SIG) private channel on Discord, where you can share your experiences and ideas and build connections with like-minded peers. To join the SIG channel, simply message the moderator OpenMMLab on Discord, or briefly share your open-source contributions in the #introductions channel and we will assist you. We look forward to seeing you there! Join us: https://discord.gg/raweFPmdzG

If you have WeChat, you are welcome to join our community there as well. You can add our assistant: openmmlabwx. Please add "mmsig + GitHub ID" as a remark when adding friends. :)
Thank you again for your contribution! ❤
