
[Feature Request] Apex support for mixed precision training #1719

Closed
nijkah opened this issue Feb 13, 2022 · 2 comments
Comments

@nijkah
Contributor

nijkah commented Feb 13, 2022

Describe the feature

Motivation
Many recent research works rely on apex for mixed precision training.

I would like to add this feature to mmcv as a new class, EpochBasedRunnerApex. I am considering a simple implementation that wraps EpochBasedRunner, as in the Swin-Transformer implementation.

Would this be a good addition to mmcv? If so, I will create a PR.

Related resources
https://github.com/SwinTransformer/Swin-Transformer-Object-Detection
https://github.com/facebookresearch/ConvNeXt
https://github.com/microsoft/unilm/tree/master/beit/semantic_segmentation

@nijkah nijkah changed the title Apex support for mixed precision training [Feature Request] Apex support for mixed precision training Feb 13, 2022
@HAOCHENYE
Collaborator

Thanks for the suggestion, but what advantage does apex have over torch.cuda.amp? mmcv already implements mixed precision training via Fp16OptimizerHook.
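For context, native mixed precision in PyTorch needs no extra dependency. Below is a minimal, hedged sketch of a torch.cuda.amp training loop; the model, data, and hyperparameters are placeholders, not mmcv code (mmcv wraps this pattern inside Fp16OptimizerHook rather than exposing a raw loop like this).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder model and data; any model/optimizer pair works the same way.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

use_cuda = torch.cuda.is_available()
device = "cuda" if use_cuda else "cpu"
model.to(device)

# GradScaler rescales the loss to avoid fp16 gradient underflow.
# With enabled=False (e.g. on CPU) it degrades to a plain backward pass.
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

inputs = torch.randn(4, 10, device=device)
targets = torch.randn(4, 1, device=device)

losses = []
for _ in range(5):
    optimizer.zero_grad()
    # autocast runs eligible ops in half precision inside this block.
    with torch.cuda.amp.autocast(enabled=use_cuda):
        loss = loss_fn(model(inputs), targets)
    losses.append(loss.item())
    scaler.scale(loss).backward()   # backward on the scaled loss
    scaler.step(optimizer)          # unscales grads, then optimizer.step()
    scaler.update()                 # adjusts the scale factor for next step
```

This is the same scaler/autocast pattern apex's amp provides, but maintained inside PyTorch itself (available since 1.6).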

@nijkah
Contributor Author

nijkah commented Feb 15, 2022

@HAOCHENYE
According to this link, apex was the first implementation of amp and is now deprecated, so it does not seem necessary to support it. I will check why those works used apex for amp.
