Native Amp Support #1336
Labels:
- duplicate: This issue or pull request already exists
- feature: Is an improvement or enhancement
- help wanted: Open to be worked on
Native automatic mixed precision support (`torch.cuda.amp`) is finally merged:

https://pytorch.org/docs/master/amp.html
https://pytorch.org/docs/master/notes/amp_examples.html
Apex Amp has many known pain points (extension builds, forward/backward compatibility, DataParallel support, flaky checkpointing, I don't even know if it can be hacked to handle double backward/gradient penalty, others…). `torch.cuda.amp` repairs all of these, the interface is more flexible and intuitive, and the tighter integration brings more future performance optimizations into scope.

If you want to talk about adding `torch.cuda.amp` to Lightning, with an eye towards it becoming the true source of mixed precision and replacing Apex, message me on PyTorch Slack anytime. I pinged you there as well, but I'm not sure if you monitor it habitually.
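For reference, the core pattern from the linked docs is just an `autocast` context around the forward pass plus a `GradScaler` for the backward/step. The sketch below is illustrative only; the toy model, data, and hyperparameters are assumptions, not Lightning code.

```python
import torch
import torch.nn as nn

# Minimal sketch of the torch.cuda.amp training-loop pattern from the linked docs.
# The model, data, and hyperparameters here are placeholders for illustration.
device = "cuda"
model = nn.Linear(16, 4).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
scaler = torch.cuda.amp.GradScaler()

for step in range(10):
    inputs = torch.randn(32, 16, device=device)
    targets = torch.randn(32, 4, device=device)

    optimizer.zero_grad()
    # Forward pass under autocast: eligible ops run in float16 automatically.
    with torch.cuda.amp.autocast():
        outputs = model(inputs)
        loss = loss_fn(outputs, targets)

    # GradScaler scales the loss to avoid gradient underflow,
    # then unscales and skips the step if inf/NaN gradients are found.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```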