If you clone the latest apex repo directly, line 19 of `apex/apex/transformer/amp/grad_scaler.py` uses `torch.cuda.amp.GradScaler`, which requires torch 1.6 or newer for `torch.cuda` to have the `amp` module. However, the mega repo requires PyTorch 1.3 or lower.
May I ask which apex version mega is using? How can I install the correct apex version? :D
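For anyone hitting the same thing, here is a minimal sketch (it only assumes PyTorch is installed) that checks whether the running torch exposes `torch.cuda.amp`, which is what that import needs:

```python
# Minimal environment check: torch.cuda.amp only exists on newer PyTorch,
# so the import in apex/apex/transformer/amp/grad_scaler.py fails on 1.3.
import torch

print("torch version:", torch.__version__)
print("has torch.cuda.amp:", hasattr(torch.cuda, "amp"))  # False on torch 1.3, True on 1.6+
```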
As this repo has not been maintained for quite a while, I do not remember which apex version I was using. One solution is to find a version released around the end of 2019 and install it :)
Actually, amp is not used in this project, so another solution is to simply skip installing apex if it gives you problems. Then comment out the few lines of code you find by searching for the word "amp" (mixed precision is not needed) and "half" (no need to cast tensors to half precision); see the sketch below. The INSTALL.md mostly follows the original maskrcnn_benchmark installation guide.
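To make that concrete, the lines to comment out typically look like the ones below. This is only an illustrative sketch with hypothetical variable names; the exact lines in mega differ, so locate them by searching the code for "amp" and "half".

```python
# Illustrative only: typical apex mixed-precision hooks, shown already
# commented out, as suggested above.

# from apex import amp                                                   # amp import: not needed
# model, optimizer = amp.initialize(model, optimizer, opt_level="O1")    # mixed-precision setup: not needed
# images = images.half()                                                 # fp16 cast: keep tensors in fp32
```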
I may update the repo to work with the latest PyTorch version when I have time. Sorry for the inconvenience, and thanks for your interest!
Ohh, thanks for your kind reply and help!
I actually tried checking out the apex repo at commit e3794f422628d453b036f69de476bf16a0a838ac, and this worked fine!
thanks for the great work~