The Apex installation command pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./ failed for me on Windows, so I installed it from https://anaconda.org/conda-forge/nvidia-apex instead. With the --fp16 True flag, training was far slower than with fp16 disabled. I tested this on pytorch 1.6 and stylegan2-pytorch 1.2.6.
I came across this issue, which mentions that torch.cuda.amp fixes many issues in apex.amp and has several other advantages, including Windows support. Please add support if possible.
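For reference, a minimal sketch of the torch.cuda.amp training-loop pattern being requested (available since pytorch 1.6). The model, optimizer, loss, and data below are stand-in placeholders, not the actual stylegan2-pytorch training code; the enabled flag is used so the sketch also runs on a machine without CUDA:

import torch
from torch import nn
from torch.cuda.amp import autocast, GradScaler

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"

# Placeholder network and optimizer standing in for the real generator/discriminator.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
loss_fn = nn.MSELoss()

# GradScaler handles dynamic loss scaling for fp16 gradients.
scaler = GradScaler(enabled=use_amp)

for step in range(10):
    latents = torch.randn(8, 512, device=device)   # dummy batch
    targets = torch.zeros(8, 1, device=device)

    optimizer.zero_grad()
    # The forward pass runs in mixed precision inside the autocast context.
    with autocast(enabled=use_amp):
        out = model(latents)
        loss = loss_fn(out, targets)

    # Scale the loss before backward, then step and update through the scaler.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()

Unlike apex.amp, this needs no extension build step, which is what makes it attractive on Windows.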
Closing this issue, as amp mainly speeds up newer Nvidia GPUs with compute capability 7.0+. I have also read posts and articles suggesting that PyTorch amp will not provide much benefit from mixed precision on Nvidia 10-series GPUs, since they lack tensor cores.
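For anyone unsure whether their GPU falls in that category, here is a quick check using standard PyTorch calls (a sketch; the 7.0 threshold corresponds to the Volta generation, where tensor cores first appeared):

import torch

if torch.cuda.is_available():
    # get_device_capability returns (major, minor) compute capability.
    major, minor = torch.cuda.get_device_capability(0)
    name = torch.cuda.get_device_name(0)
    print(f"{name}: compute capability {major}.{minor}")
    if major >= 7:
        print("Tensor cores present; amp should give a noticeable speedup.")
    else:
        print("No tensor cores; amp gains will likely be limited.")
else:
    print("No CUDA device detected.")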