I use 2 RTX 3090s for Block-NeRF dataset training, and I find that it runs out of memory in the MoE module. Is that normal?
Although I decrease the batch size, even to batch_size=2312, it still runs out of memory.
My scripts are like: