Hi,
Firstly, thanks for the repository!
As far as we understand, IMPALA can be distributed across more than 2 GPUs, but the example in the repo uses at most 2. We have access to more GPUs on a single machine and want to use all of them to maximize throughput. What would be the best way to do this (more learners, etc.), and what would we have to add or change in the code?
Thanks for opening the issue, but I feel this might not be the right place for it. Might I suggest opening it on the TorchBeast repo? As such, I'm going to close this issue and any further ones about TorchBeast.
(FWIW though, PolyBeast by default uses 2 GPUs: one for learning and one for acting. I believe it can be configured to use more GPUs for acting, but at most one for learning. With NLE, the bottleneck is often the learning GPU/batch size, not the acting GPU.)
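For anyone landing here later, below is a minimal sketch of the device split described above: one learner GPU doing gradient updates, any remaining GPUs holding inference-only copies of the model that are periodically refreshed from the learner. This is not the actual TorchBeast/PolyBeast API; the model, flags, and sync loop are all illustrative placeholders, and the real code moves batches between actor and learner devices rather than generating random data.

```python
# Illustrative sketch only (NOT the PolyBeast API): one learner GPU, the rest act.
import torch
import torch.nn as nn


class TinyPolicy(nn.Module):
    """Stand-in policy network; the real NetHack model is far larger."""

    def __init__(self, obs_dim: int = 32, num_actions: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, num_actions)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def main() -> None:
    num_gpus = torch.cuda.device_count()
    learner_device = torch.device("cuda:0" if num_gpus > 0 else "cpu")
    # Remaining GPUs serve inference only; with 0 or 1 GPUs everything shares one device.
    actor_devices = [torch.device(f"cuda:{i}") for i in range(1, num_gpus)] or [learner_device]

    learner_model = TinyPolicy().to(learner_device)
    actor_models = [TinyPolicy().to(d) for d in actor_devices]
    optimizer = torch.optim.SGD(learner_model.parameters(), lr=1e-3)

    for step in range(3):  # stand-in for the actual training loop
        # Acting: each copy runs inference on its own GPU.
        for model, device in zip(actor_models, actor_devices):
            obs = torch.randn(16, 32, device=device)
            with torch.no_grad():
                _logits = model(obs)

        # Learning: the gradient step happens only on the single learner GPU.
        batch = torch.randn(64, 32, device=learner_device)
        loss = learner_model(batch).pow(2).mean()  # dummy loss for illustration
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

        # Periodically push the learner's weights out to every acting copy.
        state = learner_model.state_dict()
        for model in actor_models:
            model.load_state_dict(state)


if __name__ == "__main__":
    main()
```

The point of the pattern is that adding GPUs only scales the acting side; since (as noted above) the learning GPU and batch size are usually the bottleneck with NLE, extra acting GPUs may not buy much throughput.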