This repository has been archived by the owner on May 6, 2024. It is now read-only.

Using more than 2 GPUs with Polybeast #213

Closed
ege-k opened this issue Jul 14, 2021 · 2 comments

Comments

@ege-k

ege-k commented Jul 14, 2021

Hi,
Firstly, thanks for the repository!
As far as we understand, IMPALA can be distributed across more than 2 GPUs, but the example in the repo uses at most 2. We have access to more GPUs on a single machine and want to use all of them to get maximal throughput. What would be the best way to do this (more learners, etc.), and what would we need to add or change in the code?

@cdmatters
Contributor

Hi there!

Thanks for opening the issue, but I feel this might not be the right place for it. Might I suggest opening it on the TorchBeast repo? As such, I'm going to close this issue and any further TorchBeast ones here.

(FWIW though, PolyBeast by default uses 2 GPUs: one for learning and one for acting. I believe it can be configured to use more GPUs for acting, but at most one for learning. With NLE, the bottleneck is often the learning GPU/batch size, not the acting GPU.)
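
(Not Polybeast's actual flags or API, just a rough sketch in plain PyTorch of the layout described above: one learner copy of the model with the optimizer on a dedicated GPU, plus one inference copy per additional acting GPU that is periodically synced from the learner. The model, device indices, and sync logic are illustrative assumptions.)

```python
# Hypothetical sketch: 1 learner GPU + N acting GPUs (requires N+1 CUDA devices).
import torch
import torch.nn as nn


class TinyPolicy(nn.Module):
    # Stand-in policy network; Polybeast's real model is different.
    def __init__(self, obs_dim=64, num_actions=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, num_actions)
        )

    def forward(self, obs):
        return self.net(obs)


def main(num_acting_gpus=3):
    # Reserve cuda:0 for learning; the remaining devices serve inference (acting).
    learner_device = torch.device("cuda:0")
    actor_devices = [torch.device(f"cuda:{i + 1}") for i in range(num_acting_gpus)]

    # Single learner model, with the optimizer attached only to it.
    learner_model = TinyPolicy().to(learner_device)
    optimizer = torch.optim.RMSprop(learner_model.parameters(), lr=1e-4)

    # One inference copy per acting GPU, initialized from the learner weights.
    actor_models = [TinyPolicy().to(d) for d in actor_devices]
    for m in actor_models:
        m.load_state_dict(learner_model.state_dict())
        m.eval()

    # Acting: each copy runs forward passes on its own GPU (dummy observations here).
    with torch.no_grad():
        for model, device in zip(actor_models, actor_devices):
            obs = torch.randn(32, 64, device=device)
            _logits = model(obs)

    # Learning: a dummy gradient step on the learner GPU, then re-sync the actor copies.
    batch = torch.randn(256, 64, device=learner_device)
    loss = learner_model(batch).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    for m in actor_models:
        m.load_state_dict(learner_model.state_dict())


if __name__ == "__main__":
    main()
```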

@heiner
Contributor

heiner commented Jul 15, 2021

(OP already opened facebookresearch/torchbeast#31)
