IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures

This is more of a systems paper: how do you implement deep RL with many distributed actor workers generating experience for a central learner? A rough sketch of that pattern is below.
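As a minimal sketch of the decoupled actor-learner idea (my own illustrative code, not the paper's implementation): actors roll out trajectories with a possibly stale policy copy and push them onto a queue, while a single learner pulls batches and updates parameters. IMPALA additionally corrects for the resulting policy lag with V-trace importance weighting, which is not shown here; all names below are hypothetical.

```python
import queue
import random
import threading

# Shared queue carrying trajectories from actors to the learner.
trajectory_queue = queue.Queue(maxsize=64)

def actor(actor_id, steps_per_trajectory=20, episodes=5):
    """Generate dummy trajectories and send them to the learner."""
    for _ in range(episodes):
        trajectory = [
            {"obs": random.random(),
             "action": random.randint(0, 3),
             "reward": random.random(),
             "behaviour_logit": random.random()}  # needed for V-trace in the real thing
            for _ in range(steps_per_trajectory)
        ]
        trajectory_queue.put((actor_id, trajectory))

def learner(num_batches=10, batch_size=4):
    """Consume batches of trajectories and (pretend to) update the policy."""
    for step in range(num_batches):
        batch = [trajectory_queue.get() for _ in range(batch_size)]
        total_reward = sum(t["reward"] for _, traj in batch for t in traj)
        print(f"learner step {step}: batch reward {total_reward:.2f}")

actors = [threading.Thread(target=actor, args=(i,)) for i in range(8)]
learner_thread = threading.Thread(target=learner)
for t in actors:
    t.start()
learner_thread.start()
for t in actors:
    t.join()
learner_thread.join()
```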

I skimmed this after reading the Kickstarting Deep RL paper, which used this agent. Might take another look later.