week04_approx_rl

Materials

  • [recommended] How to actually do deep reinforcement learning by J. Schulman - pdf
  • [recommended] Spinning up - a massive repository of deep RL knowledge
  • [recommended] A table of collateral effects of reinforcement learning in games - table
  • DQN and modifications - lecture by J. Schulman - video
    • interactive demos in your browser: demo1 (Karpathy), demo2 (Hünermann)
  • Reinforcement learning architectures list - repo
  • Article on dueling DQN - arxiv
  • Article on double DQN - arxiv (a minimal target-computation sketch follows this list)
  • Article on prioritized experience replay - arxiv
  • Article on Rainbow: Combining Improvements in Deep Reinforcement Learning - arxiv
  • Article on bootstrap DQN - pdf, summary
  • Article on asynchronous methods in deep RL - arxiv
  • Successor representations for reinforcement learning - article, video
  • Video on asynchronous methods (Mnih) - video
  • An overview of deep reinforcement learning - arxiv
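
To make the double DQN idea concrete before you dive into the article, here is a minimal sketch of its target computation in pytorch. It assumes `q_net` and `target_net` are `torch.nn.Module`s mapping a batch of states to per-action Q-values; all names are illustrative, not taken from the course code.

```python
import torch

def double_dqn_target(q_net, target_net, rewards, next_states, is_done, gamma=0.99):
    """r + gamma * Q_target(s', argmax_a Q_online(s', a)), zeroed at terminal states."""
    with torch.no_grad():
        # The online network *selects* the greedy next action...
        next_actions = q_net(next_states).argmax(dim=1, keepdim=True)
        # ...while the target network *evaluates* it. Decoupling selection
        # from evaluation is what reduces the Q-value overestimation that
        # plain DQN suffers from.
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        return rewards + gamma * next_q * (1.0 - is_done.float())
```

Dueling DQN, by contrast, changes the network head (separate value and advantage streams) rather than the target, so the two modifications combine naturally.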

DQN tutorials

  • [in pytorch] A great series starting from simple DQN to all the cool new stuff - url
  • A guide to deep RL from ~scratch (nervana blog) - url
  • Building deep q-network from ~scratch (blog) - url
  • Another guide to DQN from ~scratch (blog) - url (the core TD loss is sketched right after this list)
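
Whichever of the guides above you follow, the heart of the algorithm is the one-step TD loss. A minimal pytorch sketch, assuming a discrete action space and batches sampled from a replay buffer (all names here are illustrative):

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, states, actions, rewards, next_states, is_done, gamma=0.99):
    # Q(s, a) for the actions that were actually taken in the buffer.
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Vanilla DQN target: bootstrap from the frozen target network,
        # and never bootstrap past terminal transitions.
        max_next_q = target_net(next_states).max(dim=1).values
        target = rewards + gamma * max_next_q * (1.0 - is_done.float())
    # Huber (smooth L1) loss is the usual choice: it dampens the gradient
    # of large TD errors, which stabilizes training.
    return F.smooth_l1_loss(q_sa, target)
```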

Practice

  • Seminar: Open In Colab
  • Homework (main): Open In Colab
  • Homework (debug): Open In Colab

From now on, we have two tracks: pytorch and tensorflow. However, the pytorch track is somewhat better supported by the course team. You can choose whichever track you want, but unless you're expertly familiar with your framework, we recommend starting with the pytorch version and only then reproducing your solution in your framework of choice.

Begin with seminar_<framework>.ipynb and then proceed with homework_<framework>.ipynb.

__Note: you're not required to submit assignments in both frameworks. Pick one and go with it. Maybe switch occasionally if you want more of a challenge.__