newremagine

New experiences, replay and imagination, titrated, in training.

introduction

In this library we are given a budget of num_episodes to train a model. Each episode can be spent on one of three options:

  1. Sample new data
  2. Replay past data
  3. Imagine new data

We assume that:

  1. We have a finite amount of training data.
  2. The test data is drawn from the same distribution as the training data.
  3. We want the model to perform well on unseen (test) data.

So, what is the best way to divide up our time? Should we only sample new data? Should we replay past data often? Should we imagine-as-augmentation often? What is the best mix? Answering these questions is our goal here.
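To make the setup concrete, here is a minimal sketch of how an episode budget might be titrated among the three options. The names (sample_new, replay, imagine) and the fixed mixing weights are illustrative assumptions, not this library's API; see usage.ipynb for the real interface.

import random

# Illustrative sketch only; none of these names come from newremagine.

def sample_new(train_iter):
    # Draw a fresh example from the finite training set.
    return next(train_iter)

def replay(memory):
    # Re-train on an example stored from a previous episode.
    return random.choice(memory)

def imagine(memory):
    # Stand-in for generating a synthetic example; a real version
    # would perturb or sample from a learned generative model.
    return random.choice(memory)

def run(num_episodes, train_data, probs=(0.5, 0.3, 0.2)):
    # Titrate episodes among new data, replay, and imagination.
    train_iter = iter(train_data)
    memory = [sample_new(train_iter)]  # seed memory so replay/imagine are possible
    for _ in range(num_episodes):
        choice = random.choices(["new", "replay", "imagine"], weights=probs)[0]
        if choice == "new":
            example = sample_new(train_iter)
            memory.append(example)
        elif choice == "replay":
            example = replay(memory)
        else:
            example = imagine(memory)
        # train_step(model, example)  # update the model on this episode's example

if __name__ == "__main__":
    run(num_episodes=10, train_data=range(100))

The interesting question is then how the mixing weights should be chosen, or scheduled over training, to maximize performance on the held-out test data.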

install

git clone https://github.com/CoAxLab/newremagine
pip install -e newremagine

dependencies

  • python > 3.6
  • torch > 1.5
  • standard anaconda

usage

See usage.ipynb.
