PlaNet_PyTorch

Unofficial re-implementation of "Learning Latent Dynamics for Planning from Pixels" (https://arxiv.org/abs/1811.04551) in PyTorch.
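PlaNet selects actions by planning in the learned latent space with the cross-entropy method (CEM), as described in the paper. The sketch below shows CEM on a toy objective rather than a learned latent model; all function names and hyperparameter values are illustrative and not taken from this repository.

```python
import numpy as np

def cem_plan(reward_fn, horizon=12, action_dim=1, iters=10,
             candidates=1000, top_k=100, seed=0):
    """Cross-entropy method planner: repeatedly sample action
    sequences from a Gaussian, then refit the Gaussian to the
    top-scoring (elite) candidates."""
    rng = np.random.default_rng(seed)
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    for _ in range(iters):
        # Sample candidate action sequences and score each one.
        acts = rng.normal(mean, std, size=(candidates, horizon, action_dim))
        returns = np.array([reward_fn(a) for a in acts])
        # Refit the sampling distribution to the elite set.
        elite = acts[np.argsort(returns)[-top_k:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0)
    # Model-predictive control: execute only the first planned action.
    return mean[0]

# Toy objective: return is highest when every action equals 0.5.
first_action = cem_plan(lambda a: -np.sum((a - 0.5) ** 2))
```

In PlaNet this objective would instead be the predicted cumulative reward obtained by rolling the candidate actions through the learned latent dynamics model.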

Instructions

To train, install the requirements (see below) and run (the default environment is cheetah run):

python3 train.py

To test the learned model, run

python3 test.py dir

To predict video frames with the learned model, run

python3 video_prediction.py dir

dir should be the log directory produced by train.py, and you need to specify the environment corresponding to that log via command-line arguments.

Requirements

  • Python3
  • Mujoco (for DeepMind Control Suite)

See requirements.txt for the required Python libraries.

Qualitative result

Example of video frames predicted by the learned model:

Quantitative result

Learning curves (plots) for each environment:

  • cartpole swingup
  • reacher easy
  • cheetah run
  • finger spin
  • ball_in_cup catch
  • walker walk

Work in progress.

I'm going to add results of at least three experiments for each environment in the original paper.

All results are test scores (without exploration noise), recorded every 10 episodes and smoothed with a moving average of window size 5.
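The smoothing described above can be sketched as follows; `scores` here is a hypothetical list of test scores standing in for the values recorded every 10 episodes:

```python
import numpy as np

def moving_average(scores, window=5):
    """Smooth a 1-D score curve with a simple moving average;
    'valid' mode yields len(scores) - window + 1 points."""
    kernel = np.ones(window) / window
    return np.convolve(scores, kernel, mode="valid")

scores = [10, 20, 30, 40, 50, 60, 70]   # illustrative values only
smoothed = moving_average(scores)        # → [30., 40., 50.]
```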


TODO

  • Speed up training
  • Add more qualitative results (at least 3 experiments for each environment with different random seeds)
  • Generalize code for other environments
