
# Kaixhin / PlaNet

Deep Planning Network: Control from pixels by latent planning with learned dynamics

233 Stars 41 Forks Last release: about 1 year ago (1.2) MIT License 50 Commits 3 Releases



MIT License

PlaNet: A Deep Planning Network for Reinforcement Learning [1]. Supports symbolic and visual observation spaces, as well as some Gym environments (including classic control/non-MuJoCo environments, so the DeepMind Control Suite and MuJoCo are optional dependencies). Hyperparameters are taken from the original work and are tuned for the DeepMind Control Suite, so they would need retuning for other domains (such as the Gym environments).
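To give a rough feel for the latent-planning idea, below is a minimal sketch of cross-entropy-method (CEM) planning over a learned dynamics model. This is not the repository's actual implementation: the function `cem_plan` and the toy `dynamics`/`reward` functions are illustrative assumptions standing in for the real latent dynamics and reward models.

```python
import numpy as np

def cem_plan(dynamics, reward, init_state, horizon=12, candidates=100,
             top_k=10, iterations=5, action_dim=1, seed=0):
    """Sketch of CEM planning: sample action sequences from a Gaussian,
    roll them through the (learned) dynamics model, and refit the Gaussian
    to the best-scoring sequences."""
    rng = np.random.default_rng(seed)
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    for _ in range(iterations):
        # Sample candidate action sequences around the current distribution
        actions = mean + std * rng.standard_normal((candidates, horizon, action_dim))
        returns = np.zeros(candidates)
        for i in range(candidates):
            s = init_state
            for t in range(horizon):
                s = dynamics(s, actions[i, t])
                returns[i] += reward(s)
        # Refit the Gaussian to the elite (highest-return) sequences
        elite = actions[np.argsort(returns)[-top_k:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean[0]  # execute only the first planned action (MPC-style)

# Toy stand-in "learned" model: 1-D point mass; reward favors staying near 0.
dynamics = lambda s, a: s + 0.1 * a
reward = lambda s: -(s ** 2).sum()
first_action = cem_plan(dynamics, reward, init_state=np.array([1.0]))
```

In the real PlaNet agent, the rollout happens entirely in the learned latent space (no pixels are rendered during planning), which is what makes this search cheap enough to run at every control step.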

Run with `python main.py`. For best performance with the DeepMind Control Suite, try setting the environment variable `MUJOCO_GL=egl` (see instructions and details here).

Results and pretrained models can be found in the releases.


To install all dependencies with Anaconda, run `conda env create -f environment.yml` and use `source activate planet` to activate the environment.




[1] Learning Latent Dynamics for Planning from Pixels (Hafner et al., 2019)
