dqn

This is a very basic DQN implementation (with experience replay) that uses OpenAI's gym environments and Keras/Theano neural networks.
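For readers new to DQN, the two ingredients named above are a replay buffer and a Bellman-style regression target. The sketch below illustrates both; it is not the repo's actual code, and the model argument is assumed to be a Keras model mapping a batch of states to per-action Q-values (shape [batch, n_actions]).

    # Illustrative sketch, not the repo's code: a replay buffer plus
    # the DQN regression targets r + gamma * max_a' Q(s', a').
    import random
    from collections import deque

    import numpy as np

    class ReplayBuffer:
        """Fixed-size store of (state, action, reward, next_state, done)."""

        def __init__(self, capacity=10000):
            self.buffer = deque(maxlen=capacity)

        def add(self, transition):
            self.buffer.append(transition)

        def sample(self, batch_size):
            batch = random.sample(self.buffer, batch_size)
            # Transpose the list of transitions into five stacked arrays.
            return [np.array(column) for column in zip(*batch)]

    def dqn_targets(model, batch, gamma=0.99):
        """Build (inputs, targets) for one supervised step of Q-learning.

        `model` is assumed to map a batch of states to per-action
        Q-values of shape (batch_size, n_actions)."""
        states, actions, rewards, next_states, dones = batch
        targets = model.predict(states)                  # current Q(s, .)
        next_q = model.predict(next_states).max(axis=1)  # max_a' Q(s', a')
        # Regress the taken action's Q-value toward the Bellman target;
        # at episode end (done) the bootstrap term is dropped.
        targets[np.arange(len(actions)), actions] = (
            rewards + gamma * next_q * (1.0 - dones)
        )
        return states, targets  # use with model.fit(states, targets)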

Requirements

  • gym
  • keras
  • theano
  • numpy

and all their dependencies.

Usage

To run:

    python example.py

It runs MsPacman-v0 if no env is specified. Uncomment the env.render() line to watch the game while training; note that rendering is likely to slow training down.
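For orientation, here is a minimal version of the kind of loop such a script drives, written against the classic gym API (reset() returns an observation, step() returns a 4-tuple). A random policy stands in for the DQN; example.py itself may differ in the details.

    # Minimal gym episode loop; a random policy stands in for the DQN.
    import gym

    env = gym.make("MsPacman-v0")
    for episode in range(3):
        obs = env.reset()
        done, total_reward = False, 0.0
        while not done:
            # env.render()  # uncomment to watch; likely to slow training
            action = env.action_space.sample()  # random stand-in policy
            obs, reward, done, info = env.step(action)
            total_reward += reward
        print("episode %d: reward %.1f" % (episode, total_reward))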

Currently, it assumes that the observation is an image, i.e., a 3D array, which is the case for all Atari games and other Atari-like environments.
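To verify that assumption up front for a given environment, a check along these lines works (illustrative; the repo itself may not perform such a check):

    # Confirm the observation space is an image-like 3D Box.
    import gym
    from gym import spaces

    env = gym.make("MsPacman-v0")
    space = env.observation_space
    assert isinstance(space, spaces.Box) and len(space.shape) == 3, \
        "this implementation expects image observations (a 3D Box)"
    print(space.shape)  # e.g. (210, 160, 3) for Atari screens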

Purpose

This is meant to be a very simple implementation, to be used as starter code. I aimed for it to be easy to comprehend rather than feature-complete.

Pull requests welcome!

References

  • Mnih et al., Playing Atari with Deep Reinforcement Learning (2013): https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf

TODO

  • Extend to other environments. Currently this only works for Atari and Atari-like environments where the observation space is a 3D Box; a sketch of one possible direction follows below.
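As a rough sketch of one direction for this, the network body could be selected from the observation shape: a convolutional stack (the layer sizes below follow the referenced paper) for 3D image Boxes, and a plain MLP for flat vectors. The build_q_network helper is hypothetical, and it uses a newer Keras API than the Keras/Theano setup this repo targets.

    # Hypothetical helper: pick the network body from the observation shape.
    from keras.layers import Conv2D, Dense, Flatten
    from keras.models import Sequential

    def build_q_network(observation_shape, n_actions):
        model = Sequential()
        if len(observation_shape) == 3:      # image: convolutional body
            model.add(Conv2D(16, (8, 8), strides=4, activation="relu",
                             input_shape=observation_shape))
            model.add(Conv2D(32, (4, 4), strides=2, activation="relu"))
            model.add(Flatten())
        else:                                # flat vector: small MLP body
            model.add(Dense(64, activation="relu",
                            input_shape=observation_shape))
        model.add(Dense(n_actions, activation="linear"))  # Q-value per action
        model.compile(optimizer="adam", loss="mse")
        return model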
