DQN-chainer

This software is a Python implementation of Deep Q-Networks (DQN) for playing Atari games, built on the Chainer package.

I followed the implementation described in:

  • V. Mnih et al., "Playing Atari with Deep Reinforcement Learning": http://arxiv.org/pdf/1312.5602.pdf
  • V. Mnih et al., "Human-level control through deep reinforcement learning": http://www.nature.com/nature/journal/v518/n7540/abs/nature14236.html
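
As a rough illustration, here is a minimal sketch of the NIPS-2013 Q-network in the Chainer 1.x FunctionSet style that version 1.3.0 uses. The names (`q_net`, `n_actions`, `q_values`) are illustrative, not taken from this repository, and a full agent additionally needs an experience-replay buffer and, for the Nature variant, a periodically synchronized target network.

```python
import numpy as np
from chainer import FunctionSet, Variable
import chainer.functions as F

n_actions = 4  # assumption: number of legal ALE actions, game-dependent

# Q-network roughly matching the NIPS-2013 architecture:
# 4 stacked 84x84 grayscale frames -> 2 conv layers -> 2 fully connected layers.
q_net = FunctionSet(
    conv1=F.Convolution2D(4, 16, ksize=8, stride=4),
    conv2=F.Convolution2D(16, 32, ksize=4, stride=2),
    fc1=F.Linear(2592, 256),        # 32 feature maps of 9x9 = 2592 units
    fc2=F.Linear(256, n_actions),
)

def q_values(state):
    """Forward pass; state is a (batch, 4, 84, 84) float32 array."""
    x = Variable(state)
    h = F.relu(q_net.conv1(x))
    h = F.relu(q_net.conv2(h))
    h = F.relu(q_net.fc1(h))
    return q_net.fc2(h)             # one Q-value per action

# Example: greedy action for a single preprocessed state.
state = np.zeros((1, 4, 84, 84), dtype=np.float32)
action = int(np.argmax(q_values(state).data))
```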

For a Japanese introduction to DQN and a historical review, please see:

http://qiita.com/Ugo-Nama/items/08c6a5f6a571335972d5

Requirements

My implementation depends on RL-Glue, the Arcade Learning Environment (ALE), and Chainer. To run the software, you need the following packages (a minimal RL-Glue agent skeleton is sketched at the end of this section):

  • Python 2.7+
  • NumPy
  • SciPy
  • Pillow (PIL)
  • Chainer (1.3.0): https://github.com/pfnet/chainer
  • RL-Glue core: https://sites.google.com/a/rl-community.org/rl-glue/Home/rl-glue
  • RL-Glue Python codec: https://sites.google.com/a/rl-community.org/rl-glue/Home/Extensions/python-codec
  • Arcade Learning Environment (ALE 0.4.4): http://www.arcadelearningenvironment.org/

This software was tested on Ubuntu 14.04 LTS.
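
For orientation, the sketch below shows how an agent is typically wired into the RL-Glue Python codec: RL-Glue calls back into agent_start/agent_step/agent_end, and the DQN logic (replay memory, epsilon-greedy exploration, network updates) would live inside those callbacks. The class name and placeholder action index are illustrative, not this repository's actual agent.

```python
# Minimal RL-Glue agent skeleton (Python codec). The DQN logic itself is
# omitted; this only shows where it would plug into the RL-Glue callbacks.
from rlglue.agent.Agent import Agent
from rlglue.agent import AgentLoader
from rlglue.types import Action


class SkeletonDQNAgent(Agent):
    def agent_init(self, task_spec):
        # Parse the task spec, build the Q-network and replay buffer here.
        pass

    def agent_start(self, observation):
        # First step of an episode: choose an action from the initial screen.
        return self._act(observation)

    def agent_step(self, reward, observation):
        # Store the transition in the replay buffer, train, then act.
        return self._act(observation)

    def agent_end(self, reward):
        # Terminal transition: store it and (optionally) train once more.
        pass

    def agent_cleanup(self):
        pass

    def agent_message(self, message):
        return "unsupported message"

    def _act(self, observation):
        action = Action()
        action.intArray = [0]  # placeholder: index of the chosen ALE action
        return action


if __name__ == "__main__":
    AgentLoader.loadAgent(SkeletonDQNAgent())
```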

How to run

Please see readme.txt for detailed instructions.
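
For reference, an RL-Glue experiment is usually launched as four separate processes. The commands below are only a sketch: the script names, ALE flags, and ROM path are placeholders and may differ from what readme.txt specifies.

```sh
# Terminal 1: start the RL-Glue core server.
rl_glue

# Terminal 2: start ALE as an RL-Glue environment (ROM path is a placeholder;
# flags may differ by ALE version).
./ale -game_controller rlglue roms/breakout.bin

# Terminal 3: start the DQN agent (script name is a placeholder).
python dqn_agent.py

# Terminal 4: start the experiment program that drives the episodes
# (script name is a placeholder).
python experiment_ale.py
```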
