DQN-chainer

This software is a Python implementation of Deep Q-Networks (DQN) for playing Atari games, built on the Chainer package.

I followed the implementation described in:

  • V. Mnih et al., "Playing Atari with Deep Reinforcement Learning": http://arxiv.org/pdf/1312.5602.pdf
  • V. Mnih et al., "Human-level control through deep reinforcement learning": http://www.nature.com/nature/journal/v518/n7540/abs/nature14236.html
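
The Q-network from the Nature paper above maps four stacked 84x84 grayscale frames to one Q-value per action. The sketch below is my own minimal illustration, not code from this repository; it is written against the old Chainer 1.x FunctionSet API that this project pins (Chainer 1.3.0, before links and chains), and the number of actions is an assumption that depends on the ALE game's action set.

    # Sketch of a Nature-style Q-network in Chainer 1.x (FunctionSet API).
    # Input: 4 stacked 84x84 grayscale frames; output: one Q-value per action.
    import numpy as np
    import chainer
    import chainer.functions as F

    num_actions = 4  # assumption: depends on the game's minimal action set

    q_net = chainer.FunctionSet(
        conv1=F.Convolution2D(4, 32, ksize=8, stride=4),
        conv2=F.Convolution2D(32, 64, ksize=4, stride=2),
        conv3=F.Convolution2D(64, 64, ksize=3, stride=1),
        fc4=F.Linear(3136, 512),          # 64 feature maps of 7x7 = 3136
        q_out=F.Linear(512, num_actions),
    )

    def q_values(state):
        """Q-values for a batch of preprocessed states of shape (N, 4, 84, 84)."""
        x = chainer.Variable(state)
        h = F.relu(q_net.conv1(x))
        h = F.relu(q_net.conv2(h))
        h = F.relu(q_net.conv3(h))
        h = F.relu(q_net.fc4(h))
        return q_net.q_out(h)

    print(q_values(np.zeros((1, 4, 84, 84), dtype=np.float32)).data)

The remaining DQN machinery described in those papers (experience replay, target network, epsilon-greedy exploration, and the training loop) is implemented in the repository's agent code.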

For a Japanese introduction to DQN and a historical review, please see:

http://qiita.com/Ugo-Nama/items/08c6a5f6a571335972d5

Requirements

This implementation depends on RL-glue, the Arcade Learning Environment (ALE), and Chainer. To run the software, you need the following software packages (a minimal RL-glue agent skeleton is sketched at the end of this section):

  • Python 2.7+
  • NumPy
  • SciPy
  • Pillow (PIL)
  • Chainer (1.3.0): https://github.com/pfnet/chainer
  • RL-glue core: https://sites.google.com/a/rl-community.org/rl-glue/Home/rl-glue
  • RL-glue Python codec: https://sites.google.com/a/rl-community.org/rl-glue/Home/Extensions/python-codec
  • Arcade Learning Environment (version ALE 0.4.4): http://www.arcadelearningenvironment.org/

This software was tested on Ubuntu 14.04 LTS.
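
The pieces communicate through RL-glue: ALE acts as the RL-glue environment, exposing the game screen as an observation, while the DQN agent is an RL-glue agent written with the Python codec. The skeleton below is only an illustration of that interface, not this repository's agent; the class name and the returned actions are placeholders.

    # Minimal RL-glue Python-codec agent skeleton (illustration only).
    # A real DQN agent fills these callbacks with frame preprocessing,
    # the Chainer Q-network, experience replay and epsilon-greedy actions.
    from rlglue.agent.Agent import Agent
    from rlglue.agent import AgentLoader
    from rlglue.types import Action

    class DQNAgentSketch(Agent):  # hypothetical name, not the repository's class
        def agent_init(self, task_spec):
            self.last_action = 0

        def _act(self, observation):
            # observation.intArray holds the game screen provided by ALE;
            # a real agent would preprocess it and query the Q-network here.
            action = Action()
            action.intArray = [self.last_action]
            return action

        def agent_start(self, observation):
            return self._act(observation)

        def agent_step(self, reward, observation):
            return self._act(observation)

        def agent_end(self, reward):
            pass

        def agent_cleanup(self):
            pass

        def agent_message(self, message):
            return "DQN agent sketch"

    if __name__ == "__main__":
        AgentLoader.loadAgent(DQNAgentSketch())

At runtime the RL-glue core, ALE (as the environment), the agent, and an experiment program run as separate processes and connect to each other over sockets; see readme.txt for the startup procedure used in this project.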

How to run

Please check readme.txt for instructions.
