Damcy

Description

Implementation of prioritized experience replay.


Prioritized Experience Replay

Usage

  1. In rank_based.py, Experience.store gives a simple example of storing a replay memory; you can also refer to rank_based_test.py.
  2. It is most convenient to store each replay as the tuple (state_1, action_1, reward, state_2, terminal). With this format, every replay memory in Experience is valid and can be sampled as we like.
  3. Run it with Python 3 or Python 2.7.
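The transition format in point 2 can be sketched as follows. This is a minimal, self-contained illustration: the plain list here stands in for Experience and is not the library's actual class.

```python
import random

# Each replay entry is a full transition: (state_1, action_1, reward, state_2, terminal).
# Because every stored tuple is complete, any entry can be sampled independently
# without needing its neighbors in the buffer.
buffer = []
buffer.append(([0.0, 1.0], 0, 1.0, [1.0, 0.0], False))
buffer.append(([1.0, 0.0], 1, 0.0, [0.0, 0.0], True))

batch = random.sample(buffer, 2)  # any subset of entries is a legal sample
```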

Rank-based

Uses a binary heap as the priority queue, and provides an Experience class to store and retrieve samples.
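The heap idea can be sketched with the standard library (an illustration of the data structure, not this repo's implementation): keep experiences ordered by TD-error magnitude so the highest-priority sample sits at the root.

```python
import heapq

# Sketch of a max-heap priority queue keyed by TD-error magnitude.
# heapq is a min-heap, so priorities are negated to get max-heap behavior.
heap = []
for e_id, td_error in [(0, 0.5), (1, 2.0), (2, 1.1)]:
    heapq.heappush(heap, (-abs(td_error), e_id))

neg_priority, top_id = heap[0]  # the experience with the largest TD-error is on top
```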

Interface:
* All interfaces are in rank_based.py
* Initialize with a conf dict; see Experience.__init__ for details. All parameters can be set through conf.
* Store a replay sample: Experience.store
    params: [in] experience, the sample to store
    returns: bool, True on success, False on failure
* Draw replay samples: Experience.sample
    params: [in] global_step, used to calculate beta
    returns:
        experience, list of samples
        w, list of importance-sampling weights
        rank_e_id, list of experience ids, used to update priority values
* Update priority values: Experience.update
    params:
        [in] indices, the rank_e_ids returned by sample
        [in] delta, the new TD-errors
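As a sketch of what sample computes, following the formulas in the paper rather than this repo's exact code: rank-based probabilities P(i) proportional to (1/rank(i))^alpha, and importance-sampling weights w_i = (N * P(i))^(-beta), normalized by the maximum weight. beta is annealed toward 1 using global_step; the alpha and beta_zero values below are illustrative assumptions, not the library's defaults.

```python
# Hedged sketch of rank-based sampling probabilities and IS weights
# (per the prioritized experience replay paper); hyperparameters are illustrative.
def rank_probabilities(n, alpha=0.7):
    # P(i) proportional to (1 / rank)^alpha, with ranks 1..n (rank 1 = highest priority)
    scores = [(1.0 / rank) ** alpha for rank in range(1, n + 1)]
    total = sum(scores)
    return [s / total for s in scores]

def is_weights(probs, global_step, total_steps, beta_zero=0.5):
    # anneal beta linearly from beta_zero toward 1 over training
    beta = min(1.0, beta_zero + (1.0 - beta_zero) * global_step / total_steps)
    n = len(probs)
    raw = [(n * p) ** (-beta) for p in probs]
    max_w = max(raw)
    return [w / max_w for w in raw]  # normalize so weights only scale updates down
```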

Proportional

You can find the implementation here: proportional
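For context, proportional prioritization samples with P(i) proportional to p_i^alpha, where p_i = |TD-error| + epsilon. A minimal sketch (real implementations use a sum-tree for O(log n) sampling; random.choices is used here only for clarity, and the hyperparameter values are illustrative):

```python
import random

def proportional_sample(td_errors, k, alpha=0.6, epsilon=0.01):
    # priority p_i = |delta_i| + epsilon; epsilon ensures every transition
    # has a nonzero chance of being drawn
    weights = [(abs(d) + epsilon) ** alpha for d in td_errors]
    # sample k indices with probability proportional to the priorities
    return random.choices(range(len(td_errors)), weights=weights, k=k)
```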

Reference

  1. T. Schaul, J. Quan, I. Antonoglou, D. Silver, "Prioritized Experience Replay", http://arxiv.org/abs/1511.05952
  2. Atari by @Kaixhin, which uses Torch to implement the rank-based algorithm.

Application

  1. TEST1 PASSED: this code has been applied to my own NLP DQN experiment, where it significantly improved performance. See here for more details.
