A3C trading

Note: Sorry for the misleading naming - please use A3Ctrading.py for training and testtrading.py for testing.

Trading with recurrent actor-critic reinforcement learning - see the paper and the more detailed old report.

Full_UML (full UML diagram of the project)

Configuration:
config.py

This file contains all the paths and global variables that need to be set up.

Dataset: download from GDrive

After setting up config.py, run this file to download and preprocess the data needed for training and evaluation.
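The exact contents of config.py are project-specific; purely as an illustration of the kind of paths and globals it holds, here is a minimal sketch (every name and value below is a hypothetical placeholder, not necessarily what the repository uses):

```python
# config.py -- illustrative sketch only; names and values are placeholders.
import os

# Paths
data_dir = "data"                  # downloaded/preprocessed dataset
model_dir = "model"                # checkpoints written during training
tensorboard_dir = "tensorboard"    # TensorBoard summaries written during training

# Data / environment settings
train_split = 0.8                  # fraction of the dataset used for training
window_size = 96                   # past time steps fed to the recurrent network
commission = 0.001                 # per-trade transaction cost

# A3C hyperparameters
num_workers = 8                    # parallel actor-learner threads
gamma = 0.99                       # discount factor
learning_rate = 1e-4
entropy_beta = 0.01                # entropy regularization weight

for d in (data_dir, model_dir, tensorboard_dir):
    os.makedirs(d, exist_ok=True)
```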

Environment:
trader_gym.py

An OpenAI Gym-like environment class.
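For orientation, a Gym-like environment exposes reset() and step(action). The class below is a simplified, self-contained sketch of such an interface for price data; its internals are made up for illustration and are not the actual trader_gym.py:

```python
import numpy as np

class TraderEnv:
    """Minimal Gym-like trading environment sketch (illustrative, not trader_gym.py)."""

    def __init__(self, prices, window_size=96, commission=0.001):
        self.prices = np.asarray(prices, dtype=np.float32)
        self.window_size = window_size
        self.commission = commission
        self.action_space = 2  # 0 = flat (no position), 1 = long

    def reset(self):
        self.t = self.window_size
        self.position = 0
        return self._observation()

    def step(self, action):
        prev_price, self.t = self.prices[self.t], self.t + 1
        price = self.prices[self.t]
        # Reward: log-return of the held position, minus commission on position changes
        reward = self.position * np.log(price / prev_price)
        new_position = int(action)
        if new_position != self.position:
            reward -= self.commission
        self.position = new_position
        done = self.t >= len(self.prices) - 1
        return self._observation(), reward, done, {}

    def _observation(self):
        # Log-returns over the lookback window, plus the current position
        window = self.prices[self.t - self.window_size:self.t]
        return np.append(np.diff(np.log(window)), self.position)
```

A training loop then simply alternates env.step(action) with policy updates.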

Model:
A3C_class.py

This file contains the AC_network, Worker, and Test_Worker classes.
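The repository builds its network in TensorFlow. Purely as a rough illustration of the recurrent actor-critic structure (a shared recurrent trunk with separate policy and value heads), here is a Keras-style sketch with hypothetical layer sizes, not the actual AC_network:

```python
import tensorflow as tf

def build_recurrent_actor_critic(window_size, n_features, n_actions, lstm_units=128):
    """Shared LSTM trunk with a softmax policy head and a scalar value head."""
    obs = tf.keras.Input(shape=(window_size, n_features), name="observation")
    x = tf.keras.layers.LSTM(lstm_units, name="lstm_trunk")(obs)            # recurrent encoder
    policy = tf.keras.layers.Dense(n_actions, activation="softmax", name="policy")(x)
    value = tf.keras.layers.Dense(1, name="value")(x)                       # state-value estimate
    return tf.keras.Model(inputs=obs, outputs=[policy, value])

# In A3C, each Worker holds a copy of such a network, collects rollouts from its own
# environment, and asynchronously applies policy-gradient + value-loss (+ entropy
# bonus) updates to the shared ("global") copy.
model = build_recurrent_actor_critic(window_size=96, n_features=1, n_actions=2)
model.summary()
```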

Training:
A3C_training.py

Run this file, preferably inside tmux. During training it will create files in tensorboard_dir and in model_dir.
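To show what "creates files in tensorboard_dir" amounts to, the self-contained sketch below spawns a few worker threads (A3C-style) and writes per-episode scalars that TensorBoard can display. The rollout is a random stand-in and the directory name just mirrors the hypothetical config sketch above:

```python
# Illustrative only: parallel workers logging scalars for TensorBoard.
import os
import random
import threading
import tensorflow as tf

tensorboard_dir = "tensorboard"
num_workers = 4

writer = tf.summary.create_file_writer(tensorboard_dir)
lock = threading.Lock()

def run_worker(worker_id, n_episodes=5):
    for episode in range(n_episodes):
        episode_reward = random.random()          # stand-in for a real rollout
        with lock, writer.as_default():
            tf.summary.scalar(f"worker_{worker_id}/episode_reward",
                              episode_reward, step=episode)

threads = [threading.Thread(target=run_worker, args=(i,)) for i in range(num_workers)]
for t in threads:
    t.start()
for t in threads:
    t.join()
writer.flush()
print("Summaries written to", os.path.abspath(tensorboard_dir))
```

Point TensorBoard at that directory (tensorboard --logdir tensorboard) to watch progress; running the real A3C_training.py inside tmux simply keeps the long-running session alive after you disconnect.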

Testing:
A3C_testing.ipynb

A Jupyter notebook that contains all the code for evaluating the trained agent and plotting the results.
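As an example of the kind of plotting the notebook does, the sketch below rolls a random policy through the toy TraderEnv defined in the Environment sketch above and plots the cumulative log-return. It is purely illustrative and not taken from A3C_testing.ipynb:

```python
import numpy as np
import matplotlib.pyplot as plt

# Reuses the TraderEnv sketch from the Environment section above.
prices = np.cumprod(1 + 0.001 * np.random.randn(2000)) * 100   # synthetic price series
env = TraderEnv(prices)

obs, done, rewards = env.reset(), False, []
while not done:
    action = np.random.randint(env.action_space)   # stand-in for the trained policy
    obs, reward, done, _ = env.step(action)
    rewards.append(reward)

plt.plot(np.cumsum(rewards))
plt.xlabel("time step")
plt.ylabel("cumulative log-return")
plt.title("Backtest of a random policy (illustration)")
plt.show()
```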

Cite as:

@article{ponomarev2019using,
  title={Using Reinforcement Learning in the Algorithmic Trading Problem},
  author={Ponomarev, ES and Oseledets, IV and Cichocki, AS},
  journal={Journal of Communications Technology and Electronics},
  volume={64},
  number={12},
  pages={1450--1457},
  year={2019},
  publisher={Springer}
}
