A PyTorch Library for Reinforcement Learning Research
Cherry is a reinforcement learning framework for researchers built on top of PyTorch.
Unlike other reinforcement learning implementations, cherry doesn't implement a single monolithic interface to existing algorithms. Instead, it provides you with low-level, common tools to write your own algorithms. Drawing from the UNIX philosophy, each tool strives to be as independent from the rest of the framework as possible. So if you don't like a specific tool, you don’t need to use it.
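For example, here is a minimal sketch (an illustration, not an excerpt from cherry's documentation) that uses only the ch.envs.Torch wrapper to get PyTorch tensors out of a standard Gym environment, ignoring the rest of the library:

import gym
import cherry as ch

# Use one cherry tool in isolation: the Torch wrapper returns PyTorch tensors
# from a standard Gym environment, without touching the rest of the library.
env = ch.envs.Torch(gym.make('CartPole-v0'))
state = env.reset()  # a torch.Tensor rather than a numpy array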
Features
To learn more about the tools and philosophy behind cherry, check out our Getting Started tutorial.
The following snippet showcases some of the tools offered by cherry.
import gym
import torch as th
import torch.optim as optim
from torch.distributions import Categorical

import cherry as ch

# Wrap environments
env = gym.make('CartPole-v0')
env = ch.envs.Logger(env, interval=1000)
env = ch.envs.Torch(env)

policy = PolicyNet()  # PolicyNet is your own policy network, e.g. a small MLP with a softmax output
optimizer = optim.Adam(policy.parameters(), lr=1e-2)
replay = ch.ExperienceReplay()  # Manage transitions

for step in range(1000):
    state = env.reset()
    while True:
        mass = Categorical(policy(state))
        action = mass.sample()
        log_prob = mass.log_prob(action)
        next_state, reward, done, _ = env.step(action)

        # Build the ExperienceReplay
        replay.append(state, action, reward, next_state, done, log_prob=log_prob)
        if done:
            break
        else:
            state = next_state

    # Discounting and normalizing rewards
    rewards = ch.td.discount(0.99, replay.reward(), replay.done())
    rewards = ch.normalize(rewards)

    # REINFORCE-style policy-gradient loss
    loss = -th.sum(replay.log_prob() * rewards)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    replay.empty()
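Each of these utilities also works on its own, outside of any training loop. The following sketch (an illustration under the same API assumptions as the snippet above, not an official example) applies ch.td.discount and ch.normalize to a hand-written three-step trajectory:

import torch as th
import cherry as ch

# A hand-written trajectory of three transitions: unit rewards,
# with the episode ending on the last step.
rewards = th.tensor([[1.0], [1.0], [1.0]])
dones = th.tensor([[0.0], [0.0], [1.0]])

# Discount with gamma = 0.99, then normalize (subtract mean, divide by std).
returns = ch.td.discount(0.99, rewards, dones)
returns = ch.normalize(returns)
print(returns)

These are the same two calls that turn the rewards stored in the ExperienceReplay into the weights of the policy-gradient loss above.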
Many more high-quality examples are available in the examples/ folder.
Installation

Note: Cherry is considered in early alpha release. Stuff might break.

pip install cherry-rl
Changelog

A human-readable changelog is available in the CHANGELOG.md file.
Documentation

Documentation and tutorials are available on cherry’s website: http://cherry-rl.net.
Contributing

First, thank you for considering contributing to cherry. Here are a couple of guidelines we strive to follow.

We don't have forums, but we are happy to chat with you on Slack. Send an email to [email protected] to get an invite.
Acknowledgements

Cherry draws inspiration from many reinforcement learning implementations, including
Why 'cherry'?

Because it's the sweetest part of the cake.