Cherry: A Reinforcement Learning Framework for PyTorch Researchers


Cherry is a reinforcement learning framework for researchers built on top of PyTorch.

Unlike other reinforcement learning implementations, cherry doesn't implement a single monolithic interface to existing algorithms. Instead, it provides you with low-level, common tools to write your own algorithms. Drawing from the UNIX philosophy, each tool strives to be as independent from the rest of the framework as possible. So if you don't like a specific tool, you don’t need to use it.
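
For example, the return-processing helpers can be used on plain PyTorch tensors, with no dependency on cherry's environment wrappers or replay buffer. The following is a minimal sketch based on the ch.td.discount and ch.normalize calls shown in the example further below; the exact tensor shapes are an assumption.

import torch as th
import cherry as ch

# A toy episode: per-step rewards and done flags as plain tensors
# (column-vector shapes are assumed here).
rewards = th.tensor([[1.0], [1.0], [1.0], [0.0]])
dones = th.tensor([[0.0], [0.0], [0.0], [1.0]])

# Discounted returns, then normalization to zero mean and unit variance.
returns = ch.td.discount(0.99, rewards, dones)
returns = ch.normalize(returns)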

Features

- Pythonic and low-level interface à la PyTorch.
- Support for tabular (!) and function approximation algorithms.
- Various OpenAI Gym environment wrappers.
- Helper functions for popular algorithms (e.g. A2C, DDPG, TRPO, PPO, SAC); a small sketch follows this list.
- Logging, visualization, and debugging tools.
- Painless and efficient distributed training on CPUs and GPUs.
- Unit, integration, and regression tested, continuously integrated.
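
As an illustration of the algorithm helpers, the sketch below assumes cherry exposes an A2C policy-loss function as cherry.algorithms.a2c.policy_loss(log_probs, advantages); the module path and signature are an assumption and should be checked against the documentation.

import torch as th
from cherry.algorithms import a2c  # assumed module path

# Hypothetical per-step quantities for a short rollout.
log_probs = th.randn(5, 1, requires_grad=True)  # log-probabilities of taken actions
advantages = th.randn(5, 1)                     # advantage estimates

# Assumed helper: a plain policy-gradient loss over the batch.
loss = a2c.policy_loss(log_probs, advantages)
loss.backward()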

To learn more about the tools and philosophy behind cherry, check out our Getting Started tutorial.

Example

The following snippet showcases some of the tools offered by cherry: it trains a simple policy-gradient (REINFORCE-style) agent on CartPole using cherry's environment wrappers, experience replay, and return-processing helpers.

import gym
import torch as th
import torch.optim as optim
from torch.distributions import Categorical

import cherry as ch

# Wrap environments
env = gym.make('CartPole-v0')
env = ch.envs.Logger(env, interval=1000)
env = ch.envs.Torch(env)

policy = PolicyNet()  # PolicyNet: a user-defined policy network (not shown)
optimizer = optim.Adam(policy.parameters(), lr=1e-2)
replay = ch.ExperienceReplay()  # Manage transitions

for step in range(1000):
    state = env.reset()
    while True:
        mass = Categorical(policy(state))
        action = mass.sample()
        log_prob = mass.log_prob(action)
        next_state, reward, done, _ = env.step(action)

        # Build the ExperienceReplay
        replay.append(state,
                      action,
                      reward,
                      next_state,
                      done,
                      log_prob=log_prob)
        if done:
            break
        else:
            state = next_state

    # Discounting and normalizing rewards
    rewards = ch.td.discount(0.99, replay.reward(), replay.done())
    rewards = ch.normalize(rewards)

    loss = -th.sum(replay.log_prob() * rewards)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    replay.empty()

Many more high-quality examples are available in the examples/ folder.

Installation

Note: Cherry is considered in early alpha release. Stuff might break.

pip install cherry-rl

Documentation

Documentation and tutorials are available on cherry's website: http://cherry-rl.net.

Contributing

First, thank you for considering a contribution to cherry. Here are a couple of guidelines we strive to follow.

- It's always a good idea to open an issue first, where we can discuss how to best proceed.
- If you want to contribute a new example using cherry, it should preferably stand in a single file.
- If you would like to contribute a new feature to the core library, we suggest first implementing an example showcasing your new functionality. Doing so is quite useful:
  - it allows for automatic testing,
  - it ensures that the functionality is correctly implemented,
  - it shows users how to use your functionality, and
  - it gives a concrete example when discussing the best way to merge your implementation.

We don't have forums, but are happy to discuss with you on Slack. Send an email to smr.arnold@gmail.com to get an invite.

Acknowledgements

Cherry draws inspiration from many reinforcement learning implementations, including

- OpenAI Baselines,
- John Schulman's implementations,
- Ilya Kostrikov's implementations,
- Shangtong Zhang's implementations,
- Dave Abel's implementations,
- Vitchyr Pong's implementations,
- Kai Arulkumaran's implementations,
- RLLab / Garage.

Why 'cherry'?

Because it's the sweetest part of the cake.
