garage: a framework for reproducible reinforcement learning research
garage
garage is a toolkit for developing and evaluating reinforcement learning algorithms, and an accompanying library of state-of-the-art implementations built using that toolkit.
The toolkit provides a wide range of modular tools for implementing RL algorithms, including:

- Composable neural network models
- Replay buffers
- High-performance samplers
- An expressive experiment definition interface (see the sketch after this list)
- Tools for reproducibility (e.g. set a global random seed which all components respect)
- Logging to many outputs, including TensorBoard
- Reliable experiment checkpointing and resuming
- Environment interfaces for many popular benchmark suites
- Support for running garage in diverse environments, including always up-to-date Docker containers
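As a taste of the experiment definition interface and the global seed, here is a minimal sketch following the quickstart in the garage documentation. The function name `my_experiment` is a placeholder, and the module paths (`garage.wrap_experiment`, `garage.experiment.deterministic.set_seed`) follow the 2020/2021 release line and may differ in other versions.

```python
# Minimal sketch of a garage experiment definition (names per the garage docs
# around the 2020/2021 releases; `my_experiment` is a placeholder).
from garage import wrap_experiment
from garage.experiment.deterministic import set_seed


@wrap_experiment
def my_experiment(ctxt=None, seed=1):
    # `ctxt` is injected by wrap_experiment and carries snapshot/log configuration.
    set_seed(seed)  # one global seed that all garage components respect
    # ... build the environment, policy, algorithm, and trainer here ...


if __name__ == '__main__':
    my_experiment(seed=1)
```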
See the latest documentation for getting started instructions and detailed APIs.
Installation
pip install garage
Algorithms
The table below summarizes the algorithms available in garage; a short usage sketch follows the table.
Algorithm | Framework(s) |
---|---|
CEM | numpy |
CMA-ES | numpy |
REINFORCE (a.k.a. VPG) | PyTorch, TensorFlow |
DDPG | PyTorch, TensorFlow |
DQN | TensorFlow |
DDQN | TensorFlow |
ERWR | TensorFlow |
NPO | TensorFlow |
PPO | PyTorch, TensorFlow |
REPS | TensorFlow |
TD3 | TensorFlow |
TNPG | TensorFlow |
TRPO | PyTorch, TensorFlow |
MAML | PyTorch |
RL2 | TensorFlow |
PEARL | PyTorch |
SAC | PyTorch |
MTSAC | PyTorch |
MTPPO | PyTorch, TensorFlow |
MTTRPO | PyTorch, TensorFlow |
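To show how the table maps to code, the sketch below trains the PyTorch TRPO implementation, loosely following the examples shipped with the garage repository. Constructor arguments (notably the sampler) have changed between releases, and the environment name `InvertedDoublePendulum-v2` assumes a MuJoCo-enabled gym install, so treat this as an illustration rather than a pinned recipe.

```python
# Sketch: training the PyTorch TRPO from the table above, modelled on the
# examples in the garage repository (argument names follow the 2021 release
# line and may differ in other versions).
import torch

from garage import wrap_experiment
from garage.envs import GymEnv
from garage.experiment.deterministic import set_seed
from garage.sampler import LocalSampler
from garage.torch.algos import TRPO
from garage.torch.policies import GaussianMLPPolicy
from garage.torch.value_functions import GaussianMLPValueFunction
from garage.trainer import Trainer


@wrap_experiment
def trpo_pendulum(ctxt=None, seed=1):
    set_seed(seed)
    env = GymEnv('InvertedDoublePendulum-v2')  # assumes MuJoCo is available

    policy = GaussianMLPPolicy(env.spec,
                               hidden_sizes=[32, 32],
                               hidden_nonlinearity=torch.tanh,
                               output_nonlinearity=None)
    value_function = GaussianMLPValueFunction(env_spec=env.spec,
                                              hidden_sizes=(32, 32),
                                              hidden_nonlinearity=torch.tanh,
                                              output_nonlinearity=None)
    sampler = LocalSampler(agents=policy,
                           envs=env,
                           max_episode_length=env.spec.max_episode_length)

    algo = TRPO(env_spec=env.spec,
                policy=policy,
                value_function=value_function,
                sampler=sampler,
                discount=0.99)

    trainer = Trainer(ctxt)
    trainer.setup(algo, env)
    trainer.train(n_epochs=100, batch_size=1024)


trpo_pendulum(seed=1)
```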
Supported Tools and Frameworks
garage supports Python 3.5+.
The package is tested on Ubuntu 18.04. It is also known to run on recent versions of macOS, using Homebrew to install some dependencies. Windows users can install garage via WSL, or by making use of the Docker containers.
We currently support PyTorch and TensorFlow for implementing the neural network portions of RL algorithms, and additions of new framework support are always welcome. PyTorch modules can be found in the package garage.torch and TensorFlow modules can be found in the package garage.tf. Algorithms which do not require neural networks are found in the package garage.np.
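For orientation, the package layout described above looks roughly like this in practice (class names are taken from the algorithm table; importing the TensorFlow and PyTorch modules naturally requires the corresponding framework to be installed):

```python
# Package layout sketch: framework-specific algorithms live in separate packages.
from garage.np.algos import CMAES   # numpy-only algorithm, no neural networks
from garage.torch.algos import SAC  # PyTorch implementation
from garage.tf.algos import TRPO    # TensorFlow implementation
```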
The package is available for download on PyPI, and we ensure that it installs successfully into environments defined using conda, Pipenv, and virtualenv.
All components use the popular gym.Env interface for RL environments.
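Concretely, that interface is the familiar reset/step loop. The sketch below uses the classic gym API (a four-tuple return from step), which is what the gym versions garage pins against expose; CartPole-v1 is chosen purely as an illustration.

```python
# Sketch of the gym.Env protocol that garage components build on
# (classic gym API: step() returns observation, reward, done, info).
import gym

env = gym.make('CartPole-v1')
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # random actions, purely for illustration
    obs, reward, done, info = env.step(action)
env.close()
```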
Testing
The most important feature of garage is its comprehensive automated unit test and benchmarking suite, which helps ensure that the algorithms and modules in garage maintain state-of-the-art performance as the software changes.
Our testing strategy has three pillars:
- Automation: We use continuous integration to test all modules and algorithms in garage before adding any change. The full installation and test suite is also run nightly, to detect regressions.
- Acceptance Testing: Any commit which might change the performance of an algorithm is subjected to comprehensive benchmarks on the relevant algorithms before it is merged.
- Benchmarks and Monitoring: We benchmark the full suite of algorithms against their relevant benchmarks and widely-used implementations regularly, to detect regressions and improvements we may have missed.
Supported Releases
Garage releases a new stable version approximately every 4 months, in February, June, and October. Maintenance releases have a stable API and dependency tree, and receive bug fixes and critical improvements but not new features. We currently support each release for a window of 8 months.
Citing garage
If you use garage for academic research, please cite the repository using the following BibTeX entry. You should update the commit field with the commit or release tag your publication uses.
@misc{garage,
  author = {The garage contributors},
  title = {Garage: A toolkit for reproducible reinforcement learning research},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/rlworkgroup/garage}},
  commit = {be070842071f736eb24f28e4b902a9f144f5c97b}
}
Credits
The original code for garage was adopted from a predecessor project called rllab. The garage project is grateful for the contributions of the original rllab authors, and hopes to continue advancing the state of reproducibility in RL research in the same spirit.
rllab was developed by Rocky Duan (UC Berkeley/OpenAI), Peter Chen (UC Berkeley), Rein Houthooft (UC Berkeley/OpenAI), John Schulman (UC Berkeley/OpenAI), and Pieter Abbeel (UC Berkeley/OpenAI).