OpenRL v0.2.1 was released on Dec 20, 2023.
The main branch is the latest version of OpenRL and is under active development. If you just want to try OpenRL, you can switch to the stable branch.
Welcome to OpenRL
Documentation | Introduction (Chinese) | Documentation (Chinese)
Crafting Reinforcement Learning Frameworks with Passion. Your Valuable Insights Are Welcome.

OpenRL is an open-source general reinforcement learning research framework that supports training for a variety of tasks, including single-agent, multi-agent, offline RL, self-play, and natural language tasks. Built on PyTorch, OpenRL aims to provide a simple-to-use, flexible, efficient, and sustainable platform for the reinforcement learning research community.
Currently, the features supported by OpenRL include:
- A simple-to-use universal interface that supports training for all tasks and environments (see the quick-start sketch after this list)
- Support for both single-agent and multi-agent tasks (a multi-agent sketch follows the environment list below)
- Support for offline RL training with expert datasets
- Support for self-play training
- Reinforcement learning training support for natural language tasks (such as dialogue)
- Support for DeepSpeed
- Support for Arena, which allows convenient evaluation of various agents (even submissions for JiDi) in a competitive environment
- Importing models and datasets from Hugging Face, including loading Stable-Baselines3 models from Hugging Face for testing and training
- A tutorial on how to integrate user-defined environments into OpenRL
- Support for models such as LSTM, GRU, and Transformer
- Multiple training acceleration methods, including automatic mixed-precision training and data collection with a half-precision policy network
- Support for user-defined training models, reward models, training data, and environments
- Support for Gymnasium environments
- Support for callbacks, which can be used to implement functions such as logging, saving, and early stopping
- Dictionary observation space support
- Support for popular visualization tools such as wandb and tensorboardX
- Serial or parallel environment training, with consistent results guaranteed in both modes
- Chinese and English documentation
- Unit testing and code coverage testing
- Compliance with the Black code style and type checking
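As a quick illustration of the universal interface, here is a minimal training sketch adapted from OpenRL's documented quick-start example. It trains a PPO agent on CartPole; the only assumption is that OpenRL is installed (`pip install openrl`).

```python
# train_ppo.py: minimal quick-start sketch, adapted from OpenRL's quick-start example
from openrl.envs.common import make
from openrl.modules.common import PPONet as Net
from openrl.runners.common import PPOAgent as Agent

env = make("CartPole-v1", env_num=9)  # create the environment with parallelism set to 9
net = Net(env)                        # create the neural network
agent = Agent(net)                    # initialize the trainer
agent.train(total_time_steps=20000)   # train for a total of 20,000 environment steps
```

Switching tasks mostly means changing the environment name and the chosen Net/Agent pair; the make/Net/Agent pattern stays the same across single-agent, multi-agent, and NLP training.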
Algorithms currently supported by OpenRL (for more details, please refer to Gallery):
- Proximal Policy Optimization (PPO)
- Dual-clip PPO
- Multi-agent PPO (MAPPO)
- Joint-ratio Policy Optimization (JRPO)
- Generative Adversarial Imitation Learning (GAIL)
- Behavior Cloning (BC)
- Advantage Actor-Critic (A2C)
- Self-Play
- Deep Q-Network (DQN)
- Multi-Agent Transformer (MAT)
- Value-Decomposition Network (VDN)
- Soft Actor-Critic (SAC)
- Deep Deterministic Policy Gradient (DDPG)
Environments currently supported by OpenRL (for more details, please refer to Gallery):
- Gymnasium
- MuJoCo
- PettingZoo
- MPE
- Chat Bot
- Atari
- StarCraft II
- SMACv2
- Omniverse Isaac Gym
- DeepMind Control
- Snake
- gym-pybullet-drones
- EnvPool
- GridWorld
- Super Mario Bros
- Gym Retro
- Crafter
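The same interface extends to the multi-agent environments listed above. Below is a sketch adapted from OpenRL's MPE example; `simple_spread` is MPE's cooperative navigation task, and `device="cuda"` assumes a GPU is available.

```python
# train_mpe.py: multi-agent training sketch, adapted from OpenRL's MPE example
from openrl.envs.common import make
from openrl.modules.common import PPONet as Net
from openrl.runners.common import PPOAgent as Agent

def train():
    # create the MPE simple_spread environment with 100 asynchronous parallel copies
    env = make("simple_spread", env_num=100, asynchronous=True)
    net = Net(env, device="cuda")  # assumes a GPU; pass device="cpu" otherwise
    agent = Agent(net)             # the PPO trainer handles the multi-agent case (MAPPO-style)
    agent.train(total_time_steps=5000000)

if __name__ == "__main__":
    train()
```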
This framework has undergone multiple iterations by the OpenRL-Lab team, which has applied it in academic research. It has now become a mature reinforcement learning framework.
OpenRL-Lab will continue to maintain and update OpenRL, and we welcome everyone to join our open-source community to contribute towards the development of reinforcement learning.
For more information about OpenRL, please refer to the documentation.
Outline
- Welcome to OpenRL
- Outline
- Why OpenRL?
- Installation
- Use Docker
- Quick Start
- Gallery
- Projects Using OpenRL
- Feedback and Contribution
- Maintainers
- Supporters
- Citing OpenRL
- License
- Acknowledgments
Why OpenRL?
Here we provide a table comparing OpenRL with other popular RL libraries. OpenRL employs a modular design and high-level abstractions, allowing users to train on a variety of tasks through a unified, user-friendly interface.
| Library | NLP/RLHF | Multi-agent | Self-Play Training | Offline RL | DeepSpeed |
|:---:|:---:|:---:|:---:|:---:|:---:|
| OpenRL | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Stable Baselines3 | :x: | :x: | :x: | :x: | :x: |
| Ray/RLlib | :x: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
| DI-engine | … | … | … | … | … |