OpenRFT: Adapting Reasoning Foundation Model for Domain-Specific Tasks with Reinforcement Fine-Tuning
OpenRFT is an open-source project that aims to adapt generalist reasoning foundation models to domain-specific tasks through Reinforcement Fine-Tuning (RFT). By leveraging domain-specific samples, OpenRFT addresses challenges such as the lack of reasoning step data and the limited quantity of training samples, enabling efficient fine-tuning for domain-specific tasks.
<div align="center"> <img src="assets/main_fig.png" width="600" /> </div>

News
- 2024-12-22
  - Updated the training and evaluation code for OpenRFT.
- 2024-12-21
  - Updated the technical report for OpenRFT.
Dependency
The training code for this project relies on OpenRLHF and trl.
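A minimal environment setup might look like the following sketch. The clone URL is inferred from the project's organization name, and the unpinned package versions are assumptions; consult the OpenRLHF and trl install instructions for version and CUDA requirements.

```shell
# Sketch of a setup for OpenRFT's training dependencies.
# The repository URL and unpinned versions are illustrative assumptions.
git clone https://github.com/ADaM-BJTU/OpenRFT.git
cd OpenRFT

# OpenRLHF and trl are both published on PyPI.
pip install openrlhf trl
```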
Project Structure
The OpenRFT project is organized as follows:
OpenRFT/
├── assets/
├── report/
├── src/ # Main source code for the project.
│ ├── evaluate/ # Scripts and utilities for model evaluation; can also be used to sample data.
│ ├── PPO/ # PPO implementation for reinforcement learning; the key component is a remote reward-service startup script.
│ └── SFT/ # Supervised Fine-Tuning (SFT) code for initial training using domain-specific samples.
├── LICENSE # Licensing information for the project.
└── README.md
License
This work is released under the MIT License; see the LICENSE file for details. By using this code or the associated materials, you agree to the terms outlined in the license.
