# PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning
<div align="center">Read the Paper | Download Models | Training Data</div>

## 📖 Overview
We introduce PaCoRe (Parallel Coordinated Reasoning), a framework that shifts the driver of inference from sequential depth to coordinated parallel breadth, breaking the model's context limitation and massively scaling test-time compute (TTC):
- Think in Parallel: PaCoRe launches massive parallel exploration trajectories.
- Coordinate in Multiple Rounds: It employs a message-passing architecture that compacts these thoughts into concise messages and synthesizes them to guide the next round.
Trained via large-scale, outcome-based reinforcement learning, PaCoRe masters the reasoning-synthesis capability required to reconcile diverse parallel insights.
The approach yields strong improvements across diverse domains, and notably pushes reasoning beyond frontier systems in mathematics: an 8B model reaches 94.5% on HMMT 2025, surpassing GPT-5’s 93.2% by scaling effective TTC to roughly two million tokens.
We open-source model checkpoints, training data, and the full inference pipeline to accelerate follow-up work!
<p align="center"> <img src="figure/teaser_draft_02.png" width="48%" /> <img src="figure/before_after_train_lcb_02.png" width="48%" /> </p>
Figure 1 | Parallel Coordinated Reasoning (PaCoRe) performance. Left: On HMMT 2025, PaCoRe-8B demonstrates remarkable test-time scaling, yielding steady gains and ultimately surpassing GPT-5. Right: On LiveCodeBench, the RLVR-8B model fails to leverage increased test-time compute, while the PaCoRe-8B model unlocks substantial gains as test-time compute increases.
<p align="center"> <img src="figure/train_reward_response_length_1130.png" width="48%" /> <img src="figure/benchmark_accuracy_1130.png" width="48%" /> </p>

Figure 2 | PaCoRe training dynamics. Left: training reward and response length increase steadily, demonstrating training stability and effectiveness. Right: evaluation on HMMT 2025 and LiveCodeBench (2408-2505). Performance is reported using single-round coordinated reasoning in the PaCoRe inference setting with $\vec{K} = [16]$.
## 🔥 Releases
[2026/02/03] 🚀 PaCoRe Server is now open source!
- 🔗 Effortlessly run PaCoRe with any LLM endpoint you have!
- 🍻 Even better: we've added first-class support for Step 3.5 Flash, StepFun's blazing-fast flagship model, via the OpenRouter free provider!
- 🎁 Check out the Inference Pipeline section to get started!
[2025/12/09] We are excited to release the PaCoRe-8B ecosystem:
- 📝 In-depth Technical Report: PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning.
- 🤖 Model:
- PaCoRe-8B: Our final PaCoRe-trained model checkpoint!
- RLVR-8B-0926: The initial checkpoint of our study, obtained through strong reasoning-oriented post-training on Qwen3-8B-Base.
- 📚 Data: PaCoRe-Train-8k, the high-quality training corpus, covering `opensource_math`, `public_mathcontest`, `synthetic_math`, and `code`:
- 🤗 Stage1-3k: PaCoRe-Train-Stage1-3k
- 🤗 Stage2-5k: PaCoRe-Train-Stage2-5k
## 🔍 Experiments
<table class="tg"> <thead> <tr> <th class="tg-header"></th> <th class="tg-data">AIME 2025</th> <th class="tg-data">HMMT 2025</th> <th class="tg-data">IMO AnswerBench</th> <th class="tg-data">Apex</th> <th class="tg-data">LiveCodeBench</th> <th class="tg-data">HLE<sub>text</sub></th> <th class="tg-data">MultiChallenge</th> </tr> </thead> <tbody> <tr> <td class="tg-header">GPT-5</td> <td class="tg-data">93.5 (13k)</td> <td class="tg-data">93.2 (16k)</td> <td class="tg-data">72.9 (26k)</td> <td class="tg-data">1.0 (33k)</td> <td class="tg-data"><b>83.5</b> (13k)</td> <td class="tg-data"><b>26.0</b> (14k)</td> <td class="tg-data"><b>71.1</b> (5.0k)</td> </tr> <tr> <td class="tg-header">Qwen3-235B-Thinking</td> <td class="tg-data">91.6 (26k)</td> <td class="tg-data">82.3 (32k)</td> <td class="tg-data">71.7 (34k)</td> <td class="tg-data"><b>3.3</b> (46k)</td> <td class="tg-data">74.5 (21k)</td> <td class="tg-data">18.2 (23k)</td> <td class="tg-data">60.3 (1.6k)</td> </tr> <tr> <td class="tg-header">GLM-4.6</td> <td class="tg-data">92.3 (20k)</td> <td class="tg-data">88.7 (25k)</td> <td class="tg-data">73.5 (37k)</td> <td class="tg-data">0.7 (53k)</td> <td class="tg-data">79.5 (19k)</td> <td class="tg-data">17.2 (21k)</td> <td class="tg-data">54.9 (2.2k)</td> </tr> <tr> <td class="tg-header">DeepSeek-v3.1<sup>*</sup></td> <td class="tg-data">90.2 (16k)</td> <td class="tg-data">86.1 (20k)</td> <td class="tg-data">63.0 (27k)</td> <td class="tg-data">1.4 (36k)</td> <td class="tg-data">74.9 (11k)</td> <td class="tg-data">19.3 (18k)</td> <td class="tg-data">54.4 (1.1k)</td> </tr> <tr class="tg-midrule"> <td class="tg-header">Kimi-K2-Thinking</td> <td class="tg-data"><b>95.3</b> (25k)</td> <td class="tg-data">86.5 (33k)</td> <td class="tg-data">76.5 (44k)</td> <td class="tg-data">0.8 (60k)</td> <td class="tg-data">79.2 (25k)</td> <td class="tg-data">23.9 (29k)</td> <td class="tg-data">66.4 (1.6k)</td> </tr> <tr class="tg-midrule"> <td class="tg-header">RLVR-8B</td> <td 
class="tg-data">84.1 (50k)</td> <td class="tg-data">75.4 (48k)</td> <td class="tg-data">64.6 (56k)</td> <td class="tg-data">0.0 (65k)</td> <td class="tg-data">70.6 (34k)</td> <td class="tg-data">9.3 (35k)</td> <td class="tg-data">33.3 (1.7k)</td> </tr> <tr> <td class="tg-header"><b>PaCoRe-8B (low)</b></td> <td class="tg-data">89.7 (255k)</td> <td class="tg-data">88.1 (243k)</td> <td class="tg-data">76.1 (306k)</td> <td class="tg-data">0.7 (362k)</td> <td class="tg-data">75.8 (188k)</td> <td class="tg-data">13.0 (196k)</td> <td class="tg-data">41.8 (13k)</td> </tr> <tr> <td class="tg-header"><b>PaCoRe-8B (medium)</b></td> <td class="tg-data">92.5 (908k)</td> <td class="tg-data">92.9 (869k)</td> <td class="tg-data">77.3 (1080k)</td> <td class="tg-data">1.4 (1280k)</td> <td class="tg-data">76.7 (659k)</td> <td class="tg-data">14.6 (694k)</td> <td class="tg-data">45.7 (45k)</td> </tr> <tr class="tg-bottom"> <td class="tg-header"><b>PaCoRe-8B (high)</b></td> <td class="tg-data">93.7 (1873k)</td> <td class="tg-data"><b>94.5</b> (1796k)</td> <td class="tg-data"><b>78.4</b> (2258k)</td> <td class="tg-data">2.3 (2679k)</td> <td class="tg-data">78.2 (1391k)</td> <td class="tg-data">16.0 (1451k)</td> <td class="tg-data">48.0 (95.3k)</td> </tr> </tbody> </table>

Table 1 | For each benchmark, we report accuracy together with total TTC (in thousands of tokens). For Low, Medium, and High, we use the inference trajectory configurations $\vec{K}=[4]$, $[16]$, and $[32, 4]$, respectively. * DeepSeek-V3.1 refers to the Terminus version.
### Key Findings
- Message Passing Unlocks Scaling. Without compaction, performance flatlines at the context limit. PaCoRe breaks the memory barrier and lets reasoning scale freely.
- Breadth > Depth. Not all compute is equal: coordinated parallel reasoning delivers far higher returns than extending a single chain.
- Data as a Force Multiplier. The PaCoRe corpus provides exceptionally valuable supervision—even baseline models see substantial gains when trained on it.
## Getting Started 🚀
### Data
The data is provided as a `list[dict]`, where each entry represents a training instance:

- `conversation`: The original problem/prompt messages.
- `responses`: A list of cached generated responses (trajectories). These serve as the input messages ($M$) used during PaCoRe training.
- `ground_truth`: The verifiable answer used for correctness evaluation.
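A minimal illustration of one training instance follows. The field names come from the schema above; the problem, trajectory text, and answer are invented placeholders, not samples from the released corpus.

```python
# One training instance with the three fields described above.
# All content values here are illustrative placeholders.
instance = {
    "conversation": [
        {"role": "user", "content": "Compute the last digit of 7^2025."}
    ],
    "responses": [
        # Cached trajectories; these become the input messages M during training.
        "Trajectory 1: last digits of powers of 7 cycle 7, 9, 3, 1 with period 4; "
        "2025 mod 4 = 1, so the last digit is 7.",
        "Trajectory 2: direct modular exponentiation gives 7^2025 mod 10 = 7.",
    ],
    "ground_truth": "7",  # verifiable answer for correctness evaluation
}

# Sanity check: exactly the three schema fields are present.
assert set(instance) == {"conversation", "responses", "ground_truth"}
```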
### Model Serving
You can serve the model directly with `vllm serve`! Further inference details are covered in the Inference Pipeline section.
## Inference Pipeline

Figure 3 | Inference pipeline of PaCoRe. Each round launches broad parallel exploration, compacts the resulting trajectories into concise messages, and feeds these messages, together with the question, forward to coordinate the next round. Repeating this process $\hat{R}$ times yields multi-million-token effective TTC while respecting fixed context limits; the final compacted message serves as the system's answer.
We will explain the PaCoRe inference pipeline in this section.
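The round structure can be sketched in a few lines of Python. This is an illustrative sketch, not the released pipeline: `generate` and `compact` are stand-in stubs for calls to the served model, and `k_schedule` plays the role of the trajectory configuration $\vec{K}$.

```python
from concurrent.futures import ThreadPoolExecutor

def generate(question: str, messages: list[str]) -> str:
    """Stub for one exploration trajectory; a real call would hit the served model."""
    return f"trajectory conditioned on {len(messages)} prior message(s)"

def compact(question: str, trajectories: list[str]) -> str:
    """Stub for compacting parallel trajectories into one concise message."""
    return f"summary of {len(trajectories)} trajectories"

def pacore(question: str, k_schedule: list[int]) -> str:
    """Multi-round parallel coordinated reasoning.

    Each round launches K parallel trajectories conditioned on the
    compacted message from the previous round, then compacts them.
    The final compacted message is the system's answer.
    """
    messages: list[str] = []
    for k in k_schedule:
        with ThreadPoolExecutor(max_workers=k) as pool:
            trajectories = list(
                pool.map(lambda _: generate(question, messages), range(k))
            )
        messages = [compact(question, trajectories)]
    return messages[0]

# e.g. the "high" configuration from Table 1: 32 trajectories, then 4
answer = pacore("What is 2 + 2?", k_schedule=[32, 4])
```

The schedule `[32, 4]` mirrors the $\vec{K}=[32, 4]$ setting: a wide first round for exploration, then a narrow second round to reconcile the compacted insights.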
### PaCoRe Server Mode (Recommended)
You can run PaCoRe as an OpenAI-compatible server that proxies requests through any upstream LLM provider (vLLM, OpenRouter, etc.) while applying the PaCoRe multi-round parallel reasoning pipeline.
#### Example: Using OpenRouter as the upstream provider
First, install this package:

```shell
pip install -e .
```

Then do the following steps:

- Set your OpenRouter API key:

  ```shell
  export OPENROUTER_API_KEY='sk-or-...'
  ```

- Start the PaCoRe server:

  ```shell
  python playground/example_pacore_server_op
  ```
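Once the server is running, it can be queried like any OpenAI-compatible endpoint. The sketch below builds a chat-completions request with only the standard library; the base URL, port, and model name (`pacore-8b`) are assumptions for illustration, so check your server's startup output for the actual values.

```python
import json
import urllib.request

# Assumed defaults for a local PaCoRe server; adjust to your deployment.
BASE_URL = "http://localhost:8000/v1"
MODEL = "pacore-8b"  # hypothetical model name

# Standard OpenAI-style chat-completions payload.
payload = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "Prove that sqrt(2) is irrational."}
    ],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is up:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the server is OpenAI-compatible, the official `openai` Python client pointed at `BASE_URL` should work just as well as raw HTTP.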