# Review2Rebuttal

<p align="center">
  <img src="assets/icon.svg" alt="Review2Rebuttal icon" width="132" />
</p>
<p align="center">
  <strong>From reviews to grounded rebuttals.</strong><br/>
  A file-oriented rebuttal skill for Codex and Claude Code.
</p>
<p align="center">
  <img alt="mode" src="https://img.shields.io/badge/modes-quick%20%7C%20full-243b53">
  <img alt="workflow" src="https://img.shields.io/badge/workflow-stage1--stage5-116466">
  <img alt="artifacts" src="https://img.shields.io/badge/output-on--disk%20artifacts-cf5c36">
  <img alt="agents" src="https://img.shields.io/badge/agents-Codex%20%7C%20Claude%20Code-2d6a4f">
</p>

Review2Rebuttal is a grounded rebuttal skill extracted and generalized from the workflow ideas in the original [RebuttalStudio](https://github.com/runtsang/RebuttalStudio) project. It is a file-based skill and script toolkit that turns paper reviews, paper facts, and repo evidence into a reproducible rebuttal workspace.

Given a paper, reviewer comments, and an optional code repository, the skill parses reviews, splits them into atomic issues, builds evidence-backed reply outlines, compiles first-round rebuttals, supports follow-up rounds, plans reviewer-requested experiments conservatively, and synthesizes final remarks, writing explicit on-disk artifacts at every stage.

Repository-level planning and implementation notes have been moved to ../../internal-docs/review2rebuttal/ so the published skill directory stays focused on usage.

## What It Does

| Capability                                         | Quick Mode | Full Mode |
| -------------------------------------------------- | ---------- | --------- |
| Parse reviewer comments into structured records    | Yes        | Yes       |
| Split reviews into atomic issues                   | Yes        | Yes       |
| Generate response outlines and draft scaffolds     | Yes        | Yes       |
| Compile first-round rebuttal files                 | Yes        | Yes       |
| Build repo-aware evidence maps                     | No         | Yes       |
| Plan reviewer-requested experiments conservatively | No         | Yes       |
| Handle follow-up rounds                            | No         | Yes       |
| Generate reviewer summaries and final remarks      | No         | Yes       |
| Track prompt packets and artifact write-back       | No         | Yes       |

## Visual Overview

![Review2Rebuttal hero](assets/hero.svg)

## Why This Is Different

- It treats rebuttal as an artifact pipeline instead of a single prompt.
- It keeps every important intermediate file on disk for auditability.
- It separates fast drafting from conservative, repo-grounded reasoning.
- It supports both a lightweight quick mode and a complete full mode.

## Quick Start

### 1. Clone the repository

```bash
git clone https://github.com/Henryhe09/Review2Rebuttal.git
cd Review2Rebuttal
```

### 2. For Codex

Tell Codex:

Install the skill from this repo: codex-skills/review2rebuttal

Restart Codex after installation.

### 3. For Claude Code

Tell Claude Code:

Please read codex-skills/review2rebuttal/SKILL.md and use this workflow to help me with my rebuttal.

### 4. Provide the core materials

- paper PDF
- reviewer comments
- code repository
- rebuttal deadline

### 5. What it will help with

- split reviewer comments into atomic issues
- read the paper and codebase together
- design a grounded rebuttal strategy
- discuss and plan additional experiments
- generate the first-round rebuttal
- handle multi-turn follow-up rounds
- produce a complete, traceable, well-structured rebuttal workspace

## Core Outputs

| Stage         | Main Outputs                                                                                     | Purpose                                         |
| ------------- | ------------------------------------------------------------------------------------------------ | ----------------------------------------------- |
| Stage 0       | `rebuttal-config.json`, `STRUCTURE.md`                                                            | Initialize workspace and metadata               |
| Stage 1       | `reviews/parsed/*.json`, `issues/*.json`, `issues/*.md`                                           | Normalize reviews and create atomic issue index |
| Stage 2       | `responses/outlines/*.md`, `responses/drafts/*.md`, `responses/prompts/*-prompt.md`               | Draft point-by-point responses                  |
| Stage 3       | `responses/compiled/*-first-round.md`                                                             | Build first-round rebuttal documents            |
| Stage 4       | `followups/*-context.md`, `followups/prompts/*-prompt.md`                                         | Prepare and draft follow-up responses           |
| Stage 5       | `final/reviewer-summaries.md`, `final/final-remarks.md`, `final/prompts/final-remarks-prompt.md`  | Synthesize final remarks                        |
| Cross-cutting | `evidence/*`, `experiments/*`, `PROMPT_GENERATION_QUEUE.md`                                       | Ground claims and track packet execution        |
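The authoritative artifact format lives in `references/artifact-schema.md`. Purely as an illustration (the field names below are assumptions, not the documented schema), one atomic issue record under `issues/*.json` might look like:

```python
import json

# Hypothetical shape of one atomic issue record. The field names are
# illustrative assumptions; the real format is defined in
# references/artifact-schema.md.
issue = {
    "issue_id": "R1-3",                     # e.g. reviewer 1, third issue
    "reviewer": "Reviewer 1",
    "quote": "The ablation study omits the strongest baseline.",
    "outline": "responses/outlines/R1-3.md",
    "evidence": ["evidence/evidence-map.json"],
}

# Round-trip through JSON to confirm the record is serializable.
restored = json.loads(json.dumps(issue))
print(restored["issue_id"])  # → R1-3
```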

## Project Structure

```
review2rebuttal/
|- README.md
|- SKILL.md
|- agents/
|  |- openai.yaml
|- assets/
|  |- hero.svg
|  |- icon.svg
|  |- workflow.svg
|  |- workspace-map.svg
|- references/
|  |- artifact-schema.md
|  |- experiment-planning.md
|  |- modes.md
|  |- prompt-driven-generation.md
|  |- script-usage.md
|  |- workspace-structure.md
|- scripts/
|  |- apply_prompt_packet.py
|  |- build_evidence_map.py
|  |- build_generation_queue.py
|  |- build_issue_index.py
|  |- build_repo_index.py
|  |- build_reviewer_summaries.py
|  |- build_stage2_prompt_packets.py
|  |- build_stage4_prompt_packets.py
|  |- build_stage5_prompt_packet.py
|  |- compile_first_round.py
|  |- draft_followup_response.py
|  |- generate_final_remarks.py
|  |- generate_followup_context.py
|  |- generate_response_drafts.py
|  |- generate_response_outlines.py
|  |- parse_reviews.py
|  |- plan_experiments.py
|  |- rebuttal_utils.py
|  |- scaffold_rebuttal_project.py
```

## Typical Usage

### Quick Mode

```bash
python scripts/scaffold_rebuttal_project.py ./my-rebuttal
python scripts/parse_reviews.py ./my-rebuttal
python scripts/build_issue_index.py ./my-rebuttal
python scripts/generate_response_outlines.py ./my-rebuttal
python scripts/generate_response_drafts.py ./my-rebuttal
python scripts/compile_first_round.py ./my-rebuttal
```
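The six commands above can be chained with a tiny driver. The sketch below assumes only what the commands already show (each script lives under `scripts/` and takes the workspace path as its argument); `run_quick_mode` is a hypothetical helper, not part of the toolkit:

```python
import subprocess
import sys

# Quick-mode stages in order, as listed above.
QUICK_MODE_STEPS = [
    "scaffold_rebuttal_project.py",
    "parse_reviews.py",
    "build_issue_index.py",
    "generate_response_outlines.py",
    "generate_response_drafts.py",
    "compile_first_round.py",
]

def run_quick_mode(workspace, dry_run=False):
    """Assemble (and optionally execute) the quick-mode pipeline."""
    commands = [[sys.executable, f"scripts/{step}", workspace]
                for step in QUICK_MODE_STEPS]
    if not dry_run:
        for cmd in commands:
            # check=True stops at the first failing stage, so later
            # stages never consume half-written artifacts.
            subprocess.run(cmd, check=True)
    return commands

commands = run_quick_mode("./my-rebuttal", dry_run=True)
print(len(commands))  # → 6
```

Failing fast between stages matters here because every stage reads the files the previous one wrote.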

### Full Mode

```bash
python scripts/scaffold_rebuttal_project.py ./my-rebuttal
python scripts/parse_reviews.py ./my-rebuttal
python scripts/build_issue_index.py ./my-rebuttal
python scripts/build_repo_index.py ./my-rebuttal
python scripts/build_evidence_map.py ./my-rebuttal
python scripts/plan_experiments.py ./my-rebuttal
python scripts/generate_response_outlines.py ./my-rebuttal
python scripts/build_stage2_prompt_packets.py ./my-rebuttal
python scripts/compile_first_round.py ./my-rebuttal
python scripts/generate_followup_context.py ./my-rebuttal
python scripts/build_stage4_prompt_packets.py ./my-rebuttal
python scripts/build_reviewer_summaries.py ./my-rebuttal
python scripts/build_stage5_prompt_packet.py ./my-rebuttal
python scripts/build_generation_queue.py ./my-rebuttal
```

## Output Snapshot

```
my-rebuttal/
|- paper/
|  |- facts.md
|- reviews/
|  |- raw/
|  |- parsed/
|- issues/
|- evidence/
|- responses/
|  |- outlines/
|  |- drafts/
|  |- compiled/
|  |- prompts/
|- followups/
|- experiments/
|- final/
|- PROMPT_GENERATION_QUEUE.md
|- STRUCTURE.md
|- rebuttal-config.json
```
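A workspace can be checked against this snapshot mechanically. The sketch below hard-codes the entries shown above; `missing_entries` is a hypothetical helper, not a script shipped with the toolkit:

```python
import tempfile
from pathlib import Path

# Entries copied from the output snapshot above.
EXPECTED = [
    "paper/facts.md",
    "reviews/raw", "reviews/parsed",
    "issues", "evidence",
    "responses/outlines", "responses/drafts",
    "responses/compiled", "responses/prompts",
    "followups", "experiments", "final",
    "PROMPT_GENERATION_QUEUE.md", "STRUCTURE.md", "rebuttal-config.json",
]

def missing_entries(workspace):
    """Return the expected paths that do not exist under workspace."""
    root = Path(workspace)
    return [p for p in EXPECTED if not (root / p).exists()]

# Against an empty scratch directory, everything is reported missing.
scratch = tempfile.mkdtemp()
print(len(missing_entries(scratch)))  # → 15
```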

## Notes

- This project defaults to conservative, evidence-bounded wording.
- High-risk experimental claims should pass through `responses/author-approvals.json`.
- The strongest outputs come from combining `paper/facts.md`, repo evidence, and prompt packets.
- Process notes and planning drafts live outside the published skill directory under `internal-docs/review2rebuttal/`.
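This README routes high-risk claims through `responses/author-approvals.json` but does not document that file's format. Assuming a flat `{claim_id: bool}` mapping (an assumption; check `references/artifact-schema.md` for the real schema), a conservative guard might look like:

```python
import json
import tempfile
from pathlib import Path

def is_claim_approved(workspace, claim_id):
    """Treat a claim as approved only if it is explicitly marked true.

    The flat {claim_id: bool} layout is an assumption. A missing file
    or a missing key both count as *not* approved, which keeps the
    default behavior conservative.
    """
    path = Path(workspace) / "responses" / "author-approvals.json"
    if not path.exists():
        return False
    approvals = json.loads(path.read_text(encoding="utf-8"))
    return approvals.get(claim_id) is True

# Demo with a throwaway workspace and one approved (hypothetical) claim id.
ws = Path(tempfile.mkdtemp())
(ws / "responses").mkdir()
(ws / "responses" / "author-approvals.json").write_text(
    json.dumps({"R2-1-extra-ablation": True}), encoding="utf-8"
)
print(is_claim_approved(ws, "R2-1-extra-ablation"))  # → True
```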

## Credits

- Workflow inspiration and stage framing from [RebuttalStudio](https://github.com/runtsang/RebuttalStudio).
- This repository focuses on reusable skill logic, on-disk artifacts, and agent-friendly execution rather than a desktop UI.
