ThinkSound
[NeurIPS 2025] PyTorch implementation of ThinkSound, a unified framework for generating audio from any modality, guided by Chain-of-Thought (CoT) reasoning.
Repository layout
This ThinkSound GitHub repository hosts two related projects on separate branches:
| Branch | Project | Documentation |
|--------|---------|----------------|
| master | ThinkSound (NeurIPS 2025) — unified Any2Audio generation with CoT-guided flow matching | This file: README.md |
| prismaudio | PrismAudio — follow-up work (ICLR 2026) on video-to-audio with multi-dimensional CoT-RL | README.md on the prismaudio branch |
For ThinkSound, use branch master (this README). For PrismAudio, check out prismaudio and follow README.md there.
ThinkSound is a unified Any2Audio generation framework with flow matching guided by Chain-of-Thought (CoT) reasoning.
PyTorch implementation for multimodal audio generation and editing: generate or edit audio from video, text, and audio, powered by step-by-step reasoning from Multimodal Large Language Models (MLLMs).
📰 News
- 2026.03.24 🔥 PrismAudio is released in the same repo on branch prismaudio — see README.md there for setup and models.
- 2026.01.26 🎉 PrismAudio accepted to ICLR 2026 Main Conference (code/docs on prismaudio).
- 2025.11.25 🔥 Online PrismAudio Demo is live.
- 2025.11.25 🔥 PrismAudio paper on arXiv — multi-dimensional CoT-RL for video-to-audio.
- 2025.09.19 🎉 ThinkSound accepted to the NeurIPS 2025 Main Conference!
- 2025.09.01 Our AudioCoT dataset is now open-sourced and available on Hugging Face!
- 2025.07.17 🧠 Finetuning enabled: training and finetuning code is now publicly available, along with clear usage instructions to help you customize and extend ThinkSound with your own data.
- 2025.07.15 📦 Simplified installation and usability: dependencies on PyPI for easy cross-platform setup; Windows .bat scripts automate environment creation and script running.
- 2025.07.08 🔧 Major update: lighter-weight model with optimized memory and GPU usage; now supports high-throughput audio generation at scale!
- 2025.07.01 Online demo on Hugging Face Spaces and ModelScope for interactive experience!
- 2025.07.01 Released inference scripts and web interface.
- 2025.06 ThinkSound paper released on arXiv!
- 2025.06 Online Demo is live - try it now!
Follow-up: PrismAudio (same repo, prismaudio branch)
PrismAudio is the successor to ThinkSound (ICLR 2026), developed under a new name but kept in this repository on branch prismaudio. Installation, checkpoints, and citation are in README.md on that branch.
👉 git checkout prismaudio or open the branch on GitHub.
🚀 Features
- Any2Audio: Generate audio from arbitrary modalities — video, text, audio, or their combinations.
- Video-to-Audio SOTA: Achieves state-of-the-art results on multiple V2A benchmarks.
- CoT-Driven Reasoning: Chain-of-Thought reasoning for compositional and controllable audio generation via MLLMs.
- Interactive Object-centric Editing: Refine or edit specific sound events by clicking on visual objects or using text instructions.
- Unified Framework: One foundation model supports generation, editing, and interactive workflows.
✨ Method Overview
ThinkSound decomposes audio generation and editing into three interactive stages, all guided by MLLM-based Chain-of-Thought (CoT) reasoning:
- Foley Generation: Generate foundational, semantically and temporally aligned soundscapes from video.
- Object-Centric Refinement: Refine or add sounds for user-specified objects via clicks or regions in the video.
- Targeted Audio Editing: Modify generated audio using high-level natural language instructions.

⚡ Quick Start
Environment Preparation:
# ThinkSound code: branch master. PrismAudio: clone with -b prismaudio (see README.md on that branch).
git clone -b master https://github.com/liuhuadai/ThinkSound.git
cd ThinkSound
conda create -n thinksound python=3.10
conda activate thinksound
pip install thinksound
conda install -y -c conda-forge 'ffmpeg<7'
# Download the pretrained weights from https://huggingface.co/liuhuadai/ThinkSound into the ckpts/ directory
# (the weights are also mirrored at https://www.modelscope.cn/models/iic/ThinkSound)
git lfs install
git clone https://huggingface.co/liuhuadai/ThinkSound ckpts
# To improve inference and training speed, you may optionally install a FlashAttention backend compatible with your system and PyTorch version.
✅ Windows Tip:
Windows users can simply run setup_windows.bat (or double-click it) to automatically create the conda environment, install all dependencies (including FFmpeg), and download the pretrained model — no manual setup required.
Make sure conda and git are installed and available in your system PATH before running the script.
▶️ Run the Demo
Linux/macOS
chmod +x scripts/demo.sh
./scripts/demo.sh <path-to-your-demo-video> <title> <CoT description> [use-half]
Windows
You can use the provided .bat script instead:
.\scripts\demo.bat <path-to-your-demo-video> <title> <CoT description> [use-half]
Note:
- <path-to-your-demo-video>: The path to a single video file.
- [use-half] (optional): Append use-half to enable half-precision feature extraction.
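A concrete invocation might look like the following sketch; the video path, title, and CoT text are hypothetical placeholders, not assets shipped with the repository.

```shell
# Placeholder arguments; substitute your own video and descriptions.
VIDEO="demo_videos/splash.mp4"
TITLE="Water splash"
COT="A stone drops into a pond, followed by gentle rippling water."
./scripts/demo.sh "$VIDEO" "$TITLE" "$COT" use-half
```

Quoting each argument keeps multi-word titles and CoT descriptions intact when the shell splits the command line.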
📦 Batch Inference
Linux/macOS
chmod +x scripts/eval_batch.sh
./scripts/eval_batch.sh <video_path> <csv_path> <save_path (optional)> [use-half]
Windows
Use the equivalent .bat script:
.\scripts\eval_batch.bat <video_path> <csv_path> <save_path (optional)> [use-half]
Note:
- <video_path>: Path to the root directory containing all .mp4 videos to be processed (all videos must be of equal duration).
- <csv_path>: A CSV file with a text prompt for each video (see demo_test.csv for the format).
- <save_path> (optional): Where to save generated audio. Defaults to results/features.
- [use-half] (optional): Append use-half to enable half-precision feature extraction.
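The authoritative column layout is whatever demo_test.csv uses; purely as an illustration, a prompt CSV along these lines could be assembled (the id and caption column names here are hypothetical placeholders):

```shell
# Hypothetical prompt CSV; the header below is a placeholder, so mirror the
# actual column names from demo_test.csv before running eval_batch.
cat > prompts.csv <<'EOF'
id,caption
dog_bark,A dog barks twice in a quiet backyard.
rain_roof,Steady rain drumming on a tin roof.
EOF
```

Each row pairs a video under <video_path> with its text prompt; the quoted 'EOF' heredoc writes the lines verbatim, with no shell expansion.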
Web Interface Usage
For an interactive experience, launch the Gradio web interface:
python app.py
🏋️ Train the Model
See Training.md
📄 License
This project is released under the Apache 2.0 License.
Note: The code, models, and dataset are for research and educational purposes only. Commercial use is NOT permitted. For commercial licensing, please contact the authors.
📦 Third-Party Components
- Stable Audio Open VAE (by Stability AI): This repository includes a fine-tuned VAE from Stable Audio Open, licensed under the Stability AI Community License. Commercial use and redistribution require prior permission from Stability AI.
- 📘 All other code and models are released under the Apache License 2.0.
Acknowledgements
Many thanks to:
- stable-audio-tools (by Stability AI): For providing an easy-to-use framework for audio generation, as well as the VAE module

