MMEdit
An audio editing diffusion model
Introduction
🟣 MMEdit is an audio editing model built upon the powerful Qwen2-Audio 7B. It leverages the robust audio understanding and instruction-following capabilities of the large language model to achieve precise and high-fidelity audio editing.
Model Download
| Models | 🤗 Hugging Face |
|--------|-----------------|
| MMEdit | [MMEdit](https://huggingface.co/CocoBro/MMEdit) |
Download our pretrained model into `./ckpt/mmedit/`.
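For example, using the Hugging Face CLI (the same command is listed in the installation steps below):

```bash
huggingface-cli download CocoBro/MMEdit --local-dir ./ckpt/mmedit
```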
Model Usage
🔧 Dependencies and Installation
- Python >= 3.10
- PyTorch >= 2.5.0
- CUDA Toolkit
- Dependent models:
- Qwen/Qwen2-Audio-7B-Instruct, download into `./ckpt/qwen2-audio-7B-instruct/`
# 1. Clone the repository
git clone https://github.com/xycs6k8r2Anonymous/MMEdit.git
cd MMEdit
# 2. Create environment
conda create -n mmedit python=3.10 -y
conda activate mmedit
# 3. Install PyTorch and dependencies
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
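As an optional sanity check, you can confirm that the PyTorch install sees your GPU:

```bash
# Confirm the installed PyTorch version and CUDA availability
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```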
# Download Qwen2-Audio-7B-Instruct
huggingface-cli download Qwen/Qwen2-Audio-7B-Instruct --local-dir ./ckpt/qwen2-audio-7B-instruct
# Download MMEdit (Our Model)
huggingface-cli download CocoBro/MMEdit --local-dir ./ckpt/mmedit
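After both downloads complete, the checkpoint directory should look like:

```
ckpt/
├── mmedit/                    # MMEdit weights
└── qwen2-audio-7B-instruct/   # Qwen2-Audio-7B-Instruct weights
```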
📂 Data Preparation
For detailed instructions on the data pipeline and the dataset structure used for training, please refer to our separate documentation:
👉 Data Pipeline & Preparation Guide
⚡ Quick Start
1. Inference
You can quickly generate example audio with the following command; the bundled example covers the `add` and `drop` edit types:
bash bash_scripts/infer_single.sh
The output will be saved at inference/example.
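To inspect the generated files:

```bash
ls inference/example
```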
🚀 Usage
1. Configuration
Before running inference or training, please check configs/config.yaml. The project uses Hydra for configuration management, allowing easy overrides via the command line, as sketched below.
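As a sketch only (the entry-point script and key names below are placeholders; the real ones are defined by configs/config.yaml and the scripts under bash_scripts/), a Hydra command-line override looks like:

```bash
# Hypothetical Hydra overrides; replace the script and keys with the real ones
python inference.py inference.num_steps=50 data.sample_rate=16000
```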
2. Inference
To run batch inference using the provided scripts:
bash bash_scripts/inference.sh
3. Training
Ensure you have downloaded the Qwen2-Audio-7B-Instruct checkpoint to ./ckpt/qwen2-audio-7B-instruct and prepared your data according to the Data Pipeline Guide.
The following single-GPU script serves as a sanity check that the training pipeline is correctly wired up:
# Launch training on a single GPU
bash bash_scripts/train_edit_1gpu.sh
If you need to modify the dataset, please edit the configuration files under:
configs/data/
Other training-related hyperparameters and settings can be adjusted in:
configs/train.yaml
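For example, assuming the training script forwards extra arguments as Hydra overrides (the key names here are placeholders; the real ones live in configs/train.yaml and configs/data/):

```bash
# Hypothetical overrides; check configs/train.yaml for the actual key names
bash bash_scripts/train_edit_1gpu.sh optim.lr=1e-4 data=my_dataset
```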
For a more detailed end-to-end training tutorial and configuration examples, please refer to: https://github.com/wsntxxn/UniFlow-Audio
📝 Todo
- [x] Release inference code and checkpoints.
- [x] Add HuggingFace Gradio Demo.
- [ ] Release training scripts.
- [ ] Release evaluation metrics and post-processing tools.
🤝 Acknowledgement
We thank the following open-source projects for their inspiration and code:
- Qwen2-Audio
- UniFlow-Audio
🖊️ Citation
If you find this project useful, please cite our paper:
@article{mmedit2024,
title={MMEdit: Audio Generation based on Qwen2-Audio 7B},
author={Your Name and Collaborators},
journal={arXiv preprint arXiv:25xx.xxxxx},
year={2024}
}
