
YuE

YuE: an open full-song music generation foundation model, similar to Suno.ai but open source

Install / Use

/learn @multimodal-art-projection/YuE

README

<p align="center"> <picture> <source srcset="./assets/logo/黑底.svg" media="(prefers-color-scheme: dark)"> <img src="./assets/logo/白底.svg" width="40%"> </picture> </p> <p align="center"> <a href="https://map-yue.github.io/">Demo 🎶</a> &nbsp;|&nbsp; 📑 <a href="https://arxiv.org/abs/2503.08638">Paper</a> <br> <a href="https://huggingface.co/m-a-p/YuE-s1-7B-anneal-en-cot">YuE-s1-7B-anneal-en-cot 🤗</a> &nbsp;|&nbsp; <a href="https://huggingface.co/m-a-p/YuE-s1-7B-anneal-en-icl">YuE-s1-7B-anneal-en-icl 🤗</a> &nbsp;|&nbsp; <a href="https://huggingface.co/m-a-p/YuE-s1-7B-anneal-jp-kr-cot">YuE-s1-7B-anneal-jp-kr-cot 🤗</a> <br> <a href="https://huggingface.co/m-a-p/YuE-s1-7B-anneal-jp-kr-icl">YuE-s1-7B-anneal-jp-kr-icl 🤗</a> &nbsp;|&nbsp; <a href="https://huggingface.co/m-a-p/YuE-s1-7B-anneal-zh-cot">YuE-s1-7B-anneal-zh-cot 🤗</a> &nbsp;|&nbsp; <a href="https://huggingface.co/m-a-p/YuE-s1-7B-anneal-zh-icl">YuE-s1-7B-anneal-zh-icl 🤗</a> <br> <a href="https://huggingface.co/m-a-p/YuE-s2-1B-general">YuE-s2-1B-general 🤗</a> &nbsp;|&nbsp; <a href="https://huggingface.co/m-a-p/YuE-upsampler">YuE-upsampler 🤗</a> </p>

Our model's name is YuE (乐). In Chinese, the word means "music" and "happiness." Some of you may find words that start with Yu hard to pronounce. If so, you can just call it "yeah." We wrote a song with our model's name, see here.

YuE is a groundbreaking series of open-source foundation models designed for music generation, specifically for transforming lyrics into full songs (lyrics2song). It can generate a complete song, lasting several minutes, that includes both a catchy vocal track and accompaniment track. YuE is capable of modeling diverse genres/languages/vocal techniques. Please visit the Demo Page for amazing vocal performance.

News and Updates

  • 📌 Join Us on Discord! <img alt="join discord" src="https://img.shields.io/discord/842440537755353128?color=%237289da&logo=discord"/>

  • 2025.06.04 🔥 YuE now supports LoRA fine-tuning.

  • 2025.03.12 🔥 Paper Released 🎉: The YuE technical report is now available! We discuss all the technical details, findings, and lessons learned. Enjoy, and feel free to cite us~

  • 2025.03.11 🫶 Now YuE supports incremental song generation!!! See YuE-UI by joeljuvel. YuE-UI is a Gradio-based interface supporting batch generation, output selection, and continuation. You can flexibly experiment with audio prompts and different model settings, visualize your progress on an interactive timeline, rewind actions, quickly preview audio outputs at stage 1 before committing to refinement, and fully save/load your sessions (JSON format). Optimized to run smoothly even on GPUs with just 8GB VRAM using quantized models.

  • 2025.02.17 🫶 Now YuE supports music continuation and Google Colab! See YuE-extend by Mozer.

  • 2025.02.07 🎉 Get YuE for Windows on pinokio.

  • 2025.01.30 🔥 Inference Update: We now support dual-track ICL mode! You can prompt the model with a reference song, and it will generate a new song in a similar style (voice cloning demo by @abrakjamson, music style transfer demo by @cocktailpeanut, etc.). Try it out! 🔥🔥🔥 P.S. Be sure to check out the demos first—they're truly impressive.

  • 2025.01.30 🔥 Announcement: A New Era Under Apache 2.0 🔥: We are thrilled to announce that, in response to overwhelming requests from our community, YuE is now officially licensed under the Apache 2.0 license. We sincerely hope this marks a watershed moment—akin to what Stable Diffusion and LLaMA have achieved in their respective fields—for music generation and creative AI. 🎉🎉🎉

  • 2025.01.29 🎉: We have updated the license description. We ENCOURAGE artists and content creators to sample and incorporate outputs generated by our model into their own works, and even monetize them. The only requirement is to credit our name: YuE by HKUST/M-A-P (alphabetic order).

  • 2025.01.28 🫶: Thanks to Fahd for creating a tutorial on how to quickly get started with YuE. Here is his demonstration.

  • 2025.01.26 🔥: We have released the YuE series.

<br>

TODOs📋

  • [ ] Support stemgen mode https://github.com/multimodal-art-projection/YuE/issues/21
  • [ ] Support llama.cpp https://github.com/ggerganov/llama.cpp/issues/11467
  • [ ] Support transformers tensor parallel. https://github.com/multimodal-art-projection/YuE/issues/7
  • [ ] Online serving on huggingface space.
  • [ ] Support vLLM and sglang https://github.com/multimodal-art-projection/YuE/issues/66
  • [x] Release paper to Arxiv.
  • [x] Example LoRA finetune code using 🤗 Transformers.
  • [x] Support Colab: YuE-extend by Mozer
  • [x] Support gradio interface. https://github.com/multimodal-art-projection/YuE/issues/1
  • [x] Support dual-track ICL mode.
  • [x] Fix "instrumental" naming bug in output files. https://github.com/multimodal-art-projection/YuE/pull/26
  • [x] Support seeding https://github.com/multimodal-art-projection/YuE/issues/20
  • [x] Allow --repetition_penalty to customize repetition penalty. https://github.com/multimodal-art-projection/YuE/issues/45

Hardware and Performance

GPU Memory

YuE requires significant GPU memory for generating long sequences. Below are the recommended configurations:

  • For GPUs with 24GB memory or less: Run up to 2 sessions to avoid out-of-memory (OOM) errors. Thanks to the community, there are YuE-exllamav2 and YuEGP for those with limited GPU resources. While both enhance generation speed and coherence, they may compromise musicality. (P.S. Better prompts & ICL help!)
  • For full song generation (many sessions, e.g., 4 or more): Use GPUs with at least 80GB memory, e.g., an H800, an A100, or multiple RTX 4090s with tensor parallelism.

To customize the number of sessions, specify the desired session count in the interface. By default, the model runs 2 sessions (1 verse + 1 chorus) to avoid OOM issues.

Execution Time

On an H800 GPU, generating 30s audio takes 150 seconds. On an RTX 4090 GPU, generating 30s audio takes approximately 360 seconds.
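These figures work out to roughly 5x real-time on an H800 and 12x on an RTX 4090. A small helper (purely illustrative arithmetic, not part of the repo) extrapolates to longer songs:

```python
# Seconds of compute per second of generated audio, from the timings above.
SLOWDOWN = {"h800": 150 / 30, "rtx4090": 360 / 30}

def estimated_wall_time(audio_seconds, gpu="h800"):
    """Extrapolate generation wall-time linearly from the quoted benchmarks."""
    return audio_seconds * SLOWDOWN[gpu]

print(estimated_wall_time(30, "h800"))      # 150.0 seconds, as quoted
print(estimated_wall_time(180, "rtx4090"))  # a 3-minute song: 2160 s, about 36 min
```

Real scaling is likely worse than linear for very long sequences, so treat these numbers as a lower bound.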


🪟 Windows Users Quickstart

🐧 Linux/WSL Users Quickstart

For a quick start, watch this video tutorial by Fahd: Watch here.
If you're new to machine learning or the command line, we highly recommend watching this video first.

To use a GUI/Gradio interface, check out community projects such as YuE-UI and YuEGP (see News and Updates above).

1. Install environment and dependencies

Make sure FlashAttention 2 is properly installed to reduce VRAM usage.

# We recommend using conda to create a new environment.
conda create -n yue python=3.8 # Python >=3.8 is recommended.
conda activate yue
# install cuda >= 11.8
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
pip install -r <(curl -sSL https://raw.githubusercontent.com/multimodal-art-projection/YuE/main/requirements.txt)

# For saving GPU memory, FlashAttention 2 is mandatory. 
# Without it, long audio may lead to out-of-memory (OOM) errors.
# Be careful about matching the cuda version and flash-attn version
pip install flash-attn --no-build-isolation
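Before attempting a long generation run, a quick sanity check (a hypothetical helper, not part of the repo) confirms the key packages are importable:

```python
import importlib.util

def installed(module_name):
    """Return True if the module can be found on the current Python path."""
    return importlib.util.find_spec(module_name) is not None

# FlashAttention 2 is mandatory for long audio (see the note above).
for mod in ("torch", "torchaudio", "flash_attn"):
    print(f"{mod}: {'OK' if installed(mod) else 'MISSING'}")
```

If `flash_attn` reports MISSING, re-run the `pip install flash-attn --no-build-isolation` step and check that your CUDA and flash-attn versions match.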

2. Download the infer code and tokenizer

# Make sure you have git-lfs installed (https://git-lfs.com)
# if you don't have root, see https://github.com/git-lfs/git-lfs/issues/4134#issuecomment-1635204943
sudo apt update
sudo apt install git-lfs
git lfs install
git clone https://github.com/multimodal-art-projection/YuE.git

cd YuE/inference/
git clone https://huggingface.co/m-a-p/xcodec_mini_infer

3. Run the inference

Now generate music with YuE using 🤗 Transformers. Make sure steps 1 and 2 are properly set up.

Note:

  • Set --run_n_segments to the number of lyric sections if you want to generate a full song. Additionally, you can increase --stage2_batch_size based on your available GPU memory.

  • You may customize the prompt in genre.txt and lyrics.txt. See prompt engineering guide here.

  • You can increase --stage2_batch_size to speed up inference, but watch out for OOM errors.

  • LM checkpoints will be downloaded automatically from Hugging Face.
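As an illustration of the expected prompt format (the exact tags and section labels below are assumptions; see the repo's prompt_egs/ directory and the prompt engineering guide for authoritative examples), genre.txt holds space-separated style tags and lyrics.txt uses bracketed section labels with sections separated by blank lines:

```text
# genre.txt — space-separated tags (genre, mood, instrument, vocal timbre)
inspiring female uplifting pop airy vocal electronic bright vocal

# lyrics.txt — bracketed section labels, one blank line between sections
[verse]
Staring at the sunset, colors paint the sky
...

[chorus]
...
```

The number of bracketed sections in lyrics.txt is what --run_n_segments counts against when generating a full song.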

# This is the CoT mode.
cd YuE/inference/
python infer.py \
    --cuda_idx 0 \
    --stage1_model m-a-p/YuE-s1-7B-anneal-en-cot \
    --stage2_model m-a-p/YuE-s2-1B-general \
    --genre_txt ../prompt_egs/genre.txt \
    --lyrics_txt ../prompt_egs/lyrics.txt \
    --run_n_segments 2 \
    --stage2_batch_size 4 \
    --output_dir ../output \
    --max_new_tokens 3000 \
    --repetition_penalty 1.1

We also support music in-context learning (provide a reference song). There are two types: single-track (mix/vocal/instrumental) and dual-track.

Note:

  • ICL requires a different checkpoint, e.g. m-a-p/YuE-s1-7B-anneal-en-icl.

  • Music ICL generally requires a 30s audio segment. The model will write new songs in a similar style to the provided audio, which may improve musicality.
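A single-track ICL run might look like the sketch below. The --use_audio_prompt, --audio_prompt_path, --prompt_start_time, and --prompt_end_time flag names, as well as the ref_song.mp3 path, are assumptions for illustration; confirm the actual flags against `python infer.py --help` before running.

```shell
# Sketch of single-track ICL (flag names are assumptions; verify with --help).
cd YuE/inference/
python infer.py \
    --cuda_idx 0 \
    --stage1_model m-a-p/YuE-s1-7B-anneal-en-icl \
    --stage2_model m-a-p/YuE-s2-1B-general \
    --genre_txt ../prompt_egs/genre.txt \
    --lyrics_txt ../prompt_egs/lyrics.txt \
    --run_n_segments 2 \
    --stage2_batch_size 4 \
    --output_dir ../output \
    --max_new_tokens 3000 \
    --use_audio_prompt \
    --audio_prompt_path ../prompt_egs/ref_song.mp3 \
    --prompt_start_time 0 \
    --prompt_end_time 30
```

Note the ICL checkpoint (m-a-p/YuE-s1-7B-anneal-en-icl) replaces the CoT one; the rest of the arguments mirror the CoT example above.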
