Demucs Music Source Separation
Code for the paper "Hybrid Spectrogram and Waveform Source Separation".
Important: As I am no longer working at Meta, this repository is no longer maintained. I've created a fork at github.com/adefossez/demucs. The project is not actively developed; only important bug fixes will be processed on the new repo. Please do not open issues for feature requests or if Demucs doesn't work perfectly for your use case :)
This is the 4th release of Demucs (v4), featuring Hybrid Transformer based source separation.
For the classic Hybrid Demucs (v3): [go to this commit][demucs_v3].
If you are experiencing issues and want the old Demucs back, please file an issue; you can then revert to Demucs v3 with
`git checkout v3`. You can also go back to [Demucs v2][demucs_v2].
Demucs is a state-of-the-art music source separation model, currently capable of separating drums, bass, and vocals from the rest of the accompaniment. Demucs is based on a U-Net convolutional architecture inspired by [Wave-U-Net][waveunet]. The v4 version features [Hybrid Transformer Demucs][htdemucs], a hybrid spectrogram/waveform separation model using Transformers. It is based on [Hybrid Demucs][hybrid_paper] (also provided in this repo), with the innermost layers replaced by a cross-domain Transformer Encoder. This Transformer uses self-attention within each domain, and cross-attention across domains. The model achieves an SDR of 9.00 dB on the MUSDB HQ test set. Moreover, when using sparse attention kernels to extend its receptive field and per-source fine-tuning, we achieve a state-of-the-art SDR of 9.20 dB.
Samples are available on our sample page. Check out [our paper][htdemucs] for more information. The model has been trained on the [MUSDB HQ][musdb] dataset plus an extra training dataset of 800 songs. It separates any song into drums, bass, vocals, and other stems.
As Hybrid Transformer Demucs is brand new, it is not activated by default; you can activate it in the usual
commands described hereafter with `-n htdemucs_ft`.
The single, non fine-tuned model is provided as `-n htdemucs`, and the retrained baseline
as `-n hdemucs_mmi`. The Sparse Hybrid Transformer model described in our paper is not provided, as it
requires custom CUDA code that is not ready for release yet.
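For example, the model is picked with the `-n` flag when running separation from the command line (here `my_track.mp3` stands in for any audio file you provide):

```bash
# Fine-tuned Hybrid Transformer Demucs
demucs -n htdemucs_ft my_track.mp3

# Single, non fine-tuned Transformer model
demucs -n htdemucs my_track.mp3

# Retrained Hybrid Demucs v3 baseline
demucs -n hdemucs_mmi my_track.mp3
```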
We are also releasing an experimental 6-source model that adds guitar and piano sources.
Quick testing suggests acceptable quality for guitar, but a lot of bleeding and artifacts for the piano source.
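The 6-source model is selected the same way with the `-n` flag; the model name `htdemucs_6s` is assumed here (check the release notes for the exact name on your version):

```bash
# Experimental 6-source separation: drums, bass, other, vocals, guitar, piano
demucs -n htdemucs_6s my_track.mp3
```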
Important news if you are already using Demucs
See the release notes for more details.
- 22/02/2023: added support for the SDX 2023 Challenge, see the dedicated doc page
- 07/12/2022: Demucs v4 now on PyPI. `htdemucs` model now used by default. Also releasing a 6-source model (adding `guitar` and `piano`, although the latter doesn't work so well at the moment).
- 16/11/2022: Added the new Hybrid Transformer Demucs v4 models. Added support for the torchaudio implementation of HDemucs.
- 30/08/2022: added reproducibility and ablation grids, along with an updated version of the paper.
- 17/08/2022: Releasing v3.0.5: set split segment length to reduce memory. Compatible with PyTorch 1.12.
- 24/02/2022: Releasing v3.0.4: split into two stems (i.e. karaoke mode). Export as float32 or int24.
- 17/12/2021: Releasing v3.0.3: bug fixes (thanks @keunwoochoi), memory usage drastically reduced on GPU (thanks @famzah), and new multi-core evaluation on CPU (`-j` flag).
- 12/11/2021: Releasing Demucs v3 with hybrid domain separation. Strong improvements on all sources. This is the model that won the Sony MDX challenge.
- 11/05/2021: Adding support for MUSDB HQ and arbitrary wav sets, for the MDX challenge. For more information on joining the challenge with Demucs, see the Demucs MDX instructions.
Comparison with other models
We provide hereafter a summary of the different metrics presented in the paper. You can also compare Hybrid Demucs (v3), [KUIELAB-MDX-Net][kuielab], [Spleeter][spleeter], Open-Unmix, Demucs (v1), and Conv-Tasnet on one of my favorite songs on my [soundcloud playlist][soundcloud].
Comparison of accuracy
Overall SDR is the mean of the SDR for each of the 4 sources. MOS Quality is a rating from 1 to 5
of the naturalness and absence of artifacts given by human listeners (5 = no artifacts). MOS Contamination
is a rating from 1 to 5, with 5 being zero contamination by other sources. We refer the reader to our [paper][hybrid_paper]
for more details.
| Model                        | Domain      | Extra data?       | Overall SDR | MOS Quality | MOS Contamination |
|------------------------------|-------------|-------------------|-------------|-------------|-------------------|
| [Wave-U-Net][waveunet]       | waveform    | no                | 3.2         | -           | -                 |
| [Open-Unmix][openunmix]      | spectrogram | no                | 5.3         | -           | -                 |
| [D3Net][d3net]               | spectrogram | no                | 6.0         | -           | -                 |
| [Conv-Tasnet][demucs_v2]     | waveform    | no                | 5.7         | -           | -                 |
| [Demucs (v2)][demucs_v2]     | waveform    | no                | 6.3         | 2.37        | 2.36              |
| [ResUNetDecouple+][decouple] | spectrogram | no                | 6.7         | -           | -                 |
| [KUIELAB-MDX-Net][kuielab]   | hybrid      | no                | 7.5         | 2.86        | 2.55              |
| [Band-Split RNN][bandsplit]  | spectrogram | no                | 8.2         | -           | -                 |
| Hybrid Demucs (v3)           | hybrid      | no                | 7.7         | 2.83        | 3.04              |
| [MMDenseLSTM][mmdenselstm]   | spectrogram | 804 songs         | 6.0         | -           | -                 |
| [D3Net][d3net]               | spectrogram | 1.5k songs        | 6.7         | -           | -                 |
| [Spleeter][spleeter]         | spectrogram | 25k songs         | 5.9         | -           | -                 |
| [Band-Split RNN][bandsplit]  | spectrogram | 1.7k (mixes only) | 9.0         | -           | -                 |
| HT Demucs f.t. (v4)          | hybrid      | 800 songs         | 9.0         | -           | -                 |
Requirements
You will need at least Python 3.8. See `requirements_minimal.txt` for the dependencies needed for separation only,
and `environment-[cpu|cuda].yml` (or `requirements.txt`) if you want to train a new model.
For Windows users
Every time you see `python3`, replace it with `python.exe`. You should always run commands from the
Anaconda console.
For musicians
If you just want to use Demucs to separate tracks, you can install it with
python3 -m pip install -U demucs
For bleeding edge versions, you can install directly from this repo using
python3 -m pip install -U git+https://github.com/facebookresearch/demucs#egg=demucs
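Once installed, Demucs runs from the command line; a minimal sketch, assuming a local file `song.mp3` (stems are written under a `separated` folder by default):

```bash
# Separate with the default model
demucs song.mp3

# Two-stems (karaoke) mode: vocals vs. everything else
demucs --two-stems=vocals song.mp3

# Export as MP3 instead of wav
demucs --mp3 song.mp3
```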
Advanced OS support is provided on the following pages; you must read the page for your OS before posting an issue:
- If you are using Windows: Windows support.
- If you are using macOS: macOS support.
- If you are using Linux: Linux support.
For machine learning scientists
If you have anaconda installed, you can run from the root of this repository:
conda env update -f environment-cpu.yml # if you don't have GPUs
conda env update -f environment-cuda.yml # if you have GPUs
conda activate demucs
pip install -e .
This will create a demucs environment with all the dependencies installed.
You will also need to install soundstretch/soundtouch: on macOS you can do brew install sound-touch,
and on Ubuntu sudo apt-get install soundstretch. This is used for the
pitch/tempo augmentation.
Running in Docker
Thanks to @xserrat, there is now a Docker image definition for running Demucs. This ensures all libraries are correctly installed without interfering with the host OS. See his repo Docker Facebook Demucs for more information.
Running from Colab
I made a Colab to easily separate tracks with Demucs. Note that transfer speeds with Colab are a bit slow for large media files, but it will allow you to use Demucs without installing anything.
Web Demo
Integrated into Hugging Face Spaces using Gradio. See the demo:
Graphical Interface
@CarlGao4 has released a GUI for Demucs: CarlGao4/Demucs-Gui. Downloads for Windows and macOS are available [here](https://githu
