
DeepMusicGeneration

Music Generation in MIDI format using Deep Learning.

Install / Use

/learn @AniketRajpoot/DeepMusicGeneration
About this skill

Quality Score

0/100

Supported Platforms

Universal

README

DeepMusicGeneration

Exploring endless possibilities to generate music using deep learning!

Sample outputs | Video demo | PDF report

Screenshot

Table of Contents

Introduction <a name="intro"></a>

Generating long pieces of music using deep learning is a challenging problem, as music contains structure at multiple timescales, from millisecond timings to motifs to phrases to repetition of entire sections. This repository explores various techniques to manipulate and generate music pieces in MIDI as well as raw audio format. It aims to provide code and checkpoints for such models for end use. We also integrated everything into a web application for easier use.

There are currently two branches in this repository:

  1. archive: This branch contains archived notebooks with code for running Transformers on MIDI and WAV files, both single- and multi-instrument; it also includes samples generated by those models.

Run the following command to clone the branch:

git clone -b archive --single-branch https://github.com/AniketRajpoot/DeepMusicGeneration.git
  2. master: This branch includes the work done for the B.Tech project as Colab notebooks (demo and report available in the header of this README), involving the learning of inter-instrument dependencies, with the objective of developing real-time applications for assisting musicians. Run the following command to clone the branch:
git clone -b master --single-branch https://github.com/AniketRajpoot/DeepMusicGeneration.git

Acknowledgements <a name="ackn"></a>

This project was made possible with previous contributions referenced below:

  1. https://github.com/bearpelican/musicautobot/
  2. https://web.mit.edu/music21/
  3. https://streamlit.io/

Methodology <a name="methods"></a>

Tasks <a name="tasks"></a>

We perform the following music-related tasks and provide the code for them:

By combining all the models into a single pipeline, their full potential can be unleashed and one can compose a complete song!
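As a rough sketch of how such a pipeline chains together, the stubs below stand in for the trained models; the function names and return shapes are hypothetical placeholders, not the repository's actual API:

```python
# Hypothetical pipeline: each stage is a stub standing in for a trained model.
def generate_melody(genre: str) -> list:
    # Placeholder for the genre-conditioned Deep Music Generator
    return ["C4", "E4", "G4"]

def harmonize(melody: list) -> list:
    # Placeholder for a harmonization model: pair each note with a bass note
    return [(n, n.replace("4", "3")) for n in melody]

def fill_masked(sections: list) -> list:
    # Placeholder for Deep Mask Modelling: infill missing sections.
    # Identity here; a real model would predict the masked spans.
    return sections

song = fill_masked(harmonize(generate_melody(genre="jazz")))
print(song)  # → [('C4', 'C3'), ('E4', 'E3'), ('G4', 'G3')]
```

In the real pipeline each stage would consume and produce token sequences rather than note names, but the composition order (generate, then harmonize, then infill) is the same idea.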

Preprocessing <a name="preproc"></a>
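The repository's actual preprocessing lives in its notebooks; as an illustration only, one common scheme encodes MIDI note events as flat (pitch, duration) token sequences, along these lines:

```python
# Illustrative only: encode (MIDI pitch, duration-in-quarters) pairs as
# flat string tokens, a common input format for sequence models on music.
def encode(notes):
    """Turn (pitch, duration) pairs into a flat token sequence."""
    tokens = []
    for pitch, dur in notes:
        tokens.append(f"NOTE_{pitch}")
        tokens.append(f"DUR_{dur}")
    return tokens

# Middle C for a quarter note, then E4 for an eighth note:
print(encode([(60, 1.0), (64, 0.5)]))
# → ['NOTE_60', 'DUR_1.0', 'NOTE_64', 'DUR_0.5']
```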

Models <a name="models"></a>

We provide pretrained checkpoints for the following models, used to perform the tasks mentioned in the tasks section:

Deep Music Generator

The model is trained on a subset of LakhMIDI dataset with genre conditioning.

Run the following command:

gdown --id 1LJKXFEap9YrQ7Md4S38CD5ergr1jRVML

Alternatively, the link to the same is given below:

https://drive.google.com/file/d/1LJKXFEap9YrQ7Md4S38CD5ergr1jRVML/view?usp=sharing
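After downloading, it can be worth sanity-checking the checkpoint file before loading it. A minimal sketch using the standard library (the checkpoint filename and any expected hash value are assumptions, not published by the repository):

```python
import hashlib

def sha256_of(path):
    """Hash a file in 1 MiB chunks so large checkpoints don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Example usage on the downloaded checkpoint (filename is an assumption):
# print(sha256_of("deep_music_generator.pth"))
```

Comparing the digest across machines is a quick way to catch a truncated or corrupted Google Drive download before spending time debugging model-loading errors.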

Deep Mask Modelling

Run the following command:

gdown --id 1lWR0VDT8jz_CbkCI8xBrlXyk8dAidH7t

Alternatively, the link to the same is given below:

https://drive.google.com/file/d/1lWR0VDT8jz_CbkCI8xBrlXyk8dAidH7t/view?usp=sharing

Dataset <a name="dataset"></a>

All three models are pretrained on the LakhMIDI dataset. Due to limited resources, we were only able to train small models for music generation and music harmonization, but MusicBERT is a large model pretrained on the whole dataset. More about this here.

Training <a name="training"></a>

Evaluation <a name="eval"></a>

Running Streamlit app <a name="streamlitapp"></a>
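Streamlit apps are typically launched from the command line; assuming the app's entry-point script is named `app.py` (the actual filename may differ in this repository):

```shell
# Install Streamlit, then launch the app locally (entry-point name is an assumption)
pip install streamlit
streamlit run app.py
```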

View on GitHub
GitHub Stars: 17
Category: Education
Updated: 1 year ago
Forks: 3

Languages

Python

Security Score

60/100

Audited on Feb 20, 2025

No findings