# One Hot Chord

An experimental deep learning project for real-time chord recognition from audio input. This project demonstrates how machine learning can be applied to music analysis. A live demo is available at https://onehotchord.com.

> **Note:** This is an experimental project and may not work perfectly in all scenarios.
## Features
- Real-time chord recognition from audio input
- Browser-based interface with visual feedback
- Privacy-focused design - all processing happens on your device
- Supports common chord types: major, minor, diminished, 7th chords
- Responsive visualization of detected notes
## How It Works
One Hot Chord uses a deep neural network to analyze audio and identify chords:
1. Captures audio from your microphone
2. Extracts frequency information using a Constant-Q Transform
3. Processes these features through a neural network
4. Identifies the root note and chord type
5. Displays the results in real time
## Project Structure

- `gen_samples.py` - Generates synthetic chord samples for training
- `preprocess.py` - Extracts features from audio samples
- `train.py` - Trains the neural network model
- `model.py` - Defines the neural network architecture
- `docs/` - Web demo
## Quick Start

### Prerequisites
- Python 3.8+
- FluidSynth and a SoundFont file (for sample generation)
### Setup

1. Clone the repository.
2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Download a SoundFont file (e.g., FluidR3_GM.sf2) to the `sf2/` directory.
## Training Pipeline

```bash
# Generate training samples
python gen_samples.py

# Preprocess audio samples
python preprocess.py [wav files]

# Train the model
python train.py chord_dataset.npz

# Try real-time recognition
python listen.py
```
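To make the training step concrete, here is a minimal, hypothetical training loop over data shaped like `chord_dataset.npz`. The input size, class count, and architecture are assumptions for illustration; the real versions live in `model.py` and `train.py` and may differ.

```python
import torch
import torch.nn as nn

# Stand-in tensors shaped like a CQT-feature dataset: 84 input bins,
# 48 chord classes (both numbers are illustrative assumptions).
# The real script would load arrays from chord_dataset.npz instead.
X = torch.randn(256, 84)
y = torch.randint(0, 48, (256,))

# A small MLP classifier as a placeholder architecture.
model = nn.Sequential(nn.Linear(84, 128), nn.ReLU(), nn.Linear(128, 48))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)  # cross-entropy over one-hot chord classes
    loss.backward()
    optimizer.step()
```

Cross-entropy over a single class index is the standard training objective for one-hot labels like these.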
## Web Interface

Serve the web interface:

```bash
cd www
python -m http.server 8000
```

Then open your browser to http://localhost:8000.
## Limitations

This is an experimental project built in a weekend, so it has known limitations:
- Works best with clean audio input
- May struggle with complex chord voicings
- Limited to the chord types it was trained on
- Performance varies depending on audio quality and background noise
## License
This project is licensed under the Apache License - see the LICENSE file for details.
## Acknowledgments
- FluidSynth for MIDI synthesis
- Librosa for audio processing
- PyTorch for neural network implementation
- ONNX Runtime for model deployment
