Quantum-Enhanced Language Model (QELM)
QELM (Quantum-Enhanced Language Model) combines quantum computing and NLP to create compact yet powerful language models.
Main script (current): Qelm2.py (trainer + GUI + utilities)
Legacy script: QelmT.py (older unified trainer/inference)
The latest versions feature:
- Multi-block quantum transformer architecture with advanced multi-head quantum attention.
- Novel techniques such as sub-bit encoding and entropy-mixed gates that allow more representational power per qubit.
- Parameter-shift gradient training (with support for Adam and advanced quantum training modes).
- A unified GUI-first workflow in Qelm2.py for training, saving/loading, token maps, and advanced toggles.
- Noise mitigation options: Pauli twirling and zero-noise extrapolation (ZNE) with user-configurable scaling factors.
- Utility modes for dataset/token preprocessing, including local and HuggingFace prep flags.
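To make the parameter-shift idea concrete: for gates generated by Pauli operators, the exact gradient of a measured expectation value f(θ) is (f(θ + π/2) − f(θ − π/2)) / 2, so training only needs two extra circuit evaluations per parameter. A minimal sketch using a stand-in expectation function (not QELM's actual trainer; a real run would measure f on hardware or a simulator):

```python
import math

def expectation(theta):
    """Stand-in for a measured circuit expectation value.

    For an RY(theta) rotation on |0> followed by a Z measurement,
    the ideal expectation is cos(theta).
    """
    return math.cos(theta)

def parameter_shift_grad(f, theta, shift=math.pi / 2):
    """Exact gradient of a Pauli-generated gate parameter
    via the parameter-shift rule."""
    return (f(theta + shift) - f(theta - shift)) / 2

theta = 0.7
grad = parameter_shift_grad(expectation, theta)
# For this toy expectation, the analytic derivative is -sin(theta),
# and the parameter-shift estimate matches it exactly.
```

The same two-evaluation rule plugs straight into a classical optimizer such as Adam, which is why it pairs naturally with the optimizer options above.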
QELM Quantum (Connect to IBM quantum computers)
- Must have an IBM account
- Must have a basic understanding of running circuits
- Must be familiar with Quantum Computers (you can switch backends in the UI; mind shot/runtime budgets)
TensorFlow & Python Version Compatibility
TensorFlow does not yet support the latest Python releases.
If you need a TensorFlow-compatible Python version, download it from the official Python FTP archive, since python.org no longer offers installer executables for those older releases on the main downloads page.
Note: QELM’s core trainer does not require TensorFlow; TensorFlow is optional for experimental modules.
Table of Contents
- What’s New in Qelm2.py?
- Architecture Overview
- Feature Matrix
- Features
- Installation
5.1. Prerequisites
5.2. Easy Installation
5.3. Cloning the Repository
5.4. Virtual Environment Setup
5.5. Dependency Installation - Training with Qelm2.py
- Chatting with QELMChatUI.py
- Benchmarks & Metrics
- Running on Real QPUs (IBM, etc.)
- Project Structure
- Roadmap
- License
- Contact

What’s New in Qelm2.py?
Qelm2.py (Trainer + GUI + Utilities)
- Unified GUI workflow: configure the model, train, save/load .qelm files, manage token maps, and run inference from one interface.
- Noise mitigation: GUI toggles for Pauli twirling and ZNE, plus a scaling-factor field (e.g., 1,3,5).
- Token/dataset tooling: built-in prep modes for generating token streams:
  - --qelm_prep_tokens for local text → token stream
  - --qelm_prep_hf for HuggingFace datasets → token stream
- LLM → QELM conversion: import LLM weights then convert using your selected encoder/architecture options (where supported by your import path).
Architecture Overview
QELM mirrors a transformer but swaps heavy linear algebra blocks for compact quantum circuits:
- Classical Embeddings → token → vector
- Quantum Attention (per head) → encode vector into qubits, entangle, extract features
- Quantum Feed-Forward / Channel Mixing → circuit blocks with trainable parameters
- Residual / Combine → classical post-processing
- Output Projection → vocab logits
Optional add-ons depend on your enabled flags (encoding modes, memory/context, mitigation, conversion encoders, etc.).
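The quantum attention step above (encode a vector into qubits, entangle, extract features) can be sketched with a tiny NumPy statevector simulation. This is an illustrative toy, not QELM's actual circuit: it uses RY angle encoding, a ring of CNOTs (matching the ring entanglement mentioned below), and per-qubit ⟨Z⟩ expectations as the extracted features.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def encode(angles):
    """Angle-encode a classical vector: one RY rotation per qubit,
    giving the product state RY(a0)|0> (x) RY(a1)|0> (x) ..."""
    state = np.array([1.0])
    for a in angles:
        state = np.kron(state, ry(a) @ np.array([1.0, 0.0]))
    return state

def cnot(n, control, target):
    """CNOT matrix on an n-qubit register (qubit 0 = leftmost bit)."""
    dim = 2 ** n
    u = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        u[j, i] = 1.0
    return u

def ring_entangle(state, n):
    """Ring of CNOTs: 0->1, 1->2, ..., (n-1)->0."""
    for q in range(n):
        state = cnot(n, q, (q + 1) % n) @ state
    return state

def z_expectations(state, n):
    """Extracted features: <Z> expectation for each qubit."""
    probs = np.abs(state) ** 2
    feats = []
    for q in range(n):
        signs = np.array([1.0 if not (i >> (n - 1 - q)) & 1 else -1.0
                          for i in range(len(probs))])
        feats.append(float(probs @ signs))
    return feats

# Encode a 3-value vector, entangle, and read out per-qubit features.
angles = [0.3, 1.1, 2.0]
features = z_expectations(ring_entangle(encode(angles), 3), 3)
```

In the real model these measured features feed the classical residual/combine stage; the trainable parameters live in additional rotation layers omitted here for brevity.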
Feature Matrix
| Area | Feature | Old (qelm.py / QelmT.py) | New (Qelm2.py) |
|-------------|-------------------------------------|------------------------------|------------------|
| Encoding | Scalar RY / basic encoding | ✔ | ✔ |
| | Sub-bit encoding | ✔ | ✔ (toggle) |
| | Advanced encoder options | limited | expanded |
| Attention | Single-block fallback | ✔ | Multi-block |
| Training | Parameter-shift gradients | ✔ | ✔ |
| Optimizers | Adam + advanced modes | ✔ | ✔ |
| GUI | Trainer UI | ✔ | New consolidated UI |
| Utilities | Token/dataset prep modes | limited | ✔ (--qelm_prep_tokens, --qelm_prep_hf) |
| Noise | Pauli twirling & ZNE | ✔ / partial | ✔ (GUI toggle + scaling) |
Features
- Quantum Circuit Transformers:
  - Multi-block transformer architecture with quantum attention and feed-forward layers
  - Ring entanglement, data reuploading (when enabled), and residual connections
- Quantum Training Optimizations:
  - Parameter-shift gradient training with Adam and advanced training modes
- Advanced Quantum Techniques:
  - Sub-bit encoding and entropy-controlled quantum channels
  - Multiple ansatz/encoding options for experimental setups
  - Noise mitigation: Pauli twirling and zero-noise extrapolation (ZNE), with selectable scaling factors
- Unified Script (Qelm2.py):
  - One consolidated script for training, inference, model save/load, token maps, and utilities
  - CLI tool modes for dataset/token prep
- Modern Chat UI (QELMChatUI.py):
  - ChatGPT-style conversation interface with message bubbles and session handling (where implemented)
  - Loads .qelm models + token maps to generate readable natural language
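The zero-noise extrapolation option mentioned above works by running the same circuit at deliberately amplified noise levels (the scaling factors, e.g. 1, 3, 5) and extrapolating the measured expectation values back to the zero-noise limit. A minimal sketch of just the extrapolation step (a polynomial fit; QELM's internal implementation may differ):

```python
import numpy as np

def zne_extrapolate(scale_factors, expectations, degree=2):
    """Fit a polynomial to (noise scale, expectation) pairs and
    evaluate it at scale 0: the estimated noise-free value."""
    coeffs = np.polyfit(scale_factors, expectations, deg=degree)
    return float(np.polyval(coeffs, 0.0))

# Example: expectation values measured at scale factors 1, 3, 5
# (illustrative numbers only, not real hardware data).
scales = [1.0, 3.0, 5.0]
measured = [0.951, 0.859, 0.775]
estimate = zne_extrapolate(scales, measured)
```

With three scale factors, a degree-2 fit interpolates the points exactly (Richardson-style extrapolation); with more scale factors a lower-degree fit trades bias for noise robustness.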
Installation
Prerequisites
- Python 3.7+ (commonly tested up to 3.11)
- Qiskit and Qiskit Aer
- NumPy
- Tkinter (usually included with Python)
- psutil (optional, for CPU usage monitoring)
- datasets (optional; only required for --qelm_prep_hf)
Easy Installation
pip install qelm
Cloning the Repository
git clone https://github.com/R-D-BioTech-Alaska/QELM.git
cd QELM
Virtual Environment Setup
python -m venv qelm_env
# On Linux/Mac:
source qelm_env/bin/activate
# On Windows:
qelm_env\Scripts\activate
Dependency Installation
pip install --upgrade pip
pip install -r requirements.txt
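If you are assembling the environment by hand instead, the prerequisites above translate to roughly this requirements set (package names as published on PyPI; check requirements.txt in the repository for the authoritative pinned versions, and note that Tkinter ships with Python rather than pip):

```text
qiskit
qiskit-aer
numpy
psutil      # optional: CPU usage monitoring
datasets    # optional: only needed for --qelm_prep_hf
```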
Training with Qelm2.py
Run the trainer UI:
python Qelm2.py
Outputs:
- .qelm model file
- <modelname>_token_map.json
- Training logs (loss/perplexity where enabled)
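The token map is what lets downstream tools turn model output ids back into text. A small loader sketch, assuming (unverified) that the file is a flat JSON object mapping token string → integer id; adjust if your build writes a different layout:

```python
import json

def load_token_map(path):
    """Load a QELM token map and build the inverse (id -> token) map,
    which is what you need to decode generated ids into words."""
    with open(path, "r", encoding="utf-8") as f:
        token_to_id = json.load(f)
    id_to_token = {i: t for t, i in token_to_id.items()}
    return token_to_id, id_to_token

# Hypothetical usage with a trained model's map:
# token_to_id, id_to_token = load_token_map("mymodel_token_map.json")
```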
Chatting with QELMChatUI.py
(The example model shown is 23 KB in size.)

The QELMChatUI.py script provides a ChatGPT-style interface for interacting with your QELM models.
- Model and token mapping: load your .qelm model file along with the matching token map (*_token_map.json) so responses map to real words.
- Modern chat interface: message bubbles, history/session behavior, and UI features as implemented in your current chat build.
To run the chat UI:
python QELMChatUI.py
Benchmarks & Metrics
Core metrics to report:
- Loss / Cross-Entropy
- Perplexity
- Optional text metrics (BLEU / distinct-n) if you enable them in your evaluation workflow
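Perplexity follows directly from the cross-entropy loss: it is exp of the mean negative log-likelihood per token, so the two metrics above are really one number on two scales. A self-contained sketch:

```python
import math

def perplexity(correct_token_probs):
    """Perplexity from the probabilities the model assigned to each
    correct next token: exp(mean negative log-likelihood)."""
    nll = [-math.log(p) for p in correct_token_probs]
    return math.exp(sum(nll) / len(nll))

# A model that spreads probability uniformly over a 100-token vocab
# assigns p = 0.01 to every correct token, giving perplexity 100:
# it is as uncertain as a 100-way guess.
```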
Running on Real QPUs (IBM, etc.)
If you run against IBM backends, ensure credentials are configured and select the backend you want.
Minimal example:
from qiskit_ibm_runtime import QiskitRuntimeService

# Authenticate with your IBM Quantum API token (or save it once
# locally via QiskitRuntimeService.save_account and omit it here).
service = QiskitRuntimeService(channel="ibm_quantum", token="YOUR_TOKEN")

# Select a backend your account has access to.
backend = service.backend("BACKEND_NAME")
Project Structure
QELM/
├── Qelm2.py # Main consolidated trainer + GUI + utilities
├── QelmT.py # Legacy trainer/inference (reference)
├── QELMChatUI.py # Chat interface for QELM models
├── requirements.txt
├── Datasets/
├── docs/
│ └── images/
│ ├── qelm_logo_small.png
│ ├── qelmtrainer.png
│ ├── QELM_Diagram.png
│ ├── quantum.png
│ ├── chat.png
│ └── ctheo.jpg
├── README.md
└── LICENSE

Roadmap
- Backend abstraction beyond Aer/IBM
- Automated benchmark script: perplexity/BLEU/top-k in one JSON report
- Tokenizer upgrades: plug-in BPE/Unigram tokenizers
- Auto circuit diagrams per block for documentation
<p align="center"> <img src="docs/images/ctheo.jpg"/> </p>
License
This project is distributed under the terms in the LICENSE file included with the repository.