<p align="center">
<img height="300" width="300" src="./magnet.png">
<br>
<h1 align="center">magnet</h1>
<h3 align="center"><a href="https://prismadic.github.io/magnet/">📖 docs</a> | 💻 <a href="https://github.com/Prismadic/magnet/tree/main/examples">examples</a> | 📓 <a href="https://prismadic.substack.com">substack</a></h3>
<p align="center">the small distributed language model toolkit</p>
<p align="center"><i>⚡️ fine-tune state-of-the-art LLMs anywhere, rapidly ⚡️</i></p>
</p>
## 🧬 installation

```bash
pip install llm-magnet
```

or, from a source checkout:

```bash
python3 setup.py install
```
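To sanity-check the install, you can confirm the package is visible to your interpreter. This is a hedged sketch: it assumes the import name is `magnet`, which is inferred from the usage snippet's `from magnet.base import Magnet`.

```python
# quick check: is the `magnet` package importable?
# (import name `magnet` is assumed from the usage snippet below)
import importlib.util


def magnet_installed() -> bool:
    """Return True if the `magnet` package can be found on sys.path."""
    return importlib.util.find_spec("magnet") is not None


print(magnet_installed())
```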
<img src='./divider.png' style="width:100%;height:5px;">
## 🎉 usage

check out the [example notebooks](https://github.com/Prismadic/magnet/tree/main/examples)

<small>a snippet to get you started</small>
```python
from magnet.base import Magnet
from magnet.base import EmbeddedMagnet

# spin up an embedded cluster and create a magnet from it
cluster = EmbeddedMagnet()
cluster.start()
magnet = cluster.create_magnet()
await magnet.align()

# or connect to an existing deployment with an explicit config
config = {
    "host": "127.0.0.1",
    "credentials": None,
    "domain": None,
    "name": "my_stream",
    "category": "my_category",
    "kv_name": "my_kv",
    "session": "my_session",
    "os_name": "my_object_store",
    "index": {
        "milvus_uri": "127.0.0.1",
        "milvus_port": 19530,
        "milvus_user": "test",
        "milvus_password": "test",
        "dimension": 1024,
        "model": "BAAI/bge-large-en-v1.5",
        "name": "test",
        "options": {
            "metric_type": "COSINE",
            "index_type": "HNSW",
            "params": {"efConstruction": 40, "M": 48},
        },
    },
}

magnet = Magnet(config)
await magnet.align()
```
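The snippet above uses top-level `await`, which works in the example notebooks and the IPython REPL. In a plain Python script you would wrap the same calls in an `async` function and drive it with `asyncio.run` — a minimal sketch of that pattern (the `magnet` calls are left as comments, since they require `llm-magnet` and a running deployment):

```python
import asyncio


async def main() -> None:
    # the magnet setup from the snippet above would go here, e.g.:
    # from magnet.base import EmbeddedMagnet  # requires `pip install llm-magnet`
    # cluster = EmbeddedMagnet()
    # cluster.start()
    # magnet = cluster.create_magnet()
    # await magnet.align()
    pass


# asyncio.run creates an event loop, runs main() to completion, and closes the loop
asyncio.run(main())
```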
<img src='./divider.png' style="width:100%;height:5px;">
## 🔮 features

<center><img src="./clustered_bidirectional.png" style="width:50%;transform: rotate(90deg);margin-top:200px;" align="right"></center>

- ⚡️ It's Fast
  - <small>fast on consumer hardware</small>
  - <small>very fast on Apple Silicon</small>
  - <small>extremely fast on ROCm/CUDA</small>
- 🫵 Automatic or your way
  - <small>rely on established transformer patterns and let `magnet` do the work</small>
  - <small>keep your existing data processing functions and bring them to `magnet`!</small>
- 🛰️ 100% Distributed
  - <small>processing, embedding, storage, retrieval, querying, or inference from anywhere</small>
  - <small>as much or as little compute as you need</small>
- 🧮 Choose Inference Method
  - <small>HuggingFace</small>
  - <small>vLLM node</small>
  - <small>GPU</small>
  - <small>mlx</small>
- 🌎 Huge Volumes
  - <small>handle gigantic amounts of data inexpensively</small>
  - <small>fault-tolerant by design</small>
  - <small>decentralized workloads</small>
- 🔐 Secure
  - <small>JWT</small>
  - <small>Basic</small>
- 🪵 World-Class Comprehension
  - <small>`magnet` optionally logs its own code as it's executed (yes, really)</small>
  - <small>build a self-aware system and allow it to learn from itself</small>
  - <small>emojis are the future</small>