# MindNLP

MindSpore + 🤗 HuggingFace: run any Transformers/Diffusers model on MindSpore with seamless compatibility and acceleration.
## 🎯 What is MindNLP?

MindNLP bridges the gap between HuggingFace's massive model ecosystem and MindSpore's hardware acceleration. With just `import mindnlp`, you can run any HuggingFace model on Ascend NPU, NVIDIA GPU, or CPU - no code changes required.

```python
import mindnlp  # That's it! HuggingFace now runs on MindSpore

from transformers import pipeline

pipe = pipeline("text-generation", model="Qwen/Qwen2-0.5B")
print(pipe("Hello, I am")[0]["generated_text"])
```
## ⚡ Quick Start

### Text Generation with LLMs

```python
import mindspore
import mindnlp
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Qwen/Qwen3-8B",
    ms_dtype=mindspore.bfloat16,
    device_map="auto"
)

messages = [{"role": "user", "content": "Write a haiku about coding"}]
print(pipe(messages, max_new_tokens=100)[0]["generated_text"][-1]["content"])
```
### Image Generation with Stable Diffusion

```python
import mindspore
import mindnlp
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    ms_dtype=mindspore.float16
)

image = pipe("A sunset over mountains, oil painting style").images[0]
image.save("sunset.png")
```
### BERT for Text Classification

```python
import mindnlp
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

inputs = tokenizer("MindNLP is awesome!", return_tensors="pt")
outputs = model(**inputs)
```
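The classification model above returns raw logits in `outputs.logits`; turning them into a predicted label is a plain softmax followed by an argmax. A minimal NumPy sketch of that post-processing step, using hypothetical logit values in place of the model's real output:

```python
import numpy as np

def softmax(logits):
    # Subtract the max before exponentiating for numerical stability.
    shifted = logits - np.max(logits)
    exp = np.exp(shifted)
    return exp / exp.sum()

# Hypothetical 2-class logits, standing in for outputs.logits[0]
logits = np.array([-1.2, 2.3])
probs = softmax(logits)
label = int(np.argmax(probs))
print(probs, label)
```

In real code you would feed `outputs.logits[0]` (converted to an array) into the same two steps, and map the index through `model.config.id2label`.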
## ✨ Features

<table>
<tr>
<td width="50%">

### 🤗 Full HuggingFace Compatibility

- **200,000+ models** from HuggingFace Hub
- **Transformers** - all model architectures
- **Diffusers** - Stable Diffusion, SDXL, ControlNet
- **Zero code changes** - just `import mindnlp`

</td>
<td width="50%">

### 🚀 Hardware Acceleration

- **Ascend NPU** - full support for Huawei AI chips
- **NVIDIA GPU** - CUDA acceleration
- **CPU** - optimized CPU execution
- **Multi-device** - automatic device placement

</td>
</tr>
<tr>
<td width="50%">

### 🔧 Advanced Capabilities

- **Mixed precision** - FP16/BF16 training & inference
- **Quantization** - INT8/INT4 with BitsAndBytes
- **Distributed** - multi-GPU/NPU training
- **PEFT/LoRA** - parameter-efficient fine-tuning

</td>
<td width="50%">

### 📦 Easy Integration

- **PyTorch-compatible API** via mindtorch
- **Safetensors** support for fast loading
- **Model Hub mirrors** for faster downloads
- **Comprehensive documentation**

</td>
</tr>
</table>
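The PEFT/LoRA capability listed above boils down to training a small low-rank update on top of a frozen weight matrix. A minimal NumPy sketch of the idea only - the shapes, zero-init convention, and `alpha / r` scaling are illustrative, not MindNLP's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 8, 8, 2, 4     # r << d is the low-rank bottleneck
W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-initialized

# Effective weight during fine-tuning: W + (alpha / r) * B @ A.
# Only A and B (2 * r * d parameters) are trained; W stays frozen.
W_eff = W + (alpha / r) * B @ A

x = rng.normal(size=d_in)
print(np.allclose(W_eff @ x, W @ x))  # True: zero-init B means no change at step 0
```

Because `B` starts at zero, fine-tuning begins exactly at the pretrained model and only drifts as the adapter learns, which is why LoRA can be merged back into `W` after training.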
## 🧪 Mindtorch NPU Debugging

Mindtorch NPU ops are async by default. Use `torch.npu.synchronize()` when you need to block on results. For debugging, set `ACL_LAUNCH_BLOCKING=1` to force per-op synchronization.
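A typical debugging invocation might look like the following, where `train.py` is a placeholder for your own script:

```shell
# Force every NPU op to launch synchronously so an error surfaces
# at the op that caused it rather than at a later synchronize() call.
ACL_LAUNCH_BLOCKING=1 python train.py
```

Expect noticeably slower execution with this flag set; use it for diagnosis only, not for production runs.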
## 📦 Installation

```bash
# From PyPI (recommended)
pip install mindnlp

# From source (latest features)
pip install git+https://github.com/mindspore-lab/mindnlp.git
```
<details>
<summary><b>📋 Version Compatibility</b></summary>
| MindNLP | MindSpore | Python |
|---------|-----------|--------|
| 0.6.x | ≥2.7.1 | 3.10-3.11 |
| 0.5.x | 2.5.0-2.7.0 | 3.10-3.11 |
| 0.4.x | 2.2.x-2.5.0 | 3.9-3.11 |

</details>

## 💡 Why MindNLP?
| Feature | MindNLP | PyTorch + HF | TensorFlow + HF |
|---------|---------|--------------|-----------------|
| HuggingFace Models | ✅ 200K+ | ✅ 200K+ | ⚠️ Limited |
| Ascend NPU Support | ✅ Native | ❌ | ❌ |
| Zero Code Migration | ✅ | - | ❌ |
| Unified API | ✅ | ✅ | ❌ |
| Chinese Model Support | ✅ Excellent | ✅ Good | ⚠️ Limited |
## 🏆 Key Advantages

- **Instant Migration**: your existing HuggingFace code works immediately
- **Ascend Optimization**: native support for Huawei NPU hardware
- **Production Ready**: battle-tested in enterprise deployments
- **Active Community**: regular updates and responsive support
## 🗺️ Supported Models

MindNLP supports all models from HuggingFace Transformers and Diffusers. Here are some popular ones:

| Category | Models |
|----------|--------|
| LLMs | Qwen, Llama, ChatGLM, Mistral, Phi, Gemma, BLOOM, Falcon |
| Vision | ViT, CLIP, Swin, ConvNeXt, SAM, BLIP |
| Audio | Whisper, Wav2Vec2, HuBERT, MusicGen |
| Diffusion | Stable Diffusion, SDXL, ControlNet |
| Multimodal | LLaVA, Qwen-VL, ALIGN |
## 📚 Resources
## 🤝 Contributing

We welcome contributions! See our Contributing Guide for details.

```bash
# Clone and install for development
git clone https://github.com/mindspore-lab/mindnlp.git
cd mindnlp
pip install -e ".[dev]"
```
## 👥 Community

<p align="center"> <a href="https://github.com/mindspore-lab/mindnlp/graphs/contributors"> <img src="https://contrib.rocks/image?repo=mindspore-lab/mindnlp" /> </a> </p>

Join the MindSpore NLP SIG (Special Interest Group) for discussions, events, and collaboration:

<p align="center"> <img src="./assets/qrcode_qq_group.jpg" width="200" alt="QQ Group"/> </p>

## ⭐ Star History

<p align="center"> <a href="https://star-history.com/#mindspore-lab/mindnlp&Date"> <img src="https://api.star-history.com/svg?repos=mindspore-lab/mindnlp&type=Date" alt="Star History Chart" width="600"> </a> </p>

If you find MindNLP useful, please consider giving it a star ⭐ - it helps the project grow!
## 📄 License

MindNLP is released under the Apache 2.0 License.
## 📖 Citation

```bibtex
@misc{mindnlp2022,
    title={MindNLP: Easy-to-use and High-performance NLP and LLM Framework Based on MindSpore},
    author={MindNLP Contributors},
    howpublished={\url{https://github.com/mindspore-lab/mindnlp}},
    year={2022}
}
```
<p align="center"> Made with ❤️ by the <a href="https://github.com/mindspore-lab">MindSpore Lab</a> team </p>
