# Smol Models 🤏
Welcome to Smol Models, a family of efficient and lightweight AI models from Hugging Face. Our mission is to create fully open, powerful yet compact models for text and vision that run effectively on-device while maintaining strong performance.
## [NEW] SmolLM3 (Language Model)
Our 3B model outperforms Llama 3.2 3B and Qwen2.5 3B while staying competitive with larger 4B alternatives (Qwen3 & Gemma3). Beyond the performance numbers, we're sharing exactly how we built it using public datasets and training frameworks.
Resources:
Summary:
- 3B model trained on 11T tokens, SoTA at the 3B scale and competitive with 4B models
- Fully open model, open weights + full training details including public data mixture and training configs
- Instruct model with dual-mode reasoning, supporting think/no_think modes
- Multilingual support for 6 languages: English, French, Spanish, German, Italian, and Portuguese
- Long context up to 128k, using NoPE and YaRN (see the loading sketch after this list)
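The 128k figure relies on YaRN rescaling of the rotary embeddings at load time. Below is a minimal sketch, assuming `transformers` forwards a `rope_scaling` override to the model config and that a factor of 2.0 doubles the native window; treat the key names and the factor as illustrative and check the model card for the recommended values.

```python
from transformers import AutoModelForCausalLM

# Hedged sketch: pass a YaRN rope_scaling override when loading.
# The factor is illustrative; choose it for your target context length.
model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceTB/SmolLM3-3B",
    rope_scaling={"rope_type": "yarn", "factor": 2.0},
)
```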
## 👁️ SmolVLM (Vision Language Model)
SmolVLM is our compact multimodal model that can:
- Process both images and text and perform tasks like visual QA, image description, and visual storytelling
- Handle multiple images in a single conversation
- Run efficiently on-device
## Repository Structure
```
smollm/
├── text/    # SmolLM3/2/1 related code and resources
├── vision/  # SmolVLM related code and resources
└── tools/   # Shared utilities and inference tools
    ├── smol_tools/  # Lightweight AI-powered tools
    ├── smollm_local_inference/
    └── smolvlm_local_inference/
```
## Getting Started

### SmolLM3
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "HuggingFaceTB/SmolLM3-3B"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

# Load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

# Prepare the model input
prompt = "Give me a brief explanation of gravity in simple terms."
messages_think = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages_think,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate the output
generated_ids = model.generate(**model_inputs, max_new_tokens=32768)

# Decode only the newly generated tokens
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```
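The example above runs in SmolLM3's default extended-thinking mode. The sketch below shows one way to switch it off, assuming the chat template accepts an `enable_thinking` flag as documented on the model card; the `/no_think` system-prompt marker is the in-band alternative.

```python
# Hedged sketch: reuse the tokenizer and model loaded above,
# but render the prompt with extended reasoning disabled.
text_no_think = tokenizer.apply_chat_template(
    messages_think,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # assumption: flag exposed by the SmolLM3 chat template
)
model_inputs = tokenizer([text_no_think], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=1024)
print(tokenizer.decode(generated_ids[0][len(model_inputs.input_ids[0]):], skip_special_tokens=True))
```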
### SmolVLM
```python
from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image

processor = AutoProcessor.from_pretrained("HuggingFaceTB/SmolVLM-Instruct")
model = AutoModelForVision2Seq.from_pretrained("HuggingFaceTB/SmolVLM-Instruct")

# Load an image to ask about (placeholder: any local path or URL)
image = load_image("path/or/url/to/image.jpg")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What's in this image?"}
        ]
    }
]

# Build the prompt, run generation, and decode the answer
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=500)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```
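For the multi-image case mentioned earlier, the pattern extends naturally: one `{"type": "image"}` placeholder per image in the message, with the images passed to the processor in the same order. A hedged sketch reusing `processor` and `model` from above (file paths are placeholders):

```python
# Two images in one turn: one "image" placeholder per image, in order
image1 = load_image("path/or/url/to/first.jpg")
image2 = load_image("path/or/url/to/second.jpg")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "image"},
            {"type": "text", "text": "What differs between these two images?"}
        ]
    }
]

prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image1, image2], return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=500)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```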
## Ecosystem

<div align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/RvHjdlRT5gGQt5mJuhXH9.png" width="700"/>
</div>

## Resources
### Documentation

### Pretrained Models

### Datasets
- SmolLM3 Pretraining dataset
- SmolTalk - Our instruction-tuning dataset
- FineMath - Mathematics pretraining dataset
- FineWeb-Edu - Educational content pretraining dataset
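All of these are hosted on the Hugging Face Hub and can be inspected with the `datasets` library. A minimal sketch, assuming the SmolTalk repo id `HuggingFaceTB/smoltalk` with its `all` config and a `messages` column; check each dataset card for the exact repo id and subsets.

```python
from datasets import load_dataset

# Hedged sketch: stream a couple of SmolTalk rows without a full download.
# Repo id, config name, and column name are assumptions from the dataset card.
ds = load_dataset("HuggingFaceTB/smoltalk", "all", split="train", streaming=True)
for row in ds.take(2):
    print(row["messages"])
```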