# Splintr

A high-performance tokenizer (BPE + SentencePiece + WordPiece) built in Rust with Python bindings, focused on speed, safety, and resource optimization.
## The Problem
Tokenization is everywhere in modern AI. Whether you're building LLM applications, training models, or processing data pipelines, you're tokenizing text constantly. But existing tokenizers have a problem: they're slow.
When you need to tokenize batches of prompts, documents, or training data, you're stuck waiting. Python-based tokenizers can't fully leverage modern multi-core CPUs. You need something faster.
## The Solution
Splintr brings Rust performance to Python. Built from the ground up for speed and efficiency:

| Configuration | Splintr  | Tiktoken | HuggingFace | TokenDagger |
| ------------- | -------- | -------- | ----------- | ----------- |
| 1,000 texts   | 111 MB/s | 9 MB/s   | 28 MB/s     | 9 MB/s      |
| 500 texts     | 107 MB/s | 10 MB/s  | 27 MB/s     | 8 MB/s      |
| 100 texts     | 69 MB/s  | 7 MB/s   | 20 MB/s     | 6 MB/s      |
10-12x faster than tiktoken. 4x faster than HuggingFace. Built in Rust, accessible from Python.
## Quick Start

### Python

```bash
pip install splintr-rs
```
```python
from splintr import Tokenizer

# Load a pretrained vocabulary
tokenizer = Tokenizer.from_pretrained("cl100k_base")   # OpenAI GPT-4/3.5
# tokenizer = Tokenizer.from_pretrained("llama3")      # Meta Llama 3 family
# tokenizer = Tokenizer.from_pretrained("deepseek_v3") # DeepSeek V3/R1
# tokenizer = Tokenizer.from_pretrained("mistral_v1")  # Mistral 7B v0.1/v0.2
# tokenizer = Tokenizer.from_pretrained("mistral_v2")  # Mistral 7B v0.3, Codestral
# tokenizer = Tokenizer.from_pretrained("mistral_v3")  # Mistral NeMo, Large 2

# Encode and decode
tokens = tokenizer.encode("Hello, world!")
text = tokenizer.decode(tokens)

# Batch encode (10-12x faster)
texts = ["Hello, world!", "How are you?", "Machine learning is fun!"]
batch_tokens = tokenizer.encode_batch(texts)
```
See the API Guide for complete documentation and examples.
### Rust

```toml
[dependencies]
splintr = "*" # or pin to a specific version
```

```rust
use splintr::{Tokenizer, CL100K_BASE_PATTERN};

// `encoder` and `special_tokens` are the vocabulary maps loaded for your model
let tokenizer = Tokenizer::new(encoder, special_tokens, CL100K_BASE_PATTERN)?;
let tokens = tokenizer.encode("Hello, world!");
let batch_tokens = tokenizer.encode_batch(&texts);
```
See the API Guide and docs.rs for complete Rust documentation.
## Key Features
**Performance where it matters:**
- 12x faster batch encoding - Parallel processing across multiple texts using Rayon
- 3-4x faster single text encoding - Optimized sequential algorithm for typical use cases
- Smart parallelization - Sequential for small texts (<1MB), parallel for large datasets
- LRU caching - Avoid redundant encoding of frequently seen text chunks
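The caching idea can be illustrated with Python's standard `functools.lru_cache`; the chunk encoder below is a stand-in for illustration, not splintr's internal API:

```python
from functools import lru_cache

@lru_cache(maxsize=4096)
def encode_chunk(chunk: str) -> tuple:
    # Stand-in for a real BPE encoder: repeated chunks skip re-encoding
    return tuple(chunk.encode("utf-8"))

encode_chunk("Hello")  # computed once
encode_chunk("Hello")  # served from the cache
```

Because tokenizer pre-splitting produces many identical chunks (common words, whitespace runs), a small LRU cache removes a large share of redundant BPE work.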
**Built for production:**
- Compatible vocabularies - Supports cl100k_base, o200k_base (OpenAI), Llama 3 family (Meta), DeepSeek V3 (DeepSeek), and Mistral V1/V2/V3 (Mistral AI)
- Streaming decoders - Real-time LLM output display with proper UTF-8 handling (guide)
- 54 agent tokens - Built-in support for chat, CoT reasoning, ReAct agents, tool calling, RAG citations (docs)
- Battle-tested algorithms - Regexr with JIT (pure Rust), Aho-Corasick for special tokens, linked-list BPE, SentencePiece unigram, WordPiece for BERT-family models
**Cross-platform:**
- Python bindings via PyO3 (Linux, macOS, Windows)
- Native Rust library for maximum performance
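For intuition, the greedy BPE merge these tokenizers perform can be sketched in a few lines of Python. This is a didactic O(n²) version with a toy vocabulary; splintr's linked-list implementation is an optimized equivalent:

```python
def bpe_merge(piece: bytes, ranks: dict) -> list:
    """Greedily merge the adjacent pair with the lowest (earliest-learned) rank."""
    parts = [piece[i:i + 1] for i in range(len(piece))]
    while len(parts) > 1:
        best = None  # (index, rank) of the best mergeable pair
        for i in range(len(parts) - 1):
            rank = ranks.get(parts[i] + parts[i + 1])
            if rank is not None and (best is None or rank < best[1]):
                best = (i, rank)
        if best is None:
            break  # no adjacent pair is in the vocabulary
        i = best[0]
        parts[i:i + 2] = [parts[i] + parts[i + 1]]
    return parts

# Toy vocabulary: lower rank = merged earlier during training
ranks = {b"he": 0, b"ll": 1, b"hell": 2, b"lo": 3}
print(bpe_merge(b"hello", ranks))
```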
## Performance Deep Dive
All benchmarks performed on Linux (6.16.8-arch3-1) with 24 CPU cores, comparing against tiktoken (reference Python implementation), Hugging Face tokenizers, and TokenDagger.
### Single Text Encoding
For single texts, splintr achieves 3-4x faster encoding across various text sizes:

Latency by content type:

Consistent low latency across Python code, JSON, English prose, and Chinese text makes splintr ideal for interactive applications and real-time processing.
### Batch Encoding
The real magic happens with batches. Splintr parallelizes across texts to achieve 10-12x speedup:

Higher speedups on larger batches where parallelization overhead is amortized. Perfect for:
- Training data preprocessing
- Bulk document tokenization
- API batch processing
- Data pipeline throughput
### Design Decision: Sequential by Default
Splintr uses sequential encoding for single texts and parallel encoding across batches based on empirical benchmarking:

Key findings:
- Sequential is faster for texts up to ~1MB (typical LLM prompts and documents)
- Rayon's parallelization overhead only pays off at ~1MB+ text sizes
- Most real-world inputs are well under 1MB
- `encode()` uses sequential processing for optimal single-text performance
- `encode_batch()` parallelizes across multiple texts for maximum throughput
- `encode_rayon()` is available for the rare cases where you have >1MB single texts
This architecture ensures splintr is optimized for the most common tokenization patterns in LLM applications.
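The routing described above amounts to a simple size threshold. A hypothetical sketch (the ~1MB constant and function names here are illustrative, not splintr's internals):

```python
PARALLEL_THRESHOLD = 1 << 20  # ~1 MB, where parallel overhead starts to pay off

def encode_auto(text: str, encode_seq, encode_par):
    """Route to the sequential path for typical inputs, parallel for huge ones."""
    if len(text.encode("utf-8")) < PARALLEL_THRESHOLD:
        return encode_seq(text)
    return encode_par(text)

# Toy encoders standing in for the real sequential/parallel paths
result = encode_auto("short prompt", lambda t: "seq", lambda t: "par")
```

Keeping the threshold well above typical prompt sizes means the common case never pays thread-pool setup costs.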
### Running Benchmarks Yourself
```bash
# Clone and install
git clone https://github.com/ml-rust/splintr.git
cd splintr
pip install -e .
pip install tiktoken

# Run the benchmark suite
cd benchmarks
python benchmark.py --model cl100k_base --output results/my_benchmark.json

# View results
cat results/my_benchmark.md
```
The benchmark suite tests single text encoding, batch encoding, streaming decoder performance, and special token handling across various content types.
## Regex Backends
Splintr uses a pure-Rust regex engine (regexr) by default, with optional PCRE2 support for compatibility.
**Default Backend (regexr):**
- Pure Rust implementation (no C dependencies)
- JIT compilation and SIMD acceleration
- Native UTF-8 and Unicode property support
**Optional PCRE2 Backend:**
```python
from splintr import Tokenizer

# Default: regexr backend (pure Rust)
tokenizer = Tokenizer.from_pretrained("cl100k_base")

# Optional: switch to PCRE2 (requires --features pcre2)
tokenizer = Tokenizer.from_pretrained("cl100k_base").pcre2(True)
```
To enable PCRE2, build with the feature flag:
```bash
maturin develop --release --features pcre2
```
**Benchmarking:**
```bash
# Compare backends (requires PCRE2 feature)
python benchmarks/benchmark_regexr_comparison.py --model cl100k_base

# Visual comparison with charts
python benchmarks/benchmark_regexr_viz.py --model cl100k_base
```
## Streaming Decoders
For real-time LLM applications where tokens arrive one at a time, Splintr provides streaming decoders that handle UTF-8 boundary alignment:
```python
# Regular streaming decoder (cl100k_base, o200k_base, llama3)
decoder = tokenizer.streaming_decoder()

# ByteLevel streaming decoder (deepseek_v3, GPT-2)
decoder = tokenizer.byte_level_streaming_decoder()

# Process tokens as they arrive
for token_id in token_stream:
    if text := decoder.add_token(token_id):
        print(text, end="", flush=True)
print(decoder.flush())
```
Why streaming decoders? BPE tokens don't align with UTF-8 character boundaries. A multi-byte character like "世" might split across tokens. The streaming decoder buffers incomplete sequences and only outputs complete characters.
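The buffering behavior can be reproduced with Python's standard incremental UTF-8 decoder. Here the 3-byte character "世" (`b"\xe4\xb8\x96"`) is split across two simulated token payloads:

```python
import codecs

dec = codecs.getincrementaldecoder("utf-8")()

# Simulated token payloads: "世" is split across the boundary
out1 = dec.decode(b"Hello \xe4\xb8")  # incomplete trailing bytes stay buffered
out2 = dec.decode(b"\x96!")           # buffered bytes complete the character

print(out1)  # "Hello "
print(out2)  # "世!"
```

Naively calling `bytes.decode()` on each token would raise `UnicodeDecodeError` at the split; splintr's streaming decoders handle this buffering for you.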
See the API Guide for detailed usage, examples, and best practices.
## Supported Vocabularies
| Vocabulary | Used By | Vocabulary Size | Special Tokens | Import Constant |
| ------------- | ---------------------------------- | --------------- | -------------- | ----------------------- |
| cl100k_base | GPT-4, GPT-3.5-turbo | ~100,000 | 5 + 54 agent | CL100K_BASE_PATTERN |
| o200k_base | GPT-4o | ~200,000 | 2 + 54 agent | O200K_BASE_PATTERN |
| llama3 | Llama 3, 3.1, 3.2, 3.3 (Meta) | ~128,000 | 11 + 54 agent | LLAMA3_PATTERN |
| deepseek_v3 | DeepSeek V3, DeepSeek R1 | ~128,000 | 17 + 54 agent | LLAMA3_PATTERN |
| mistral_v1 | Mistral 7B v0.1/v0.2, Mixtral 8x7B | ~32,000 | 3 + 54 agent | SENTENCEPIECE_PATTERN |
| mistral_v2 | Mistral 7B v0.3, Codestral, 8x22B | ~32,768 | 10 + 54 agent | SENTENCEPIECE_PATTERN |
| mistral_v3 | Mistral NeMo, Large 2, Pixtral | ~131,000 | 10 + 54 agent | MISTRAL_V3_PATTERN |
**OpenAI standard tokens:**
- cl100k_base: `<|endoftext|>`, `<|fim_prefix|>`, `<|fim_middle|>`, `<|fim_suffix|>`, `<|endofprompt|>`
- o200k_base: `<|endoftext|>`, `<|endofprompt|>`

**Meta Llama 3 standard tokens:**
- llama3: `<|begin_of_text|>`, `<|end_of_text|>`, `<|start_header_id|>`, `<|end_header_id|>`, `<|eot_id|>`, `<|eom_id|>` (3.1+), `<|python_tag|>` (3.1+), `<|step_id|>` (3.2-Vision), `<|image|>` (3.2-Vision)

**DeepSeek V3 standard tokens:**
- deepseek_v3: `<|begin▁of▁sentence|>`, `<|end▁of▁sentence|>`, `<think>`, `</think>`, `<|User|>`, `<|Assistant|>`, `<|EOT|>`, FIM tokens (`<|fim▁hole|>`, `<|fim▁begin|>`, `<|fim▁end|>`), tool calling tokens (`<|tool▁calls▁begin|>`, `<|to
