# Bhumi

⚡ Bhumi – The fastest AI inference client for Python, built with Rust for unmatched speed, efficiency, and scalability 🚀
## 🚀 BHUMI v0.4.82 – The Fastest AI Inference Client ⚡
## Introduction
Bhumi is the fastest AI inference client, built with Rust for Python. It is designed to maximize performance, efficiency, and scalability, making it the best choice for LLM API interactions.
## Why Bhumi?

- 🚀 Fastest AI inference client – outperforms alternatives with 2-3x higher throughput
- ⚡ Built with Rust for Python – high efficiency with low overhead
- 🌍 Supports 9+ AI providers – OpenAI, Anthropic, Google Gemini, Groq, Cerebras, SambaNova, Mistral, Cohere, and more
- 🖼️ Vision capabilities – image analysis across 5 providers (OpenAI, Anthropic, Gemini, Mistral, Cerebras)
- 🌊 Streaming and async capabilities – real-time responses with Rust-powered concurrency
- 🔄 Automatic connection pooling and retries – ensures reliability and efficiency
- 💡 Minimal memory footprint – uses up to 60% less memory than other clients
- 🏭 Production-ready – optimized for high-throughput applications, with OpenAI Responses API support
Bhumi (भूमि) is Sanskrit for Earth, symbolizing stability, grounding, and speed – just like our inference engine, which ensures rapid and stable performance. 🌍
## 🚀 What's New in v0.4.82

### ✨ Major New Features

- 🤝 Cohere Provider Support: added Cohere AI with an OpenAI-compatible `/v1/chat/completions` endpoint
- 🐍 Free-Threaded Python 3.13+ Support: true parallel execution without the GIL for maximum performance
- 🗑️ Removed orjson Dependency: simplified dependencies by using stdlib JSON for better compatibility
- ⬆️ PyO3 0.26 Upgrade: updated to the latest PyO3 with the modern Bound API and better performance
- 🔧 Tokio 1.47: latest async runtime for improved concurrency
### 🔧 Technical Improvements

- Enhanced OCR Integration: `client.ocr()` and `client.upload_file()` methods
- Unified API: a single method handles both file upload and OCR processing
- Better Error Handling: improved timeouts and validation for OCR operations
- Production Ready: optimized for high-volume document processing workflows
### 📄 OCR Capabilities

- Document Types: PDF, JPEG, PNG, and more formats
- Text Extraction: high-accuracy OCR with layout preservation
- Structured Data: extract tables, forms, and key-value pairs
- Bounding Boxes: precise text positioning and element detection
- Multi-format Output: Markdown text + structured JSON data
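Before uploading a document, it can help to pin down its MIME type. The helper below is a hypothetical convenience (not part of Bhumi's API) covering the document types listed above:

```python
import mimetypes

def guess_document_type(path: str) -> str:
    """Return the MIME type for a document path, defaulting to a binary stream."""
    mime, _encoding = mimetypes.guess_type(path)
    return mime or "application/octet-stream"

# The formats listed above resolve as expected:
print(guess_document_type("invoice.pdf"))  # application/pdf
print(guess_document_type("scan.jpeg"))    # image/jpeg
print(guess_document_type("page.png"))     # image/png
```

The resulting type can accompany the file when calling `client.upload_file()`; check your installed version for the exact signature.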
## 🚀 What's New in v0.4.8

### ✨ Major New Features

- 🌐 8+ AI Providers: added Mistral AI support with vision capabilities (Pixtral models)
- 🖼️ Vision Support: image analysis across 5 providers (OpenAI, Anthropic, Gemini, Mistral, Cerebras)
- 📡 OpenAI Responses API: intelligent routing for new API patterns with better performance
- 🔧 Satya v0.3.7: upgraded with nested model support and enhanced validation
- 🏭 Production Ready: improved wheel building, Docker compatibility, and CI/CD
### 🔧 Technical Improvements

- Cross-platform Wheels: enhanced building for Linux, macOS (Intel + Apple Silicon), and Windows
- OpenSSL Integration: proper SSL library linking on all platforms
- Workflow Optimization: disabled integration tests for faster releases
- Bug Fixes: resolved MAP-Elites buffer issues and Satya validation problems
- Performance Optimizations: improved MAP-Elites archive loading with orjson + Satya validation
- Production Ready: enhanced error handling and timeout protection
๐ Provider Support Matrix
| Provider | Chat | Streaming | Tools | Vision | Structured | |----------|------|-----------|-------|---------|------------| | OpenAI | โ | โ | โ | โ | โ | | Anthropic | โ | โ | โ | โ | โ ๏ธ | | Gemini | โ | โ | โ | โ | โ ๏ธ | | Groq | โ | โ | โ | โ | โ ๏ธ | | Cerebras | โ | โ | โ * | โ | โ ๏ธ | | SambaNova | โ | โ | โ | โ | โ ๏ธ | | OpenRouter | โ | โ | โ | โ | โ ๏ธ | | Cohere | โ | โ | โ | โ | โ ๏ธ |
*Cerebras tools require specific models
## Installation

No Rust compiler required! 🎉 Pre-compiled wheels are available for all major platforms:

```bash
pip install bhumi
```

Supported Platforms:

- 🐧 Linux (x86_64)
- 🍎 macOS (Intel & Apple Silicon)
- 🪟 Windows (x86_64)
- 🐍 Python 3.8, 3.9, 3.10, 3.11, 3.12

The latest v0.4.8 release includes improved wheel building and cross-platform compatibility!
## Quick Start

### OpenAI Example

```python
import asyncio
import os

from bhumi.base_client import BaseLLMClient, LLMConfig

api_key = os.getenv("OPENAI_API_KEY")

async def main():
    config = LLMConfig(
        api_key=api_key,
        model="openai/gpt-4o",
        debug=True
    )
    client = BaseLLMClient(config)
    response = await client.completion([
        {"role": "user", "content": "Tell me a joke"}
    ])
    print(f"Response: {response['text']}")

if __name__ == "__main__":
    asyncio.run(main())
```
## ⚡ Performance Optimizations

Bhumi includes cutting-edge performance optimizations that make it 2-3x faster than alternatives:

### 🧠 MAP-Elites Buffer Strategy (v0.4.8 Enhanced)

- Ultra-fast archive loading with Satya v0.3.7 validation + stdlib JSON parsing (2-3x faster than standard JSON)
- Trained buffer configurations optimized through evolutionary algorithms
- Automatic buffer adjustment based on response patterns and historical data
- Type-safe validation with comprehensive error checking
- Secure loading without unsafe `eval()` operations
- Nested model support for complex data structures
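The adaptive-buffer idea is easier to see in miniature. The sketch below is purely illustrative (it is not Bhumi's MAP-Elites implementation): it grows a read buffer geometrically when a response overflows it and shrinks it when responses are consistently much smaller:

```python
def next_buffer_size(current: int, observed: int,
                     lo: int = 1024, hi: int = 1 << 20) -> int:
    """Return an adjusted buffer size given the last observed response size."""
    if observed > current:
        return min(hi, current * 2)   # overflow: grow geometrically
    if observed < current // 4:
        return max(lo, current // 2)  # heavily oversized: shrink
    return current                    # close enough: keep as-is

size = 4096
for response_bytes in (10_000, 12_000, 500, 400):
    size = next_buffer_size(size, response_bytes)
print(size)  # 4096 – the buffer settles back toward the small responses
```

MAP-Elites goes further by searching a whole archive of such configurations offline, but the feedback loop per response has this shape.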
### 📊 Performance Status Check

Check whether you have optimal performance with the built-in diagnostics:

```python
from bhumi.utils import print_performance_status

# Check optimization status
print_performance_status()
# 🚀 Bhumi Performance Status
# ✅ Optimized MAP-Elites archive loaded
# ⚡ Optimization Details:
#   • Entries: 15,644 total, 15,644 optimized
#   • Coverage: 100.0% of search space
#   • Loading: Satya validation + stdlib JSON parsing (2-3x faster)
```
### 📦 Archive Distribution (v0.4.8 Enhanced)

When you install Bhumi, you automatically get:

- Pre-trained MAP-Elites archive for optimal buffer sizing
- Fast stdlib JSON parsing (2-3x faster than standard `json` usage)
- Satya v0.3.7-powered type validation for bulletproof data loading
- Performance metrics and diagnostics
- Nested model support for complex configurations
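Conceptually, the loading path is stdlib JSON parsing followed by schema validation. The snippet below is a simplified stand-in (the real validation uses Satya models; the field names here are illustrative):

```python
import json

def load_archive_entry(raw: str) -> dict:
    """Parse one archive entry with stdlib json and reject malformed data."""
    entry = json.loads(raw)  # plain JSON parsing; no eval(), no third-party parser
    if not isinstance(entry, dict):
        raise ValueError("entry must be a JSON object")
    if not isinstance(entry.get("buffer_size"), int) or entry["buffer_size"] <= 0:
        raise ValueError("entry needs a positive integer buffer_size")
    return entry

print(load_archive_entry('{"buffer_size": 8192, "fitness": 0.93}'))
# {'buffer_size': 8192, 'fitness': 0.93}
```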
### Gemini Example

```python
import asyncio
import os

from bhumi.base_client import BaseLLMClient, LLMConfig

api_key = os.getenv("GEMINI_API_KEY")

async def main():
    config = LLMConfig(
        api_key=api_key,
        model="gemini/gemini-2.0-flash",
        debug=True
    )
    client = BaseLLMClient(config)
    response = await client.completion([
        {"role": "user", "content": "Tell me a joke"}
    ])
    print(f"Response: {response['text']}")

if __name__ == "__main__":
    asyncio.run(main())
```
### Cerebras Example

```python
import asyncio
import os

from bhumi.base_client import BaseLLMClient, LLMConfig

api_key = os.getenv("CEREBRAS_API_KEY")

async def main():
    config = LLMConfig(
        api_key=api_key,
        model="cerebras/llama3.1-8b",  # gateway-style model parsing is supported
        debug=True,
    )
    client = BaseLLMClient(config)
    response = await client.completion([
        {"role": "user", "content": "Summarize the benefits of Bhumi in one sentence."}
    ])
    print(f"Response: {response['text']}")

if __name__ == "__main__":
    asyncio.run(main())
```
### Mistral AI Example (with Vision)

```python
import asyncio
import os

from bhumi.base_client import BaseLLMClient, LLMConfig

api_key = os.getenv("MISTRAL_API_KEY")

async def main():
    # Text-only model
    config = LLMConfig(
        api_key=api_key,
        model="mistral/mistral-small-latest",
        debug=True
    )
    client = BaseLLMClient(config)
    response = await client.completion([
        {"role": "user", "content": "Bonjour! Parlez-moi de Paris."}  # French-language prompt
    ])
    print(f"Mistral Response: {response['text']}")

    # Vision model for image analysis
    vision_config = LLMConfig(
        api_key=api_key,
        model="mistral/pixtral-12b-2409"  # Pixtral vision model
    )
    vision_client = BaseLLMClient(vision_config)
    response = await vision_client.completion([
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {"type": "image_url", "image_url": {"url": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNk+M9QDwADhgGAWjR9awAAAABJRU5ErkJggg=="}}
            ]
        }
    ])
    print(f"Vision Analysis: {response['text']}")

if __name__ == "__main__":
    asyncio.run(main())
```
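For local images, the base64 data URL used above can be built programmatically. This helper is a hypothetical convenience (not a Bhumi API) that produces the OpenAI-style content part the vision providers accept:

```python
import base64

def image_part(data: bytes, mime: str = "image/png") -> dict:
    """Wrap raw image bytes as an image_url content part with a data URL."""
    b64 = base64.b64encode(data).decode("ascii")
    return {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{b64}"}}

part = image_part(b"\x89PNG\r\n", "image/png")
print(part["image_url"]["url"].startswith("data:image/png;base64,"))  # True
```

A message then combines a text part with, for example, `image_part(open(path, "rb").read())`, exactly as in the vision example above.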
## Provider API: Multi-Provider Model Format

Bhumi unifies providers using a simple `provider/model` format in `LLMConfig.model`. Base URLs are set automatically for known providers; you can override with `base_url`.

- Supported providers: `openai`, `anthropic`, `gemini`, `groq`, `sambanova`, `openrouter`, `cerebras`, `mistral`, `cohere`
- Foundation providers use `provider/model`. Gateways like Groq/OpenRouter/SambaNova may use nested paths after the provider (e.g., `openrouter/meta-llama/llama-3.1-8b-instruct`).
```python
import os

from bhumi.base_client import BaseLLMClient, LLMConfig

# OpenAI
client = BaseLLMClient(LLMConfig(api_key=os.getenv("OPENAI_API_KEY"), model="openai/gpt-4o"))

# Anthropic
client = BaseLLMClient(LLMConfig(api_key=os.getenv("ANTHROPIC_API_KEY"), model="anthropic/claude-3-5-sonnet-latest"))

# Gemini (OpenAI-compatible endpoint)
client = BaseLLMClient(LLMConfig(api_key=os.getenv("GEMINI_API_KEY"), model="gemini/gemini-2.0-flash"))
```
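The `provider/model` convention above can be sketched with a tiny helper (hypothetical; it mirrors the documented format rules rather than Bhumi's internal parser):

```python
def split_model(model: str) -> tuple[str, str]:
    """Split 'provider/model' at the first slash; gateway models keep nested paths."""
    provider, _, rest = model.partition("/")
    return provider, rest

print(split_model("openai/gpt-4o"))
# ('openai', 'gpt-4o')
print(split_model("openrouter/meta-llama/llama-3.1-8b-instruct"))
# ('openrouter', 'meta-llama/llama-3.1-8b-instruct')
```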
