Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source, speech, and multimodal models on cloud, on-prem, or your laptop — all through one unified, production-ready inference API.
Xorbits Inference: Model Serving Made Easy 🤖
<p align="center"> <a href="https://xinference.io/en">Xinference Enterprise</a> · <a href="https://inference.readthedocs.io/en/latest/getting_started/installation.html#installation">Self-hosting</a> · <a href="https://inference.readthedocs.io/">Documentation</a> </p> <p align="center"> <a href="./README.md"><img alt="README in English" src="https://img.shields.io/badge/English-454545?style=for-the-badge"></a> <a href="./README_zh_CN.md"><img alt="简体中文版自述文件" src="https://img.shields.io/badge/中文介绍-d9d9d9?style=for-the-badge"></a> <a href="./README_ja_JP.md"><img alt="日本語のREADME" src="https://img.shields.io/badge/日本語-d9d9d9?style=for-the-badge"></a> </p> <br />
Xorbits Inference (Xinference) is a powerful and versatile library designed to serve language, speech recognition, and multimodal models. With Xorbits Inference, you can effortlessly deploy and serve your own or state-of-the-art built-in models using just a single command. Whether you are a researcher, developer, or data scientist, Xorbits Inference empowers you to unleash the full potential of cutting-edge AI models.
<div align="center"> <i><a href="https://discord.gg/Xw9tszSkr5">👉 Join our Discord community!</a></i> </div>

🔥 Hot Topics
Framework Enhancements
- Agent-native Serving: Xinference integrates with Xagent to enable dynamic planning, tool use, and autonomous multi-step reasoning — moving beyond static pipelines.
- Auto batch: Multiple concurrent requests are automatically batched, significantly improving throughput: #4197
- Xllamacpp: a new llama.cpp Python binding, maintained by the Xinference team, that supports continuous batching and is more production-ready: #2997
- Distributed inference: run models across multiple workers: #2877
- vLLM enhancement: shared KV cache across multiple replicas: #2732
New Models
- Built-in support for Qwen-3.5: #4639
- Built-in support for GLM-5: #4638
- Built-in support for MiniMax-M2.7
- Built-in support for MiniMax-M2.5: #4630
- Built-in support for Kimi-K2.5: #4631
- Built-in support for FLUX.2-Klein: #4596
- Built-in support for Qwen3-ASR: #4581
- Built-in support for GLM-4.7: #4565
- Built-in support for MinerU2.5-2509-1.2B: #4569
Integrations
- Xagent: an enterprise agent platform for building and running AI agents with planning, memory, and tool use — not limited to rigid workflows.
- Dify: an LLMOps platform that enables developers (and even non-developers) to quickly build useful applications based on large language models, ensuring they are visual, operable, and improvable.
- FastGPT: a knowledge-based platform built on LLMs that offers out-of-the-box data processing and model invocation capabilities, and allows workflow orchestration through Flow visualization.
- RAGFlow: an open-source RAG engine based on deep document understanding.
- MaxKB: short for Max Knowledge Brain, a powerful and easy-to-use AI assistant that integrates Retrieval-Augmented Generation (RAG) pipelines, supports robust workflows, and provides advanced MCP tool-use capabilities.
Key Features
🌟 Model Serving Made Easy: Simplify the process of serving large language, speech recognition, and multimodal models. You can set up and deploy your models for experimentation and production with a single command.
⚡️ State-of-the-Art Models: Experiment with cutting-edge built-in models using a single command. Inference provides access to state-of-the-art open-source models!
🖥 Heterogeneous Hardware Utilization: Make the most of your hardware resources with ggml. Xorbits Inference intelligently utilizes heterogeneous hardware, including GPUs and CPUs, to accelerate your model inference tasks.
⚙️ Flexible API and Interfaces: Offer multiple interfaces for interacting with your models, supporting OpenAI compatible RESTful API (including Function Calling API), RPC, CLI and WebUI for seamless model management and interaction.
🌐 Distributed Deployment: Excel in distributed deployment scenarios, allowing the seamless distribution of model inference across multiple devices or machines.
🔌 Built-in Integration with Third-Party Libraries: Xorbits Inference seamlessly integrates with popular third-party libraries including LangChain, LlamaIndex, Dify, and Chatbox.
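To illustrate the OpenAI-compatible RESTful API mentioned above, here is a minimal sketch of a chat-completion request body. The host/port and the model UID (`qwen2.5-instruct`) are assumptions, not guaranteed defaults; use whichever model you have actually launched.

```python
import json

# Chat-completion payload in the standard OpenAI shape.
# "qwen2.5-instruct" is a placeholder model UID for illustration only.
payload = {
    "model": "qwen2.5-instruct",
    "messages": [
        {"role": "user", "content": "What is the largest animal?"},
    ],
}

# POST this body to the server's OpenAI-compatible route, e.g.
# http://127.0.0.1:9997/v1/chat/completions (assuming the default local port 9997).
request_body = json.dumps(payload)
print(request_body)
```

Because the route mirrors OpenAI's, the official `openai` Python client also works when pointed at the server's `/v1` base URL.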
Why Xinference
| Feature | Xinference | FastChat | OpenLLM | RayLLM |
|------------------------------------------------|------------|----------|---------|--------|
| OpenAI-Compatible RESTful API | ✅ | ✅ | ✅ | ✅ |
| vLLM Integrations | ✅ | ✅ | ✅ | ✅ |
| More Inference Engines (GGML, TensorRT) | ✅ | ❌ | ✅ | ✅ |
| More Platforms (CPU, Metal) | ✅ | ✅ | ❌ | ❌ |
| Multi-node Cluster Deployment | ✅ | ❌ | ❌ | ✅ |
| Image Models (Text-to-Image) | ✅ | ✅ | ❌ | ❌ |
| Text Embedding Models | ✅ | ❌ | ❌ | ❌ |
| Multimodal Models | ✅ | ❌ | ❌ | ❌ |
| Audio Models | ✅ | ❌ | ❌ | ❌ |
| More OpenAI Functionalities (Function Calling) | ✅ | ❌ | ❌ | ❌ |
Using Xinference
- Self-hosting Xinference Community Edition: quickly get Xinference running in your environment with this starter guide. Use our documentation for further references and more in-depth instructions.
- Xinference for enterprises / organizations: we provide additional enterprise-centric features. Send us an email to discuss enterprise needs.
Staying Ahead
Star Xinference on GitHub and be instantly notified of new releases.

Getting Started
Jupyter Notebook
The lightest way to experience Xinference is to try our Jupyter Notebook on Google Colab.
Docker
Nvidia GPU users can start the Xinference server using the Xinference Docker image. Before running the command below, ensure that both Docker and CUDA are set up on your system.
```bash
docker run --name xinference -d -p 9997:9997 -e XINFERENCE_HOME=/data -v </on/your/host>:/data --gpus all xprobe/xinference:latest xinference-local -H 0.0.0.0
```
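Once the container is up, a quick liveness check is to hit the OpenAI-compatible model listing route. A minimal sketch, assuming the port mapping from the `docker run` command above:

```shell
# Base endpoint published by the container's -p 9997:9997 mapping.
ENDPOINT="http://127.0.0.1:9997"
MODELS_URL="$ENDPOINT/v1/models"

# Query it to confirm the server is responding, e.g.:
#   curl "$MODELS_URL"
echo "$MODELS_URL"
```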
K8s via Helm
Ensure that you have GPU support in your Kubernetes cluster, then install as follows.
```bash
# add repo
helm repo add xinference https://xorbitsai.github.io/xinference-helm-charts
# update indexes and query xinference versions
helm repo update xinference
helm
```