
Infinity

Infinity is a high-throughput, low-latency serving engine for text-embeddings, reranking models, clip, clap and colpali.

Install / Use

/learn @michaelfeil/Infinity
About this skill

- Quality Score: 0/100
- Supported Platforms: Universal

README


[![Contributors][contributors-shield]][contributors-url] [![Forks][forks-shield]][forks-url] [![Stargazers][stars-shield]][stars-url] [![Issues][issues-shield]][issues-url] [![MIT License][license-shield]][license-url]

Infinity ♾️

[![codecov][codecov-shield]][codecov-url] [![ci][ci-shield]][ci-url] [![Downloads][pepa-shield]][pepa-url]

Infinity is a high-throughput, low-latency REST API for serving text-embeddings, reranking models, clip, clap and colpali. Infinity is developed under the MIT License.

Why Infinity

  • Deploy any model from HuggingFace: deploy any embedding, reranking, clip or sentence-transformer model from HuggingFace.
  • Fast inference backends: The inference server is built on top of PyTorch, optimum (ONNX/TensorRT) and CTranslate2, using FlashAttention to get the most out of your NVIDIA CUDA, AMD ROCM, CPU, AWS INF2 or APPLE MPS accelerator. Infinity uses dynamic batching and dedicated worker threads for tokenization.
  • Multi-modal and multi-model: Mix-and-match multiple models. Infinity orchestrates them.
  • Tested implementation: Unit and end-to-end tested, so embeddings served via Infinity are embedded correctly. Lets API users create embeddings till infinity and beyond.
  • Easy to use: Built on FastAPI. The Infinity CLI v2 lets you set every argument via environment variable or CLI flag. The OpenAPI schema is aligned with OpenAI's API specs; see the request sketch below. View the docs at https://michaelfeil.github.io/infinity on how to get started.
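Because the schema mirrors OpenAI's embeddings route, any HTTP client can talk to a running server. A minimal sketch, assuming a server launched as in "Getting started" below (port 7997, model BAAI/bge-small-en-v1.5); verify the exact payload against the interactive OpenAPI docs your server exposes:

```bash
# Hedged sketch: request an embedding from a local Infinity server via the
# OpenAI-style /embeddings route (port and model id from "Getting started").
curl http://localhost:7997/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "BAAI/bge-small-en-v1.5", "input": ["Hello, embedding world!"]}'
```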
<p align="center"> <a href="https://github.com/basetenlabs/truss-examples/tree/7025918c813d08d718b8939f44f10651a0ff2c8c/custom-server/infinity-embedding-server"><img src="https://avatars.githubusercontent.com/u/54861414" alt="Logo Baseten.co" width="50"/></a> <a href="https://github.com/runpod-workers/worker-infinity-embedding"><img src="https://github.com/user-attachments/assets/24f1906d-31b8-4e16-a479-1382cbdea046" alt="Logo Runpod" width="50"/></a> <a href="https://www.truefoundry.com/cognita"><img src="https://github.com/user-attachments/assets/1b515b0f-2332-4b12-be82-933056bddee4" alt="Logo TrueFoundry" width="50"/></a> <a href="https://vast.ai/article/serving-infinity"><img src="https://github.com/user-attachments/assets/8286d620-f403-48f5-bd7f-f471b228ae7b" alt="Logo Vast" width="46"/></a> <a href="https://www.dataguard.de"><img src="https://github.com/user-attachments/assets/3fde1ac6-c299-455d-9fc2-ba4012799f9c" alt="Logo DataGuard" width="50"/></a> <a href="https://community.sap.com/t5/artificial-intelligence-and-machine-learning-blogs/bring-open-source-llms-into-sap-ai-core/ba-p/13655167"><img src="https://github.com/user-attachments/assets/743e932b-ed5b-4a71-84cb-f28235707a84" alt="Logo SAP" width="47"/></a> <a href="https://x.com/StuartReid1929/status/1763434100382163333"><img src="https://github.com/user-attachments/assets/477a4c54-1113-434b-83bc-1985f10981d3" alt="Logo Nosible" width="44"/></a> <a href="https://github.com/freshworksinc/freddy-infinity"><img src="https://github.com/user-attachments/assets/a68da78b-d958-464e-aaf6-f39132be68a0" alt="Logo FreshWorks" width="50"/></a> <a href="https://github.com/dstackai/dstack/tree/master/examples/deployment/infinity"><img src="https://github.com/user-attachments/assets/9cde2d6b-dc16-4f0a-81ba-535a84321467" alt="Logo Dstack" width="50"/></a> <a href="https://embeddedllm.com/blog/"><img src="https://avatars.githubusercontent.com/u/148834374" alt="Logo JamAI" width="50"/></a> <a href="https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct#infinity_emb"><img src="https://avatars.githubusercontent.com/u/1961952" alt="Logo Alibaba Group" width="50"/></a> <a href="https://github.com/bentoml/BentoInfinity/"><img src="https://avatars.githubusercontent.com/u/49176046" alt="Logo BentoML" width="50"/></a> <a href="https://x.com/bo_wangbo/status/1766371909086724481"><img src="https://avatars.githubusercontent.com/u/60539444" alt="Logo JinaAi" width="50"/></a> <a href="https://github.com/dwarvesf/llm-hosting"><img src="https://avatars.githubusercontent.com/u/10388449" alt="Logo Dwarves Foundation" width="50"/></a> <a href="https://github.com/huggingface/chat-ui/blob/daf695ea4a6e2d081587d7dbcae3cacd466bf8b2/docs/source/configuration/embeddings.md#openai"><img src="https://avatars.githubusercontent.com/u/25720743" alt="Logo HF" width="50"/></a> <a href="https://www.linkedin.com/posts/markhng525_join-me-and-ekin-karabulut-at-the-ai-infra-activity-7163233344875393024-LafB?utm_source=share&utm_medium=member_desktop"><img src="https://avatars.githubusercontent.com/u/86131705" alt="Logo Gradient.ai" width="50"/></a> </p>

Latest News 🔥

  • [2025/07] Blackwell support
  • [2024/11] AMD, CPU, ONNX docker images
  • [2024/10] pip install infinity_client
  • [2024/07] Inference deployment example via Modal and a free GPU deployment
  • [2024/06] Support for multi-modal: clip, text-classification & launch all arguments from env variables
  • [2024/05] launch multiple models using the v2 cli, including --api-key
  • [2024/03] infinity adds experimental int8 (cpu/cuda) and fp8 (H100/MI300) support
  • [2024/03] Docs are online: https://michaelfeil.github.io/infinity/latest/
  • [2024/02] Community meetup at the Run:AI Infra Club
  • [2024/01] TensorRT / ONNX inference
  • [2023/10] Initial release

Getting started

Launch the CLI via pip install

```bash
pip install infinity-emb[all]
```
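Note: some shells (e.g. zsh) treat the square brackets as glob patterns, so quote the extra there: `pip install "infinity-emb[all]"`.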

After your pip install, with your venv active, you can run the CLI directly.

```bash
infinity_emb v2 --model-id BAAI/bge-small-en-v1.5
```

Check the v2 --help command to get a description of all parameters.

```bash
infinity_emb v2 --help
```
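Every CLI flag can alternatively be supplied as an environment variable, which is convenient in container setups. A hedged sketch; the exact `INFINITY_*` names are assumptions here, so confirm them in the `v2 --help` output:

```bash
# Assumed env-var naming (check `infinity_emb v2 --help` for the exact names).
export INFINITY_MODEL_ID="BAAI/bge-small-en-v1.5"
export INFINITY_PORT="7997"
infinity_emb v2
```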

Launch the CLI using a pre-built docker container (recommended)

Instead of installing the CLI via pip, you may also use docker to run michaelf34/infinity. Make sure your accelerator is exposed to the container (i.e. install nvidia-docker and enable it with --gpus all).

```bash
port=7997
model1=michaelfeil/bge-small-en-v1.5
model2=mixedbread-ai/mxbai-rerank-xsmall-v1
volume=$PWD/data

docker run -it --gpus all \
 -v $volume:/app/.cache \
 -p $port:$port \
 michaelf34/infinity:latest \
 v2 \
 --model-id $model1 \
 --model-id $model2 \
 --port $port
```
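The second model in this launch is a reranker, which is queried via its own route. A hedged sketch of a rerank request; the field names follow Infinity's rerank endpoint, but confirm them against the OpenAPI docs served by your container:

```bash
# Rank candidate documents against a query using the reranker launched above.
curl "http://localhost:$port/rerank" \
  -H "Content-Type: application/json" \
  -d '{"model": "mixedbread-ai/mxbai-rerank-xsmall-v1", "query": "Where is Munich?", "documents": ["Munich is in Germany.", "The sky is blue."]}'
```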

The cache path inside the docker container is set by the environment variable HF_HOME.
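If you prefer a different cache location, override HF_HOME and mount your volume there instead; a minimal sketch reusing the variables above (the path /custom-cache is an arbitrary example):

```bash
# Relocate the HuggingFace cache by overriding HF_HOME inside the container.
docker run -it --gpus all \
 -v $volume:/custom-cache \
 -e HF_HOME=/custom-cache \
 -p $port:$port \
 michaelf34/infinity:latest \
 v2 --model-id $model1 --port $port
```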

Specialized docker images

<details> <summary>Docker container for CPU</summary> Use the `latest-cpu` image or `x.x.x-cpu` for a slimmer image. Run it like any other CPU-only docker image. Optimum/ONNX is often the preferred engine.
```bash
docker run -it \
 -v $volume:/app/.cache \
 -p $port:$port \
 michaelf34/infinity:latest-cpu \
 v2 \
 --engine optimum \
 --model-id $model1 \
 --model-id $model2 \
 --port $port
```
</details> <details> <summary>Docker Container for ROCm (MI200 Series and MI300 Series)</summary> Use the `latest-rocm` image or `x.x.x-rocm` for ROCm-compatible inference. **This image is currently not built via CI/CD (too large); consider pinning to an exact version.** Make sure ROCm is correctly installed and ready to use with Docker.

Visit Docs for more info.

</details> <details> <summary>Docker Container for Onnx-GPU, Cuda Extensions, TensorRT</summary> Use the `latest-trt-onnx` image or `x.x.x-trt-onnx` for nvidia-compatible inference. **This image is currently not built via CI/CD (too large); consider pinning to an exact version.**

This image has support for:

  • ONNX-Cuda "CudaExecutionProvider"
  • ONNX-TensorRT "TensorRTExecutionProvider" (may not always work due to version mismatches with ONNX Runtime)
  • CUDA extensions and packages, e.g. Tri Dao's `flash-attn` (`pip install flash-attn`) when using PyTorch
  • nvcc compiler support
```bash
docker run -it \
 -v $volume:/app/.cache \
 -p $port:$port \
 michaelf34/infinity:latest-trt-onnx \
 v2 \
 --engine optimum \
 --device cuda \
 --model-id $model1 \
 --port $port
```
</details>

Using local models with Docker container

To deploy a local model with the docker container, mount the model directory into the container and pass the in-container path to the launch command.

Example:

```bash
git lfs install
cd /tmp
mkdir models && cd models && git clone https://huggingface.co/BAAI/bge-small-en-v1.5
docker run -it -v /tmp/models:/models -p 8081:8081 michaelf34/infinity:latest v2 --model-id "/models/bge-small-en-v1.5" --port 8081
```
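Once the container is up, you can sanity-check the deployment. A hedged sketch; the /models listing route is assumed from the OpenAI alignment, and the mounted path doubles as the model id unless you configure a served name:

```bash
# List served models, then request an embedding from the locally mounted model.
curl http://localhost:8081/models
curl http://localhost:8081/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "/models/bge-small-en-v1.5", "input": ["test sentence"]}'
```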

Advanced CLI usage

<details> <summary>Launching multiple models at once</summary>

Since `infinity_emb>=0.0.34`, you can use the cli v2 method to launch multiple models at the same time. Check out `infinity_emb v2 --help` for all arguments and validation.

Multiple Model CLI Playbook:

    1. CLI options can be repeated, e.g. `v2 --model-id model1 --model-id model2` (see the sketch below).
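A minimal sketch of such a multi-model launch (model ids are placeholders; `--api-key` is the flag referenced in the 2024/05 news entry):

```bash
# Serve an embedder and a reranker from one process: repeat --model-id per model.
infinity_emb v2 \
  --model-id BAAI/bge-small-en-v1.5 \
  --model-id mixedbread-ai/mxbai-rerank-xsmall-v1 \
  --api-key "your-secret-key"
```

</details>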

Repository stats

- GitHub Stars: 2.7k
- Forks: 182
- Category: Development
- Updated: 2h ago
- Languages: Python

Security Score

100/100, audited on Mar 25, 2026. No findings.