
<!-- SPDX-FileCopyrightText: Copyright (c) 2024-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. SPDX-License-Identifier: Apache-2.0 Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->


| Docs | Roadmap | Recipes | Examples | Prebuilt Containers | Blog | Design Proposals |

Dynamo

The open-source, datacenter-scale inference stack. Dynamo is the orchestration layer above inference engines: it doesn't replace SGLang, TensorRT-LLM, or vLLM; it turns them into a coordinated multi-node inference system. Disaggregated serving, intelligent routing, multi-tier KV caching, and automatic scaling work together to maximize throughput and minimize latency for LLM, reasoning, multimodal, and video generation workloads.

Built in Rust for performance, Python for extensibility.

When to use Dynamo

  • You're serving LLMs across multiple GPUs or nodes and need to coordinate them
  • You want KV-aware routing to avoid redundant prefill computation
  • You need to independently scale prefill and decode (disaggregated serving)
  • You want automatic scaling that meets latency SLAs at minimum total cost of ownership (TCO)
  • You need fast cold-starts when spinning up new replicas

If you're running a single model on a single GPU, your inference engine alone is probably sufficient.

Feature support at a glance:

| | SGLang | TensorRT-LLM | vLLM |
|---|:----:|:----------:|:--:|
| Disaggregated Serving | ✅ | ✅ | ✅ |
| KV-Aware Routing | ✅ | ✅ | ✅ |
| SLA-Based Planner | ✅ | ✅ | ✅ |
| KVBM | 🚧 | ✅ | ✅ |
| Multimodal | ✅ | ✅ | ✅ |
| Tool Calling | ✅ | ✅ | ✅ |

Full Feature Matrix → covers LoRA, request migration, speculative decoding, and feature interactions.

Key Results

| Result | Context |
|--------|---------|
| 7x higher throughput per GPU | DeepSeek R1 on GB200 NVL72 w/ Dynamo vs B200 without (InferenceX) |
| 7x faster model startup | ModelExpress weight streaming (DeepSeek-V3 on H200) |
| 2x faster time to first token | KV-aware routing, Qwen3-Coder 480B (Baseten benchmark) |
| 80% fewer SLA breaches | Planner autoscaling at 5% lower TCO (Alibaba APSARA 2025 @ 2:50:00) |
| 750x higher throughput | DeepSeek-R1 on GB300 NVL72 (InferenceXv2) |

What Dynamo Does

Most inference engines optimize a single GPU or a single node. Dynamo is the orchestration layer above them — it turns a cluster of GPUs into a coordinated inference system.

<p align="center"> <img src="./docs/assets/img/dynamo-readme-overview.svg" alt="Dynamo architecture overview" width="600" /> </p>

Architecture Deep Dive →

Core Capabilities

| Capability | What it does | Why it matters |
|------------|-------------|----------------|
| Disaggregated Prefill/Decode | Separates prefill and decode into independently scalable GPU pools | Maximizes GPU utilization; each phase runs on hardware tuned for its workload |
| KV-Aware Routing | Routes requests based on worker load and KV cache overlap | Eliminates redundant prefill computation — 2x faster TTFT |
| KV Block Manager (KVBM) | Offloads KV cache across GPU → CPU → SSD → remote storage | Extends effective context length beyond GPU memory |
| ModelExpress | Streams model weights GPU-to-GPU via NIXL/NVLink | 7x faster cold-start for new replicas |
| Planner | SLA-driven autoscaler that profiles workloads and right-sizes pools | Meets latency targets at minimum total cost of ownership (TCO) |
| Grove | K8s operator for topology-aware gang scheduling (NVL72) | Places workloads optimally across racks, hosts, and NUMA nodes |
| AIConfigurator | Simulates 10K+ deployment configs in seconds | Finds optimal serving config without burning GPU-hours |
| Fault Tolerance | Canary health checks + in-flight request migration | Workers fail; user requests don't |
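The KV-aware routing idea above can be made concrete with a toy scoring function: prefer the worker whose cached KV blocks overlap most with the incoming prompt, penalized by its current load. This is an illustrative sketch, not Dynamo's actual router logic; all names and the scoring formula here are hypothetical.

```python
# Toy KV-aware router: score = cached-block overlap minus current load.
# (Illustrative only; Dynamo's real scoring is more sophisticated.)

def route(prompt_blocks: set, workers: list[dict]) -> str:
    def score(w):
        overlap = len(prompt_blocks & w["cached_blocks"])  # prefill work saved
        return overlap - w["load"]                         # traded off vs queue depth
    return max(workers, key=score)["name"]

workers = [
    {"name": "w0", "cached_blocks": {1, 2, 3}, "load": 1},
    {"name": "w1", "cached_blocks": {1}, "load": 0},
]
print(route({1, 2, 3, 4}, workers))  # w0: three reused blocks outweigh its higher load
```

With no cache overlap anywhere, the same function degrades to plain least-loaded routing, which is the behavior you'd want as a fallback.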

New in 1.0

  • Zero-config deploy (DGDR) (beta): Specify model, HW, and SLA in one YAML — AIConfigurator auto-profiles the workload, Planner optimizes the topology, and Dynamo deploys
  • Agentic inference: Per-request hints for latency priority, expected output length, and cache pinning TTL. LangChain + NeMo Agent Toolkit integrations
  • Multimodal E/P/D: Disaggregated encode/prefill/decode with embedding cache — 30% faster TTFT on image workloads
  • Video generation: Native FastVideo + SGLang Diffusion support — real-time 1080p on single B200
  • K8s Inference Gateway plugin: KV-aware routing inside the standard Kubernetes gateway
  • Storage-tier KV offload: S3/Azure blob support + global KV events for cluster-wide cache visibility

Quick Start

Option A: Container (fastest)

```bash
# Pull a prebuilt container (SGLang example)
docker run --gpus all --network host --rm -it nvcr.io/nvidia/ai-dynamo/sglang-runtime:1.0.1

# Inside the container — start frontend and worker
python3 -m dynamo.frontend --http-port 8000 --discovery-backend file > /dev/null 2>&1 &
python3 -m dynamo.sglang --model-path Qwen/Qwen3-0.6B --discovery-backend file &

# Send a request
curl -s localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "Qwen/Qwen3-0.6B",
  "messages": [{"role": "user", "content": "Hello!"}],
  "max_tokens": 100
}' | jq
```

Also available: tensorrtllm-runtime:1.0.1 and vllm-runtime:1.0.1.
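The frontend exposes an OpenAI-compatible `/v1/chat/completions` endpoint, so any HTTP client works, not just curl. A minimal Python sketch (the POST itself is commented out because it assumes a frontend already running on localhost:8000):

```python
import json
import urllib.request  # only needed if you uncomment the POST below

# Build the same chat-completions payload the curl example above sends.
def build_chat_request(model: str, prompt: str, max_tokens: int = 100) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Qwen/Qwen3-0.6B", "Hello!")

# Uncomment to POST against a running frontend:
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=json.dumps(payload).encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the API shape is OpenAI-compatible, existing OpenAI SDK clients pointed at `localhost:8000/v1` should also work unchanged.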

Option B: Install from PyPI

```bash
pip install "ai-dynamo[sglang]"   # or [vllm] or [trtllm]
```

Then start the frontend and a worker as shown above. See the full installation guide for system dependencies and backend-specific notes.

Option C: Kubernetes (recommended)

For production multi-node clusters, install the Dynamo Platform and deploy with a single manifest:

```yaml
# Zero-config deploy: specify model + SLA, Dynamo handles the rest
apiVersion: nvidia.com/v1beta1
kind: DynamoGraphDeploymentRequest
metadata:
  name: my-model
spec:
  model: Qwen/Qwen3-0.6B
  backend: vllm
  sla:
    ttft: 200.0   # ms
    itl: 20.0     # ms
  autoApply: true
```

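If you deploy many models, the manifest is small enough to template. A hypothetical stdlib-only sketch (the `render_dgdr` helper and its parameters are illustrative; the rendered fields mirror the manifest above):

```python
# Render a DynamoGraphDeploymentRequest manifest from a few parameters.
def render_dgdr(name: str, model: str, backend: str,
                ttft_ms: float, itl_ms: float) -> str:
    return f"""\
apiVersion: nvidia.com/v1beta1
kind: DynamoGraphDeploymentRequest
metadata:
  name: {name}
spec:
  model: {model}
  backend: {backend}
  sla:
    ttft: {ttft_ms}   # ms
    itl: {itl_ms}     # ms
  autoApply: true
"""

manifest = render_dgdr("my-model", "Qwen/Qwen3-0.6B", "vllm", 200.0, 20.0)
print(manifest)
# Write it to a file and apply with: kubectl apply -f my-model.yaml
```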
Pre-built recipes for common models:

| Model | Framework | Mode | Recipe |
|-------|-----------|------|--------|
| Llama-3-70B | vLLM | Aggregated | View |
| DeepSeek-R1 | SGLang | Disaggregated | View |
| Qwen3-32B-FP8 | TensorRT-LLM | Aggregated | View |

See recipes/ for the full list. Cloud-specific guides: AWS EKS · Google GKE.
