# Manifest

Smart LLM routing for OpenClaw. Cut costs by up to 70% 🦞🦚
## Install / Use

```
/learn @mnfst/ManifestREADME
```
## What is Manifest?
Manifest is a smart model router for OpenClaw. It sits between your agent and your LLM providers, scores each request, and routes it to the cheapest model that can handle it. Simple questions go to fast, cheap models. Hard problems go to expensive ones. You save money without thinking about it.
- Route requests to the right model: cut costs by up to 70%
- Automatic fallbacks: if a model fails, the next one picks up
- Set limits: never exceed your budget
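Conceptually, an automatic fallback works like the minimal sketch below. This is an illustration, not Manifest's actual code: the `call_model` callable and the model names are hypothetical placeholders.

```python
def call_with_fallback(prompt, models, call_model):
    """Try each model in order; return the first successful response.

    `call_model(model, prompt)` is a placeholder for whatever function
    actually sends the request to a provider.
    """
    last_error = None
    for model in models:
        try:
            return call_model(model, prompt)
        except Exception as exc:  # any failure moves on to the next model
            last_error = exc
    raise RuntimeError("all models in the fallback chain failed") from last_error
```

The key design point is ordering: list models cheapest-first, so the expensive model is only paid for when the cheap one actually fails.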
## Quick start

### Cloud version
Go to app.manifest.build and follow the guide.
### Local version

```sh
openclaw plugins install manifest
openclaw gateway restart
```
The dashboard opens at http://127.0.0.1:2099. The plugin starts an embedded server, runs the dashboard locally, and registers itself as a provider automatically. No account or API key needed.
## Cloud vs local
Pick the cloud version for quick setup and multi-device access. Pick the local version to keep all your data on your machine or to use local models like Ollama.
Not sure which one to choose? Start with cloud.
## How it works
Every request to `manifest/auto` goes through a 23-dimension scoring algorithm (it runs in under 2 ms). The scorer picks a tier (simple, standard, complex, or reasoning) and routes to the best model in that tier from your connected providers.
All routing data (tokens, costs, model, duration) is recorded automatically. You see it in the dashboard. No extra setup.
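The score-then-route loop can be sketched roughly as follows. This is a toy stand-in, not the shipped algorithm: the real scorer uses 23 dimensions, while this sketch scores on prompt length and a code marker alone, and the thresholds and the price table are invented for illustration.

```python
TIERS = ["simple", "standard", "complex", "reasoning"]

def score(prompt: str) -> float:
    """Toy stand-in for the 23-dimension scorer: longer, code-heavy
    prompts score higher (0.0 to 1.0)."""
    s = min(len(prompt) / 500, 1.0)
    if "```" in prompt or "def " in prompt:
        s += 0.3
    return min(s, 1.0)

def pick_tier(s: float) -> str:
    """Map a score onto a tier; the cutoffs here are illustrative."""
    if s < 0.25:
        return "simple"
    if s < 0.50:
        return "standard"
    if s < 0.75:
        return "complex"
    return "reasoning"

def route(prompt, models_by_tier):
    """models_by_tier maps tier -> [(model_name, cost_per_1m_tokens)].
    Within the chosen tier, the cheapest model wins."""
    tier = pick_tier(score(prompt))
    name, _cost = min(models_by_tier[tier], key=lambda m: m[1])
    return tier, name
```

A short prompt like "What is 2+2?" scores near zero and lands in the simple tier, so it goes to the cheapest model you have connected; a long prompt full of code climbs into the complex or reasoning tiers.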
## Manifest vs OpenRouter
|              | Manifest                                     | OpenRouter                                          |
| ------------ | -------------------------------------------- | --------------------------------------------------- |
| Architecture | Local. Your requests, your providers         | Cloud proxy. All traffic goes through their servers |
| Cost         | Free                                         | 5% fee on every API call                            |
| Source code  | MIT, fully open                              | Proprietary                                         |
| Data privacy | Metadata only (cloud) or fully local         | Prompts and responses pass through a third party    |
| Transparency | Open scoring. You see why a model was chosen | No visibility into routing decisions                |
## Supported providers
Works with 300+ models across these providers:
| Provider | Models |
| ------------------------------------------------------------------------------ | -------------------------------------------------------------------- |
| OpenAI | gpt-5.3, gpt-4.1, o3, o4-mini + 54 more |
| Anthropic | claude-opus-4-6, claude-sonnet-4.5, claude-haiku-4.5 + 14 more |
| Google Gemini | gemini-2.5-pro, gemini-2.5-flash, gemini-3-pro + 19 more |
| DeepSeek | deepseek-chat, deepseek-reasoner + 11 more |
| xAI | grok-4, grok-3, grok-3-mini + 8 more |
| Mistral AI | mistral-large, codestral, devstral + 26 more |
| Qwen (Alibaba) | qwen3-235b, qwen3-coder, qwq-32b + 42 more |
| MiniMax | minimax-m2.5, minimax-m1, minimax-m2 + 5 more |
| Kimi (Moonshot) | kimi-k2, kimi-k2.5 + 3 more |
| Amazon Nova | nova-pro, nova-lite, nova-micro + 5 more |
| Z.ai (Zhipu) | glm-5, glm-4.7, glm-4.5 + 5 more |
| OpenRouter | 300+ models from all providers |
| Ollama | Run any model locally (Llama, Gemma, Mistral, ...) |
| Custom providers | Any provider with an OpenAI-compatible API endpoint |
## Contributing
Manifest is open source under the MIT license. See CONTRIBUTING.md for dev setup, architecture, and workflow. Join the conversation on Discord.
