
LocalAGI

LocalAGI is a powerful, self-hostable AI Agent platform designed for maximum privacy and flexibility. It is a complete drop-in replacement for OpenAI's Responses API, with advanced agentic capabilities. No clouds. Local AI that works on consumer-grade hardware (CPU and GPU).


<p align="center"> <img src="./webui/react-ui/public/logo_1.png" alt="LocalAGI Logo" width="220"/> </p> <h3 align="center"><em>Your AI. Your Hardware. Your Rules</em></h3> <div align="center">


Try on Telegram

</div>

Create customizable AI assistants, automations, chat bots and agents that run 100% locally. No need for agentic Python libraries or cloud service keys, just bring your GPU (or even just CPU) and a web browser.

LocalAGI is a powerful, self-hostable AI Agent platform that lets you design AI automations without writing code. Create agents with a couple of clicks, connect via MCP, and use built-in Skills (manage skills in the Web UI and enable them per agent). Every agent exposes a complete drop-in replacement for OpenAI's Responses API with advanced agentic capabilities. No clouds. No data leaks. Just pure local AI that works on consumer-grade hardware (CPU and GPU). Skills follow the skillserver format and can be created, imported, or synced from git.

🛡️ Take Back Your Privacy

Are you tired of AI wrappers calling out to cloud APIs, risking your privacy? So were we.

LocalAGI ensures your data stays exactly where you want it—on your hardware. No API keys, no cloud subscriptions, no compromise.

🌟 Key Features

  • 🎛 No-Code Agents: Configure multiple agents easily via the Web UI.
  • 🖥 Web-Based Interface: Simple and intuitive agent management.
  • 🤖 Advanced Agent Teaming: Instantly create cooperative agent teams from a single prompt.
  • 📡 Connectors: Built-in integrations with Discord, Slack, Telegram, GitHub Issues, and IRC.
  • 🛠 Comprehensive REST API: Seamless integration into your workflows. Every agent created will support OpenAI Responses API out of the box.
  • 📚 Short & Long-Term Memory: Built-in knowledge base (RAG) for collections, file uploads, and semantic search. Manage collections in the Web UI under Knowledge base; agents with "Knowledge base" enabled use it automatically (implementation uses LocalRecall libraries).
  • 🧠 Planning & Reasoning: Agents intelligently plan, reason, and adapt.
  • 🔄 Periodic Tasks: Schedule tasks with cron-like syntax.
  • 💾 Memory Management: Control memory usage with options for long-term and summary memory.
  • 🖼 Multimodal Support: Ready for vision, text, and more.
  • 🔧 Extensible Custom Actions: Easily script dynamic agent behaviors in Go (interpreted, no compilation!).
  • 📚 Built-in Skills: Manage reusable agent skills in the Web UI (create, edit, import/export, git sync). Enable "Skills" per agent to inject skill tools and the skill list into the agent.
  • 🛠 Fully Customizable Models: Use your own models or integrate seamlessly with LocalAI.
  • 📊 Observability: Monitor agent status and view detailed observable updates in real-time.
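
Since every agent speaks the OpenAI Responses API, you can talk to one with nothing but a standard library HTTP client. A minimal Python sketch, assuming the default port 8080, the standard `/v1/responses` path, and the agent name in the `model` field (these details are inferred from the drop-in compatibility claim, not verified against the codebase):

```python
import json
import urllib.request

# Base URL of a local LocalAGI instance (default web/API port).
BASE_URL = "http://localhost:8080"

def build_payload(agent_name, prompt):
    # Minimal OpenAI Responses API request body; the agent name
    # plays the role of the "model" field (assumption).
    return {"model": agent_name, "input": prompt}

def ask_agent(agent_name, prompt):
    # POST the request and decode the JSON response.
    req = urllib.request.Request(
        f"{BASE_URL}/v1/responses",  # standard Responses API path (assumed)
        data=json.dumps(build_payload(agent_name, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The same request can of course be issued with `curl` or any OpenAI-compatible SDK pointed at your local instance.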

🛠️ Quickstart

# Clone the repository
git clone https://github.com/mudler/LocalAGI
cd LocalAGI

# CPU setup (default)
docker compose up

# NVIDIA GPU setup
docker compose -f docker-compose.nvidia.yaml up

# Intel GPU setup (for Intel Arc and integrated GPUs)
docker compose -f docker-compose.intel.yaml up

# AMD GPU setup
docker compose -f docker-compose.amd.yaml up

# Start with a specific model (browse available models at models.localai.io, or see localai.io to use any Hugging Face model)
MODEL_NAME=gemma-3-12b-it docker compose up

# NVIDIA GPU setup with custom multimodal and image models
MODEL_NAME=gemma-3-12b-it \
MULTIMODAL_MODEL=moondream2-20250414 \
IMAGE_MODEL=flux.1-dev-ggml \
docker compose -f docker-compose.nvidia.yaml up

Now you can access and manage your agents at http://localhost:8080
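
On a cold start the stack may still be pulling models, so a small readiness poll can save some confusion. A hypothetical helper (the URL and retry schedule are illustrative, not part of LocalAGI):

```python
import time
import urllib.error
import urllib.request

def backoff_schedule(retries, base=1.0, cap=8.0):
    # Exponential backoff delays, capped: 1, 2, 4, 8, 8, ... seconds.
    return [min(base * (2 ** i), cap) for i in range(retries)]

def wait_for_ui(url="http://localhost:8080", retries=5):
    # Poll the Web UI until it answers, sleeping between attempts.
    for delay in backoff_schedule(retries):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                return resp.status == 200
        except (urllib.error.URLError, OSError):
            time.sleep(delay)
    return False
```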

Still having issues? See this YouTube video: https://youtu.be/HtVwIxW3ePg

Videos

  • Creating a basic agent
  • Agent Observability
  • Filters and Triggers
  • RAG and Matrix

📚🆕 Local Stack Family

🆕 LocalAGI is now part of a comprehensive suite of AI tools designed to work together:

<table> <tr> <td width="50%" valign="top"> <a href="https://github.com/mudler/LocalAI"> <img src="https://raw.githubusercontent.com/mudler/LocalAI/refs/heads/master/core/http/static/logo_horizontal.png" width="300" alt="LocalAI Logo"> </a> </td> <td width="50%" valign="top"> <h3><a href="https://github.com/mudler/LocalAI">LocalAI</a></h3> <p>LocalAI is the free, Open Source OpenAI alternative. LocalAI acts as a drop-in replacement REST API that's compatible with the OpenAI API specification for local AI inferencing. No GPU required.</p> </td> </tr> <tr> <td width="50%" valign="top"> <a href="https://github.com/mudler/LocalRecall"> <img src="https://raw.githubusercontent.com/mudler/LocalRecall/refs/heads/main/static/localrecall_horizontal.png" width="300" alt="LocalRecall Logo"> </a> </td> <td width="50%" valign="top"> <h3><a href="https://github.com/mudler/LocalRecall">LocalRecall</a></h3> <p>A REST-ful API and knowledge base management system. LocalAGI embeds this functionality: the Web UI includes a <strong>Knowledge base</strong> section and the same collections API, so you no longer need to run LocalRecall separately.</p> </td> </tr> </table>

🖥️ Hardware Configurations

LocalAGI supports multiple hardware configurations through Docker Compose profiles:

CPU (Default)

  • No special configuration needed
  • Runs on any system with Docker
  • Best for testing and development
  • Supports text models only

NVIDIA GPU

  • Requires NVIDIA GPU and drivers
  • Uses CUDA for acceleration
  • Best for high-performance inference
  • Supports text, multimodal, and image generation models
  • Run with: docker compose -f docker-compose.nvidia.yaml up
  • Default models:
    • Text: gemma-3-4b-it-qat
    • Multimodal: moondream2-20250414
    • Image: sd-1.5-ggml
  • Environment variables:
    • MODEL_NAME: Text model to use
    • MULTIMODAL_MODEL: Multimodal model to use
    • IMAGE_MODEL: Image generation model to use
    • LOCALAI_SINGLE_ACTIVE_BACKEND: Set to true to enable single active backend mode

Intel GPU

  • Supports Intel Arc and integrated GPUs
  • Uses SYCL for acceleration
  • Best for Intel-based systems
  • Supports text, multimodal, and image generation models
  • Run with: docker compose -f docker-compose.intel.yaml up
  • Default models:
    • Text: gemma-3-4b-it-qat
    • Multimodal: moondream2-20250414
    • Image: sd-1.5-ggml
  • Environment variables:
    • MODEL_NAME: Text model to use
    • MULTIMODAL_MODEL: Multimodal model to use
    • IMAGE_MODEL: Image generation model to use
    • LOCALAI_SINGLE_ACTIVE_BACKEND: Set to true to enable single active backend mode

Customize models

You can customize the models used by LocalAGI by setting environment variables when running docker compose. For example:

# CPU with custom model
MODEL_NAME=gemma-3-12b-it docker compose up

# NVIDIA GPU with custom models
MODEL_NAME=gemma-3-12b-it \
MULTIMODAL_MODEL=moondream2-20250414 \
IMAGE_MODEL=flux.1-dev-ggml \
docker compose -f docker-compose.nvidia.yaml up

# Intel GPU with custom models
MODEL_NAME=gemma-3-12b-it \
MULTIMODAL_MODEL=moondream2-20250414 \
IMAGE_MODEL=sd-1.5-ggml \
docker compose -f docker-compose.intel.yaml up

# With custom actions directory
LOCALAGI_CUSTOM_ACTIONS_DIR=/app/custom-actions docker compose up

If no models are specified, the following defaults are used:

  • Text model: gemma-3-4b-it-qat
  • Multimodal model: moondream2-20250414
  • Image model: sd-1.5-ggml
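
The fallback behavior can be mirrored in a few lines; a sketch assuming the compose files simply read these variables and substitute the defaults when they are unset:

```python
import os

# Defaults used when no model env vars are set (mirrors the defaults above).
DEFAULTS = {
    "MODEL_NAME": "gemma-3-4b-it-qat",
    "MULTIMODAL_MODEL": "moondream2-20250414",
    "IMAGE_MODEL": "sd-1.5-ggml",
}

def resolve_models(env=os.environ):
    # Each variable falls back to its default when unset or empty.
    return {k: env.get(k) or v for k, v in DEFAULTS.items()}
```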

Good (relatively small) models that have been tested are:

  • qwen_qwq-32b (best at coordinating agents)
  • gemma-3-12b-it
  • gemma-3-27b-it

🏆 Why Choose LocalAGI?

  • ✓ Ultimate Privacy: No data ever leaves your hardware.
  • ✓ Flexible Model Integration: Supports GGUF, GGML, and more thanks to LocalAI.
  • ✓ Developer-Friendly: Rich APIs and intuitive interfaces.
  • ✓ Effortless Setup: Simple Docker compose setups and pre-built binaries.
  • ✓ Feature-Rich: From planning and multimodal capabilities to Slack connectors, MCP support, and built-in Skills, LocalAGI has it all.

🌟 Screenshots

Powerful Web UI

  • Web UI Dashboard
  • Web UI Agent Settings
  • Web UI Create Group
  • Web UI Agent Observability

Connectors Ready-to-Go

<p align="center"> <img src="https://github.com/user-attachments/assets/4171072f-e4bf-4485-982b-55d55086f8fc" alt="Telegram" width="60"/> <img src="https://github.com/user-attachments/assets/9235da84-0187-4f26-8482-32dcc55702ef" alt="Discord" width="220"/> </p>
