# MUCGPT

MUCGPT provides a web interface for a given large language model (LLM). It includes different modes of interaction and lets users create individual assistants.
## About the Project

MUCGPT is a system that enables users to interact with a large language model (LLM) through a web interface. This interaction is facilitated by an agentic system that can access several tools. To get a feel for it, take a look at our demo frontend.
Role and rights management is handled via an OpenID Connect provider.
Users can create their own assistants and share them within the organization. A personal assistant is a configuration of the MUCGPT agent, particularly the activated tools and system prompts.
See the open issues for a full list of proposed features (and known issues).
## ⚒️ Built With

- Backend
- Frontend
- Deployment
## 🏃‍♂️ Getting started

Prerequisites:

- Node.js 20+
- Git
- Python 3.12
- uv
- Docker

Installation:

- Install uv: https://docs.astral.sh/uv/getting-started/installation/
- Install Node.js 20+
## ⚙️ Configure the environment
Configuration is done via YAML configuration files (primary) with optional environment variable overrides.
Each service reads a `config.yaml` mounted into the container. Environment variables can override any YAML setting using a service-specific prefix and `__` (double underscore) as the nested delimiter.
| Service | YAML file (in stack/) | Env Prefix |
|---------|------------------------|------------|
| core-service | core.config.yaml | MUCGPT_CORE_ |
| assistant-service | assistant.config.yaml | MUCGPT_ASSISTANT_ |
| assistant-migrations | assistant.config.yaml | MUCGPT_ASSISTANT_ |
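The prefix-plus-`__` convention can be sketched in a few lines of Python (a hypothetical `collect_overrides` helper, not the services' actual loader):

```python
def collect_overrides(prefix: str, environ: dict) -> dict:
    """Collect variables starting with `prefix` into a nested dict,
    splitting the remainder of each key on '__' (double underscore)."""
    result: dict = {}
    for key, value in environ.items():
        if not key.startswith(prefix):
            continue
        path = key[len(prefix):].split("__")
        node = result
        for part in path[:-1]:
            node = node.setdefault(part, {})
        node[path[-1]] = value
    return result

# Illustrative environment for the assistant service's prefix:
env = {
    "MUCGPT_ASSISTANT_DB__HOST": "postgres",
    "MUCGPT_ASSISTANT_DB__PASSWORD": "secret",
    "MUCGPT_ASSISTANT_VERSION": "1.0.0",
    "UNRELATED": "ignored",
}
print(collect_overrides("MUCGPT_ASSISTANT_", env))
# {'DB': {'HOST': 'postgres', 'PASSWORD': 'secret'}, 'VERSION': '1.0.0'}
```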
### Initial Setup

```shell
cd stack
cp .env.example .env
cp core.config.yaml.example core.config.yaml
cp assistant.config.yaml.example assistant.config.yaml
```
### Models Configuration (YAML)

Configure your LLM models in `core.config.yaml`:
```yaml
MODELS:
  - type: "OPENAI"
    llm_name: "<your-llm-name>"
    endpoint: "<your-endpoint>"
    api_key: "<your-sk>"
    model_info:
      auto_enrich_from_model_info_endpoint: true
      max_output_tokens: 16384
      max_input_tokens: 128000
      description: "<description>"
      input_cost_per_token: 0.00000009
      output_cost_per_token: 0.00000036
      supports_function_calling: true
      supports_reasoning: false
      supports_vision: true
      litellm_provider: "<provider>"
      inference_location: "<region>"
      knowledge_cut_off: "2024-07-01"
```
See `mucgpt-core-service/config.yaml.example` and `mucgpt-assistant-service/config.yaml.example` for complete examples.
### Models Configuration (Environment Variable)

Alternatively, models can be configured via the `MUCGPT_CORE_MODELS` environment variable as a JSON array:
```shell
MUCGPT_CORE_MODELS='[
  {
    "type": "OPENAI",
    "llm_name": "<your-llm-name>",
    "endpoint": "<your-endpoint>",
    "api_key": "<your-sk>",
    "model_info": {
      "auto_enrich_from_model_info_endpoint": true,
      "max_output_tokens": "<number>",
      "max_input_tokens": "<number>",
      "description": "<description>"
    }
  }
]'
```
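Reading that variable back is a one-liner with the Python standard library (a sketch with placeholder values, not the service's actual settings loader):

```python
import json
import os

# Illustrative value; in a deployment this comes from the environment.
os.environ["MUCGPT_CORE_MODELS"] = (
    '[{"type": "OPENAI", "llm_name": "my-llm",'
    ' "endpoint": "https://example.invalid/v1", "api_key": "<secret>"}]'
)

# Fall back to an empty list when the variable is unset.
models = json.loads(os.environ.get("MUCGPT_CORE_MODELS", "[]"))
print(models[0]["llm_name"])  # my-llm
```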
### Configuration Priority

Settings are loaded in this order (highest priority wins):

1. Init values – constructor kwargs / `init_settings`
2. Environment variables – `MUCGPT_CORE_*` / `MUCGPT_ASSISTANT_*`, using `__` for nested sections
3. YAML config file – `config.yaml` mounted into each container
4. `.env` file – lowest priority; values here will not override anything set in `config.yaml` or environment variables
This means environment variables always override YAML values, which is useful for injecting secrets in CI/CD.
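That precedence can be modeled as a recursive deep merge applied from lowest to highest priority (a sketch with made-up values, not the services' actual implementation):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Return a copy of `base` with `override` merged on top;
    nested dicts are merged recursively, scalars are replaced."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Lowest to highest priority, mirroring the order above (illustrative values):
dotenv_values = {"DB": {"HOST": "localhost", "PORT": 5432}}
yaml_values = {"DB": {"HOST": "postgres"}}
env_values = {"DB": {"PASSWORD": "secret"}}

settings = deep_merge(deep_merge(dotenv_values, yaml_values), env_values)
print(settings)
# {'DB': {'HOST': 'postgres', 'PORT': 5432, 'PASSWORD': 'secret'}}
```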
### Environment Variable Override Examples

Any YAML setting can be overridden. Nested sections use `__` (double underscore):

```shell
# Top-level field
MUCGPT_CORE_VERSION=1.0.0                # → VERSION: "1.0.0"

# Nested field (DB section in assistant service)
MUCGPT_ASSISTANT_DB__HOST=postgres       # → DB: { HOST: "postgres" }
MUCGPT_ASSISTANT_DB__PASSWORD=secret     # → DB: { PASSWORD: "secret" }

# Nested field (Redis section in core service)
MUCGPT_CORE_REDIS__HOST=valkey           # → REDIS: { HOST: "valkey" }

# Nested field (Langfuse section)
MUCGPT_CORE_LANGFUSE__SECRET_KEY=sk-...  # → LANGFUSE: { SECRET_KEY: "sk-..." }
```
Top-level fields:

- `type`: The provider type (e.g., `OPENAI`).
- `llm_name`: The name or identifier of your LLM model.
- `endpoint`: The API endpoint URL for the model.
- `api_key`: The API key or secret for authentication.
`model_info` fields:

- `auto_enrich_from_model_info_endpoint`: If `true` (default), missing metadata is fetched from the model's info endpoint.
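The per-token cost fields allow a simple request-cost estimate. A quick check using the illustrative rates from the YAML example above:

```python
# Per-token rates from the example MODELS entry (USD, illustrative).
input_cost_per_token = 0.00000009
output_cost_per_token = 0.00000036

# A hypothetical request: 10k prompt tokens, 2k completion tokens.
input_tokens = 10_000
output_tokens = 2_000

cost = input_tokens * input_cost_per_token + output_tokens * output_cost_per_token
print(f"{cost:.6f}")  # 0.001620
```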