
MUCGPT

MUCGPT provides a web interface for a given large language model (LLM). It includes different modes of interaction and lets users create individual assistants.

Install / Use

/learn @it-at-m/Mucgpt
About this skill

Quality Score

0/100

Supported Platforms

Universal

README

<!-- PROJECT LOGO --> <div align="center"> <a href="#"> <img src="mucgpt-frontend/src/assets/mucgpt_pride.png" alt="Logo" height="200" style="display: block; margin: 0 auto; filter: invert(0)"> </a> </div> <br /> <div align="center"> <!-- Project / Meta -->

Project Information

Made with love by it@M Gitmoji GitHub license GitHub release version Demo-Frontend

<!-- Tech Stack -->

Technology Stack

Supported python versions Supported npm versions uv FastAPI React Postgres LangGraph

<!-- CI -->

Build Status

Assistant-service tests Core service tests

<!-- Container Images -->

Container Images

Frontend version Core service version Assistant service version Migrations service version

</div> <!-- ABOUT THE PROJECT -->

MUCGPT is a system that enables users to interact with a large language model (LLM) through a web interface. This interaction is facilitated by an agentic system that can access several tools. To get a feel for it, take a look at our demo frontend.

Roles and rights management is facilitated by access to an OpenID Connect provider.

Users can create their own assistants and share them within the organization. A personal assistant is a configuration of the MUCGPT agent, particularly the activated tools and system prompts.

See the open issues for a full list of proposed features (and known issues).

Table of contents

⚒️ Built With

Backend

Frontend

Deployment

🏃‍♂️‍➡️ Getting started

⚙️ Configure the environment

Configuration is done via YAML configuration files (primary) with optional environment variable overrides.

Each service reads a config.yaml mounted into the container. Environment variables can override any YAML setting using a service-specific prefix and __ (double underscore) as the nested delimiter.

| Service | YAML file (in stack/) | Env Prefix |
|---------|------------------------|------------|
| core-service | core.config.yaml | MUCGPT_CORE_ |
| assistant-service | assistant.config.yaml | MUCGPT_ASSISTANT_ |
| assistant-migrations | assistant.config.yaml | MUCGPT_ASSISTANT_ |

Initial Setup

cd stack
cp .env.example .env
cp core.config.yaml.example core.config.yaml
cp assistant.config.yaml.example assistant.config.yaml

Models Configuration (YAML)

Configure your LLM models in core.config.yaml:

MODELS:
  - type: "OPENAI"
    llm_name: "<your-llm-name>"
    endpoint: "<your-endpoint>"
    api_key: "<your-sk>"
    model_info:
      auto_enrich_from_model_info_endpoint: true
      max_output_tokens: 16384
      max_input_tokens: 128000
      description: "<description>"
      input_cost_per_token: 0.00000009
      output_cost_per_token: 0.00000036
      supports_function_calling: true
      supports_reasoning: false
      supports_vision: true
      litellm_provider: "<provider>"
      inference_location: "<region>"
      knowledge_cut_off: "2024-07-01"

See mucgpt-core-service/config.yaml.example and mucgpt-assistant-service/config.yaml.example for complete examples.
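As a rough illustration, a MODELS entry like the one above can be sanity-checked before the service starts. The following is a hypothetical sketch, not MUCGPT's actual validation logic; the field names are taken from the example, and the entry values are placeholders.

```python
# Hypothetical sketch: validating one MODELS entry from core.config.yaml.
# Field names mirror the example above; the checks are illustrative only.

REQUIRED_FIELDS = {"type", "llm_name", "endpoint", "api_key"}

def validate_model_entry(entry: dict) -> list[str]:
    """Return a list of problems; an empty list means the entry looks usable."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - entry.keys())]
    info = entry.get("model_info", {})
    if "max_input_tokens" in info and "max_output_tokens" in info:
        if info["max_output_tokens"] > info["max_input_tokens"]:
            problems.append("max_output_tokens exceeds max_input_tokens")
    return problems

entry = {
    "type": "OPENAI",
    "llm_name": "demo-model",              # placeholder name
    "endpoint": "https://example.invalid/v1",
    "api_key": "sk-placeholder",
    "model_info": {"max_output_tokens": 16384, "max_input_tokens": 128000},
}
print(validate_model_entry(entry))  # → []
```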

Models Configuration (Environment Variable)

Alternatively, models can be configured via the MUCGPT_CORE_MODELS environment variable as a JSON array:

MUCGPT_CORE_MODELS='[
  {
    "type": "OPENAI",
    "llm_name": "<your-llm-name>",
    "endpoint": "<your-endpoint>",
    "api_key": "<your-sk>",
    "model_info": {
      "auto_enrich_from_model_info_endpoint": true,
      "max_output_tokens": "<number>",
      "max_input_tokens": "<number>",
      "description": "<description>"
    }
  }
]'
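Since the variable holds a plain JSON array, a consuming service can read it with a standard JSON parser. This is a minimal sketch with placeholder values, not the services' actual loading code.

```python
import json
import os

# Sketch: reading the MUCGPT_CORE_MODELS JSON array from the environment.
# The value set here is a placeholder matching the shape shown above.
os.environ["MUCGPT_CORE_MODELS"] = json.dumps([
    {"type": "OPENAI", "llm_name": "demo-model",
     "endpoint": "https://example.invalid/v1", "api_key": "sk-placeholder"}
])

models = json.loads(os.environ["MUCGPT_CORE_MODELS"])
print(models[0]["llm_name"])  # → demo-model
```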

Configuration Priority

Settings are loaded in this order (highest priority wins):

  1. Init values – constructor kwargs / init_settings
  2. Environment variables – MUCGPT_CORE_* / MUCGPT_ASSISTANT_*, using __ for nested sections
  3. YAML config file – config.yaml mounted into each container
  4. .env file – lowest priority; values here will not override anything set in config.yaml or environment variables

This means environment variables always override YAML values, which is useful for injecting secrets in CI/CD.
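The layering above can be sketched as a simple ordered merge, where later (higher-priority) layers overwrite earlier ones. This is illustrative only, not MUCGPT's settings loader, and the keys are made-up examples.

```python
# Illustrative sketch of the priority order above: layers are applied
# lowest-priority first, so later layers win on conflicting keys.
def merge_layers(*layers: dict) -> dict:
    merged: dict = {}
    for layer in layers:  # .env first, init kwargs last
        merged.update(layer)
    return merged

dotenv      = {"VERSION": "0.0.1", "LOG_LEVEL": "DEBUG"}  # .env (lowest)
yaml_cfg    = {"VERSION": "1.0.0"}                        # config.yaml
env_vars    = {"LOG_LEVEL": "INFO"}                       # MUCGPT_CORE_* vars
init_kwargs = {}                                          # init values (highest)

print(merge_layers(dotenv, yaml_cfg, env_vars, init_kwargs))
# → {'VERSION': '1.0.0', 'LOG_LEVEL': 'INFO'}
```

Here the YAML VERSION beats the .env one, and the environment's LOG_LEVEL beats both, matching the stated order.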

Environment Variable Override Examples

Any YAML setting can be overridden. Nested sections use __ (double underscore):

# Top-level field
MUCGPT_CORE_VERSION=1.0.0              # → VERSION: "1.0.0"

# Nested field (DB section in assistant service)
MUCGPT_ASSISTANT_DB__HOST=postgres      # → DB: { HOST: "postgres" }
MUCGPT_ASSISTANT_DB__PASSWORD=secret    # → DB: { PASSWORD: "secret" }

# Nested field (Redis section in core service)
MUCGPT_CORE_REDIS__HOST=valkey          # → REDIS: { HOST: "valkey" }

# Nested field (Langfuse section)
MUCGPT_CORE_LANGFUSE__SECRET_KEY=sk-... # → LANGFUSE: { SECRET_KEY: "sk-..." }
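The prefix-and-double-underscore convention in these examples can be sketched as follows. This is a hypothetical illustration of the mapping, not the services' actual settings code.

```python
# Hypothetical sketch of the MUCGPT_CORE_ / "__" convention shown above:
# strip the service prefix, then split on "__" to build nested sections.
def env_overrides(prefix: str, environ: dict) -> dict:
    result: dict = {}
    for key, value in environ.items():
        if not key.startswith(prefix):
            continue
        parts = key[len(prefix):].split("__")
        node = result
        for part in parts[:-1]:
            node = node.setdefault(part, {})  # descend into nested section
        node[parts[-1]] = value
    return result

env = {"MUCGPT_CORE_VERSION": "1.0.0", "MUCGPT_CORE_REDIS__HOST": "valkey"}
print(env_overrides("MUCGPT_CORE_", env))
# → {'VERSION': '1.0.0', 'REDIS': {'HOST': 'valkey'}}
```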

Top-level fields:

  • type: The provider type (e.g., OPENAI).
  • llm_name: The name or identifier of your LLM model.
  • endpoint: The API endpoint URL for the model.
  • api_key: The API key or secret for authentication.

model_info fields:

  • auto_enrich_from_model_info_endpoint: If true (default), missing metadata is fetched automatically from the model's info endpoint.

GitHub Stars: 44
Category: Development
Updated: 3d ago
Forks: 3

Languages

TypeScript

Security Score

95/100

Audited on Mar 30, 2026

No findings