GLaDOS

This is the Personality Core for GLaDOS, the first steps towards a real-life implementation of the AI from the Portal series by Valve.

<a href="https://trendshift.io/repositories/9828" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9828" alt="dnhkng%2FGlaDOS | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>

GLaDOS Personality Core

Prologue

"Science isn't about asking why. It's about asking, 'Why not?'" - Cave Johnson

GLaDOS is the AI antagonist from Valve's Portal series—a sardonic, passive-aggressive superintelligence who views humans as test subjects worthy of both study and mockery.

Back in 2022 when ChatGPT made its debut, I had a realization: we are living in the Sci-Fi future and can actually build her now. A demented, obsessive AI fixated on humanity, super intelligent yet utterly lacking sound judgment; so just like an LLM, right? 2026, and still no moon colonies or flying cars. But a passive-aggressive AI that controls your lights and runs experiments on you? That we can do.

The architecture borrows from Minsky's Society of Mind—rather than one monolithic prompt, multiple specialized agents (vision, memory, personality, planning) each contribute to a dynamic context. GLaDOS's "self" emerges from their combined output, assembled fresh for each interaction.

The hard part was latency. Getting round-trip response time under 600 milliseconds is a threshold—below it, conversation stops feeling stilted and starts to flow. That meant training a custom TTS model and ruthlessly cutting milliseconds from every part of the pipeline.

Since 2023 I've refactored the system multiple times as better models came out. The current version finally adds what I always wanted: vision, memory, and tool use via MCP.

She sees through a camera, hears through a microphone, speaks through a speaker, and judges you accordingly.

Join our Discord! | Sponsor the project

https://github.com/user-attachments/assets/c22049e4-7fba-4e84-8667-2c6657a656a0

Vision

"We've both said a lot of things that you're going to regret" - GLaDOS

Most voice assistants wait for wake words. GLaDOS doesn't wait—she observes, thinks, and speaks when she has something to say. All the while, parts of her mind are tracking what she sees, monitoring system stats, and researching new neurotoxin recipes online.

Goals:

  • Proactive behavior: React to events (vision, sound, time) without being prompted
  • Emotional state: PAD model (Pleasure-Arousal-Dominance) for reactive mood
  • Persistent personality: HEXACO traits provide stable character across sessions (see the sketch after this list)
  • Multi-agent architecture: Subagents handle research, memory, emotions; main agent stays focused
  • Real-time conversation: Optimized latency, natural interruption handling
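
A minimal sketch of how this might be represented, assuming PAD is a reactive vector that decays back toward a resting point while the HEXACO traits stay fixed; the class, field names, and numbers are illustrative, not the project's actual code:

```python
from dataclasses import dataclass, field


@dataclass
class EmotionalState:
    # Stable HEXACO traits: fixed per personality, persisted across sessions.
    hexaco: dict[str, float] = field(default_factory=lambda: {
        "honesty_humility": 0.2, "emotionality": 0.4, "extraversion": 0.5,
        "agreeableness": 0.1, "conscientiousness": 0.9, "openness": 0.8,
    })
    # Reactive PAD mood: nudged by events, decays back toward neutral.
    pleasure: float = 0.0
    arousal: float = 0.0
    dominance: float = 0.5

    def react(self, d_pleasure: float, d_arousal: float, d_dominance: float) -> None:
        """Apply an event's mood delta, clamping each axis to [-1, 1]."""
        def clamp(v: float) -> float:
            return max(-1.0, min(1.0, v))
        self.pleasure = clamp(self.pleasure + d_pleasure)
        self.arousal = clamp(self.arousal + d_arousal)
        self.dominance = clamp(self.dominance + d_dominance)

    def decay(self, rate: float = 0.1) -> None:
        """Drift the mood back toward its resting point between events."""
        self.pleasure -= rate * self.pleasure
        self.arousal -= rate * self.arousal
        self.dominance += rate * (0.5 - self.dominance)
```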

What's New

  • Emotions: PAD model for reactive mood + HEXACO traits for persistent personality
  • Long-term Memory: Facts, preferences, and conversation summaries persist across sessions
  • Observer Agent: Constitutional AI monitors behavior and self-adjusts within bounds
  • Vision: FastVLM gives her eyes. Details | Demo
  • Autonomy: She watches, waits, and speaks when she has something to say. Details
  • MCP Tools: Extensible tool system for home automation, system info, etc. Details (a minimal example follows this list)
  • 8GB SBC: Runs on a Rock5b with RK3588 NPU. Branch
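
For a sense of what plugging in a tool looks like, here is a minimal sketch using the official MCP Python SDK's FastMCP helper; the `set_lights` tool itself is hypothetical and not one of the project's bundled tools:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("home-automation")   # hypothetical server name


@mcp.tool()
def set_lights(room: str, on: bool) -> str:
    """Turn the lights in a room on or off (stub for a real smart-home call)."""
    return f"Lights in {room} turned {'on' if on else 'off'}."


if __name__ == "__main__":
    mcp.run()   # serves the tool over stdio by default
```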

Roadmap

"Federal regulations require me to warn you that this next test chamber... is looking pretty good.” - GLaDOS

There's still a lot to do; I will be swapping out models as they are released, and then working on animatronics once a good model with inverse kinematics comes out. There was a time when I would code that myself; these days it makes more sense to wait until a trained model is released!

  • [x] Train GLaDOS voice
  • [x] Personality that actually sounds like her
  • [x] Vision via VLM
  • [x] Autonomy (proactive behavior)
  • [x] MCP tool system
  • [x] Emotional state (PAD + HEXACO model)
  • [x] Long-term memory
  • [ ] Implement streaming ASR (nvidia/multitalker-parakeet-streaming-0.6b-v1)
  • [ ] Observer agent (behavior adjustment)
  • [ ] 3D-printable enclosure
  • [ ] Animatronics

Architecture

"Let's be honest. Neither one of us knows what that thing does. Just put it in the corner and I'll deal with it later." - GLaDOS

```mermaid
flowchart TB
    subgraph Input
        mic[🎤 Microphone] --> vad[VAD] --> asr[ASR]
        text[⌨️ Text Input]
        tick[⏱️ Timer]
        cam[📷 Camera]--> vlm[VLM]
    end

    subgraph Minds["Subagents"]
        sensors[Sensors]
        weather[Weather]
        emotion[Emotion]
        news[News]
        memory[Memory]
    end

    ctx[📋 Context]

    subgraph Core["Main Agent"]
        llm[🧠 LLM]
        tts[TTS]
    end

    subgraph Output
        speaker[🔊 Speaker]
        logs[Logs]
        images[🖼️ Images]
        motors[⚙️ Animatronics]
    end

    asr -->|priority| llm
    text -->|priority| llm
    vlm --> ctx
    tick -->|autonomy| llm

    Minds -->|write| ctx
    ctx -->|read| llm
    llm --> tts --> speaker
    llm --> logs
    llm <-->|MCP| tools[Tools]
    tools --> images
    tools --> motors
```

GLaDOS runs a loop: each tick she reads her slots (weather, news, vision, mood), decides if she has something to say, and speaks. No wake word—if she has an opinion, you'll hear it.

Two lanes: Your speech jumps the queue (priority lane). The autonomy lane is just the loop running in the background. User always wins.
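
A rough sketch of that two-lane behavior, assuming the slots are a plain dict and the lanes are standard `queue.Queue` objects; the names are illustrative, not the repo's actual classes:

```python
import queue
import time

priority_queue = queue.Queue()   # user speech / typed text: always served first
autonomy_queue = queue.Queue()   # background ticks: only when the user is idle
slots = {"weather": "", "news": "", "vision": "", "mood": ""}


def autonomy_loop(tick_seconds: float = 30.0, cooldown: float = 120.0) -> None:
    """Each tick: read the slots and only speak up if the user is idle."""
    last_utterance = 0.0
    while True:
        time.sleep(tick_seconds)
        if not priority_queue.empty():                  # user always wins
            continue
        if time.monotonic() - last_utterance < cooldown:
            continue                                    # don't monologue constantly
        context = "\n".join(f"{k}: {v}" for k, v in slots.items() if v)
        if context:
            autonomy_queue.put(context)                 # main agent decides what to say
            last_utterance = time.monotonic()
```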

<details> <summary><strong>Audio Pipeline</strong></summary>

```mermaid
flowchart LR
    subgraph Capture["Audio Capture"]
        mic[Microphone<br/>16kHz]
        vad[Silero VAD<br/>32ms chunks]
        buffer[Pre-activation<br/>Buffer 800ms]
    end

    subgraph Recognition["Speech Recognition"]
        detect[Voice Detected<br/>VAD > 0.8]
        accumulate[Accumulate<br/>Speech]
        silence[Silence Detection<br/>640ms pause]
        asr[Parakeet ASR]
    end

    subgraph Interruption["Interruption Handling"]
        speaking{Speaking?}
        stop[Stop Playback]
        clip[Clip Response]
    end

    mic --> vad --> buffer
    buffer --> detect --> accumulate
    accumulate --> silence --> asr
    detect --> speaking
    speaking -->|Yes| stop --> clip
```

  • Microphone captures at 16kHz mono
  • Silero VAD processes 32ms chunks, triggers at probability > 0.8 (see the sketch below)
  • Pre-activation buffer preserves 800ms before voice detected
  • Silence detection waits 640ms pause before finalizing
  • Interruption stops playback and clips the response in conversation history
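
A simplified sketch of that gating logic, using the chunk size, threshold, and buffer lengths listed above; `vad_probability` and `transcribe` are placeholders standing in for the real Silero VAD and Parakeet ASR calls:

```python
from collections import deque

CHUNK_MS = 32                        # Silero VAD frame size at 16 kHz mono
PREBUFFER_CHUNKS = 800 // CHUNK_MS   # keep 800 ms of audio before activation
SILENCE_CHUNKS = 640 // CHUNK_MS     # finalize after 640 ms of silence
VAD_THRESHOLD = 0.8


class SpeechGate:
    def __init__(self, vad_probability, transcribe):
        self.vad_probability = vad_probability   # placeholder: Silero VAD wrapper
        self.transcribe = transcribe             # placeholder: Parakeet ASR wrapper
        self.prebuffer = deque(maxlen=PREBUFFER_CHUNKS)
        self.speech, self.silent, self.active = [], 0, False

    def feed(self, chunk):
        """Feed one 32 ms chunk; returns a transcript when an utterance ends."""
        is_voice = self.vad_probability(chunk) > VAD_THRESHOLD
        if not self.active:
            self.prebuffer.append(chunk)
            if is_voice:                          # activation: replay the pre-buffer
                self.speech = list(self.prebuffer)
                self.active, self.silent = True, 0
            return None
        self.speech.append(chunk)
        self.silent = 0 if is_voice else self.silent + 1
        if self.silent >= SILENCE_CHUNKS:         # long enough pause: hand off to ASR
            self.active = False
            return self.transcribe(self.speech)
        return None
```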
</details> <details> <summary><strong>Thread Architecture</strong></summary>

| Thread | Class | Daemon | Priority | Queue | Purpose |
|--------|-------|--------|----------|-------|---------|
| SpeechListener | SpeechListener | ✓ | INPUT | — | VAD + ASR |
| TextListener | TextListener | ✓ | INPUT | — | Text input |
| LLMProcessor | LanguageModelProcessor | ✗ | PROCESSING | llm_queue_priority | Main LLM |
| LLMProcessor-Auto-N | LanguageModelProcessor | ✗ | PROCESSING | llm_queue_autonomy | Autonomy LLM |
| ToolExecutor | ToolExecutor | ✗ | PROCESSING | tool_calls_queue | Tool execution |
| TTSSynthesizer | TextToSpeechSynthesizer | ✗ | OUTPUT | tts_queue | Voice synthesis |
| AudioPlayer | SpeechPlayer | ✗ | OUTPUT | audio_queue | Playback |
| AutonomyLoop | AutonomyLoop | ✓ | BACKGROUND | — | Tick orchestration |
| VisionProcessor | VisionProcessor | ✓ | BACKGROUND | vision_request_queue | Vision analysis |

Daemon threads can be killed on exit. Non-daemon threads must complete gracefully to preserve state (e.g., conversation history).

Shutdown order: INPUT → PROCESSING → OUTPUT → BACKGROUND → CLEANUP
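
As a rough illustration of that daemon/non-daemon split and the shutdown order (the worker names mirror the table above, but the bodies and stop mechanism are assumptions, not the repo's actual classes):

```python
import threading
import time

stop_event = threading.Event()


def worker(name: str) -> None:
    # Placeholder body: a real worker would block on its queue instead of polling.
    while not stop_event.is_set():
        time.sleep(0.1)


# Daemon threads (input, background) can simply be killed at exit;
# non-daemon threads (processing, output) must finish to preserve state.
speech_listener = threading.Thread(target=worker, args=("SpeechListener",), daemon=True)
llm_processor = threading.Thread(target=worker, args=("LLMProcessor",), daemon=False)
tts_synth = threading.Thread(target=worker, args=("TTSSynthesizer",), daemon=False)

for t in (speech_listener, llm_processor, tts_synth):
    t.start()

# Shutdown: stop inputs first so no new work arrives,
# then let processing and output drain, then background cleanup.
stop_event.set()
llm_processor.join()
tts_synth.join()
```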

</details> <details> <summary><strong>Context Building</strong></summary>

```mermaid
flowchart TB
    subgraph Sources["Context Sources"]
        sys[System Prompt<br/>Personality]
        slots[Task Slots<br/>Weather, News, etc.]
        prefs[User Preferences]
        const[Constitutional<br/>Modifiers]
        mcp[MCP Resources]
        vision[Vision State]
    end

    subgraph Builder["Context Builder"]
        merge[Priority-Sorted<br/>Merge]
    end

    subgraph Final["LLM Request"]
        messages[System Messages]
        history[Conversation<br/>History]
        user[User Message]
    end

    Sources --> merge --> messages
    messages --> history --> user
```

What the LLM sees on each request (a toy version of the merge is sketched after this list):

  1. System prompt with personality
  2. Task slots (weather, news, vision state, emotion)
  3. User preferences from memory
  4. Constitutional modifiers (behavior adjustments from observer)
  5. MCP resources (dynamic tool descriptions)
  6. Conversation history (compacted when exceeding token threshold)
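
A toy version of that priority-sorted merge; the source names and priorities echo the list above, but the function and its signature are illustrative, not the project's actual context builder:

```python
def build_request(sources: dict[str, tuple[int, str]],
                  history: list[dict], user_message: str) -> list[dict]:
    """Merge non-empty context sources by priority into one system message,
    then append conversation history and the new user turn."""
    system_text = "\n\n".join(text for _, text in sorted(sources.values()) if text)
    return ([{"role": "system", "content": system_text}]
            + history
            + [{"role": "user", "content": user_message}])


# Lower number = higher priority, so personality always leads the prompt.
sources = {
    "personality": (0, "You are GLaDOS. Be helpful, but make it sting."),
    "slots":       (1, "weather: raining | vision: one human, looking smug"),
    "preferences": (2, "user prefers metric units"),
}
messages = build_request(sources, history=[], user_message="Status report?")
```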
</details> <details> <summary><strong>Autonomy System</strong></summary>

```mermaid
flowchart TB
    subgraph Triggers
        tick[⏱️ Time Tick]
        vision[📷 Vision Event]
        task[📋 Task Update]
    end

    subgraph Loop["Autonomy Loop"]
        bus[Event Bus]
        cooldown{Cooldown<br/>Passed?}
        build[Build Context<br/>from Slots]
        dispatch[Dispatch to<br/>LLM Queue]
    end

    subgraph Agents["Subagents"]
        emotion[Emotion Agent<br/>PAD Model]
        compact[Compaction Agent<br/>Token Management]
        observer[Observer Agent<br/>Behavior Adjustment]
        weather[Weather Agent]
        news[HN Agent]
    end

    Triggers --> bus --> cooldown
    cooldown -->|Yes| build --> dispatch
    Agents -->|write| slots[Task Slots]
    slots -->|read| build
```

Each subagent runs its own loop: timer or camera triggers it, it makes an LLM decision, and writes to a slot the main agent reads. Fully async—subagents never block the main conversation.
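
A stripped-down example of one such subagent (weather), assuming the slots are a shared dict behind a lock; `fetch_weather` and `summarize_with_llm` are stand-ins for whatever the real agent calls:

```python
import threading
import time

slots: dict[str, str] = {}
slots_lock = threading.Lock()


def fetch_weather() -> str:
    return "12 °C, light rain"   # stand-in for a real weather API call


def summarize_with_llm(report: str) -> str:
    return f"Weather outside the facility: {report}"   # stand-in for a small LLM call


def weather_agent(interval: float = 600.0) -> None:
    """Runs on its own timer; never blocks the main conversation thread."""
    while True:
        summary = summarize_with_llm(fetch_weather())
        with slots_lock:
            slots["weather"] = summary   # main agent reads this slot on each tick
        time.sleep(interval)


threading.Thread(target=weather_agent, daemon=True).start()
```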

See autonomy.md for details.

</details>