Docketeer

A reasonably sized autonomous AI construction kit


Build the AI personal assistant you need with Docket.

What is Docketeer?

Docketeer is a toolkit for building the autonomous AI agent you want without pulling in dozens or hundreds of modules you don't. Instead of a sprawling monolith, Docketeer is small, opinionated, and designed to be extended through plugins.

The core of Docketeer is an agentic loop, a Docket for scheduling autonomous work, and a small set of tools for managing memory in its workspace. The inference backend is pluggable — bring your own LLM provider. Any other functionality can be added through simple Python plugins that register via standard Python entry points.
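Registration happens through ordinary packaging metadata. A hypothetical third-party plugin exposing tools might declare itself like this (the package and module names are invented for illustration; the `docketeer.tools` group name comes from the architecture diagram below):

```toml
# pyproject.toml for a hypothetical plugin package
[project]
name = "docketeer-weather"
version = "0.1.0"

[project.entry-points."docketeer.tools"]
weather = "docketeer_weather.tools:register"
```

Once the package is installed, Docketeer discovers the entry point at startup with no further configuration.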

Docketeer is currently in early, active development. If you're feeling adventurous, please jump in and send PRs! Otherwise, follow along until things are a little more baked.

The philosophy behind Docketeer's autonomy

Our frontier models don't need much help at all to behave autonomously — they just need an execution model to support it. All we're doing here is giving the agent a Docket of its own, on which it can schedule its own future work. As of today, the agent can use a tool to schedule a nudge Docket task to prompt itself at any future time.

The docketeer-autonomy plugin builds on this with recurring reverie and consolidation cycles that give the agent opportunities throughout the day to evaluate the world, reflect on recent events, schedule new tasks, and update its own memory and knowledge base. It also adds journaling, per-person profiles, and room context — install it for the full "inner life" experience, or leave it out for a plain chatbot.

Most importantly, the agent can direct itself by updating markdown files in its own workspace. This self-prompting and the ability to self-improve its prompts are the heart of Docketeer's autonomy.

Standards

Yes, Docketeer is developed entirely with AI coding tools. Yes, every line of Docketeer has been reviewed by me, the author. Yes, 100% test coverage is required and enforced.

Security

Obviously, there are inherent risks to running an autonomous agent. Docketeer does not attempt to mitigate those risks. By using only well-aligned and intelligent models, I'm hoping to avoid the most catastrophic outcomes that could come from letting an agent loose on your network. However, the largest risks are still likely to come from nefarious human actors who are eager to target these new types of autonomous AIs.

Docketeer's architecture does not require listening to the network at all. There is no web interface and no API. Docketeer starts up, connects to Redis, connects to the chat system, and only responds to prompts that come from you and the people you've allowed to interact with it via chat or from itself via future scheduled tasks.

Prompt injection will remain a risk with any agent that can reach out to the internet for information.

Architecture

```mermaid
graph TD
    People(["👥 People"])
    People <--> ChatClient

    subgraph chat ["🔌 docketeer.chat"]
        ChatClient["Rocket.Chat, TUI, ..."]
    end

    ChatClient <--> Brain

    subgraph agent ["Docketeer Agent"]
        Brain["🧠 Brain / agentic loop"]

        subgraph inference ["🔌 docketeer.inference"]
            API["Anthropic, DeepInfra, ..."]
        end
        Brain <-- "reasoning" --> API
        Brain <-- "memory" --> Workspace["📂 Workspace"]
        Brain <-- "scheduling" --> Docket["⏰ Docket"]

        Docket -- triggers --> CoreTasks["nudge"]
        CoreTasks --> Brain

        subgraph prompt ["🔌 docketeer.prompt"]
            Prompts["agentskills, mcp, ..."]
        end
        Prompts -. system prompt .-> Brain

        Brain -- tool calls --> Registry
        subgraph tools ["🔌 docketeer.tools"]
            Registry["Tool Registry"]
            CoreTools["workspace · chat · docket"]
            PluginTools["web, monty, mcp, ..."]
        end
        Registry --> CoreTools
        Registry --> PluginTools

        Docket -- triggers --> PluginTasks
        subgraph tasks ["🔌 docketeer.tasks"]
            PluginTasks["git backup, reverie, consolidation, ..."]
        end

        subgraph bands ["🔌 docketeer.bands"]
            Bands["wicket, atproto, ..."]
        end
        Bands -- signals --> Brain

        subgraph hooks ["🔌 docketeer.hooks"]
            Hooks["tunings, tasks, mcp, ..."]
        end
        Workspace -- file ops --> Hooks

        subgraph executor ["🔌 docketeer.executor"]
            Sandbox["bubblewrap, subprocess, ..."]
        end
        PluginTools --> Sandbox

        subgraph vault ["🔌 docketeer.vault"]
            Secrets["1password, ..."]
        end
        PluginTools --> Secrets
    end

    Sandbox --> Host["🖥️ Host System"]

    classDef plugin fill:#f0f4ff,stroke:#4a6fa5
    classDef core fill:#fff4e6,stroke:#c77b2a
    class API,ChatClient,Prompts,PluginTools,Sandbox,Secrets,PluginTasks,Bands plugin
    class Brain core
```

Lines

Everything the agent does happens on a line — a named, persistent context of reasoning with its own conversation history. Chat conversations, scheduled tasks, background research, and realtime event streams each run on their own lines. Lines are just names: a DM with chris uses the line chris, a channel uses general, reverie runs on reverie. A few more examples:

  • The agent schedules a task to research an API — it runs on the line api-research and builds up context across multiple tool-use turns without cluttering any chat.
  • A tuning watches GitHub webhooks for PRs across several repos — signals arrive on the line opensource, where the agent has ongoing context about each project.
  • The agent notices a thread worth following up on tomorrow — it schedules a nudge on the line chris so the reply lands in the same conversation.

All lines share the same workspace. Each line can have a context file at lines/{name}.md whose body gets injected as system context whenever that line is active — whether the message comes from a chat conversation, a scheduled task, or a realtime signal. This gives the agent standing instructions for that context ("only flag important emails", "notify Chris about external contributors") that it can update itself as it learns.
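A line context file is plain markdown, and the exact contents are up to the agent and its user. A hypothetical lines/opensource.md, in the spirit of the examples above, might read:

```markdown
You receive GitHub signals for my open-source repos on this line.

- Only notify Chris about pull requests from external contributors.
- Log routine CI results silently; don't send chat messages for them.
- Keep a short running summary of open review requests here and prune
  it as items are resolved.
```

Because the agent can write to its own workspace, it can append or revise these instructions as it learns what matters on each line.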

Brain

The Brain is the agentic loop at the center of Docketeer. It receives messages on a line, builds a system prompt, manages per-line conversation history, and runs a multi-turn tool-use loop against the configured inference backend. Each turn sends the conversation, system prompt blocks, and available tool definitions to the LLM and gets back text and/or tool calls — looping until the model responds with text or hits the tool-round limit. Everything else in the system either feeds into the Brain or is called by it.
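The turn structure described above can be sketched with a stubbed model. Note that `call_model`, the message shapes, and the reply format here are illustrative assumptions, not Docketeer's actual API:

```python
# Minimal sketch of a multi-turn tool-use loop (illustrative; not Docketeer's API).
from typing import Callable


def agent_loop(
    messages: list[dict],
    call_model: Callable[[list[dict]], dict],
    tools: dict[str, Callable],
    max_rounds: int = 5,
) -> str:
    """Run tool rounds until the model answers with text or hits the round limit."""
    for _ in range(max_rounds):
        # Each turn sends the full conversation; the stub returns either
        # {"text": ...} or {"tool": ..., "args": ...}.
        reply = call_model(messages)
        if "text" in reply:
            return reply["text"]
        # Execute the requested tool and feed the result back into the conversation.
        result = tools[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "name": reply["tool"], "content": result})
    return "(tool-round limit reached)"
```

Wiring in a fake model that first requests a tool and then answers shows the loop terminating on the text reply, exactly as the paragraph describes.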

Workspace

The agent's persistent filesystem — its long-term memory. Plugins can populate it with whatever files they need; for example, the docketeer-autonomy plugin writes SOUL.md, a daily journal, and per-person profiles here. Workspace tools let the agent read and write its own files.

Docket

A Redis-backed task scheduler that gives the agent autonomy. The built-in nudge task lets the agent schedule future prompts for itself — each scheduled task runs on a line with persistent conversation history. If the task specifies a line: and that line has a context file, the line's instructions are injected as system context. Task plugins (like docketeer-autonomy) can add their own recurring tasks.
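The real Docket is Redis-backed, but the core idea of a nudge — a future self-prompt bound to a line — can be sketched with an in-memory stand-in (everything below is a simplified illustration, not Docket's API):

```python
# In-memory stand-in for nudge scheduling; the real Docket persists to Redis.
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class Nudge:
    when: float                        # unix timestamp to fire at (sole sort key)
    line: str = field(compare=False)   # line the prompt will run on
    prompt: str = field(compare=False)


class NudgeQueue:
    def __init__(self) -> None:
        self._heap: list[Nudge] = []

    def schedule(self, when: float, line: str, prompt: str) -> None:
        heapq.heappush(self._heap, Nudge(when, line, prompt))

    def due(self, now: float) -> list[Nudge]:
        """Pop every nudge whose time has arrived, earliest first."""
        fired = []
        while self._heap and self._heap[0].when <= now:
            fired.append(heapq.heappop(self._heap))
        return fired
```

Each fired nudge would then be delivered to the Brain on its line, where the line's persistent history and context file apply.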

Antenna

The realtime event feed system. Bands are persistent streaming connections to external services — docketeer-wicket connects to an SSE endpoint, docketeer-atproto connects to the Bluesky Jetstream WebSocket relay. Each band produces signals: structured events with a topic, timestamp, and payload.

Tunings tell the Antenna what to listen for and where to send it. Each tuning routes signals to a line — if that line has a context file at lines/{name}.md, the line's instructions are injected as system context alongside any notes in the tuning file's body. This means multiple tunings can share a line and its behavioral instructions. For example, several GitHub repo tunings might all deliver to the opensource line, which has instructions about when to notify the user vs. log silently.

The agent can set up and tear down tunings at runtime by writing files to tunings/ — no restarts needed. Line context files are read fresh on every signal delivery, so the agent can refine its own instructions over time.
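The on-disk shape of a tuning is not specified here, so the file below is purely a guess at one plausible format — frontmatter for routing plus free-form notes — with every key name invented for illustration. A hypothetical tunings/docketeer-prs.md:

```markdown
---
band: wicket
topic: github/chrisguidry/docketeer/pulls
line: opensource
---

Watch for new pull requests on Docketeer. Anything from an external
contributor is worth a ping; my own branches are not.
```

Several such files could route to the same opensource line, sharing that line's context file as described above.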

Vault

The agent often needs secrets — API keys, tokens, passwords — to do useful work, but those values should never appear in the conversation context where they'd be visible in logs or could leak through tool results. The vault plugin gives the agent five tools (list_secrets, store_secret, generate_secret, delete_secret, capture_secret) that let it manage secrets by name without ever seeing the raw values. When the agent needs a secret inside a sandboxed command, it passes a secret_env mapping on run or shell and the executor resolves the names through the vault at the last moment, injecting values as environment variables that only the child process can see.
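The last-moment resolution step can be sketched as follows. The `VAULT` dict and `run_with_secrets` helper are stand-ins invented for this example, not Docketeer's executor API; the point is that the transcript only ever contains secret names, while values exist solely in the child's environment:

```python
# Sketch of last-moment secret resolution: the conversation only sees names.
import os
import subprocess

# Stand-in for the real vault backend (e.g. 1Password); maps names to values.
VAULT = {"github-token": "s3cr3t-value"}


def run_with_secrets(argv: list[str], secret_env: dict[str, str]) -> str:
    """Resolve secret names through the vault and expose them only to the child."""
    env = dict(os.environ)
    for var, secret_name in secret_env.items():
        env[var] = VAULT[secret_name]  # resolved here, never echoed into the transcript
    return subprocess.run(argv, env=env, capture_output=True, text=True).stdout
```

A command run this way reads the value from its environment, while the tool call that launched it carried only the mapping `{"GH_TOKEN": "github-token"}`.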

Plugin extension points

All plugins are discovered via standard Python entry points. Single-plugin groups (docketeer.inference, docketeer.chat, docketeer.executor, docketeer.vault, docketeer.search) auto-select when only one is installed, or can be chosen with an environment variable when several are available. Multi-plugin groups (docketeer.tools, docketeer.prompt, docketeer.tasks, docketeer.bands, docketeer.hooks) load everything they find.

