PLUR — Your agents share the same memory

Persistent memory for AI agents. Local-first, zero-cost, works across MCP tools.

plur.ai · Benchmark · Engram Spec · npm

The idea

You correct your agent's coding style on Monday. On Tuesday, it makes the same mistake. You explain your architecture in Cursor. That night, Claude Code has no idea.

PLUR fixes this. Install it once, and corrections, preferences, and conventions persist — across sessions, tools, and machines. Your memory is stored as plain YAML on your disk. No cloud, no API calls, no black box.

The interesting part: Haiku with PLUR memory outperforms Opus without it — 2.6x better on tool routing, at roughly a tenth of the cost. It turns out the bottleneck isn't model intelligence. It's context.

Install

Tell your agent

Go to plur.ai and tell your agent to install memory for your tool — Claude Code, Cursor, Windsurf, or OpenClaw. The site has the right config for your setup.

Manual setup (Claude Code)

One command sets up everything — storage, MCP config, and Claude Code hooks:

npx @plur-ai/mcp init

This creates ~/.plur/ for storage, adds PLUR to your .mcp.json, and installs Claude Code hooks for automatic engram injection. Restart Claude Code to activate.
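After init, the MCP entry in your .mcp.json should look roughly like this (a sketch; the exact command and args may vary by version):

```json
{
  "mcpServers": {
    "plur": {
      "command": "npx",
      "args": ["@plur-ai/mcp"]
    }
  }
}
```

If the entry is present and Claude Code has been restarted, the PLUR tools should appear in the agent's tool list.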

Global install (faster startup)

npm install -g @plur-ai/mcp
plur-mcp init

OpenClaw

openclaw plugins install @plur-ai/claw
openclaw config set plur.enabled true

That's it. PLUR works in the background from here. No workflow changes needed — just use your tools as usual. Corrections accumulate automatically.

Hermes Agent

pip install plur-hermes

The plugin registers automatically via Hermes' plugin system. It injects relevant memories before each LLM call, extracts learnings from agent responses, and exposes all PLUR tools to the agent. Requires the PLUR CLI (npm install -g @plur-ai/cli).

Verify it works

Ask your agent: "What's my PLUR status?" — it should call plur_status and return your engram count and storage path.

How it works

PLUR has two storage primitives:

Engrams — learned knowledge that persists across sessions. Each engram is a typed assertion ("always use blue-green deploys", "never force-push to main") with:

  • Activation — retrieval strength that decays over time (ACT-R model) and strengthens on access. Stale facts naturally fade from injection without manual cleanup.
  • Feedback signals — positive/negative ratings that train injection quality over time
  • Scope — hierarchical namespace (global, project:myapp, cluster:prod, service:api) controlling where the engram applies
  • Polarity — automatic classification of "do" vs "don't" rules, so constraints are injected separately from directives
  • Associations — links to other engrams, including co-access edges that form automatically when engrams are recalled together
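To make these fields concrete, a stored engram record might look roughly like this (field names are illustrative; the engram spec is authoritative):

```yaml
# Hypothetical engram record — shape shown for illustration only
id: eng_01
assertion: "never force-push to main"
type: correction
polarity: dont            # "do" vs "don't" classification
scope: project:myapp      # hierarchical namespace
activation: 0.72          # decays over time, strengthens on access
feedback:
  positive: 3
  negative: 0
associations: [eng_02]    # co-access edges to related engrams
```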

Episodes — timestamped event records for "what happened when." Each episode captures a summary, timestamp, agent attribution, and channel. Use episodes for incident timelines, session logs, and operational history. Query by time range, agent, or channel.

You correct your agent  →  engram created  →  YAML on your disk
Agent fixes an incident →  episode captured →  timeline searchable
Next session starts     →  relevant engrams injected  →  agent remembers
You rate the result     →  engram strengthens or decays  →  quality improves
Unused engrams          →  activation decays  →  naturally fade from injection
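The decay step in the flow above follows the ACT-R base-level activation equation, B = ln(Σ t_j^(-d)), where each t_j is the time since an access and d is a decay rate (commonly 0.5). A minimal sketch of the idea, not PLUR's actual implementation:

```typescript
// ACT-R base-level activation: B = ln(sum over accesses of t^-d).
// Frequent or recent accesses raise activation; long-unused items fade.
function activation(accessTimes: number[], now: number, d = 0.5): number {
  const sum = accessTimes
    .map((t) => Math.pow(Math.max(now - t, 1e-6), -d))
    .reduce((a, b) => a + b, 0);
  return Math.log(sum);
}

// An engram accessed recently outscores one accessed long ago.
const recent = activation([95, 99], 100); // accessed 5 and 1 ticks ago
const stale = activation([10], 100);      // accessed 90 ticks ago
console.log(recent > stale); // true
```

Because the sum is over all accesses, repeated recall also raises activation, which is how "strengthens on access" falls out of the same formula.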

Search is fully local: BM25 (with IDF weighting, TF saturation, length normalization) + BGE embeddings + Reciprocal Rank Fusion. Zero API calls. 86.7% on LongMemEval — on par with cloud-based solutions that charge per query.
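Reciprocal Rank Fusion merges the BM25 and embedding rankings without tuned weights: each list contributes 1/(k + rank) per item, with k conventionally set to 60. A minimal sketch under those assumptions (not PLUR's internals):

```typescript
// Reciprocal Rank Fusion: fuse ranked ID lists from multiple retrievers.
// score(id) = sum over lists of 1 / (k + rank), with rank starting at 1.
function rrf(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const list of rankings) {
    list.forEach((id, i) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + i + 1));
    });
  }
  // Sort by fused score, highest first.
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}

// "b" ranks well in both lists, so it fuses to the top.
const fused = rrf([["a", "b", "c"], ["b", "c", "a"]]);
console.log(fused[0]); // "b"
```

RRF only looks at ranks, never raw scores, which is what lets lexical (BM25) and dense (BGE) results combine without normalizing their incompatible score scales.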

Plugins (OpenClaw, Hermes) automatically capture learnings from agent conversations — no manual saving needed. The agent's corrections become engrams without you doing anything.

See the full engram spec for schema details, activation model, and injection algorithm.

Usage

import { Plur } from '@plur-ai/core'

const plur = new Plur()

// Learn from a correction
plur.learn('toEqual() in Vitest is strict — use toMatchObject() for partial matching', {
  type: 'correction',
  scope: 'project:my-app',
  domain: 'dev/testing'
})

// Recall (hybrid: BM25 + embeddings, zero cost)
const results = await plur.recallHybrid('vitest assertion matching')

// Inject relevant engrams into agent context
const { engrams } = plur.inject('Write tests for the user service', {
  scope: 'project:my-app',
  limit: 15
})

// Feedback trains the system (rate a recalled engram)
plur.feedback(results[0].id, 'positive')

// Capture an event (episode)
plur.capture('Fixed CrashLoopBackOff on bee-3-4 by increasing memory limits', {
  agent: 'claude-code',
  channel: 'terminal'
})

// Query timeline
const incidents = plur.timeline({ agent: 'claude-code' })

// Sync across machines
plur.sync('git@github.com:you/plur-memory.git')

MCP tools

| Tool | What it does |
|------|--------------|
| plur_learn | Store a correction, preference, or convention |
| plur_recall_hybrid | Retrieve relevant memories (BM25 + embeddings) |
| plur_inject_hybrid | Select engrams for current task within token budget |
| plur_feedback | Rate relevance (trains quality over time) |
| plur_forget | Retire a memory (activation decays, eventually pruned) |
| plur_capture | Record an event — incident, resolution, session milestone |
| plur_timeline | Query episode history by time, agent, or channel |
| plur_ingest | Extract engrams from text automatically |
| plur_sync | Sync across devices via git |
| plur_status | Check system health and engram counts |

Benchmark

We ran 19 decisive contests across three Claude models (Haiku, Sonnet, Opus). Same task, same prompt — one agent with PLUR, one without. Ties removed.

| Knowledge type | Record | What it tests |
|----------------|--------|---------------|
| House rules | 12–0 | Tag conventions, file routing, project structure |
| Tool routing | 10–2 | Finding the right tool among 100+ options |
| Past experience | 4–0 | API quirks, debugging insights, infrastructure |
| Learned style | 5–2 | Communication tone, design preferences |

31 wins, 4 losses (89% win rate). Without memory, agents got house rules right 10–38% of the time depending on model — with PLUR, 12–0 across every model. Memory isn't a reasoning crutch — it's information the model literally cannot infer.

The cost insight was unexpected: Haiku + PLUR scored 0.80 on discoverability. Opus alone scored 0.31. A $0.25/MTok model with memory beat a $15/MTok model without it.

Full methodology →

What PLUR is — and isn't

PLUR is agent memory — it stores corrections, preferences, conventions, and architectural decisions that an AI agent learns during work sessions, and injects them back when they're relevant.

PLUR is not a general-purpose search engine, a codebase indexer, or a replacement for code intelligence tools. It doesn't parse ASTs, navigate class hierarchies, or search your source files. If you need code-aware search (tree-sitter, language server features, symbol lookup), tools like claude-mem or your IDE's built-in search are the right choice.

The two are complementary:

| | PLUR | Code intelligence tools |
|---|------|-------------------------|
| Stores | Learned knowledge (engrams) + event timeline (episodes) | Code structure, symbols, definitions |
| Search | Engram recall (BM25 + embeddings over memory) | AST traversal, symbol lookup, semantic code search |
| Learns | From agent corrections, feedback, usage patterns | From static analysis of source code |
| Captures | Auto-extracts learnings from conversations (via plugins) | N/A |
| Decays | Yes — unused memories fade (ACT-R model) | No — code index reflects current state |
| Timeline | Episodes track what happened when (incidents, fixes, decisions) | Git log only |
| Cross-tool | Any MCP client (Claude Code, Cursor, Windsurf, OpenClaw, Hermes) | Typically tied to one tool |

While search is a core part of PLUR (finding the right engram to inject), the search targets are always engrams — not files, not code, not documents. PLUR's hybrid search (BM25 + embeddings + RRF) is optimized for short natural-language assertions, not source code.

Packages

| Package | Description |
|---------|-------------|
| @plur-ai/core | Engram engine — learn, recall, inject, search, decay |
| @plur-ai/mcp | MCP server for Claude Code, Cursor, Windsurf |
| @plur-ai/claw | OpenClaw ContextEngine plugin |
| plur-hermes | Hermes Agent plugin (Python, via CLI bridge) |

Architecture

@plur-ai/core
├── engrams.ts           Engram CRUD + YAML persistence
├── episodes.ts          Episode capture + timeline queries
├── fts.ts               BM25 with IDF, TF saturation (k1/b), length normalization
├── embeddings.ts        BGE-small-en-v1.5, 384-dim, local ONNX
├── hybrid-search.ts     Reciprocal Rank Fusion
├── inject.ts            Context-aware selection + spreading activation
├── decay.ts             ACT-R activation decay
├── secrets.ts           Secret detection (API keys, passwords, tokens)
├── sync.ts              Git-based sync + file locking (O_EXCL)
├── storage.ts           Path detection + YAML I/O
└── storage-indexed.ts   Optional SQLite read index

@plur-ai/mcp          Wraps core as MCP tools
@plur-ai/claw          OpenClaw ContextEngine hooks (assemble/compact/afterTurn)
plur-hermes            Python plugin for Hermes Agent (CLI subprocess bridge)

Storage

Everything is plain YAML. Open it, read it.
