# Harlo
Harlo — Your AI coach. Built on USD composition semantics for persistent cognitive state management. Patent Pending.
## Install / Use
`/learn @JosephOIbrahim/Harlo`

Category: Development & Engineering
Your AI coach. Watches your patterns, predicts your crashes, backs off during flow, and tells you when to stop before you burn out. Built on USD composition semantics for persistent, local-first cognitive state management.
Your memory, your device. Harlo stores all state locally as composable USD layers — no cloud dependency, no data mining, no rented access to your own mind.
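As a concrete illustration of what "composable USD layers on disk" means, a sublayered, time-sampled `.usda` file might look like the sketch below. The prim name and `momentum` attribute are illustrative placeholders, not Harlo's actual schema; only the sublayer paths are taken from this README.

```usda
#usda 1.0
(
    subLayers = [
        @delegates/claude.usda@,
        @delegates/claude_code.usda@
    ]
)

def "CognitiveObservation"
{
    double momentum.timeSamples = {
        1: 0.42,
        2: 0.55,
    }
}
```

Because it is plain text, a layer like this diffs cleanly in Git, and delegate opinions compose over it via USD's LIVRPS strength ordering.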
## Status
PRODUCTION LIVE — Harlo v3.3.1
250 sprint tests · 890 core tests · 41 Rust tests · All passing
458 organic observations collected
5 sprints shipped · Real .usda on disk · Predictions flowing
| Sprint | Tests | What Shipped |
|--------|-------|-------------|
| S1 State Machine | 84 | Pydantic schemas, MockCogExec DAG (networkx), 7 pure computation functions, 26-invariant validator, 10K synthetic trajectories via Profile-Driven Markov Biasing, XGBoost predictor (100% per-field accuracy), Bridge integration |
| S2 OpenExec | -- | USD 26.03 built from source with PXR_BUILD_EXEC=ON. C++ Exec libraries compile. Circuit-breaker triggered: zero Python bindings in v26.03 source. MockCogExec continues to serve. |
| S3 Hydra Delegates | 85 | HdCognitiveDelegate ABC, DelegateRegistry (capability matching), HdClaude + HdClaudeCode, compute_routing (requirements not names), OOB consent tokens (HMAC-signed, TTL), sublayer-per-delegate concurrency, CognitiveEngine singleton, 20-exchange e2e |
| S4 Real USD | 59 | CognitiveStage wrapping pxr.Usd.Stage, stage_factory toggle, .usda files on disk with time-sampled CognitiveObservation, delegate sublayer .usda files, backend parity verified (mock = real USD) |
| S5 Production | 22 | Graceful degradation (independent failure isolation), health check endpoint, kill switches (ENGINE_ENABLED, USE_REAL_USD, OBSERVATION_LOGGING, PREDICTION_ENABLED), first session verified, production docs |
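The OOB consent tokens shipped in S3 (HMAC-signed, TTL) can be sketched with Python's stdlib `hmac`. This is a minimal illustration, not Harlo's wire format: the secret, payload fields, and encoding are all assumptions.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"example-shared-secret"  # hypothetical; a real deployment loads this out-of-band

def issue_consent_token(delegate_id: str, ttl_s: int = 300) -> str:
    """Mint an HMAC-signed consent token carrying an expiry timestamp."""
    payload = json.dumps({"delegate": delegate_id, "exp": time.time() + ttl_s}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_consent_token(token: str) -> bool:
    """Reject tokens with a bad signature or an elapsed TTL."""
    body, _, sig = token.rpartition(".")
    payload = base64.urlsafe_b64decode(body)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return time.time() < json.loads(payload)["exp"]
```

Revocation (also listed in the diagram below the fold) would sit on top of this, e.g. a server-side denylist of token signatures.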
## Tech Stack
- USD 26.03 — Cognitive state stored in real `.usda` files. Time-sampled. Human-readable. Git-trackable. Sublayer composition via LIVRPS.
- OpenExec — C++ libs built, Python bindings deferred (Pixar hasn't shipped them yet). Architecture is OpenExec-native; implementation catches up later.
- Hydra Delegates — The `Hd` prefix is a naming convention, not an import. Pure Python. Any LLM implements the interface, registers, done.
- XGBoost — MultiOutputRegressor predicting momentum, burnout, energy, and burst from a 111-feature sliding window. Trained on 10K synthetic trajectories (278K exchanges).
- Python 3.12 (USD) / 3.14 (project) — Dual venv. Real USD on 3.12, graceful mock fallback on 3.14.
- Rust — Hippocampus crate via PyO3. 1-bit SDR encoding, XOR popcount kNN, lazy decay. Sub-2ms recall.
- MCP — 8 tools over stdio. Works with Claude Desktop, Claude Code, any MCP client.
## Architecture

### System Layers
```mermaid
%%{init: {'theme': 'dark', 'themeVariables': {'primaryColor': '#1a1a2e', 'primaryTextColor': '#e0e0e0', 'primaryBorderColor': '#7c3aed', 'lineColor': '#7c3aed', 'secondaryColor': '#16213e', 'tertiaryColor': '#0f3460'}}}%%
graph TB
USER["You · Claude Desktop / Claude Code"]:::user
subgraph MCP["MCP Server · 8 Tools · stdio"]
direction LR
COACH["twin_coach"]:::tool
STORE["twin_store"]:::tool
RECALL["twin_recall"]:::tool
QPE["query_past_experience"]:::tool
PATTERNS["twin_patterns"]:::tool
SESSION["twin_session_status"]:::tool
RESOLVE["resolve_verifications"]:::tool
RECAL["trigger_recalibration"]:::tool
end
subgraph ENGINE["CognitiveEngine · Production Singleton"]
direction TB
DAG["MockCogExec · networkx DAG\nburst → energy → momentum\n→ burnout → allostasis\n+ injection_gain · context_budget · routing"]:::engine
DELEGATES["Hydra Delegates\nHdClaude · HdClaudeCode\ncapability-matched routing"]:::engine
PREDICT["XGBoost Predictor\n3-step window · 111 features\n→ momentum · burnout · energy · burst"]:::engine
end
subgraph STAGE["USD Stage · .usda on Disk"]
direction LR
ROOT["harlo.usda\nTime-sampled state\nCanonical prim hierarchy"]:::usd
CLAUDE_SUB["delegates/claude.usda\nInteractive opinions"]:::usd
CODE_SUB["delegates/claude_code.usda\nBatch opinions"]:::usd
end
subgraph MEMORY["Core Twin · Biologically-Architected Memory"]
direction TB
HOT["Hot Tier · FTS5\n< 0.2ms store"]:::memory
WARM["Warm Tier · SDR Hamming\nRust PyO3 · < 2ms recall"]:::memory
ELENCHUS["Elenchus · GVR\ntrace-excluded verify"]:::memory
HEBBIAN["Hebbian · dual-mask\nSDR evolution"]:::memory
COMPOSITION["Composition · Merkle\nLIVRPS resolution"]:::memory
end
BUFFER["Observation Buffer\nanchor 20% · organic 80%\n458 observations"]:::buffer
USER --> MCP
MCP --> ENGINE
ENGINE --> STAGE
ENGINE --> BUFFER
STAGE --> ENGINE
MCP --> MEMORY
MEMORY --> MCP
ENGINE -->|"enriched context"| USER
classDef user fill:#7c3aed,stroke:#a78bfa,color:#fff,font-weight:bold
classDef tool fill:#0f3460,stroke:#3b82f6,color:#93c5fd
classDef engine fill:#1e3a5f,stroke:#60a5fa,color:#bfdbfe,font-weight:bold
classDef usd fill:#1a4a3a,stroke:#22c55e,color:#bbf7d0,font-weight:bold,stroke-width:3px
classDef memory fill:#2e1a4a,stroke:#a78bfa,color:#ddd6fe
classDef buffer fill:#4a3a1a,stroke:#f59e0b,color:#fde68a
```
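The predictor node above consumes a 3-step window of 111 features. A minimal sketch of that featurization, assuming 37 per-exchange features flattened across 3 steps (the feature names and zero-padding policy here are invented for illustration):

```python
def window_features(history: list[dict], keys: list[str], steps: int = 3) -> list[float]:
    """Flatten the last `steps` exchanges into one fixed-width feature vector."""
    window = history[-steps:]
    # Left-pad with zeroed exchanges when the session is younger than the window.
    pad = [{k: 0.0 for k in keys}] * (steps - len(window))
    return [ex[k] for ex in pad + window for k in keys]
```

With 37 keys and `steps=3` this yields the 111-wide input that the MultiOutputRegressor maps to the four forecast targets.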
### Exchange Loop
Every MCP tool call flows through this 7-step pipeline:
```mermaid
%%{init: {'theme': 'dark', 'themeVariables': {'primaryColor': '#1a1a2e', 'primaryTextColor': '#e0e0e0', 'primaryBorderColor': '#7c3aed', 'lineColor': '#7c3aed'}}}%%
graph LR
CALL["MCP Tool Call"]:::input
subgraph PIPELINE["CognitiveEngine · Per-Exchange Pipeline"]
direction LR
S1["1 · Author\nBuild observation\nfrom tool context"]:::step
S2["2 · Evaluate\nDAG: burst → energy\n→ momentum → burnout\n→ allostasis"]:::step
S3["3 · Route\ncompute_routing →\ncapability requirements"]:::step
S4["4 · Delegate\nSync → Execute\n→ CommitResources\nto sublayer"]:::step
S5["5 · Observe\nEmit to buffer\nanchor/organic split"]:::step
S6["6 · Predict\nXGBoost forecast\nauthor to /prediction"]:::step
S7["7 · Save\n.usda to disk\ngraceful on failure"]:::step
S1 --> S2 --> S3 --> S4 --> S5 --> S6 --> S7
end
RESPONSE["Enriched Response\ncognitive_context\ndelegate_id · expert\nprediction"]:::output
CALL --> PIPELINE --> RESPONSE
classDef input fill:#7c3aed,stroke:#a78bfa,color:#fff,font-weight:bold
classDef step fill:#1e3a5f,stroke:#60a5fa,color:#bfdbfe
classDef output fill:#22c55e,stroke:#4ade80,color:#fff,font-weight:bold
```
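The seven steps can be sketched as straight-line orchestration code. Every method name on `engine` below is invented for illustration; the one property taken from this README is that step 7 degrades gracefully instead of failing the exchange.

```python
def run_exchange(engine, tool_context: dict) -> dict:
    """Illustrative per-exchange pipeline: author → evaluate → route →
    delegate → observe → predict → save."""
    obs = engine.author(tool_context)             # 1. build observation from tool context
    state = engine.evaluate_dag(obs)              # 2. burst → energy → momentum → burnout → allostasis
    requirements = engine.compute_routing(state)  # 3. capability requirements, not delegate names
    result = engine.dispatch(requirements, obs)   # 4. delegate Sync → Execute → CommitResources
    engine.observe(obs)                           # 5. emit to buffer (anchor/organic split)
    prediction = engine.predict(state)            # 6. XGBoost forecast
    try:
        engine.save_stage()                       # 7. write .usda to disk
    except OSError:
        pass                                      # graceful on failure: keep serving
    return {
        "cognitive_context": state,
        "expert": result,
        "prediction": prediction,
    }
```

Isolating the disk write in its own `try` block is one way to get the "independent failure isolation" property listed for Sprint 5.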
### Cognitive State Machines
Five state machines are evaluated via a topologically sorted DAG on every exchange:
```mermaid
%%{init: {'theme': 'dark', 'themeVariables': {'primaryColor': '#1a1a2e', 'primaryTextColor': '#e0e0e0', 'primaryBorderColor': '#7c3aed', 'lineColor': '#7c3aed'}}}%%
stateDiagram-v2
direction LR
state Momentum {
direction LR
[*] --> COLD_START
CRASHED --> COLD_START: always
COLD_START --> BUILDING: tasks >= threshold
BUILDING --> ROLLING: coherence + velocity
ROLLING --> PEAK: exchanges + burst
PEAK --> CRASHED: burnout >= ORANGE
}
state Burnout {
direction LR
[*] --> GREEN
GREEN --> YELLOW: frustration or duration
YELLOW --> ORANGE: sustained frustration
ORANGE --> RED: extreme frustration
note right of RED: ANY -> RED via exogenous override
}
state Energy {
direction LR
[*] --> MEDIUM
HIGH --> MEDIUM: natural decay
MEDIUM --> LOW: session length
LOW --> DEPLETED: continued work
note right of DEPLETED: Burst suspends decay\nDebt applies on exit
}
state Burst {
direction LR
[*] --> NONE_B
NONE_B --> DETECTED: velocity + coherence
DETECTED --> PROTECTED: sustained
PROTECTED --> WINDING: exchange threshold
WINDING --> EXIT_PREP: exit threshold
EXIT_PREP --> NONE_B: next exchange
}
```
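Taking the Burnout machine as an example, one transition step could be sketched as below. The `0.7` frustration threshold is an invented placeholder (the diagram also weighs session duration and "sustained" vs "extreme" frustration, which this sketch collapses into one knob); the exogenous override is taken from the diagram's note.

```python
# GREEN → YELLOW → ORANGE → RED, escalating one level at a time.
LEVELS = ["GREEN", "YELLOW", "ORANGE", "RED"]

def step_burnout(level: str, frustration: float, exogenous_red: bool = False) -> str:
    """Advance the Burnout machine by one exchange."""
    if exogenous_red:
        return "RED"  # ANY -> RED via exogenous override, per the diagram note
    i = LEVELS.index(level)
    if frustration >= 0.7 and i < len(LEVELS) - 1:  # placeholder threshold
        return LEVELS[i + 1]
    return level
```

In the real DAG this output then feeds Momentum (PEAK crashes at ORANGE or worse) and the routing safety overrides described below.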
### Hydra Delegate Pattern
The DAG outputs what's needed. The registry selects who fulfills it. The DAG never names a specific LLM.
```mermaid
%%{init: {'theme': 'dark', 'themeVariables': {'primaryColor': '#1a1a2e', 'primaryTextColor': '#e0e0e0', 'primaryBorderColor': '#7c3aed', 'lineColor': '#7c3aed'}}}%%
graph TB
ROUTING["compute_routing\nOutputs: requirements\nNOT delegate names"]:::route
subgraph REQUIREMENTS["Capability Requirements"]
direction LR
REQ_TASKS["supported_tasks\nreasoning · coaching\ncode_generation"]:::req
REQ_LATENCY["latency_max\nrealtime · interactive\nbatch"]:::req
REQ_CODING["requires_coding\ntrue / false"]:::req
REQ_CTX["context_budget\nlight · medium · heavy"]:::req
end
subgraph SAFETY["Safety Overrides"]
direction LR
RED["RED burnout\n-> force restorer\nconsent ignored"]:::red
ORANGE["ORANGE + no consent\n-> force restorer"]:::orange
CONSENT["OOB Consent\nHMAC-signed\nTTL · revocable"]:::consent
end
subgraph REGISTRY["DelegateRegistry · Capability Match"]
direction TB
MATCH["Filter → Sort → Select\nprefer lower latency\nthen higher context"]:::registry
subgraph DELEGATES["Registered Delegates"]
direction LR
CLAUDE["HdClaude\nreasoning · coaching\nanalysis · exploration\ninteractive · 200K"]:::claude
CODE["HdClaudeCode\nimplementation · debugging\ncode_generation · testing\nbatch · 200K"]:::code
end
end
```
