# TheAlgorithm

General problem-solving algorithm for achieving Euphoric Surprise through verifiable Ideal State Criteria.

An experiment in systematic problem-solving.

## Install / Use

```
/learn @danielmiessler/TheAlgorithm
```

## 🎯 The Idea
I've been working on a general problem-solving framework that I'm calling TheAlgorithm. The core idea is pretty simple: systematically move from current state to ideal state through verifiable criteria.
I'm using it as the foundation for my PAI (Personal AI Infrastructure) system, and early results are promising.
**The goal:** Every response should surprise and delight ("Euphoric Surprise")

**The method:** Hill-climb toward the ideal state using testable criteria
This is v0.1 - my first real attempt at codifying this. I'm sure it'll evolve significantly as I learn what works and what doesn't.
## 💡 The Core Insight

I think the most important thing in any iterative improvement process is the transition from CURRENT STATE to IDEAL STATE.

This seems obvious, but I don't think most systems actually operationalize it well. Here's what I'm exploring:

- **You need granular, verifiable state.** If you can't measure where you are, you can't tell if you're making progress.
- **Criteria need to be testable.** Vague goals like "make it better" don't work. You need discrete, binary tests.
- **Ideal state is your north star.** You can't build good criteria without understanding what "done" looks like.
- **The ideal state changes.** As you learn more, your understanding of "ideal" evolves. The system needs to capture that.
## ⚙️ How It Works

I'm testing three main components:

### 1. Ideal State Criteria (ISC)

Specific, testable statements about what success looks like:

- **Exactly 8 words** - Keeps them focused
- **Granular** - One thing per criterion
- **Discrete** - Clear boundaries
- **Testable** - Binary YES/NO you can check quickly
- **State-based** - What IS true, not what to DO
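A couple of these rules are mechanical enough to check in code. As a rough sketch (the `isValidCriterion` name and the imperative-verb list are my own illustration, not part of the spec):

```js
// Hypothetical sketch of mechanically checking ISC rules.
// Only the rules that are trivially machine-checkable are covered here.
function isValidCriterion(criterion) {
  const words = criterion.trim().split(/\s+/);
  const checks = {
    // Exactly 8 words
    exactlyEightWords: words.length === 8,
    // State-based: describes what IS true, not an imperative action
    stateBased: !/^(fix|make|improve|add|do)\b/i.test(criterion),
  };
  return Object.values(checks).every(Boolean);
}

console.log(isValidCriterion("All authentication tests pass after fix is applied")); // true
console.log(isValidCriterion("Fix the auth bug")); // false
```

The "granular" and "discrete" properties still need human judgment; a checker like this only catches the obvious violations.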
### 2. Seven-Phase Execution

A loop inspired by the scientific method:

1. **OBSERVE** → What's the current state and what was requested?
2. **THINK** → What's the underlying intent and ideal outcome?
3. **PLAN** → What criteria define success?
4. **BUILD** → Create the solution components
5. **EXECUTE** → Take actions toward the criteria
6. **VERIFY** → Confirm each criterion with evidence
7. **LEARN** → Capture insights for next time
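The phase sequence can be sketched as a simple driver loop. This is my own illustration, not the spec's implementation; `runAlgorithm` and the handler shape are hypothetical, and the real spec defines what each phase does:

```js
// The seven phases, run in order as a pipeline over a working state.
const PHASES = ["OBSERVE", "THINK", "PLAN", "BUILD", "EXECUTE", "VERIFY", "LEARN"];

function runAlgorithm(request, handlers) {
  let state = { request, criteria: [], evidence: [], insights: [] };
  for (const phase of PHASES) {
    state = handlers[phase](state); // each phase transforms the working state
  }
  return state;
}
```

In the real spec this is a loop, not a straight line: a failed VERIFY can send the run back through earlier phases.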
### 3. Euphoric Surprise
I'm shooting for responses that make you go "wow, I didn't expect that!" instead of just "yeah, that works."
Is this realistic? Not sure yet. But setting a high bar seems better than settling for "good enough."
## 🔗 PAI Integration

I'm using this in PAI - every interaction follows the algorithm structure. It's working well so far, but I'm still experimenting.

### Configuration

PAI can load TheAlgorithm three ways:
#### 1. Always Latest (Default)

```json
{
  "algorithmSource": "latest"
}
```

Pulls from: `TheAlgorithm.md` (main branch)

#### 2. Pin to Specific Version

```json
{
  "algorithmSource": "v0.1"
}
```

Pulls from: `versions/v0.1.md` (doesn't change)

#### 3. Use Your Own Version

```json
{
  "algorithmSource": "local",
  "algorithmLocalPath": "/path/to/your-algorithm.md"
}
```

Test your own ideas before publishing.
### How PAI Uses It

```js
// PAI fetches the algorithm spec at build time
const algorithm = await fetchAlgorithm({
  version: config.algorithmSource,
  cacheDir: "~/.claude/cache/algorithm",
  localOverride: process.env.ALGORITHM_LOCAL_OVERRIDE
});
```

**Caching:**

- **Specific versions:** Cached permanently
- **Latest:** Refreshes on builds
- **Fallback:** Uses bundled version if fetch fails
## 📦 Versioning

I'm using semantic versioning:

```
TheAlgorithm/
  TheAlgorithm.md   # Current version
  versions/
    v0.1.md         # Frozen snapshots
    v0.2.md
  CHANGELOG.md      # What changed
```

**Version bumps:**

- **MAJOR** (0.x → 1.0): Breaking changes to format
- **MINOR** (0.1 → 0.2): New features, backward compatible
- **PATCH** (0.1.0 → 0.1.1): Typos, clarifications
| Your Config | Behavior |
|-------------|----------|
| "latest" | Auto-updates with each change |
| "v0.1" | Stays on v0.1 until you change it |
| "local" | Uses your file |
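A hypothetical helper showing how that table maps config to a source file. The function name and return shape are mine; the paths come from the configuration section above:

```js
// Resolve an algorithmSource config value to a concrete file source.
function resolveSource(config) {
  switch (config.algorithmSource) {
    case "latest": return { type: "remote", path: "TheAlgorithm.md" };
    case "local":  return { type: "local",  path: config.algorithmLocalPath };
    default:       return { type: "remote", path: `versions/${config.algorithmSource}.md` };
  }
}
```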
## 📚 Documentation

The full spec is in `TheAlgorithm.md`:
- All 7 phases in detail
- ISC criteria requirements
- Examples and anti-patterns
- Common failure modes
To try it:
- Read the philosophy above to get the idea
- Check out the spec to see how it works
- Look at PAI to see it in action
- Fork it and try your own version
## 🎓 Key Concepts

### ISC (Ideal State Criteria)

Instead of "fix the auth bug", try:

- "All authentication tests pass after fix is applied" (8 words, testable)

Instead of "improve the UI", try:

- "Login button centered on screen with correct spacing" (8 words, verifiable)

The constraint forces clarity.
### Anti-Criteria

What must NOT happen:
- "No credentials exposed in git commit history"
- "No breaking changes to existing public API endpoints"
- "Database migrations do not lose any user data"
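One way to think about anti-criteria is as inverted checks: the run fails verification if any forbidden condition is ever observed. This sketch is my own reading, not from the spec:

```js
// Anti-criteria as inverted checks. `observed` is the set of forbidden
// conditions that actually occurred during the run.
function verifyAntiCriteria(antiCriteria, observed) {
  const violations = antiCriteria.filter(text => observed.has(text));
  return { passed: violations.length === 0, violations };
}
```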
### Euphoric Surprise
I'm aiming for reactions like:
- "Wow, I didn't expect that!"
- "This is exactly what I needed and more"
- "How did it know to do that?"
Instead of:
- "Good enough"
- "Met requirements"
- "No complaints"
Not sure if this is achievable consistently, but that's the experiment.
## 🔄 Version History

### v0.5.3 (2026-02-12)
- PRD Integration — Every Algorithm run creates or continues a PRD (Product Requirements Document) on disk as persistent memory
- Dual-Tracking — ISC lives in both working memory (TaskCreate) and persistent memory (PRD file) with sync rules
- ISC Quality Gate — 6-check gate (count, word count, state-not-action, binary testable, anti-criteria, coverage) blocks THINK until passed
- Effort Level System — 8 tiers (Instant→Loop) replacing TIME SLA, with phase budget guides and auto-compress at 150% overage
- Plan Mode Integration — Structured ISC construction workshop at PLAN phase for Extended+ effort levels
- Inline Verification Methods — Each criterion carries a `| Verify: CLI|Test|Static|Browser|Grep|Read|Custom` suffix
- Confidence Tags — `[E]xplicit`, `[I]nferred`, `[R]everse-engineered` on each criterion for THINK phase pressure testing
- ISC Scale Tiers — Simple (4-8), Medium (12-40), Large (40-150), Massive (150-500+) with structure rules
- Capability Registry — 25 capabilities across 6 sections (Foundation, Thinking, Agents, Collaboration, Execution, Verification)
- Full Scan Mandate — Every task evaluates all 25 capabilities; format scales by effort level (one-line → compact → full matrix)
- No Silent Stalls — Critical execution principle: no chained infrastructure, no sleep, 5s timeouts, background for long ops
- Discrete Phase Enforcement — BUILD and EXECUTE are always separate phases, never merged
- Loop Mode Effort Decay — Late iterations auto-drop effort level as criteria converge (Extended→Standard→Fast)
- Agent Teams / Swarm — Multi-agent coordination with shared task lists and child PRD decomposition
- PRD Status Progression — DRAFT→CRITERIA_DEFINED→PLANNED→IN_PROGRESS→VERIFYING→COMPLETE/FAILED/BLOCKED
- Voice Phase Announcements — Effort-level-gated voice curls (none for Instant/Fast, entry+verify for Standard, all for Extended+)
### v0.3.4 (2026-02-03)
- CAPABILITY AUDIT block — Mandatory in OBSERVE phase, shows CONSIDERED vs SELECTED capabilities
- TIME SLA system — Instant/Fast/Standard/Deep determines agent budget
- Reverse Engineering expansion — Explicit/implied wants AND don't-wants, plus gotchas
- Agent Instructions — CRITICAL requirement for context, SLA, and output format when spawning agents
- Algorithm Concept section — Full 9-point philosophy explaining why ISC matters
- Voice Phase Announcements — Progress visibility during long operations
### v0.2.34 (2026-02-02)

- Builder-Validator Pair Pattern -- New pair composition pattern: every work unit gets a Builder agent and an independent Validator agent
- Agent Self-Validation -- Agents receive validation contracts (mechanical checks) and verify their own output before reporting completion
- ISC Dependency Graph -- ISC criteria declare dependencies via `addBlockedBy`/`addBlocks` for wave-based parallel execution
### v0.2.33 (2026-02-02)
- Continuous Recommendation -- Replaces Two-Pass Selection; CapabilityRecommender is re-invocable at any phase boundary with enriched context
- Dynamic Ecosystem Discovery -- Hook reads Agents/ directory and skill-index.json at runtime instead of hardcoded lists
- Holistic Capability Matrix -- Hook output is a coherent strategy (strategy, agents, skills, timing, pattern, sequence, quality, constraints)
### v0.2.32 (2026-02-02)
- Structured Evidence Requirements -- ISC verification requires evidence type, source, and content (no more "verified" without proof)
- Retry Loop -- DIAGNOSE -> CHANGE -> RE-EXECUTE loop (max 3 iterations) when VERIFY fails; change is mandatory
- Ownership Check -- VERIFY begins with approach reflection: what I did, alternatives
