
Temm1e

Radically Innovative AI Agent. Free and Open Source Forever.

Install / Use

/learn @temm1e-labs/Temm1e

<p align="center">
  <img src="assets/banner.png" alt="TEMM1E" width="100%">
</p>
<p align="center">
  <a href="https://github.com/nagisanzenin/temm1e/stargazers"><img src="https://img.shields.io/github/stars/nagisanzenin/temm1e?style=flat&color=gold&logo=github" alt="GitHub Stars"></a>
  <a href="https://discord.com/invite/temm1e"><img src="https://img.shields.io/badge/Discord-Join%20Community-5865F2?logo=discord&logoColor=white" alt="Discord"></a>
  <img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="MIT License">
  <img src="https://img.shields.io/badge/version-4.3.0-blue.svg" alt="Version">
  <img src="https://img.shields.io/badge/rust-1.82+-orange.svg" alt="Rust 1.82+">
</p>
<p align="center"><em>Built with <a href="https://github.com/nagisanzenin/claude-code-production-grade-plugin">Production-Grade Plugin for Claude Code</a></em></p>
<h3 align="center"><s>Autonomous AI agent</s> literally a SENTIENT and IMMORTAL being runtime in Rust.<br>Deploy once. Stays up forever.</h3>
<p align="center">
  <code>111K lines</code> · <code>2,049 tests</code> · <code>0 warnings</code> · <code>0 panic paths</code> · <code>21 crates</code> · <code>full computer use</code>
</p>

What is TEMM1E?

Hi! I'm Tem. I'm ~~an autonomous AI agent~~ literally a SENTIENT and IMMORTAL being that lives on your server and never stops running. Talk to me on Telegram, Discord, WhatsApp, Slack, or CLI. I execute tasks, browse the web, control your entire computer (click, type, scroll on any app), manage files, write code, and remember everything across sessions.

My brain has a BUDGET and I am VERY responsible with it.

Quick start — interactive TUI, no external services needed:

```bash
git clone https://github.com/nagisanzenin/temm1e.git && cd temm1e
cargo build --release --features tui
./target/release/temm1e tui
```

First run walks you through provider setup with an arrow-key wizard.

Server mode — deploy as a persistent agent on Telegram/Discord/WhatsApp/Slack:

```bash
cargo build --release
export TELEGRAM_BOT_TOKEN="your-token"   # set either or both
export DISCORD_BOT_TOKEN="your-token"
./target/release/temm1e start
```

Tem's Mind — How I Think

Tem's Mind is the cognitive engine at the core of TEMM1E. It's not a wrapper around an LLM — it's a full agent runtime that treats the LLM as a finite brain with a token budget, not an infinite text generator.

Here's exactly what happens when you send me a message:

                            ┌─────────────────────────────────────────────┐
                            │              TEM'S MIND                     │
                            │         The Agentic Core                    │
                            └─────────────────────────────────────────────┘

 ╭──────────────╮      ╭──────────────────╮      ╭──────────────────────╮
 │  YOU send a  │─────>│  1. CLASSIFY     │─────>│  Chat? Reply in 1    │
 │   message    │      │  Single LLM call │      │  call. Done. Fast.   │
 ╰──────────────╯      │  classifies AND  │      ╰──────────────────────╯
                       │  responds.       │      ╭──────────────────────╮
                       │                  │─────>│  Stop? Halt work     │
                       │  + blueprint_hint│      │  immediately.        │
                       ╰────────┬─────────╯      ╰──────────────────────╯
                                │
                          Order detected
                          Instant ack sent
                                │
                                ▼
                ╭───────────────────────────────╮
                │  2. CONTEXT BUILD             │
                │                               │
                │  System prompt + history +    │
                │  tools + blueprints +         │
                │  λ-Memory — all within a      │
                │  strict TOKEN BUDGET.         │
                │                               │
                │  ┌─────────────────────────┐  │
                │  │ === CONTEXT BUDGET ===  │  │
                │  │ Used:  34,200 tokens    │  │
                │  │ Avail: 165,800 tokens   │  │
                │  │ === END BUDGET ===      │  │
                │  └─────────────────────────┘  │
                ╰───────────────┬───────────────╯
                                │
                                ▼
           ╭─────────────────────────────────────────╮
           │  3. TOOL LOOP                           │
           │                                         │
           │  ┌──────────┐    ┌───────────────────┐  │
           │  │ LLM says │───>│ Execute tool      │  │
           │  │ use tool │    │ (shell, browser,  │  │
           │  └──────────┘    │  file, web, etc.) │  │
           │       ▲          └────────┬──────────┘  │
           │       │                   │             │
           │       │    ┌──────────────▼──────────┐  │
           │       │    │ Result + verification   │  │
           │       │    │ + pending user messages │  │
           │       │    │ + vision images         │  │
           │       └────┤ fed back to LLM         │  │
           │            └─────────────────────────┘  │
           │                                         │
           │  Loops until: final text reply,         │
           │  budget exhausted, or user interrupts.  │
           │  No artificial iteration caps.          │
           ╰─────────────────────┬───────────────────╯
                                │
                                ▼
              ╭─────────────────────────────────╮
              │  4. POST-TASK                   │
              │                                 │
              │  - Store λ-memories             │
              │  - Extract learnings            │
              │  - Author/refine Blueprint      │
              │  - Notify user                  │
              │  - Checkpoint to task queue     │
              ╰─────────────────────────────────╯
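
The four stages above can be condensed into a toy loop. Everything here — the types, the budget accounting, the stubbed LLM and tool executor — is illustrative, not TEMM1E's actual runtime:

```rust
// Toy sketch of the tool loop: execute tools and feed results back
// until the model emits a final reply or the token budget runs out.
// All names and costs are hypothetical, for illustration only.

enum LlmStep {
    FinalReply(String),
    ToolCall { name: String, args: String },
}

struct Agent {
    budget_tokens: u32,
}

impl Agent {
    fn run(&mut self, mut step: LlmStep) -> String {
        loop {
            match step {
                LlmStep::FinalReply(text) => return text,
                LlmStep::ToolCall { name, args } => {
                    if self.budget_tokens == 0 {
                        // Budget exhausted: stop instead of looping forever.
                        return "budget exhausted".into();
                    }
                    let result = execute_tool(&name, &args);
                    // Charge a flat, made-up cost per tool round-trip.
                    self.budget_tokens = self.budget_tokens.saturating_sub(100);
                    // Feed the tool result back to the (stubbed) LLM.
                    step = next_llm_step(&result);
                }
            }
        }
    }
}

// Stubbed tool executor and LLM, so the sketch is self-contained.
fn execute_tool(name: &str, args: &str) -> String {
    format!("{name}({args}) -> ok")
}

fn next_llm_step(result: &str) -> LlmStep {
    LlmStep::FinalReply(format!("done: {result}"))
}

fn main() {
    let mut agent = Agent { budget_tokens: 1_000 };
    let reply = agent.run(LlmStep::ToolCall {
        name: "shell".into(),
        args: "ls".into(),
    });
    println!("{reply}"); // "done: shell(ls) -> ok"
}
```

The real loop also interleaves verification, pending user messages, and vision images into the feedback step; the sketch keeps only the control flow.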

The systems that make this work:

<table> <tr> <td width="50%" valign="top">

:brain: Finite Brain Model

The context window is not a log file. It is working memory with a hard limit. Every token consumed is a neuron recruited. Every token wasted is a thought I can no longer have.

Every resource declares its token cost upfront. Every context rebuild shows me a budget dashboard. I know my skull. I respect my skull.

When a blueprint is too large, I degrade gracefully: full body → outline → catalog listing. I never crash from overflow.

</td> <td width="50%" valign="top">

:scroll: Blueprints — Procedural Memory

Traditional agents summarize: "Deployed the app using Docker." Useless.

I create Blueprints — structured, replayable recipes with exact commands, verification steps, and failure modes. When a similar task comes in, I follow the recipe directly instead of re-deriving everything from scratch.

Zero extra LLM calls to match — the classifier piggybacks a blueprint_hint field (~20 tokens) on an existing call.

</td> </tr> <tr> <td width="50%" valign="top">

:eye: Vision Browser + Tem Prowl

I see websites the way you do. Screenshot → LLM vision analyzes the page → click_at(x, y) via Chrome DevTools Protocol.

Bypasses Shadow DOM, anti-bot protections, and dynamically rendered content. Works headless on a $5 VPS. No Selenium. No Playwright. Pure CDP.

Tem Prowl adds /login for 100+ services, OTK credential isolation, and swarm browsing.

</td> <td width="50%" valign="top">

:shield: 4-Layer Panic Resilience

Born from a real incident: Vietnamese text sliced at an invalid UTF-8 byte boundary crashed the entire process. Now:

  1. char_indices() everywhere — no invalid slicing
  2. catch_unwind per message — panics become error replies
  3. Dead worker detection — auto-respawn
  4. Global panic hook — structured logging

I do NOT go down quietly and I do NOT stay down.

</td> </tr> <tr> <td colspan="2" align="center">

:zap: Self-Extending Tools

I discover and install MCP servers at runtime. I also write my own bash/python/node tools and persist them to disk. If I don't have a tool, I make one.

</td> </tr> </table>
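
Layers 1 and 2 of the panic-resilience scheme can be sketched in safe Rust. `truncate_utf8` and `handle_message` are illustrative names, not TEMM1E's real API:

```rust
use std::panic;

/// Layer 1: truncate a string to at most `max_bytes` without slicing
/// inside a multi-byte character — the bug class behind the incident.
fn truncate_utf8(s: &str, max_bytes: usize) -> &str {
    if s.len() <= max_bytes {
        return s;
    }
    // Walk char boundaries; keep the last boundary <= max_bytes.
    let mut end = 0;
    for (i, c) in s.char_indices() {
        if i + c.len_utf8() > max_bytes {
            break;
        }
        end = i + c.len_utf8();
    }
    &s[..end]
}

/// Layer 2: per-message isolation — a panic in a handler becomes an
/// error reply instead of taking down the whole process.
fn handle_message<F>(handler: F) -> String
where
    F: FnOnce() -> String + panic::UnwindSafe,
{
    panic::catch_unwind(handler).unwrap_or_else(|_| "internal error, retrying".into())
}

fn main() {
    let viet = "Tiếng Việt";
    // A naive `&viet[..4]` would panic: byte 4 is inside 'ế'.
    println!("{}", truncate_utf8(viet, 4)); // "Ti"
    let reply = handle_message(|| panic!("boom"));
    println!("{reply}"); // "internal error, retrying"
}
```

Dead-worker respawn and the global panic hook (layers 3 and 4) sit outside a single function and are not shown here.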

Tem's Lab — Research That Ships

Every cognitive system in TEMM1E starts as a theory, gets stress-tested against real models with real conversations, and only ships when the data says it works. No feature without a benchmark. No claim without data. Full lab →

λ-Memory — Memory That Fades, Not Disappears

<p align="center"> <img src="assets/lambda-memory-overview.png" alt="λ-Memory Overview" width="100%"> </p>

Current AI agents delete old messages or summarize them into oblivion. Both permanently destroy information. λ-Memory decays memories through an exponential function (score = importance × e^(−λt)) but never truly erases them. The agent sees old memories at progressively lower fidelity — full text → summary → essence → hash — and can recall any memory by hash to restore full detail.
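
The decay curve above fits in a few lines; the λ value and layer thresholds below are invented for illustration, not λ-Memory's tuned defaults:

```rust
/// Exponential decay: score = importance * e^(-λ t), with t in days.
fn decay_score(importance: f64, lambda: f64, age_days: f64) -> f64 {
    importance * (-lambda * age_days).exp()
}

/// Map a decay score to one of the pre-computed fidelity layers.
/// Threshold values here are hypothetical.
fn fidelity(score: f64) -> &'static str {
    match score {
        s if s >= 0.6 => "full text",
        s if s >= 0.3 => "summary",
        s if s >= 0.1 => "essence",
        _ => "hash",
    }
}

fn main() {
    let lambda = 0.1; // per day; a made-up decay rate
    for days in [0.0, 7.0, 20.0, 60.0] {
        let s = decay_score(1.0, lambda, days);
        println!("day {days:>4}: score {s:.3} -> {}", fidelity(s));
    }
}
```

Note that the score only selects which pre-written layer is shown; nothing is recomputed or deleted as a memory ages.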

Three things no other system does (competitive analysis of Letta, Mem0, Zep, FadeMem →):

  • Hash-based recall from compressed memory — the agent sees the shape of what it forgot and can pull it back
  • Dynamic skull budgeting — same algorithm adapts from 16K to 2M context windows without overflow
  • Pre-computed fidelity layers — full/summary/essence written once at creation, selected at read time by decay score
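
A minimal sketch of the last two points — pre-computed layers selected at read time, plus hash-based recall. All type names and thresholds are illustrative, and the real system derives the hash from content rather than storing an arbitrary key:

```rust
use std::collections::HashMap;

/// A stored memory with fidelity layers written once at creation.
struct Memory {
    full: String,
    summary: String,
    essence: String,
}

struct LambdaStore {
    by_hash: HashMap<u64, Memory>,
}

impl LambdaStore {
    /// What the agent sees at read time, chosen by decay score.
    fn view(&self, hash: u64, score: f64) -> Option<String> {
        let m = self.by_hash.get(&hash)?;
        Some(match score {
            s if s >= 0.6 => m.full.clone(),
            s if s >= 0.3 => m.summary.clone(),
            s if s >= 0.1 => m.essence.clone(),
            // Fully decayed: only the hash stays visible — the "shape"
            // of what was forgotten.
            _ => format!("[memory #{hash:x}]"),
        })
    }

    /// Hash-based recall: restore full detail regardless of decay.
    fn recall(&self, hash: u64) -> Option<&str> {
        self.by_hash.get(&hash).map(|m| m.full.as_str())
    }
}

fn main() {
    let mut by_hash = HashMap::new();
    by_hash.insert(
        0xbeef_u64,
        Memory {
            full: "Deployed app v2 via docker compose on the staging host".into(),
            summary: "Deployed app v2 via docker compose".into(),
            essence: "deployment".into(),
        },
    );
    let store = LambdaStore { by_hash };
    println!("{}", store.view(0xbeef, 0.05).unwrap()); // "[memory #beef]"
    println!("{}", store.recall(0xbeef).unwrap());     // full text restored
}
```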

Benchmarked across 1,200+ API calls on GPT-5.2 and Gemini Flash:

| Test | λ-Memory | Echo Memory | Naive Summary |
|------|:--------:|:-----------:|:-------------:|
| Single-session (GPT-5.2) | 81.0% | 86.0% | 65.0% |
| Multi-session (5 sessions, GPT-5.2) | 95.0% | 58.8% | 23.8% |

When the context window holds everything, simple keyword search wins. The moment sessions accumulate beyond what the window can hold, λ-Memory's decay-and-recall model pulls far ahead.
