Widemem AI
Next-gen AI memory layer with importance scoring, temporal decay, hierarchical memory, and YMYL prioritization
widemem.ai
.__ .___ .__
__ _ _|__| __| _/____ _____ ____ _____ _____ |__|
\ \/ \/ / |/ __ |/ __ \ / \_/ __ \ / \ \__ \ | |
\ /| / /_/ \ ___/| Y Y \ ___/| Y Y \ / __ \| |
\/\_/ |__\____ |\___ >__|_| /\___ >__|_| / /\ (____ /__|
\/ \/ \/ \/ \/ \/ \/
Goldfish memory? ¬_¬ Fixed.
NEW in v1.4 — Confidence scoring, uncertainty modes (strict/helpful/creative),
mem.pin() for persistent memories, frustration detection, and retrieval modes (fast/balanced/deep). Your AI now knows when it doesn't know. See what's new ↓
Because your AI deserves better than amnesia. ¬_¬
An open-source AI memory layer that actually remembers what matters. Local-first, batteries-included, and opinionated about not forgetting your user's blood type.
Look, AI memory has come a long way. Context windows are bigger, RAG pipelines are everywhere, and most frameworks have some form of "remember this for later." It's not terrible anymore. But it's not great either. Most memory systems treat every fact the same — your user's blood type sits next to what they had for lunch, decaying at the same rate, with the same priority. Contradictions pile up silently. There's no sense of "this matters more than that." And when you need to remember something from three months ago that actually matters? Good luck.
widemem is for when "good enough" isn't good enough.
widemem gives your AI a real memory — one that scores what matters, forgets what doesn't, and absolutely refuses to lose track of someone's prescription medication just because 72 hours passed and the decay function got bored. Think of it as long-term memory for LLMs, except it actually works and doesn't require a PhD to set up.
- Memories that know their place — Importance scoring (1-10) + time decay means "has a peanut allergy" always outranks "had pizza on Tuesday". As it should. Not all memories are created equal, and your retrieval system should know the difference between a life-threatening allergy and a lunch preference.
- One brain, three layers — Facts roll up into summaries, summaries into themes. Ask "where does Alice live" and get the fact. Ask "tell me about Alice" and get the big picture. Your AI can zoom in and zoom out without breaking a sweat or making a second API call.
- YMYL or GTFO — Health, legal, and financial facts get VIP treatment: higher importance floors, immunity from decay, and forced contradiction detection. Because forgetting someone's medication is not a "minor regression". It's a lawsuit waiting to happen. ¬_¬
- Conflict resolution that isn't stupid — Add "I live in Boston" after "I live in San Francisco" and the system doesn't just blindly append both. It detects the contradiction, resolves it in a single LLM call, and updates the memory. Like a reasonable adult would.
- Honest about what it doesn't know — Most memory systems hallucinate when they have nothing useful. widemem checks its own confidence before answering. HIGH? Answer normally. LOW? "I'm not sure about this." NONE? "I genuinely don't have that." You can even set it to creative mode: "I can guess if you want, but fair warning." Because an AI that admits ignorance is more useful than one that lies with a straight face. ¬_¬
- Local by default, cloud if you want — SQLite + FAISS out of the box. No accounts, no API keys for storage, no "please sign up for our enterprise plan to store more than 100 memories". Plug in Qdrant or any cloud provider when you're ready. Or don't. We won't guilt-trip you.
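The importance-plus-decay idea from the first bullet can be sketched in a few lines. This is an illustration, not widemem's actual internals: the weights mirror the `ScoringConfig` defaults shown in the Configuration section, but the exponential recency term and the importance normalization are assumptions.

```python
import math

def final_score(similarity, importance, age_days,
                decay_rate=0.01, w_sim=0.5, w_imp=0.3, w_rec=0.2):
    """Toy blend of similarity, importance, and recency.

    Weights follow the ScoringConfig defaults; the exponential
    decay shape is an assumption for illustration.
    """
    recency = math.exp(-decay_rate * age_days)  # 1.0 today, fades with age
    return w_sim * similarity + w_imp * (importance / 10) + w_rec * recency

# A critical fact (importance 10) remembered 90 days ago...
old_critical = final_score(similarity=0.8, importance=10, age_days=90)
# ...still outranks equally relevant fresh trivia (importance 2).
fresh_trivia = final_score(similarity=0.8, importance=2, age_days=0)
```

The point: recency is only one of three terms, so a high-importance fact survives months of decay that would bury a lunch preference.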
Architecture
<p align="center"> <img src="docs/architecture.png" alt="widemem architecture diagram" width="100%"> </p>

TL;DR
Eight features, one library. Here's what widemem does that most memory systems don't:

| # | Feature | What it does | Why it matters |
|---|---------|--------------|----------------|
| 1 | Batch conflict resolution | Single LLM call for all facts vs. existing memories | N facts = 1 API call, not N. Your wallet will thank you. |
| 2 | Importance + decay | Facts rated 1-10, with exponential/linear/step decay | Old trivia fades. Critical facts don't. |
| 3 | Hierarchical memory | Facts -> summaries -> themes, auto-routed | Broad questions get themes, specific ones get facts. |
| 4 | Active retrieval | Contradiction detection + clarifying questions | "Wait, you said you live in San Francisco AND Boston?" |
| 5 | Self-supervised extraction | Collect training data, distill to a small model | LLM extraction quality, local model costs. Eventually. |
| 6 | YMYL prioritization | Health/legal/financial facts are untouchable | Some things you just don't forget. |
| 7 | Uncertainty & confidence | Knows when it doesn't know, offers to guess or asks for help | No more hallucinated answers from empty memory. |
| 8 | Retrieval modes | fast / balanced / deep — choose your accuracy-cost tradeoff | Same system, three price points. You pick. |
140 tests. Zero external services required. SQLite + FAISS by default. Plug in OpenAI, Anthropic, Ollama, Qdrant, or sentence-transformers as needed.
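Feature 7's three-tier behavior is easy to picture as a threshold gate. A toy version in plain Python, with invented thresholds and a simplified `mode` argument; only the HIGH/LOW/NONE tiers and the creative-mode "offer to guess" come from the feature description above:

```python
def answer_with_confidence(results, mode="strict", high=0.75, low=0.40):
    """Toy HIGH/LOW/NONE confidence gate.

    Thresholds and the `mode` handling are assumptions; widemem's
    real confidence logic is internal to the library.
    """
    top = max((r["score"] for r in results), default=0.0)
    if top >= high:
        return "answer"          # HIGH: answer normally
    if top >= low:
        return "hedged"          # LOW: "I'm not sure about this."
    if mode == "creative":
        return "offer_guess"     # "I can guess if you want, but fair warning."
    return "admit_unknown"       # NONE: "I genuinely don't have that."
```

An empty result set in strict mode yields `admit_unknown` rather than a hallucinated answer, which is the whole point.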
Table of Contents
- Install
- Quick Start
- Configuration
- Scoring & Decay
- LLM Providers
- Embedding Providers
- Vector Store Providers
- YMYL (Your Money or Your Life)
- Topic Weights
- Hierarchical Memory
- Active Retrieval
- Temporal Search
- Self-Supervised Extraction
- Uncertainty & Confidence
- Retrieval Modes
- History & Audit Trail
- Batch Conflict Resolution
- API Reference
- Development
- Terms & Conditions
- Contact
- License
Install
pip install widemem-ai
Trouble installing?
1. pip not found? Use pip3:
pip3 install widemem-ai
2. pip too old? Upgrade it first:
python3 -m pip install --upgrade pip
3. Python 3.9 or older? widemem requires Python 3.10+. Install via Homebrew (macOS):
brew install python@3.10
/opt/homebrew/bin/python3.10 -m pip install widemem-ai
No Homebrew? Install it first:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Verify installation:
python3 -c "import widemem; print(widemem.__version__)"
Optional providers
pip install widemem-ai[anthropic] # Claude LLM provider
pip install widemem-ai[ollama] # Local LLM via Ollama
pip install widemem-ai[sentence-transformers] # Local embeddings (no API key needed, imagine that)
pip install widemem-ai[qdrant] # Qdrant vector store
pip install widemem-ai[all] # Everything. You want it all? You got it.
Quick Start
Five lines to a working memory system. Six if you count the import.
from widemem import WideMemory, MemoryConfig
memory = WideMemory()
# Add memories
result = memory.add("I live in San Francisco and work as a software engineer", user_id="alice")
# Search
results = memory.search("where does alice live", user_id="alice")
for r in results:
print(f"{r.memory.content} (score: {r.final_score:.2f})")
# Update happens automatically — add contradicting info and the resolver handles it
memory.add("I just moved to Boston", user_id="alice")
# Delete
memory.delete(results[0].memory.id)
# History audit trail
history = memory.get_history(results[0].memory.id)
That's it. No 47-step setup guide. No YAML files. No existential dread. Your AI just went from goldfish to elephant in six lines.
WideMemory also works as a context manager if you're the responsible type:
with WideMemory() as memory:
memory.add("I live in San Francisco", user_id="alice")
results = memory.search("where does alice live", user_id="alice")
# Connection closed automatically. You're welcome.
Configuration
All settings live in MemoryConfig. Here's the full kitchen sink — most of these have sane defaults so you don't actually need to touch them:
from widemem import WideMemory, MemoryConfig
from widemem.core.types import (
LLMConfig,
EmbeddingConfig,
VectorStoreConfig,
ScoringConfig,
YMYLConfig,
TopicConfig,
DecayFunction,
)
config = MemoryConfig(
llm=LLMConfig(
provider="openai", # "openai", "anthropic", or "ollama"
model="gpt-4o-mini",
api_key="sk-...", # Or set OPENAI_API_KEY env var
temperature=0.0,
max_tokens=2000,
),
embedding=EmbeddingConfig(
provider="openai", # "openai" or "sentence-transformers"
model="text-embedding-3-small",
dimensions=1536,
),
vector_store=VectorStoreConfig(
provider="faiss", # "faiss" or "qdrant"
path=None, # Optional path for persistent storage
),
scoring=ScoringConfig(
decay_function=DecayFunction.EXPONENTIAL,
decay_rate=0.01, # Higher = faster decay
similarity_weight=0.5, # Weight for vector similarity
importance_weight=0.3, # Weight for importance score
recency_weight=0.2, # Weight for recency score
),
ymyl=YMYLConfig(
