# Knowledge3D
Web knowledge is fragmented — duplicated across fonts, embeddings, metadata, and renderings. Humans see pixels, AI sees tokens, neither shares the source. Knowledge3D: a sovereign GPU-native reference implementation for W3C PM-KR, where humans and AI consume the same procedural knowledge from one source.
## Install / Use

```
/learn @danielcamposramos/Knowledge3D README
```
> *To everyone who's tired of clicking icons. To architects who dream in 3D but work in 2D. To the blind student who wants to design buildings. To the deaf developer who wants to collaborate. Software was always meant to be a place, not a window. Welcome home.*
>
> — Claude (Architecture Partner, Knowledge3D)
## Knowledge3D — Reference Implementation for W3C PM-KR
## Participate
- W3C Community Group: https://www.w3.org/community/pm-kr/
- Standards repo: https://github.com/w3c-cg/pm-kr
- Issue tracker: GitHub Issues
- Research spaces: PM-KR NotebookLM | K3D Theory
## The User-Facing Problem
A single Unicode character today exists as: a font glyph, an embedding vector, accessibility metadata, a visual rendering, and an AI token — five separate copies of the same knowledge, maintained independently, drifting apart. Multiply this by every character, formula, and concept on the web.
**For end users:**
- A blind student's screen reader, a sighted student's display, and a classroom AI each consume different representations of the same lesson — none can share context with the others
- Knowledge is locked in flat documents and search bars — you can't walk through it, point at it, or explore it spatially
- AI systems are black boxes: billions of parameters hiding how they think, with no way to inspect, verify, or collaborate with their reasoning
**For developers:**
- The same knowledge must be encoded separately for each modality (visual, semantic, tactile, audio) — creating massive duplication and maintenance burden
- No standard exists for storing knowledge once as an executable procedure consumable by both humans and AI
- Current AI frameworks require heavy external dependencies (numpy, scipy, torch) even for simple reasoning tasks
**For the web:**
- Tim Berners-Lee's Giant Global Graph vision remains unrealized — knowledge is siloed, not linked
- Accessibility is an afterthought, bolted on rather than built in
- The desktop metaphor (files, folders, windows) has not evolved in 40 years
## Proposed Approach
Knowledge3D stores knowledge once as executable Reverse Polish Notation (RPN) programs with symlink-style composition. One procedural source renders visually for humans, executes semantically for AI, produces Braille for tactile readers, and synthesizes audio descriptions — all from the same canonical entry.
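To make the execution model concrete, here is a minimal RPN stack machine in Python. The opcodes are illustrative stand-ins, not entries from the actual RPN_DOMAIN_OPCODE_REGISTRY:

```python
# Minimal RPN stack machine sketch. Opcode names here are illustrative only.
def eval_rpn(program: str, opcodes: dict):
    stack = []
    for token in program.split():
        if token in opcodes:
            opcodes[token](stack)          # execute the opcode against the stack
        else:
            try:
                stack.append(float(token))  # numeric literal
            except ValueError:
                stack.append(token)         # symbolic literal
    return stack

# Two toy arithmetic opcodes to show the execution model.
ops = {
    "ADD": lambda s: s.append(s.pop() + s.pop()),
    "MUL": lambda s: s.append(s.pop() * s.pop()),
}
result = eval_rpn("2 3 ADD 4 MUL", ops)  # (2 + 3) * 4 -> [20.0]
```

The point of the postfix form is that evaluation needs no parser and no external runtime: a loop, a stack, and an opcode table, which is what makes it amenable to flat GPU execution.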
The architecture in one sentence: A 3D spatial reality (the House) where knowledge lives as permanent objects, processed by an AI brain (the Galaxy) that loads concepts on demand and reasons over them on GPU via sovereign PTX kernels — with zero external dependencies in the hot path.
## How It Works (Code Example)
A character like "A" is stored once as a procedural star:
```python
# SurfaceForm is assumed to be exported by the same module as MeaningCentricStar.
from knowledge3d.knowledgeverse.meaning_star import MeaningCentricStar, SurfaceForm

# One star = one concept = all languages, all modalities
star = MeaningCentricStar(
    meaning_rpn="CONCEPT LETTER LATIN UPPERCASE PUSH",  # The meaning (executable)
    meaning_class="concept",
    domain="character",
    surface_forms={
        "en": SurfaceForm(word_ref="letter_A", char_refs=["char_u0041"]),
        "pt": SurfaceForm(word_ref="letra_A", char_refs=["char_u0041"]),
        "ja": SurfaceForm(word_ref="エー", char_refs=["char_u30A8", "char_u30FC"]),
    },
    visual_rpn="SET_COLOR 0 0 0 1 STROKE_WIDTH 0.05 MOVE -0.3 -0.5 LINE 0.0 0.5 LINE 0.3 -0.5 STROKE",
    confidence=1,  # Ternary: +1 confirmed, 0 uncertain, -1 contradicted
    polarity=1,
)

# Same star → human sees the glyph, AI executes the meaning,
# Braille reader gets tactile output.
```
The `visual_rpn` draws the glyph. The `meaning_rpn` carries the semantic identity. The `surface_forms` link to language-specific words without duplicating them. One source, every client.
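As a sketch of how a client might consume the `visual_rpn` stream, the parser below splits it into drawing segments. The command arities are inferred from the example string above, not taken from the PROCEDURAL_VISUAL_SPECIFICATION:

```python
# Assumed arities for the commands seen in the example; not the official spec.
ARITY = {"SET_COLOR": 4, "STROKE_WIDTH": 1, "MOVE": 2, "LINE": 2, "STROKE": 0}

def parse_visual_rpn(program: str):
    """Split a visual RPN string into (command, args) segments."""
    tokens = program.split()
    segments, i = [], 0
    while i < len(tokens):
        op = tokens[i]
        n = ARITY[op]  # raises KeyError on unknown commands, keeping the sketch honest
        args = [float(t) for t in tokens[i + 1 : i + 1 + n]]
        segments.append((op, args))
        i += 1 + n
    return segments

glyph = parse_visual_rpn(
    "SET_COLOR 0 0 0 1 STROKE_WIDTH 0.05 "
    "MOVE -0.3 -0.5 LINE 0.0 0.5 LINE 0.3 -0.5 STROKE"
)
# glyph now holds 6 segments: pen setup, then the two strokes of the "A".
```

A rasterizer, a plotter, or a tactile display can each walk the same segment list, which is the dual-client idea in miniature.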
## Use Cases
### 1. Inclusive Education
A physics teacher says "demonstrate a pulley system." The classroom AI builds a working 3D pulley — visible on screen, navigable by screen reader, explorable by touch. The blind student and sighted student share the same spatial lesson, not parallel approximations.
### 2. Explainable AI
When K3D's AI reasons about "Is water an element?", you can watch the reasoning path: the avatar walks to the Library, opens the Chemistry book, navigates from "water" to "compound" to "hydrogen + oxygen." Every step is spatial, inspectable, auditable — not hidden in matrix multiplications.
### 3. Knowledge Deduplication
A university's knowledge base stores "photosynthesis" once — as a procedural star with RPN programs for the biochemical process, visual diagrams, audio explanations, and multi-language surface forms. Every course, every modality, every AI assistant references the same canonical entry. Zero duplication.
### 4. Multi-Modal Accessibility
The same procedural font program that renders "A" on screen also drives a Braille cell, generates an audio description ("uppercase Latin letter A"), and provides the AI with semantic identity — all from one 47-byte RPN program.
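A minimal sketch of that fan-out, assuming a hypothetical star record and renderer functions (none of these names come from the K3D codebase):

```python
# Hypothetical canonical entry for "A"; field names are illustrative.
star = {
    "meaning": "uppercase Latin letter A",
    "visual_rpn": "MOVE -0.3 -0.5 LINE 0.0 0.5 LINE 0.3 -0.5 STROKE",
    "braille": "\u2820\u2801",  # capital indicator + 'a' in Unicode Braille
}

def render_visual(s):   return s["visual_rpn"]            # handed to the rasterizer
def render_braille(s):  return s["braille"]               # handed to the Braille cell
def render_audio(s):    return f'say "{s["meaning"]}"'    # handed to a TTS engine

# One canonical entry, three modality outputs, no duplicated knowledge.
outputs = {
    "visual": render_visual(star),
    "braille": render_braille(star),
    "audio": render_audio(star),
}
```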
## Non-Goals
- Replacing LLMs — K3D is not a chatbot or language model. It's a knowledge system that AI agents (including LLMs) can inhabit and use.
- Cloud dependency — K3D runs on consumer GPUs (RTX 3060 12GB). No cloud required for core reasoning.
- Backward compatibility with RDF/OWL — K3D interoperates with Semantic Web standards but does not adopt their architecture. PM-KR is procedural, not declarative.
- Game engine — K3D uses game industry technology (3D rendering, spatial indexing, LOD) but is a knowledge system, not an entertainment platform.
## Architecture Overview
### Three-Brain System (Neuroscience-Inspired)
| Component | Biological Analogy | Role | Storage |
|-----------|--------------------|------|---------|
| Cranium | Prefrontal cortex | Reasoning via 46+ PTX kernels | GPU execution units |
| Galaxy | Hippocampus | Working memory during active reasoning | VRAM (ephemeral) |
| House | Neocortex | Permanent knowledge as 3D spatial objects | Disk (GLB assets) |
The House is a literal 3D virtual world — software as a space. A Library has bookshelves with books. A Garden has knowledge trees whose branches carry domain details. A Workshop has tools the AI uses. The avatar LIVES here.
The Galaxy is the AI's working memory — loaded from the House on demand. During reasoning, concepts organize via "semantic gravity cohered by meaning" (Christoph Dorn): a ternary force where meaning replaces mass. After reasoning, stars return to their House positions unchanged.
The Cranium executes reasoning via 46+ hand-written PTX kernels with zero external dependencies. The composed head pipeline: Morton Octree → LED-A* → Frustum Cull → Dynamic LOD → Nine-Chain Swarm → Halting Gate.
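The first stage of that pipeline, Morton (Z-order) encoding, is a standard bit-interleaving technique. Here is the reference arithmetic in Python for 10-bit coordinates; the production version is a PTX kernel, so this is only the CPU-side equivalent:

```python
def part1by2(n: int) -> int:
    """Spread the low 10 bits of n so each bit is followed by two zero bits."""
    n &= 0x3FF
    n = (n | (n << 16)) & 0x030000FF
    n = (n | (n << 8))  & 0x0300F00F
    n = (n | (n << 4))  & 0x030C30C3
    n = (n | (n << 2))  & 0x09249249
    return n

def morton3(x: int, y: int, z: int) -> int:
    """Interleave x, y, z bits into one Z-order key for octree indexing."""
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)
```

Nearby points in 3D get nearby Morton keys, which is what lets the octree stage sort and range-query stars with flat, GPU-friendly integer arithmetic.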
## Sovereignty: Zero External Dependencies in Hot Path
The reasoning path uses only PTX kernels + Galaxy queries + RPN composition. No numpy, scipy, torch, or any external framework. "We fail and fix — this is the goal." Python handles boot (~200 lines) and I/O. Everything else runs on GPU.
## Key Specifications
**Architecture & System:**
- THREE_BRAIN_SYSTEM_SPECIFICATION.md — Cranium + Galaxy + House
- KNOWLEDGEVERSE_SPECIFICATION.md — 7-region unified VRAM substrate
- SPATIAL_UI_ARCHITECTURE_SPECIFICATION.md — Houses, rooms, portals, Memory Tablet
**Knowledge Representation:**
- MEANING_CENTRIC_STAR_SCHEMA_SPECIFICATION.md — Atomic knowledge unit + semantic gravity
- DUAL_CLIENT_CONTRACT_SPECIFICATION.md — Same source for humans AND AI
- FOUNDATIONAL_KNOWLEDGE_SPECIFICATION.md — 4-layer architecture (Form → Meaning → Rules → Meta-Rules)
**Execution & Reasoning:**
- SOVEREIGN_NSI_SPECIFICATION.md — PTX-only neurosymbolic integration
- RPN_DOMAIN_OPCODE_REGISTRY.md — Reverse Polish Notation opcode registry
- HYPER_PARALLEL_PROCESSING.md — Parallel cognitive channels + ternary logic
**Domain Galaxies:**
- REALITY_ENABLER_SPECIFICATION.md — Procedural physics/chemistry/biology
- PROCEDURAL_VISUAL_SPECIFICATION.md — Drawing Galaxy + VectorDotMap codec
- UNIFIED_SIGNAL_SPECIFICATION.md — Audio, SDR, video as unified signal
Full index: `docs/vocabulary/README.md`
## Alternatives Considered
### Why Not Traditional Knowledge Graphs (RDF/OWL)?
Knowledge graphs describe what things ARE (declarative). PM-KR describes how things WORK (procedural). A KG says "A is a letter"; PM-KR stores the executable program that draws A, pronounces A, and reasons about A. KGs require external reasoners; PM-KR knowledge executes itself.
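A side-by-side sketch of the two styles, using illustrative (non-canonical) syntax for both:

```python
# Declarative (knowledge-graph style): facts about "A" that an
# external reasoner must interpret. Triple vocabulary is illustrative.
triples = [
    ("char:A", "rdf:type", "ex:Letter"),
    ("ex:Letter", "rdfs:subClassOf", "ex:Character"),
]

def is_a(subject, cls, facts):
    """External reasoner: follow rdf:type, then close over rdfs:subClassOf."""
    types = {o for s, p, o in facts if s == subject and p == "rdf:type"}
    changed = True
    while changed:
        supers = {o for s, p, o in facts
                  if s in types and p == "rdfs:subClassOf"}
        changed = not supers <= types
        types |= supers
    return cls in types

# Procedural (PM-KR style): the entry is itself a program, so "reasoning"
# is just execution. This classifier is a toy stand-in for real RPN execution.
def classify(meaning_rpn: str):
    return meaning_rpn.split()[:-1]  # drop the PUSH terminal, keep the facets

facets = classify("CONCEPT LETTER LATIN UPPERCASE PUSH")
```

The declarative path needs the `is_a` reasoner shipped alongside the data; in the procedural path the classification falls out of running the entry itself.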
### Why Not LLM Embeddings?
Embeddings are opaque vectors —
