# TensorLogic

**Logic-as-Tensor Planning Layer for Neural-Symbolic AI**

TensorLogic compiles logical rules (predicates, quantifiers, implications) into tensor equations (einsum graphs) with a minimal DSL + IR, enabling neural, symbolic, and probabilistic models within a unified tensor computation framework.
## ✨ Key Features
- 🧠 Logic-to-Tensor Compilation: Compile complex logical rules into optimized tensor operations
- ⚡ High Performance: SciRS2 backend with SIMD acceleration (2-4x speedup)
- 🐍 Python Bindings: Production-ready PyO3 bindings with NumPy integration
- 🔧 Multiple Backends: CPU, SIMD-accelerated CPU, GPU (future)
- 📊 Comprehensive Benchmarks: 24 benchmark groups across 5 suites
- 🧪 Extensively Tested: 4,415 tests with 100% pass rate
- 📚 Rich Documentation: Tutorials, examples, API docs
- 🔗 Ecosystem Integration: OxiRS (RDF*/SHACL), SkleaRS, QuantrS2, TrustformeRS, ToRSh
- 🤖 Neurosymbolic AI: Bidirectional tensor conversion with ToRSh (pure Rust PyTorch alternative)
## 🎉 Release Candidate

**Version:** 0.1.0-rc.1 | **Status:** Release Candidate
TensorLogic has reached release candidate status with comprehensive testing, benchmarking, and documentation:
**RC.1 Key Improvements:**

- ✅ SciRS2 ecosystem upgraded to 0.3.0 - Latest scientific computing backend
- ✅ SkleaRS upgraded to 0.1.0-rc.1 - Aligned release candidate versioning
- ✅ ToRSh upgraded to 0.1.0 (stable) - Production-ready neurosymbolic tensor interop
- ✅ rand 0.10 compatibility - Updated to the latest random number generation API
- ✅ 4,415/4,415 tests passing (100% pass rate) - Comprehensive coverage across all crates
- ✅ Zero compiler warnings - Clean build with latest dependencies
- ✅ Complete benchmark suite - 24 groups covering SIMD, memory, gradients, throughput
- ✅ Production packaging - Ready for PyPI with cross-platform wheels
- ✅ Comprehensive docs - README, CHANGELOG, packaging guide, tutorials
- ✅ All 8 development phases complete - From IR to Python bindings
Ready for real-world use in research, production systems, and educational contexts!
## 🚀 Quick Start

### Rust

```rust
use tensorlogic_compiler::compile_to_einsum;
use tensorlogic_ir::{TLExpr, Term};
use tensorlogic_scirs_backend::Scirs2Exec;
use tensorlogic_infer::TlAutodiff;

// Define a logical rule: knows(x, y) ∧ knows(y, z) → knows(x, z)
let x = Term::var("x");
let y = Term::var("y");
let z = Term::var("z");
let knows_xy = TLExpr::pred("knows", vec![x.clone(), y.clone()]);
let knows_yz = TLExpr::pred("knows", vec![y.clone(), z.clone()]);
let premise = TLExpr::and(knows_xy, knows_yz);

// Compile to tensor graph
let graph = compile_to_einsum(&premise)?;

// Execute with the SciRS2 backend
let mut executor = Scirs2Exec::new();
// Add tensor data...
let result = executor.forward(&graph)?;
```
### Python

```python
import pytensorlogic as tl
import numpy as np

# Create logical expressions
x, y = tl.var("x"), tl.var("y")
knows = tl.pred("knows", [x, y])
knows_someone = tl.exists("y", "Person", knows)

# Create compiler context and register domain (required for quantifiers)
ctx = tl.compiler_context()
ctx.add_domain("Person", 100)

# Compile to tensor graph with context
graph = tl.compile_with_context(knows_someone, ctx)

# Execute with data
knows_matrix = np.random.rand(100, 100)
result = tl.execute(graph, {"knows": knows_matrix})
print(f"Result shape: {result['output'].shape}")  # (100,)
```
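For intuition about where the `(100,)` shape comes from: the existential quantifier is compiled to a reduction over the quantified axis. The following plain-NumPy sketch shows the standard sum- and max-based semantics for `∃y. knows(x, y)` on a small example; it illustrates the idea only and is not pytensorlogic's internal code.

```python
import numpy as np

# knows[i, j] in [0, 1]: degree to which person i knows person j
knows = np.zeros((4, 4))
knows[0, 1] = 0.5
knows[0, 2] = 1.0
knows[1, 3] = 0.5

# ∃y. knows(x, y) with sum semantics: accumulate evidence over the y axis
exists_sum = knows.sum(axis=1)  # shape (4,)

# ∃y. knows(x, y) with max semantics: strongest single witness
exists_max = knows.max(axis=1)  # shape (4,)
```

With sum semantics person 0 scores 1.5 (two partial witnesses add up), while max semantics caps the score at the best single witness, 1.0; which behavior you want depends on whether the values are counts-like evidence or fuzzy truth degrees.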
## 📦 Installation

### Rust

Add to your Cargo.toml:

```toml
[dependencies]
tensorlogic-ir = "0.1"
tensorlogic-compiler = "0.1"
tensorlogic-scirs-backend = { version = "0.1", features = ["simd"] }
```

### Python

```shell
# From PyPI (when published)
pip install pytensorlogic

# From source
cd crates/tensorlogic-py
pip install maturin
maturin develop --release
```
For detailed installation instructions, see crates/tensorlogic-py/PACKAGING.md.
## 📖 Documentation

### Guides
- Project Guide: Complete project overview and development guide
- Python Packaging: Building and distributing Python wheels
- SciRS2 Integration Policy: Using SciRS2 as the tensor backend
- Security Policy: Reporting vulnerabilities
- Contributing: How to contribute
### Tutorials
- Getting Started: Beginner-friendly introduction (Jupyter)
- Advanced Topics: Multi-arity predicates, optimization (Jupyter)
### Examples

Rust Examples (in examples/):

- `00_minimal_rule` - Basic predicate and compilation
- `01_exists_reduce` - Existential quantifier with reduction
- `02_scirs2_execution` - Full execution with SciRS2 backend
- `03_rdf_integration` - OxiRS bridge with RDF* data
- `04_compilation_strategies` - Comparing 6 strategy presets
Python Examples (in crates/tensorlogic-py/python_examples/):
- 10+ examples covering all features
- Backend selection and capabilities
- Compilation strategies
- Integration patterns
## 🏗️ Architecture
TensorLogic follows a modular architecture with clear separation of concerns:
```text
┌─────────────────────────────────────────────────────────┐
│                    Python Bindings                      │
│               (tensorlogic-py via PyO3)                 │
├─────────────────────────────────────────────────────────┤
│                    Planning Layer                       │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐   │
│  │   IR & AST   │  │   Compiler   │  │   Adapters   │   │
│  │   (types)    │→ │  (logic→IR)  │→ │  (metadata)  │   │
│  └──────────────┘  └──────────────┘  └──────────────┘   │
├─────────────────────────────────────────────────────────┤
│                   Execution Layer                       │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐   │
│  │    Traits    │  │    SciRS2    │  │   Training   │   │
│  │ (interfaces) │  │  (CPU/SIMD)  │  │   (loops)    │   │
│  └──────────────┘  └──────────────┘  └──────────────┘   │
├─────────────────────────────────────────────────────────┤
│                  Integration Layer                      │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐   │
│  │    OxiRS     │  │   SkleaRS    │  │ TrustformeRS │   │
│  │  (RDF*/SHACL)│  │  (kernels)   │  │ (attention)  │   │
│  └──────────────┘  └──────────────┘  └──────────────┘   │
│  ┌──────────────┐  ┌──────────────┐                     │
│  │   QuantrS2   │  │    ToRSh     │                     │
│  │   (PGM/BP)   │  │ (PyTorch Alt)│                     │
│  └──────────────┘  └──────────────┘                     │
└─────────────────────────────────────────────────────────┘
```
## 📚 Workspace Structure

The project is organized as a Cargo workspace with 11 specialized crates:

### Planning Layer (Engine-Agnostic)
| Crate | Purpose | Status |
|-------|---------|--------|
| tensorlogic-ir | AST and IR types (Term, TLExpr, EinsumGraph) | ✅ Complete |
| tensorlogic-compiler | Logic → tensor mapping with static analysis | ✅ Complete |
| tensorlogic-infer | Execution/autodiff traits (TlExecutor, TlAutodiff) | ✅ Complete |
| tensorlogic-adapters | Symbol tables, axis metadata, domain masks | ✅ Complete |
### Execution Layer (SciRS2-Powered)

| Crate | Purpose | Status |
|-------|---------|--------|
| tensorlogic-scirs-backend | Runtime executor (CPU/SIMD/GPU via features) | ✅ Production Ready |
| tensorlogic-train | Training loops, loss wiring, schedules, callbacks | ✅ Complete |
### Integration Layer
| Crate | Purpose | Status |
|-------|---------|--------|
| tensorlogic-oxirs-bridge | RDF*/GraphQL/SHACL → TL rules; provenance binding | ✅ Complete |
| tensorlogic-sklears-kernels | Logic-derived similarity kernels for SkleaRS | ✅ Core Features |
| tensorlogic-quantrs-hooks | PGM/message-passing interop for QuantrS2 | ✅ Core Features |
| tensorlogic-trustformers | Transformer-as-rules (attention/FFN as einsum) | ✅ Complete |
| tensorlogic-py | PyO3 bindings with abi3-py39 support | ✅ Production Ready |
| torsh_interop | ToRSh tensor interoperability (neurosymbolic AI) | ✅ Complete (feature-gated) |
## 🔬 Logic-to-Tensor Mapping
TensorLogic uses these default mappings (configurable per use case):
| Logic Operation | Tensor Equivalent | Configurable Via |
|-----------------|-------------------|------------------|
| `AND(a, b)` | `a * b` (Hadamard product) | `CompilationStrategy` |
| `OR(a, b)` | `max(a, b)` or soft variant | `CompilationStrategy` |
| `NOT(a)` | `1 - a` | `CompilationStrategy` |
| `∃x. P(x)` | `sum(P, axis=x)` or `max` | Quantifier config |
| `∀x. P(x)` | `NOT(∃x. NOT(P(x)))` (dual) | Quantifier config |
| `a → b` | `max(1-a, b)` or `ReLU(b-a)` | `ImplicationStrategy` |
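The default mappings in the table are the standard soft-logic operations and can be reproduced directly in NumPy. This sketch is for illustration only (it is not the library's internal implementation); note how the universal quantifier falls out of the `NOT(∃x. NOT(P(x)))` duality as a `min` over the quantified axis.

```python
import numpy as np

a = np.array([0.9, 0.2])
b = np.array([0.8, 0.7])

and_ab = a * b                        # AND: Hadamard product
or_ab = np.maximum(a, b)              # OR: hard max variant
not_a = 1.0 - a                       # NOT
implies_ab = np.maximum(1.0 - a, b)   # a → b as max(1-a, b)

# ∀x. P(x) via the dual NOT(∃x. NOT(P(x))), using max as ∃
P = np.array([[1.0, 0.9],
              [1.0, 0.2]])
forall = 1.0 - np.max(1.0 - P, axis=1)  # equals min(P, axis=1)
```

Because the duality reduces to `min` over the axis, a row is "fully true" only when every entry is 1.0; any weak entry drags the whole row down to its minimum.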
### Compilation Strategies

Six preset strategies for different use cases:

- `soft_differentiable` - Neural network training (smooth gradients)
- `hard_boolean` - Discrete Boolean logic (exact semantics)
- `fuzzy_godel` - Gödel fuzzy logic (min/max operations)
- `fuzzy_product` - Product fuzzy logic (probabilistic)
- `fuzzy_lukasiewicz` - Łukasiewicz fuzzy logic (bounded)
- `probabilistic` - Probabilistic interpretation
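The three fuzzy presets differ mainly in the t-norm used for conjunction. The definitions below are the textbook t-norms the preset names refer to (Gödel, product, Łukasiewicz), written in plain NumPy for comparison; they are not the compiler's internal code.

```python
import numpy as np

a = np.array([0.7, 0.4])
b = np.array([0.6, 0.9])

godel_and = np.minimum(a, b)                  # Gödel t-norm: min(a, b)
product_and = a * b                           # product t-norm: a * b
lukasiewicz_and = np.maximum(0.0, a + b - 1)  # Łukasiewicz t-norm: max(0, a+b-1)
```

All three agree on crisp 0/1 inputs but diverge on intermediate truth values: Gödel is the least strict (keeps the weaker operand), product compounds uncertainty multiplicatively, and Łukasiewicz is the strictest, clamping weakly supported conjunctions to 0.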
## ⚡ Performance

### Benchmark Suite

TensorLogic includes comprehensive benchmarks across 5 suites (24 benchmark groups):

```shell
# Run all benchmarks
cargo bench -p tensorlogic-scirs-backend

# Individual suites
cargo bench --bench forward_pas
```
