Install / Use

/learn @OpenAgentsInc/Psionic
Psionic

Psionic is a Rust-native ML and inference stack.

It owns the machine-facing execution substrate behind local inference, serving, training, distributed execution, artifact truth, and clustered compute. The project is broader than one app or one benchmark lane. It is the crate family that OpenAgents uses for inference, training, cluster bring-up, and execution evidence.

Psionic should be read hardware-first. It owns the admitted hardware strategy for each lane: backend family, residency mode, topology, serving or training role, and the capability, refusal, and evidence surfaces that higher layers consume. Upstream systems such as llama.cpp, vLLM, SGLang, MLX, and other reference repos are inputs for specific layers or hardware classes, not the identity of the shipped Psionic stack.

The training side also carries one bounded gemma4:e4b CUDA adapter-SFT trainer above the shared adapter substrate:

  • LM-head-only final-hidden-state supervision with frozen-base semantics
  • typed export and exact checkpoint resume
  • served-base plus tokenizer compatibility checks
  • explicit refusal truth for wider Gemma regions that remain out of scope

The same bounded lane closes the first trainer-to-serving refresh seam: typed Gemma checkpoints plus exported adapter artifacts can be revalidated into the live CUDA mesh lane without a process restart, the active served revision is surfaced in response provenance, stale or mismatched revisions fail closed, and operators can roll back to the last known-good promoted revision.

The lane is also eval-first. It binds:

  • one canonical held-out eval pack
  • one four-split dataset contract
  • one short baseline sweep against the untuned base
  • one overlap and decontamination gate
  • one canned promoted-checkpoint vibe-review packet
  • one promotion decision that refuses held-out regressions or failed operator review
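The shape of that eval-first promotion decision can be sketched as follows. This is an illustrative sketch only, not the actual Psionic trainer API: the function, type, and field names are hypothetical, and the real gate binds the full eval pack and review packet described above.

```python
# Illustrative sketch of an eval-first promotion gate (hypothetical names,
# not the Psionic trainer API).
from dataclasses import dataclass


@dataclass
class EvalReport:
    held_out_score: float   # score on the canonical held-out eval pack
    decontam_passed: bool   # overlap / decontamination gate result


def promotion_decision(baseline: EvalReport,
                       candidate: EvalReport,
                       operator_review_passed: bool) -> str:
    """Refuse promotion on a failed decontamination gate, a held-out
    regression against the untuned base, or a failed operator review;
    otherwise promote the candidate checkpoint."""
    if not candidate.decontam_passed:
        return "refused: decontamination gate failed"
    if candidate.held_out_score < baseline.held_out_score:
        return "refused: held-out regression vs untuned base"
    if not operator_review_passed:
        return "refused: operator review failed"
    return "promoted"
```

A refused decision here corresponds to the lane failing closed: the previously promoted revision keeps serving until a candidate passes every gate.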

Start Here

Main Tracks

Psion Training Shortcut

If you want the current top Psion training lane instead of guessing among benchmark-adjacent lanes, run:

./TRAIN

That command now targets the actual Psion pretraining lane and materializes the retained launch, status, preflight, checkpoint, dashboard, alert, and closeout surfaces under ~/scratch/psion_actual_pretraining_runs/<run_id>.

Use:

./TRAIN --dry-run
./TRAIN resume --run-root <path>
./TRAIN status --run-root <path>

for plan inspection and operator follow-up on the actual lane.

The older bounded reference pilot still exists as the smoke/reference lane:

./TRAIN --lane reference_pilot --dry-run
./TRAIN --lane reference_pilot --mode local_reference

Tassadar Training Shortcut

If you want the current default Tassadar training lane instead of guessing among older bounded benchmark lanes, run:

./TRAIN_TASSADAR

That command now runs the bounded trace-bound article-transformer weight-production lane, which produces the retained tassadar-article-transformer-trace-bound-trained-v0 family under fixtures/tassadar/runs/tassadar_article_transformer_weight_production_v1.

The lane contract lives in docs/TASSADAR_DEFAULT_TRAIN_LANE.md.

The operator launcher lives in docs/TASSADAR_TRAIN_LAUNCHER.md.

The bounded default-lane rehearsal lives in docs/TASSADAR_DEFAULT_TRAIN_REHEARSAL.md.

Tassadar Executor Lane

Executor-class research and runtime work for exact computation starts with docs/ROADMAP_TASSADAR.md.

Local GPT-OSS Inference

Psionic ships a dedicated local GPT-OSS server in crates/psionic-serve/src/bin/psionic-gpt-oss-server.rs. It exposes:

  • GET /health
  • GET /v1/models
  • POST /v1/chat/completions

Build it:

cargo build -p psionic-serve --bin psionic-gpt-oss-server --release

Run it on a Linux NVIDIA host:

./target/release/psionic-gpt-oss-server \
  -m /path/to/
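Once the server is up, its chat route can be exercised with a small stdlib-only client. The routes come from the list above; the OpenAI-style request body, the port, and the model id are assumptions for illustration, not documented Psionic defaults.

```python
# Minimal client sketch for the local GPT-OSS server. Assumes the
# /v1/chat/completions route speaks the OpenAI-compatible schema its name
# suggests; the address and model id below are placeholders.
import json
import urllib.request

BASE = "http://127.0.0.1:8080"  # placeholder address, not a documented default


def build_chat_request(prompt: str, model: str = "gpt-oss") -> dict:
    # OpenAI-style chat body; list real model ids via GET /v1/models.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(prompt: str) -> dict:
    body = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{BASE}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # requires a running server
        return json.load(resp)
```

GET /health and GET /v1/models can be probed the same way with plain urllib.request.urlopen calls before sending chat traffic.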