LAMPE
LAMPE is a simulation-based inference (SBI) package that focuses on amortized estimation of posterior distributions, without relying on explicit likelihood functions; hence the name Likelihood-free AMortized Posterior Estimation (LAMPE). The package provides PyTorch implementations of modern amortized simulation-based inference algorithms like neural ratio estimation (NRE), neural posterior estimation (NPE) and more. Similar to PyTorch, the philosophy of LAMPE is to avoid obfuscation and expose all components, from network architecture to optimizer, to the user such that they are free to modify or replace anything they like.
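For illustration, a minimal training and sampling sketch with NPE might look as follows. The toy prior, simulator, dimensions and hyperparameters are made up for the example; consult the documentation for the exact signatures of NPE, NPELoss, JointLoader and GDStep.

```python
import torch

from itertools import islice
from lampe.data import JointLoader
from lampe.inference import NPE, NPELoss
from lampe.utils import GDStep

# Toy problem (illustrative): 3 parameters, 4 observable features.
prior = torch.distributions.Independent(
    torch.distributions.Uniform(-torch.ones(3), torch.ones(3)), 1
)

def simulator(theta: torch.Tensor) -> torch.Tensor:
    # Toy simulator: a noisy non-linear transform of the parameters.
    x = torch.cat((theta**2, theta.prod(dim=-1, keepdim=True)), dim=-1)
    return x + 0.05 * torch.randn(theta.shape[:-1] + (4,))

# Stream (theta, x) pairs sampled from the joint distribution.
loader = JointLoader(prior, simulator, batch_size=256, vectorized=True)

estimator = NPE(3, 4, hidden_features=[64] * 3)  # normalizing-flow posterior estimator
loss = NPELoss(estimator)
optimizer = torch.optim.Adam(estimator.parameters(), lr=1e-3)
step = GDStep(optimizer, clip=1.0)  # backward, clip, step, zero_grad in one call

estimator.train()
for _ in range(64):                      # epochs
    for theta, x in islice(loader, 256): # 256 batches per epoch
        step(loss(theta, x))

# Amortized inference: condition the flow on a new observation and sample.
estimator.eval()
x_star = simulator(prior.sample())
with torch.no_grad():
    samples = estimator.flow(x_star).sample((2**14,))
```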
As part of the inference pipeline, lampe provides components to efficiently store and load data from disk, diagnose predictions and display results graphically.
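The data and plotting components follow the same style. Below is a sketch of storing simulations to HDF5 and drawing a corner plot, reusing the loader and samples from the sketch above; the file names and labels are illustrative, and the H5Dataset and corner signatures should be checked against the documentation.

```python
from lampe.data import H5Dataset
from lampe.plots import corner

# Persist a fixed budget of simulations to disk (HDF5), then reload them lazily.
H5Dataset.store(loader, 'train.h5', size=2**16)  # consumes (theta, x) pairs from the loader
trainset = H5Dataset('train.h5', batch_size=256, shuffle=True)

# Visualize posterior samples as a corner plot.
fig = corner(samples, labels=[r'$\theta_1$', r'$\theta_2$', r'$\theta_3$'])
fig.savefig('posterior.png')
```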
[!IMPORTANT] In an effort to unite communities, the development of LAMPE has stopped in favor of the sbi project. The sbi package already supports many of lampe's features, and you are welcome to submit issues and PRs for the features you would like to be ported to sbi.
Installation
The lampe package is available on PyPI, which means it is installable via pip.
pip install lampe
Alternatively, if you need the latest features, you can install it from the repository.
pip install git+https://github.com/probabilists/lampe
Documentation
The documentation is made with Sphinx and Furo and is hosted at lampe.readthedocs.io.
Contributing
If you have a question, an issue or would like to contribute, please read our contributing guidelines.