
PyVBMC

PyVBMC: Variational Bayesian Monte Carlo in Python


What is it?

PyVBMC is a Python implementation of the Variational Bayesian Monte Carlo (VBMC) algorithm for posterior and model inference, previously implemented in MATLAB. VBMC is an approximate inference method designed to fit and evaluate Bayesian models with a limited budget of potentially noisy likelihood evaluations (e.g., for computationally expensive models). Specifically, VBMC simultaneously computes:

  • an approximate posterior distribution of the model parameters;
  • an approximation (technically, an approximate lower bound) of the log model evidence (also known as the log marginal likelihood), a metric used for Bayesian model selection, e.g. via Bayes factors.

Extensive benchmarks on both artificial test problems and a large number of real model-fitting problems from computational and cognitive neuroscience show that VBMC generally — and often vastly — outperforms alternative methods for sample-efficient Bayesian inference [2,3].

Documentation

The full documentation is available at: https://acerbilab.github.io/pyvbmc/

When should I use PyVBMC?

PyVBMC is effective when:

  • the model log-likelihood function is a black-box (e.g., the gradient is unavailable);
  • the likelihood is at least moderately expensive to compute (say, half a second or more per evaluation);
  • the model has up to D = 10 continuous parameters (maybe a few more, but no more than D = 20);
  • the target posterior distribution is continuous and reasonably smooth (see here);
  • optionally, log-likelihood evaluations may be noisy (e.g., estimated via simulation).

Conversely, if your model can be written analytically, you should exploit the powerful machinery of probabilistic programming frameworks such as Stan or PyMC.

Note: If you are interested in point estimates or in finding better starting points for PyVBMC, check out Bayesian Adaptive Direct Search in Python (PyBADS), our companion method for fast Bayesian optimization.

Installation

PyVBMC is available via pip and conda-forge.

  1. Install with:
    python -m pip install pyvbmc
    
    or:
    conda install --channel=conda-forge pyvbmc
    
    PyVBMC requires Python version 3.10 or newer.
  2. (Optional): Install Jupyter to view the example Notebooks. You can skip this step if you're working from a Conda environment that already has Jupyter, but be aware that import errors may arise if the wrong jupyter executable is found on your path.
    conda install jupyter
    
    If you are running Python 3.11 and get an UnsatisfiableError you may need to install Jupyter from conda-forge:
    conda install --channel=conda-forge jupyter
    
    The example notebooks can then be accessed by running
    python -m pyvbmc
    

If you wish to install directly from latest source code, please see the instructions for developers and contributors.

Quick start

A typical PyVBMC workflow follows four steps:

  1. Define the model, which provides a target log density (i.e., an unnormalized log posterior density);
  2. Setup the parameters (parameter bounds, starting point);
  3. Initialize and run the inference;
  4. Examine and visualize the results.

PyVBMC is not concerned with how you define your model in step 1, as long as you can provide an (unnormalized) target log density. Running the inference in step 3 only involves a couple of lines of code:

from pyvbmc import VBMC
# ... define your model/target density here
vbmc = VBMC(target, x0, LB, UB, PLB, PUB)
vp, results = vbmc.optimize()

with input arguments:

  • target: the target (unnormalized) log density, often an unnormalized log posterior. target is a callable that takes a parameter vector as input and returns the log density at that point. The returned log density must be a finite real value, i.e. not NaN or -inf. See the VBMC FAQ for more details;
  • x0: an array representing the starting point of the inference in parameter space;
  • LB and UB: arrays of hard lower (resp. upper) bounds constraining the parameters (possibly -/+np.inf for unbounded parameters);
  • PLB and PUB: arrays of plausible lower (resp. upper) bounds: that is, a box that ideally brackets a high posterior density region of the target.
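
To make these arguments concrete, here is a hypothetical setup for a 2-D problem, with a correlated-Gaussian toy density standing in for a real model (the density, bounds, and values below are purely illustrative, not part of the PyVBMC API):

```python
import numpy as np

def target(theta):
    # Toy unnormalized log posterior: a correlated 2-D Gaussian.
    theta = np.asarray(theta, dtype=float)
    mu = np.array([1.0, -0.5])
    cov = np.array([[1.0, 0.6],
                    [0.6, 2.0]])
    diff = theta - mu
    # Quadratic form -0.5 * diff^T cov^{-1} diff (normalizing constant dropped)
    return -0.5 * diff @ np.linalg.solve(cov, diff)

x0 = np.zeros(2)                                   # starting point
LB = np.full(2, -np.inf); UB = np.full(2, np.inf)  # hard bounds (unbounded here)
PLB = np.full(2, -3.0); PUB = np.full(2, 3.0)      # plausible bounds bracketing high density
```

These arrays would then be passed to VBMC(target, x0, LB, UB, PLB, PUB) as in the snippet above.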

The outputs are:

  • vp: a VariationalPosterior object which approximates the true target density;
  • results: a dict containing additional information. Important keys are:
    • "elbo": the estimated lower bound on the log model evidence (log normalization constant);
    • "elbo_sd": the standard deviation of the estimate of the ELBO (not the error between the ELBO and the true log model evidence, which is generally unknown).

The vp object can be manipulated in various ways. For example, we can draw samples from vp with the vp.sample() method, or evaluate its density at a point with vp.pdf() (or log-density with vp.log_pdf()). See the VariationalPosterior class documentation for details.

PyVBMC with noisy targets

The quick start example above works for deterministic (noiseless) evaluations of the target log-density. PyVBMC also supports noisy evaluations of the target. Noisy evaluations often arise from simulation-based models, for which a direct expression of the (log) likelihood is not available.

For information on how to run PyVBMC on a noisy target, see this example notebook and the VBMC FAQ (for MATLAB, but most concepts still apply).
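
As a toy illustration of where such noise comes from, the following hypothetical estimator computes a log-likelihood by simulating from the model and kernel-smoothing the draws; repeated calls at the same parameter value return slightly different numbers. The model, observed value, and bandwidth are made up for this sketch and are not PyVBMC's interface for noisy targets:

```python
import numpy as np

rng = np.random.default_rng(1)
observed = 0.25  # hypothetical observed summary statistic

def noisy_log_lik(theta, n_sim=2000):
    # Draw simulated data from the model at parameter theta, then
    # estimate the likelihood of the observation with a Gaussian
    # kernel density estimate over the simulated draws.
    sims = rng.normal(theta, 1.0, size=n_sim)
    h = 0.2  # kernel bandwidth (illustrative choice)
    kernel = np.exp(-0.5 * ((sims - observed) / h) ** 2)
    density = kernel.mean() / (h * np.sqrt(2 * np.pi))
    return np.log(density)  # a noisy estimate of the log-likelihood
```

Because each call re-runs the simulator, the returned value fluctuates around the true log-likelihood; see the linked notebook for how to tell PyVBMC that the target is noisy.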

Next steps

Once installed, example Jupyter notebooks can be found in the pyvbmc/examples directory. They can also be viewed statically on the main documentation pages. These examples will walk you through the basic usage of PyVBMC as well as some of its more advanced features.

For practical recommendations, such as how to set LB and UB and the plausible bounds, check out the FAQ on the VBMC wiki. The wiki was written with the MATLAB toolbox in mind, but the general advice applies to the Python version as well.

How does it work?

VBMC/PyVBMC combines two machine learning techniques in a novel way: Gaussian process (GP) regression and variational inference.

PyVBMC iteratively builds an approximation of the true, expensive target posterior via a Gaussian process (GP), and it matches a variational distribution, an expressive mixture of Gaussians, to the GP.

This matching process entails optimizing the evidence lower bound (ELBO), that is, a lower bound on the log marginal likelihood (LML), also known as the log model evidence. Crucially, we estimate the ELBO via Bayesian quadrature, which is fast and does not require further evaluations of the true target posterior.
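
As a sanity check on the lower-bound property (using plain Monte Carlo here rather than PyVBMC's Bayesian quadrature, and a single Gaussian q = N(m, s^2) rather than a mixture), take an unnormalized standard-normal target: the ELBO equals the expected log target under q plus the entropy of q, never exceeds the true log normalizer, and is tight exactly when q matches the target. All names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Unnormalized log density of a standard normal (constant dropped)
    return -0.5 * x**2

def elbo(m, s, n=200_000):
    # ELBO = E_q[log p~(x)] + H[q], with q = N(m, s^2);
    # the expectation is estimated by simple Monte Carlo.
    x = rng.normal(m, s, size=n)
    entropy = 0.5 * np.log(2 * np.pi * np.e * s**2)
    return log_target(x).mean() + entropy

log_z = 0.5 * np.log(2 * np.pi)  # true log normalizing constant of the target
```

With m = 0 and s = 1 the bound is tight (ELBO approximately equals log_z), while any other q gives a strictly smaller value; PyVBMC performs the analogous optimization over a mixture of Gaussians, with the expectation computed by Bayesian quadrature under the GP surrogate instead of Monte Carlo.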

In each iteration, PyVBMC uses active sampling to select which points to evaluate next in order to explore the posterior landscape and reduce uncertainty in the approximation.

VBMC Demo

In the figure above, we show an example PyVBMC run on a Rosenbrock "banana" function. The bottom-left panel shows PyVBMC at work: in grayscale are samples from the variational posterior (drawn as small points) and the corresponding estimated density (drawn as contours). The solid orange circles are the active sampling points chosen at each iteration, and the hollow blue circles are the previously sampled points. The topmost and rightmost panels show histograms of the marginal densities along the $x_1$ and $x_2$ dimensions, respectively. PyVBMC converges to an excellent approximation of the true posterior within a few dozen evaluations of the target density.
