Alibi

Algorithms for explaining machine learning models

<p align="center"> <img src="https://raw.githubusercontent.com/SeldonIO/alibi/master/doc/source/_static/Alibi_Explain_Logo_rgb.png" alt="Alibi Logo" width="50%"> </p>

Alibi is a source-available Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models.

If you're interested in outlier detection, concept drift or adversarial instance detection, check out our sister project alibi-detect.

<table> <tr valign="top"> <td width="50%" > <a href="https://docs.seldon.io/projects/alibi/en/stable/examples/anchor_image_imagenet.html"> <br> <b>Anchor explanations for images</b> <br> <br> <img src="https://github.com/SeldonIO/alibi/raw/master/doc/source/_static/anchor_image.png"> </a> </td> <td width="50%"> <a href="https://docs.seldon.io/projects/alibi/en/stable/examples/integrated_gradients_imdb.html"> <br> <b>Integrated Gradients for text</b> <br> <br> <img src="https://github.com/SeldonIO/alibi/raw/master/doc/source/_static/ig_text.png"> </a> </td> </tr> <tr valign="top"> <td width="50%"> <a href="https://docs.seldon.io/projects/alibi/en/stable/methods/CFProto.html"> <br> <b>Counterfactual examples</b> <br> <br> <img src="https://github.com/SeldonIO/alibi/raw/master/doc/source/_static/cf.png"> </a> </td> <td width="50%"> <a href="https://docs.seldon.io/projects/alibi/en/stable/methods/ALE.html"> <br> <b>Accumulated Local Effects</b> <br> <br> <img src="https://github.com/SeldonIO/alibi/raw/master/doc/source/_static/ale.png"> </a> </td> </tr> </table>

Installation and Usage

Alibi can be installed from:

  • PyPI or GitHub source (with pip)
  • Anaconda (with conda/mamba)

With pip

  • Alibi can be installed from PyPI:

    pip install alibi
    
  • Alternatively, the development version can be installed:

    pip install git+https://github.com/SeldonIO/alibi.git 
    
  • To take advantage of distributed computation of explanations, install alibi with ray:

    pip install alibi[ray]
    
  • For SHAP support, install alibi as follows:

    pip install alibi[shap]
    

With conda

To install from conda-forge it is recommended to use mamba, which can be installed into the base conda environment with:

conda install mamba -n base -c conda-forge

  • For the standard Alibi install:

    mamba install -c conda-forge alibi
    
  • For distributed computing support:

    mamba install -c conda-forge alibi ray
    
  • For SHAP support:

    mamba install -c conda-forge alibi shap
    

Usage

The alibi explanation API takes inspiration from scikit-learn, consisting of distinct initialize, fit and explain steps. We will use the AnchorTabular explainer to illustrate the API:

from alibi.explainers import AnchorTabular

# initialize and fit explainer by passing a prediction function and any other required arguments
explainer = AnchorTabular(predict_fn, feature_names=feature_names, category_map=category_map)
explainer.fit(X_train)

# explain an instance
explanation = explainer.explain(x)

The explanation returned is an Explanation object with attributes meta and data. meta is a dictionary containing the explainer metadata and any hyperparameters, and data is a dictionary containing everything related to the computed explanation. For example, for the Anchor algorithm the explanation can be accessed via explanation.data['anchor'] (or equivalently explanation.anchor). The exact fields available vary from method to method, so we encourage the reader to become familiar with the types of methods supported.
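To make the meta/data access pattern concrete, here is a small stand-in built with `types.SimpleNamespace` rather than alibi's actual `Explanation` class; the field names mirror the description above, but the values are invented for illustration:

```python
# Stand-in for alibi's Explanation object, for illustration only.
# Real field names and values depend on the explainer used.
from types import SimpleNamespace

explanation = SimpleNamespace(
    meta={"name": "AnchorTabular", "params": {"threshold": 0.95}},
    data={"anchor": ["Age > 37"], "precision": 0.97},
)

# explainer metadata and hyperparameters live under .meta
print(explanation.meta["name"])    # -> AnchorTabular
# computed explanation fields live under .data; alibi additionally
# exposes each data key as an attribute, e.g. explanation.anchor
print(explanation.data["anchor"])  # -> ['Age > 37']
```

In alibi itself both meta and data are populated by the explainer, so the dictionary keys differ per method (e.g. 'anchor' and 'precision' for AnchorTabular).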

Supported Methods

The following tables summarize the possible use cases for each method.

Model Explanations

| Method | Models | Explanations | Classification | Regression | Tabular | Text | Images | Categorical features | Train set required | Distributed |
|:-------|:------:|:------------:|:--------------:|:----------:|:-------:|:----:|:------:|:--------------------:|:------------------:|:-----------:|
| ALE | BB | global | ✔ | ✔ | ✔ | | | | | |
| Partial Dependence | BB WB | global | ✔ | ✔ | ✔ | | | ✔ | | |
| PD Variance | BB WB | global | ✔ | ✔ | ✔ | | | ✔ | | |
| Permutation Importance | BB | global | ✔ | ✔ | ✔ | | | ✔ | | |
| Anchors | BB | local | ✔ | | ✔ | ✔ | ✔ | ✔ | For Tabular | |
| CEM | BB* TF/Keras | local | ✔ | | ✔ | | ✔ | | Optional | |
| Counterfactuals | BB* TF/Keras | local | ✔ | | ✔ | | ✔ | | No | |
| Prototype Counterfactuals | BB* TF/Keras | local | ✔ | | ✔ | | ✔ | ✔ | Optional | |
| Counterfactuals with RL | BB | local | ✔ | | ✔ | | ✔ | ✔ | ✔ | |
| Integrated Gradients | TF/Keras | local | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | Optional | |
| Kernel SHAP | BB | local, global | ✔ | ✔ | ✔ | | | ✔ | ✔ | ✔ |
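To give a feel for what a "global" method in the table computes, here is a from-scratch sketch of first-order Accumulated Local Effects (ALE) for a single feature of a black-box prediction function. This is a simplified illustration of the idea, not alibi's ALE implementation; the toy linear model, data, and bin count are invented for the example.

```python
# Minimal first-order ALE sketch for one feature of a black-box model.
# Idea: within each feature bin, average the prediction change when the
# feature moves from the bin's lower to upper edge (other features fixed),
# then accumulate these local effects across bins and center them.

def ale_1d(predict, X, feature, n_bins=10):
    """Return (bin upper edges, centered ALE values) for `feature`."""
    values = sorted(row[feature] for row in X)
    # bin edges taken at quantiles of the observed feature values
    edges = [values[int(i * (len(values) - 1) / n_bins)] for i in range(n_bins + 1)]
    xs, effects, accumulated = [], [], 0.0
    for lo, hi in zip(edges, edges[1:]):
        in_bin = [row for row in X if lo <= row[feature] <= hi]
        if not in_bin or hi == lo:
            continue
        diffs = []
        for row in in_bin:
            hi_row, lo_row = list(row), list(row)
            hi_row[feature], lo_row[feature] = hi, lo
            diffs.append(predict(hi_row) - predict(lo_row))
        accumulated += sum(diffs) / len(diffs)
        xs.append(hi)
        effects.append(accumulated)
    # center so the average effect is zero
    mean = sum(effects) / len(effects)
    return xs, [e - mean for e in effects]

# Toy black-box: a linear model, so the ALE of feature 0 should be a
# line with slope 2 (invented for the example).
predict = lambda x: 2 * x[0] + 3 * x[1]
X = [[i / 20, (i * 7 % 13) / 13] for i in range(21)]
xs, fx = ale_1d(predict, X, feature=0, n_bins=5)
slope = (fx[-1] - fx[0]) / (xs[-1] - xs[0])
print(f"recovered slope ~ {slope:.2f}")
```

Because the toy model is linear, the recovered ALE slope for feature 0 matches its coefficient regardless of the second feature; alibi's ALE explainer follows the same initialize-then-explain pattern shown in the Usage section.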
