Getting Started

Installation

Alibi works with Python 3.7+ and can be installed from PyPI or conda-forge by following the instructions below.

Install via PyPI
  • Alibi can be installed from PyPI with pip:

Default installation.

pip install alibi

Installation with support for computing SHAP values.

pip install alibi[shap]

Installation with support for distributed Kernel SHAP.

pip install alibi[ray]

Installation with support for tensorflow backends. Required for CEM, Counterfactual, CounterfactualProto and IntegratedGradients.

pip install alibi[tensorflow]

Installation with support for torch backends. One of torch or tensorflow is required for CounterfactualRL, CounterfactualRLTabular and GradientSimilarity.

pip install alibi[torch]

Installs all optional dependencies.

pip install alibi[all]

Install via conda-forge
  • To install the conda-forge version, it is recommended to use mamba, which can be installed into the base conda environment with:

conda install mamba -n base -c conda-forge
  • mamba can then be used to install alibi in a conda environment:

Default installation.

mamba install -c conda-forge alibi

Installation with support for computing SHAP values.

mamba install -c conda-forge alibi shap

Installation with support for distributed computation of explanations.

mamba install -c conda-forge alibi ray 

Features

Alibi is a Python package designed to help explain the predictions of machine learning models and to gauge the confidence of those predictions. The focus of the library is to support the widest range of models using black-box methods where possible.

To get a list of the latest available model explanation algorithms, you can type:

import alibi
alibi.explainers.__all__
['ALE', 
'AnchorTabular',
'DistributedAnchorTabular', 
'AnchorText', 
'AnchorImage', 
'CEM', 
'Counterfactual', 
'CounterfactualProto', 
'CounterfactualRL', 
'CounterfactualRLTabular',
'PartialDependence',
'TreePartialDependence',
'PartialDependenceVariance',
'PermutationImportance',
'plot_ale',
'plot_pd',
'plot_pd_variance',
'plot_permutation_importance',
'IntegratedGradients', 
'KernelShap', 
'TreeShap',
'GradientSimilarity']

For gauging model confidence:

alibi.confidence.__all__
['linearity_measure',
 'LinearityMeasure',
 'TrustScore']

For dataset summarization:

alibi.prototypes.__all__
['ProtoSelect',
 'visualize_image_prototypes']

For detailed information on the methods, see the method overview and the method-specific pages of the Alibi documentation.

Basic Usage

The alibi explanation API takes inspiration from scikit-learn, consisting of distinct initialize, fit and explain steps. We will use the Anchor method on tabular data to illustrate the API.

First, we import the explainer:

from alibi.explainers import AnchorTabular

Next, we initialize it by passing it a prediction function and any other necessary arguments:

explainer = AnchorTabular(predict_fn, feature_names)
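
Here predict_fn and feature_names are placeholders for your own model and data. As a minimal sketch, assuming clf is a hypothetical fitted classifier exposing predict_proba (e.g. a scikit-learn model), they could be constructed as follows:

# Hypothetical names: clf is any fitted classifier with a predict_proba
# method; feature_names lists the column names of the training data.
predict_fn = lambda x: clf.predict_proba(x)
feature_names = ['sepal length (cm)', 'sepal width (cm)',
                 'petal length (cm)', 'petal width (cm)']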

Some methods require an additional .fit step, which needs access to the training set the model was trained on:

explainer.fit(X_train)
AnchorTabular(meta={
    'name': 'AnchorTabular',
    'type': ['blackbox'],
    'explanations': ['local'],
    'params': {'seed': None, 'disc_perc': (25, 50, 75)}
})

Finally, we can call the explainer on a test instance, which will return an Explanation object containing the explanation and any additional metadata returned by the computation:

explanation = explainer.explain(x)

The returned Explanation object has meta and data attributes, which are dictionaries containing the explanation metadata (e.g. parameters, type of explanation) and the explanation itself, respectively:

explanation.meta
{'name': 'AnchorTabular',
 'type': ['blackbox'],
 'explanations': ['local'],
 'params': {'seed': None,
  'disc_perc': (25, 50, 75),
  'threshold': 0.95,
  'delta': ...truncated output...
explanation.data
{'anchor': ['petal width (cm) > 1.80', 'sepal width (cm) <= 2.80'],
 'precision': 0.9839228295819936,
 'coverage': 0.31724137931034485,
 'raw': {'feature': [3, 1],
  'mean': [0.6453362255965293, 0.9839228295819936],
  'precision': [0.6453362255965293, 0.9839228295819936],
  'coverage': [0.20689655172413793, 0.31724137931034485],
  'examples': ...truncated output...

The top-level keys of both the meta and data dictionaries are also exposed as attributes for ease of use:

explanation.anchor
['petal width (cm) > 1.80', 'sepal width (cm) <= 2.80']

Some algorithms, such as Kernel SHAP, can run batches of explanations in parallel if the number of CPU cores is specified in the algorithm constructor:

from alibi.explainers import KernelShap

distributed_ks = KernelShap(predict_fn, distributed_opts={'n_cpus': 10})

Note that this requires the user to run pip install alibi[ray] to install dependencies of the distributed backend.
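
The distributed explainer then follows the same fit/explain pattern as before. A brief sketch, where X_reference and X_batch are hypothetical arrays standing in for the background dataset and the batch of instances to explain:

# Hypothetical data: X_reference provides the background distribution for
# SHAP, X_batch is the batch of instances explained in parallel.
distributed_ks.fit(X_reference)
explanations = distributed_ks.explain(X_batch)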

The exact details will vary slightly from method to method, so we encourage the reader to become familiar with the types of algorithms supported in Alibi.
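
Putting the steps together, a minimal end-to-end sketch on the Iris dataset (the classifier choice is illustrative and assumes scikit-learn is installed; only AnchorTabular comes from Alibi):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

# Illustrative model: any classifier exposing predict_proba would do
data = load_iris()
clf = RandomForestClassifier(random_state=0)
clf.fit(data.data, data.target)

predict_fn = lambda x: clf.predict_proba(x)  # black-box prediction function

explainer = AnchorTabular(predict_fn, feature_names=data.feature_names)
explainer.fit(data.data)

# Explain the first instance in the dataset
explanation = explainer.explain(data.data[0])
print(explanation.anchor)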