Examples
Alibi Overview Example
Accumulated Local Effects
Accumulated Local Effects for classifying flowers
Accumulated Local Effects for predicting house prices
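The ALE notebooks above share a common workflow: wrap a prediction function in the ALE explainer, call explain, and plot the resulting curves. Below is a minimal sketch of that workflow; the wine dataset and logistic regression model are illustrative assumptions, not taken from the notebooks.

```python
# Minimal ALE sketch; the wine dataset and logistic regression are illustrative.
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from alibi.explainers import ALE, plot_ale

data = load_wine()
X, y = data.data, data.target
clf = LogisticRegression(max_iter=10_000).fit(X, y)

# ALE explains a prediction function; here the per-class decision scores.
ale = ALE(clf.decision_function, feature_names=data.feature_names,
          target_names=list(data.target_names))
exp = ale.explain(X)
plot_ale(exp, features=[0, 1])  # ALE curves for the first two features
```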
Anchors
Anchor explanations for fashion MNIST
Anchor explanations for ImageNet
Anchor explanations for income prediction
Anchor explanations on the Iris dataset
Anchor explanations for movie sentiment
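As a quick orientation to the anchor examples above, here is a minimal sketch of the tabular case using the AnchorTabular explainer; the random forest on the Iris dataset and the precision threshold are illustrative assumptions.

```python
# Minimal AnchorTabular sketch; dataset, model and threshold are illustrative.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

data = load_iris()
X, y = data.data, data.target
clf = RandomForestClassifier(random_state=0).fit(X, y)

explainer = AnchorTabular(clf.predict_proba, feature_names=data.feature_names)
explainer.fit(X, disc_perc=(25, 50, 75))            # discretise numeric features
explanation = explainer.explain(X[0], threshold=0.95)
print('Anchor: %s' % (' AND '.join(explanation.anchor)))
print('Precision: %.2f' % explanation.precision)
print('Coverage: %.2f' % explanation.coverage)
```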
Contrastive Explanation Method
Contrastive Explanations Method (CEM) applied to Iris dataset
Contrastive Explanations Method (CEM) applied to MNIST
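The CEM notebooks above search for pertinent negatives (PN) and pertinent positives (PP). A heavily simplified PN-search sketch follows; the black-box logistic regression, feature ranges and iteration budget are illustrative assumptions, and note that this method runs against the TF1 graph API.

```python
# Minimal CEM sketch (pertinent negative search); values are illustrative.
import tensorflow as tf
tf.compat.v1.disable_v2_behavior()  # CEM is implemented against the TF1 graph API

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from alibi.explainers import CEM

data = load_iris()
X, y = data.data.astype('float32'), data.target
clf = LogisticRegression(max_iter=1000).fit(X, y)

shape = (1,) + X.shape[1:]                 # shape of a single instance
cem = CEM(clf.predict_proba, mode='PN', shape=shape,
          kappa=0.0, beta=0.1, max_iterations=200, c_steps=2,
          feature_range=(X.min(axis=0).reshape(shape) - 0.1,
                         X.max(axis=0).reshape(shape) + 0.1))
cem.fit(X, no_info_type='median')          # background values used by the search
explanation = cem.explain(X[0].reshape(shape))
print(explanation.PN)                      # the pertinent negative instance
```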
Counterfactual Instances
Counterfactual instances on MNIST
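The counterfactual search in the notebook above follows the pattern sketched below; the Iris model, target probability and feature ranges are illustrative assumptions, and the method runs against the TF1 graph API.

```python
# Minimal counterfactual sketch; model, target and ranges are illustrative.
import tensorflow as tf
tf.compat.v1.disable_v2_behavior()  # implemented against the TF1 graph API

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from alibi.explainers import Counterfactual

data = load_iris()
X, y = data.data.astype('float32'), data.target
clf = LogisticRegression(max_iter=1000).fit(X, y)

shape = (1,) + X.shape[1:]
cf = Counterfactual(clf.predict_proba, shape=shape,
                    target_class='other', target_proba=0.9,
                    max_iter=500, feature_range=(X.min(axis=0), X.max(axis=0)))
explanation = cf.explain(X[0].reshape(shape))
print(explanation.cf)  # the counterfactual instance and its prediction metadata
```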
Counterfactuals Guided by Prototypes
Counterfactual explanations with one-hot encoded categorical variables
Counterfactual explanations with ordinally encoded categorical variables
Counterfactuals guided by prototypes on California housing dataset
Counterfactuals guided by prototypes on MNIST
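A minimal sketch of prototype-guided counterfactuals, using class prototypes found with a k-d tree rather than an autoencoder; the dataset, model and loss weights are illustrative assumptions, and the method runs against the TF1 graph API.

```python
# Minimal prototype-guided counterfactual sketch; settings are illustrative.
import tensorflow as tf
tf.compat.v1.disable_v2_behavior()  # implemented against the TF1 graph API

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from alibi.explainers import CounterfactualProto

data = load_iris()
X, y = data.data.astype('float32'), data.target
clf = LogisticRegression(max_iter=1000).fit(X, y)

shape = (1,) + X.shape[1:]
cf = CounterfactualProto(clf.predict_proba, shape, use_kdtree=True, theta=10.0,
                         max_iterations=500,
                         feature_range=(X.min(axis=0), X.max(axis=0)))
cf.fit(X)                                  # build per-class k-d trees of prototypes
explanation = cf.explain(X[0].reshape(shape))
print(explanation.cf)                      # counterfactual guided towards a prototype
```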
Counterfactuals with Reinforcement Learning
Counterfactual with Reinforcement Learning (CFRL) on Adult Census
Counterfactual with Reinforcement Learning (CFRL) on MNIST
Integrated Gradients
Integrated gradients for a ResNet model trained on the ImageNet dataset
Integrated gradients for text classification on the IMDB dataset
Integrated gradients for MNIST
Integrated gradients for transformer models
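The integrated gradients notebooks above all attribute a TensorFlow model's prediction to its input features; the sketch below shows the core calls on a small Keras classifier (the architecture, training set-up and target choice are illustrative assumptions).

```python
# Minimal IntegratedGradients sketch; the Keras model and training are illustrative.
import numpy as np
import tensorflow as tf
from sklearn.datasets import load_iris
from alibi.explainers import IntegratedGradients

data = load_iris()
X, y = data.data.astype('float32'), data.target

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.fit(X, y, epochs=10, verbose=0)

ig = IntegratedGradients(model, method='gausslegendre', n_steps=50)
preds = np.argmax(model.predict(X), axis=1)
explanation = ig.explain(X, baselines=None, target=preds)  # zero baselines by default
print(explanation.attributions[0].shape)                   # (n_instances, n_features)
```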
Kernel SHAP
Distributed KernelSHAP
KernelSHAP: combining preprocessor and predictor
Handling categorical variables with KernelSHAP
Kernel SHAP explanation for SVM models
Kernel SHAP explanation for multinomial logistic regression models
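The Kernel SHAP notebooks above follow the fit-on-background / explain-instances pattern sketched below; the model, background-set size and link function are illustrative assumptions.

```python
# Minimal KernelShap sketch; model, background size and link are illustrative.
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from alibi.explainers import KernelShap

data = load_wine()
X, y = data.data, data.target
clf = LogisticRegression(max_iter=10_000).fit(X, y)

explainer = KernelShap(clf.predict_proba, feature_names=data.feature_names,
                       link='identity')
explainer.fit(X[:50])                       # background (reference) distribution
explanation = explainer.explain(X[:5])      # shap values for 5 instances
print(explanation.shap_values[0].shape)     # per-class attributions
```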
Partial Dependence
Partial Dependence and Individual Conditional Expectation for predicting bike renting
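A minimal sketch of the partial dependence / ICE workflow used in the notebook above; the gradient-boosted regressor on the diabetes dataset and the feature selection are illustrative assumptions.

```python
# Minimal PartialDependence sketch; dataset, model and features are illustrative.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from alibi.explainers import PartialDependence, plot_pd

data = load_diabetes()
X, y = data.data, data.target
model = GradientBoostingRegressor().fit(X, y)

pd_explainer = PartialDependence(predictor=model.predict,
                                 feature_names=list(data.feature_names))
# 'both' returns the average PD curve plus the individual (ICE) curves
explanation = pd_explainer.explain(X, features=[0, 2], kind='both')
plot_pd(explanation)
```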
Partial Dependence Variance
Feature importance and feature interaction based on partial dependence variance
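Partial dependence variance turns the PD curves into scalar feature-importance and feature-interaction scores; a minimal sketch follows (the model and dataset are illustrative assumptions).

```python
# Minimal PartialDependenceVariance sketch; model and dataset are illustrative.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from alibi.explainers import PartialDependenceVariance, plot_pd_variance

data = load_diabetes()
X, y = data.data, data.target
model = GradientBoostingRegressor().fit(X, y)

explainer = PartialDependenceVariance(predictor=model.predict,
                                      feature_names=list(data.feature_names))
exp_imp = explainer.explain(X, method='importance')   # per-feature importance
exp_int = explainer.explain(X, method='interaction')  # pairwise interactions
plot_pd_variance(exp_imp)
```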
Permutation Importance
Permutation Feature Importance on “Who’s Going to Leave Next?”
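A minimal sketch of the permutation importance workflow; the metric, train/test split and number of repeats are illustrative assumptions.

```python
# Minimal PermutationImportance sketch; metric, split and repeats are illustrative.
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from alibi.explainers import PermutationImportance, plot_permutation_importance

data = load_wine()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = PermutationImportance(predictor=clf.predict,
                                  score_fns=['accuracy'],
                                  feature_names=list(data.feature_names))
explanation = explainer.explain(X=X_test, y=y_test, n_repeats=20)
plot_permutation_importance(explanation)
```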
Similarity explanations
Similarity explanations for 20 newsgroups dataset
Similarity explanations for ImageNet
Similarity explanations for MNIST
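The similarity notebooks above rank training instances by how similar their loss gradients are to those of the instance being explained. A minimal sketch with a small Keras classifier follows; the model, loss and similarity metric are illustrative assumptions, and the fields on the returned explanation may differ between alibi versions.

```python
# Minimal GradientSimilarity sketch; model, loss and sim_fn are illustrative.
import tensorflow as tf
from sklearn.datasets import load_iris
from alibi.explainers import GradientSimilarity

data = load_iris()
X, y = data.data.astype('float32'), data.target

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.fit(X, y, epochs=10, verbose=0)

explainer = GradientSimilarity(model,
                               loss_fn=tf.keras.losses.SparseCategoricalCrossentropy(),
                               sim_fn='grad_cos', task='classification')
explainer.fit(X, y)                           # reference (training) set
explanation = explainer.explain(X[:1], y[:1])
# indices of the reference points most similar to the explained instance
# (attribute name assumed from the alibi docs)
print(explanation.ordered_indices[0][:5])
```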
Tree SHAP
Explaining Tree Models with Interventional Feature Perturbation Tree SHAP
Explaining Tree Models with Path-Dependent Feature Perturbation Tree SHAP
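The two Tree SHAP notebooks above differ only in how feature perturbation is done; the sketch below shows both variants side by side (the random forest model and background-set size are illustrative assumptions).

```python
# Minimal Tree SHAP sketch showing both perturbation variants; the random
# forest model and background-set size are illustrative assumptions.
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import TreeShap

data = load_wine()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

# Path-dependent feature perturbation: no background data required.
path_explainer = TreeShap(model, model_output='raw')
path_explainer.fit()
exp_path = path_explainer.explain(X[:5])

# Interventional feature perturbation: fit on a background dataset.
int_explainer = TreeShap(model, model_output='raw')
int_explainer.fit(X[:50])
exp_int = int_explainer.explain(X[:5])
print(exp_path.shap_values[0].shape, exp_int.shap_values[0].shape)
```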