Alibi is a Python package designed to help explain the predictions of machine learning models and gauge the confidence of those predictions, with the eventual aim of supporting wider capabilities for inspecting model performance with respect to concept drift and algorithmic bias. The focus of the library is to support the widest range of models using black-box methods where possible.
To get a list of the currently available model explanation algorithms, you can type:
import alibi
alibi.explainers.__all__
['AnchorTabular', 'AnchorText', 'AnchorImage', 'CEM', 'CounterFactual', 'CounterFactualProto']
Algorithms for gauging model confidence are listed under alibi.confidence.__all__. For detailed information on the methods, refer to the Alibi documentation.
We will use the Anchor method on tabular data to illustrate the usage of explainers in Alibi.
First, we import the explainer:
from alibi.explainers import AnchorTabular
Next, we initialize it by passing it a prediction function and any other necessary arguments:
explainer = AnchorTabular(predict_fn, feature_names)
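Here predict_fn is any callable that maps a batch of instances to model outputs, and feature_names is a list of column names. A minimal stand-in (a toy rule-based function, not a real model) looks like this:

```python
# Illustrative stand-in for a model prediction function: any callable
# taking a batch of instances and returning class predictions works.
def predict_fn(X):
    # Toy rule: predict class 1 if income exceeds 50, else class 0.
    return [1 if income > 50 else 0 for age, income in X]

feature_names = ["age", "income"]
print(predict_fn([[30, 80], [45, 20]]))  # [1, 0]
```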
Some methods require an additional .fit step, which needs access to the training set the model was trained on:
explainer.fit(X_train)
Finally, we can call the explainer on a test instance, which will return a dictionary containing the explanation and any additional metadata produced by the computation:
explanation = explainer.explain(x)
The exact details will vary slightly from method to method, so we encourage the reader to become familiar with the types of algorithms supported in Alibi.
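The end-to-end pattern described above (initialize with a prediction function, fit on training data, explain a test instance) can be sketched with a stand-in class. All names here are illustrative; this is not Alibi's actual implementation:

```python
# Schematic sketch of the init / fit / explain workflow. MockExplainer,
# its toy predict function, and the data are illustrative stand-ins.
class MockExplainer:
    def __init__(self, predict_fn, feature_names):
        self.predict_fn = predict_fn
        self.feature_names = feature_names
        self.fitted = False

    def fit(self, X_train):
        # A real explainer would compute e.g. feature statistics here.
        self.fitted = True
        return self

    def explain(self, x):
        # Return an explanation dictionary, mirroring the shape of the
        # output described above: explanation plus metadata.
        return {
            "names": self.feature_names,
            "prediction": self.predict_fn(x),
            "meta": {"fitted": self.fitted},
        }

def toy_predict(x):
    # Toy model: class 1 if the first feature is positive.
    return 1 if x[0] > 0 else 0

explainer = MockExplainer(toy_predict, feature_names=["age", "income"])
explainer.fit(X_train=[[1, 2], [-1, 3]])
explanation = explainer.explain([0.5, 100])
print(explanation["prediction"])  # 1
```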