alibi-detect is a Python package focused on outlier, adversarial and concept drift detection. The package aims to cover both online and offline detectors for tabular data, text, images and time series. The outlier detection methods should allow the user to identify global, contextual and collective outliers.
To get a list of the latest outlier detection algorithms (adversarial detectors are listed in the same way under alibi_detect.ad), you can type:
import alibi_detect
alibi_detect.od.__all__
['IForest', 'Mahalanobis', 'OutlierAEGMM', 'OutlierVAE', 'OutlierVAEGMM', 'OutlierProphet', 'SpectralResidual']
For detailed information on the methods:
We will use the VAE outlier detector to illustrate the usage of outlier and adversarial detectors in alibi-detect.
First, we import the detector:
from alibi_detect.od import OutlierVAE
Then we initialize it by passing it the necessary arguments:
od = OutlierVAE(
    threshold=0.1,
    encoder_net=encoder_net,
    decoder_net=decoder_net,
    latent_dim=1024
)
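Here encoder_net and decoder_net are Keras models supplied by the user. As a minimal sketch of what they might look like, assuming fully-connected networks on tabular data (the layer sizes and n_features below are illustrative, not prescribed by alibi-detect):

```python
import tensorflow as tf
from tensorflow.keras.layers import Dense

n_features = 32   # illustrative input dimensionality

# hypothetical encoder/decoder; real architectures depend on the data,
# and the library adds the variational latent heads itself
encoder_net = tf.keras.Sequential([
    Dense(128, activation='relu'),
    Dense(64, activation='relu'),
])
decoder_net = tf.keras.Sequential([
    Dense(64, activation='relu'),
    Dense(128, activation='relu'),
    Dense(n_features, activation=None),
])
```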
Some detectors require an additional .fit step on training data:
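For OutlierVAE, fitting trains the variational autoencoder on the training set, and a sensible threshold can then be picked from the distribution of training scores. As a library-free illustration of this fit-then-threshold pattern (all names below are hypothetical; this is not alibi-detect's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(0, 1, size=(1000, 4))            # inlier training data
X_test = np.vstack([rng.normal(0, 1, size=(5, 4)),
                    rng.normal(8, 1, size=(2, 4))])   # last 2 rows shifted far away

# "fit": learn per-feature statistics from the training data
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)

def instance_score(X):
    # mean squared deviation from the training distribution,
    # standing in for a VAE's reconstruction error
    return (((X - mu) / sigma) ** 2).mean(axis=1)

# set the threshold at the 95th percentile of training scores,
# similar in spirit to the infer_threshold utility in alibi-detect
threshold = np.percentile(instance_score(X_train), 95)

is_outlier = (instance_score(X_test) > threshold).astype(int)
print(is_outlier)  # the two shifted rows are flagged as outliers
```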
The detectors can be saved or loaded as follows:
from alibi_detect.utils.saving import save_detector, load_detector
filepath = './my_detector/'
save_detector(od, filepath)
od = load_detector(filepath)
Finally, we can make predictions on test data and detect outliers or adversarial examples.
preds = od.predict(X_test)
The predictions are returned in a dictionary with meta and data as keys. meta contains the detector's metadata, while data is itself a dictionary holding the actual predictions: it has either is_outlier or is_adversarial (filled with 0's and 1's) as well as instance_score and/or feature_score as keys, with numpy arrays as values.
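The flags and scores can then be pulled out of the returned dictionary with plain numpy indexing. The preds value below is mocked to mirror the structure described above rather than produced by a real detector:

```python
import numpy as np

# mocked prediction dictionary mirroring the structure described above
preds = {
    'meta': {'name': 'OutlierVAE', 'detector_type': 'offline'},
    'data': {
        'is_outlier': np.array([0, 0, 1, 0, 1]),
        'instance_score': np.array([0.02, 0.04, 0.31, 0.05, 0.27]),
    },
}

# indices of the instances flagged as outliers
outlier_idx = np.where(preds['data']['is_outlier'] == 1)[0]
print(outlier_idx)            # → [2 4]
print(preds['meta']['name'])  # → OutlierVAE
```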
The exact details will vary slightly from method to method, so we encourage the reader to become familiar with the types of algorithms supported in alibi-detect.