Getting Started
Features
alibi-detect is an open source Python package focused on outlier, adversarial and concept drift detection. The package aims to cover both online and offline detectors for tabular data, text, images and time series. The outlier detection methods should allow the user to identify global, contextual and collective outliers.
To get a list of the latest outlier and adversarial detection algorithms respectively, you can type:
import alibi_detect
alibi_detect.od.__all__
['OutlierAEGMM',
 'IForest',
 'Mahalanobis',
 'OutlierAE',
 'OutlierVAE',
 'OutlierVAEGMM',
 'OutlierProphet',
 'OutlierSeq2Seq',
 'SpectralResidual']
alibi_detect.ad.__all__
['AdversarialAE']
For detailed information on the methods, consult the method-specific pages.
Basic Usage
We will use the VAE outlier detector to illustrate the usage of outlier and adversarial detectors in alibi-detect.
First, we import the detector:
from alibi_detect.od import OutlierVAE
Then we initialize it by passing it the necessary arguments:
od = OutlierVAE(
    threshold=0.1,
    encoder_net=encoder_net,
    decoder_net=decoder_net,
    latent_dim=1024
)
Some detectors require an additional .fit step using training data:
od.fit(X_train)
The detectors can be saved or loaded as follows:
from alibi_detect.utils.saving import save_detector, load_detector
filepath = './my_detector/'
save_detector(od, filepath)
od = load_detector(filepath)
Finally, we can make predictions on test data and detect outliers or adversarial examples.
preds = od.predict(X_test)
The predictions are returned in a dictionary with as keys meta
and data
. meta
contains the detectorâ€™s metadata while data
is in itself a dictionary with the actual predictions. It has either is_outlier
or is_adversarial
(filled with 0â€™s and 1â€™s) as well as instance_score
and feature_score
as keys with numpy arrays as values.
The exact details will vary slightly from method to method, so we encourage the reader to become familiar with the types of algorithms supported in alibi-detect.
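As a sketch of working with this dictionary, the snippet below builds a dummy prediction with the structure described above (the metadata values are illustrative placeholders, not output of a real detector) and extracts the flagged instances:

```python
import numpy as np

# Dummy prediction dictionary mirroring the structure described above;
# the metadata values are illustrative placeholders.
preds = {
    'meta': {'name': 'OutlierVAE', 'detector_type': 'offline', 'data_type': None},
    'data': {
        'is_outlier': np.array([0, 1, 0]),              # 0 = inlier, 1 = outlier
        'instance_score': np.array([0.05, 0.42, 0.07]),
    },
}

detector_name = preds['meta']['name']                   # detector metadata
outlier_mask = preds['data']['is_outlier'].astype(bool)
outlier_idx = np.where(outlier_mask)[0]                 # indices of flagged instances
```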
Algorithm Overview
The following tables summarize the advised use cases for the current algorithms. Please consult the method-specific pages for a more detailed breakdown of each method. The column Feature Level indicates whether the outlier scoring and detection can be done and returned at the feature level, e.g. per pixel for an image.
Outlier Detection

Detector            Tabular   Image   Time Series   Text   Categorical Features   Online   Feature Level
Isolation Forest       ✔        ✘          ✘          ✘              ✔               ✘           ✘
Mahalanobis            ✔        ✘          ✘          ✘              ✔               ✔           ✘
AE                     ✔        ✔          ✘          ✘              ✘               ✘           ✔
VAE                    ✔        ✔          ✘          ✘              ✘               ✘           ✔
AEGMM                  ✔        ✔          ✘          ✘              ✘               ✘           ✘
VAEGMM                 ✔        ✔          ✘          ✘              ✘               ✘           ✘
Prophet                ✘        ✘          ✔          ✘              ✘               ✘           ✘
Spectral Residual      ✘        ✘          ✔          ✘              ✘               ✔           ✔
Seq2Seq                ✘        ✘          ✔          ✘              ✘               ✘           ✔
Adversarial Detection

Detector            Tabular   Image   Time Series   Text   Categorical Features   Online   Feature Level
Adversarial AE         ✔        ✔          ✘          ✘              ✘               ✘           ✘
Roadmap
alibi-detect aims to be the go-to library for outlier, adversarial and concept drift detection in Python.
This means that the algorithms in the library need to handle:
Online detection with often stateful detectors.
Offline detection, where the detector is trained on a batch of unsupervised or semi-supervised data. This assumption resembles many real-world settings where labels are hard to come by.
The algorithms will cover the following data types:
Tabular, including both numerical and categorical data.
Images
Time series, both univariate and multivariate.
Text
It will also be possible to combine different algorithms in ensemble detectors.
The library currently covers both online and offline outlier detection algorithms for tabular data, images and time series, as well as an offline adversarial detector for tabular data and images. The near-term focus will be on concept drift detectors (initially for tabular data), extending outlier detectors to mixed data types, generative models for anomaly detection and leveraging labels in a semi-supervised setting.
This page was generated from doc/source/methods/mahalanobis.ipynb.
Mahalanobis Outlier Detector
Overview
The Mahalanobis online outlier detector aims to predict anomalies in tabular data. The algorithm calculates an outlier score, which is a measure of distance from the center of the feature distribution (Mahalanobis distance). If this outlier score is higher than a user-defined threshold, the observation is flagged as an outlier. The algorithm is online, which means that it starts without knowledge about the distribution of the features and learns as requests arrive. Consequently, you should expect the output to be poor at the start and to improve over time. The algorithm is suitable for low to medium dimensional tabular data.
The algorithm is also able to include categorical variables. The fit step first computes pairwise distances between the categories of each categorical variable. The pairwise distances are based on either the model predictions (MVDM method) or the context provided by the other variables in the dataset (ABDM method). For MVDM, we use the difference between the conditional model prediction probabilities of each category. This method is based on the Modified Value Difference Metric (MVDM) by Cost et al (1993). ABDM stands for Association-Based Distance Metric, a categorical distance measure introduced by Le et al (2005). ABDM infers context from the presence of other variables in the data and computes a dissimilarity measure based on the Kullback-Leibler divergence. Both methods can also be combined as ABDM-MVDM. We can then apply multidimensional scaling to project the pairwise distances into Euclidean space.
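The outlier score itself is the classical Mahalanobis distance. A minimal numpy sketch of the scoring idea (the library's implementation additionally updates the mean and covariance matrix online and works on principal components):

```python
import numpy as np

def mahalanobis_distance(x, mean, cov):
    # distance of x from the center of a distribution with the given
    # mean and covariance matrix
    delta = x - mean
    return float(np.sqrt(delta @ np.linalg.inv(cov) @ delta))

# toy 2-d feature distribution
X = np.array([[0., 0.], [1., 1.], [0., 1.], [1., 0.], [.5, .5]])
mean, cov = X.mean(axis=0), np.cov(X, rowvar=False)

score_center = mahalanobis_distance(np.array([.5, .5]), mean, cov)  # at the center
score_far = mahalanobis_distance(np.array([5., -5.]), mean, cov)    # far away
# a point far from the center gets a much larger score and would be
# flagged once the score exceeds the threshold
```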
Usage
Initialize
Parameters:

threshold: Mahalanobis distance threshold above which the instance is flagged as an outlier.
n_components: number of principal components used.
std_clip: feature-wise standard deviation used to clip the observations before updating the mean and covariance matrix.
start_clip: number of observations before clipping is applied.
max_n: algorithm behaves as if it has seen at most max_n points.
cat_vars: dictionary with the categorical columns as keys and the number of categories per categorical variable as values. Only needed if categorical variables are present.
ohe: boolean whether the categorical variables are one-hot encoded (OHE) or not. If not OHE, they are assumed to have ordinal encodings.
data_type: can specify data type added to metadata. E.g. 'tabular' or 'image'.
Initialized outlier detector example:
from alibi_detect.od import Mahalanobis
od = Mahalanobis(
    threshold=10.,
    n_components=2,
    std_clip=3,
    start_clip=100
)
Fit
We only need to fit the outlier detector if there are categorical variables present in the data. The following parameters can be specified:
X: training batch as a numpy array.
y: model class predictions or ground truth labels for X. Used for the 'mvdm' and 'abdm-mvdm' pairwise distance metrics. Not needed for 'abdm'.
d_type: pairwise distance metric used for categorical variables. Currently, 'abdm', 'mvdm' and 'abdm-mvdm' are supported. 'abdm' infers context from the other variables while 'mvdm' uses the model predictions. 'abdm-mvdm' is a weighted combination of the two metrics.
w: weight on the 'abdm' distance (between 0. and 1.) if d_type equals 'abdm-mvdm'.
disc_perc: list with percentiles used in the binning of numerical features for the 'abdm' and 'abdm-mvdm' pairwise distance measures.
standardize_cat_vars: standardize numerical values of categorical variables if True.
feature_range: tuple with min and max ranges to allow for numerical values of categorical variables. Min and max ranges can be floats or numpy arrays with dimension (1, number of features) for feature-wise ranges.
smooth: smoothing exponent between 0 and 1 for the distances. Lower values smooth out the difference in distance metric between different features.
center: whether to center the scaled distance measures. If False, the min distance for each feature except for the feature with the highest raw max distance will be the lower bound of the feature range, but the upper bound will be below the max feature range.
od.fit(
    X_train,
    d_type='abdm',
    disc_perc=[25, 50, 75]
)
It is often hard to find a good threshold value. If we have a batch of normal and outlier data and we know approximately the percentage of normal data in the batch, we can infer a suitable threshold:
od.infer_threshold(
    X,
    threshold_perc=95
)
Beware though that the outlier detector is stateful and every call to the score function will update the mean and covariance matrix, even when inferring the threshold.
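Conceptually, inferring the threshold boils down to taking a percentile of the instance level scores on that batch. A minimal sketch with hypothetical scores:

```python
import numpy as np

# If roughly 95% of the batch is assumed normal, the 95th percentile of the
# instance scores is a sensible outlier threshold.
scores = np.array([.1, .2, .15, .3, .25, .2, .1, .35, .3, 5.])  # one extreme instance
threshold = np.percentile(scores, 95)
is_outlier = (scores > threshold).astype(int)  # flags only the extreme instance
```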
Detect
We detect outliers by simply calling predict on a batch of instances X to compute the instance level Mahalanobis distances. We can also return the instance level outlier score by setting return_instance_score to True.
The prediction takes the form of a dictionary with meta and data keys. meta contains the detector's metadata while data is also a dictionary which contains the actual predictions stored in the following keys:

is_outlier: boolean whether instances are above the threshold and therefore outlier instances. The array is of shape (batch size,).
instance_score: contains instance level scores if return_instance_score equals True.
preds = od.predict(
    X,
    return_instance_score=True
)
Examples
This page was generated from doc/source/methods/iforest.ipynb.
Isolation Forest Outlier Detector
Overview
Isolation forests (IF) are tree-based models specifically used for outlier detection. The IF isolates observations by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of the selected feature. The number of splits required to isolate a sample is equivalent to the path length from the root node to the terminating node. This path length, averaged over a forest of random trees, is a measure of normality and is used to define an anomaly score. Outliers can typically be isolated more quickly, leading to shorter paths. The algorithm is suitable for low to medium dimensional tabular data.
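The anomaly score defined in the original isolation forest paper (Liu et al., 2008) makes this concrete: with \(h(x)\) the path length of instance \(x\), the score is

```
\[
s(x, n) = 2^{-\frac{E(h(x))}{c(n)}}
\]
```

where \(E(h(x))\) is the path length averaged over the trees and \(c(n)\) is the average path length of an unsuccessful search in a binary search tree with \(n\) samples, used to normalize the score. Scores close to 1 indicate anomalies, while scores well below 0.5 indicate normal instances.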
Usage
Initialize
Parameters:

threshold: threshold value for the outlier score above which the instance is flagged as an outlier.
n_estimators: number of base estimators in the ensemble. Defaults to 100.
max_samples: number of samples to draw from the training data to train each base estimator. If int, draw max_samples samples. If float, draw max_samples times the number of samples. If 'auto', max_samples = min(256, number of samples).
max_features: number of features to draw from the training data to train each base estimator. If int, draw max_features features. If float, draw max_features times the number of features.
bootstrap: whether to fit individual trees on random subsets of the training data, sampled with replacement.
n_jobs: number of jobs to run in parallel for fit and predict.
data_type: can specify data type added to metadata. E.g. 'tabular' or 'image'.
Initialized outlier detector example:
from alibi_detect.od import IForest
od = IForest(
    threshold=0.,
    n_estimators=100
)
Fit
We then need to train the outlier detector. The following parameters can be specified:

X: training batch as a numpy array.
sample_weight: array with shape (batch size,) used to assign different weights to each instance during training. Defaults to None.
od.fit(
    X_train
)
It is often hard to find a good threshold value. If we have a batch of normal and outlier data and we know approximately the percentage of normal data in the batch, we can infer a suitable threshold:
od.infer_threshold(
    X,
    threshold_perc=95
)
Detect
We detect outliers by simply calling predict on a batch of instances X to compute the instance level outlier scores. We can also return the instance level outlier score by setting return_instance_score to True.
The prediction takes the form of a dictionary with meta and data keys. meta contains the detector's metadata while data is also a dictionary which contains the actual predictions stored in the following keys:

is_outlier: boolean whether instances are above the threshold and therefore outlier instances. The array is of shape (batch size,).
instance_score: contains instance level scores if return_instance_score equals True.
preds = od.predict(
    X,
    return_instance_score=True
)
Examples
This page was generated from doc/source/methods/vae.ipynb.
Variational Auto-Encoder Outlier Detector
Overview
The Variational Auto-Encoder (VAE) outlier detector is first trained on a batch of unlabeled, but normal (inlier) data. Unsupervised or semi-supervised training is desirable since labeled data is often scarce. The VAE detector tries to reconstruct the input it receives. If the input data cannot be reconstructed well, the reconstruction error is high and the data can be flagged as an outlier. The reconstruction error is either measured as the mean squared error (MSE) between the input and the reconstructed instance or as the probability that both the input and the reconstructed instance are generated by the same process. The algorithm is suitable for tabular and image data.
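The MSE scoring idea can be sketched in a few lines of numpy; the reconstruct function below is a toy stand-in for a trained encoder/decoder pass, and the threshold is hypothetical:

```python
import numpy as np

def reconstruct(X):
    # toy stand-in for a trained model: a network fit on inliers near zero
    # reconstructs every instance as zeros
    return np.zeros_like(X)

X = np.array([[.1, -.1], [0., .2], [4., 4.]])         # last instance is anomalous
X_recon = reconstruct(X)
instance_score = np.mean((X - X_recon) ** 2, axis=1)  # MSE per instance
is_outlier = (instance_score > 1.).astype(int)        # hypothetical threshold
# only the badly reconstructed instance is flagged
```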
Usage
Initialize
Parameters:

threshold: threshold value above which the instance is flagged as an outlier.
score_type: scoring method used to detect outliers. Currently only the default 'mse' is supported.
latent_dim: latent dimension of the VAE.
encoder_net: tf.keras.Sequential instance containing the encoder network. Example:
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, InputLayer

encoder_net = tf.keras.Sequential(
    [
        InputLayer(input_shape=(32, 32, 3)),
        Conv2D(64, 4, strides=2, padding='same', activation=tf.nn.relu),
        Conv2D(128, 4, strides=2, padding='same', activation=tf.nn.relu),
        Conv2D(512, 4, strides=2, padding='same', activation=tf.nn.relu)
    ])
decoder_net: tf.keras.Sequential instance containing the decoder network. Example:
decoder_net = tf.keras.Sequential(
    [
        InputLayer(input_shape=(latent_dim,)),
        Dense(4*4*128),
        Reshape(target_shape=(4, 4, 128)),
        Conv2DTranspose(256, 4, strides=2, padding='same', activation=tf.nn.relu),
        Conv2DTranspose(64, 4, strides=2, padding='same', activation=tf.nn.relu),
        Conv2DTranspose(3, 4, strides=2, padding='same', activation='sigmoid')
    ])
vae: instead of using a separate encoder and decoder, the VAE can also be passed as a tf.keras.Model.
samples: number of samples drawn during detection for each instance to detect.
beta: weight on the KL-divergence loss term following the \(\beta\)-VAE framework. Default equals 1.
data_type: can specify data type added to metadata. E.g. 'tabular' or 'image'.
Initialized outlier detector example:
from alibi_detect.od import OutlierVAE
od = OutlierVAE(
    threshold=0.1,
    encoder_net=encoder_net,
    decoder_net=decoder_net,
    latent_dim=1024,
    samples=10
)
Fit
We then need to train the outlier detector. The following parameters can be specified:

X: training batch as a numpy array of preferably normal data.
loss_fn: loss function used for training. Defaults to the elbo loss.
optimizer: optimizer used for training. Defaults to Adam with learning rate 1e-3.
cov_elbo: dictionary with covariance matrix options in case the elbo loss function is used. Either use the full covariance matrix inferred from X (dict(cov_full=None)), only the variance (dict(cov_diag=None)) or a float representing the same standard deviation for each feature (e.g. dict(sim=.05)), which is the default.
epochs: number of training epochs.
batch_size: batch size used during training.
verbose: boolean whether to print training progress.
log_metric: additional metrics whose progress will be displayed if verbose equals True.
od.fit(
    X_train,
    epochs=50
)
It is often hard to find a good threshold value. If we have a batch of normal and outlier data and we know approximately the percentage of normal data in the batch, we can infer a suitable threshold:
od.infer_threshold(
    X,
    threshold_perc=95
)
Detect
We detect outliers by simply calling predict on a batch of instances X. Detection can be customized via the following parameters:

outlier_type: either 'instance' or 'feature'. If the outlier type equals 'instance', the outlier score at the instance level will be used to classify the instance as an outlier or not. If 'feature' is selected, outlier detection happens at the feature level (e.g. by pixel in images).
outlier_perc: percentage of the sorted (descending) feature level outlier scores. We might for instance want to flag an image as an outlier if at least 20% of the pixel values are on average above the threshold. In this case, we set outlier_perc to 20. The default value is 100 (using all the features).
return_feature_score: boolean whether to return the feature level outlier scores.
return_instance_score: boolean whether to return the instance level outlier scores.
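The role of outlier_perc can be sketched as follows; this is a simplified illustration of the idea, not the library's internal implementation:

```python
import numpy as np

def instance_flag(feature_scores, threshold, outlier_perc=100.):
    # keep the top `outlier_perc` percent of feature scores (sorted descending)
    # and flag the instance if their mean exceeds the threshold
    sorted_scores = np.sort(feature_scores)[::-1]
    n_keep = max(1, int(len(sorted_scores) * outlier_perc / 100.))
    return float(sorted_scores[:n_keep].mean()) > threshold

# image with a small anomalous region: 2 high-scoring pixels out of 10
pixel_scores = np.array([5., 4., .1, .1, .1, .1, .1, .1, .1, .1])
# averaging over all pixels dilutes the anomaly below the threshold ...
flag_all = instance_flag(pixel_scores, threshold=1., outlier_perc=100.)
# ... while the top 20% of scores reveal the outlying region
flag_top = instance_flag(pixel_scores, threshold=1., outlier_perc=20.)
```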
The prediction takes the form of a dictionary with meta and data keys. meta contains the detector's metadata while data is also a dictionary which contains the actual predictions stored in the following keys:

is_outlier: boolean whether instances or features are above the threshold and therefore outliers. If outlier_type equals 'instance', then the array is of shape (batch size,). If it equals 'feature', then the array is of shape (batch size, instance shape).
feature_score: contains feature level scores if return_feature_score equals True.
instance_score: contains instance level scores if return_instance_score equals True.
preds = od.predict(
    X,
    outlier_type='instance',
    outlier_perc=75,
    return_feature_score=True,
    return_instance_score=True
)
This page was generated from doc/source/methods/ae.ipynb.
Auto-Encoder Outlier Detector
Overview
The Auto-Encoder (AE) outlier detector is first trained on a batch of unlabeled, but normal (inlier) data. Unsupervised training is desirable since labeled data is often scarce. The AE detector tries to reconstruct the input it receives. If the input data cannot be reconstructed well, the reconstruction error is high and the data can be flagged as an outlier. The reconstruction error is measured as the mean squared error (MSE) between the input and the reconstructed instance.
Usage
Initialize
Parameters:

threshold: threshold value above which the instance is flagged as an outlier.
encoder_net: tf.keras.Sequential instance containing the encoder network. Example:
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, InputLayer

encoder_net = tf.keras.Sequential(
    [
        InputLayer(input_shape=(32, 32, 3)),
        Conv2D(64, 4, strides=2, padding='same', activation=tf.nn.relu),
        Conv2D(128, 4, strides=2, padding='same', activation=tf.nn.relu),
        Conv2D(512, 4, strides=2, padding='same', activation=tf.nn.relu)
    ])
decoder_net: tf.keras.Sequential instance containing the decoder network. Example:
decoder_net = tf.keras.Sequential(
    [
        InputLayer(input_shape=(1024,)),
        Dense(4*4*128),
        Reshape(target_shape=(4, 4, 128)),
        Conv2DTranspose(256, 4, strides=2, padding='same', activation=tf.nn.relu),
        Conv2DTranspose(64, 4, strides=2, padding='same', activation=tf.nn.relu),
        Conv2DTranspose(3, 4, strides=2, padding='same', activation='sigmoid')
    ])
ae: instead of using a separate encoder and decoder, the AE can also be passed as a tf.keras.Model.
data_type: can specify data type added to metadata. E.g. 'tabular' or 'image'.
Initialized outlier detector example:
from alibi_detect.od import OutlierAE
od = OutlierAE(threshold=0.1,
               encoder_net=encoder_net,
               decoder_net=decoder_net)
Fit
We then need to train the outlier detector. The following parameters can be specified:

X: training batch as a numpy array of preferably normal data.
loss_fn: loss function used for training. Defaults to the Mean Squared Error loss.
optimizer: optimizer used for training. Defaults to Adam with learning rate 1e-3.
epochs: number of training epochs.
batch_size: batch size used during training.
verbose: boolean whether to print training progress.
log_metric: additional metrics whose progress will be displayed if verbose equals True.
od.fit(X_train, epochs=50)
It is often hard to find a good threshold value. If we have a batch of normal and outlier data and we know approximately the percentage of normal data in the batch, we can infer a suitable threshold:
od.infer_threshold(X, threshold_perc=95)
Detect
We detect outliers by simply calling predict on a batch of instances X. Detection can be customized via the following parameters:

outlier_type: either 'instance' or 'feature'. If the outlier type equals 'instance', the outlier score at the instance level will be used to classify the instance as an outlier or not. If 'feature' is selected, outlier detection happens at the feature level (e.g. by pixel in images).
outlier_perc: percentage of the sorted (descending) feature level outlier scores. We might for instance want to flag an image as an outlier if at least 20% of the pixel values are on average above the threshold. In this case, we set outlier_perc to 20. The default value is 100 (using all the features).
return_feature_score: boolean whether to return the feature level outlier scores.
return_instance_score: boolean whether to return the instance level outlier scores.
The prediction takes the form of a dictionary with meta and data keys. meta contains the detector's metadata while data is also a dictionary which contains the actual predictions stored in the following keys:

is_outlier: boolean whether instances or features are above the threshold and therefore outliers. If outlier_type equals 'instance', then the array is of shape (batch size,). If it equals 'feature', then the array is of shape (batch size, instance shape).
feature_score: contains feature level scores if return_feature_score equals True.
instance_score: contains instance level scores if return_instance_score equals True.
preds = od.predict(X,
                   outlier_type='instance',
                   outlier_perc=75,
                   return_feature_score=True,
                   return_instance_score=True)
Examples
This page was generated from doc/source/methods/vaegmm.ipynb.
Variational Auto-Encoding Gaussian Mixture Model Outlier Detector
Overview
The Variational Auto-Encoding Gaussian Mixture Model (VAEGMM) Outlier Detector follows the Deep Autoencoding Gaussian Mixture Model for Unsupervised Anomaly Detection paper but with a VAE instead of a regular Auto-Encoder. The encoder compresses the data while the reconstructed instances generated by the decoder are used to create additional features based on the reconstruction error between the input and the reconstructions. These features are combined with encodings and fed into a Gaussian Mixture Model (GMM). The VAEGMM outlier detector is first trained on a batch of unlabeled, but normal (inlier) data. Unsupervised or semi-supervised training is desirable since labeled data is often scarce. The sample energy of the GMM can then be used to determine whether an instance is an outlier (high sample energy) or not (low sample energy). The algorithm is suitable for tabular and image data.
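The additional features fed to the GMM alongside the encoding default to the mean squared reconstruction error and the cosine similarity between the input and its reconstruction. A minimal numpy sketch of these two features (a simplified illustration, not the library code):

```python
import numpy as np

def recon_features(X, X_recon):
    # per-instance mean squared reconstruction error ...
    mse = np.mean((X - X_recon) ** 2, axis=1)
    # ... and cosine similarity between input and reconstruction
    cos = np.sum(X * X_recon, axis=1) / (
        np.linalg.norm(X, axis=1) * np.linalg.norm(X_recon, axis=1))
    return np.stack([mse, cos], axis=1)   # shape (batch size, 2)

X = np.array([[1., 0.], [0., 1.]])
X_recon = np.array([[.9, .1], [1., 0.]])  # second instance reconstructed badly
feats = recon_features(X, X_recon)
# these two features are concatenated with the latent encoding, which is why
# the GMM density network takes inputs of dimension latent_dim + 2
```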
Usage
Initialize
Parameters:

threshold: threshold value for the sample energy above which the instance is flagged as an outlier.
latent_dim: latent dimension of the VAE.
n_gmm: number of components in the GMM.
encoder_net: tf.keras.Sequential instance containing the encoder network. Example:
import tensorflow as tf
from tensorflow.keras.layers import Dense, InputLayer

encoder_net = tf.keras.Sequential(
    [
        InputLayer(input_shape=(n_features,)),
        Dense(60, activation=tf.nn.tanh),
        Dense(30, activation=tf.nn.tanh),
        Dense(10, activation=tf.nn.tanh),
        Dense(latent_dim, activation=None)
    ])
decoder_net: tf.keras.Sequential instance containing the decoder network. Example:
decoder_net = tf.keras.Sequential(
    [
        InputLayer(input_shape=(latent_dim,)),
        Dense(10, activation=tf.nn.tanh),
        Dense(30, activation=tf.nn.tanh),
        Dense(60, activation=tf.nn.tanh),
        Dense(n_features, activation=None)
    ])
gmm_density_net: layers for the GMM network wrapped in a tf.keras.Sequential class. Example:
gmm_density_net = tf.keras.Sequential(
    [
        InputLayer(input_shape=(latent_dim + 2,)),
        Dense(10, activation=tf.nn.tanh),
        Dense(n_gmm, activation=tf.nn.softmax)
    ])
vaegmm: instead of using a separate encoder, decoder and GMM density net, the VAEGMM can also be passed as a tf.keras.Model.
samples: number of samples drawn during detection for each instance to detect.
beta: weight on the KL-divergence loss term following the \(\beta\)-VAE framework. Default equals 1.
recon_features: function to extract features from the reconstructed instance by the decoder. Defaults to a combination of the mean squared reconstruction error and the cosine similarity between the original and reconstructed instances by the VAE.
data_type: can specify data type added to metadata. E.g. 'tabular' or 'image'.
Initialized outlier detector example:
from alibi_detect.od import OutlierVAEGMM
od = OutlierVAEGMM(
    threshold=7.5,
    encoder_net=encoder_net,
    decoder_net=decoder_net,
    gmm_density_net=gmm_density_net,
    latent_dim=4,
    n_gmm=2,
    samples=10
)
Fit
We then need to train the outlier detector. The following parameters can be specified:

X: training batch as a numpy array of preferably normal data.
loss_fn: loss function used for training. Defaults to the custom VAEGMM loss, which is a combination of the elbo loss, the sample energy of the GMM and a loss term penalizing small values on the diagonals of the covariance matrices in the GMM to avoid trivial solutions. It is important to balance the loss weights below so no single loss term dominates during the optimization.
w_recon: weight on the elbo loss term. Defaults to 1e-7.
w_energy: weight on the sample energy loss term. Defaults to 0.1.
w_cov_diag: weight on the covariance diagonals. Defaults to 0.005.
optimizer: optimizer used for training. Defaults to Adam with learning rate 1e-4.
cov_elbo: dictionary with covariance matrix options in case the elbo loss function is used. Either use the full covariance matrix inferred from X (dict(cov_full=None)), only the variance (dict(cov_diag=None)) or a float representing the same standard deviation for each feature (e.g. dict(sim=.05)), which is the default.
epochs: number of training epochs.
batch_size: batch size used during training.
verbose: boolean whether to print training progress.
log_metric: additional metrics whose progress will be displayed if verbose equals True.
od.fit(
    X_train,
    epochs=10,
    batch_size=1024
)
It is often hard to find a good threshold value. If we have a batch of normal and outlier data and we know approximately the percentage of normal data in the batch, we can infer a suitable threshold:
od.infer_threshold(
    X,
    threshold_perc=95
)
Detect
We detect outliers by simply calling predict on a batch of instances X to compute the instance level sample energies. We can also return the instance level outlier score by setting return_instance_score to True.
The prediction takes the form of a dictionary with meta and data keys. meta contains the detector's metadata while data is also a dictionary which contains the actual predictions stored in the following keys:

is_outlier: boolean whether instances are above the threshold and therefore outlier instances. The array is of shape (batch size,).
instance_score: contains instance level scores if return_instance_score equals True.
preds = od.predict(
    X,
    return_instance_score=True
)
Examples
This page was generated from doc/source/methods/aegmm.ipynb.
Auto-Encoding Gaussian Mixture Model Outlier Detector
Overview
The Auto-Encoding Gaussian Mixture Model (AEGMM) Outlier Detector follows the Deep Autoencoding Gaussian Mixture Model for Unsupervised Anomaly Detection paper. The encoder compresses the data while the reconstructed instances generated by the decoder are used to create additional features based on the reconstruction error between the input and the reconstructions. These features are combined with encodings and fed into a Gaussian Mixture Model (GMM). The AEGMM outlier detector is first trained on a batch of unlabeled, but normal (inlier) data. Unsupervised or semi-supervised training is desirable since labeled data is often scarce. The sample energy of the GMM can then be used to determine whether an instance is an outlier (high sample energy) or not (low sample energy). The algorithm is suitable for tabular and image data.
Usage
Initialize
Parameters:

threshold: threshold value for the sample energy above which the instance is flagged as an outlier.
n_gmm: number of components in the GMM.
encoder_net: tf.keras.Sequential instance containing the encoder network. Example:
import tensorflow as tf
from tensorflow.keras.layers import Dense, InputLayer

encoder_net = tf.keras.Sequential(
    [
        InputLayer(input_shape=(n_features,)),
        Dense(60, activation=tf.nn.tanh),
        Dense(30, activation=tf.nn.tanh),
        Dense(10, activation=tf.nn.tanh),
        Dense(latent_dim, activation=None)
    ])
decoder_net: tf.keras.Sequential instance containing the decoder network. Example:
decoder_net = tf.keras.Sequential(
    [
        InputLayer(input_shape=(latent_dim,)),
        Dense(10, activation=tf.nn.tanh),
        Dense(30, activation=tf.nn.tanh),
        Dense(60, activation=tf.nn.tanh),
        Dense(n_features, activation=None)
    ])
gmm_density_net: layers for the GMM network wrapped in a tf.keras.Sequential class. Example:
gmm_density_net = tf.keras.Sequential(
    [
        InputLayer(input_shape=(latent_dim + 2,)),
        Dense(10, activation=tf.nn.tanh),
        Dense(n_gmm, activation=tf.nn.softmax)
    ])
aegmm: instead of using a separate encoder, decoder and GMM density net, the AEGMM can also be passed as a tf.keras.Model.
recon_features: function to extract features from the reconstructed instance by the decoder. Defaults to a combination of the mean squared reconstruction error and the cosine similarity between the original and reconstructed instances by the AE.
data_type: can specify data type added to metadata. E.g. 'tabular' or 'image'.
Initialized outlier detector example:
from alibi_detect.od import OutlierAEGMM
od = OutlierAEGMM(
    threshold=7.5,
    encoder_net=encoder_net,
    decoder_net=decoder_net,
    gmm_density_net=gmm_density_net,
    n_gmm=2
)
Fit
We then need to train the outlier detector. The following parameters can be specified:

X: training batch as a numpy array of preferably normal data.
loss_fn: loss function used for training. Defaults to the custom AEGMM loss, which is a combination of the mean squared reconstruction error, the sample energy of the GMM and a loss term penalizing small values on the diagonals of the covariance matrices in the GMM to avoid trivial solutions. It is important to balance the loss weights below so no single loss term dominates during the optimization.
w_energy: weight on the sample energy loss term. Defaults to 0.1.
w_cov_diag: weight on the covariance diagonals. Defaults to 0.005.
optimizer: optimizer used for training. Defaults to Adam with learning rate 1e-4.
epochs: number of training epochs.
batch_size: batch size used during training.
verbose: boolean whether to print training progress.
log_metric: additional metrics whose progress will be displayed if verbose equals True.
od.fit(
    X_train,
    epochs=10,
    batch_size=1024
)
It is often hard to find a good threshold value. If we have a batch of normal and outlier data and we know approximately the percentage of normal data in the batch, we can infer a suitable threshold:
od.infer_threshold(
    X,
    threshold_perc=95
)
Detect
We detect outliers by simply calling predict on a batch of instances X to compute the instance level sample energies. We can also return the instance level outlier score by setting return_instance_score to True.
The prediction takes the form of a dictionary with meta and data keys. meta contains the detector's metadata while data is also a dictionary which contains the actual predictions stored in the following keys:

is_outlier: boolean whether instances are above the threshold and therefore outlier instances. The array is of shape (batch size,).
instance_score: contains instance level scores if return_instance_score equals True.
preds = od.predict(
X,
return_instance_score=True
)
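The returned dictionary can then be consumed directly. The sketch below uses a mock prediction dictionary that mirrors the structure described above (the metadata values are illustrative placeholders, not the exact fields of any particular detector):

```python
import numpy as np

# Mock prediction dictionary mirroring the detector output structure
preds = {
    'meta': {'detector_type': 'offline'},  # illustrative metadata
    'data': {
        'is_outlier': np.array([0, 1, 0, 1]),
        'instance_score': np.array([0.05, 0.42, 0.08, 0.61]),
    },
}

# Indices and count of the flagged instances
outlier_idx = np.where(preds['data']['is_outlier'] == 1)[0]
n_outliers = int(preds['data']['is_outlier'].sum())
```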
Examples
This page was generated from doc/source/methods/prophet.ipynb.
Prophet Outlier Detector
Overview
The Prophet outlier detector uses the Prophet time series forecasting package explained in this excellent paper. The underlying Prophet model is a decomposable univariate time series model combining trend, seasonality and holiday effects. The model forecast also includes an uncertainty interval around the estimated trend component, using the MAP estimate of the extrapolated model. Alternatively, full Bayesian inference can be done at the expense of increased compute. The upper and lower values of the uncertainty interval can then be used as outlier thresholds for each point in time. First, the distance from the observed value to the nearest uncertainty boundary (upper or lower) is computed. If the observation is within the boundaries, the outlier score equals the negative distance. As a result, the outlier score is lowest when the observation equals the model prediction. If the observation is outside of the boundaries, the score equals the distance measure and the observation is flagged as an outlier. One of the main drawbacks of the method, however, is that you need to refit the model as new data comes in. This is undesirable for applications with high throughput and real-time detection.
Usage
Initialize
Parameters:
threshold: width of the uncertainty intervals of the forecast, used as the outlier threshold. Equivalent to interval_width. If the instance lies outside of the uncertainty intervals, it is flagged as an outlier. If mcmc_samples equals 0, this is the uncertainty in the trend using the MAP estimate of the extrapolated model. If mcmc_samples > 0, the uncertainty over all parameters is used.
growth: 'linear' or 'logistic' to specify a linear or logistic trend.
cap: growth cap in case growth equals 'logistic'.
holidays: pandas DataFrame with columns 'holiday' (string) and 'ds' (dates), and optionally columns 'lower_window' and 'upper_window' which specify a range of days around the date to be included as holidays.
holidays_prior_scale: parameter controlling the strength of the holiday components. Higher values allow larger holiday effects, making the model more prone to overfitting.
country_holidays: include country-specific holidays via country abbreviations. The holidays for each country are provided by the holidays package in Python. A list of available countries and the country name to use is available on: https://github.com/dr-prodigy/python-holidays. Additionally, Prophet includes holidays for: Brazil (BR), Indonesia (ID), India (IN), Malaysia (MY), Vietnam (VN), Thailand (TH), Philippines (PH), Turkey (TU), Pakistan (PK), Bangladesh (BD), Egypt (EG), China (CN) and Russia (RU).
changepoint_prior_scale: parameter controlling the flexibility of the automatic changepoint selection. Large values allow many changepoints, potentially leading to overfitting.
changepoint_range: proportion of history in which trend changepoints will be estimated. Higher values mean more changepoints, potentially leading to overfitting.
seasonality_mode: either 'additive' or 'multiplicative'.
daily_seasonality: can be 'auto', True, False, or a number of Fourier terms to generate.
weekly_seasonality: can be 'auto', True, False, or a number of Fourier terms to generate.
yearly_seasonality: can be 'auto', True, False, or a number of Fourier terms to generate.
add_seasonality: manually add one or more seasonality components. Pass a list of dicts containing the keys 'name', 'period', 'fourier_order' (obligatory), and 'prior_scale' and 'mode' (optional).
seasonality_prior_scale: parameter controlling the strength of the seasonality model. Larger values allow the model to fit larger seasonal fluctuations, potentially leading to overfitting.
uncertainty_samples: number of simulated draws used to estimate uncertainty intervals.
mcmc_samples: if > 0, full Bayesian inference is done with the specified number of MCMC samples; if 0, MAP estimation is used.
Initialized outlier detector example:
from alibi_detect.od import OutlierProphet
od = OutlierProphet(
threshold=0.9,
growth='linear'
)
Fit
We then need to train the outlier detector. The fit
method takes a pandas DataFrame df with as columns 'ds' containing the dates or timestamps and 'y' for the time series being investigated. The date format is ideally YYYY-MM-DD and the timestamp format YYYY-MM-DD HH:MM:SS.
od.fit(df)
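For illustration, a frame of the expected shape could be constructed as follows (hypothetical hourly data; any DataFrame with 'ds' and 'y' columns in the formats above works):

```python
import numpy as np
import pandas as pd

# Hypothetical hourly series: 'ds' holds the timestamps, 'y' the observed values
dates = pd.date_range(start='2020-01-01', periods=96, freq='h')
values = np.sin(np.arange(96) * 2 * np.pi / 24)  # simple daily cycle
df = pd.DataFrame({'ds': dates, 'y': values})
```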
Detect
We detect outliers by simply calling predict
on a DataFrame df, again with columns 'ds' and 'y', to compute the instance level outlier scores. We can also return the instance level outlier score or the raw Prophet model forecast by setting return_instance_score
or return_forecast
to True, respectively. It is important that the dates or timestamps of the test data follow the training data.
The prediction takes the form of a dictionary with meta
and data
keys. meta
contains the detector's metadata while data
is also a dictionary which contains the actual predictions stored in the following keys:
is_outlier: DataFrame with columns 'ds' containing the dates or timestamps and 'is_outlier', a boolean whether instances are above the threshold and therefore outlier instances.
instance_score: DataFrame with 'ds' and 'instance_score' which contains the instance level scores if return_instance_score equals True.
forecast: DataFrame with the raw model predictions if return_forecast equals True. The DataFrame contains columns with the upper and lower boundaries ('yhat_upper' and 'yhat_lower'), the model predictions ('yhat'), and the decomposition of the prediction into the different components (trend, seasonality, holiday).
preds = od.predict(
df,
return_instance_score=True,
return_forecast=True
)
This page was generated from doc/source/methods/sr.ipynb.
Spectral Residual Outlier Detector
Overview
The Spectral Residual outlier detector is based on the paper Time-Series Anomaly Detection Service at Microsoft and is suitable for unsupervised online anomaly detection in univariate time series data. The algorithm first computes the Fourier Transform of the original data. Then it computes the spectral residual of the log amplitude of the transformed signal before applying the Inverse Fourier Transform to map the sequence back from the frequency to the time domain. This sequence is called the saliency map. The anomaly score is then computed as the relative difference between the saliency map values and their moving averages. If the score is above a threshold, the value at a specific timestep is flagged as an outlier. For more details, please check out the paper.
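The core saliency-map computation can be sketched in a few lines of numpy. This is an illustrative simplification of the steps above (FFT, spectral residual of the log amplitude, inverse FFT), not the library implementation:

```python
import numpy as np

def saliency_map(x, window_amp=20):
    """Illustrative spectral residual saliency map for a 1-D series
    (a simplification of the algorithm, not the library code)."""
    eps = 1e-8
    fft = np.fft.fft(x)
    log_amp = np.log(np.abs(fft) + eps)
    # Average log spectrum: moving average of the log amplitude
    kernel = np.ones(window_amp) / window_amp
    avg_log_amp = np.convolve(log_amp, kernel, mode='same')
    spectral_residual = log_amp - avg_log_amp
    # Back to the time domain, combining the residual with the original phase
    phase = np.angle(fft)
    return np.abs(np.fft.ifft(np.exp(spectral_residual + 1j * phase)))

x = np.sin(np.linspace(0, 8 * np.pi, 200))
x[100] += 5.0  # inject a point anomaly
sm = saliency_map(x)
```

The saliency map should light up around the injected spike, which is what the detector then compares against its moving average to score outliers.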
Usage
Initialize
Parameters:
threshold: threshold used to classify outliers; the relative saliency map distance from the moving average.
window_amp: window used for the moving average in the spectral residual computation. The spectral residual is the difference between the log amplitude of the Fourier Transform and a convolution of the log amplitude over window_amp.
window_local: window used for the moving average in the outlier score computation. The outlier score computes the relative difference between the saliency map and a moving average of the saliency map over window_local timesteps.
n_est_points: number of estimated points padded to the end of the sequence.
n_grad_points: number of points used for the gradient estimation of the additional points padded to the end of the sequence. The paper sets this value to 5.
Initialized outlier detector example:
from alibi_detect.od import SpectralResidual
od = SpectralResidual(
threshold=1.,
window_amp=20,
window_local=20,
n_est_points=10,
n_grad_points=5
)
It is often hard to find a good threshold value. If we have a time series containing both normal and outlier data and we know approximately the percentage of normal data in the time series, we can infer a suitable threshold:
od.infer_threshold(
X,
t=t, # array with timesteps, assumes dt=1 between observations if omitted
threshold_perc=95
)
Detect
We detect outliers by simply calling predict
on a time series X
to compute the outlier scores and flag the anomalies. We can also return the instance (timestep) level outlier score by setting return_instance_score
to True.
The prediction takes the form of a dictionary with meta
and data
keys. meta
contains the detector's metadata while data
is also a dictionary which contains the actual predictions stored in the following keys:
is_outlier: boolean whether instances are above the threshold and therefore outlier instances. The array is of shape (timesteps,).
instance_score: contains instance level scores if return_instance_score equals True.
preds = od.predict(
X,
t=t, # array with timesteps, assumes dt=1 between observations if omitted
return_instance_score=True
)
This page was generated from doc/source/methods/seq2seq.ipynb.
Sequence-to-Sequence (Seq2Seq) Outlier Detector
Overview
The Sequence-to-Sequence (Seq2Seq) outlier detector consists of 2 main building blocks: an encoder and a decoder. The encoder consists of a Bidirectional LSTM which processes the input sequence and initializes the decoder. The LSTM decoder then makes sequential predictions for the output sequence. In our case, the decoder aims to reconstruct the input sequence. If the input data cannot be reconstructed well, the reconstruction error is high and the data can be flagged as an outlier. The reconstruction error is measured as the mean squared error (MSE) between the input and the reconstructed instance.
Since even for normal data the reconstruction error can be state-dependent, we add an outlier threshold estimator network to the Seq2Seq model. This network takes in the hidden state of the decoder at each timestep and predicts the estimated reconstruction error for normal data. As a result, the outlier threshold is not static and becomes a function of the model state. This is similar to Park et al. (2017), but while they train the threshold estimator separately from the Seq2Seq model with a Support Vector Regressor, we train a neural net regression network end-to-end with the Seq2Seq model.
The detector is first trained on a batch of unlabeled, but normal (inlier) data. Unsupervised training is desirable since labeled data is often scarce. The Seq2Seq outlier detector is suitable for both univariate and multivariate time series.
Usage
Initialize
Parameters:
n_features: number of features in the time series.
seq_len: sequence length fed into the Seq2Seq model.
threshold: threshold used for outlier detection. Can be a float or a feature-wise array.
seq2seq: optionally pass an already defined or pretrained Seq2Seq model to the outlier detector as a tf.keras.Model.
threshold_net: optionally pass the layers for the threshold estimation network wrapped in a tf.keras.Sequential instance. Example:
threshold_net = tf.keras.Sequential(
[
InputLayer(input_shape=(seq_len, latent_dim)),
Dense(64, activation=tf.nn.relu),
Dense(64, activation=tf.nn.relu),
])
latent_dim: latent dimension of the encoder and decoder.
output_activation: activation used in the Dense output layer of the decoder.
beta: weight on the threshold estimation mean squared error (MSE) loss term.
Initialized outlier detector example:
from alibi_detect.od import OutlierSeq2Seq
n_features = 2
seq_len = 50
od = OutlierSeq2Seq(n_features,
seq_len,
threshold=None,
threshold_net=threshold_net,
latent_dim=100)
Fit
We then need to train the outlier detector. The following parameters can be specified:
X: univariate or multivariate time series array with preferably normal data used for training. Shape equals (batch, n_features) or (batch, seq_len, n_features).
loss_fn: loss function used for training. Defaults to the MSE loss.
optimizer: optimizer used for training. Defaults to Adam with learning rate 1e-3.
epochs: number of training epochs.
batch_size: batch size used during training.
verbose: boolean whether to print training progress.
log_metric: additional metrics whose progress will be displayed if verbose equals True.
od.fit(X_train, epochs=20)
It is often hard to find a good threshold value. If we have a batch of normal and outlier data and we know approximately the percentage of normal data in the batch, we can infer a suitable threshold. We can either set the threshold over both features combined or determine a feature-wise threshold. Here we opt for the feature-wise threshold, which is for instance useful when different features have different variance or sensitivity to outliers. The snippet assumes there are about 5% outliers in the first feature and 10% in the second:
od.infer_threshold(X, threshold_perc=np.array([95, 90]))
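Under the hood, a feature-wise threshold boils down to taking a per-feature percentile of the outlier scores. A minimal numpy illustration of the idea (with synthetic scores, not the detector's internals):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical feature-wise outlier scores for 1000 instances and 2 features
scores = rng.normal(size=(1000, 2)) ** 2
threshold_perc = np.array([95, 90])
# One threshold per feature: feature 0 at the 95th percentile, feature 1 at the 90th
thresholds = np.array([np.percentile(scores[:, i], threshold_perc[i])
                       for i in range(scores.shape[1])])
is_outlier = (scores > thresholds).astype(int)  # broadcasts per feature
```

By construction, roughly 5% of the first feature's scores and 10% of the second feature's scores end up above their respective thresholds.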
Detect
We detect outliers by simply calling predict
on a batch of instances X
. Detection can be customized via the following parameters:
outlier_type: either 'instance' or 'feature'. If the outlier type equals 'instance', the outlier score at the instance level will be used to classify the instance as an outlier or not. If 'feature' is selected, outlier detection happens at the feature level. It is important to distinguish 2 use cases:
X has shape (batch, n_features): there are batch instances with n_features features per instance.
X has shape (batch, seq_len, n_features): now there are batch instances with seq_len x n_features features per instance.
outlier_perc: percentage of the sorted (descending) feature level outlier scores. We might for instance want to flag a multivariate time series as an outlier at a specific timestamp if at least 75% of the feature values are on average above the threshold. In this case, we set outlier_perc to 75. The default value is 100 (using all the features).
return_feature_score: boolean whether to return the feature level outlier scores.
return_instance_score: boolean whether to return the instance level outlier scores.
The prediction takes the form of a dictionary with meta
and data
keys. meta
contains the detector's metadata while data
is also a dictionary which contains the actual predictions stored in the following keys:
is_outlier: boolean whether instances or features are above the threshold and therefore outliers. If outlier_type equals 'instance', then the array is of shape (batch,). If it equals 'feature', then the array is of shape (batch, seq_len, n_features) or (batch, n_features), depending on the shape of X.
feature_score: contains feature level scores if return_feature_score equals True.
instance_score: contains instance level scores if return_instance_score equals True.
preds = od.predict(X,
outlier_type='instance',
outlier_perc=100,
return_feature_score=True,
return_instance_score=True)
This page was generated from doc/source/methods/adversarialae.ipynb.
Adversarial Auto-Encoder Detector
Overview
The adversarial detector follows the method explained in the Adversarial Detection and Correction by Matching Prediction Distributions paper. Usually, autoencoders are trained to find a transformation \(T\) that reconstructs the input instance \(x\) as accurately as possible, with loss functions suited to capture the similarities between \(x\) and \(x'\), such as the mean squared reconstruction error. The novelty of the adversarial autoencoder (AE) detector relies on the use of a classification model-dependent loss function based on a distance metric in the output space of the model to train the autoencoder network. Given a classification model \(M\), we optimise the weights of the autoencoder such that the KL divergence between the model predictions on \(x\) and on \(x'\) is minimised. Without the presence of a reconstruction loss term, \(x'\) simply tries to make sure that the prediction probabilities \(M(x')\) and \(M(x)\) match without caring about the proximity of \(x'\) to \(x\). As a result, \(x'\) is allowed to live in different areas of the input feature space than \(x\), with different decision boundary shapes with respect to the model \(M\). The carefully crafted adversarial perturbation which is effective around \(x\) does not transfer to the new location of \(x'\) in the feature space, and the attack is therefore neutralised. Training of the autoencoder is unsupervised since we only need access to the model prediction probabilities and the normal training instances. We do not require any knowledge about the underlying adversarial attack, and the classifier weights are frozen during training.
The detector can be used as follows:
An adversarial score \(S\) is computed. \(S\) equals the KL divergence between the model predictions on \(x\) and \(x'\).
If \(S\) is above a threshold (explicitly defined or inferred from training data), the instance is flagged as adversarial.
For adversarial instances, the model \(M\) uses the reconstructed instance \(x'\) to make a prediction. If the adversarial score is below the threshold, the model makes a prediction on the original instance \(x\).
This procedure is illustrated in the diagram below:
The method is very flexible and can also be used to detect common data corruptions and perturbations which negatively impact the model performance. The algorithm works well on tabular and image data.
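The detect-and-correct rule above can be sketched as follows (mock scores and class predictions; the function name is hypothetical, not the library API):

```python
import numpy as np

def correct_predictions(score, preds_orig, preds_recon, threshold):
    """If the adversarial score exceeds the threshold, use the model
    prediction on the reconstructed instance x'; otherwise keep the
    prediction on the original instance x."""
    adversarial = score > threshold
    return np.where(adversarial, preds_recon, preds_orig), adversarial

score = np.array([0.1, 2.3, 0.05])    # mock adversarial scores
preds_orig = np.array([0, 7, 3])      # class predictions on x
preds_recon = np.array([0, 2, 3])     # class predictions on x'
corrected, flagged = correct_predictions(score, preds_orig, preds_recon, threshold=1.0)
```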
Usage
Initialize
Parameters:
threshold: threshold value above which the instance is flagged as an adversarial instance.
encoder_net: tf.keras.Sequential instance containing the encoder network. Example:
encoder_net = tf.keras.Sequential(
[
InputLayer(input_shape=(32, 32, 3)),
Conv2D(32, 4, strides=2, padding='same',
       activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
Conv2D(64, 4, strides=2, padding='same',
       activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
Conv2D(256, 4, strides=2, padding='same',
       activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
Flatten(),
Dense(40)
]
)
decoder_net: tf.keras.Sequential instance containing the decoder network. Example:
decoder_net = tf.keras.Sequential(
[
InputLayer(input_shape=(40,)),
Dense(4 * 4 * 128, activation=tf.nn.relu),
Reshape(target_shape=(4, 4, 128)),
Conv2DTranspose(256, 4, strides=2, padding='same',
                activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
Conv2DTranspose(64, 4, strides=2, padding='same',
                activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
Conv2DTranspose(3, 4, strides=2, padding='same',
                activation=None, kernel_regularizer=l1(1e-5))
]
)
ae: instead of using a separate encoder and decoder, the AE can also be passed as a tf.keras.Model.
model: the classifier as a tf.keras.Model. Example:
inputs = tf.keras.Input(shape=(input_dim,))
outputs = tf.keras.layers.Dense(output_dim, activation=tf.nn.softmax)(inputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
hidden_layer_kld: dictionary with as keys the number of the hidden layer(s) in the classification model which are extracted and used during training of the adversarial AE, and as values the output dimension for the hidden layer. Extending the training methodology to the hidden layers is optional and can further improve the adversarial correction mechanism.
model_hl: instead of passing a dictionary to hidden_layer_kld, a list of tf.keras models for the hidden layer KL divergence computation can be passed directly.
w_model_hl: weights assigned to the loss of each model in model_hl. Also used to weight the KL divergence contribution for each model in model_hl when computing the adversarial score.
temperature: temperature used for model prediction scaling. Temperature < 1 sharpens the prediction probability distribution, which can be beneficial for prediction distributions with high entropy.
data_type: can specify the data type added to the metadata, e.g. 'tabular' or 'image'.
Initialized adversarial detector example:
from alibi_detect.ad import AdversarialAE
ad = AdversarialAE(
encoder_net=encoder_net,
decoder_net=decoder_net,
model=model,
temperature=0.5
)
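The effect of the temperature parameter can be illustrated on a single prediction vector. This is a sketch of standard temperature scaling, assuming the probabilities are rescaled as \(p_i^{1/T}\) and renormalized (an illustration of the concept, not the detector's exact internals):

```python
import numpy as np

def temperature_scale(probs, temperature):
    # Rescale probabilities as p_i^(1/T) and renormalize:
    # T < 1 sharpens the distribution, T > 1 flattens it
    scaled = probs ** (1.0 / temperature)
    return scaled / scaled.sum()

p = np.array([0.6, 0.3, 0.1])
sharp = temperature_scale(p, 0.5)  # T < 1: more peaked
flat = temperature_scale(p, 2.0)   # T > 1: closer to uniform
```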
Fit
We then need to train the adversarial detector. The following parameters can be specified:
X: training batch as a numpy array.
loss_fn: loss function used for training. Defaults to the custom adversarial loss.
w_model: weight on the loss term minimizing the KL divergence between model prediction probabilities on the original and reconstructed instance. Defaults to 1.
w_recon: weight on the mean squared error reconstruction loss term. Defaults to 0.
optimizer: optimizer used for training. Defaults to Adam with learning rate 1e-3.
epochs: number of training epochs.
batch_size: batch size used during training.
verbose: boolean whether to print training progress.
log_metric: additional metrics whose progress will be displayed if verbose equals True.
preprocess_fn: optional data preprocessing function applied per batch during training.
ad.fit(X_train, epochs=50)
The threshold for the adversarial score can be set via infer_threshold
. We need to pass a batch of instances \(X\) and specify what percentage of those we consider to be normal via threshold_perc
. Even if we only have normal instances in the batch, it might be best to set the threshold value a bit lower (e.g. \(95\)%) since the model could have misclassified training instances, leading to a higher score if the reconstruction picked up features from the correct class, or some instances might look adversarial in the first place.
ad.infer_threshold(X_train, threshold_perc=95, batch_size=64)
Detect
We detect adversarial instances by simply calling predict
on a batch of instances X
. We can also return the instance level adversarial score by setting return_instance_score
to True.
The prediction takes the form of a dictionary with meta
and data
keys. meta
contains the detector's metadata while data
is also a dictionary which contains the actual predictions stored in the following keys:
is_adversarial: boolean whether instances are above the threshold and therefore adversarial instances. The array is of shape (batch size,).
instance_score: contains instance level scores if return_instance_score equals True.
preds_detect = ad.predict(X, batch_size=64, return_instance_score=True)
Correct
We can immediately apply the procedure sketched out in the above diagram via correct
. The method also returns a dictionary with meta
and data
keys. On top of the information returned by detect
, 3 additional fields are returned under data
:
corrected: model predictions by following the adversarial detection and correction procedure.
no_defense: model predictions without the adversarial correction.
defense: model predictions where each instance is corrected by the defense, regardless of the adversarial score.
preds_correct = ad.correct(X, batch_size=64, return_instance_score=True)
Examples
This page was generated from examples/od_mahalanobis_kddcup.ipynb.
Mahalanobis outlier detection on KDD Cup '99 dataset
Method
The Mahalanobis online outlier detector aims to predict anomalies in tabular data. The algorithm calculates an outlier score, which is a measure of the distance from the center of the feature distribution (the Mahalanobis distance). If this outlier score is higher than a user-defined threshold, the observation is flagged as an outlier. The algorithm is online, which means that it starts without knowledge about the distribution of the features and learns as requests arrive. Consequently, you should expect the output to be bad at the start and to improve over time.
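The distance itself is the classic Mahalanobis distance. A minimal numpy sketch on synthetic data (the online detector additionally updates the mean and covariance incrementally as requests arrive, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # hypothetical tabular batch of normal data
mean = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))

def mahalanobis_distance(x, mean, cov_inv):
    # Distance of x from the feature distribution's center,
    # scaled by the inverse covariance
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

score_typical = mahalanobis_distance(X[0], mean, cov_inv)          # a normal point
score_far = mahalanobis_distance(mean + 10.0, mean, cov_inv)       # a far-away point
```

A point far from the center receives a much larger score than a typical point, and the detector flags it once the score exceeds the threshold.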
Dataset
The outlier detector needs to detect computer network intrusions using TCP dump data for a local-area network (LAN) simulating a typical U.S. Air Force LAN. A connection is a sequence of TCP packets starting and ending at some well-defined times, between which data flows to and from a source IP address to a target IP address under some well-defined protocol. Each connection is labeled as either normal or as an attack.
There are 4 types of attacks in the dataset:
DOS: denial-of-service, e.g. syn flood;
R2L: unauthorized access from a remote machine, e.g. guessing password;
U2R: unauthorized access to local superuser (root) privileges;
probing: surveillance and other probing, e.g., port scanning.
The dataset contains about 5 million connection records.
There are 3 types of features:
basic features of individual connections, e.g. duration of connection
content features within a connection, e.g. number of failed login attempts
traffic features within a 2 second window, e.g. number of connections to the same host as the current connection
[1]:
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import seaborn as sns
from sklearn.metrics import confusion_matrix, f1_score
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder
from alibi_detect.od import Mahalanobis
from alibi_detect.datasets import fetch_kdd
from alibi_detect.utils.data import create_outlier_batch
from alibi_detect.utils.fetching import fetch_detector
from alibi_detect.utils.mapping import ord2ohe
from alibi_detect.utils.saving import save_detector, load_detector
from alibi_detect.utils.visualize import plot_instance_score, plot_roc
Load dataset
We only keep a number of continuous (18 out of 41) features.
[2]:
kddcup = fetch_kdd(percent10=True) # only load 10% of the dataset
print(kddcup.data.shape, kddcup.target.shape)
(494021, 18) (494021,)
Assume that a machine learning model is trained on normal instances of the dataset (not outliers) and standardization is applied:
[3]:
np.random.seed(0)
normal_batch = create_outlier_batch(kddcup.data, kddcup.target, n_samples=100000, perc_outlier=0)
X_train, y_train = normal_batch.data.astype('float'), normal_batch.target
print(X_train.shape, y_train.shape)
print('{}% outliers'.format(100 * y_train.mean()))
(100000, 18) (100000,)
0.0% outliers
[4]:
mean, stdev = X_train.mean(axis=0), X_train.std(axis=0)
Load or define outlier detector
The pre-trained outlier and adversarial detectors used in the example notebooks can be found here. You can use the built-in fetch_detector
function which saves the pre-trained models in a local directory filepath
and loads the detector. Alternatively, you can initialize a detector from scratch.
Be aware that Mahalanobis
is an online, stateful outlier detector. Saving or loading a Mahalanobis detector therefore also saves and loads the state of the detector. This allows the user to warm up the detector before deploying it into production.
[5]:
load_outlier_detector = False
[6]:
filepath = 'my_path' # change to directory where model is downloaded
if load_outlier_detector: # load initialized outlier detector
detector_type = 'outlier'
dataset = 'kddcup'
detector_name = 'Mahalanobis'
od = fetch_detector(filepath, detector_type, dataset, detector_name)
filepath = os.path.join(filepath, detector_name)
else: # initialize and save outlier detector
threshold = None # scores above threshold are classified as outliers
n_components = 2 # nb of components used in PCA
std_clip = 3 # clip values used to compute mean and cov above "std_clip" standard deviations
start_clip = 20 # start clipping values after "start_clip" instances
od = Mahalanobis(threshold,
n_components=n_components,
std_clip=std_clip,
start_clip=start_clip)
save_detector(od, filepath) # save outlier detector
WARNING:alibi_detect.od.mahalanobis:No threshold level set. Need to infer threshold using `infer_threshold`.
If load_outlier_detector
equals False, the warning tells us we still need to set the outlier threshold. This can be done with the infer_threshold
method. We need to pass a batch of instances and specify what percentage of those we consider to be normal via threshold_perc
. Let's assume we have some data which we know contains around 5% outliers. The percentage of outliers can be set with perc_outlier
in the create_outlier_batch
function.
[7]:
np.random.seed(0)
perc_outlier = 5
threshold_batch = create_outlier_batch(kddcup.data, kddcup.target, n_samples=1000, perc_outlier=perc_outlier)
X_threshold, y_threshold = threshold_batch.data.astype('float'), threshold_batch.target
X_threshold = (X_threshold - mean) / stdev
print('{}% outliers'.format(100 * y_threshold.mean()))
5.0% outliers
[8]:
od.infer_threshold(X_threshold, threshold_perc=100-perc_outlier)
print('New threshold: {}'.format(od.threshold))
threshold = od.threshold
New threshold: 10.670934046719518
Detect outliers
We now generate a batch of data with 10% outliers, standardize those with the mean
and stdev
values obtained from the normal data (inliers) and detect the outliers in the batch.
[9]:
np.random.seed(1)
outlier_batch = create_outlier_batch(kddcup.data, kddcup.target, n_samples=1000, perc_outlier=10)
X_outlier, y_outlier = outlier_batch.data.astype('float'), outlier_batch.target
X_outlier = (X_outlier - mean) / stdev
print(X_outlier.shape, y_outlier.shape)
print('{}% outliers'.format(100 * y_outlier.mean()))
(1000, 18) (1000,)
10.0% outliers
Predict outliers:
[10]:
od_preds = od.predict(X_outlier, return_instance_score=True)
We can now save the warmed up outlier detector:
[11]:
save_detector(od, filepath)
Display results
F1 score and confusion matrix:
[12]:
labels = outlier_batch.target_names
y_pred = od_preds['data']['is_outlier']
f1 = f1_score(y_outlier, y_pred)
print('F1 score: {}'.format(f1))
cm = confusion_matrix(y_outlier, y_pred)
df_cm = pd.DataFrame(cm, index=labels, columns=labels)
sns.heatmap(df_cm, annot=True, cbar=True, linewidths=.5)
plt.show()
F1 score: 0.9693877551020408
Plot instance level outlier scores vs. the outlier threshold:
[13]:
plot_instance_score(od_preds, y_outlier, labels, od.threshold, ylim=(0,50))
We can also plot the ROC curve for the outlier scores of the detector:
[14]:
roc_data = {'MD': {'scores': od_preds['data']['instance_score'], 'labels': y_outlier}}
plot_roc(roc_data)
Include categorical variables
So far we only tracked continuous variables. We can however also include categorical variables. The fit
step first computes pairwise distances between the categories of each categorical variable. The pairwise distances are based on either the model predictions (MVDM method) or the context provided by the other variables in the dataset (ABDM method). For MVDM, we use the difference between the conditional model prediction probabilities of each category. This method is based on the Modified Value Difference Metric (MVDM) by Cost et al (1993). ABDM stands for Association-Based Distance Metric, a categorical distance measure introduced by Le et al (2005). ABDM infers context from the presence of other variables in the data and computes a dissimilarity measure based on the Kullback-Leibler divergence. Both methods can also be combined as ABDM-MVDM. We can then apply multidimensional scaling to project the pairwise distances into Euclidean space.
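The final projection step can be illustrated with classical multidimensional scaling on a small hand-made distance matrix (an illustration of the idea, not the library's implementation, which uses the learned ABDM/MVDM distances):

```python
import numpy as np

# Hypothetical pairwise distances between 4 categories lying on a line at 0, 1, 5, 6
D = np.array([[0., 1., 5., 6.],
              [1., 0., 4., 5.],
              [5., 4., 0., 1.],
              [6., 5., 1., 0.]])
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
B = -0.5 * J @ (D ** 2) @ J           # double-centred squared distances
vals, vecs = np.linalg.eigh(B)        # eigenvalues in ascending order
emb = vecs[:, -1] * np.sqrt(vals[-1]) # 1-D embedding from the top eigenpair
```

Since the distances here are exactly realizable in one dimension, the embedding recovers the category positions up to centering and sign, giving each category a numerical value whose spacing respects the pairwise distances.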
Load and transform data
[15]:
cat_cols = ['protocol_type', 'service', 'flag']
num_cols = ['srv_count', 'serror_rate', 'srv_serror_rate',
'rerror_rate', 'srv_rerror_rate', 'same_srv_rate',
'diff_srv_rate', 'srv_diff_host_rate', 'dst_host_count',
'dst_host_srv_count', 'dst_host_same_srv_rate',
'dst_host_diff_srv_rate', 'dst_host_same_src_port_rate',
'dst_host_srv_diff_host_rate', 'dst_host_serror_rate',
'dst_host_srv_serror_rate', 'dst_host_rerror_rate',
'dst_host_srv_rerror_rate']
cols = cat_cols + num_cols
[16]:
np.random.seed(0)
kddcup = fetch_kdd(keep_cols=cols, percent10=True)
print(kddcup.data.shape, kddcup.target.shape)
(494021, 21) (494021,)
Create a dictionary with as keys the categorical columns and values the number of categories for each variable in the dataset. This dictionary will later be used in the fit
step of the outlier detector.
[17]:
cat_vars_ord = {}
n_categories = len(cat_cols)
for i in range(n_categories):
cat_vars_ord[i] = len(np.unique(kddcup.data[:, i]))
print(cat_vars_ord)
{0: 3, 1: 66, 2: 11}
Fit an ordinal encoder on the categorical data:
[18]:
enc = OrdinalEncoder()
enc.fit(kddcup.data[:, :n_categories])
[18]:
OrdinalEncoder(categories='auto', dtype=<class 'numpy.float64'>)
Combine scaled numerical and ordinal features. X_fit
will be used to infer distances between categorical features later. For simplicity, we already transform the whole dataset, including the outliers that need to be detected later. This is for illustrative purposes:
[19]:
X_num = (kddcup.data[:, n_categories:] - mean) / stdev  # standardize numerical features
X_ord = enc.transform(kddcup.data[:, :n_categories]) # apply ordinal encoding to categorical features
X_fit = np.c_[X_ord, X_num].astype(np.float32, copy=False) # combine numerical and categorical features
print(X_fit.shape)
(494021, 21)
Initialize and fit outlier detector
We use the same threshold as for the continuous data. This will likely not result in optimal performance. Alternatively, you can infer the threshold again.
[20]:
n_components = 2
std_clip = 3
start_clip = 20
od = Mahalanobis(threshold,
                 n_components=n_components,
                 std_clip=std_clip,
                 start_clip=start_clip,
                 cat_vars=cat_vars_ord,
                 ohe=False)  # True if one-hot encoding (OHE) is used
Set fit
parameters:
[21]:
d_type = 'abdm' # pairwise distance type, 'abdm' infers context from other variables
disc_perc = [25, 50, 75] # percentiles used to bin numerical values; used in 'abdm' calculations
standardize_cat_vars = True # standardize numerical values of categorical variables
Apply fit
method to find numerical values for categorical variables:
[22]:
od.fit(X_fit,
       d_type=d_type,
       disc_perc=disc_perc,
       standardize_cat_vars=standardize_cat_vars)
The numerical values for the categorical features are stored in the attribute od.d_abs
. This is a dictionary with the columns of the categorical features as keys and the numerical equivalents of the categories as values:
[23]:
cat = 0 # categorical variable to plot numerical values for
[24]:
plt.bar(np.arange(len(od.d_abs[cat])), od.d_abs[cat])
plt.xticks(np.arange(len(od.d_abs[cat])))
plt.title('Numerical values for categories in categorical variable {}'.format(cat))
plt.xlabel('Category')
plt.ylabel('Numerical value')
plt.show()
Another option would be to set d_type
to 'mvdm'
and y
to kddcup.target
to infer the numerical values for categorical variables from the model labels (or alternatively the predictions).
Run outlier detector and display results
Generate batch of data with 10% outliers:
[25]:
np.random.seed(1)
outlier_batch = create_outlier_batch(kddcup.data, kddcup.target, n_samples=1000, perc_outlier=10)
data, y_outlier = outlier_batch.data, outlier_batch.target
print(data.shape, y_outlier.shape)
print('{}% outliers'.format(100 * y_outlier.mean()))
(1000, 21) (1000,)
10.0% outliers
Preprocess the outlier batch:
[26]:
X_num = (data[:, n_categories:] - mean) / stdev
X_ord = enc.transform(data[:, :n_categories])
X_outlier = np.c_[X_ord, X_num].astype(np.float32, copy=False)
print(X_outlier.shape)
(1000, 21)
Predict outliers:
[27]:
od_preds = od.predict(X_outlier, return_instance_score=True)
F1 score and confusion matrix:
[28]:
y_pred = od_preds['data']['is_outlier']
f1 = f1_score(y_outlier, y_pred)
print('F1 score: {}'.format(f1))
cm = confusion_matrix(y_outlier, y_pred)
df_cm = pd.DataFrame(cm, index=labels, columns=labels)
sns.heatmap(df_cm, annot=True, cbar=True, linewidths=.5)
plt.show()
F1 score: 0.9375
Plot instance level outlier scores vs. the outlier threshold:
[29]:
plot_instance_score(od_preds, y_outlier, labels, od.threshold, ylim=(0, 150))
Use OHE instead of ordinal encoding for the categorical variables
Since we will apply one-hot encoding (OHE) to the categorical variables, we convert cat_vars_ord
from the ordinal to the OHE format. alibi_detect.utils.mapping
contains utility functions to do this. The keys in cat_vars_ohe
now represent the first column index of each one-hot encoded categorical variable. This dictionary will later be used by the outlier detector.
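The conversion of the category map can be sketched in plain Python (a hypothetical stand-in for what alibi_detect.utils.mapping computes; it assumes the categorical columns come first, as in this dataset):

```python
# Hypothetical helper mirroring the category-map conversion: keys shift from
# column indices in the ordinal layout to the first column index of each
# one-hot encoded block; values stay the number of categories.
def ord_map_to_ohe_map(cat_vars_ord):
    cat_vars_ohe, start = {}, 0
    for col in sorted(cat_vars_ord):
        n_categories = cat_vars_ord[col]
        cat_vars_ohe[start] = n_categories
        start += n_categories
    return cat_vars_ohe

print(ord_map_to_ohe_map({0: 3, 1: 66, 2: 11}))  # {0: 3, 3: 66, 69: 11}
```

This reproduces the mapping shown below: variable 0 occupies OHE columns 0-2, variable 1 columns 3-68, and variable 2 columns 69-79.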
[30]:
cat_vars_ohe = ord2ohe(X_fit, cat_vars_ord)[1]
print(cat_vars_ohe)
{0: 3, 3: 66, 69: 11}
Fit a one-hot encoder on the categorical data:
[31]:
enc = OneHotEncoder(categories='auto')
enc.fit(X_fit[:, :n_categories])
[31]:
OneHotEncoder(categorical_features=None, categories='auto', drop=None,
dtype=<class 'numpy.float64'>, handle_unknown='error',
n_values=None, sparse=True)
Transform X_fit
to OHE:
[32]:
X_ohe = enc.transform(X_fit[:, :n_categories])
X_fit = np.array(np.c_[X_ohe.todense(), X_fit[:, n_categories:]].astype(np.float32, copy=False))
print(X_fit.shape)
(494021, 98)
Initialize and fit outlier detector
Initialize:
[33]:
od = Mahalanobis(threshold,
                 n_components=n_components,
                 std_clip=std_clip,
                 start_clip=start_clip,
                 cat_vars=cat_vars_ohe,
                 ohe=True)
Apply fit method:
[34]:
od.fit(X_fit,
       d_type=d_type,
       disc_perc=disc_perc,
       standardize_cat_vars=standardize_cat_vars)
Run outlier detector and display results
Transform outlier batch to OHE:
[35]:
X_ohe = enc.transform(X_ord)
X_outlier = np.array(np.c_[X_ohe.todense(), X_num].astype(np.float32, copy=False))
print(X_outlier.shape)
(1000, 98)
Predict outliers:
[36]:
od_preds = od.predict(X_outlier, return_instance_score=True)
F1 score and confusion matrix:
[37]:
y_pred = od_preds['data']['is_outlier']
f1 = f1_score(y_outlier, y_pred)
print('F1 score: {}'.format(f1))
cm = confusion_matrix(y_outlier, y_pred)
df_cm = pd.DataFrame(cm, index=labels, columns=labels)
sns.heatmap(df_cm, annot=True, cbar=True, linewidths=.5)
plt.show()
F1 score: 0.9375
Plot instance level outlier scores vs. the outlier threshold:
[38]:
plot_instance_score(od_preds, y_outlier, labels, od.threshold, ylim=(0,200))
This page was generated from examples/od_if_kddcup.ipynb.
Isolation Forest outlier detection on KDD Cup '99 dataset
Method
Isolation forests (IF) are tree-based models specifically used for outlier detection. The IF isolates observations by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of the selected feature. The number of splits required to isolate a sample is equivalent to the path length from the root node to the terminating node. This path length, averaged over a forest of random trees, is a measure of normality and is used to define an anomaly score. Outliers can typically be isolated more quickly, leading to shorter paths.
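The isolation idea can be sketched with plain NumPy (a toy, not the library's IForest): repeatedly split on a random feature at a random value and count how many splits it takes to isolate a point; averaging this depth over a forest of random trees separates a far-away point from a typical inlier:

```python
import numpy as np

rng = np.random.default_rng(0)

def path_length(x, X, depth=0, max_depth=10):
    """Depth at which x is isolated by random axis-aligned splits of X."""
    if len(X) <= 1 or depth >= max_depth:
        return depth
    f = rng.integers(X.shape[1])           # random feature
    lo, hi = X[:, f].min(), X[:, f].max()
    if lo == hi:
        return depth
    split = rng.uniform(lo, hi)            # random split value in feature range
    side = X[:, f] < split if x[f] < split else X[:, f] >= split
    return path_length(x, X[side], depth + 1, max_depth)

# 200 inliers around the origin and one far-away outlier
X = rng.normal(0, 1, size=(200, 2))
inlier, outlier = X[0], np.array([8.0, 8.0])

# average isolation depth over a forest of random trees
depth = lambda x: np.mean([path_length(x, X) for _ in range(100)])
print(depth(outlier), depth(inlier))  # shorter path => more anomalous
```

The outlier is isolated in far fewer splits on average, which is exactly the signal the IF anomaly score is built on.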
Dataset
The outlier detector needs to detect computer network intrusions using TCP dump data for a local-area network (LAN) simulating a typical U.S. Air Force LAN. A connection is a sequence of TCP packets starting and ending at some well-defined times, between which data flows to and from a source IP address to a target IP address under some well-defined protocol. Each connection is labeled as either normal or as an attack.
There are 4 types of attacks in the dataset:
DOS: denial-of-service, e.g. syn flood;
R2L: unauthorized access from a remote machine, e.g. guessing a password;
U2R: unauthorized access to local superuser (root) privileges;
probing: surveillance and other probing, e.g. port scanning.
The dataset contains about 5 million connection records.
There are 3 types of features:
basic features of individual connections, e.g. duration of the connection
content features within a connection, e.g. number of failed login attempts
traffic features within a 2-second window, e.g. number of connections to the same host as the current connection
[1]:
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import seaborn as sns
from sklearn.metrics import confusion_matrix, f1_score
from alibi_detect.od import IForest
from alibi_detect.datasets import fetch_kdd
from alibi_detect.utils.data import create_outlier_batch
from alibi_detect.utils.fetching import fetch_detector
from alibi_detect.utils.saving import save_detector, load_detector
from alibi_detect.utils.visualize import plot_instance_score, plot_roc
Load dataset
We only keep the continuous features (18 out of 41).
[2]:
kddcup = fetch_kdd(percent10=True) # only load 10% of the dataset
print(kddcup.data.shape, kddcup.target.shape)
(494021, 18) (494021,)
Assume that a model is trained on normal instances of the dataset (not outliers) and standardization is applied:
[3]:
np.random.seed(0)
normal_batch = create_outlier_batch(kddcup.data, kddcup.target, n_samples=400000, perc_outlier=0)
X_train, y_train = normal_batch.data.astype('float'), normal_batch.target
print(X_train.shape, y_train.shape)
print('{}% outliers'.format(100 * y_train.mean()))
(400000, 18) (400000,)
0.0% outliers
[4]:
mean, stdev = X_train.mean(axis=0), X_train.std(axis=0)
Apply standardization:
[5]:
X_train = (X_train - mean) / stdev
Load or define outlier detector
The pre-trained outlier and adversarial detectors used in the example notebooks can be found here. You can use the built-in fetch_detector
function, which saves the pre-trained models in a local directory filepath
and loads the detector. Alternatively, you can train a detector from scratch:
[6]:
load_outlier_detector = True
[7]:
filepath = 'my_path'  # change to directory where model is downloaded
if load_outlier_detector:  # load pre-trained outlier detector
    detector_type = 'outlier'
    dataset = 'kddcup'
    detector_name = 'IForest'
    od = fetch_detector(filepath, detector_type, dataset, detector_name)
    filepath = os.path.join(filepath, detector_name)
else:  # define, initialize, train and save the outlier detector
    # initialize outlier detector
    od = IForest(threshold=None,  # threshold for outlier score
                 n_estimators=100)
    # train
    od.fit(X_train)
    # save the trained outlier detector
    save_detector(od, filepath)
If load_outlier_detector
is False, the warning tells us we still need to set the outlier threshold. This can be done with the infer_threshold
method. We need to pass a batch of instances and specify what percentage of those we consider to be normal via threshold_perc
. Let's assume we have some data which we know contains around 5% outliers. The percentage of outliers can be set with perc_outlier
in the create_outlier_batch
function.
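The thresholding logic itself is essentially a percentile computation, which can be sketched with NumPy on synthetic scores (an illustrative stand-in for infer_threshold, with made-up score distributions):

```python
import numpy as np

# Given outlier scores for a batch assumed to contain ~5% outliers, pick the
# threshold as the score value below which threshold_perc percent of the
# instances fall; anything scoring above it is flagged.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.0, 0.02, 950),   # normal instances
                         rng.normal(0.3, 0.05, 50)])   # outliers score higher

threshold_perc = 95  # 100 - perc_outlier
threshold = np.percentile(scores, threshold_perc)
is_outlier = (scores > threshold).astype(int)
print(threshold, is_outlier.sum())
```

With 1000 instances and threshold_perc=95, exactly the top 5% of scores end up above the inferred threshold.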
[8]:
np.random.seed(0)
perc_outlier = 5
threshold_batch = create_outlier_batch(kddcup.data, kddcup.target, n_samples=1000, perc_outlier=perc_outlier)
X_threshold, y_threshold = threshold_batch.data.astype('float'), threshold_batch.target
X_threshold = (X_threshold - mean) / stdev
print('{}% outliers'.format(100 * y_threshold.mean()))
5.0% outliers
[9]:
od.infer_threshold(X_threshold, threshold_perc=100 - perc_outlier)
print('New threshold: {}'.format(od.threshold))
New threshold: 0.08008174509752322
Let's save the outlier detector with the updated threshold:
[10]:
save_detector(od, filepath)
Detect outliers
We now generate a batch of data with 10% outliers and detect the outliers in the batch.
[11]:
np.random.seed(1)
outlier_batch = create_outlier_batch(kddcup.data, kddcup.target, n_samples=1000, perc_outlier=10)
X_outlier, y_outlier = outlier_batch.data.astype('float'), outlier_batch.target
X_outlier = (X_outlier - mean) / stdev
print(X_outlier.shape, y_outlier.shape)
print('{}% outliers'.format(100 * y_outlier.mean()))
(1000, 18) (1000,)
10.0% outliers
Predict outliers:
[12]:
od_preds = od.predict(X_outlier, return_instance_score=True)
Display results
F1 score and confusion matrix:
[13]:
labels = outlier_batch.target_names
y_pred = od_preds['data']['is_outlier']
f1 = f1_score(y_outlier, y_pred)
print('F1 score: {:.4f}'.format(f1))
cm = confusion_matrix(y_outlier, y_pred)
df_cm = pd.DataFrame(cm, index=labels, columns=labels)
sns.heatmap(df_cm, annot=True, cbar=True, linewidths=.5)
plt.show()
F1 score: 0.3315
Plot instance level outlier scores vs. the outlier threshold:
[14]:
plot_instance_score(od_preds, y_outlier, labels, od.threshold)
We can see that the isolation forest does not do a good job of detecting one type of outlier, which has an outlier score around 0. This makes it hard to infer a good threshold without explicit knowledge about the outliers. Setting the threshold just below 0 would lead to significantly better detector performance for the outliers in this dataset. This is also reflected in the ROC curve:
[15]:
roc_data = {'IF': {'scores': od_preds['data']['instance_score'], 'labels': y_outlier}}
plot_roc(roc_data)
This page was generated from examples/od_vae_kddcup.ipynb.
VAE outlier detection on KDD Cup '99 dataset
Method
The Variational Auto-Encoder (VAE) outlier detector is first trained on a batch of unlabeled, but normal (inlier) data. Unsupervised training is desirable since labeled data is often scarce. The VAE detector tries to reconstruct the input it receives. If the input data cannot be reconstructed well, the reconstruction error is high and the data can be flagged as an outlier. The reconstruction error is either measured as the mean squared error (MSE) between the input and the reconstructed instance or as the probability that both the input and the reconstructed instance are generated by the same process.
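The reconstruction-error principle can be illustrated without a neural network: below, a PCA projection stands in for the encoder/decoder, assuming inliers lie near a low-dimensional subspace (a toy sketch, not the VAE detector itself):

```python
import numpy as np

# Inliers live near a 2-D subspace of R^10; a model trained only on normal
# data reconstructs them well, while an off-subspace point reconstructs badly.
rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 2))
X = Z @ rng.normal(size=(2, 10))            # inliers on a 2-D subspace
X += rng.normal(scale=0.01, size=X.shape)   # small observation noise

# "train" the reconstruction model on normal data only (PCA via SVD)
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
V = Vt[:2].T                                # 2-D "latent" space

def score(x):
    """Outlier score = MSE between x and its reconstruction."""
    recon = mu + (x - mu) @ V @ V.T
    return np.mean((x - recon) ** 2)

outlier = rng.normal(scale=3.0, size=10)    # off-subspace instance
print(score(X[0]), score(outlier))          # outlier reconstructs far worse
```

Flagging instances whose reconstruction error exceeds a threshold is exactly the decision rule the VAE detector applies.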
Dataset
The outlier detector needs to detect computer network intrusions using TCP dump data for a local-area network (LAN) simulating a typical U.S. Air Force LAN. A connection is a sequence of TCP packets starting and ending at some well-defined times, between which data flows to and from a source IP address to a target IP address under some well-defined protocol. Each connection is labeled as either normal or as an attack.
There are 4 types of attacks in the dataset:
DOS: denial-of-service, e.g. syn flood;
R2L: unauthorized access from a remote machine, e.g. guessing a password;
U2R: unauthorized access to local superuser (root) privileges;
probing: surveillance and other probing, e.g. port scanning.
The dataset contains about 5 million connection records.
There are 3 types of features:
basic features of individual connections, e.g. duration of the connection
content features within a connection, e.g. number of failed login attempts
traffic features within a 2-second window, e.g. number of connections to the same host as the current connection
[1]:
import logging
import os
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.metrics import confusion_matrix, f1_score
import tensorflow as tf
tf.keras.backend.clear_session()
from tensorflow.keras.layers import Dense, InputLayer
from alibi_detect.datasets import fetch_kdd
from alibi_detect.models.losses import elbo
from alibi_detect.od import OutlierVAE
from alibi_detect.utils.data import create_outlier_batch
from alibi_detect.utils.fetching import fetch_detector
from alibi_detect.utils.saving import save_detector, load_detector
from alibi_detect.utils.visualize import plot_instance_score, plot_feature_outlier_tabular, plot_roc
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
ERROR:fbprophet:Importing plotly failed. Interactive plots will not work.
Load dataset
We only keep the continuous features (18 out of 41).
[2]:
kddcup = fetch_kdd(percent10=True) # only load 10% of the dataset
print(kddcup.data.shape, kddcup.target.shape)
(494021, 18) (494021,)
Assume that a model is trained on normal instances of the dataset (not outliers) and standardization is applied:
[3]:
np.random.seed(0)
normal_batch = create_outlier_batch(kddcup.data, kddcup.target, n_samples=400000, perc_outlier=0)
X_train, y_train = normal_batch.data.astype('float'), normal_batch.target
print(X_train.shape, y_train.shape)
print('{}% outliers'.format(100 * y_train.mean()))
(400000, 18) (400000,)
0.0% outliers
[4]:
mean, stdev = X_train.mean(axis=0), X_train.std(axis=0)
Apply standardization:
[5]:
X_train = (X_train - mean) / stdev
Load or define outlier detector
The pre-trained outlier and adversarial detectors used in the example notebooks can be found here. You can use the built-in fetch_detector
function, which saves the pre-trained models in a local directory filepath
and loads the detector. Alternatively, you can train a detector from scratch:
[6]:
load_outlier_detector = True
[7]:
filepath = 'my_dir'  # change to directory (absolute path) where model is downloaded
if load_outlier_detector:  # load pre-trained outlier detector
    detector_type = 'outlier'
    dataset = 'kddcup'
    detector_name = 'OutlierVAE'
    od = fetch_detector(filepath, detector_type, dataset, detector_name)
    filepath = os.path.join(filepath, detector_name)
else:  # define, initialize, train and save the outlier detector
    n_features = X_train.shape[1]
    latent_dim = 2
    encoder_net = tf.keras.Sequential(
        [
            InputLayer(input_shape=(n_features,)),
            Dense(20, activation=tf.nn.relu),
            Dense(15, activation=tf.nn.relu),
            Dense(7, activation=tf.nn.relu)
        ])
    decoder_net = tf.keras.Sequential(
        [
            InputLayer(input_shape=(latent_dim,)),
            Dense(7, activation=tf.nn.relu),
            Dense(15, activation=tf.nn.relu),
            Dense(20, activation=tf.nn.relu),
            Dense(n_features, activation=None)
        ])
    # initialize outlier detector
    od = OutlierVAE(threshold=None,  # threshold for outlier score
                    score_type='mse',  # use MSE of reconstruction error for outlier detection
                    encoder_net=encoder_net,  # can also pass a VAE model instead
                    decoder_net=decoder_net,  # of separate encoder and decoder
                    latent_dim=latent_dim,
                    samples=5)
    # train
    od.fit(X_train,
           loss_fn=elbo,
           cov_elbo=dict(sim=.01),
           epochs=30,
           verbose=True)
    # save the trained outlier detector
    save_detector(od, filepath)
WARNING:alibi_detect.od.vae:No threshold level set. Need to infer threshold using `infer_threshold`.
The warning tells us we still need to set the outlier threshold. This can be done with the infer_threshold
method. We need to pass a batch of instances and specify what percentage of those we consider to be normal via threshold_perc
. Let's assume we have some data which we know contains around 5% outliers. The percentage of outliers can be set with perc_outlier
in the create_outlier_batch
function.
[8]:
np.random.seed(0)
perc_outlier = 5
threshold_batch = create_outlier_batch(kddcup.data, kddcup.target, n_samples=1000, perc_outlier=perc_outlier)
X_threshold, y_threshold = threshold_batch.data.astype('float'), threshold_batch.target
X_threshold = (X_threshold - mean) / stdev
print('{}% outliers'.format(100 * y_threshold.mean()))
5.0% outliers
[9]:
od.infer_threshold(X_threshold, threshold_perc=100 - perc_outlier)
print('New threshold: {}'.format(od.threshold))
New threshold: 1.7367815971374498
We could also have inferred the threshold from the normal training data by setting threshold_perc
to e.g. 99 and adding a bit of margin on top of the inferred threshold. Let's save the outlier detector with the updated threshold:
[10]:
save_detector(od, filepath)
Detect outliers
We now generate a batch of data with 10% outliers and detect the outliers in the batch.
[11]:
np.random.seed(1)
outlier_batch = create_outlier_batch(kddcup.data, kddcup.target, n_samples=1000, perc_outlier=10)
X_outlier, y_outlier = outlier_batch.data.astype('float'), outlier_batch.target
X_outlier = (X_outlier - mean) / stdev
print(X_outlier.shape, y_outlier.shape)
print('{}% outliers'.format(100 * y_outlier.mean()))
(1000, 18) (1000,)
10.0% outliers
Predict outliers:
[12]:
od_preds = od.predict(X_outlier,
                      outlier_type='instance',  # use 'feature' or 'instance' level
                      return_feature_score=True,  # scores used to determine outliers
                      return_instance_score=True)
print(list(od_preds['data'].keys()))
['instance_score', 'feature_score', 'is_outlier']
Display results
F1 score and confusion matrix:
[13]:
labels = outlier_batch.target_names
y_pred = od_preds['data']['is_outlier']
f1 = f1_score(y_outlier, y_pred)
print('F1 score: {:.4f}'.format(f1))
cm = confusion_matrix(y_outlier, y_pred)
df_cm = pd.DataFrame(cm, index=labels, columns=labels)
sns.heatmap(df_cm, annot=True, cbar=True, linewidths=.5)
plt.show()
F1 score: 0.9754
Plot instance level outlier scores vs. the outlier threshold:
[14]:
plot_instance_score(od_preds, y_outlier, labels, od.threshold)
We can clearly see that some outliers are very easy to detect while others have outlier scores closer to the normal data. We can also plot the ROC curve for the outlier scores of the detector:
[15]:
roc_data = {'VAE': {'scores': od_preds['data']['instance_score'], 'labels': y_outlier}}
plot_roc(roc_data)
Investigate instance level outlier
We can now take a closer look at some of the individual predictions on X_outlier
.
[16]:
X_recon = od.vae(X_outlier).numpy() # reconstructed instances by the VAE
[17]:
plot_feature_outlier_tabular(od_preds,
                             X_outlier,
                             X_recon=X_recon,
                             threshold=od.threshold,
                             instance_ids=None,  # pass a list with indices of instances to display
                             max_instances=5,  # max nb of instances to display
                             top_n=5,  # only show top_n features ordered by outlier score
                             outliers_only=False,  # only show outlier predictions
                             feature_names=kddcup.feature_names,  # add feature names
                             figsize=(20, 30))
The srv_count
feature is responsible for a lot of the displayed outliers.
This page was generated from examples/od_vae_cifar10.ipynb.
VAE outlier detection on CIFAR10
Method
The Variational Auto-Encoder (VAE) outlier detector is first trained on a batch of unlabeled, but normal (inlier) data. Unsupervised training is desirable since labeled data is often scarce. The VAE detector tries to reconstruct the input it receives. If the input data cannot be reconstructed well, the reconstruction error is high and the data can be flagged as an outlier. The reconstruction error is either measured as the mean squared error (MSE) between the input and the reconstructed instance or as the probability that both the input and the reconstructed instance are generated by the same process.
Dataset
CIFAR10 consists of 60,000 32 by 32 RGB images equally distributed over 10 classes.
[1]:
import logging
import os
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
tf.keras.backend.clear_session()
from tensorflow.keras.layers import Conv2D, Conv2DTranspose, Dense, Layer, Reshape, InputLayer
from tqdm import tqdm
from alibi_detect.models.losses import elbo
from alibi_detect.od import OutlierVAE
from alibi_detect.utils.fetching import fetch_detector
from alibi_detect.utils.perturbation import apply_mask
from alibi_detect.utils.saving import save_detector, load_detector
from alibi_detect.utils.visualize import plot_instance_score, plot_feature_outlier_image
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
Load CIFAR10 data
[2]:
train, test = tf.keras.datasets.cifar10.load_data()
X_train, y_train = train
X_test, y_test = test
X_train = X_train.astype('float32') / 255
X_test = X_test.astype('float32') / 255
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
(50000, 32, 32, 3) (50000, 1) (10000, 32, 32, 3) (10000, 1)
Load or define outlier detector
The pre-trained outlier and adversarial detectors used in the example notebooks can be found here. You can use the built-in fetch_detector
function, which saves the pre-trained models in a local directory filepath
and loads the detector. Alternatively, you can train a detector from scratch:
[3]:
load_outlier_detector = True
[4]:
filepath = 'my_path'  # change to directory where model is downloaded
if load_outlier_detector:  # load pre-trained outlier detector
    detector_type = 'outlier'
    dataset = 'cifar10'
    detector_name = 'OutlierVAE'
    od = fetch_detector(filepath, detector_type, dataset, detector_name)
    filepath = os.path.join(filepath, detector_name)
else:  # define, initialize, train and save the outlier detector
    latent_dim = 1024
    encoder_net = tf.keras.Sequential(
        [
            InputLayer(input_shape=(32, 32, 3)),
            Conv2D(64, 4, strides=2, padding='same', activation=tf.nn.relu),
            Conv2D(128, 4, strides=2, padding='same', activation=tf.nn.relu),
            Conv2D(512, 4, strides=2, padding='same', activation=tf.nn.relu)
        ])
    decoder_net = tf.keras.Sequential(
        [
            InputLayer(input_shape=(latent_dim,)),
            Dense(4 * 4 * 128),
            Reshape(target_shape=(4, 4, 128)),
            Conv2DTranspose(256, 4, strides=2, padding='same', activation=tf.nn.relu),
            Conv2DTranspose(64, 4, strides=2, padding='same', activation=tf.nn.relu),
            Conv2DTranspose(3, 4, strides=2, padding='same', activation='sigmoid')
        ])
    # initialize outlier detector
    od = OutlierVAE(threshold=.015,  # threshold for outlier score
                    score_type='mse',  # use MSE of reconstruction error for outlier detection
                    encoder_net=encoder_net,  # can also pass a VAE model instead
                    decoder_net=decoder_net,  # of separate encoder and decoder
                    latent_dim=latent_dim,
                    samples=2)
    # train
    od.fit(X_train,
           loss_fn=elbo,
           cov_elbo=dict(sim=.05),
           epochs=50,
           verbose=False)
    # save the trained outlier detector
    save_detector(od, filepath)
Check the quality of the VAE model
[5]:
idx = 8
X = X_train[idx].reshape(1, 32, 32, 3)
X_recon = od.vae(X)
[6]:
plt.imshow(X.reshape(32, 32, 3))
plt.axis('off')
plt.show()
[7]:
plt.imshow(X_recon.numpy().reshape(32, 32, 3))
plt.axis('off')
plt.show()
Check outliers on original CIFAR images
[8]:
X = X_train[:500]
print(X.shape)
(500, 32, 32, 3)
[9]:
od_preds = od.predict(X,
                      outlier_type='instance',  # use 'feature' or 'instance' level
                      return_feature_score=True,  # scores used to determine outliers
                      return_instance_score=True)
print(list(od_preds['data'].keys()))
['instance_score', 'feature_score', 'is_outlier']
Plot instance level outlier scores
[10]:
target = np.zeros(X.shape[0],).astype(int) # all normal CIFAR10 training instances
labels = ['normal', 'outlier']
plot_instance_score(od_preds, target, labels, od.threshold)
Visualize predictions
[11]:
X_recon = od.vae(X).numpy()
plot_feature_outlier_image(od_preds,
                           X,
                           X_recon=X_recon,
                           instance_ids=[8, 60, 100, 330],  # pass a list with indices of instances to display
                           max_instances=5,  # max nb of instances to display
                           outliers_only=False)  # only show outlier predictions
Predict outliers on perturbed CIFAR images
We perturb CIFAR images by adding random noise to patches (masks) of the image. For each mask size in n_mask_sizes
, we sample n_masks
and apply them to each of the n_imgs
images. We then predict outliers on the masked instances:
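The masking step can be sketched with NumPy (an illustrative version, not alibi_detect's apply_mask; the helper name and defaults are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_mask(img, mask_size=(8, 8), noise_distr=(0.0, 1.0), clip_rng=(0.0, 1.0)):
    """Add Gaussian noise to a random square patch of an HxWxC image, then clip."""
    h, w = mask_size
    img = img.copy()
    r = rng.integers(0, img.shape[0] - h + 1)   # random top-left corner
    c = rng.integers(0, img.shape[1] - w + 1)
    patch = rng.normal(noise_distr[0], noise_distr[1], size=(h, w, img.shape[2]))
    img[r:r + h, c:c + w, :] = np.clip(img[r:r + h, c:c + w, :] + patch, *clip_rng)
    return img

x = rng.uniform(size=(32, 32, 3)).astype('float32')
x_mask = noise_mask(x)
print((x_mask != x).mean())  # only the 8x8 patch changed
```

Larger patches corrupt more of the image, so the reconstruction error, and hence the outlier score, should grow with mask size, which is what the experiment below measures.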
[12]:
# nb of predictions per image: n_masks * n_mask_sizes
n_mask_sizes = 10
n_masks = 20
n_imgs = 50
Define masks and get images:
[13]:
mask_sizes = [(2*n,2*n) for n in range(1,n_mask_sizes+1)]
print(mask_sizes)
img_ids = np.arange(n_imgs)
X_orig = X[img_ids].reshape(img_ids.shape[0], 32, 32, 3)
print(X_orig.shape)
[(2, 2), (4, 4), (6, 6), (8, 8), (10, 10), (12, 12), (14, 14), (16, 16), (18, 18), (20, 20)]
(50, 32, 32, 3)
Calculate instance level outlier scores:
[14]:
all_img_scores = []
for i in tqdm(range(X_orig.shape[0])):
    img_scores = np.zeros((len(mask_sizes),))
    for j, mask_size in enumerate(mask_sizes):
        # create masked instances
        X_mask, mask = apply_mask(X_orig[i].reshape(1, 32, 32, 3),
                                  mask_size=mask_size,
                                  n_masks=n_masks,
                                  channels=[0, 1, 2],
                                  mask_type='normal',
                                  noise_distr=(0, 1),
                                  clip_rng=(0, 1))
        # predict outliers
        od_preds_mask = od.predict(X_mask)
        score = od_preds_mask['data']['instance_score']
        # store average score over `n_masks` for a given mask size
        img_scores[j] = np.mean(score)
    all_img_scores.append(img_scores)
100%|██████████| 50/50 [00:39<00:00, 1.26it/s]
Visualize outlier scores vs. mask sizes
[15]:
x_plt = [mask[0] for mask in mask_sizes]
[16]:
for ais in all_img_scores:
    plt.plot(x_plt, ais)
plt.xticks(x_plt)
plt.title('Outlier Score All Images for Increasing Mask Size')
plt.xlabel('Mask size')
plt.ylabel('Outlier Score')
plt.show()
[17]:
ais_np = np.zeros((len(all_img_scores), all_img_scores[0].shape[0]))
for i, ais in enumerate(all_img_scores):
    ais_np[i, :] = ais
ais_mean = np.mean(ais_np, axis=0)
plt.title('Mean Outlier Score All Images for Increasing Mask Size')
plt.xlabel('Mask size')
plt.ylabel('Outlier score')
plt.plot(x_plt, ais_mean)
plt.xticks(x_plt)
plt.show()
Investigate instance level outlier
[18]:
i = 8 # index of instance to look at
[19]:
plt.plot(x_plt, all_img_scores[i])
plt.xticks(x_plt)
plt.title('Outlier Scores Image {} for Increasing Mask Size'.format(i))
plt.xlabel('Mask size')
plt.ylabel('Outlier score')
plt.show()
Reconstruction of masked images and outlier scores per channel:
[20]:
all_X_mask = []
X_i = X_orig[i].reshape(1, 32, 32, 3)
all_X_mask.append(X_i)
# apply masks
for j, mask_size in enumerate(mask_sizes):
    # create masked instances
    X_mask, mask = apply_mask(X_i,
                              mask_size=mask_size,
                              n_masks=1,  # just 1 for visualization purposes
                              channels=[0, 1, 2],
                              mask_type='normal',
                              noise_distr=(0, 1),
                              clip_rng=(0, 1))
    all_X_mask.append(X_mask)
all_X_mask = np.concatenate(all_X_mask, axis=0)
all_X_recon = od.vae(all_X_mask).numpy()
od_preds = od.predict(all_X_mask)
Visualize:
[21]:
plot_feature_outlier_image(od_preds,
                           all_X_mask,
                           X_recon=all_X_recon,
                           max_instances=all_X_mask.shape[0],
                           n_channels=3)
Predict outliers on a subset of features
The sensitivity of the outlier detector can be controlled not only via the threshold
, but also by selecting the percentage of the features used for the instance level outlier score computation. For instance, we might want to flag outliers if 40% of the features (pixels for images) have an average outlier score above the threshold. This is possible via the outlier_perc
argument in the predict
function. It specifies the percentage of the features used for outlier detection, sorted in descending order of outlier score.
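The idea behind outlier_perc can be sketched with NumPy (a hypothetical helper, not the library's exact implementation): the instance score is the mean over only the top outlier_perc percent of feature-level scores, sorted in descending order:

```python
import numpy as np

def instance_score(feature_scores, outlier_perc=100):
    """Mean of the top outlier_perc percent of per-feature scores per instance."""
    fs = np.sort(feature_scores.reshape(feature_scores.shape[0], -1), axis=1)[:, ::-1]
    n = max(1, int(np.ceil(fs.shape[1] * outlier_perc / 100)))
    return fs[:, :n].mean(axis=1)

fs = np.array([[0.9, 0.1, 0.1, 0.1]])   # one instance, 4 feature scores
print(instance_score(fs, 100))           # mean over all features
print(instance_score(fs, 25))            # mean over only the top 25% of features
```

Lowering outlier_perc makes the score dominated by the worst-reconstructed features, so a localized corruption (like a small noise mask) stands out more.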
[22]:
perc_list = [20, 40, 60, 80, 100]
all_perc_scores = []
for perc in perc_list:
    od_preds_perc = od.predict(all_X_mask, outlier_perc=perc)
    iscore = od_preds_perc['data']['instance_score']
    all_perc_scores.append(iscore)
Visualize outlier scores vs. mask sizes and percentage of features used:
[23]:
x_plt = [0] + x_plt
for aps in all_perc_scores:
    plt.plot(x_plt, aps)
plt.xticks(x_plt)
plt.legend(perc_list)
plt.title('Outlier Score for Increasing Mask Size and Different Feature Subsets')
plt.xlabel('Mask Size')
plt.ylabel('Outlier Score')
plt.show()
Infer outlier threshold value
Finding good threshold values can be tricky since they are typically not easy to interpret. The infer_threshold
method helps find a sensible value. We need to pass a batch of instances X
and specify what percentage of those we consider to be normal via threshold_perc
.
[24]:
print('Current threshold: {}'.format(od.threshold))
od.infer_threshold(X, threshold_perc=99) # assume 1% of the training data are outliers
print('New threshold: {}'.format(od.threshold))
Current threshold: 0.015
New threshold: 0.010383214280009267
This page was generated from examples/od_ae_cifar10.ipynb.
AE outlier detection on CIFAR10
Method
The Auto-Encoder (AE) outlier detector is first trained on a batch of unlabeled, but normal (inlier) data. Unsupervised training is desirable since labeled data is often scarce. The AE detector tries to reconstruct the input it receives. If the input data cannot be reconstructed well, the reconstruction error is high and the data can be flagged as an outlier. The reconstruction error is measured as the mean squared error (MSE) between the input and the reconstructed instance.
Dataset
CIFAR10 consists of 60,000 32 by 32 RGB images equally distributed over 10 classes.
[1]:
import logging
import matplotlib.pyplot as plt
import numpy as np
import os
import tensorflow as tf
tf.keras.backend.clear_session()
from tensorflow.keras.layers import Conv2D, Conv2DTranspose, \
Dense, Layer, Reshape, InputLayer, Flatten
from tqdm import tqdm
from alibi_detect.od import OutlierAE
from alibi_detect.utils.fetching import fetch_detector
from alibi_detect.utils.perturbation import apply_mask
from alibi_detect.utils.saving import save_detector, load_detector
from alibi_detect.utils.visualize import plot_instance_score, plot_feature_outlier_image
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
ERROR:fbprophet:Importing plotly failed. Interactive plots will not work.
Load CIFAR10 data
[2]:
train, test = tf.keras.datasets.cifar10.load_data()
X_train, y_train = train
X_test, y_test = test
X_train = X_train.astype('float32') / 255
X_test = X_test.astype('float32') / 255
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
(50000, 32, 32, 3) (50000, 1) (10000, 32, 32, 3) (10000, 1)
Load or define outlier detector
The pre-trained outlier and adversarial detectors used in the example notebooks can be found here. You can use the built-in fetch_detector
function, which saves the pre-trained models in a local directory filepath
and loads the detector. Alternatively, you can train a detector from scratch:
[3]:
load_outlier_detector = True
[4]:
filepath = 'my_path' # change to (absolute) directory where model is downloaded
if load_outlier_detector: # load pretrained outlier detector
detector_type = 'outlier'
dataset = 'cifar10'
detector_name = 'OutlierAE'
od = fetch_detector(filepath, detector_type, dataset, detector_name)
filepath = os.path.join(filepath, detector_name)
else: # define model, initialize, train and save outlier detector
encoding_dim = 1024
encoder_net = tf.keras.Sequential(
[
InputLayer(input_shape=(32, 32, 3)),
Conv2D(64, 4, strides=2, padding='same', activation=tf.nn.relu),
Conv2D(128, 4, strides=2, padding='same', activation=tf.nn.relu),
Conv2D(512, 4, strides=2, padding='same', activation=tf.nn.relu),
Flatten(),
Dense(encoding_dim,)
])
decoder_net = tf.keras.Sequential(
[
InputLayer(input_shape=(encoding_dim,)),
Dense(4*4*128),
Reshape(target_shape=(4, 4, 128)),
Conv2DTranspose(256, 4, strides=2, padding='same', activation=tf.nn.relu),
Conv2DTranspose(64, 4, strides=2, padding='same', activation=tf.nn.relu),
Conv2DTranspose(3, 4, strides=2, padding='same', activation='sigmoid')
])
# initialize outlier detector
od = OutlierAE(threshold=.015, # threshold for outlier score
encoder_net=encoder_net, # can also pass AE model instead
decoder_net=decoder_net, # of separate encoder and decoder
)
# train
od.fit(X_train,
epochs=50,
verbose=True)
# save the trained outlier detector
save_detector(od, filepath)
Check the quality of the AE model
[5]:
idx = 8
X = X_train[idx].reshape(1, 32, 32, 3)
X_recon = od.ae(X)
[6]:
plt.imshow(X.reshape(32, 32, 3))
plt.axis('off')
plt.show()
[7]:
plt.imshow(X_recon.numpy().reshape(32, 32, 3))
plt.axis('off')
plt.show()
Check outliers on original CIFAR images
[8]:
X = X_train[:500]
print(X.shape)
(500, 32, 32, 3)
[9]:
od_preds = od.predict(X,
outlier_type='instance', # use 'feature' or 'instance' level
return_feature_score=True, # scores used to determine outliers
return_instance_score=True)
print(list(od_preds['data'].keys()))
['instance_score', 'feature_score', 'is_outlier']
Plot instance level outlier scores
[10]:
target = np.zeros(X.shape[0],).astype(int) # all normal CIFAR10 training instances
labels = ['normal', 'outlier']
plot_instance_score(od_preds, target, labels, od.threshold)
Visualize predictions
[11]:
X_recon = od.ae(X).numpy()
plot_feature_outlier_image(od_preds,
X,
X_recon=X_recon,
instance_ids=[8, 60, 100, 330], # pass a list with indices of instances to display
max_instances=5, # max nb of instances to display
outliers_only=False) # only show outlier predictions
Predict outliers on perturbed CIFAR images
We perturb CIFAR images by adding random noise to patches (masks) of the image. For each mask size in n_mask_sizes
, sample n_masks
and apply those to each of the n_imgs
images. Then we predict outliers on the masked instances:
[12]:
# nb of predictions per image: n_masks * n_mask_sizes
n_mask_sizes = 10
n_masks = 20
n_imgs = 50
Define masks and get images:
[13]:
mask_sizes = [(2*n,2*n) for n in range(1,n_mask_sizes+1)]
print(mask_sizes)
img_ids = np.arange(n_imgs)
X_orig = X[img_ids].reshape(img_ids.shape[0], 32, 32, 3)
print(X_orig.shape)
[(2, 2), (4, 4), (6, 6), (8, 8), (10, 10), (12, 12), (14, 14), (16, 16), (18, 18), (20, 20)]
(50, 32, 32, 3)
Calculate instance level outlier scores:
[14]:
all_img_scores = []
for i in tqdm(range(X_orig.shape[0])):
img_scores = np.zeros((len(mask_sizes),))
for j, mask_size in enumerate(mask_sizes):
# create masked instances
X_mask, mask = apply_mask(X_orig[i].reshape(1, 32, 32, 3),
mask_size=mask_size,
n_masks=n_masks,
channels=[0,1,2],
mask_type='normal',
noise_distr=(0,1),
clip_rng=(0,1))
# predict outliers
od_preds_mask = od.predict(X_mask)
score = od_preds_mask['data']['instance_score']
# store average score over `n_masks` for a given mask size
img_scores[j] = np.mean(score)
all_img_scores.append(img_scores)
100%|██████████| 50/50 [00:17<00:00, 2.90it/s]
Visualize outlier scores vs. mask sizes
[15]:
x_plt = [mask[0] for mask in mask_sizes]
[16]:
for ais in all_img_scores:
plt.plot(x_plt, ais)
plt.xticks(x_plt)
plt.title('Outlier Score All Images for Increasing Mask Size')
plt.xlabel('Mask size')
plt.ylabel('Outlier Score')
plt.show()
[17]:
ais_np = np.zeros((len(all_img_scores), all_img_scores[0].shape[0]))
for i, ais in enumerate(all_img_scores):
ais_np[i, :] = ais
ais_mean = np.mean(ais_np, axis=0)
plt.title('Mean Outlier Score All Images for Increasing Mask Size')
plt.xlabel('Mask size')
plt.ylabel('Outlier score')
plt.plot(x_plt, ais_mean)
plt.xticks(x_plt)
plt.show()
Investigate instance level outlier
[18]:
i = 8 # index of instance to look at
[19]:
plt.plot(x_plt, all_img_scores[i])
plt.xticks(x_plt)
plt.title('Outlier Scores Image {} for Increasing Mask Size'.format(i))
plt.xlabel('Mask size')
plt.ylabel('Outlier score')
plt.show()
Reconstruction of masked images and outlier scores per channel:
[20]:
all_X_mask = []
X_i = X_orig[i].reshape(1, 32, 32, 3)
all_X_mask.append(X_i)
# apply masks
for j, mask_size in enumerate(mask_sizes):
# create masked instances
X_mask, mask = apply_mask(X_i,
mask_size=mask_size,
n_masks=1, # just 1 for visualization purposes
channels=[0,1,2],
mask_type='normal',
noise_distr=(0,1),
clip_rng=(0,1))
all_X_mask.append(X_mask)
all_X_mask = np.concatenate(all_X_mask, axis=0)
all_X_recon = od.ae(all_X_mask).numpy()
od_preds = od.predict(all_X_mask)
Visualize:
[21]:
plot_feature_outlier_image(od_preds,
all_X_mask,
X_recon=all_X_recon,
max_instances=all_X_mask.shape[0],
n_channels=3)
Predict outliers on a subset of features
The sensitivity of the outlier detector can be controlled not only via the threshold
, but also by selecting the percentage of the features used for the instance level outlier score computation. For instance, we might want to flag outliers if 40% of the features (pixels for images) have an average outlier score above the threshold. This is possible via the outlier_perc
argument in the predict
function. It specifies the percentage of the features that are used for outlier detection, sorted in descending order of outlier score.
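The effect of outlier_perc can be sketched as follows (a rough illustration of top-percentage aggregation; the exact aggregation inside alibi-detect may differ):

```python
import numpy as np

def instance_score_topk(feature_scores, outlier_perc=100.):
    """Aggregate per-feature outlier scores into an instance-level score,
    using only the top `outlier_perc` percent of features sorted in
    descending score order."""
    fs = np.sort(feature_scores.reshape(feature_scores.shape[0], -1), axis=1)[:, ::-1]
    n_keep = max(1, int(np.ceil(fs.shape[1] * outlier_perc / 100.)))
    return fs[:, :n_keep].mean(axis=1)

fs = np.array([[4., 3., 2., 1.]])
instance_score_topk(fs, 50)   # mean of the 2 highest scores -> 3.5
instance_score_topk(fs, 100)  # mean of all scores -> 2.5
```
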
[22]:
perc_list = [20, 40, 60, 80, 100]
all_perc_scores = []
for perc in perc_list:
od_preds_perc = od.predict(all_X_mask, outlier_perc=perc)
iscore = od_preds_perc['data']['instance_score']
all_perc_scores.append(iscore)
Visualize outlier scores vs. mask sizes and percentage of features used:
[23]:
x_plt = [0] + x_plt
for aps in all_perc_scores:
plt.plot(x_plt, aps)
plt.xticks(x_plt)
plt.legend(perc_list)
plt.title('Outlier Score for Increasing Mask Size and Different Feature Subsets')
plt.xlabel('Mask Size')
plt.ylabel('Outlier Score')
plt.show()
Infer outlier threshold value
Finding good threshold values can be tricky since they are typically not easy to interpret. The infer_threshold
method helps find a sensible value. We need to pass a batch of instances X
and specify what percentage of those we consider to be normal via threshold_perc
.
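Assuming the threshold is simply set to the threshold_perc percentile of the instance-level scores on the batch, the idea can be sketched as follows (a simplified illustration with made-up scores, not the library internals):

```python
import numpy as np

# Hypothetical instance-level outlier scores for a batch of 5 instances.
scores = np.array([0.001, 0.002, 0.003, 0.004, 0.1])

# Treat `threshold_perc` percent of the batch as normal: the threshold
# becomes that percentile of the scores; anything above it is an outlier.
threshold_perc = 80
threshold = np.percentile(scores, threshold_perc)
is_outlier = (scores > threshold).astype(int)
```
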
[24]:
print('Current threshold: {}'.format(od.threshold))
od.infer_threshold(X, threshold_perc=99) # assume 1% of the training data are outliers
print('New threshold: {}'.format(od.threshold))
Current threshold: 0.015
New threshold: 0.0017021061840932791
This page was generated from examples/od_aegmm_kddcup.ipynb.
AEGMM and VAEGMM outlier detection on KDD Cup '99 dataset
Method
The AEGMM method follows the Deep Autoencoding Gaussian Mixture Model for Unsupervised Anomaly Detection (ICLR 2018) paper. The encoder compresses the data while the reconstructed instances generated by the decoder are used to create additional features based on the reconstruction error between the input and the reconstructions. These features are combined with the encodings and fed into a Gaussian Mixture Model (GMM). Training of the AEGMM model is unsupervised on normal (inlier) data. The sample energy of the GMM can then be used to determine whether an instance is an outlier (high sample energy) or not (low sample energy). VAEGMM on the other hand uses a variational autoencoder instead of a plain autoencoder.
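The kind of reconstruction-derived features fed into the GMM can be sketched as follows (a hypothetical illustration of cosine similarity plus relative Euclidean distance; the exact definitions used by alibi-detect's eucl_cosim_features may differ):

```python
import numpy as np

def eucl_cosim_sketch(x, x_recon, eps=1e-12):
    """Per-instance cosine similarity and relative Euclidean distance
    between inputs and their reconstructions, stacked as extra features."""
    num = np.sum(x * x_recon, axis=1)
    cos = num / (np.linalg.norm(x, axis=1) * np.linalg.norm(x_recon, axis=1) + eps)
    eucl = np.linalg.norm(x - x_recon, axis=1) / (np.linalg.norm(x, axis=1) + eps)
    return np.stack([cos, eucl], axis=1)

# first instance reconstructed perfectly, second one rotated away
x = np.array([[1., 0.], [1., 1.]])
x_recon = np.array([[1., 0.], [0., 1.]])
feats = eucl_cosim_sketch(x, x_recon)  # shape (2, 2)
```
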
Dataset
The outlier detector needs to detect computer network intrusions using TCP dump data for a local-area network (LAN) simulating a typical U.S. Air Force LAN. A connection is a sequence of TCP packets starting and ending at some well defined times, between which data flows to and from a source IP address to a target IP address under some well defined protocol. Each connection is labeled as either normal, or as an attack.
There are 4 types of attacks in the dataset:
DOS: denial-of-service, e.g. syn flood;
R2L: unauthorized access from a remote machine, e.g. guessing password;
U2R: unauthorized access to local superuser (root) privileges;
probing: surveillance and other probing, e.g., port scanning.
The dataset contains about 5 million connection records.
There are 3 types of features:
basic features of individual connections, e.g. duration of connection
content features within a connection, e.g. number of failed login attempts
traffic features within a 2-second window, e.g. number of connections to the same host as the current connection
[1]:
import logging
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import os
import pandas as pd
import seaborn as sns
from sklearn.metrics import confusion_matrix, f1_score
import tensorflow as tf
tf.keras.backend.clear_session()
from tensorflow.keras.layers import Dense, InputLayer
from alibi_detect.datasets import fetch_kdd
from alibi_detect.models.autoencoder import eucl_cosim_features
from alibi_detect.od import OutlierAEGMM, OutlierVAEGMM
from alibi_detect.utils.data import create_outlier_batch
from alibi_detect.utils.fetching import fetch_detector
from alibi_detect.utils.saving import save_detector, load_detector
from alibi_detect.utils.visualize import plot_instance_score, plot_feature_outlier_tabular, plot_roc
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
Load dataset
We only keep the continuous features (18 out of the 41 features).
[2]:
kddcup = fetch_kdd(percent10=True) # only load 10% of the dataset
print(kddcup.data.shape, kddcup.target.shape)
(494021, 18) (494021,)
Assume that a model is trained on normal instances of the dataset (not outliers) and standardization is applied:
[3]:
np.random.seed(0)
normal_batch = create_outlier_batch(kddcup.data, kddcup.target, n_samples=400000, perc_outlier=0)
X_train, y_train = normal_batch.data.astype('float32'), normal_batch.target
print(X_train.shape, y_train.shape)
print('{}% outliers'.format(100 * y_train.mean()))
(400000, 18) (400000,)
0.0% outliers
[4]:
mean, stdev = X_train.mean(axis=0), X_train.std(axis=0)
Apply standardization:
[5]:
X_train = (X_train - mean) / stdev
Load or define AEGMM outlier detector
The pretrained outlier and adversarial detectors used in the example notebooks can be found here. You can use the built-in fetch_detector
function which saves the pretrained models in a local directory filepath
and loads the detector. Alternatively, you can train a detector from scratch:
[6]:
load_outlier_detector = True
[7]:
filepath = 'my_path' # change to directory (absolute path) where model is downloaded
if load_outlier_detector: # load pretrained outlier detector
detector_type = 'outlier'
dataset = 'kddcup'
detector_name = 'OutlierAEGMM'
od = fetch_detector(filepath, detector_type, dataset, detector_name)
filepath = os.path.join(filepath, detector_name)
else: # define model, initialize, train and save outlier detector
# the model defined here is similar to the one defined in the original paper
n_features = X_train.shape[1]
latent_dim = 1
n_gmm = 2 # nb of components in GMM
encoder_net = tf.keras.Sequential(
[
InputLayer(input_shape=(n_features,)),
Dense(60, activation=tf.nn.tanh),
Dense(30, activation=tf.nn.tanh),
Dense(10, activation=tf.nn.tanh),
Dense(latent_dim, activation=None)
])
decoder_net = tf.keras.Sequential(
[
InputLayer(input_shape=(latent_dim,)),
Dense(10, activation=tf.nn.tanh),
Dense(30, activation=tf.nn.tanh),
Dense(60, activation=tf.nn.tanh),
Dense(n_features, activation=None)
])
gmm_density_net = tf.keras.Sequential(
[
InputLayer(input_shape=(latent_dim + 2,)),
Dense(10, activation=tf.nn.tanh),
Dense(n_gmm, activation=tf.nn.softmax)
])
# initialize outlier detector
od = OutlierAEGMM(threshold=None, # threshold for outlier score
encoder_net=encoder_net, # can also pass AEGMM model instead
decoder_net=decoder_net, # of separate encoder, decoder
gmm_density_net=gmm_density_net, # and gmm density net
n_gmm=n_gmm,
recon_features=eucl_cosim_features) # fn used to derive features
# from the reconstructed
# instances based on cosine
# similarity and Eucl distance
# train
od.fit(X_train,
epochs=50,
batch_size=1024,
save_path=filepath,
verbose=True)
# save the trained outlier detector
save_detector(od, filepath)
WARNING:alibi_detect.od.aegmm:No threshold level set. Need to infer threshold using `infer_threshold`.
The warning tells us we still need to set the outlier threshold. This can be done with the infer_threshold
method. We need to pass a batch of instances and specify what percentage of those we consider to be normal via threshold_perc
. Let's assume we have some data which we know contains around 5% outliers. The percentage of outliers can be set with perc_outlier
in the create_outlier_batch
function.
[8]:
np.random.seed(0)
perc_outlier = 5
threshold_batch = create_outlier_batch(kddcup.data, kddcup.target, n_samples=1000, perc_outlier=perc_outlier)
X_threshold, y_threshold = threshold_batch.data.astype('float32'), threshold_batch.target
X_threshold = (X_threshold - mean) / stdev
print('{}% outliers'.format(100 * y_threshold.mean()))
5.0% outliers
[9]:
od.infer_threshold(X_threshold, threshold_perc=100-perc_outlier)
print('New threshold: {}'.format(od.threshold))
New threshold: 29.55636606216426
Save outlier detector with updated threshold:
[10]:
save_detector(od, filepath)
Detect outliers
We now generate a batch of data with 10% outliers and detect the outliers in the batch.
[11]:
np.random.seed(1)
outlier_batch = create_outlier_batch(kddcup.data, kddcup.target, n_samples=1000, perc_outlier=10)
X_outlier, y_outlier = outlier_batch.data.astype('float32'), outlier_batch.target
X_outlier = (X_outlier - mean) / stdev
print(X_outlier.shape, y_outlier.shape)
print('{}% outliers'.format(100 * y_outlier.mean()))
(1000, 18) (1000,)
10.0% outliers
Predict outliers:
[12]:
od_preds = od.predict(X_outlier, return_instance_score=True)
Display results
F1 score and confusion matrix:
[13]:
labels = outlier_batch.target_names
y_pred = od_preds['data']['is_outlier']
f1 = f1_score(y_outlier, y_pred)
print('F1 score: {:.4f}'.format(f1))
cm = confusion_matrix(y_outlier, y_pred)
df_cm = pd.DataFrame(cm, index=labels, columns=labels)
sns.heatmap(df_cm, annot=True, cbar=True, linewidths=.5)
plt.show()
F1 score: 0.9641
Plot instance level outlier scores vs. the outlier threshold:
[14]:
plot_instance_score(od_preds, y_outlier, labels, od.threshold, ylim=(None, None))
We can also plot the ROC curve for the outlier scores of the detector:
[15]:
roc_data = {'AEGMM': {'scores': od_preds['data']['instance_score'], 'labels': y_outlier}}
plot_roc(roc_data)
Investigate results
We can visualize the encodings of the instances in the latent space and the features derived from the instance reconstructions by the decoder. The encodings and features are then fed into the GMM density network.
[16]:
enc = od.aegmm.encoder(X_outlier) # encoding
X_recon = od.aegmm.decoder(enc) # reconstructed instances
recon_features = od.aegmm.recon_features(X_outlier, X_recon) # reconstructed features
[17]:
df = pd.DataFrame(dict(enc=enc[:, 0].numpy(),
cos=recon_features[:, 0].numpy(),
eucl=recon_features[:, 1].numpy(),
label=y_outlier))
groups = df.groupby('label')
fig, ax = plt.subplots()
for name, group in groups:
ax.plot(group.enc, group.cos, marker='o',
linestyle='', ms=6, label=labels[name])
plt.title('Encoding vs. Cosine Similarity')
plt.xlabel('Encoding')
plt.ylabel('Cosine Similarity')
ax.legend()
plt.show()
[18]:
fig, ax = plt.subplots()
for name, group in groups:
ax.plot(group.enc, group.eucl, marker='o',
linestyle='', ms=6, label=labels[name])
plt.title('Encoding vs. Relative Euclidean Distance')
plt.xlabel('Encoding')
plt.ylabel('Relative Euclidean Distance')
ax.legend()
plt.show()
A lot of the outliers are already separated well in the latent space.
Use VAEGMM outlier detector
We can again instantiate the pretrained VAEGMM detector from the Google Cloud Bucket. You can use the built-in fetch_detector
function which saves the pretrained models in a local directory filepath
and loads the detector. Alternatively, you can train a detector from scratch:
[19]:
load_outlier_detector = True
[20]:
filepath = 'my_path' # change to directory (absolute path) where model is downloaded
if load_outlier_detector: # load pretrained outlier detector
detector_type = 'outlier'
dataset = 'kddcup'
detector_name = 'OutlierVAEGMM'
od = fetch_detector(filepath, detector_type, dataset, detector_name)
filepath = os.path.join(filepath, detector_name)
else: # define model, initialize, train and save outlier detector
# the model defined here is similar to the one defined in
# the OutlierVAE notebook
n_features = X_train.shape[1]
latent_dim = 2
n_gmm = 2
encoder_net = tf.keras.Sequential(
[
InputLayer(input_shape=(n_features,)),
Dense(20, activation=tf.nn.relu),
Dense(15, activation=tf.nn.relu),
Dense(7, activation=tf.nn.relu)
])
decoder_net = tf.keras.Sequential(
[
InputLayer(input_shape=(latent_dim,)),
Dense(7, activation=tf.nn.relu),
Dense(15, activation=tf.nn.relu),
Dense(20, activation=tf.nn.relu),
Dense(n_features, activation=None)
])
gmm_density_net = tf.keras.Sequential(
[
InputLayer(input_shape=(latent_dim + 2,)),
Dense(10, activation=tf.nn.relu),
Dense(n_gmm, activation=tf.nn.softmax)
])
# initialize outlier detector
od = OutlierVAEGMM(threshold=None,
encoder_net=encoder_net,
decoder_net=decoder_net,
gmm_density_net=gmm_density_net,
n_gmm=n_gmm,
latent_dim=latent_dim,
samples=10,
recon_features=eucl_cosim_features)
# train
od.fit(X_train,
epochs=50,
batch_size=1024,
cov_elbo=dict(sim=.0025), # standard deviation assumption
verbose=True) # for elbo training
# save the trained outlier detector
save_detector(od, filepath)
WARNING:alibi_detect.od.vaegmm:No threshold level set. Need to infer threshold using `infer_threshold`.
Need to infer the threshold again:
[21]:
od.infer_threshold(X_threshold, threshold_perc=100-perc_outlier)
print('New threshold: {}'.format(od.threshold))
New threshold: 11.703912305831903
Save outlier detector with updated threshold:
[22]:
save_detector(od, filepath)
Detect outliers and display results
Predict:
[23]:
od_preds = od.predict(X_outlier, return_instance_score=True)
F1 score and confusion matrix:
[24]:
labels = outlier_batch.target_names
y_pred = od_preds['data']['is_outlier']
f1 = f1_score(y_outlier, y_pred)
print('F1 score: {:.4f}'.format(f1))
cm = confusion_matrix(y_outlier, y_pred)
df_cm = pd.DataFrame(cm, index=labels, columns=labels)
sns.heatmap(df_cm, annot=True, cbar=True, linewidths=.5)
plt.show()
F1 score: 0.9261
Plot instance level outlier scores vs. the outlier threshold:
[25]:
plot_instance_score(od_preds, y_outlier, labels, od.threshold, ylim=(None, None))
You can zoom in by adjusting the min and max values in ylim
. We can also compare the VAEGMM ROC curve with AEGMM:
[26]:
roc_data['VAEGMM'] = {'scores': od_preds['data']['instance_score'], 'labels': y_outlier}
plot_roc(roc_data)
This page was generated from examples/od_prophet_weather.ipynb.
Time series outlier detection using Prophet on weather data
Method
The Prophet outlier detector uses the Prophet time series forecasting package, explained in this excellent paper. The underlying Prophet model is a decomposable univariate time series model combining trend, seasonality and holiday effects. The model forecast also includes an uncertainty interval around the estimated trend component, using the MAP estimate of the extrapolated model. Alternatively, full Bayesian inference can be done at the expense of increased compute. The upper and lower values of the uncertainty interval can then be used as outlier thresholds for each point in time. First, the distance from the observed value to the nearest uncertainty boundary (upper or lower) is computed. If the observation is within the boundaries, the outlier score equals the negative distance. As a result, the outlier score is the lowest when the observation equals the model prediction. If the observation is outside of the boundaries, the score equals the distance measure and the observation is flagged as an outlier. One of the main drawbacks of the method however is that you need to refit the model as new data comes in. This is undesirable for applications with high throughput and real-time detection.
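The scoring rule described above can be sketched as follows (a simplified illustration with hypothetical forecast bounds; the detector itself derives the lower and upper bounds from the Prophet forecast):

```python
import numpy as np

def prophet_score(y, yhat_lower, yhat_upper):
    """Outlier score: negative distance to the nearest uncertainty boundary
    when inside the interval, positive distance when outside.
    A score > 0 flags an outlier."""
    below = yhat_lower - y     # > 0 when y is under the lower bound
    above = y - yhat_upper     # > 0 when y is over the upper bound
    return np.maximum(below, above)

# hypothetical interval [2, 8]: inside, above the upper bound, below the lower
y = np.array([5.0, 9.5, 1.0])
score = prophet_score(y, yhat_lower=np.array([2.0, 2.0, 2.0]),
                      yhat_upper=np.array([8.0, 8.0, 8.0]))
```
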
Dataset
The example uses a weather time series dataset recorded by the Max Planck Institute for Biogeochemistry. The dataset contains 14 different features such as air temperature, atmospheric pressure, and humidity. These were collected every 10 minutes, beginning in 2003. Like the TensorFlow time series tutorial, we only use data collected between 2009 and 2016.
[1]:
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import tensorflow as tf
from alibi_detect.od import OutlierProphet
from alibi_detect.utils.fetching import fetch_detector
from alibi_detect.utils.saving import save_detector, load_detector
Load dataset
[2]:
zip_path = tf.keras.utils.get_file(
origin='https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip',
fname='jena_climate_2009_2016.csv.zip',
extract=True
)
csv_path, _ = os.path.splitext(zip_path)
df = pd.read_csv(csv_path)
df['Date Time'] = pd.to_datetime(df['Date Time'], format='%d.%m.%Y %H:%M:%S')
print(df.shape)
df.head()
(420551, 15)
[2]:
Date Time  p (mbar)  T (degC)  Tpot (K)  Tdew (degC)  rh (%)  VPmax (mbar)  VPact (mbar)  VPdef (mbar)  sh (g/kg)  H2OC (mmol/mol)  rho (g/m**3)  wv (m/s)  max. wv (m/s)  wd (deg)
0  2009-01-01 00:10:00  996.52  -8.02  265.40  -8.90  93.3  3.33  3.11  0.22  1.94  3.12  1307.75  1.03  1.75  152.3
1  2009-01-01 00:20:00  996.57  -8.41  265.01  -9.28  93.4  3.23  3.02  0.21  1.89  3.03  1309.80  0.72  1.50  136.1
2  2009-01-01 00:30:00  996.53  -8.51  264.91  -9.31  93.9  3.21  3.01  0.20  1.88  3.02  1310.24  0.19  0.63  171.6
3  2009-01-01 00:40:00  996.51  -8.31  265.12  -9.07  94.2  3.26  3.07  0.19  1.92  3.08  1309.19  0.34  0.50  198.0
4  2009-01-01 00:50:00  996.51  -8.27  265.15  -9.04  94.1  3.27  3.08  0.19  1.92  3.09  1309.00  0.32  0.63  214.3
Select subset to test Prophet model on:
[3]:
n_prophet = 10000
The Prophet model expects a DataFrame with 2 columns: one named ds
with the timestamps and one named y
with the time series to be evaluated. We will just look at the temperature data:
[4]:
d = {'ds': df['Date Time'][:n_prophet], 'y': df['T (degC)'][:n_prophet]}
df_T = pd.DataFrame(data=d)
print(df_T.shape)
df_T.head()
(10000, 2)
[4]:
ds  y
0  2009-01-01 00:10:00  -8.02
1  2009-01-01 00:20:00  -8.41
2  2009-01-01 00:30:00  -8.51
3  2009-01-01 00:40:00  -8.31
4  2009-01-01 00:50:00  -8.27
[5]:
plt.plot(df_T['ds'], df_T['y'])
plt.title('T (in °C) over time')
plt.xlabel('Time')
plt.ylabel('T (in °C)')
plt.show()
Load or define outlier detector
The pretrained outlier and adversarial detectors used in the example notebooks can be found here. You can use the built-in fetch_detector
function which saves the pretrained models in a local directory filepath
and loads the detector. Alternatively, you can train a detector from scratch:
[6]:
load_outlier_detector = False
[7]:
filepath = 'my_path' # change to directory where model is downloaded
if load_outlier_detector: # load pretrained outlier detector
detector_type = 'outlier'
dataset = 'weather'
detector_name = 'OutlierProphet'
od = fetch_detector(filepath, detector_type, dataset, detector_name)
filepath = os.path.join(filepath, detector_name)
else: # initialize, fit and save outlier detector
od = OutlierProphet(threshold=.9)
od.fit(df_T)
save_detector(od, filepath)
INFO:fbprophet:Disabling yearly seasonality. Run prophet with yearly_seasonality=True to override this.
Please check out the documentation as well as the original Prophet documentation on how to customize the Prophet-based outlier detector and add seasonalities, holidays, opt for a saturating logistic growth model or apply parameter regularization.
Predict outliers on test data
Define the test data. It is important that the timestamps of the test data follow the training data. We check this below by comparing the first few rows of the test DataFrame with the last few of the training DataFrame:
[8]:
n_periods = 1000
d = {'ds': df['Date Time'][n_prophet:n_prophet+n_periods],
'y': df['T (degC)'][n_prophet:n_prophet+n_periods]}
df_T_test = pd.DataFrame(data=d)
df_T_test.head()
[8]:
ds  y  

10000  2009-03-11 10:50:00  4.12
10001  2009-03-11 11:00:00  4.62
10002  2009-03-11 11:10:00  4.29
10003  2009-03-11 11:20:00  3.95
10004  2009-03-11 11:30:00  3.96
[9]:
df_T.tail()
[9]:
ds  y  

9995  2009-03-11 10:00:00  2.69
9996  2009-03-11 10:10:00  2.98
9997  2009-03-11 10:20:00  3.66
9998  2009-03-11 10:30:00  4.21
9999  2009-03-11 10:40:00  4.19
Predict outliers on test data:
[10]:
od_preds = od.predict(
df_T_test,
return_instance_score=True,
return_forecast=True
)
Visualize results
We can first visualize our predictions with Prophet's built-in plotting functionality. This also allows us to include historical predictions:
[11]:
future = od.model.make_future_dataframe(periods=n_periods, freq='10T', include_history=True)
forecast = od.model.predict(future)
fig = od.model.plot(forecast)
We can also plot the breakdown of the different components in the forecast. Since we did not do full Bayesian inference with mcmc_samples
, the uncertainty intervals of the forecast are determined by the MAP estimate of the extrapolated trend.
[12]:
fig = od.model.plot_components(forecast)
It is clear that the further we predict into the future, the wider the uncertainty intervals which determine the outlier threshold.
Let's overlay the actual data with the upper and lower outlier threshold predictions and check where we predicted outliers:
[13]:
forecast['y'] = df['T (degC)'][:n_prophet+n_periods]
[14]:
pd.plotting.register_matplotlib_converters() # needed to plot timestamps
forecast[-n_periods:].plot(x='ds', y=['y', 'yhat', 'yhat_upper', 'yhat_lower'])
plt.title('Predicted T (in °C) over time')
plt.xlabel('Time')
plt.ylabel('T (in °C)')
plt.show()
Outlier scores and predictions:
[15]:
od_preds['data']['forecast']['threshold'] = np.zeros(n_periods)
od_preds['data']['forecast'][-n_periods:].plot(x='ds', y=['score', 'threshold'])
plt.title('Outlier score over time')
plt.xlabel('Time')
plt.ylabel('Outlier score')
plt.show()
The outlier scores naturally trend down as uncertainty increases when we predict further into the future.
Let's look at some individual outliers:
[16]:
df_fcst = od_preds['data']['forecast']
df_outlier = df_fcst.loc[df_fcst['score'] > 0]
[17]:
print('Number of outliers: {}'.format(df_outlier.shape[0]))
df_outlier[['ds', 'yhat', 'yhat_lower', 'yhat_upper', 'y']]
Number of outliers: 42
[17]:
ds  yhat  yhat_lower  yhat_upper  y  

166  2009-03-12 14:30:00  5.826341  2.359631  9.523064  2.32
279  2009-03-13 09:20:00  2.438507  1.834116  6.408030  6.60
280  2009-03-13 09:30:00  2.546610  1.484339  6.560092  7.22
281  2009-03-13 09:40:00  2.661307  1.238431  6.712489  7.11
282  2009-03-13 09:50:00  2.782348  1.284963  6.631783  7.22
283  2009-03-13 10:00:00  2.909426  1.360733  7.213170  7.50
284  2009-03-13 10:10:00  3.042175  0.938601  7.138561  7.71
285  2009-03-13 10:20:00  3.180168  0.542519  7.159230  7.93
286  2009-03-13 10:30:00  3.322914  0.749686  7.319972  7.98
287  2009-03-13 10:40:00  3.469862  0.639899  7.248605  7.97
288  2009-03-13 10:50:00  3.620397  0.536662  7.690310  8.11
289  2009-03-13 11:00:00  3.773845  0.407588  7.735500  8.31
290  2009-03-13 11:10:00  3.929473  0.242533  8.195561  8.22
291  2009-03-13 11:20:00  4.086493  0.070751  8.352415  8.47
292  2009-03-13 11:30:00  4.244068  0.046052  8.216369  8.65
293  2009-03-13 11:40:00  4.401316  0.398563  8.058718  8.73
294  2009-03-13 11:50:00  4.557318  0.532095  8.671582  8.86
295  2009-03-13 12:00:00  4.711127  0.657084  8.844659  9.03
296  2009-03-13 12:10:00  4.861773  0.861597  8.995459  9.20
297  2009-03-13 12:20:00  5.008280  0.987469  9.190047  9.27
310  2009-03-13 14:30:00  6.146584  2.098111  10.394298  10.43
314  2009-03-13 15:10:00  6.108253  1.736977  10.220202  10.24
315  2009-03-13 15:20:00  6.070174  2.139051  10.002806  10.02
316  2009-03-13 15:30:00  6.021822  1.536603  10.264883  10.40
317  2009-03-13 15:40:00  5.963937  1.977076  9.995948  10.32
320  2009-03-13 16:10:00  5.741674  1.486856  10.023755  10.07
324  2009-03-13 16:50:00  5.368270  1.421167  9.341373  9.52
435  2009-03-14 11:20:00  4.284622  0.291427  8.510237  8.77
437  2009-03-14 11:40:00  4.597406  0.229902  9.058397  9.11
439  2009-03-14 12:00:00  4.904994  0.715068  9.270239  9.51
440  2009-03-14 12:10:00  5.054459  0.779145  9.515633  9.71
442  2009-03-14 12:30:00  5.339857  0.972692  9.709758  9.76
469  2009-03-14 17:00:00  5.406697  0.555661  9.653830  9.66
472  2009-03-14 17:30:00  5.096989  0.472095  9.414530  9.57
473  2009-03-14 17:40:00  4.996114  0.478616  9.479434  9.49
474  2009-03-14 17:50:00  4.897644  0.509235  9.347031  9.42
516  2009-03-15 00:50:00  2.658244  2.097107  7.294615  7.56
517  2009-03-15 01:00:00  2.619797  2.156021  7.082319  7.53
519  2009-03-15 01:20:00  2.547890  1.977093  7.401913  7.42
521  2009-03-15 01:40:00  2.482139  2.358331  7.093606  7.36
522  2009-03-15 01:50:00  2.451274  2.391906  7.132587  7.35
523  2009-03-15 02:00:00  2.421542  2.561413  7.114390  7.28
This page was generated from examples/od_sr_synth.ipynb.
Time series outlier detection with Spectral Residuals on synthetic data
Method
The Spectral Residual outlier detector is based on the paper Time-Series Anomaly Detection Service at Microsoft and is suitable for unsupervised online anomaly detection in univariate time series data. The algorithm first computes the Fourier Transform of the original data. Then it computes the spectral residual of the log amplitude of the transformed signal before applying the Inverse Fourier Transform to map the sequence back from the frequency to the time domain. This sequence is called the saliency map. The anomaly score is then computed as the relative difference between the saliency map values and their moving averages. If this score is above a threshold, the value at a specific timestep is flagged as an outlier. For more details, please check out the paper.
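The saliency map computation described above can be sketched as follows (a condensed illustration of the transform steps; the actual detector additionally pads estimated points and scores the saliency map against its own moving average, and the window size and injected spike below are arbitrary choices for this sketch):

```python
import numpy as np

def saliency_map(x, window_amp=3):
    """Spectral Residual saliency map (simplified): FFT, spectral residual
    of the log amplitude, inverse FFT back to the time domain."""
    fft = np.fft.fft(x)
    log_amp = np.log(np.abs(fft) + 1e-12)
    # moving average of the log amplitude spectrum
    kernel = np.ones(window_amp) / window_amp
    avg_log_amp = np.convolve(log_amp, kernel, mode='same')
    residual = log_amp - avg_log_amp              # spectral residual
    phase = np.angle(fft)
    return np.abs(np.fft.ifft(np.exp(residual + 1j * phase)))

x = np.sin(np.linspace(0, 20 * np.pi, 200))
x[100] += 5.                                      # inject a spike
sm = saliency_map(x)
# the saliency map should peak near the injected anomaly
```
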
Dataset
We test the outlier detector on a synthetic dataset generated with the TimeSynth package. It allows you to generate a wide range of time series (e.g. pseudo-periodic, autoregressive or Gaussian Process generated signals) and noise types (white or red noise). It can be installed as follows:
!pip install git+https://github.com/TimeSynth/TimeSynth.git
[1]:
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score, recall_score
import timesynth as ts
from alibi_detect.od import SpectralResidual
from alibi_detect.utils.perturbation import inject_outlier_ts
from alibi_detect.utils.saving import save_detector, load_detector
from alibi_detect.utils.visualize import plot_instance_score, plot_feature_outlier_ts
Create univariate time series
Define the number of sampled points and the type of simulated time series. We use TimeSynth to generate a sinusoidal signal with Gaussian noise.
[2]:
n_points = 100000
[3]:
# timestamps
time_sampler = ts.TimeSampler(stop_time=n_points // 4)
time_samples = time_sampler.sample_regular_time(num_points=n_points)
# harmonic time series with Gaussian noise
sinusoid = ts.signals.Sinusoidal(frequency=0.25)
white_noise = ts.noise.GaussianNoise(std=0.1)
ts_harm = ts.TimeSeries(signal_generator=sinusoid, noise_generator=white_noise)
samples, signals, errors = ts_harm.sample(time_samples)
X = samples.reshape(-1, 1).astype(np.float32)
print(X.shape)
(100000, 1)
We can inject noise in the time series via inject_outlier_ts. The noise can be regulated via the percentage of outliers (perc_outlier), the strength of the perturbation (n_std) and the minimum size of the noise perturbation (min_std):
[4]:
data = inject_outlier_ts(X, perc_outlier=10, perc_window=10, n_std=2., min_std=1.)
X_outlier, y_outlier, labels = data.data, data.target.astype(int), data.target_names
print(X_outlier.shape, y_outlier.shape)
(100000, 1) (100000,)
Visualize part of the original and perturbed time series:
[5]:
n_plot = 200
[6]:
plt.plot(time_samples[:n_plot], X[:n_plot], marker='o', markersize=4, label='sample')
plt.plot(time_samples[:n_plot], signals[:n_plot], marker='*', markersize=4, label='signal')
plt.plot(time_samples[:n_plot], errors[:n_plot], marker='.', markersize=4, label='noise')
plt.xlabel('Time')
plt.ylabel('Magnitude')
plt.title('Original sinusoid with noise')
plt.legend()
plt.show();
Perturbed data:
[7]:
plt.plot(time_samples[:n_plot], X[:n_plot], marker='o', markersize=4, label='original')
plt.plot(time_samples[:n_plot], X_outlier[:n_plot], marker='*', markersize=4, label='perturbed')
plt.xlabel('Time')
plt.ylabel('Magnitude')
plt.title('Original vs. perturbed data')
plt.legend()
plt.show();
Define Spectral Residual outlier detector
[8]:
od = SpectralResidual(
threshold=None, # threshold for outlier score
window_amp=20, # window for the average log amplitude
window_local=20, # window for the average saliency map
n_est_points=20 # nb of estimated points padded to the end of the sequence
)
WARNING:alibi_detect.od.sr:No threshold level set. Need to infer threshold using `infer_threshold`.
The warning tells us that we need to set the outlier threshold. This can be done with the infer_threshold method. We need to pass a batch of instances and specify what percentage of those we consider to be normal via threshold_perc. Let's assume we have some data which we know contains around 10% outliers:
[9]:
X_threshold = X_outlier[:10000, :]
Let's infer the threshold:
[10]:
od.infer_threshold(X_threshold, time_samples[:10000], threshold_perc=90)
print('New threshold: {:.4f}'.format(od.threshold))
New threshold: 1.1818
Let's save the outlier detector with the updated threshold:
[11]:
filepath = 'my_path'
save_detector(od, filepath)
We can load the same detector via load_detector:
[12]:
od = load_detector(filepath)
Detect outliers
Predict outliers:
[13]:
od_preds = od.predict(X_outlier, time_samples, return_instance_score=True)
Display results
F1 score, accuracy, recall and confusion matrix:
[14]:
y_pred = od_preds['data']['is_outlier']
f1 = f1_score(y_outlier, y_pred)
acc = accuracy_score(y_outlier, y_pred)
rec = recall_score(y_outlier, y_pred)
print('F1 score: {}  Accuracy: {}  Recall: {}'.format(f1, acc, rec))
cm = confusion_matrix(y_outlier, y_pred)
df_cm = pd.DataFrame(cm, index=labels, columns=labels)
sns.heatmap(df_cm, annot=True, cbar=True, linewidths=.5)
plt.show()
F1 score: 0.8922542204568024  Accuracy: 0.9783  Recall: 0.8985
Plot the outlier scores of the time series vs. the outlier threshold:
[15]:
plot_instance_score(od_preds, y_outlier, labels, od.threshold)
Let's zoom in on a smaller time scale to get a clearer picture:
[17]:
plot_feature_outlier_ts(od_preds,
X_outlier,
od.threshold,
window=(1000, 1050),
t=time_samples,
X_orig=X)
This page was generated from examples/od_seq2seq_synth.ipynb.
Time series outlier detection with Seq2Seq models on synthetic data
Method
The Sequence-to-Sequence (Seq2Seq) outlier detector consists of 2 main building blocks: an encoder and a decoder. The encoder consists of a Bidirectional LSTM which processes the input sequence and initializes the decoder. The LSTM decoder then makes sequential predictions for the output sequence. In our case, the decoder aims to reconstruct the input sequence. If the input data cannot be reconstructed well, the reconstruction error is high and the data can be flagged as an outlier. The reconstruction error is measured as the mean squared error (MSE) between the input and the reconstructed instance.
Since even for normal data the reconstruction error can be state-dependent, we add an outlier threshold estimator network to the Seq2Seq model. This network takes in the hidden state of the decoder at each timestep and predicts the estimated reconstruction error for normal data. As a result, the outlier threshold is not static and becomes a function of the model state. This is similar to Park et al. (2017), but while they train the threshold estimator separately from the Seq2Seq model with a Support Vector Regressor, we train a neural net regression network end-to-end with the Seq2Seq model.
The detector is first trained on a batch of unlabeled, but normal (inlier) data. Unsupervised training is desirable since labeled data is often scarce. The Seq2Seq outlier detector is suitable for both univariate and multivariate time series.
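The encoder-decoder idea above can be sketched in a few lines of Keras. This is a simplified stand-in that reconstructs the input sequence with a Bidirectional LSTM encoder and an LSTM decoder, not OutlierSeq2Seq's exact architecture, and it omits the threshold estimator network:

```python
import tensorflow as tf
from tensorflow.keras.layers import Bidirectional, Dense, Input, LSTM, TimeDistributed

def seq2seq_reconstructor(seq_len: int, n_features: int, latent_dim: int = 32) -> tf.keras.Model:
    # encoder: a Bidirectional LSTM processes the input sequence
    inputs = Input(shape=(seq_len, n_features))
    enc = Bidirectional(LSTM(latent_dim, return_sequences=True))(inputs)
    # decoder: an LSTM makes sequential predictions, here reconstructing the input
    dec = LSTM(latent_dim, return_sequences=True)(enc)
    recon = TimeDistributed(Dense(n_features))(dec)
    return tf.keras.Model(inputs, recon)

model = seq2seq_reconstructor(seq_len=50, n_features=2)
model.compile(optimizer='adam', loss='mse')  # MSE reconstruction error, as in the text
```

Instances whose reconstruction error under such a model is high would then be candidates for flagging as outliers.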
Dataset
We test the outlier detector on a synthetic dataset generated with the TimeSynth package. It allows you to generate a wide range of time series (e.g. pseudo-periodic, autoregressive or Gaussian Process generated signals) and noise types (white or red noise). It can be installed as follows:
!pip install git+https://github.com/TimeSynth/TimeSynth.git
[1]:
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score, recall_score
import tensorflow as tf
import timesynth as ts
from alibi_detect.od import OutlierSeq2Seq
from alibi_detect.utils.perturbation import inject_outlier_ts
from alibi_detect.utils.saving import save_detector, load_detector
from alibi_detect.utils.visualize import plot_feature_outlier_ts, plot_roc
Create multivariate time series
Define number of sampled points and the type of simulated time series. We use TimeSynth to generate sinusoidal signals with noise.
[2]:
n_points = int(1e6) # number of timesteps
perc_train = 80 # percentage of instances used for training
perc_threshold = 10 # percentage of instances used to determine threshold
n_train = int(n_points * perc_train * .01)
n_threshold = int(n_points * perc_threshold * .01)
n_features = 2 # number of features in the time series
seq_len = 50 # sequence length
[3]:
# set random seed
np.random.seed(0)
# timestamps
time_sampler = ts.TimeSampler(stop_time=n_points // 4)
time_samples = time_sampler.sample_regular_time(num_points=n_points)
# create time series
ts1 = ts.TimeSeries(
signal_generator=ts.signals.Sinusoidal(frequency=0.25),
noise_generator=ts.noise.GaussianNoise(std=0.1)
)
samples1 = ts1.sample(time_samples)[0].reshape(-1, 1)
ts2 = ts.TimeSeries(
signal_generator=ts.signals.Sinusoidal(frequency=0.15),
noise_generator=ts.noise.RedNoise(std=.7, tau=0.5)
)
samples2 = ts2.sample(time_samples)[0].reshape(-1, 1)
# combine signals
X = np.concatenate([samples1, samples2], axis=1).astype(np.float32)
# split dataset into train, infer threshold and outlier detection sets
X_train = X[:n_train]
X_threshold = X[n_train:n_train+n_threshold]
X_outlier = X[n_train+n_threshold:]
# scale using the normal training data
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
X_train = (X_train - mu) / sigma
X_threshold = (X_threshold - mu) / sigma
X_outlier = (X_outlier - mu) / sigma
print(X_train.shape, X_threshold.shape, X_outlier.shape)
(800000, 2) (100000, 2) (100000, 2)
Visualize:
[4]:
n_features = X.shape[1]
istart, istop = 50, 100
for f in range(n_features):
    plt.plot(X_train[istart:istop, f], label='X_train')
    plt.title('Feature {}'.format(f))
    plt.xlabel('Time')
    plt.ylabel('Feature value')
    plt.legend()
    plt.show()
Load or define Seq2Seq outlier detector
[5]:
load_outlier_detector = False
[6]:
filepath = 'my_path' # change to directory where model is saved
if load_outlier_detector:  # load pretrained outlier detector
    od = load_detector(filepath)
else:  # define model, initialize, train and save outlier detector
    # initialize outlier detector
    od = OutlierSeq2Seq(n_features,
                        seq_len,
                        threshold=None,
                        latent_dim=100)
    # train
    od.fit(X_train,
           epochs=10,
           verbose=False)
    # save the trained outlier detector
    save_detector(od, filepath)
We still need to set the outlier threshold. This can be done with the infer_threshold method. We need to pass a time series of instances and specify what percentage of those we consider to be normal via threshold_perc. First we create outliers by injecting noise in the time series via inject_outlier_ts. The noise can be regulated via the percentage of outliers (perc_outlier), the strength of the perturbation (n_std) and the minimum size of the noise perturbation (min_std). Let's assume we have some data which we know contains around 10% outliers in either of the features:
[7]:
np.random.seed(0)
X_thr = X_threshold.copy()
data = inject_outlier_ts(X_threshold, perc_outlier=10, perc_window=10, n_std=2., min_std=1.)
X_threshold = data.data
print(X_threshold.shape)
(100000, 2)
Visualize outlier data used to determine the threshold:
[8]:
istart, istop = 0, 50
for f in range(n_features):
    plt.plot(X_threshold[istart:istop, f], label='outliers')
    plt.plot(X_thr[istart:istop, f], label='original')
    plt.title('Feature {}'.format(f))
    plt.xlabel('Time')
    plt.ylabel('Feature value')
    plt.legend()
    plt.show()
Let's infer the threshold. The inject_outlier_ts method distributes perturbations evenly across features. As a result, each feature contains about 5% outliers. We can either set the threshold over both features combined or determine a feature-wise threshold. Here we opt for the feature-wise threshold. This is for instance useful when different features have different variance or sensitivity to outliers. We also manually decrease the threshold a bit to increase the sensitivity of our detector:
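The difference between a combined and a feature-wise threshold amounts to where the percentile is taken. A small illustration on hypothetical feature-level outlier scores (the detector computes this internally via infer_threshold):

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(size=(1000, 2))  # hypothetical feature-level outlier scores

# feature-wise: one threshold per feature (roughly what threshold_perc=[95, 95] gives)
thr_featurewise = np.percentile(scores, 95, axis=0)
# combined: a single threshold over both features together
thr_combined = np.percentile(scores, 95)
```

With the feature-wise variant, a feature with larger variance gets a correspondingly larger threshold instead of being dominated by the other feature.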
[9]:
od.infer_threshold(X_threshold, threshold_perc=[95, 95])
od.threshold -= .15
print('New threshold: {}'.format(od.threshold))
New threshold: [[0.12968134 0.29728393]]
Let's save the outlier detector with the updated threshold:
[10]:
save_detector(od, filepath)
We can load the same detector via load_detector:
[11]:
od = load_detector(filepath)
Detect outliers
Generate the outliers to detect:
[12]:
np.random.seed(1)
X_out = X_outlier.copy()
data = inject_outlier_ts(X_outlier, perc_outlier=10, perc_window=10, n_std=2., min_std=1.)
X_outlier, y_outlier, labels = data.data, data.target.astype(int), data.target_names
print(X_outlier.shape, y_outlier.shape)
(100000, 2) (100000,)
Predict outliers:
[13]:
od_preds = od.predict(X_outlier,
outlier_type='instance', # use 'feature' or 'instance' level
return_feature_score=True, # scores used to determine outliers
return_instance_score=True)
Display results
F1 score, accuracy, recall and confusion matrix:
[14]:
y_pred = od_preds['data']['is_outlier']
f1 = f1_score(y_outlier, y_pred)
acc = accuracy_score(y_outlier, y_pred)
rec = recall_score(y_outlier, y_pred)
print('F1 score: {:.3f}  Accuracy: {:.3f}  Recall: {:.3f}'.format(f1, acc, rec))
cm = confusion_matrix(y_outlier, y_pred)
df_cm = pd.DataFrame(cm, index=labels, columns=labels)
sns.heatmap(df_cm, annot=True, cbar=True, linewidths=.5)
plt.show()
F1 score: 0.927  Accuracy: 0.986  Recall: 0.901
Plot the feature-wise outlier scores of the time series for each timestep vs. the outlier threshold:
[17]:
plot_feature_outlier_ts(od_preds,
X_outlier,
od.threshold[0],
window=(150, 200),
t=time_samples,
X_orig=X_out)
We can also plot the ROC curve using the instance level outlier scores:
[18]:
roc_data = {'S2S': {'scores': od_preds['data']['instance_score'], 'labels': y_outlier}}
plot_roc(roc_data)
This page was generated from examples/od_seq2seq_ecg.ipynb.
Seq2Seq time series outlier detection on ECG data
Method
The Sequence-to-Sequence (Seq2Seq) outlier detector consists of 2 main building blocks: an encoder and a decoder. The encoder consists of a Bidirectional LSTM which processes the input sequence and initializes the decoder. The LSTM decoder then makes sequential predictions for the output sequence. In our case, the decoder aims to reconstruct the input sequence. If the input data cannot be reconstructed well, the reconstruction error is high and the data can be flagged as an outlier. The reconstruction error is measured as the mean squared error (MSE) between the input and the reconstructed instance.
Since even for normal data the reconstruction error can be state-dependent, we add an outlier threshold estimator network to the Seq2Seq model. This network takes in the hidden state of the decoder at each timestep and predicts the estimated reconstruction error for normal data. As a result, the outlier threshold is not static and becomes a function of the model state. This is similar to Park et al. (2017), but while they train the threshold estimator separately from the Seq2Seq model with a Support Vector Regressor, we train a neural net regression network end-to-end with the Seq2Seq model.
The detector is first trained on a batch of unlabeled, but normal (inlier) data. Unsupervised training is desirable since labeled data is often scarce. The Seq2Seq outlier detector is suitable for both univariate and multivariate time series.
Dataset
The outlier detector needs to spot anomalies in electrocardiograms (ECGs). The dataset contains 5000 ECGs, originally obtained from Physionet under the name BIDMC Congestive Heart Failure Database (chfdb), record chf07. The data has been preprocessed in 2 steps: first each heartbeat is extracted, and then each beat is made equal length via interpolation. The data is labeled and contains 5 classes. The first class, which contains almost 60% of the observations, is seen as normal while the others are outliers. The detector is trained on heartbeats from the first class and needs to flag the other classes as anomalies.
[1]:
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import os
import pandas as pd
import seaborn as sns
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score, precision_score, recall_score
from alibi_detect.od import OutlierSeq2Seq
from alibi_detect.utils.fetching import fetch_detector
from alibi_detect.utils.saving import save_detector, load_detector
from alibi_detect.datasets import fetch_ecg
from alibi_detect.utils.visualize import plot_roc
Load dataset
Flip train and test data because there are only 500 ECGs in the original training set and 4500 in the test set:
[2]:
(X_test, y_test), (X_train, y_train) = fetch_ecg(return_X_y=True)
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
(4500, 140) (4500,)
(500, 140) (500,)
Since we treat the first class as the normal, inlier data and the rest of X_train as outliers, we need to adjust the training (inlier) data and the labels of the test set.
[3]:
inlier_idx = np.where(y_train == 1)[0]
X_inlier, y_inlier = X_train[inlier_idx], np.zeros_like(y_train[inlier_idx])
outlier_idx = np.where(y_train != 1)[0]
X_outlier, y_outlier = X_train[outlier_idx], y_train[outlier_idx]
y_test[y_test == 1] = 0  # class 1 represents the inliers
y_test[y_test != 0] = 1
print(X_inlier.shape, X_outlier.shape)
(2627, 140) (1873, 140)
Some of the outliers in X_train are used in combination with some of the inlier instances to infer the threshold level:
[4]:
n_threshold = 1000
perc_inlier = 60
n_inlier = int(perc_inlier * .01 * n_threshold)
n_outlier = int((100 - perc_inlier) * .01 * n_threshold)
idx_thr_in = np.random.choice(X_inlier.shape[0], n_inlier, replace=False)
idx_thr_out = np.random.choice(X_outlier.shape[0], n_outlier, replace=False)
X_threshold = np.concatenate([X_inlier[idx_thr_in], X_outlier[idx_thr_out]], axis=0)
y_threshold = np.zeros(n_threshold).astype(int)
y_threshold[n_outlier:] = 1
print(X_threshold.shape, y_threshold.shape)
(1000, 140) (1000,)
Apply min-max scaling between 0 and 1 to the observations using the inlier data:
[5]:
xmin, xmax = X_inlier.min(), X_inlier.max()
rng = (0, 1)
X_inlier = ((X_inlier - xmin) / (xmax - xmin)) * (rng[1] - rng[0]) + rng[0]
X_threshold = ((X_threshold - xmin) / (xmax - xmin)) * (rng[1] - rng[0]) + rng[0]
X_test = ((X_test - xmin) / (xmax - xmin)) * (rng[1] - rng[0]) + rng[0]
X_outlier = ((X_outlier - xmin) / (xmax - xmin)) * (rng[1] - rng[0]) + rng[0]
print('Inlier: min {:.2f}  max {:.2f}'.format(X_inlier.min(), X_inlier.max()))
print('Threshold: min {:.2f}  max {:.2f}'.format(X_threshold.min(), X_threshold.max()))
print('Test: min {:.2f}  max {:.2f}'.format(X_test.min(), X_test.max()))
Inlier: min 0.00  max 1.00
Threshold: min 0.00  max 0.99
Test: min 0.11  max 0.92
Reshape the observations to (batch size, sequence length, features) for the detector:
[6]:
shape = (-1, X_inlier.shape[1], 1)
X_inlier = X_inlier.reshape(shape)
X_threshold = X_threshold.reshape(shape)
X_test = X_test.reshape(shape)
X_outlier = X_outlier.reshape(shape)
print(X_inlier.shape, X_threshold.shape, X_test.shape)
(2627, 140, 1) (1000, 140, 1) (500, 140, 1)
We can now visualize scaled instances from each class:
[7]:
idx_plt = [np.where(y_outlier == i)[0][0] for i in list(np.unique(y_outlier))]
X_plt = np.concatenate([X_inlier[0:1], X_outlier[idx_plt]], axis=0)
for i in range(X_plt.shape[0]):
    plt.plot(X_plt[i], label='Class ' + str(i+1))
plt.title('ECGs of Different Classes')
plt.xlabel('Time step')
plt.legend()
plt.show()
Load or define Seq2Seq outlier detector
The pre-trained outlier and adversarial detectors used in the example notebooks can be found here. You can use the built-in fetch_detector function which saves the pre-trained models in a local directory filepath and loads the detector. Alternatively, you can train a detector from scratch:
[8]:
load_outlier_detector = True
[9]:
filepath = 'my_path' # change to (absolute) directory where model is downloaded
if load_outlier_detector:  # load pretrained outlier detector
    detector_type = 'outlier'
    dataset = 'ecg'
    detector_name = 'OutlierSeq2Seq'
    od = fetch_detector(filepath, detector_type, dataset, detector_name)
    filepath = os.path.join(filepath, detector_name)
else:  # define model, initialize, train and save outlier detector
    # initialize outlier detector
    od = OutlierSeq2Seq(1,
                        X_inlier.shape[1],  # sequence length
                        threshold=None,
                        latent_dim=40)
    # train
    od.fit(X_inlier,
           epochs=100,
           verbose=False)
    # save the trained outlier detector
    save_detector(od, filepath)
Let's inspect how well the sequence-to-sequence model can predict the ECGs of the inlier and outlier classes. The predictions in the charts below are made on ECGs from the test set:
[10]:
ecg_pred = od.seq2seq.decode_seq(X_test)[0]
[11]:
i_normal = np.where(y_test == 0)[0][0]
plt.plot(ecg_pred[i_normal], label='Prediction')
plt.plot(X_test[i_normal], label='Original')
plt.title('Predicted vs. Original ECG of Inlier Class 1')
plt.legend()
plt.show()
i_outlier = np.where(y_test == 1)[0][0]
plt.plot(ecg_pred[i_outlier], label='Prediction')
plt.plot(X_test[i_outlier], label='Original')
plt.title('Predicted vs. Original ECG of Outlier')
plt.legend()
plt.show()
It is clear that the model can reconstruct the inlier class but struggles with the outliers.
If we trained a model from scratch, the warning thrown when we initialized the model tells us that we need to set the outlier threshold. This can be done with the infer_threshold method. We need to pass a time series of instances and specify what percentage of those we consider to be normal via threshold_perc, equal to the percentage of Class 1 in X_threshold. The outlier_perc parameter defines the percentage of features used to determine the outlier threshold. In this example, the number of features considered per instance equals 140 (1 for each timestep). We set outlier_perc to 95, which means that we use the 95% of features with the highest reconstruction error, adjusted for by the threshold estimate.
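The effect of outlier_perc can be illustrated with a toy aggregation: keep the outlier_perc% highest feature-level scores per instance and average them. This is an assumption about the aggregation for illustration purposes, not the library's exact code:

```python
import numpy as np

def instance_score(fscore: np.ndarray, outlier_perc: float = 95.) -> np.ndarray:
    """Hypothetical helper: aggregate feature-level scores per instance,
    keeping only the outlier_perc% highest scores."""
    n_keep = int(np.ceil(fscore.shape[1] * outlier_perc / 100))
    top_scores = np.sort(fscore, axis=1)[:, -n_keep:]  # highest scores last
    return top_scores.mean(axis=1)

scores = np.array([[0., 0., 0., 10.],
                   [1., 1., 1., 1.]])
print(instance_score(scores, outlier_perc=50.))  # averages the top half of each row
```

Lowering outlier_perc focuses the instance score on the worst-reconstructed timesteps, making the detector more sensitive to short, localised anomalies.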
[12]:
od.infer_threshold(X_threshold, outlier_perc=95, threshold_perc=perc_inlier)
print('New threshold: {}'.format(od.threshold))
New threshold: 0.002807901981854227
Let's save the outlier detector with the updated threshold:
[13]:
save_detector(od, filepath)
We can load the same detector via load_detector:
[14]:
od = load_detector(filepath)
Detect outliers
[15]:
od_preds = od.predict(X_test,
outlier_type='instance', # use 'feature' or 'instance' level
return_feature_score=True, # scores used to determine outliers
return_instance_score=True)
Display results
F1 score, accuracy, recall and confusion matrix:
[16]:
y_pred = od_preds['data']['is_outlier']
labels = ['normal', 'outlier']
f1 = f1_score(y_test, y_pred)
acc = accuracy_score(y_test, y_pred)
prec = precision_score(y_test, y_pred)
rec = recall_score(y_test, y_pred)
print('F1 score: {:.3f}  Accuracy: {:.3f}  Precision: {:.3f}  Recall: {:.3f}'.format(f1, acc, prec, rec))
cm = confusion_matrix(y_test, y_pred)
df_cm = pd.DataFrame(cm, index=labels, columns=labels)
sns.heatmap(df_cm, annot=True, cbar=True, linewidths=.5)
plt.show()
F1 score: 0.964  Accuracy: 0.970  Precision: 0.975  Recall: 0.952
We can also plot the ROC curve based on the instance level outlier scores:
[17]:
roc_data = {'S2S': {'scores': od_preds['data']['instance_score'], 'labels': y_test}}
plot_roc(roc_data)
This page was generated from examples/ad_ae_cifar10.ipynb.
Adversarial AE detection and correction on CIFAR10
Method
The adversarial detector is based on Adversarial Detection and Correction by Matching Prediction Distributions. Usually, autoencoders are trained to find a transformation \(T\) that reconstructs the input instance \(x\) as accurately as possible, with loss functions suited to capture the similarities between \(x\) and \(x'\) such as the mean squared reconstruction error. The novelty of the adversarial autoencoder (AE) detector relies on the use of a classification model-dependent loss function, based on a distance metric in the output space of the model, to train the autoencoder network. Given a classification model \(M\), we optimise the weights of the autoencoder such that the KL divergence between the model predictions on \(x\) and on \(x'\) is minimised. Without the presence of a reconstruction loss term, \(x'\) simply tries to make sure that the prediction probabilities \(M(x')\) and \(M(x)\) match without caring about the proximity of \(x'\) to \(x\). As a result, \(x'\) is allowed to live in different areas of the input feature space than \(x\), with different decision boundary shapes with respect to the model \(M\). The carefully crafted adversarial perturbation which is effective around \(x\) does not transfer to the new location of \(x'\) in the feature space, and the attack is therefore neutralised. Training of the autoencoder is unsupervised since we only need access to the model prediction probabilities and the normal training instances. We do not require any knowledge about the underlying adversarial attack, and the classifier weights are frozen during training.
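The matching objective above can be sketched as a loss function: the KL divergence between the classifier's predictions on \(x\) and on the reconstruction \(x'\). This is a minimal illustration assuming the classifier outputs probability vectors, not the library's implementation:

```python
import tensorflow as tf

def kld_matching_loss(model, ae, x: tf.Tensor) -> tf.Tensor:
    """Sketch of the model-matching objective used to train the AE."""
    x_recon = ae(x)     # x', the autoencoder reconstruction of x
    p = model(x)        # M(x), assumed to output probability vectors
    q = model(x_recon)  # M(x')
    # KL divergence between the two prediction distributions
    return tf.keras.losses.KLDivergence()(p, q)
```

Minimising this loss with the classifier weights frozen pushes the AE to produce reconstructions on which the model predicts the same distribution as on the original input.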
The detector can be used as follows:
An adversarial score \(S\) is computed. \(S\) equals the KL divergence between the model predictions on \(x\) and \(x'\).
If \(S\) is above a threshold (explicitly defined or inferred from training data), the instance is flagged as adversarial.
For adversarial instances, the model \(M\) uses the reconstructed instance \(x'\) to make a prediction. If the adversarial score is below the threshold, the model makes a prediction on the original instance \(x\).
This procedure is illustrated in the diagram below:
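The correction step in the procedure above can be sketched with a hypothetical helper (not part of alibi-detect):

```python
import numpy as np

def correct_predictions(score: np.ndarray, pred_x: np.ndarray,
                        pred_x_recon: np.ndarray, threshold: float) -> np.ndarray:
    """Sketch of the correction logic: above the threshold, the instance is
    flagged as adversarial and the prediction on the reconstruction x' is
    used; below it, the prediction on the original x is kept."""
    return np.where(score > threshold, pred_x_recon, pred_x)
```

For example, with scores [0.1, 2.0, 0.3] and threshold 1.0, only the second instance's prediction is replaced by the prediction on its reconstruction.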
The method is very flexible and can also be used to detect common data corruptions and perturbations which negatively impact the model performance.
Dataset
CIFAR10 consists of 60,000 32 by 32 RGB images equally distributed over 10 classes.
Note: in order to run this notebook, it is advised to use Python 3.7 and have a GPU enabled.
[1]:
import matplotlib.pyplot as plt
import numpy as np
import os
from sklearn.metrics import roc_curve, auc
import tensorflow as tf
from tensorflow.keras.layers import (Conv2D, Conv2DTranspose, Dense, Flatten,
InputLayer, Reshape)
from tensorflow.keras.regularizers import l1
from alibi_detect.ad import AdversarialAE
from alibi_detect.utils.fetching import fetch_detector, fetch_tf_model
from alibi_detect.utils.prediction import predict_batch
from alibi_detect.utils.saving import save_detector, load_detector
from alibi_detect.datasets import fetch_attack, fetch_cifar10c, corruption_types_cifar10c
ERROR:fbprophet:Importing plotly failed. Interactive plots will not work.
Utility functions
[2]:
def scale_by_instance(X: np.ndarray) -> np.ndarray:
    mean_ = X.mean(axis=(1, 2, 3)).reshape(-1, 1, 1, 1)
    std_ = X.std(axis=(1, 2, 3)).reshape(-1, 1, 1, 1)
    return (X - mean_) / std_, mean_, std_

def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return (y_true == y_pred).astype(int).sum() / y_true.shape[0]
def plot_adversarial(idx: list,
                     X: np.ndarray,
                     y: np.ndarray,
                     X_adv: np.ndarray,
                     y_adv: np.ndarray,
                     mean: np.ndarray,
                     std: np.ndarray,
                     score_x: np.ndarray = None,
                     score_x_adv: np.ndarray = None,
                     X_recon: np.ndarray = None,
                     y_recon: np.ndarray = None,
                     figsize: tuple = (10, 5)) -> None:
    # category map from class numbers to names
    cifar10_map = {0: 'airplane', 1: 'automobile', 2: 'bird', 3: 'cat', 4: 'deer', 5: 'dog',
                   6: 'frog', 7: 'horse', 8: 'ship', 9: 'truck'}
    nrows = len(idx)
    ncols = 3 if isinstance(X_recon, np.ndarray) else 2
    fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=figsize)
    n_subplot = 1
    for i in idx:
        # rescale images in [0, 1]
        X_adj = (X[i] * std[i] + mean[i]) / 255
        X_adv_adj = (X_adv[i] * std[i] + mean[i]) / 255
        if isinstance(X_recon, np.ndarray):
            X_recon_adj = (X_recon[i] * std[i] + mean[i]) / 255
        # original image
        plt.subplot(nrows, ncols, n_subplot)
        plt.axis('off')
        if i == idx[0]:
            if isinstance(score_x, np.ndarray):
                plt.title('CIFAR10 Image \n{}: {:.3f}'.format(cifar10_map[y[i]], score_x[i]))
            else:
                plt.title('CIFAR10 Image \n{}'.format(cifar10_map[y[i]]))
        else:
            if isinstance(score_x, np.ndarray):
                plt.title('{}: {:.3f}'.format(cifar10_map[y[i]], score_x[i]))
            else:
                plt.title('{}'.format(cifar10_map[y[i]]))
        plt.imshow(X_adj)
        n_subplot += 1
        # adversarial image
        plt.subplot(nrows, ncols, n_subplot)
        plt.axis('off')
        if i == idx[0]:
            if isinstance(score_x_adv, np.ndarray):
                plt.title('Adversarial \n{}: {:.3f}'.format(cifar10_map[y_adv[i]], score_x_adv[i]))
            else:
                plt.title('Adversarial \n{}'.format(cifar10_map[y_adv[i]]))
        else:
            if isinstance(score_x_adv, np.ndarray):
                plt.title('{}: {:.3f}'.format(cifar10_map[y_adv[i]], score_x_adv[i]))
            else:
                plt.title('{}'.format(cifar10_map[y_adv[i]]))
        plt.imshow(X_adv_adj)
        n_subplot += 1
        # reconstructed image
        if isinstance(X_recon, np.ndarray):
            plt.subplot(nrows, ncols, n_subplot)
            plt.axis('off')
            if i == idx[0]:
                plt.title('AE Reconstruction \n{}'.format(cifar10_map[y_recon[i]]))
            else:
                plt.title('{}'.format(cifar10_map[y_recon[i]]))
            plt.imshow(X_recon_adj)
            n_subplot += 1
    plt.show()
def plot_roc(roc_data: dict, figsize: tuple = (10, 5)):
    plot_labels = []
    scores_attacks = []
    labels_attacks = []
    for k, v in roc_data.items():
        if 'original' in k:
            continue
        score_x = roc_data[v['normal']]['scores']
        y_pred = roc_data[v['normal']]['predictions']
        score_v = v['scores']
        y_pred_v = v['predictions']
        labels_v = np.ones(score_x.shape[0])
        idx_remove = np.where(y_pred == y_pred_v)[0]
        labels_v = np.delete(labels_v, idx_remove)
        score_v = np.delete(score_v, idx_remove)
        scores = np.concatenate([score_x, score_v])
        labels = np.concatenate([np.zeros(y_pred.shape[0]), labels_v]).astype(int)
        scores_attacks.append(scores)
        labels_attacks.append(labels)
        plot_labels.append(k)
    for sc_att, la_att, plt_la in zip(scores_attacks, labels_attacks, plot_labels):
        fpr, tpr, thresholds = roc_curve(la_att, sc_att)
        roc_auc = auc(fpr, tpr)
        plt.plot(fpr, tpr, lw=1, label='{}: AUC={:.4f}'.format(plt_la, roc_auc))
    plt.plot([0, 1], [0, 1], color='black', lw=1, linestyle='--')
    plt.xlim([0.0, 1.0])
    plt.ylim([0.0, 1.05])
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.title('ROC curve')
    plt.legend(loc="lower right", ncol=1)
    plt.grid()
    plt.show()
Load data
[3]:
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.cifar10.load_data()
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
y_train = y_train.astype('int64').reshape(-1,)
y_test = y_test.astype('int64').reshape(-1,)
Standardise the dataset by instance:
[4]:
X_train, mean_train, std_train = scale_by_instance(X_train)
X_test, mean_test, std_test = scale_by_instance(X_test)
scale = (mean_train, std_train), (mean_test, std_test)
Load classifier
[5]:
dataset = 'cifar10'
model = 'resnet56'
clf = fetch_tf_model(dataset, model)
Check that the predictions on the test set reach \(93.15\)% accuracy:
[6]:
y_pred = predict_batch(clf, X_test, batch_size=32)
acc_y_pred = accuracy(y_test, y_pred)
print('Accuracy: {:.4f}'.format(acc_y_pred))
Accuracy: 0.9315
Adversarial Attack
We investigate both Carlini-Wagner (C&W) and SLIDE attacks. You can simply load previously found adversarial instances on the pretrained ResNet-56 model. The attacks are generated using Foolbox:
[7]:
# C&W attack
data_cw = fetch_attack(dataset, model, 'cw')
X_train_cw, X_test_cw = data_cw['data_train'], data_cw['data_test']
meta_cw = data_cw['meta'] # metadata with hyperparameters of the attack
# SLIDE attack
data_slide = fetch_attack(dataset, model, 'slide')
X_train_slide, X_test_slide = data_slide['data_train'], data_slide['data_test']
meta_slide = data_slide['meta']
[8]:
print(X_test_cw.shape, X_test_slide.shape)
(10000, 32, 32, 3) (10000, 32, 32, 3)
Check if the prediction accuracy of the model on the adversarial instances is close to \(0\)%.
[9]:
y_pred_cw = predict_batch(clf, X_test_cw, batch_size=32)
y_pred_slide = predict_batch(clf, X_test_slide, batch_size=32)
[10]:
acc_y_pred_cw = accuracy(y_test, y_pred_cw)
acc_y_pred_slide = accuracy(y_test, y_pred_slide)
print('Accuracy: cw {:.4f}  SLIDE {:.4f}'.format(acc_y_pred_cw, acc_y_pred_slide))
Accuracy: cw 0.0000  SLIDE 0.0002
Let's visualise some adversarial instances:
[11]:
idx = [3, 4]
print('C&W attack...')
plot_adversarial(idx, X_test, y_pred, X_test_cw, y_pred_cw,
mean_test, std_test, figsize=(10, 10))
print('SLIDE attack...')
plot_adversarial(idx, X_test, y_pred, X_test_slide, y_pred_slide,
mean_test, std_test, figsize=(10, 10))
C&W attack...
SLIDE attack...
Load or train and evaluate the adversarial detectors
We can again either fetch the pre-trained detector from a Google Cloud bucket or train one from scratch:
[12]:
load_pretrained = True
[13]:
filepath = 'my_path' # change to (absolute) directory where model is downloaded
if load_pretrained:
    detector_type = 'adversarial'
    detector_name = 'base'
    ad = fetch_detector(filepath, detector_type, dataset, detector_name, model=model)
    filepath = os.path.join(filepath, detector_name)
else:  # train detector from scratch
    # define encoder and decoder networks
    encoder_net = tf.keras.Sequential(
        [
            InputLayer(input_shape=(32, 32, 3)),
            Conv2D(32, 4, strides=2, padding='same',
                   activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
            Conv2D(64, 4, strides=2, padding='same',
                   activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
            Conv2D(256, 4, strides=2, padding='same',
                   activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
            Flatten(),
            Dense(40)
        ]
    )
    decoder_net = tf.keras.Sequential(
        [
            InputLayer(input_shape=(40,)),
            Dense(4 * 4 * 128, activation=tf.nn.relu),
            Reshape(target_shape=(4, 4, 128)),
            Conv2DTranspose(256, 4, strides=2, padding='same',
                            activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
            Conv2DTranspose(64, 4, strides=2, padding='same',
                            activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
            Conv2DTranspose(3, 4, strides=2, padding='same',
                            activation=None, kernel_regularizer=l1(1e-5))
        ]
    )
    # initialise and train detector
    ad = AdversarialAE(
        encoder_net=encoder_net,
        decoder_net=decoder_net,
        model=clf
    )
    ad.fit(X_train, epochs=40, batch_size=64, verbose=True)
    # save the trained adversarial detector
    save_detector(ad, filepath)
WARNING:tensorflow:No training configuration found in save file: the model was *not* compiled. Compile it manually.
WARNING:alibi_detect.ad.adversarialae:No threshold level set. Need to infer threshold using `infer_threshold`.
The detector first reconstructs the input instances, which can be adversarial. The reconstructed input is then fed to the classifier if the adversarial score for the instance is above the threshold. Let's investigate what happens when we reconstruct attacked instances and make predictions on them:
[14]:
X_recon_cw = predict_batch(ad.ae, X_test_cw, batch_size=32)
X_recon_slide = predict_batch(ad.ae, X_test_slide, batch_size=32)
[15]:
y_recon_cw = predict_batch(clf, X_recon_cw, batch_size=32)
y_recon_slide = predict_batch(clf, X_recon_slide, batch_size=32)
Accuracy on attacked vs. reconstructed instances:
[16]:
acc_y_recon_cw = accuracy(y_test, y_recon_cw)
acc_y_recon_slide = accuracy(y_test, y_recon_slide)
print('Accuracy after C&W attack {:.4f}  reconstruction {:.4f}'.format(acc_y_pred_cw, acc_y_recon_cw))
print('Accuracy after SLIDE attack {:.4f}  reconstruction {:.4f}'.format(acc_y_pred_slide, acc_y_recon_slide))
Accuracy after C&W attack 0.0000  reconstruction 0.8048
Accuracy after SLIDE attack 0.0002  reconstruction 0.8159
The detector restores the accuracy after the attacks from almost 0% to well over 80%! We can compute the adversarial scores and inspect some of the reconstructed instances:
[17]:
score_x = ad.score(X_test, batch_size=32)
score_cw = ad.score(X_test_cw, batch_size=32)
score_slide = ad.score(X_test_slide, batch_size=32)
[18]:
print('C&W attack...')
idx = [10, 13, 14, 16, 17]
plot_adversarial(idx, X_test, y_pred, X_test_cw, y_pred_cw, mean_test, std_test,
                 score_x=score_x, score_x_adv=score_cw, X_recon=X_recon_cw,
                 y_recon=y_recon_cw, figsize=(10, 15))
print('SLIDE attack...')
idx = [23, 25, 27, 29, 34]
plot_adversarial(idx, X_test, y_pred, X_test_slide, y_pred_slide, mean_test, std_test,
                 score_x=score_x, score_x_adv=score_slide, X_recon=X_recon_slide,
                 y_recon=y_recon_slide, figsize=(10, 15))
C&W attack...
WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
SLIDE attack...
WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
The ROC curves and AUC values show the effectiveness of the adversarial score to detect adversarial instances:
[19]:
roc_data = {
    'original': {'scores': score_x, 'predictions': y_pred},
    'C&W': {'scores': score_cw, 'predictions': y_pred_cw, 'normal': 'original'},
    'SLIDE': {'scores': score_slide, 'predictions': y_pred_slide, 'normal': 'original'}
}
plot_roc(roc_data)
The threshold for the adversarial score can be set via infer_threshold. We need to pass a batch of instances X and specify what percentage of those we consider to be normal via threshold_perc. Assume we only have normal instances, some of which the model has misclassified. Misclassified instances can receive a higher adversarial score, either because the reconstruction picked up features from the correct class or because they look adversarial in the first place. As a result, we set our threshold at 95%:
[20]:
ad.infer_threshold(X_test, threshold_perc=95, margin=0., batch_size=32)
print('Adversarial threshold: {:.4f}'.format(ad.threshold))
Adversarial threshold: 2.6722
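In essence, threshold_perc picks a percentile of the score distribution on the reference batch. A minimal NumPy sketch of the idea, using made-up scores rather than the library internals:

```python
import numpy as np

# Hypothetical adversarial scores on a batch assumed to be mostly normal
scores = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 10.0])

# threshold_perc=95 treats the lowest 95% of scores as normal;
# a margin could then be added on top of this value
threshold = np.percentile(scores, 95)
is_adversarial = scores > threshold
```

Only the clear outlier score (10.0) ends up above the 95th percentile here.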
Let's save the updated detector:
[21]:
save_detector(ad, filepath)
We can also load it easily as follows:
[22]:
ad = load_detector(filepath)
The correct method of the detector executes the diagram in Figure 1. First the adversarial score is computed. For instances where the score is above the threshold, the classifier prediction on the reconstructed instance is returned. Otherwise the original prediction is kept. The method returns a dictionary containing the metadata of the detector, whether the instances in the batch are adversarial (above the threshold) or not, the classifier predictions using the correction mechanism, and both the original and reconstructed predictions. Let's illustrate this on a batch containing some adversarial (C&W) and original test set instances:
[23]:
n_test = X_test.shape[0]
np.random.seed(0)
idx_normal = np.random.choice(n_test, size=1600, replace=False)
idx_cw = np.random.choice(n_test, size=400, replace=False)
X_mix = np.concatenate([X_test[idx_normal], X_test_cw[idx_cw]])
y_mix = np.concatenate([y_test[idx_normal], y_test[idx_cw]])
print(X_mix.shape, y_mix.shape)
(2000, 32, 32, 3) (2000,)
Let's check the model performance:
[24]:
y_pred_mix = predict_batch(clf, X_mix, batch_size=32)
acc_y_pred_mix = accuracy(y_mix, y_pred_mix)
print('Accuracy {:.4f}'.format(acc_y_pred_mix))
Accuracy 0.7380
This can be improved with the correction mechanism:
[25]:
preds = ad.correct(X_mix, batch_size=32)
acc_y_corr_mix = accuracy(y_mix, preds['data']['corrected'])
print('Accuracy {:.4f}'.format(acc_y_corr_mix))
Accuracy 0.8205
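The decision rule inside the correction mechanism can be sketched with NumPy. This is a minimal illustration with made-up arrays, not the library implementation:

```python
import numpy as np

# Hypothetical per-instance adversarial scores and a fixed threshold
scores = np.array([0.2, 3.1, 0.5, 4.0])
threshold = 1.0

# Class predictions on the original and on the reconstructed instances
y_orig = np.array([3, 5, 1, 7])
y_recon = np.array([3, 2, 1, 0])

# Above the threshold: trust the prediction on the reconstruction;
# otherwise keep the original prediction
is_adv = scores > threshold
corrected = np.where(is_adv, y_recon, y_orig)
print(corrected)  # [3 2 1 0]
```

The first and third instances keep their original predictions; the two flagged ones take the predictions on their reconstructions.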
Temperature Scaling¶
We can further improve the correction performance by applying temperature scaling on the original model predictions M(x) during both training and inference when computing the adversarial scores. We can again load a pretrained detector or train one from scratch:
[26]:
load_pretrained = True
[27]:
filepath = 'my_path'  # change to (absolute) directory where model is downloaded
if load_pretrained:
    detector_name = 'temperature'
    ad_t = fetch_detector(filepath, detector_type, dataset, detector_name, model=model)
    filepath = os.path.join(filepath, detector_name)
else:  # train detector from scratch
    # define encoder and decoder networks
    encoder_net = tf.keras.Sequential(
        [
            InputLayer(input_shape=(32, 32, 3)),
            Conv2D(32, 4, strides=2, padding='same',
                   activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
            Conv2D(64, 4, strides=2, padding='same',
                   activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
            Conv2D(256, 4, strides=2, padding='same',
                   activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
            Flatten(),
            Dense(40)
        ]
    )
    decoder_net = tf.keras.Sequential(
        [
            InputLayer(input_shape=(40,)),
            Dense(4 * 4 * 128, activation=tf.nn.relu),
            Reshape(target_shape=(4, 4, 128)),
            Conv2DTranspose(256, 4, strides=2, padding='same',
                            activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
            Conv2DTranspose(64, 4, strides=2, padding='same',
                            activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
            Conv2DTranspose(3, 4, strides=2, padding='same',
                            activation=None, kernel_regularizer=l1(1e-5))
        ]
    )
    # initialise and train detector
    ad_t = AdversarialAE(
        encoder_net=encoder_net,
        decoder_net=decoder_net,
        model=clf,
        temperature=0.5
    )
    ad_t.fit(X_train, epochs=40, batch_size=64, verbose=True)
    # save the trained adversarial detector
    save_detector(ad_t, filepath)
WARNING:tensorflow:No training configuration found in save file: the model was *not* compiled. Compile it manually.
WARNING:alibi_detect.ad.adversarialae:No threshold level set. Need to infer threshold using `infer_threshold`.
[28]:
# reconstructed adversarial instances
X_recon_cw_t = predict_batch(ad_t.ae, X_test_cw, batch_size=32)
X_recon_slide_t = predict_batch(ad_t.ae, X_test_slide, batch_size=32)
# make predictions on reconstructed instances and compute accuracy
y_recon_cw_t = predict_batch(clf, X_recon_cw_t, batch_size=32)
y_recon_slide_t = predict_batch(clf, X_recon_slide_t, batch_size=32)
acc_y_recon_cw_t = accuracy(y_test, y_recon_cw_t)
acc_y_recon_slide_t = accuracy(y_test, y_recon_slide_t)
print('Accuracy after C&W attack {:.4f}  reconstruction {:.4f}'.format(acc_y_pred_cw, acc_y_recon_cw_t))
print('Accuracy after SLIDE attack {:.4f}  reconstruction {:.4f}'.format(acc_y_pred_slide,
                                                                         acc_y_recon_slide_t))
Accuracy after C&W attack 0.0000  reconstruction 0.8141
Accuracy after SLIDE attack 0.0002  reconstruction 0.8265
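Temperature scaling itself is a small transformation of the model output. One common formulation divides the logits by the temperature T before the softmax; the sketch below uses hypothetical logits to show the sharpening effect of T < 1 and is not the detector's internal implementation:

```python
import numpy as np

def softmax_with_temperature(logits, T=1.0):
    # T < 1 sharpens the probability distribution, T > 1 flattens it
    z = logits / T
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.5])
p = softmax_with_temperature(logits, T=1.0)
p_sharp = softmax_with_temperature(logits, T=0.5)
```

With T=0.5 the probability mass concentrates further on the highest logit, which is the sharpening behaviour the detector exploits.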
Applying temperature scaling to CIFAR10 improves the ROC curve and AUC values.
[29]:
score_x_t = ad_t.score(X_test, batch_size=32)
score_cw_t = ad_t.score(X_test_cw, batch_size=32)
score_slide_t = ad_t.score(X_test_slide, batch_size=32)
[30]:
roc_data['original_t'] = {'scores': score_x_t, 'predictions': y_pred}
roc_data['C&W T=0.5'] = {'scores': score_cw_t, 'predictions': y_pred_cw, 'normal': 'original_t'}
roc_data['SLIDE T=0.5'] = {'scores': score_slide_t, 'predictions': y_pred_slide, 'normal': 'original_t'}
plot_roc(roc_data)
Hidden Layer KL Divergence¶
The performance of the correction mechanism can also be improved by extending the training methodology to one of the hidden layers of the classification model. We extract a flattened feature map from the hidden layer, feed it into a linear layer and apply the softmax function. The KL divergence between predictions on the hidden layer for x and x' is optimised and included in the adversarial score during inference:
[31]:
load_pretrained = True
[32]:
filepath = 'my_path'  # change to (absolute) directory where model is downloaded
if load_pretrained:
    detector_name = 'hiddenkld'
    ad_hl = fetch_detector(filepath, detector_type, dataset, detector_name, model=model)
    filepath = os.path.join(filepath, detector_name)
else:  # train detector from scratch
    # define encoder and decoder networks
    encoder_net = tf.keras.Sequential(
        [
            InputLayer(input_shape=(32, 32, 3)),
            Conv2D(32, 4, strides=2, padding='same',
                   activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
            Conv2D(64, 4, strides=2, padding='same',
                   activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
            Conv2D(256, 4, strides=2, padding='same',
                   activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
            Flatten(),
            Dense(40)
        ]
    )
    decoder_net = tf.keras.Sequential(
        [
            InputLayer(input_shape=(40,)),
            Dense(4 * 4 * 128, activation=tf.nn.relu),
            Reshape(target_shape=(4, 4, 128)),
            Conv2DTranspose(256, 4, strides=2, padding='same',
                            activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
            Conv2DTranspose(64, 4, strides=2, padding='same',
                            activation=tf.nn.relu, kernel_regularizer=l1(1e-5)),
            Conv2DTranspose(3, 4, strides=2, padding='same',
                            activation=None, kernel_regularizer=l1(1e-5))
        ]
    )
    # initialise and train detector
    ad_hl = AdversarialAE(
        encoder_net=encoder_net,
        decoder_net=decoder_net,
        model=clf,
        hidden_layer_kld={200: 20},  # extract feature map from hidden layer 200; predict softmax with output dim 20
        temperature=1
    )
    ad_hl.fit(X_train, epochs=40, batch_size=64, verbose=True)
    # save the trained adversarial detector
    save_detector(ad_hl, filepath)
WARNING:tensorflow:No training configuration found in save file: the model was *not* compiled. Compile it manually.
WARNING:alibi_detect.ad.adversarialae:No threshold level set. Need to infer threshold using `infer_threshold`.
[33]:
# reconstructed adversarial instances
X_recon_cw_hl = predict_batch(ad_hl.ae, X_test_cw, batch_size=32)
X_recon_slide_hl = predict_batch(ad_hl.ae, X_test_slide, batch_size=32)
# make predictions on reconstructed instances and compute accuracy
y_recon_cw_hl = predict_batch(clf, X_recon_cw_hl, batch_size=32)
y_recon_slide_hl = predict_batch(clf, X_recon_slide_hl, batch_size=32)
acc_y_recon_cw_hl = accuracy(y_test, y_recon_cw_hl)
acc_y_recon_slide_hl = accuracy(y_test, y_recon_slide_hl)
print('Accuracy after C&W attack {:.4f}  reconstruction {:.4f}'.format(acc_y_pred_cw, acc_y_recon_cw_hl))
print('Accuracy after SLIDE attack {:.4f}  reconstruction {:.4f}'.format(acc_y_pred_slide,
                                                                         acc_y_recon_slide_hl))
Accuracy after C&W attack 0.0000  reconstruction 0.8153
Accuracy after SLIDE attack 0.0002  reconstruction 0.8318
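The KL divergence term on the hidden-layer softmax outputs can be sketched in NumPy. The distributions below are hypothetical, and this is an illustration of the quantity being optimised rather than the library implementation:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) between two discrete distributions, clipped for stability
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.7, 0.2, 0.1])  # hidden-layer softmax for the original x
q = np.array([0.6, 0.3, 0.1])  # hidden-layer softmax for the reconstruction x'
d = kl_divergence(p, q)        # small positive value; 0 only when p == q
```

Minimising this divergence pushes the reconstruction's hidden-layer predictions towards those of the original instance.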
Malicious Data Drift¶
The adversarial detector proves to be very flexible and can be used to measure the harmfulness of data drift to the classifier. We evaluate the detector on the CIFAR10C dataset (Hendrycks & Dietterich, 2019). The instances in CIFAR10C have been corrupted and perturbed by various types of noise, blur, brightness etc. at different levels of severity, leading to a gradual decline in model performance.
We can select from the following corruption types:
[34]:
corruptions = corruption_types_cifar10c()
print(corruptions)
['brightness', 'contrast', 'defocus_blur', 'elastic_transform', 'fog', 'frost', 'gaussian_blur', 'gaussian_noise', 'glass_blur', 'impulse_noise', 'jpeg_compression', 'motion_blur', 'pixelate', 'saturate', 'shot_noise', 'snow', 'spatter', 'speckle_noise', 'zoom_blur']
We fetch the CIFAR10C data for a list of corruptions at each severity level (from 1 to 5), make classifier predictions on the corrupted data, compute adversarial scores and identify which perturbations were malicious or harmful and which weren't. We can then store and visualise the adversarial scores for the harmful and harmless corruptions. The score for the harmful perturbations is significantly higher than for the harmless ones. As a result, the adversarial detector also functions as a data drift detector.
[35]:
severities = [1, 2, 3, 4, 5]
score_drift = {
    1: {'all': [], 'harm': [], 'noharm': [], 'acc': 0},
    2: {'all': [], 'harm': [], 'noharm': [], 'acc': 0},
    3: {'all': [], 'harm': [], 'noharm': [], 'acc': 0},
    4: {'all': [], 'harm': [], 'noharm': [], 'acc': 0},
    5: {'all': [], 'harm': [], 'noharm': [], 'acc': 0},
}
for s in severities:
    print('\nSeverity: {} of {}'.format(s, len(severities)))
    print('Loading corrupted dataset...')
    X_corr, y_corr = fetch_cifar10c(corruption=corruptions, severity=s, return_X_y=True)
    X_corr = X_corr.astype('float32')
    print('Preprocess data...')
    X_corr, mean_test, std_test = scale_by_instance(X_corr)
    print('Make predictions on corrupted dataset...')
    y_pred_corr = predict_batch(clf, X_corr, batch_size=32)
    print('Compute adversarial scores on corrupted dataset...')
    score_corr = ad_t.score(X_corr, batch_size=32)
    scores = np.concatenate([score_x_t, score_corr])
    print('Get labels for malicious corruptions...')
    labels_corr = np.zeros(score_corr.shape[0])
    repeat = y_corr.shape[0] // y_test.shape[0]
    y_pred_repeat = np.tile(y_pred, (repeat,))
    # malicious/harmful corruption: original prediction correct but
    # prediction on corrupted data incorrect
    idx_orig_right = np.where(y_pred_repeat == y_corr)[0]
    idx_corr_wrong = np.where(y_pred_corr != y_corr)[0]
    idx_harmful = np.intersect1d(idx_orig_right, idx_corr_wrong)
    labels_corr[idx_harmful] = 1
    labels = np.concatenate([np.zeros(X_test.shape[0]), labels_corr]).astype(int)
    # harmless corruption: original prediction correct and prediction
    # on corrupted data correct
    idx_corr_right = np.where(y_pred_corr == y_corr)[0]
    idx_harmless = np.intersect1d(idx_orig_right, idx_corr_right)
    score_drift[s]['all'] = score_corr
    score_drift[s]['harm'] = score_corr[idx_harmful]
    score_drift[s]['noharm'] = score_corr[idx_harmless]
    score_drift[s]['acc'] = accuracy(y_corr, y_pred_corr)
Severity: 1 of 5
Loading corrupted dataset...
Preprocess data...
Make predictions on corrupted dataset...
Compute adversarial scores on corrupted dataset...
Get labels for malicious corruptions...
Severity: 2 of 5
Loading corrupted dataset...
Preprocess data...
Make predictions on corrupted dataset...
Compute adversarial scores on corrupted dataset...
Get labels for malicious corruptions...
Severity: 3 of 5
Loading corrupted dataset...
Preprocess data...
Make predictions on corrupted dataset...
Compute adversarial scores on corrupted dataset...
Get labels for malicious corruptions...
Severity: 4 of 5
Loading corrupted dataset...
Preprocess data...
Make predictions on corrupted dataset...
Compute adversarial scores on corrupted dataset...
Get labels for malicious corruptions...
Severity: 5 of 5
Loading corrupted dataset...
Preprocess data...
Make predictions on corrupted dataset...
Compute adversarial scores on corrupted dataset...
Get labels for malicious corruptions...
Compute mean scores and standard deviation per severity level and plot:
[36]:
mu_noharm, std_noharm = [], []
mu_harm, std_harm = [], []
acc = [acc_y_pred]
for k, v in score_drift.items():
    mu_noharm.append(v['noharm'].mean())
    std_noharm.append(v['noharm'].std())
    mu_harm.append(v['harm'].mean())
    std_harm.append(v['harm'].std())
    acc.append(v['acc'])
[37]:
plot_labels = ['0', '1', '2', '3', '4', '5']
N = 6
ind = np.arange(N)
width = .35
fig_bar_cd, ax = plt.subplots()
ax2 = ax.twinx()
p0 = ax.bar(ind[0], score_x_t.mean(), yerr=score_x_t.std(), capsize=2)
p1 = ax.bar(ind[1:], mu_noharm, width, yerr=std_noharm, capsize=2)
p2 = ax.bar(ind[1:] + width, mu_harm, width, yerr=std_harm, capsize=2)
ax.set_title('Adversarial Scores and Accuracy by Corruption Severity')
ax.set_xticks(ind + width / 2)
ax.set_xticklabels(plot_labels)
ax.set_ylim((1,6))
ax.legend((p1[0], p2[0]), ('Not Harmful', 'Harmful'), loc='upper right', ncol=2)
ax.set_ylabel('Score')
ax.set_xlabel('Corruption Severity')
color = 'tab:red'
ax2.set_ylabel('Accuracy', color=color)
ax2.plot(acc, color=color)
ax2.tick_params(axis='y', labelcolor=color)
plt.show()
alibi_detect¶
alibi_detect package¶
Subpackages¶
alibi_detect.ad package¶

class alibi_detect.ad.AdversarialAE(threshold=None, ae=None, model=None, encoder_net=None, decoder_net=None, model_hl=None, hidden_layer_kld=None, w_model_hl=None, temperature=1.0, data_type=None)[source]¶
Bases: alibi_detect.base.BaseDetector, alibi_detect.base.FitMixin, alibi_detect.base.ThresholdMixin

__init__(threshold=None, ae=None, model=None, encoder_net=None, decoder_net=None, model_hl=None, hidden_layer_kld=None, w_model_hl=None, temperature=1.0, data_type=None)[source]¶
Autoencoder (AE) based adversarial detector.

Parameters
threshold (Optional[float]) – Threshold used for the adversarial score to determine adversarial instances.
ae (Optional[tensorflow.keras.Model]) – A trained tf.keras autoencoder model if available.
model (Optional[tensorflow.keras.Model]) – A trained tf.keras classification model.
encoder_net (Optional[tensorflow.keras.Sequential]) – Layers for the encoder wrapped in a tf.keras.Sequential class if no 'ae' is specified.
decoder_net (Optional[tensorflow.keras.Sequential]) – Layers for the decoder wrapped in a tf.keras.Sequential class if no 'ae' is specified.
model_hl (Optional[List[tensorflow.keras.Model]]) – List with tf.keras models for the hidden layer KL divergence computation.
hidden_layer_kld (Optional[dict]) – Dictionary with as keys the hidden layer(s) of the model which are extracted and used during training of the AE, and as values the output dimension for the hidden layer.
w_model_hl (Optional[list]) – Weights assigned to the loss of each model in model_hl.
temperature (float) – Temperature used for model prediction scaling. Temperature < 1 sharpens the prediction probability distribution.
data_type (Optional[str]) – Optionally specify the data type (tabular, image or time series). Added to metadata.

Return type
None

correct(X, batch_size=10000000000, return_instance_score=True, return_all_predictions=True)[source]¶
Correct adversarial instances if the adversarial score is above the threshold.

Returns
Dict with corrected predictions and information whether an instance is adversarial or not.

fit(X, loss_fn=<function loss_adv_ae>, w_model=1.0, w_recon=0.0, optimizer=tensorflow.keras.optimizers.Adam, epochs=20, batch_size=128, verbose=True, log_metric=None, callbacks=None, preprocess_fn=None)[source]¶
Train Adversarial AE model.

Parameters
X (numpy.ndarray) – Training batch.
loss_fn (tensorflow.keras.losses) – Loss function used for training.
w_model (float) – Weight on model prediction loss term.
w_recon (float) – Weight on MSE reconstruction error loss term.
optimizer (tensorflow.keras.optimizers) – Optimizer used for training.
epochs (int) – Number of training epochs.
batch_size (int) – Batch size used for training.
verbose (bool) – Whether to print training progress.
log_metric (Optional[Tuple[str, tensorflow.keras.metrics]]) – Additional metrics whose progress will be displayed if verbose equals True.
callbacks (Optional[tensorflow.keras.callbacks]) – Callbacks used during training.
preprocess_fn (Optional[Callable]) – Preprocessing function applied to each training batch.

Return type
None

infer_threshold(X, threshold_perc=99.0, margin=0.0, batch_size=10000000000)[source]¶
Update threshold by a value inferred from the percentage of instances considered to be adversarial in a sample of the dataset.

Parameters
X (numpy.ndarray) – Batch of instances.
threshold_perc (float) – Percentage of X considered to be normal based on the adversarial score.
margin (float) – Add margin to threshold. Useful if adversarial instances have significantly higher scores and there is no adversarial instance in X.
batch_size (int) – Batch size used when computing scores.

Return type
None

predict(X, batch_size=10000000000, return_instance_score=True)[source]¶
Predict whether instances are adversarial instances or not.

Returns
Dictionary containing 'meta' and 'data' dictionaries.
'meta' has the model's metadata.
'data' contains the adversarial predictions and instance level adversarial scores.
Submodules¶
alibi_detect.ad.adversarialae module¶



class alibi_detect.ad.adversarialae.DenseHidden(model, hidden_layer, output_dim, hidden_dim=None)[source]¶
Bases: tensorflow.keras.Model
alibi_detect.cd package¶
alibi_detect.metrics package¶
Submodules¶
alibi_detect.metrics.data module¶
alibi_detect.metrics.utils module¶

class alibi_detect.metrics.utils.NumpyEncoder(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=False, indent=None, separators=None, default=None)[source]¶
Bases: json.encoder.JSONEncoder

default(obj)[source]¶
Implement this method in a subclass such that it returns a serializable object for o, or calls the base implementation (to raise a TypeError).

For example, to support arbitrary iterators, you could implement default like this:

def default(self, o):
    try:
        iterable = iter(o)
    except TypeError:
        pass
    else:
        return list(iterable)
    # Let the base class default method raise the TypeError
    return JSONEncoder.default(self, o)
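The same pattern can be used to serialize NumPy types. Below is a minimal encoder in that style; ArrayEncoder is a hypothetical illustration, not the NumpyEncoder source:

```python
import json
import numpy as np

class ArrayEncoder(json.JSONEncoder):
    # Hypothetical minimal encoder: convert NumPy types to built-ins,
    # then fall back to the base implementation
    def default(self, o):
        if isinstance(o, np.ndarray):
            return o.tolist()
        if isinstance(o, (np.integer, np.floating)):
            return o.item()
        return json.JSONEncoder.default(self, o)

s = json.dumps({'scores': np.array([0.5, 2.5])}, cls=ArrayEncoder)
```

Without the custom default, json.dumps would raise a TypeError on the NumPy array.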

alibi_detect.models package¶
Submodules¶
alibi_detect.models.autoencoder module¶

class alibi_detect.models.autoencoder.AE(encoder_net, decoder_net, name='ae')[source]¶
Bases: tensorflow.keras.Model

__init__(encoder_net, decoder_net, name='ae')[source]¶
Combine encoder and decoder in AE.

Parameters
encoder_net (tensorflow.keras.Sequential) – Layers for the encoder wrapped in a tf.keras.Sequential class.
decoder_net (tensorflow.keras.Sequential) – Layers for the decoder wrapped in a tf.keras.Sequential class.
name (str) – Name of autoencoder model.

Return type
None


class alibi_detect.models.autoencoder.AEGMM(encoder_net, decoder_net, gmm_density_net, n_gmm, recon_features=<function eucl_cosim_features>, name='aegmm')[source]¶
Bases: tensorflow.keras.Model

__init__(encoder_net, decoder_net, gmm_density_net, n_gmm, recon_features=<function eucl_cosim_features>, name='aegmm')[source]¶
Deep Autoencoding Gaussian Mixture Model.

Parameters
encoder_net (tensorflow.keras.Sequential) – Layers for the encoder wrapped in a tf.keras.Sequential class.
decoder_net (tensorflow.keras.Sequential) – Layers for the decoder wrapped in a tf.keras.Sequential class.
gmm_density_net (tensorflow.keras.Sequential) – Layers for the GMM network wrapped in a tf.keras.Sequential class.
n_gmm (int) – Number of components in GMM.
recon_features (Callable) – Function to extract features from the reconstructed instance by the decoder.
name (str) – Name of the AEGMM model.

Return type
None


class alibi_detect.models.autoencoder.Decoder(decoder_net, name='decoder')[source]¶
Bases: tensorflow.keras.layers.Layer

class alibi_detect.models.autoencoder.DecoderLSTM(latent_dim, output_dim, output_activation=None, name='decoder_lstm')[source]¶
Bases: tensorflow.keras.layers.Layer

class alibi_detect.models.autoencoder.EncoderAE(encoder_net, name='encoder_ae')[source]¶
Bases: tensorflow.keras.layers.Layer

class alibi_detect.models.autoencoder.EncoderLSTM(latent_dim, name='encoder_lstm')[source]¶
Bases: tensorflow.keras.layers.Layer

class alibi_detect.models.autoencoder.EncoderVAE(encoder_net, latent_dim, name='encoder_vae')[source]¶
Bases: tensorflow.keras.layers.Layer

class alibi_detect.models.autoencoder.Sampling(*args, **kwargs)[source]¶
Bases: tensorflow.keras.layers.Layer
Reparametrization trick. Uses (z_mean, z_log_var) to sample the latent vector z.
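The reparametrization trick can be sketched in NumPy with hypothetical mean and log-variance vectors; the library layer does the equivalent with TensorFlow ops so that gradients flow through z_mean and z_log_var:

```python
import numpy as np

def sample_latent(z_mean, z_log_var, rng):
    # z = mean + std * eps with eps ~ N(0, I); the randomness lives in eps,
    # so mean and log-variance remain differentiable in the TF version
    eps = rng.standard_normal(z_mean.shape)
    return z_mean + np.exp(0.5 * z_log_var) * eps

rng = np.random.default_rng(0)
z = sample_latent(np.zeros(4), np.zeros(4), rng)  # zero-mean, unit-variance sample
```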

class
alibi_detect.models.autoencoder.
Seq2Seq
(encoder_net, decoder_net, threshold_net, n_features, score_fn=tensorflow.math.squared_difference, beta=1.0, name='seq2seq')[source]Â¶ Bases:
tensorflow.keras.Model

__init__
(encoder_net, decoder_net, threshold_net, n_features, score_fn=tensorflow.math.squared_difference, beta=1.0, name='seq2seq')[source]Â¶ Sequencetosequence model.
 Parameters
encoder_net (
EncoderLSTM
) â€“ Encoder network.decoder_net (
DecoderLSTM
) â€“ Decoder network.threshold_net (tensorflow.keras.Sequential) â€“ Regression network used to estimate threshold.
n_features (
int
) â€“ Number of features.score_fn (
Callable
) â€“ Function used for outlier score.beta (
float
) â€“ Weight on the threshold estimation loss term.name (
str
) â€“ Name of the seq2seq model.
 Return type
None


class alibi_detect.models.autoencoder.VAE(encoder_net, decoder_net, latent_dim, beta=1.0, name='vae')[source]¶
Bases: tensorflow.keras.Model

__init__(encoder_net, decoder_net, latent_dim, beta=1.0, name='vae')[source]¶
Combine encoder and decoder in VAE.

Parameters
encoder_net (tensorflow.keras.Sequential) – Layers for the encoder wrapped in a tf.keras.Sequential class.
decoder_net (tensorflow.keras.Sequential) – Layers for the decoder wrapped in a tf.keras.Sequential class.
latent_dim (int) – Dimensionality of the latent space.
beta (float) – Beta parameter for KL-divergence loss term.
name (str) – Name of VAE model.

Return type
None


class alibi_detect.models.autoencoder.VAEGMM(encoder_net, decoder_net, gmm_density_net, n_gmm, latent_dim, recon_features=<function eucl_cosim_features>, beta=1.0, name='vaegmm')[source]¶
Bases: tensorflow.keras.Model

__init__(encoder_net, decoder_net, gmm_density_net, n_gmm, latent_dim, recon_features=<function eucl_cosim_features>, beta=1.0, name='vaegmm')[source]¶
Variational Autoencoding Gaussian Mixture Model.

Parameters
encoder_net (tensorflow.keras.Sequential) – Layers for the encoder wrapped in a tf.keras.Sequential class.
decoder_net (tensorflow.keras.Sequential) – Layers for the decoder wrapped in a tf.keras.Sequential class.
gmm_density_net (tensorflow.keras.Sequential) – Layers for the GMM network wrapped in a tf.keras.Sequential class.
n_gmm (int) – Number of components in GMM.
latent_dim (int) – Dimensionality of the latent space.
recon_features (Callable) – Function to extract features from the reconstructed instance by the decoder.
beta (float) – Beta parameter for KL-divergence loss term.
name (str) – Name of the VAEGMM model.

Return type
None


alibi_detect.models.autoencoder.
eucl_cosim_features
(x, y, max_eucl=100.0)[source]¶ Compute features extracted from the reconstructed instance using the relative Euclidean distance and cosine similarity between two tensors.
 Parameters
x (tensorflow.Tensor) – Tensor used in feature computation.
y (tensorflow.Tensor) – Tensor used in feature computation.
max_eucl (float) – Maximum value to clip the relative Euclidean distance by.
 Return type
tensorflow.Tensor
 Returns
Tensor concatenating the relative Euclidean distance and cosine similarity features.
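The two features can be sketched in NumPy. This follows the common DAGMM-style formulation (relative Euclidean distance ||x−y||/||x|| clipped at max_eucl, plus cosine similarity); the library's exact variant may differ in detail, and `eucl_cosim_sketch` is a hypothetical name for illustration:

```python
import numpy as np

def eucl_cosim_sketch(x, y, max_eucl=1e2):
    # Per-sample relative Euclidean distance (clipped at max_eucl)
    # and cosine similarity, stacked into a (batch, 2) feature tensor.
    eps = 1e-12
    rel_euc = np.linalg.norm(x - y, axis=-1) / (np.linalg.norm(x, axis=-1) + eps)
    rel_euc = np.minimum(rel_euc, max_eucl)
    cosim = (x * y).sum(-1) / (
        np.linalg.norm(x, axis=-1) * np.linalg.norm(y, axis=-1) + eps
    )
    return np.stack([rel_euc, cosim], axis=-1)

# A perfect reconstruction gives distance ~0 and cosine similarity ~1.
feats = eucl_cosim_sketch(np.array([[1.0, 0.0]]), np.array([[1.0, 0.0]]))
```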
alibi_detect.models.gmm module¶

alibi_detect.models.gmm.
gmm_energy
(z, phi, mu, cov, L, log_det_cov, return_mean=True)[source]¶ Compute sample energy from a Gaussian Mixture Model.
 Parameters
z (tensorflow.Tensor) – Observations.
phi (tensorflow.Tensor) – Mixture component distribution weights.
mu (tensorflow.Tensor) – Mixture means.
cov (tensorflow.Tensor) – Mixture covariance.
L (tensorflow.Tensor) – Cholesky decomposition of cov.
log_det_cov (tensorflow.Tensor) – Log of the determinant of cov.
return_mean (bool) – Whether to take the mean across all sample energies in a batch.
 Return type
Tuple[tensorflow.Tensor, tensorflow.Tensor]
 Returns
sample_energy – The sample energy of the GMM.
cov_diag – The inverse sum of the diagonal components of the covariance matrix.

alibi_detect.models.gmm.
gmm_params
(z, gamma)[source]¶ Compute parameters of Gaussian Mixture Model.
 Parameters
z (tensorflow.Tensor) – Observations.
gamma (tensorflow.Tensor) – Mixture probabilities to derive mixture distribution weights from.
 Return type
Tuple[tensorflow.Tensor, tensorflow.Tensor, tensorflow.Tensor, tensorflow.Tensor, tensorflow.Tensor]
 Returns
phi – Mixture component distribution weights.
mu – Mixture means.
cov – Mixture covariance.
L – Cholesky decomposition of cov.
log_det_cov – Log of the determinant of cov.
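The GMM computations documented above can be sketched in NumPy. This is an illustrative sketch of the underlying math (soft-assignment parameter estimates and the sample energy E(z) = −log Σₖ φₖ N(z; μₖ, covₖ)), not the library's TensorFlow code; the `*_sketch` names are hypothetical:

```python
import numpy as np

def gmm_params_sketch(z, gamma):
    # z: (n, d) observations; gamma: (n, K) soft component memberships.
    n_k = gamma.sum(0)                         # effective counts per component
    phi = n_k / gamma.shape[0]                 # mixture weights
    mu = (gamma.T @ z) / n_k[:, None]          # (K, d) weighted means
    diff = z[:, None, :] - mu[None, :, :]      # (n, K, d)
    cov = np.einsum('nk,nki,nkj->kij', gamma, diff, diff) / n_k[:, None, None]
    return phi, mu, cov

def gmm_energy_sketch(z, phi, mu, cov):
    # Sample energy E(z) = -log sum_k phi_k N(z; mu_k, cov_k),
    # computed in log space for numerical stability.
    d = z.shape[-1]
    energies = []
    for zi in z:
        log_probs = []
        for k in range(len(phi)):
            diff = zi - mu[k]
            inv = np.linalg.inv(cov[k])
            _, logdet = np.linalg.slogdet(cov[k])
            log_n = -0.5 * (diff @ inv @ diff + d * np.log(2 * np.pi) + logdet)
            log_probs.append(np.log(phi[k]) + log_n)
        energies.append(-np.logaddexp.reduce(log_probs))
    return np.array(energies)
```

Low-energy points are well explained by the mixture; high-energy points are outlier candidates.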
alibi_detect.models.losses module¶

alibi_detect.models.losses.
elbo
(y_true, y_pred, cov_full=None, cov_diag=None, sim=0.05)[source]¶ Compute ELBO loss.
 Parameters
y_true (tensorflow.Tensor) – Batch of instances.
y_pred (tensorflow.Tensor) – Batch of reconstructed instances.
cov_full (Optional[tensorflow.Tensor]) – Full covariance matrix.
cov_diag (Optional[tensorflow.Tensor]) – Diagonal (variance) of covariance matrix.
sim (float) – Scale identity multiplier.
 Return type
tensorflow.Tensor
 Returns
ELBO loss value.
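Conceptually, the (negative) ELBO is a Gaussian reconstruction negative log-likelihood plus a KL term pulling the latent posterior towards a standard normal. The NumPy sketch below treats sim as the variance of an isotropic Gaussian observation model; the library's exact parameterization may differ, and `elbo_loss_sketch` is a hypothetical name:

```python
import numpy as np

def elbo_loss_sketch(y_true, y_pred, z_mean, z_log_var, sim=0.05):
    # Negative ELBO = Gaussian reconstruction NLL (variance = sim)
    #               + KL(N(z_mean, exp(z_log_var)) || N(0, I)).
    d = y_true.size
    nll = 0.5 * (np.sum((y_true - y_pred) ** 2) / sim
                 + d * np.log(2 * np.pi * sim))
    kl = 0.5 * np.sum(np.exp(z_log_var) + z_mean ** 2 - 1.0 - z_log_var)
    return nll + kl
```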

alibi_detect.models.losses.
loss_adv_ae
(x_true, x_pred, model=None, model_hl=None, w_model=1.0, w_recon=0.0, w_model_hl=None, temperature=1.0)[source]¶ Loss function used for AdversarialAE.
 Parameters
x_true (tensorflow.Tensor) – Batch of instances.
x_pred (tensorflow.Tensor) – Batch of instances reconstructed by the autoencoder.
model (Optional[tensorflow.keras.Model]) – A trained tf.keras model with frozen layers (layers.trainable = False).
model_hl (Optional[list]) – List with tf.keras models used to extract feature maps and make predictions on hidden layers.
w_model (float) – Weight on the model prediction loss term.
w_recon (float) – Weight on the MSE reconstruction error loss term.
w_model_hl (Optional[list]) – Weights assigned to the loss of each model in model_hl.
temperature (float) – Temperature used for model prediction scaling. Temperature < 1 sharpens the prediction probability distribution.
 Return type
tensorflow.Tensor
 Returns
Loss value.

alibi_detect.models.losses.
loss_aegmm
(x_true, x_pred, z, gamma, w_energy=0.1, w_cov_diag=0.005)[source]¶ Loss function used for OutlierAEGMM.
 Parameters
x_true (tensorflow.Tensor) – Batch of instances.
x_pred (tensorflow.Tensor) – Batch of instances reconstructed by the autoencoder.
z (tensorflow.Tensor) – Latent space values.
gamma (tensorflow.Tensor) – Membership prediction for mixture model components.
w_energy (float) – Weight on the sample energy loss term.
w_cov_diag (float) – Weight on the covariance regularizing loss term.
 Return type
tensorflow.Tensor
 Returns
Loss value.
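The structure of this loss is a weighted sum of a reconstruction term, the GMM sample energy, and a covariance regularizer. A minimal NumPy sketch of that combination (taking the energy and regularizer as precomputed scalars; `loss_aegmm_sketch` is a hypothetical name, not the library function):

```python
import numpy as np

def loss_aegmm_sketch(x_true, x_pred, sample_energy, cov_diag,
                      w_energy=0.1, w_cov_diag=0.005):
    # Reconstruction MSE + weighted mean sample energy
    # + weighted covariance regularizer (penalizes degenerate components).
    recon = np.mean((x_true - x_pred) ** 2)
    return recon + w_energy * sample_energy + w_cov_diag * cov_diag
```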

alibi_detect.models.losses.
loss_vaegmm
(x_true, x_pred, z, gamma, w_recon=1e-07, w_energy=0.1, w_cov_diag=0.005, cov_full=None, cov_diag=None, sim=0.05)[source]¶ Loss function used for OutlierVAEGMM.
 Parameters
x_true (tensorflow.Tensor) – Batch of instances.
x_pred (tensorflow.Tensor) – Batch of instances reconstructed by the variational autoencoder.
z (tensorflow.Tensor) – Latent space values.
gamma (tensorflow.Tensor) – Membership prediction for mixture model components.
w_recon (float) – Weight on the elbo loss term.
w_energy (float) – Weight on the sample energy loss term.
w_cov_diag (float) – Weight on the covariance regularizing loss term.
cov_full (Optional[tensorflow.Tensor]) – Full covariance matrix.
cov_diag (Optional[tensorflow.Tensor]) – Diagonal (variance) of covariance matrix.
sim (float) – Scale identity multiplier.
 Return type
tensorflow.Tensor
 Returns
Loss value.
alibi_detect.models.trainer module¶

alibi_detect.models.trainer.
trainer
(model, loss_fn, X_train, y_train=None, optimizer=tensorflow.keras.optimizers.Adam, loss_fn_kwargs=None, preprocess_fn=None, epochs=20, batch_size=64, buffer_size=1024, verbose=True, log_metric=None, callbacks=None)[source]¶ Train TensorFlow model.
 Parameters
model (tensorflow.keras.Model) – Model to train.
loss_fn (tensorflow.keras.losses) – Loss function used for training.
X_train (numpy.ndarray) – Training batch.
y_train (Optional[numpy.ndarray]) – Training labels.
optimizer (tensorflow.keras.optimizers) – Optimizer used for training.
loss_fn_kwargs (Optional[dict]) – Kwargs for the loss function.
preprocess_fn (Optional[Callable]) – Preprocessing function applied to each training batch.
epochs (int) – Number of training epochs.
batch_size (int) – Batch size used for training.
buffer_size (int) – Maximum number of elements that will be buffered when prefetching.
verbose (bool) – Whether to print training progress.
log_metric (Optional[Tuple[str, tensorflow.keras.metrics]]) – Additional metrics whose progress will be displayed if verbose equals True.
callbacks (Optional[tensorflow.keras.callbacks]) – Callbacks used during training.
 Return type
None
alibi_detect.od package¶

class
alibi_detect.od.
OutlierAEGMM
(threshold=None, aegmm=None, encoder_net=None, decoder_net=None, gmm_density_net=None, n_gmm=None, recon_features=<function eucl_cosim_features>, data_type=None)[source]¶ Bases:
alibi_detect.base.BaseDetector
,alibi_detect.base.FitMixin
,alibi_detect.base.ThresholdMixin

__init__
(threshold=None, aegmm=None, encoder_net=None, decoder_net=None, gmm_density_net=None, n_gmm=None, recon_features=<function eucl_cosim_features>, data_type=None)[source]¶ AEGMM-based outlier detector.
 Parameters
threshold (Optional[float]) – Threshold used for outlier score to determine outliers.
aegmm (Optional[tensorflow.keras.Model]) – A trained tf.keras model if available.
encoder_net (Optional[tensorflow.keras.Sequential]) – Layers for the encoder wrapped in a tf.keras.Sequential class if no 'aegmm' is specified.
decoder_net (Optional[tensorflow.keras.Sequential]) – Layers for the decoder wrapped in a tf.keras.Sequential class if no 'aegmm' is specified.
gmm_density_net (Optional[tensorflow.keras.Sequential]) – Layers for the GMM network wrapped in a tf.keras.Sequential class.
n_gmm (Optional[int]) – Number of components in GMM.
recon_features (Callable) – Function to extract features from the reconstructed instance by the decoder.
data_type (Optional[str]) – Optionally specify the data type (tabular, image or time-series). Added to metadata.
 Return type
None

fit
(X, loss_fn=<function loss_aegmm>, w_energy=0.1, w_cov_diag=0.005, optimizer=tensorflow.keras.optimizers.Adam, epochs=20, batch_size=64, verbose=True, log_metric=None, callbacks=None)[source]¶ Train AEGMM model.
 Parameters
X (numpy.ndarray) – Training batch.
loss_fn (tensorflow.keras.losses) – Loss function used for training.
w_energy (float) – Weight on the sample energy loss term if the default loss_aegmm loss fn is used.
w_cov_diag (float) – Weight on the covariance regularizing loss term if the default loss_aegmm loss fn is used.
optimizer (tensorflow.keras.optimizers) – Optimizer used for training.
epochs (int) – Number of training epochs.
batch_size (int) – Batch size used for training.
verbose (bool) – Whether to print training progress.
log_metric (Optional[Tuple[str, tensorflow.keras.metrics]]) – Additional metrics whose progress will be displayed if verbose equals True.
callbacks (Optional[tensorflow.keras.callbacks]) – Callbacks used during training.
 Return type
None

infer_threshold
(X, threshold_perc=95.0, batch_size=10000000000)[source]¶ Update threshold by a value inferred from the percentage of instances considered to be outliers in a sample of the dataset.

predict
(X, batch_size=10000000000, return_instance_score=True)[source]¶ Compute outlier scores and transform into outlier predictions.
 Parameters
 Return type
 Returns
Dictionary containing 'meta' and 'data' dictionaries.
'meta' has the model's metadata.
'data' contains the outlier predictions and instance level outlier scores.


class
alibi_detect.od.
IForest
(threshold=None, n_estimators=100, max_samples='auto', max_features=1.0, bootstrap=False, n_jobs=1, data_type='tabular')[source]¶ Bases:
alibi_detect.base.BaseDetector
,alibi_detect.base.FitMixin
,alibi_detect.base.ThresholdMixin

__init__
(threshold=None, n_estimators=100, max_samples='auto', max_features=1.0, bootstrap=False, n_jobs=1, data_type='tabular')[source]¶ Outlier detector for tabular data using isolation forests.
 Parameters
threshold (Optional[float]) – Threshold used for outlier score to determine outliers.
n_estimators (int) – Number of base estimators in the ensemble.
max_samples (Union[str, int, float]) – Number of samples to draw from the training data to train each base estimator. If int, draw 'max_samples' samples. If float, draw 'max_samples * number of samples' samples. If 'auto', max_samples = min(256, number of samples).
max_features (Union[int, float]) – Number of features to draw from the training data to train each base estimator. If int, draw 'max_features' features. If float, draw 'max_features * number of features' features.
bootstrap (bool) – Whether to fit individual trees on random subsets of the training data, sampled with replacement.
n_jobs (int) – Number of jobs to run in parallel for 'fit' and 'predict'.
data_type (str) – Optionally specify the data type (tabular, image or time-series). Added to metadata.
 Return type
None

fit
(X, sample_weight=None)[source]¶ Fit isolation forest.
 Parameters
X (numpy.ndarray) – Training batch.
sample_weight (Optional[numpy.ndarray]) – Sample weights.
 Return type
None

infer_threshold
(X, threshold_perc=95.0)[source]¶ Update threshold by a value inferred from the percentage of instances considered to be outliers in a sample of the dataset.
 Parameters
X (numpy.ndarray) – Batch of instances.
threshold_perc (float) – Percentage of X considered to be normal based on the outlier score.
 Return type
None

predict
(X, return_instance_score=True)[source]¶ Compute outlier scores and transform into outlier predictions.
 Parameters
X (numpy.ndarray) – Batch of instances.
return_instance_score (bool) – Whether to return instance level outlier scores.
 Return type
 Returns
Dictionary containing 'meta' and 'data' dictionaries.
'meta' has the model's metadata.
'data' contains the outlier predictions and instance level outlier scores.

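The infer_threshold semantics above (threshold_perc % of the sample is treated as normal, so the threshold becomes that percentile of the outlier scores) can be sketched in NumPy. This is an illustrative sketch of the thresholding logic only, not the detector's code; `infer_threshold_sketch` is a hypothetical name:

```python
import numpy as np

def infer_threshold_sketch(scores, threshold_perc=95.0):
    # threshold_perc % of instances are considered normal, so the
    # threshold is the threshold_perc-th percentile of the outlier scores.
    return np.percentile(scores, threshold_perc)

# 95 normal instances (score 0) and 5 clear outliers (score 10).
scores = np.concatenate([np.zeros(95), np.full(5, 10.0)])
threshold = infer_threshold_sketch(scores, threshold_perc=95.0)
is_outlier = (scores > threshold).astype(int)
```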

class
alibi_detect.od.
Mahalanobis
(threshold=None, n_components=3, std_clip=3, start_clip=100, max_n=None, cat_vars=None, ohe=False, data_type='tabular')[source]¶ Bases:
alibi_detect.base.BaseDetector
,alibi_detect.base.FitMixin
,alibi_detect.base.ThresholdMixin

__init__
(threshold=None, n_components=3, std_clip=3, start_clip=100, max_n=None, cat_vars=None, ohe=False, data_type='tabular')[source]¶ Outlier detector for tabular data using the Mahalanobis distance.
 Parameters
threshold (Optional[float]) – Mahalanobis distance threshold used to classify outliers.
n_components (int) – Number of principal components used.
std_clip (int) – Feature-wise standard deviation used to clip the observations before updating the mean and covariance.
start_clip (int) – Number of observations before clipping is applied.
max_n (Optional[int]) – Algorithm behaves as if it has seen at most max_n points.
cat_vars (Optional[dict]) – Dict with as keys the categorical columns and as values the number of categories per categorical variable.
ohe (bool) – Whether the categorical variables are one-hot encoded (OHE) or not. If not OHE, they are assumed to have ordinal encodings.
data_type (str) – Optionally specify the data type (tabular, image or time-series). Added to metadata.
 Return type
None

cat2num
(X)[source]¶ Convert categorical variables to numerical values.
 Parameters
X (numpy.ndarray) â€“ Batch of instances to analyze.
 Return type
numpy.ndarray
 Returns
Batch of instances where categorical variables are converted to numerical values.

fit
(X, y=None, d_type='abdm', w=None, disc_perc=[25, 50, 75], standardize_cat_vars=True, feature_range=(-10000000000.0, 10000000000.0), smooth=1.0, center=True)[source]¶ If categorical variables are present, then transform those to numerical values. This step is not necessary in the absence of categorical variables.
 Parameters
X (numpy.ndarray) – Batch of instances used to infer distances between categories from.
y (Optional[numpy.ndarray]) – Model class predictions or ground truth labels for X. Used for 'mvdm' and 'abdm-mvdm' pairwise distance metrics. Note that this is only compatible with classification problems. For regression problems, use the 'abdm' distance metric.
d_type (str) – Pairwise distance metric used for categorical variables. Currently, 'abdm', 'mvdm' and 'abdm-mvdm' are supported. 'abdm' infers context from the other variables while 'mvdm' uses the model predictions. 'abdm-mvdm' is a weighted combination of the two metrics.
w (Optional[float]) – Weight on the 'abdm' distance (between 0. and 1.) if d_type equals 'abdm-mvdm'.
disc_perc (list) – List with percentiles used in binning of numerical features used for the 'abdm' and 'abdm-mvdm' pairwise distance measures.
standardize_cat_vars (bool) – Standardize numerical values of categorical variables if True.
feature_range (tuple) – Tuple with min and max ranges to allow for perturbed instances. Min and max ranges can be floats or numpy arrays with dimension (1 x nb of features) for feature-wise ranges.
smooth (float) – Smoothing exponent between 0 and 1 for the distances. Lower values will smooth the difference in distance metric between different features.
center (bool) – Whether to center the scaled distance measures. If False, the min distance for each feature (except for the feature with the highest raw max distance) will be the lower bound of the feature range, but the upper bound will be below the max feature range.
 Return type
None
 Return type
None

infer_threshold
(X, threshold_perc=95.0)[source]Â¶ Update threshold by a value inferred from the percentage of instances considered to be outliers in a sample of the dataset.
 Parameters
X (numpy.ndarray) â€“ Batch of instances.
threshold_perc (
float
) â€“ Percentage of X considered to be normal based on the outlier score.
 Return type
None

predict
(X, return_instance_score=True)[source]Â¶ Compute outlier scores and transform into outlier predictions.
 Parameters
X (numpy.ndarray) â€“ Batch of instances.
return_instance_score (
bool
) â€“ Whether to return instance level outlier scores.
 Return type
 Returns
Dictionary containing â€˜metaâ€™ and â€˜dataâ€™ dictionaries.
â€™metaâ€™ has the modelâ€™s metadata.
â€™dataâ€™ contains the outlier predictions and instance level outlier scores.

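The core score used by this detector, the (squared) Mahalanobis distance, can be sketched in NumPy. The detector itself maintains the mean and covariance online and works on principal components; this batch analogue only illustrates the distance computation, and `mahalanobis_scores_sketch` is a hypothetical name:

```python
import numpy as np

def mahalanobis_scores_sketch(X, X_ref):
    # Squared Mahalanobis distance of each row of X to the mean of X_ref,
    # under the (slightly regularized) covariance of X_ref.
    mu = X_ref.mean(0)
    cov = np.cov(X_ref, rowvar=False) + 1e-6 * np.eye(X_ref.shape[1])
    inv = np.linalg.inv(cov)
    diff = X - mu
    # Quadratic form diff @ inv @ diff.T, one score per row.
    return np.einsum('ij,jk,ik->i', diff, inv, diff)
```

Instances whose score exceeds the detector's threshold are flagged as outliers.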

class
alibi_detect.od.
OutlierAE
(threshold=None, ae=None, encoder_net=None, decoder_net=None, data_type=None)[source]¶ Bases:
alibi_detect.base.BaseDetector
,alibi_detect.base.FitMixin
,alibi_detect.base.ThresholdMixin

__init__
(threshold=None, ae=None, encoder_net=None, decoder_net=None, data_type=None)[source]¶ AE-based outlier detector.
 Parameters
threshold (Optional[float]) – Threshold used for outlier score to determine outliers.
ae (Optional[tensorflow.keras.Model]) – A trained tf.keras model if available.
encoder_net (Optional[tensorflow.keras.Sequential]) – Layers for the encoder wrapped in a tf.keras.Sequential class if no 'ae' is specified.
decoder_net (Optional[tensorflow.keras.Sequential]) – Layers for the decoder wrapped in a tf.keras.Sequential class if no 'ae' is specified.
data_type (Optional[str]) – Optionally specify the data type (tabular, image or time-series). Added to metadata.
 Return type
None

feature_score
(X_orig, X_recon)[source]¶ Compute feature level outlier scores.
 Parameters
X_orig (numpy.ndarray) – Batch of original instances.
X_recon (numpy.ndarray) – Batch of reconstructed instances.
 Return type
numpy.ndarray
 Returns
Feature level outlier scores.

fit
(X, loss_fn=tensorflow.keras.losses.MeanSquaredError, optimizer=tensorflow.keras.optimizers.Adam, epochs=20, batch_size=64, verbose=True, log_metric=None, callbacks=None)[source]¶ Train AE model.
 Parameters
X (numpy.ndarray) – Training batch.
loss_fn (tensorflow.keras.losses) – Loss function used for training.
optimizer (tensorflow.keras.optimizers) – Optimizer used for training.
epochs (int) – Number of training epochs.
batch_size (int) – Batch size used for training.
verbose (bool) – Whether to print training progress.
log_metric (Optional[Tuple[str, tensorflow.keras.metrics]]) – Additional metrics whose progress will be displayed if verbose equals True.
callbacks (Optional[tensorflow.keras.callbacks]) – Callbacks used during training.
 Return type
None

infer_threshold
(X, outlier_type='instance', outlier_perc=100.0, threshold_perc=95.0, batch_size=10000000000)[source]¶ Update threshold by a value inferred from the percentage of instances considered to be outliers in a sample of the dataset.
 Parameters
X (numpy.ndarray) – Batch of instances.
outlier_type (str) – Predict outliers at the 'feature' or 'instance' level.
outlier_perc (float) – Percentage of sorted feature level outlier scores used to predict instance level outlier.
threshold_perc (float) – Percentage of X considered to be normal based on the outlier score.
batch_size (int) – Batch size used when making predictions with the autoencoder.
 Return type
None

instance_score
(fscore, outlier_perc=100.0)[source]¶ Compute instance level outlier scores.
 Parameters
fscore (numpy.ndarray) – Feature level outlier scores.
outlier_perc (float) – Percentage of sorted feature level outlier scores used to predict instance level outlier.
 Return type
numpy.ndarray
 Returns
Instance level outlier scores.

predict
(X, outlier_type='instance', outlier_perc=100.0, batch_size=10000000000, return_feature_score=True, return_instance_score=True)[source]¶ Predict whether instances are outliers or not.
 Parameters
X (numpy.ndarray) – Batch of instances.
outlier_type (str) – Predict outliers at the 'feature' or 'instance' level.
outlier_perc (float) – Percentage of sorted feature level outlier scores used to predict instance level outlier.
batch_size (int) – Batch size used when making predictions with the autoencoder.
return_feature_score (bool) – Whether to return feature level outlier scores.
return_instance_score (bool) – Whether to return instance level outlier scores.
 Return type
 Returns
Dictionary containing 'meta' and 'data' dictionaries.
'meta' has the model's metadata.
'data' contains the outlier predictions and both feature and instance level outlier scores.
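The relationship between feature level and instance level scores can be sketched in NumPy: feature scores are per-feature squared reconstruction errors, and the instance score aggregates the top outlier_perc % of each instance's sorted feature scores. This is an illustrative sketch of the scoring logic, not the detector's code; the `*_sketch` names are hypothetical:

```python
import numpy as np

def feature_score_sketch(X_orig, X_recon):
    # Per-feature squared reconstruction error.
    return (X_orig - X_recon) ** 2

def instance_score_sketch(fscore, outlier_perc=100.0):
    # Mean over the largest `outlier_perc` % of each instance's sorted
    # feature scores; outlier_perc=100 reduces to the plain mean.
    fscore = fscore.reshape(fscore.shape[0], -1)
    n_keep = max(1, int(np.ceil(fscore.shape[1] * outlier_perc / 100.0)))
    sorted_desc = np.sort(fscore, axis=1)[:, ::-1]
    return sorted_desc[:, :n_keep].mean(axis=1)
```

Using a smaller outlier_perc focuses the instance score on the worst-reconstructed features, which helps when outliers only corrupt a few features.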


class
alibi_detect.od.
OutlierVAE
(threshold=None, score_type='mse', vae=None, encoder_net=None, decoder_net=None, latent_dim=None, samples=10, beta=1.0, data_type=None)[source]¶ Bases:
alibi_detect.base.BaseDetector
,alibi_detect.base.FitMixin
,alibi_detect.base.ThresholdMixin

__init__
(threshold=None, score_type='mse', vae=None, encoder_net=None, decoder_net=None, latent_dim=None, samples=10, beta=1.0, data_type=None)[source]¶ VAE-based outlier detector.
 Parameters
threshold (Optional[float]) – Threshold used for outlier score to determine outliers.
score_type (str) – Metric used for outlier scores. Either 'mse' (mean squared error) or 'proba' (reconstruction probabilities) supported.
vae (Optional[tensorflow.keras.Model]) – A trained tf.keras model if available.
encoder_net (Optional[tensorflow.keras.Sequential]) – Layers for the encoder wrapped in a tf.keras.Sequential class if no 'vae' is specified.
decoder_net (Optional[tensorflow.keras.Sequential]) – Layers for the decoder wrapped in a tf.keras.Sequential class if no 'vae' is specified.
latent_dim (Optional[int]) – Dimensionality of the latent space.
samples (int) – Number of samples sampled to evaluate each instance.
beta (float) – Beta parameter for the KL-divergence loss term.
data_type (Optional[str]) – Optionally specify the data type (tabular, image or time-series). Added to metadata.
 Return type
None

feature_score
(X_orig, X_recon)[source]¶ Compute feature level outlier scores.
 Parameters
X_orig (numpy.ndarray) – Batch of original instances.
X_recon (numpy.ndarray) – Batch of reconstructed instances.
 Return type
numpy.ndarray
 Returns
Feature level outlier scores.

fit
(X, loss_fn=<function elbo>, optimizer=tensorflow.keras.optimizers.Adam, cov_elbo={'sim': 0.05}, epochs=20, batch_size=64, verbose=True, log_metric=None, callbacks=None)[source]¶ Train VAE model.
 Parameters
X (numpy.ndarray) – Training batch.
loss_fn (tensorflow.keras.losses) – Loss function used for training.
optimizer (tensorflow.keras.optimizers) – Optimizer used for training.
cov_elbo (dict) – Dictionary with covariance matrix options in case the elbo loss function is used. Either use the full covariance matrix inferred from X (dict(cov_full=None)), only the variance (dict(cov_diag=None)) or a float representing the same standard deviation for each feature (e.g. dict(sim=.05)).
epochs (int) – Number of training epochs.
batch_size (int) – Batch size used for training.
verbose (bool) – Whether to print training progress.
log_metric (Optional[Tuple[str, tensorflow.keras.metrics]]) – Additional metrics whose progress will be displayed if verbose equals True.
callbacks (Optional[tensorflow.keras.callbacks]) – Callbacks used during training.
 Return type
None

infer_threshold
(X, outlier_type='instance', outlier_perc=100.0, threshold_perc=95.0, batch_size=10000000000)[source]¶ Update threshold by a value inferred from the percentage of instances considered to be outliers in a sample of the dataset.
 Parameters
X (numpy.ndarray) – Batch of instances.
outlier_type (str) – Predict outliers at the 'feature' or 'instance' level.
outlier_perc (float) – Percentage of sorted feature level outlier scores used to predict instance level outlier.
threshold_perc (float) – Percentage of X considered to be normal based on the outlier score.
batch_size (int) – Batch size used when making predictions with the VAE.
 Return type
None

instance_score
(fscore, outlier_perc=100.0)[source]¶ Compute instance level outlier scores.
 Parameters
fscore (numpy.ndarray) – Feature level outlier scores.
outlier_perc (float) – Percentage of sorted feature level outlier scores used to predict instance level outlier.
 Return type
numpy.ndarray
 Returns
Instance level outlier scores.

predict
(X, outlier_type='instance', outlier_perc=100.0, batch_size=10000000000, return_feature_score=True, return_instance_score=True)[source]¶ Predict whether instances are outliers or not.
 Parameters
X (numpy.ndarray) – Batch of instances.
outlier_type (str) – Predict outliers at the 'feature' or 'instance' level.
outlier_perc (float) – Percentage of sorted feature level outlier scores used to predict instance level outlier.
batch_size (int) – Batch size used when making predictions with the VAE.
return_feature_score (bool) – Whether to return feature level outlier scores.
return_instance_score (bool) – Whether to return instance level outlier scores.
 Return type
 Returns
Dictionary containing 'meta' and 'data' dictionaries.
'meta' has the model's metadata.
'data' contains the outlier predictions and both feature and instance level outlier scores.


class
alibi_detect.od.
OutlierVAEGMM
(threshold=None, vaegmm=None, encoder_net=None, decoder_net=None, gmm_density_net=None, n_gmm=None, latent_dim=None, samples=10, beta=1.0, recon_features=<function eucl_cosim_features>, data_type=None)[source]¶ Bases:
alibi_detect.base.BaseDetector
,alibi_detect.base.FitMixin
,alibi_detect.base.ThresholdMixin

__init__
(threshold=None, vaegmm=None, encoder_net=None, decoder_net=None, gmm_density_net=None, n_gmm=None, latent_dim=None, samples=10, beta=1.0, recon_features=<function eucl_cosim_features>, data_type=None)[source]¶ VAEGMM-based outlier detector.
 Parameters
threshold (Optional[float]) – Threshold used for outlier score to determine outliers.
vaegmm (Optional[tensorflow.keras.Model]) – A trained tf.keras model if available.
encoder_net (Optional[tensorflow.keras.Sequential]) – Layers for the encoder wrapped in a tf.keras.Sequential class if no 'vaegmm' is specified.
decoder_net (Optional[tensorflow.keras.Sequential]) – Layers for the decoder wrapped in a tf.keras.Sequential class if no 'vaegmm' is specified.
gmm_density_net (Optional[tensorflow.keras.Sequential]) – Layers for the GMM network wrapped in a tf.keras.Sequential class.
n_gmm (Optional[int]) – Number of components in GMM.
latent_dim (Optional[int]) – Dimensionality of the latent space.
samples (int) – Number of samples sampled to evaluate each instance.
