This page was generated from od/methods/vae.ipynb.

Variational Auto-Encoder

Overview

The Variational Auto-Encoder (VAE) outlier detector is first trained on a batch of unlabeled, but normal (inlier) data. Unsupervised or semi-supervised training is desirable since labeled data is often scarce. The VAE detector tries to reconstruct the input it receives. If the input data cannot be reconstructed well, the reconstruction error is high and the data can be flagged as an outlier. The reconstruction error is either measured as the mean squared error (MSE) between the input and the reconstructed instance or as the probability that both the input and the reconstructed instance are generated by the same process. The algorithm is suitable for tabular and image data.
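
The MSE-based score can be thought of as the squared error per feature (e.g. per pixel) between an instance and its reconstruction, averaged over the features to obtain an instance level score. A minimal sketch of this idea (the helper below is illustrative only, not the detector's internal implementation):

import numpy as np

def reconstruction_scores(x: np.ndarray, x_recon: np.ndarray):
    # feature level score: squared error per feature (e.g. per pixel for images)
    feature_score = (x - x_recon) ** 2
    # instance level score: mean of the feature level scores per instance
    instance_score = feature_score.reshape(x.shape[0], -1).mean(axis=1)
    return feature_score, instance_score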

Usage

Initialize

Parameters:

  • threshold: threshold value above which the instance is flagged as an outlier.

  • score_type: scoring method used to detect outliers. Currently only the default ‘mse’ is supported.

  • latent_dim: latent dimension of the VAE.

  • encoder_net: tf.keras.Sequential instance containing the encoder network. Example:

import tensorflow as tf
from tensorflow.keras.layers import Conv2D, InputLayer

encoder_net = tf.keras.Sequential(
  [
      InputLayer(input_shape=(32, 32, 3)),
      Conv2D(64, 4, strides=2, padding='same', activation=tf.nn.relu),
      Conv2D(128, 4, strides=2, padding='same', activation=tf.nn.relu),
      Conv2D(512, 4, strides=2, padding='same', activation=tf.nn.relu)
  ])

  • decoder_net: tf.keras.Sequential instance containing the decoder network. Example:

from tensorflow.keras.layers import Conv2DTranspose, Dense, Reshape

latent_dim = 1024  # latent dimension of the VAE; matches the detector initialization below

decoder_net = tf.keras.Sequential(
  [
      InputLayer(input_shape=(latent_dim,)),
      Dense(4*4*128),
      Reshape(target_shape=(4, 4, 128)),
      Conv2DTranspose(256, 4, strides=2, padding='same', activation=tf.nn.relu),
      Conv2DTranspose(64, 4, strides=2, padding='same', activation=tf.nn.relu),
      Conv2DTranspose(3, 4, strides=2, padding='same', activation='sigmoid')
  ])

  • vae: instead of using a separate encoder and decoder, the VAE can also be passed as a tf.keras.Model.

  • samples: number of samples drawn for each instance during detection.

  • beta: weight on the KL-divergence loss term following the \(\beta\)-VAE framework. Default equals 1; see the loss sketch after this list.

  • data_type: optionally specify the data type added to the detector’s metadata, e.g. ‘tabular’ or ‘image’.
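
For reference, the default elbo loss minimizes the negative evidence lower bound, i.e. a reconstruction term plus the \(\beta\)-weighted KL-divergence between the approximate posterior and the prior. A schematic form (the exact reconstruction term depends on the cov_elbo option described under Fit):

\[
\mathcal{L}(x) = -\mathbb{E}_{q(z \mid x)}\left[\log p(x \mid z)\right] + \beta\,\mathrm{KL}\left(q(z \mid x)\,\|\,p(z)\right)
\]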

Initialized outlier detector example:

from alibi_detect.od import OutlierVAE

od = OutlierVAE(
    threshold=0.1,
    encoder_net=encoder_net,
    decoder_net=decoder_net,
    latent_dim=1024,
    samples=10
)

Fit

We then need to train the outlier detector. The following parameters can be specified:

  • X: training batch as a numpy array of preferably normal data.

  • loss_fn: loss function used for training. Defaults to the elbo loss.

  • optimizer: optimizer used for training. Defaults to Adam with learning rate 1e-3.

  • cov_elbo: dictionary with covariance matrix options in case the elbo loss function is used. Either use the full covariance matrix inferred from X (dict(cov_full=None)), only the variance (dict(cov_diag=None)), or a float representing the same standard deviation for each feature (e.g. dict(sim=.05), which is the default).

  • epochs: number of training epochs.

  • batch_size: batch size used during training.

  • verbose: boolean whether to print training progress.

  • log_metric: additional metrics whose progress will be displayed if verbose equals True.

od.fit(
    X_train,
    epochs=50
)
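
The covariance option and the other training arguments listed above can also be set explicitly; the values below are purely illustrative, not recommended settings:

od.fit(
    X_train,
    cov_elbo=dict(cov_diag=None),  # model a per-feature variance instead of the default dict(sim=.05)
    epochs=50,
    batch_size=64,
    verbose=True
)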

It is often hard to find a good threshold value. If we have a batch of normal and outlier data and we know approximately the percentage of normal data in the batch, we can infer a suitable threshold:

od.infer_threshold(
    X,
    threshold_perc=95
)
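
infer_threshold updates the detector's threshold in place, so the inferred value can be checked afterwards (assuming, as in alibi_detect, that it is exposed as the threshold attribute):

print(od.threshold)  # threshold corresponding to the 95th percentile of the instance scores on X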

Detect

We detect outliers by simply calling predict on a batch of instances X. Detection can be customized via the following parameters:

  • outlier_type: either ‘instance’ or ‘feature’. If the outlier type equals ‘instance’, the outlier score at the instance level will be used to classify the instance as an outlier or not. If ‘feature’ is selected, outlier detection happens at the feature level (e.g. by pixel in images).

  • outlier_perc: percentage of the sorted (descending) feature level outlier scores. We might for instance want to flag an image as an outlier if at least 20% of the pixel values are on average above the threshold. In this case, we set outlier_perc to 20. The default value is 100 (using all the features).

  • return_feature_score: boolean whether to return the feature level outlier scores.

  • return_instance_score: boolean whether to return the instance level outlier scores.

The prediction takes the form of a dictionary with meta and data keys. meta contains the detector’s metadata, while data is itself a dictionary holding the actual predictions under the following keys:

  • is_outlier: boolean whether instances or features are above the threshold and therefore outliers. If outlier_type equals ‘instance’, then the array is of shape (batch size,). If it equals ‘feature’, then the array is of shape (batch size, instance shape).

  • feature_score: contains feature level scores if return_feature_score equals True.

  • instance_score: contains instance level scores if return_instance_score equals True.

preds = od.predict(
    X,
    outlier_type='instance',
    outlier_perc=75,
    return_feature_score=True,
    return_instance_score=True
)
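
The returned dictionary can then be unpacked using the keys described above, for example:

is_outlier = preds['data']['is_outlier']          # outlier flags, shape (batch size,) for outlier_type='instance'
instance_score = preds['data']['instance_score']  # instance level outlier scores
feature_score = preds['data']['feature_score']    # feature level outlier scores, e.g. per pixel
print(preds['meta'])                              # detector metadata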

Examples

Image

Outlier detection on CIFAR10

Tabular

Outlier detection on KDD Cup 99

Outlier detection on Adult dataset