alibi_detect.cd.lsdd module

class alibi_detect.cd.lsdd.LSDDDrift(x_ref, backend='tensorflow', p_val=0.05, preprocess_x_ref=True, update_x_ref=None, preprocess_fn=None, sigma=None, n_permutations=100, n_kernel_centers=None, lambda_rd_max=0.2, device=None, input_shape=None, data_type=None)[source]

Bases: object

__init__(x_ref, backend='tensorflow', p_val=0.05, preprocess_x_ref=True, update_x_ref=None, preprocess_fn=None, sigma=None, n_permutations=100, n_kernel_centers=None, lambda_rd_max=0.2, device=None, input_shape=None, data_type=None)[source]

Least-squares density difference (LSDD) data drift detector using a permutation test.

Parameters
  • x_ref (Union[ndarray, list]) – Data used as reference distribution.

  • backend (str) – Backend used for the LSDD implementation (‘tensorflow’ or ‘pytorch’).

  • p_val (float) – p-value used for the significance of the permutation test.

  • preprocess_x_ref (bool) – Whether to preprocess and store the reference data at initialization.

  • update_x_ref (Optional[Dict[str, int]]) – Reference data can optionally be updated to the last n instances seen by the detector or via reservoir sampling with size n. For the former, the parameter equals {‘last’: n} while for reservoir sampling {‘reservoir_sampling’: n} is passed.

  • preprocess_fn (Optional[Callable]) – Function to preprocess the data before computing the data drift metrics.

  • sigma (Optional[ndarray]) – Optionally set the bandwidth of the Gaussian kernel used in estimating the LSDD. Can also pass multiple bandwidth values as an array. The kernel evaluation is then averaged over those bandwidths. If sigma is not specified, the ‘median heuristic’ is adopted whereby sigma is set as the median pairwise distance between reference samples.

  • n_permutations (int) – Number of permutations used in the permutation test.

  • n_kernel_centers (Optional[int]) – The number of reference samples to use as centers in the Gaussian kernel model used to estimate LSDD. Defaults to 1/20th of the reference data.

  • lambda_rd_max (float) – The maximum relative difference between two estimates of LSDD that the regularization parameter lambda is allowed to cause. Defaults to 0.2 as in the paper.

  • device (Optional[str]) – Device type used. The default None tries to use the GPU and falls back on CPU if needed. Can be specified by passing either ‘cuda’, ‘gpu’ or ‘cpu’. Only relevant for ‘pytorch’ backend.

  • input_shape (Optional[tuple]) – Shape of input data.

  • data_type (Optional[str]) – Optionally specify the data type (tabular, image or time-series). Added to metadata.

Return type

None
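
A minimal usage sketch for instantiating the detector, assuming a synthetic reference set of 500 instances with 10 features (the data and parameter choices here are illustrative, not prescriptive):

```python
import numpy as np
from alibi_detect.cd import LSDDDrift

# Hypothetical reference data: 500 instances with 10 features.
x_ref = np.random.randn(500, 10).astype(np.float32)

# Initialize the detector with the TensorFlow backend, a 5% significance
# level and 100 permutations for the permutation test.
cd = LSDDDrift(x_ref, backend='tensorflow', p_val=0.05, n_permutations=100)
```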

predict(x, return_p_val=True, return_distance=True)[source]

Predict whether a batch of data has drifted from the reference data.

Parameters
  • x (Union[ndarray, list]) – Batch of instances.

  • return_p_val (bool) – Whether to return the p-value of the permutation test.

  • return_distance (bool) – Whether to return the LSDD metric between the new batch and reference data.

Return type

Dict[str, Dict[str, Union[str, int, float]]]

Returns

  • Dictionary containing ‘meta’ and ‘data’ dictionaries.

  • ‘meta’ has the model’s metadata.

  • ‘data’ contains the drift prediction and optionally the p-value, threshold and LSDD metric.
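
A sketch of calling predict on a new batch and reading the returned dictionary, continuing the instantiation example above. The test batch is hypothetical, and the ‘data’ key names (‘is_drift’, ‘p_val’, ‘threshold’, ‘distance’) are the conventional alibi-detect keys assumed here:

```python
# Hypothetical test batch drawn from a shifted distribution.
x = np.random.randn(200, 10).astype(np.float32) + 1.0

preds = cd.predict(x, return_p_val=True, return_distance=True)

print(preds['data']['is_drift'])   # 1 if drift is detected, 0 otherwise
print(preds['data']['p_val'])      # p-value of the permutation test
print(preds['data']['threshold'])  # significance threshold used
print(preds['data']['distance'])   # LSDD estimate between batch and reference
```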