- class alibi_detect.cd.pytorch.mmd_online.MMDDriftOnlineTorch(x_ref, ert, window_size, preprocess_fn=None, x_ref_preprocessed=False, kernel=<class 'alibi_detect.utils.pytorch.kernels.GaussianRBF'>, sigma=None, n_bootstraps=1000, device=None, verbose=True, input_shape=None, data_type=None)[source]
- __init__(x_ref, ert, window_size, preprocess_fn=None, x_ref_preprocessed=False, kernel=<class 'alibi_detect.utils.pytorch.kernels.GaussianRBF'>, sigma=None, n_bootstraps=1000, device=None, verbose=True, input_shape=None, data_type=None)[source]
Online Maximum Mean Discrepancy (MMD) data drift detector using preconfigured thresholds.
x_ref (Union[ndarray, list]) – Data used as reference distribution.
ert (float) – The expected run-time (ERT) in the absence of drift. For the multivariate detectors, the ERT is defined as the expected run-time from t=0.
window_size (int) – The size of the sliding test-window used to compute the test-statistic. Smaller windows focus on responding quickly to severe drift, larger windows focus on the ability to detect slight drift.
preprocess_fn (Optional[Callable]) – Function to preprocess the data before computing the data drift metrics.
x_ref_preprocessed (bool) – Whether the given reference data x_ref has been preprocessed yet. If x_ref_preprocessed=True, only the test data x will be preprocessed at prediction time. If x_ref_preprocessed=False, the reference data will also be preprocessed.
kernel (Callable) – Kernel used for the MMD computation. Defaults to a Gaussian RBF kernel.
sigma (Optional[ndarray]) – Optionally set the GaussianRBF kernel bandwidth. Can also pass multiple bandwidth values as an array. The kernel evaluation is then averaged over those bandwidths. If sigma is not specified, the ‘median heuristic’ is adopted, whereby sigma is set as the median pairwise distance between reference samples.
n_bootstraps (int) – The number of bootstrap simulations used to configure the thresholds. The larger this is, the more accurately the desired ERT will be targeted. Should ideally be at least an order of magnitude larger than the ERT.
device (Optional[str]) – Device type used. The default None tries to use the GPU and falls back on CPU if needed. Can be specified by passing either ‘cuda’, ‘gpu’ or ‘cpu’.
verbose (bool) – Whether or not to print progress during configuration.
input_shape (Optional[tuple]) – Shape of input data.
data_type (Optional[str]) – Optionally specify the data type (tabular, image or time-series). Added to metadata.
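The ‘median heuristic’ for sigma, the bandwidth-averaging behaviour when multiple sigmas are passed, and the MMD test-statistic itself can be sketched in plain NumPy. This is a minimal illustration of the quantities described above, not the library's implementation; the function names `gaussian_rbf`, `median_heuristic` and `mmd2` are hypothetical helpers introduced here for clarity.

```python
import numpy as np

def gaussian_rbf(x, y, sigma):
    """Gaussian RBF kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    If sigma is an array of bandwidths, the evaluation is averaged over them."""
    sigmas = np.atleast_1d(sigma).astype(float)
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared pairwise distances
    return np.mean([np.exp(-d2 / (2. * s ** 2)) for s in sigmas], axis=0)

def median_heuristic(x_ref):
    """sigma set as the median pairwise distance between reference samples."""
    d2 = ((x_ref[:, None, :] - x_ref[None, :, :]) ** 2).sum(-1)
    return np.sqrt(np.median(d2[np.triu_indices_from(d2, k=1)]))

def mmd2(x, y, sigma):
    """Biased estimate of the squared MMD between samples x and y."""
    return (gaussian_rbf(x, x, sigma).mean()
            + gaussian_rbf(y, y, sigma).mean()
            - 2. * gaussian_rbf(x, y, sigma).mean())

rng = np.random.default_rng(0)
x_ref = rng.normal(size=(200, 5))             # reference distribution
window_ok = rng.normal(size=(20, 5))          # test window, no drift
window_drift = rng.normal(1., size=(20, 5))   # mean-shifted test window
sigma = median_heuristic(x_ref)
# the drifted window yields a larger MMD than the in-distribution one
assert mmd2(x_ref, window_drift, sigma) > mmd2(x_ref, window_ok, sigma)
```

In the detector itself the statistic is computed over the sliding test-window of size `window_size`, and the drift thresholds are calibrated in advance by `n_bootstraps` bootstrap simulations over the reference data so that the configured ERT is targeted.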