alibi_detect.cd.cvm_online module
- class alibi_detect.cd.cvm_online.CVMDriftOnline(x_ref, ert, window_sizes, preprocess_fn=None, x_ref_preprocessed=False, n_bootstraps=10000, batch_size=64, n_features=None, verbose=True, input_shape=None, data_type=None)[source]
Bases: BaseUniDriftOnline, DriftConfigMixin
- __init__(x_ref, ert, window_sizes, preprocess_fn=None, x_ref_preprocessed=False, n_bootstraps=10000, batch_size=64, n_features=None, verbose=True, input_shape=None, data_type=None)[source]
Online Cramér-von Mises (CVM) data drift detector using preconfigured thresholds, which tests for any change in the distribution of continuous univariate data. This detector is an adaptation of that proposed by Ross and Adams [RA12].
For multivariate data, the detector makes a correction similar to the Bonferroni correction used for the offline detector. Given \(d\) features, the detector configures thresholds by targeting the \(1-\beta\) quantile of test statistics over the simulated streams, where \(\beta = 1 - (1-(1/ERT))^{(1/d)}\). For the univariate case, this simplifies to \(\beta = 1/ERT\). At prediction time, drift is flagged if the test statistic of any feature stream exceeds its threshold.
Note
In the multivariate case, for the ERT to be accurately targeted the feature streams must be independent.
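The correction above can be computed directly. A minimal sketch (the function name `feature_beta` is illustrative, not part of the library's API):

```python
def feature_beta(ert: float, d: int) -> float:
    """Per-feature quantile level used to configure thresholds for d
    (assumed independent) feature streams: beta = 1 - (1 - 1/ERT)^(1/d)."""
    return 1.0 - (1.0 - 1.0 / ert) ** (1.0 / d)

# Univariate case: beta simplifies to 1/ERT.
print(feature_beta(100.0, d=1))   # 0.01
# With more features, each per-feature quantile level must be stricter (smaller).
print(feature_beta(100.0, d=10))
```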
- Parameters:
  - x_ref (Union[ndarray, list]) – Data used as reference distribution.
  - ert (float) – The expected run-time (ERT) in the absence of drift. For the univariate detectors, the ERT is defined as the expected run-time after the smallest window is full, i.e. the run-time from t=min(window_sizes).
  - window_sizes (List[int]) – Window sizes for the sliding test-windows used to compute the test-statistic. Smaller windows focus on responding quickly to severe drift; larger windows focus on the ability to detect slight drift.
  - preprocess_fn (Optional[Callable]) – Function to preprocess the data before computing the data drift metrics.
  - x_ref_preprocessed (bool) – Whether the given reference data x_ref has been preprocessed yet. If x_ref_preprocessed=True, only the test data x will be preprocessed at prediction time. If x_ref_preprocessed=False, the reference data will also be preprocessed.
  - n_bootstraps (int) – The number of bootstrap simulations used to configure the thresholds. The larger this is, the more accurately the desired ERT will be targeted. Should ideally be at least an order of magnitude larger than the ERT.
  - batch_size (int) – The maximum number of bootstrap simulations to compute in each batch when configuring thresholds. A smaller batch size reduces memory requirements, but can result in a longer configuration run-time.
  - n_features (Optional[int]) – Number of features used in the statistical test. No need to pass it if no preprocessing takes place. In case of a preprocessing step, this can also be inferred automatically but could be more expensive to compute.
  - verbose (bool) – Whether or not to print progress during configuration.
  - data_type (Optional[str]) – Optionally specify the data type (tabular, image or time-series). Added to metadata.
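The interplay of n_bootstraps, batch_size and the targeted quantile can be sketched as follows. This is a toy illustration only: the statistics simulated here are placeholders, not actual CVM streams, and `configure_thresholds` is a hypothetical name, not the library's internal routine.

```python
import numpy as np


def configure_thresholds(n_bootstraps: int, batch_size: int, ert: float,
                         d: int, stream_len: int, seed: int = 0) -> np.ndarray:
    """Toy sketch of threshold configuration: simulate no-drift statistics
    in batches (to limit memory), then take the (1 - beta) quantile at
    each time step."""
    rng = np.random.default_rng(seed)
    beta = 1.0 - (1.0 - 1.0 / ert) ** (1.0 / d)
    stats = np.empty((n_bootstraps, stream_len))
    for start in range(0, n_bootstraps, batch_size):
        stop = min(start + batch_size, n_bootstraps)
        # Placeholder no-drift statistics; the real detector simulates
        # CVM test statistics over bootstrapped reference streams.
        stats[start:stop] = rng.standard_normal((stop - start, stream_len)) ** 2
    return np.quantile(stats, 1.0 - beta, axis=0)


thresholds = configure_thresholds(n_bootstraps=1000, batch_size=64,
                                  ert=50.0, d=1, stream_len=20)
```

A larger n_bootstraps tightens the quantile estimate (better ERT targeting) at the cost of configuration time; batch_size only trades memory for run-time and does not affect the result beyond sampling order.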
- online_state_keys: Tuple[str, ...] = ('t', 'test_stats', 'drift_preds', 'xs', 'ids_ref_wins', 'ids_wins_ref', 'ids_wins_wins')
- score(x_t)[source]
Compute the test-statistic (CVM) between the reference window(s) and test window. If a given test-window is not yet full then a test-statistic of np.nan is returned for that window.
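The behaviour described here can be approximated with SciPy's two-sample CVM test. A rough sketch, assuming a single univariate feature (the function `score_window` and its signature are illustrative, not the detector's actual implementation):

```python
import numpy as np
from scipy.stats import cramervonmises_2samp


def score_window(x_ref: np.ndarray, window: list, window_size: int) -> float:
    """Two-sample CVM statistic between the reference data and the current
    test window; np.nan while the window is not yet full."""
    if len(window) < window_size:
        return float("nan")
    return cramervonmises_2samp(x_ref, np.asarray(window)).statistic


rng = np.random.default_rng(0)
x_ref = rng.standard_normal(100)
partial = score_window(x_ref, [0.1, 0.2], window_size=10)          # nan
full = score_window(x_ref, list(rng.standard_normal(10)), 10)      # finite stat
```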
- thresholds: np.ndarray