alibi_detect.saving.schemas module

Pydantic models used by validate_config() to validate configuration dictionaries. The resolved kwarg of validate_config() determines whether the unresolved or resolved pydantic models are used:

  • The unresolved models expect any artefacts specified within the config to not yet have been resolved; they are still string references to local filepaths or registries (e.g. x_ref = ‘x_ref.npy’).

  • The resolved models expect all artefacts to have been resolved into runtime objects. For example, x_ref should have been resolved into an np.ndarray.
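
A minimal sketch of how the two families of models are exercised via validate_config(). This assumes validate_config can be imported from alibi_detect.saving and that the dictionaries below contain only the fields required by the MMDDrift schemas documented later on this page:

import numpy as np

from alibi_detect.saving import validate_config

# Unresolved config: x_ref is still a string reference to a local .npy file.
unresolved_cfg = {
    "name": "MMDDrift",
    "x_ref": "x_ref.npy",
    "p_val": 0.05,
}
validate_config(unresolved_cfg)  # validated against the unresolved MMDDriftConfig

# Resolved config: x_ref has been loaded into a runtime np.ndarray.
resolved_cfg = {
    "name": "MMDDrift",
    "x_ref": np.random.randn(100, 5),
    "p_val": 0.05,
}
validate_config(resolved_cfg, resolved=True)  # validated against MMDDriftConfigResolved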

Note

For detector pydantic models, the fields match the corresponding detector’s args/kwargs. Refer to the detector’s API docs for a full description of each arg/kwarg.

class alibi_detect.saving.schemas.CVMDriftConfig(*args, **kwargs)[source]

Bases: DriftDetectorConfig

Unresolved schema for the CVMDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the CVMDrift documentation for a description of each field.

correction: Literal['bonferroni', 'fdr'] = 'bonferroni'
n_features: int | None = None
p_val: float = 0.05
preprocess_at_init: bool = True
update_x_ref: Dict[str, int] | None = None
class alibi_detect.saving.schemas.CVMDriftConfigResolved(*args, **kwargs)[source]

Bases: DriftDetectorConfigResolved

Resolved schema for the CVMDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the CVMDrift documentation for a description of each field.

correction: str = 'bonferroni'
n_features: int | None = None
p_val: float = 0.05
preprocess_at_init: bool = True
update_x_ref: Dict[str, int] | None = None
class alibi_detect.saving.schemas.CVMDriftOnlineConfig(*args, **kwargs)[source]

Bases: DriftDetectorConfig

Unresolved schema for the CVMDriftOnline detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the CVMDriftOnline documentation for a description of each field.

batch_size: int = 64
ert: float
n_bootstraps: int = 10000
n_features: int | None = None
verbose: bool = True
window_sizes: List[int]
class alibi_detect.saving.schemas.CVMDriftOnlineConfigResolved(*args, **kwargs)[source]

Bases: DriftDetectorConfigResolved

Resolved schema for the CVMDriftOnline detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the CVMDriftOnline documentation for a description of each field.

batch_size: int = 64
ert: float
n_bootstraps: int = 10000
n_features: int | None = None
verbose: bool = True
window_sizes: List[int]
class alibi_detect.saving.schemas.ChiSquareDriftConfig(*args, **kwargs)[source]

Bases: DriftDetectorConfig

Unresolved schema for the ChiSquareDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the ChiSquareDrift documentation for a description of each field.

categories_per_feature: Dict[int, int | List[int]] = None
correction: Literal['bonferroni', 'fdr'] = 'bonferroni'
n_features: int | None = None
p_val: float = 0.05
preprocess_at_init: bool = True
update_x_ref: Dict[str, int] | None = None
class alibi_detect.saving.schemas.ChiSquareDriftConfigResolved(*args, **kwargs)[source]

Bases: DriftDetectorConfigResolved

Resolved schema for the ChiSquareDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the ChiSquareDrift documentation for a description of each field.

categories_per_feature: Dict[int, int | List[int]] = None
correction: str = 'bonferroni'
n_features: int | None = None
p_val: float = 0.05
preprocess_at_init: bool = True
update_x_ref: Dict[str, int] | None = None
class alibi_detect.saving.schemas.ClassifierDriftConfig(*args, **kwargs)[source]

Bases: DriftDetectorConfig

Unresolved schema for the ClassifierDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the ClassifierDrift documentation for a description of each field.

backend: Literal['tensorflow', 'pytorch', 'sklearn'] = 'tensorflow'
batch_size: int = 32
binarize_preds: bool = False
calibration_kwargs: dict | None = None
dataloader: str | None = None
dataset: str | None = None
device: Literal['cpu', 'cuda'] | None = None
epochs: int = 3
learning_rate: float = 0.001
model: str | ModelConfig
n_folds: int | None = None
optimizer: str | OptimizerConfig | None = None
p_val: float = 0.05
preds_type: Literal['probs', 'logits'] = 'probs'
preprocess_at_init: bool = True
preprocess_batch_fn: str | None = None
reg_loss_fn: str | None = None
retrain_from_scratch: bool = True
seed: int = 0
train_kwargs: dict | None = None
train_size: float | None = 0.75
update_x_ref: Dict[str, int] | None = None
use_calibration: bool = False
use_oob: bool = False
verbose: int = 0
class alibi_detect.saving.schemas.ClassifierDriftConfigResolved(*args, **kwargs)[source]

Bases: DriftDetectorConfigResolved

Resolved schema for the ClassifierDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the ClassifierDrift documentation for a description of each field.

backend: Literal['tensorflow', 'pytorch', 'sklearn'] = 'tensorflow'
batch_size: int = 32
binarize_preds: bool = False
calibration_kwargs: dict | None = None
dataloader: Callable | None = None
dataset: Callable | None = None
device: Literal['cpu', 'cuda'] | None = None
epochs: int = 3
learning_rate: float = 0.001
model: SupportedModel | None = None
n_folds: int | None = None
optimizer: SupportedOptimizer | None = None
p_val: float = 0.05
preds_type: Literal['probs', 'logits'] = 'probs'
preprocess_at_init: bool = True
preprocess_batch_fn: Callable | None = None
reg_loss_fn: Callable | None = None
retrain_from_scratch: bool = True
seed: int = 0
train_kwargs: dict | None = None
train_size: float | None = 0.75
update_x_ref: Dict[str, int] | None = None
use_calibration: bool = False
use_oob: bool = False
verbose: int = 0
class alibi_detect.saving.schemas.ClassifierUncertaintyDriftConfig(*args, **kwargs)[source]

Bases: DetectorConfig

Unresolved schema for the ClassifierUncertaintyDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the ClassifierUncertaintyDrift documentation for a description of each field.

backend: Literal['tensorflow', 'pytorch'] = 'tensorflow'
batch_size: int = 32
data_type: str | None = None
device: str | None = None
input_shape: tuple | None = None
margin_width: float = 0.1
max_len: int | None = None
model: str | ModelConfig
p_val: float = 0.05
preds_type: Literal['probs', 'logits'] = 'probs'
preprocess_batch_fn: str | None = None
tokenizer: str | TokenizerConfig | None = None
uncertainty_type: Literal['entropy', 'margin'] = 'entropy'
update_x_ref: Dict[str, int] | None = None
x_ref: str
x_ref_preprocessed: bool = False
class alibi_detect.saving.schemas.ClassifierUncertaintyDriftConfigResolved(*args, **kwargs)[source]

Bases: DetectorConfig

Resolved schema for the ClassifierUncertaintyDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the ClassifierUncertaintyDrift documentation for a description of each field.

backend: Literal['tensorflow', 'pytorch'] = 'tensorflow'
batch_size: int = 32
data_type: str | None = None
device: str | None = None
input_shape: tuple | None = None
margin_width: float = 0.1
max_len: int | None = None
model: SupportedModel | None = None
p_val: float = 0.05
preds_type: Literal['probs', 'logits'] = 'probs'
preprocess_batch_fn: Callable | None = None
tokenizer: str | Callable | None = None
uncertainty_type: Literal['entropy', 'margin'] = 'entropy'
update_x_ref: Dict[str, int] | None = None
x_ref: ndarray | list
x_ref_preprocessed: bool = False
class alibi_detect.saving.schemas.ContextMMDDriftConfig(*args, **kwargs)[source]

Bases: DriftDetectorConfig

Unresolved schema for the ContextMMDDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the ContextMMDDrift documentation for a description of each field.

backend: Literal['tensorflow', 'pytorch'] = 'tensorflow'
batch_size: int | None = 256
c_kernel: str | KernelConfig | None = None
c_ref: str
device: Literal['cpu', 'cuda'] | None = None
n_folds: int = 5
n_permutations: int = 100
p_val: float = 0.05
preprocess_at_init: bool = True
prop_c_held: float = 0.25
update_ref: Dict[str, int] | None = None
verbose: bool = False
x_kernel: str | KernelConfig | None = None
class alibi_detect.saving.schemas.ContextMMDDriftConfigResolved(*args, **kwargs)[source]

Bases: DriftDetectorConfigResolved

Resolved schema for the ContextMMDDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the ContextMMDDrift documentation for a description of each field.

backend: Literal['tensorflow', 'pytorch'] = 'tensorflow'
batch_size: int | None = 256
c_kernel: Callable | None = None
c_ref: ndarray
device: Literal['cpu', 'cuda'] | None = None
n_folds: int = 5
n_permutations: int = 100
p_val: float = 0.05
preprocess_at_init: bool = True
prop_c_held: float = 0.25
update_ref: Dict[str, int] | None = None
verbose: bool = False
x_kernel: Callable | None = None
class alibi_detect.saving.schemas.CustomBaseModel(*args, **kwargs)[source]

Bases: BaseModel

Base pydantic model schema. The default pydantic settings are set here.

class Config[source]

Bases: object

arbitrary_types_allowed = True
extra = 'forbid'
class alibi_detect.saving.schemas.CustomBaseModelWithKwargs(*args, **kwargs)[source]

Bases: BaseModel

Base pydantic model schema. The default pydantic settings are set here.

class Config[source]

Bases: object

arbitrary_types_allowed = True
extra = 'allow'
class alibi_detect.saving.schemas.DeepKernelConfig(*args, **kwargs)[source]

Bases: CustomBaseModel

Unresolved schema for a DeepKernel.

Examples

A DeepKernel, with a trainable GaussianRBF kernel applied to the projected inputs and a custom serialized kernel applied to the raw inputs:

[kernel]
eps = 0.01

[kernel.kernel_a]
src = "@utils.tensorflow.kernels.GaussianRBF"
trainable = true

[kernel.kernel_b]
src = "custom_kernel.dill"
sigma = [ 1.2,]
trainable = false

[kernel.proj]
src = "model/"
eps: float | str = 'trainable'

The proportion (in [0,1]) of weight to assign to the kernel applied to raw inputs. This can either be specified or set to ‘trainable’. Only relevant if kernel_b is not None.

kernel_a: str | KernelConfig = '@utils.tensorflow.kernels.GaussianRBF'

The kernel to apply to the projected inputs. Defaults to a GaussianRBF with trainable bandwidth.

kernel_b: str | KernelConfig | None = '@utils.tensorflow.kernels.GaussianRBF'

The kernel to apply to the raw inputs. Defaults to a GaussianRBF with trainable bandwidth. Set to None in order to use only the deep component (i.e. eps=0).

proj: str | ModelConfig

The projection to be applied to the inputs before applying kernel_a. This should be a TensorFlow or PyTorch model, specified as an object registry reference, or a ModelConfig.
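
As a rough Python counterpart to the TOML example above, a sketch instantiating the schema directly, simplified so that the sub-kernels remain unresolved string references:

from alibi_detect.saving.schemas import DeepKernelConfig

# proj points at a saved projection model directory, kernel_b at a serialized
# custom kernel; kernel_a keeps its default GaussianRBF registry reference.
kernel_cfg = DeepKernelConfig(
    proj="model/",
    kernel_b="custom_kernel.dill",
    eps=0.01,
)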

class alibi_detect.saving.schemas.DetectorConfig(*args, **kwargs)[source]

Bases: CustomBaseModel

Base detector config schema. Only fields universal across all detectors are defined here.

meta: MetaData | None = None

Config metadata. Should not be edited.

name: str

Name of the detector e.g. MMDDrift.

class alibi_detect.saving.schemas.DriftDetectorConfig(*args, **kwargs)[source]

Bases: DetectorConfig

Unresolved base schema for drift detectors.

data_type: str | None = None

Specify data type added to the metadata. E.g. ‘tabular’ or ‘image’.

input_shape: tuple | None = None

Optionally pass the shape of the input data. Used when saving detectors.

preprocess_fn: str | PreprocessConfig | None = None

Function to preprocess the data before computing the data drift metrics. A string referencing a serialized function in .dill format, an object registry reference, or a PreprocessConfig.

x_ref: str

Data used as reference distribution. Should be a string referencing a NumPy .npy file.

x_ref_preprocessed: bool = False

Whether the given reference data x_ref has been preprocessed yet. If x_ref_preprocessed=True, only the test data x will be preprocessed at prediction time. If x_ref_preprocessed=False, the reference data will also be preprocessed.

class alibi_detect.saving.schemas.DriftDetectorConfigResolved(*args, **kwargs)[source]

Bases: DetectorConfig

Resolved base schema for drift detectors.

data_type: str | None = None

Specify data type added to the metadata. E.g. ‘tabular’ or ‘image’.

input_shape: tuple | None = None

Optionally pass the shape of the input data. Used when saving detectors.

preprocess_fn: Callable | None = None

Function to preprocess the data before computing the data drift metrics.

x_ref: ndarray | list

Data used as reference distribution.

x_ref_preprocessed: bool = False

Whether the given reference data x_ref has been preprocessed yet. If x_ref_preprocessed=True, only the test data x will be preprocessed at prediction time. If x_ref_preprocessed=False, the reference data will also be preprocessed.

class alibi_detect.saving.schemas.EmbeddingConfig(*args, **kwargs)[source]

Bases: CustomBaseModel

Unresolved schema for text embedding models. Currently, only pre-trained HuggingFace transformer models are supported.

Examples

Using the hidden states at the output of each layer of a TensorFlow BERT base model as text embeddings:

[embedding]
flavour = "tensorflow"
src = "bert-base-cased"
type = "hidden_state"
layers = [-1, -2, -3, -4, -5, -6, -7, -8]

flavour: Literal['tensorflow', 'pytorch'] = 'tensorflow'

Whether the embedding model is a tensorflow or pytorch model.

layers: List[int] | None = None

List specifying the hidden layers to be used to extract the embedding.

src: str

Model name e.g. “bert-base-cased”, or a filepath to directory storing the model to extract embeddings from (relative to the config.toml file, or absolute).

type: Literal['pooler_output', 'last_hidden_state', 'hidden_state', 'hidden_state_cls']

The type of embedding to be loaded. See embedding_type in TransformerEmbedding.

class alibi_detect.saving.schemas.FETDriftConfig(*args, **kwargs)[source]

Bases: DriftDetectorConfig

Unresolved schema for the FETDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the FETDrift documentation for a description of each field.

alternative: Literal['two-sided', 'greater', 'less'] = 'two-sided'
correction: Literal['bonferroni', 'fdr'] = 'bonferroni'
n_features: int | None = None
p_val: float = 0.05
preprocess_at_init: bool = True
update_x_ref: Dict[str, int] | None = None
class alibi_detect.saving.schemas.FETDriftConfigResolved(*args, **kwargs)[source]

Bases: DriftDetectorConfigResolved

Resolved schema for the FETDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the FETDrift documentation for a description of each field.

alternative: Literal['two-sided', 'greater', 'less'] = 'two-sided'
correction: Literal['bonferroni', 'fdr'] = 'bonferroni'
n_features: int | None = None
p_val: float = 0.05
preprocess_at_init: bool = True
update_x_ref: Dict[str, int] | None = None
class alibi_detect.saving.schemas.FETDriftOnlineConfig(*args, **kwargs)[source]

Bases: DriftDetectorConfig

Unresolved schema for the FETDriftOnline detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the FETDriftOnline documentation for a description of each field.

alternative: Literal['greater', 'less'] = 'greater'
ert: float
lam: float = 0.99
n_bootstraps: int = 10000
n_features: int | None = None
t_max: int | None = None
verbose: bool = True
window_sizes: List[int]
class alibi_detect.saving.schemas.FETDriftOnlineConfigResolved(*args, **kwargs)[source]

Bases: DriftDetectorConfigResolved

Resolved schema for the FETDriftOnline detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the FETDriftOnline documentation for a description of each field.

alternative: Literal['greater', 'less'] = 'greater'
ert: float
lam: float = 0.99
n_bootstraps: int = 10000
n_features: int | None = None
t_max: int | None = None
verbose: bool = True
window_sizes: List[int]
class alibi_detect.saving.schemas.KSDriftConfig(*args, **kwargs)[source]

Bases: DriftDetectorConfig

Unresolved schema for the KSDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the KSDrift documentation for a description of each field.

alternative: Literal['two-sided', 'greater', 'less'] = 'two-sided'
correction: Literal['bonferroni', 'fdr'] = 'bonferroni'
n_features: int | None = None
p_val: float = 0.05
preprocess_at_init: bool = True
update_x_ref: Dict[str, int] | None = None
class alibi_detect.saving.schemas.KSDriftConfigResolved(*args, **kwargs)[source]

Bases: DriftDetectorConfigResolved

Resolved schema for the KSDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the KSDrift documentation for a description of each field.

alternative: Literal['two-sided', 'greater', 'less'] = 'two-sided'
correction: Literal['bonferroni', 'fdr'] = 'bonferroni'
n_features: int | None = None
p_val: float = 0.05
preprocess_at_init: bool = True
update_x_ref: Dict[str, int] | None = None
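
To make the unresolved/resolved distinction concrete for this detector, a sketch instantiating both KSDrift schemas directly (only the required fields plus p_val are shown):

import numpy as np

from alibi_detect.saving.schemas import KSDriftConfig, KSDriftConfigResolved

# Unresolved: x_ref is a string reference to a NumPy .npy file.
unresolved = KSDriftConfig(name="KSDrift", x_ref="x_ref.npy", p_val=0.05)

# Resolved: x_ref has already been loaded into an np.ndarray.
resolved = KSDriftConfigResolved(name="KSDrift", x_ref=np.zeros((100, 3)), p_val=0.05)
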
class alibi_detect.saving.schemas.KernelConfig(*args, **kwargs)[source]

Bases: CustomBaseModelWithKwargs

Unresolved schema for kernels, to be passed to a detector’s kernel kwarg.

If src specifies a GaussianRBF kernel, the sigma, trainable and init_sigma_fn fields are passed to it. Otherwise, all fields except src are passed as kwargs.

Examples

A GaussianRBF kernel, with three different bandwidths:

[kernel]
src = "@alibi_detect.utils.tensorflow.GaussianRBF"
trainable = false
sigma = [0.1, 0.2, 0.3]

A serialized kernel with keyword arguments passed:

[kernel]
src = "mykernel.dill"
sigma = 0.42
custom_setting = "xyz"

flavour: Literal['tensorflow', 'pytorch', 'keops']

Whether the kernel is a tensorflow, pytorch or keops kernel.

init_sigma_fn: str | None = None

Function used to compute the bandwidth sigma. Used when sigma is to be inferred. The function’s signature should match sigma_median(). If None, it is set to sigma_median().

sigma: float | List[float] | None = None

Bandwidth used for the kernel. Need not be specified if the bandwidth is to be inferred or trained. Multiple values can be passed, in which case the kernel is evaluated with each and the results averaged.

src: str

A string referencing a filepath to a serialized kernel in .dill format, or an object registry reference.

trainable: bool = False

Whether or not to track gradients w.r.t. sigma to allow it to be trained.
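
The first TOML example above, sketched as a direct instantiation of the schema. flavour is supplied explicitly here since it has no default in the listing above:

from alibi_detect.saving.schemas import KernelConfig

# GaussianRBF registry reference with three fixed bandwidths and no training.
kernel_cfg = KernelConfig(
    src="@alibi_detect.utils.tensorflow.GaussianRBF",
    flavour="tensorflow",
    trainable=False,
    sigma=[0.1, 0.2, 0.3],
)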

class alibi_detect.saving.schemas.LSDDDriftConfig(*args, **kwargs)[source]

Bases: DriftDetectorConfig

Unresolved schema for the LSDDDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the LSDDDrift documentation for a description of each field.

backend: Literal['tensorflow', 'pytorch'] = 'tensorflow'
device: Literal['cpu', 'cuda'] | None = None
lambda_rd_max: float = 0.2
n_kernel_centers: int | None = None
n_permutations: int = 100
p_val: float = 0.05
preprocess_at_init: bool = True
sigma: NDArray[float32] | None = None
update_x_ref: Dict[str, int] | None = None
class alibi_detect.saving.schemas.LSDDDriftConfigResolved(*args, **kwargs)[source]

Bases: DriftDetectorConfigResolved

Resolved schema for the LSDDDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the LSDDDrift documentation for a description of each field.

backend: Literal['tensorflow', 'pytorch'] = 'tensorflow'
device: Literal['cpu', 'cuda'] | None = None
lambda_rd_max: float = 0.2
n_kernel_centers: int | None = None
n_permutations: int = 100
p_val: float = 0.05
preprocess_at_init: bool = True
sigma: NDArray[float32] | None = None
update_x_ref: Dict[str, int] | None = None
class alibi_detect.saving.schemas.LSDDDriftOnlineConfig(*args, **kwargs)[source]

Bases: DriftDetectorConfig

Unresolved schema for the LSDDDriftOnline detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the LSDDDriftOnline documentation for a description of each field.

backend: Literal['tensorflow', 'pytorch'] = 'tensorflow'
device: Literal['cpu', 'cuda'] | None = None
ert: float
lambda_rd_max: float = 0.2
n_bootstraps: int = 1000
n_kernel_centers: int | None = None
sigma: ndarray | None = None
verbose: bool = True
window_size: int
class alibi_detect.saving.schemas.LSDDDriftOnlineConfigResolved(*args, **kwargs)[source]

Bases: DriftDetectorConfigResolved

Resolved schema for the LSDDDriftOnline detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the LSDDDriftOnline documentation for a description of each field.

backend: Literal['tensorflow', 'pytorch'] = 'tensorflow'
device: Literal['cpu', 'cuda'] | None = None
ert: float
lambda_rd_max: float = 0.2
n_bootstraps: int = 1000
n_kernel_centers: int | None = None
sigma: ndarray | None = None
verbose: bool = True
window_size: int
class alibi_detect.saving.schemas.LearnedKernelDriftConfig(*args, **kwargs)[source]

Bases: DriftDetectorConfig

Unresolved schema for the LearnedKernelDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the LearnedKernelDrift documentation for a description of each field.

backend: Literal['tensorflow', 'pytorch', 'keops'] = 'tensorflow'
batch_size: int = 32
batch_size_permutations: int = 1000000
batch_size_predict: int = 1000000
dataloader: str | None = None
dataset: str | None = None
device: Literal['cpu', 'cuda'] | None = None
epochs: int = 3
kernel: str | DeepKernelConfig
learning_rate: float = 0.001
n_permutations: int = 100
num_workers: int = 0
optimizer: str | OptimizerConfig | None = None
p_val: float = 0.05
preprocess_at_init: bool = True
preprocess_batch_fn: str | None = None
reg_loss_fn: str | None = None
retrain_from_scratch: bool = True
train_kwargs: dict | None = None
train_size: float | None = 0.75
update_x_ref: Dict[str, int] | None = None
var_reg: float = 1e-05
verbose: int = 0
class alibi_detect.saving.schemas.LearnedKernelDriftConfigResolved(*args, **kwargs)[source]

Bases: DriftDetectorConfigResolved

Resolved schema for the LearnedKernelDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the LearnedKernelDrift documentation for a description of each field.

backend: Literal['tensorflow', 'pytorch', 'keops'] = 'tensorflow'
batch_size: int = 32
batch_size_permutations: int = 1000000
batch_size_predict: int = 1000000
dataloader: Callable | None = None
dataset: Callable | None = None
device: Literal['cpu', 'cuda'] | None = None
epochs: int = 3
kernel: Callable | None = None
learning_rate: float = 0.001
n_permutations: int = 100
num_workers: int = 0
optimizer: SupportedOptimizer | None = None
p_val: float = 0.05
preprocess_at_init: bool = True
preprocess_batch_fn: Callable | None = None
reg_loss_fn: Callable | None = None
retrain_from_scratch: bool = True
train_kwargs: dict | None = None
train_size: float | None = 0.75
update_x_ref: Dict[str, int] | None = None
var_reg: float = 1e-05
verbose: int = 0
class alibi_detect.saving.schemas.MMDDriftConfig(*args, **kwargs)[source]

Bases: DriftDetectorConfig

Unresolved schema for the MMDDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the MMDDrift documentation for a description of each field.

backend: Literal['tensorflow', 'pytorch', 'keops'] = 'tensorflow'
batch_size_permutations: int = 1000000
configure_kernel_from_x_ref: bool = True
device: Literal['cpu', 'cuda'] | None = None
kernel: str | KernelConfig | None = None
n_permutations: int = 100
p_val: float = 0.05
preprocess_at_init: bool = True
sigma: NDArray[float32] | None = None
update_x_ref: Dict[str, int] | None = None
class alibi_detect.saving.schemas.MMDDriftConfigResolved(*args, **kwargs)[source]

Bases: DriftDetectorConfigResolved

Resolved schema for the MMDDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the MMDDrift documentation for a description of each field.

backend: Literal['tensorflow', 'pytorch', 'keops'] = 'tensorflow'
batch_size_permutations: int = 1000000
configure_kernel_from_x_ref: bool = True
device: Literal['cpu', 'cuda'] | None = None
kernel: Callable | None = None
n_permutations: int = 100
p_val: float = 0.05
preprocess_at_init: bool = True
sigma: NDArray[float32] | None = None
update_x_ref: Dict[str, int] | None = None
class alibi_detect.saving.schemas.MMDDriftOnlineConfig(*args, **kwargs)[source]

Bases: DriftDetectorConfig

Unresolved schema for the MMDDriftOnline detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the MMDDriftOnline documentation for a description of each field.

backend: Literal['tensorflow', 'pytorch'] = 'tensorflow'
device: Literal['cpu', 'cuda'] | None = None
ert: float
kernel: str | KernelConfig | None = None
n_bootstraps: int = 1000
sigma: ndarray | None = None
verbose: bool = True
window_size: int
class alibi_detect.saving.schemas.MMDDriftOnlineConfigResolved(*args, **kwargs)[source]

Bases: DriftDetectorConfigResolved

Resolved schema for the MMDDriftOnline detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the MMDDriftOnline documentation for a description of each field.

backend: Literal['tensorflow', 'pytorch'] = 'tensorflow'
device: Literal['cpu', 'cuda'] | None = None
ert: float
kernel: Callable | None = None
n_bootstraps: int = 1000
sigma: ndarray | None = None
verbose: bool = True
window_size: int
class alibi_detect.saving.schemas.MetaData(*args, **kwargs)[source]

Bases: CustomBaseModel

version: str
version_warning: bool = False
class alibi_detect.saving.schemas.ModelConfig(*args, **kwargs)[source]

Bases: CustomBaseModel

Unresolved schema for (ML) models. Note that the model “backend”, e.g. ‘tensorflow’, ‘pytorch’ or ‘sklearn’, is set by backend in DetectorConfig.

Examples

A TensorFlow classifier model stored in the model/ directory, with the softmax layer extracted:

[model]
flavour = "tensorflow"
src = "model/"
layer = -1

custom_objects: dict | None = None

Dictionary of custom objects. Passed to the tensorflow load_model function. This can be used to pass custom registered functions and classes to a model.

flavour: Literal['tensorflow', 'pytorch', 'sklearn']

Whether the model is a tensorflow, pytorch or sklearn model. XGBoost models following the scikit-learn API are also included under sklearn.

layer: int | None = None

Optional index of hidden layer to extract. If not None, a HiddenOutput model (TensorFlow or PyTorch, dependent on flavour) is returned. Only applies to ‘tensorflow’ and ‘pytorch’ models.

src: str

Filepath to directory storing the model (relative to the config.toml file, or absolute). At present, TensorFlow models must be stored in H5 format.

class alibi_detect.saving.schemas.OptimizerConfig(*args, **kwargs)[source]

Bases: CustomBaseModelWithKwargs

Unresolved schema for optimizers. The optimizer dictionary has two possible formats:

  1. A configuration dictionary compatible with tf.keras.optimizers.deserialize. For backend=’tensorflow’ only.

  2. A dictionary containing only class_name, where this is a string referencing the optimizer name e.g. optimizer.class_name = ‘Adam’. In this case, the tensorflow or pytorch optimizer class of the same name is loaded. For backend=’tensorflow’ and backend=’pytorch’.

Examples

A TensorFlow Adam optimizer:

[optimizer]
class_name = "Adam"

[optimizer.config]
name = "Adam"
learning_rate = 0.001
decay = 0.0

A PyTorch Adam optimizer:

[optimizer]
class_name = "Adam"
class_name: str
config: Dict[str, Any] | None = None
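
The two formats described above, sketched as direct instantiations of the schema:

from alibi_detect.saving.schemas import OptimizerConfig

# Format 1: full configuration dictionary (tensorflow backend only).
tf_optimizer = OptimizerConfig(
    class_name="Adam",
    config={"name": "Adam", "learning_rate": 0.001, "decay": 0.0},
)

# Format 2: class name only (tensorflow or pytorch backends).
minimal_optimizer = OptimizerConfig(class_name="Adam")
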
class alibi_detect.saving.schemas.PreprocessConfig(*args, **kwargs)[source]

Bases: CustomBaseModel

Unresolved schema for drift detector preprocess functions, to be passed to a detector’s preprocess_fn kwarg. Once loaded, the function is wrapped in a partial(), to be evaluated within the detector.

If src specifies a generic Python function, the dictionary specified by kwargs is passed to it. Otherwise, if src specifies preprocess_drift() (src=’@cd.tensorflow.preprocess.preprocess_drift’), all fields (except kwargs) are passed to it.

Examples

Preprocessor with a model, text embedding and tokenizer passed to preprocess_drift():

[preprocess_fn]
src = "@cd.tensorflow.preprocess.preprocess_drift"
batch_size = 32
max_len = 100
tokenizer.src = "tokenizer/"  # TokenizerConfig

[preprocess_fn.model]
# ModelConfig
src = "model/"

[preprocess_fn.embedding]
# EmbeddingConfig
src = "embedding/"
type = "hidden_state"
layers = [-1, -2, -3, -4, -5, -6, -7, -8]

A serialized Python function with keyword arguments passed to it:

[preprocess_fn]
src = 'myfunction.dill'
kwargs = {'kwarg1'=0.7, 'kwarg2'=true}

batch_size: int | None = 10000000000

Batch size used during prediction.

device: Literal['cpu', 'cuda'] | None = None

Device type used. The default None tries to use the GPU and falls back on CPU if needed. Only relevant if src=’@cd.torch.preprocess.preprocess_drift’

dtype: str = 'np.float32'

Model output type, e.g. ‘tf.float32’

embedding: str | EmbeddingConfig | None = None

A text embedding model. Either a string referencing a HuggingFace transformer model name, an object registry reference, or an EmbeddingConfig. If model=None, the embedding is passed to preprocess_drift() as model. Otherwise, the model is chained to the output of the embedding as an additional preprocessing step.

kwargs: dict = {}

Dictionary of keyword arguments to be passed to the function specified by src. Only used if src specifies a generic Python function.

max_len: int | None = None

Optional max token length for text drift.

model: str | ModelConfig | None = None

Model used for preprocessing. Either an object registry reference, or a ModelConfig.

preprocess_batch_fn: str | None = None

Optional batch preprocessing function. For example to convert a list of objects to a batch which can be processed by the model.

src: str = '@cd.tensorflow.preprocess.preprocess_drift'

The preprocessing function. A string referencing a filepath to a serialized function in dill format, or an object registry reference.

tokenizer: str | TokenizerConfig | None = None

Optional tokenizer for text drift. Either a string referencing a HuggingFace tokenizer model name, or a TokenizerConfig.
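
A sketch of the second example above (a serialized generic Python function) instantiated directly; myfunction.dill and its keyword arguments are hypothetical:

from alibi_detect.saving.schemas import PreprocessConfig

# src references a dill-serialized function; kwargs are forwarded to it once
# the preprocessing function has been resolved.
preprocess_cfg = PreprocessConfig(
    src="myfunction.dill",
    kwargs={"kwarg1": 0.7, "kwarg2": True},
)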

class alibi_detect.saving.schemas.RegressorUncertaintyDriftConfig(*args, **kwargs)[source]

Bases: DetectorConfig

Unresolved schema for the RegressorUncertaintyDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the RegressorUncertaintyDrift documentation for a description of each field.

backend: Literal['tensorflow', 'pytorch'] = 'tensorflow'
batch_size: int = 32
data_type: str | None = None
device: str | None = None
input_shape: tuple | None = None
max_len: int | None = None
model: str | ModelConfig
n_evals: int = 25
p_val: float = 0.05
preprocess_batch_fn: str | None = None
tokenizer: str | TokenizerConfig | None = None
uncertainty_type: Literal['mc_dropout', 'ensemble'] = 'mc_dropout'
update_x_ref: Dict[str, int] | None = None
x_ref: str
x_ref_preprocessed: bool = False
class alibi_detect.saving.schemas.RegressorUncertaintyDriftConfigResolved(*args, **kwargs)[source]

Bases: DetectorConfig

Resolved schema for the RegressorUncertaintyDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the RegressorUncertaintyDrift documentation for a description of each field.

backend: Literal['tensorflow', 'pytorch'] = 'tensorflow'
batch_size: int = 32
data_type: str | None = None
device: str | None = None
input_shape: tuple | None = None
max_len: int | None = None
model: SupportedModel | None = None
n_evals: int = 25
p_val: float = 0.05
preprocess_batch_fn: Callable | None = None
tokenizer: Callable | None = None
uncertainty_type: Literal['mc_dropout', 'ensemble'] = 'mc_dropout'
update_x_ref: Dict[str, int] | None = None
x_ref: ndarray | list
x_ref_preprocessed: bool = False
class alibi_detect.saving.schemas.SpotTheDiffDriftConfig(*args, **kwargs)[source]

Bases: DriftDetectorConfig

Unresolved schema for the SpotTheDiffDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the SpotTheDiffDrift documentation for a description of each field.

backend: Literal['tensorflow', 'pytorch'] = 'tensorflow'
batch_size: int = 32
binarize_preds: bool = False
dataloader: str | None = None
dataset: str | None = None
device: Literal['cpu', 'cuda'] | None = None
epochs: int = 3
initial_diffs: str | None = None
kernel: str | KernelConfig | None = None
l1_reg: float = 0.01
learning_rate: float = 0.001
n_diffs: int = 1
n_folds: int | None = None
optimizer: str | OptimizerConfig | None = None
p_val: float = 0.05
preprocess_batch_fn: str | None = None
retrain_from_scratch: bool = True
seed: int = 0
train_kwargs: dict | None = None
train_size: float | None = 0.75
verbose: int = 0
class alibi_detect.saving.schemas.SpotTheDiffDriftConfigResolved(*args, **kwargs)[source]

Bases: DriftDetectorConfigResolved

Resolved schema for the SpotTheDiffDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the SpotTheDiffDrift documentation for a description of each field.

backend: Literal['tensorflow', 'pytorch'] = 'tensorflow'
batch_size: int = 32
binarize_preds: bool = False
dataloader: Callable | None = None
dataset: Callable | None = None
device: Literal['cpu', 'cuda'] | None = None
epochs: int = 3
initial_diffs: ndarray | None = None
kernel: Callable | None = None
l1_reg: float = 0.01
learning_rate: float = 0.001
n_diffs: int = 1
n_folds: int | None = None
optimizer: SupportedOptimizer | None = None
p_val: float = 0.05
preprocess_batch_fn: Callable | None = None
retrain_from_scratch: bool = True
seed: int = 0
train_kwargs: dict | None = None
train_size: float | None = 0.75
verbose: int = 0
class alibi_detect.saving.schemas.SupportedModel[source]

Bases: object

Pydantic custom type to check the model is one of the supported types (conditional on what optional deps are installed).

classmethod validate_model(model, values)[source]

Return type: Any

class alibi_detect.saving.schemas.SupportedOptimizer[source]

Bases: object

Pydantic custom type to check the optimizer is one of the supported types (conditional on what optional deps are installed).

classmethod validate_optimizer(optimizer, values)[source]

Return type: Any

class alibi_detect.saving.schemas.TabularDriftConfig(*args, **kwargs)[source]

Bases: DriftDetectorConfig

Unresolved schema for the TabularDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the TabularDrift documentation for a description of each field.

alternative: Literal['two-sided', 'greater', 'less'] = 'two-sided'
categories_per_feature: Dict[int, int | List[int] | None] = None
correction: Literal['bonferroni', 'fdr'] = 'bonferroni'
n_features: int | None = None
p_val: float = 0.05
preprocess_at_init: bool = True
update_x_ref: Dict[str, int] | None = None
class alibi_detect.saving.schemas.TabularDriftConfigResolved(*args, **kwargs)[source]

Bases: DriftDetectorConfigResolved

Resolved schema for the TabularDrift detector.

Except for the name and meta fields, the fields match the detector’s args and kwargs. Refer to the TabularDrift documentation for a description of each field.

alternative: Literal['two-sided', 'greater', 'less'] = 'two-sided'
categories_per_feature: Dict[int, int | List[int] | None] = None
correction: Literal['bonferroni', 'fdr'] = 'bonferroni'
n_features: int | None = None
p_val: float = 0.05
preprocess_at_init: bool = True
update_x_ref: Dict[str, int] | None = None
class alibi_detect.saving.schemas.TokenizerConfig(*args, **kwargs)[source]

Bases: CustomBaseModel

Unresolved schema for text tokenizers. Currently, only pre-trained HuggingFace tokenizer models are supported.

Examples

BERT base tokenizer with additional keyword arguments passed to the HuggingFace from_pretrained() method:

[tokenizer]
src = "bert-base-cased"

[tokenizer.kwargs]
use_fast = false
force_download = true

kwargs: dict | None = {}

Dictionary of keyword arguments to pass to transformers.AutoTokenizer.from_pretrained().

src: str

Model name e.g. “bert-base-cased”, or a filepath to a directory storing the tokenizer model (relative to the config.toml file, or absolute). Passed to transformers.AutoTokenizer.from_pretrained().