alibi_detect.cd.pytorch.preprocess module
- class alibi_detect.cd.pytorch.preprocess.HiddenOutput(model, layer=-1, flatten=False)[source]
  Bases: Module
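  For example, HiddenOutput can wrap a trained classifier so that the output of an intermediate layer is used as the feature representation for drift detection. The sketch below assumes a simple, hypothetical nn.Sequential classifier; layer=-1 is intended to drop the final classification head.

  ```python
  import torch.nn as nn
  from alibi_detect.cd.pytorch.preprocess import HiddenOutput

  # Hypothetical trained classifier on flattened 28x28 inputs.
  clf = nn.Sequential(
      nn.Flatten(),
      nn.Linear(28 * 28, 128),
      nn.ReLU(),
      nn.Linear(128, 32),
      nn.ReLU(),
      nn.Linear(32, 10),  # classification head
  )

  # Keep the model up to (but not including) the final layer and use the
  # resulting 32-dimensional hidden representation as drift features.
  feature_extractor = HiddenOutput(clf, layer=-1, flatten=True)
  ```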
- class alibi_detect.cd.pytorch.preprocess.UAE(encoder_net=None, input_layer=None, shape=None, enc_dim=None)[source]
  Bases: Module
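  UAE wraps an untrained encoder that reduces the input dimensionality before the drift test. A minimal sketch, assuming image-like 28x28 inputs and an illustrative encoding dimension:

  ```python
  import torch.nn as nn
  from alibi_detect.cd.pytorch.preprocess import UAE

  enc_dim = 32  # illustrative encoding dimension

  # Untrained encoder projecting flattened 28x28 inputs down to `enc_dim` features.
  encoder_net = nn.Sequential(
      nn.Flatten(),
      nn.Linear(28 * 28, 256),
      nn.ReLU(),
      nn.Linear(256, enc_dim),
  )

  uae = UAE(encoder_net=encoder_net)
  ```

  If no encoder_net is passed, shape and enc_dim can be provided so that a default encoder is constructed instead.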
- alibi_detect.cd.pytorch.preprocess.preprocess_drift(x, model, device=None, preprocess_batch_fn=None, tokenizer=None, max_len=None, batch_size=10000000000, dtype=numpy.float32)[source]
  Prediction function used for the preprocessing step of a drift detector.
- Parameters:
  - x (Union[np.ndarray, list]) – Batch of instances.
  - model (Union[Module, Sequential]) – Model used for preprocessing.
  - device (Union[Literal['cuda', 'gpu', 'cpu'], torch.device, None]) – Device type used. The default tries to use the GPU and falls back on CPU if needed. Can be specified by passing either 'cuda', 'gpu', 'cpu' or an instance of torch.device.
  - preprocess_batch_fn (Optional[Callable]) – Optional batch preprocessing function. For example, to convert a list of objects to a batch which can be processed by the PyTorch model.
  - tokenizer (Optional[Callable]) – Optional tokenizer for text drift.
  - max_len (Optional[int]) – Optional max token length for text drift.
  - batch_size (int) – Batch size used during prediction.
  - dtype (Union[Type[np.generic], torch.dtype]) – Model output type, e.g. np.float32 or torch.float32.
- Return type:
  Union[np.ndarray, torch.Tensor]
- Returns:
Numpy array or torch tensor with predictions.
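  A common pattern is to bind the keyword arguments of preprocess_drift with functools.partial and pass the result to a detector as preprocess_fn. A minimal sketch, reusing the uae encoder from the sketch above and assuming illustrative reference data and batch size:

  ```python
  from functools import partial

  import numpy as np
  import torch

  from alibi_detect.cd import MMDDrift
  from alibi_detect.cd.pytorch.preprocess import preprocess_drift

  device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

  # Bind the preprocessing model and prediction settings.
  preprocess_fn = partial(
      preprocess_drift,
      model=uae.to(device),  # any torch.nn.Module / Sequential works
      device=device,
      batch_size=512,
      dtype=np.float32,
  )

  # Illustrative reference data; the detector applies preprocess_fn internally.
  x_ref = np.random.rand(100, 1, 28, 28).astype(np.float32)
  detector = MMDDrift(x_ref, backend='pytorch', preprocess_fn=preprocess_fn)
  ```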