class, layer=-1, flatten=False)[source]

Bases: Module

class, input_layer=None, shape=None, enc_dim=None)[source]

Bases: Module

Tensor, model, device=None, preprocess_batch_fn=None, tokenizer=None, max_len=None, batch_size=10000000000, dtype=<class 'numpy.float32'>)[source]

Prediction function used for preprocessing step of drift detector.

Parameters:

  • x (Union[ndarray, list]) – Batch of instances.

  • model (Union[Module, Sequential]) – Model used for preprocessing.

  • device (Union[Literal[‘cuda’, ‘gpu’, ‘cpu’], device, None]) – Device type used. The default tries to use the GPU and falls back on CPU if needed. Can be specified by passing either 'cuda', 'gpu', 'cpu' or an instance of torch.device.

  • preprocess_batch_fn (Optional[Callable]) – Optional batch preprocessing function, e.g. to convert a list of objects into a batch that can be processed by the PyTorch model.

  • tokenizer (Optional[Callable]) – Optional tokenizer for text drift.

  • max_len (Optional[int]) – Optional max token length for text drift.

  • batch_size (int) – Batch size used during prediction.

  • dtype (Union[Type[generic], dtype]) – Model output type, e.g. np.float32 or torch.float32.

Return type:

Union[ndarray, Tensor, tuple]


Returns:

Numpy array or torch tensor with predictions.
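The batched-prediction behaviour described above (split the input into chunks of `batch_size`, run the model on each chunk, cast to `dtype`, and concatenate) can be sketched as follows. This is an illustrative stand-in, not the library's implementation: `predict_batch` is a hypothetical helper, and a plain callable replaces the torch model so the sketch stays self-contained.

```python
import numpy as np

def predict_batch(x, model, batch_size=32, dtype=np.float32):
    # Split x into chunks of batch_size, run the model on each chunk,
    # cast each chunk's output to dtype, and concatenate the results.
    preds = []
    for i in range(0, len(x), batch_size):
        batch = x[i:i + batch_size]
        preds.append(np.asarray(model(batch), dtype=dtype))
    return np.concatenate(preds, axis=0)

# Toy "model": a fixed linear projection standing in for a torch module.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 2)).astype(np.float32)
model = lambda batch: batch @ W

x = rng.standard_normal((100, 8)).astype(np.float32)
out = predict_batch(x, model, batch_size=32)
print(out.shape)  # (100, 2)
```

Batching keeps peak memory bounded by `batch_size` rather than the full dataset, which is why the default batch size above is effectively "no batching" unless the caller lowers it.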