alibi_detect.utils.pytorch.prediction module

alibi_detect.utils.pytorch.prediction.predict_batch(x, model, device=None, batch_size=10000000000, dtype=numpy.float32)

Make batch predictions with a model.

Parameters
  • x (Union[ndarray, Tensor]) – Batch of instances.

  • model (Union[Module, Sequential]) – PyTorch model.

  • device (Optional[device]) – Device type used. The default None tries to use the GPU and falls back on CPU if needed. Can be specified by passing either torch.device('cuda') or torch.device('cpu').

  • batch_size (int) – Batch size used during prediction.

  • dtype (Union[float32, dtype]) – Model output type, e.g. np.float32 or torch.float32.

Return type
  Union[ndarray, Tensor]

Returns
  NumPy array or torch tensor with model outputs.
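
A minimal usage sketch, assuming a toy nn.Sequential model and random inputs; the layer sizes and batch size are illustrative only:

    import numpy as np
    import torch
    import torch.nn as nn

    from alibi_detect.utils.pytorch.prediction import predict_batch

    # Toy stand-in model; any nn.Module or nn.Sequential works.
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

    x = np.random.rand(100, 4).astype(np.float32)

    # Predict in mini-batches of 32; with the default dtype=np.float32
    # the concatenated outputs are returned as a numpy array.
    preds = predict_batch(x, model, batch_size=32)
    print(preds.shape)  # (100, 2)

    # Pass a torch dtype instead to get a torch tensor back.
    preds_t = predict_batch(x, model, batch_size=32, dtype=torch.float32)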

alibi_detect.utils.pytorch.prediction.predict_batch_transformer(x, model, tokenizer, max_len, device=None, batch_size=10000000000, dtype=numpy.float32)

Make batch predictions using a transformers tokenizer and model.

Parameters
  • x (Union[ndarray, Tensor]) – Batch of instances.

  • model (Union[Module, Sequential]) – PyTorch model.

  • tokenizer (Callable) – Tokenizer for model.

  • max_len (int) – Max sequence length for tokens.

  • device (Optional[device]) – Device type used. The default None tries to use the GPU and falls back on CPU if needed. Can be specified by passing either torch.device('cuda') or torch.device('cpu').

  • batch_size (int) – Batch size used during prediction.

  • dtype (Union[float32, dtype]) – Model output type, e.g. np.float32 or torch.float32.

Return type
  Union[ndarray, Tensor]

Returns
  NumPy array or torch tensor with model outputs.
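
A usage sketch, assuming the Hugging Face transformers library and alibi-detect's TransformerEmbedding wrapper; the model name, embedding type and layer selection are illustrative assumptions, not requirements of this function:

    import numpy as np
    from transformers import AutoTokenizer

    from alibi_detect.models.pytorch import TransformerEmbedding
    from alibi_detect.utils.pytorch.prediction import predict_batch_transformer

    model_name = 'bert-base-uncased'  # assumed pretrained model for this sketch
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    # Wrap the transformer so it returns hidden-state embeddings;
    # embedding_type and layers here are illustrative choices.
    model = TransformerEmbedding(model_name, embedding_type='hidden_state', layers=[-1])

    x = np.array(['a first example sentence', 'a second example sentence'])

    # Tokenize each instance to at most 100 tokens and embed in batches of 32.
    emb = predict_batch_transformer(x, model, tokenizer, max_len=100, batch_size=32)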