alibi.explainers.backends.cfrl_base module

This module contains utility functions for the Counterfactual with Reinforcement Learning base class, alibi.explainers.cfrl_base, that are common to both the TensorFlow and PyTorch backends.

class alibi.explainers.backends.cfrl_base.CounterfactualRLDataset[source]

Bases: ABC

static predict_batches(X, predictor, batch_size)[source]

Predicts the classification labels of the input dataset. The prediction is performed in batches.

Parameters:
  • X (ndarray) – Input to be classified.

  • predictor (Callable) – Prediction function.

  • batch_size (int) – Maximum batch size to be used during each inference step.

Return type:

ndarray

Returns:

Classification labels.
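A minimal sketch of what batched prediction looks like, assuming `X` is a 2D numpy array and the predictor returns one label per row. The helper name and toy predictor below are illustrative, not the library's actual implementation:

```python
import numpy as np

def predict_batches_sketch(X, predictor, batch_size):
    """Split X into chunks of at most batch_size rows, call the
    predictor on each chunk, and concatenate the results."""
    preds = [
        predictor(X[i:i + batch_size])
        for i in range(0, X.shape[0], batch_size)
    ]
    return np.concatenate(preds, axis=0)

# Toy predictor: class 1 if the row sum is positive, else class 0.
toy_predictor = lambda x: (x.sum(axis=1) > 0).astype(np.int64)

X = np.array([[1.0], [-2.0], [3.0], [-4.0], [5.0]])
labels = predict_batches_sketch(X, toy_predictor, batch_size=2)
# labels: array([1, 0, 1, 0, 1])
```

Batching bounds the memory used by a single predictor call, which matters when the predictor is a neural network evaluated on a large dataset.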

alibi.explainers.backends.cfrl_base.generate_empty_condition(X)[source]

Empty conditioning.

Parameters:

X (Any) – Input instance.

Return type:

None

alibi.explainers.backends.cfrl_base.get_classification_reward(Y_pred, Y_true)[source]

Computes the classification reward per instance, given the prediction output and the true label. The classification reward is a sparse/binary reward: 1 if the most likely class under the prediction output matches the label, 0 otherwise.

Parameters:
  • Y_pred (ndarray) – Prediction output as a distribution over the possible classes.

  • Y_true (ndarray) – True label as a distribution over the possible classes.

Returns:

Classification reward per instance. 1 if the most likely classes match, 0 otherwise.
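The sparse reward described above can be sketched as an argmax comparison over the two distributions. This is an illustrative reimplementation, assuming both arrays have shape `(batch, num_classes)`:

```python
import numpy as np

def classification_reward_sketch(Y_pred, Y_true):
    """Binary reward per instance: 1.0 where the most likely class of
    the prediction matches that of the label, 0.0 otherwise."""
    match = np.argmax(Y_pred, axis=1) == np.argmax(Y_true, axis=1)
    return match.astype(np.float32)

Y_pred = np.array([[0.1, 0.9],   # predicts class 1
                   [0.8, 0.2]])  # predicts class 0
Y_true = np.array([[0.0, 1.0],   # true class 1
                   [0.0, 1.0]])  # true class 1
reward = classification_reward_sketch(Y_pred, Y_true)
# reward: array([1., 0.], dtype=float32)
```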

alibi.explainers.backends.cfrl_base.get_hard_distribution(Y, num_classes=None)[source]

Constructs the hard label distribution (one-hot encoding).

Parameters:
  • Y (ndarray) – Prediction array. Can be a soft or hard label distribution, or a vector of labels.

  • num_classes (Optional[int]) – Number of classes to be considered.

Return type:

ndarray

Returns:

Hard label distribution (one-hot encoding).

alibi.explainers.backends.cfrl_base.identity_function(X)[source]

Identity function.

Parameters:

X (Any) – Input instance.

Return type:

Any

Returns:

X – The input instance.