alibi.explainers.anchors.anchor_image module

class alibi.explainers.anchors.anchor_image.AnchorImage(predictor, image_shape, dtype=<class 'numpy.float32'>, segmentation_fn='slic', segmentation_kwargs=None, images_background=None, seed=None)[source]

Bases: Explainer

__init__(predictor, image_shape, dtype=<class 'numpy.float32'>, segmentation_fn='slic', segmentation_kwargs=None, images_background=None, seed=None)[source]

Initialize anchor image explainer.

Parameters:
  • predictor (Callable[[ndarray], ndarray]) – A callable that takes a numpy array of N data points as inputs and returns N outputs.

  • image_shape (tuple) – Shape of the image to be explained. The channel axis is expected to be last.

  • dtype (Type[generic]) – A numpy scalar type that corresponds to the type of input array expected by predictor. This may be used to construct arrays of the given type to be passed through the predictor. For most use cases this argument should have no effect, but it is exposed for use with predictors that would break when called with an array of unsupported type.

  • segmentation_fn (Any) – Any of the built-in segmentation function strings: 'felzenszwalb', 'slic' or 'quickshift', or a custom segmentation function (callable) which returns an image mask with labels for each superpixel. The segmentation function is expected to return a segmentation mask containing all integer values from 0 to K-1, where K is the number of image segments (superpixels). See http://scikit-image.org/docs/dev/api/skimage.segmentation.html for more info.

  • segmentation_kwargs (Optional[dict]) – Keyword arguments for the built-in segmentation functions.

  • images_background (Optional[ndarray]) – Images to overlay superpixels on.

  • seed (Optional[int]) – If set, ensures different runs with the same input will yield same explanation.
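
A minimal construction sketch (the predictor below is a random stand-in for a real model, and the segmentation settings are illustrative choices, not defaults):

    import numpy as np
    from alibi.explainers import AnchorImage

    # Stand-in for a real black-box model: maps N images of shape image_shape
    # to N outputs (here, random "probabilities" over 10 classes).
    def predict_fn(x: np.ndarray) -> np.ndarray:
        return np.random.rand(x.shape[0], 10)

    explainer = AnchorImage(
        predictor=predict_fn,
        image_shape=(64, 64, 3),  # channel axis last
        segmentation_fn='slic',
        segmentation_kwargs={'n_segments': 15, 'compactness': 20, 'sigma': 0.5},
        images_background=None,
        seed=0,
    )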

explain(image, p_sample=0.5, threshold=0.95, delta=0.1, tau=0.15, batch_size=100, coverage_samples=10000, beam_size=1, stop_on_first=False, max_anchor_size=None, min_samples_start=100, n_covered_ex=10, binary_cache_size=10000, cache_margin=1000, verbose=False, verbose_every=1, **kwargs)[source]

Explain instance and return anchor with metadata.

Parameters:
  • image (ndarray) – Image to be explained.

  • p_sample (float) – The probability of simulating the absence of a superpixel. If images_background is not provided, the absent superpixels are replaced by the average value of their constituent pixels. Otherwise, the synthetic instances are created by fixing the present superpixels and superimposing an image from images_background over the absent superpixels.

  • threshold (float) – Minimum anchor precision threshold. The algorithm tries to find an anchor that maximizes the coverage under the precision constraint. The precision constraint is formally defined as \(P(prec(A) \ge t) \ge 1 - \delta\), where \(A\) is an anchor, \(t\) is the threshold parameter, \(\delta\) is the delta parameter, and \(prec(\cdot)\) denotes the precision of an anchor. In other words, we are seeking an anchor whose precision is greater than or equal to the given threshold, with a confidence of (1 - delta). A higher value guarantees that the anchors are faithful to the model, but also leads to more computation time. Note that there are cases in which the precision constraint cannot be satisfied due to the quantile-based discretisation of the numerical features. If that is the case, the best (i.e. highest coverage) non-eligible anchor is returned.

  • delta (float) – Significance threshold. 1 - delta represents the confidence threshold for the anchor precision (see threshold) and the selection of the best anchor candidate in each iteration (see tau).

  • tau (float) – Multi-armed bandit parameter used to select candidate anchors in each iteration. The multi-armed bandit algorithm tries to find within a tolerance tau the most promising (i.e. according to the precision) beam_size candidate anchor(s) from a list of proposed anchors. Formally, when the beam_size=1, the multi-armed bandit algorithm seeks to find an anchor \(A\) such that \(P(prec(A) \ge prec(A^\star) - \tau) \ge 1 - \delta\), where \(A^\star\) is the anchor with the highest true precision (which we don’t know), \(\tau\) is the tau parameter, \(\delta\) is the delta parameter, and \(prec(\cdot)\) denotes the precision of an anchor. In other words, in each iteration, the algorithm returns with a probability of at least 1 - delta an anchor \(A\) with a precision within an error tolerance of tau from the precision of the highest true precision anchor \(A^\star\). A bigger value for tau means faster convergence but also looser anchor conditions.

  • batch_size (int) – Batch size used for sampling. The Anchor algorithm will query the black-box model in batches of size batch_size. A larger batch_size gives more confidence in the anchor, again at the expense of computation time since it involves more model prediction calls.

  • coverage_samples (int) – Number of samples used to estimate coverage during the result search.

  • beam_size (int) – Number of candidate anchors selected by the multi-armed bandit algorithm in each iteration from a list of proposed anchors. A bigger beam width can lead to a better overall anchor (i.e. prevents the algorithm from getting stuck in a local maximum) at the expense of more computation time.

  • stop_on_first (bool) – If True, the beam search algorithm will return the first anchor that satisfies the probability constraint.

  • max_anchor_size (Optional[int]) – Maximum number of features in result.

  • min_samples_start (int) – Min number of initial samples.

  • n_covered_ex (int) – Number of examples in which the anchor applies to store for each anchor sampled during the search (examples where the prediction on the sample agrees and where it disagrees with desired_label are both stored).

  • binary_cache_size (int) – The result search pre-allocates binary_cache_size batches for storing the binary arrays returned during sampling.

  • cache_margin (int) – When only max(cache_margin, batch_size) positions in the binary cache remain empty, a new cache of the same size is pre-allocated to continue buffering samples.

  • verbose (bool) – Display updates during the anchor search iterations.

  • verbose_every (int) – Frequency of displayed iterations during anchor search process.

Return type:

Explanation

Returns:

explanation – Explanation object containing the anchor explaining the instance, with additional metadata as attributes. See usage at AnchorImage examples for details.
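
An illustrative call, continuing the construction sketch above (the image is placeholder data, and the attributes read from the explanation follow the AnchorImage examples):

    # Single instance matching image_shape.
    image = np.random.rand(64, 64, 3).astype(np.float32)

    explanation = explainer.explain(
        image,
        p_sample=0.5,     # probability of dropping each superpixel when sampling
        threshold=0.95,   # target anchor precision
        tau=0.25,
    )

    print(explanation.precision, explanation.coverage)
    anchor_img = explanation.anchor  # image showing only the anchor superpixels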

generate_superpixels(image)[source]

Generates superpixels from (i.e., segments) an image.

Parameters:

image (ndarray) – A grayscale or RGB image.

Return type:

ndarray

Returns:

A [H, W] array of integers. Each integer is a segment (superpixel) label.
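
For example, continuing the sketch above:

    segments = explainer.generate_superpixels(image)  # shape (H, W), integer labels
    n_superpixels = np.unique(segments).shape[0]      # number of segments K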

overlay_mask(image, segments, mask_features, scale=(0, 255))[source]

Overlay image with mask described by the mask features.

Parameters:
  • image (ndarray) – Image to be explained.

  • segments (ndarray) – Superpixels.

  • mask_features (list) – List with superpixels present in mask.

  • scale (tuple) – Pixel scale for masked image.

Return type:

ndarray

Returns:

masked_image – Image overlaid with mask.
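
A sketch of overlaying a few arbitrarily chosen superpixels, continuing from above:

    segments = explainer.generate_superpixels(image)
    # Show only superpixels 0, 3 and 7 (labels picked for illustration only).
    masked_image = explainer.overlay_mask(image, segments, mask_features=[0, 3, 7])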

reset_predictor(predictor)[source]

Resets the predictor function.

Parameters:

predictor (Callable) – New predictor function.

Return type:

None
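
For instance, to swap in a different predictor (again a random stand-in here) without rebuilding the explainer:

    def new_predict_fn(x: np.ndarray) -> np.ndarray:
        return np.random.rand(x.shape[0], 10)  # stand-in for a real model

    explainer.reset_predictor(new_predict_fn)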

class alibi.explainers.anchors.anchor_image.AnchorImageSampler(predictor, segmentation_fn, custom_segmentation, image, images_background=None, p_sample=0.5, n_covered_ex=10)[source]

Bases: object

__call__(anchor, num_samples, compute_labels=True)[source]

Sample images from a perturbation distribution by masking randomly chosen superpixels from the original image and replacing them with pixel values from superimposed images if background images are provided to the explainer. Otherwise, the superpixels from the original image are replaced with their average values.

Parameters:
  • anchor (Tuple[int, tuple]) –

    • int - order of anchor in the batch.

    • tuple - features (= superpixels) present in the proposed anchor.

  • num_samples (int) – Number of samples used.

  • compute_labels (bool) – If True, an array of comparisons between predictions on perturbed samples and instance to be explained is returned.

Return type:

List[Union[ndarray, float, int]]

Returns:

  • If compute_labels=True, a list containing the following is returned –

    • covered_true - perturbed examples where the anchor applies and the model prediction on the perturbed sample is the same as the instance prediction.

    • covered_false - perturbed examples where the anchor applies and the model prediction on the perturbed sample is NOT the same as the instance prediction.

    • labels - num_samples ints indicating whether the prediction on the perturbed sample matches (1) the label of the instance to be explained or not (0).

    • data - Matrix with 1s and 0s indicating whether the values in a superpixel will remain unchanged (1) or will be perturbed (0), for each sample.

    • -1.0 - indicates exact coverage is not computed for this algorithm.

    • anchor[0] - position of the anchor in the batch request.

  • Otherwise, a list containing the data matrix only is returned.

__init__(predictor, segmentation_fn, custom_segmentation, image, images_background=None, p_sample=0.5, n_covered_ex=10)[source]

Initialize anchor image sampler.

Parameters:
  • predictor (Callable) – A callable that takes a numpy array of N data points as inputs and returns N outputs.

  • segmentation_fn (Callable) – Function used to segment the images. The segmentation function is expected to return a segmentation mask containing all integer values from 0 to K-1, where K is the number of image segments (superpixels).

  • image (ndarray) – Image to be explained.

  • images_background (Optional[ndarray]) – Images to overlay superpixels on.

  • p_sample (float) – Probability for a pixel to be represented by the average value of its superpixel.

  • n_covered_ex (int) – Number of examples in which the anchor applies to store for each anchor sampled during the search (examples where the prediction on the sample agrees and where it disagrees with desired_label are both stored).
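
A self-contained sketch of constructing and calling the sampler directly; in normal use it is created internally by AnchorImage.explain, and the segmentation function and values below are assumptions for illustration:

    import numpy as np
    from functools import partial
    from skimage.segmentation import slic
    from alibi.explainers.anchors.anchor_image import AnchorImageSampler

    def predict_fn(x: np.ndarray) -> np.ndarray:
        return np.random.rand(x.shape[0], 10)  # stand-in for a real model

    image = np.random.rand(64, 64, 3).astype(np.float32)  # placeholder instance
    segmentation_fn = partial(slic, n_segments=15, compactness=20, sigma=0.5)

    sampler = AnchorImageSampler(
        predictor=predict_fn,
        segmentation_fn=segmentation_fn,
        custom_segmentation=True,  # a callable is passed instead of a built-in string
        image=image,
        images_background=None,
        p_sample=0.5,
        n_covered_ex=10,
    )

    # Draw 100 perturbed samples for an empty candidate anchor (no fixed superpixels).
    covered_true, covered_false, labels, data, coverage, anchor_pos = sampler(
        anchor=(0, ()), num_samples=100, compute_labels=True
    )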

compare_labels(samples)[source]

Compute the agreement between a classifier prediction on an instance to be explained and the prediction on a set of samples which have a subset of perturbed superpixels.

Parameters:

samples (ndarray) – Samples whose labels are to be compared with the instance label.

Return type:

ndarray

Returns:

A boolean array indicating whether the prediction was the same as the instance label.

generate_superpixels(image)[source]

Generates superpixels from (i.e., segments) an image.

Parameters:

image (ndarray) – A grayscale or RGB image.

Return type:

ndarray

Returns:

A [H, W] array of integers. Each integer is a segment (superpixel) label.

perturbation(anchor, num_samples)[source]

Perturbs an image by altering the values of selected superpixels. If a dataset of image backgrounds is provided to the explainer, then the superpixels are replaced with the equivalent superpixels from the background image. Otherwise, the superpixels are replaced by their average value.

Parameters:
  • anchor (tuple) – Contains the superpixels whose values are not going to be perturbed.

  • num_samples (int) – Number of perturbed samples to be returned.

Return type:

Tuple[ndarray, ndarray]

Returns:

  • imgs – A [num_samples, H, W, C] array of perturbed images.

  • segments_mask – A [num_samples, M] binary mask, where M is the number of image superpixels (segments). A value of 1 indicates that the values in that particular superpixel are not perturbed.
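
For example, keeping two arbitrarily chosen superpixels fixed (continuing the sampler sketch above):

    # Superpixels 0 and 1 are left untouched; all others may be perturbed.
    imgs, segments_mask = sampler.perturbation(anchor=(0, 1), num_samples=16)
    print(imgs.shape, segments_mask.shape)  # (16, H, W, C) and (16, M)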

alibi.explainers.anchors.anchor_image.scale_image(image, scale=(0, 255))[source]

Scales an image to a specified range.

Parameters:
  • image (ndarray) – Image to be scaled.

  • scale (tuple) – The scaling interval.

Return type:

ndarray

Returns:

img_scaled – Scaled image.
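
For example:

    import numpy as np
    from alibi.explainers.anchors.anchor_image import scale_image

    img = np.random.rand(64, 64, 3).astype(np.float32)  # values in [0, 1]
    img_scaled = scale_image(img, scale=(0, 255))        # values now span [0, 255]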