alibi.explainers.anchors.anchor_text module

class alibi.explainers.anchors.anchor_text.AnchorText(predictor, sampling_strategy='unknown', nlp=None, language_model=None, seed=0, **kwargs)[source]

Bases: Explainer

CLASS_SAMPLER = {'language_model': <class 'alibi.explainers.anchors.language_model_text_sampler.LanguageModelSampler'>, 'similarity': <class 'alibi.explainers.anchors.text_samplers.SimilaritySampler'>, 'unknown': <class 'alibi.explainers.anchors.text_samplers.UnknownSampler'>}
DEFAULTS: Dict[str, Dict] = {'language_model': {'batch_size_lm': 32, 'filling': 'parallel', 'frac_mask_templates': 0.1, 'punctuation': '!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~', 'sample_proba': 0.5, 'sample_punctuation': False, 'stopwords': [], 'temperature': 1.0, 'top_n': 100, 'use_proba': False}, 'similarity': {'sample_proba': 0.5, 'temperature': 1.0, 'top_n': 100, 'use_proba': False}, 'unknown': {'sample_proba': 0.5}}
SAMPLING_LANGUAGE_MODEL = 'language_model'

Language model sampling strategy.

SAMPLING_SIMILARITY = 'similarity'

Similarity sampling strategy.

SAMPLING_UNKNOWN = 'unknown'

Unknown sampling strategy.

__init__(predictor, sampling_strategy='unknown', nlp=None, language_model=None, seed=0, **kwargs)[source]

Initialize anchor text explainer.

Parameters:
  • predictor (Callable[[List[str]], ndarray]) – A callable that takes a list of text strings representing N data points as inputs and returns N outputs.

  • sampling_strategy (str) –

    Perturbation distribution method:

    • 'unknown' - replaces words with UNKs.

    • 'similarity' - samples according to a similarity score with the corpus embeddings.

    • 'language_model' - samples according to the language model’s output distribution.

  • nlp (Optional[Language]) – spaCy object when sampling method is 'unknown' or 'similarity'.

  • language_model (Optional[LanguageModel]) – Transformers masked language model. This is a model that adheres to the LanguageModel interface defined in alibi.utils.lang_model.LanguageModel.

  • seed (int) – If set, ensures identical random streams.

  • kwargs (Any) –

    Sampling arguments can be passed as kwargs depending on the sampling_strategy. Check default arguments defined in:

    • alibi.explainers.anchor_text.DEFAULT_SAMPLING_UNKNOWN

    • alibi.explainers.anchor_text.DEFAULT_SAMPLING_SIMILARITY

    • alibi.explainers.anchor_text.DEFAULT_SAMPLING_LANGUAGE_MODEL
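
Example – a minimal construction sketch for the 'unknown' sampling strategy. The predictor below is a hypothetical stand-in for a real black-box model, and the en_core_web_md spaCy model is assumed to be installed:

    import numpy as np
    import spacy

    from alibi.explainers import AnchorText

    # spaCy model required by the 'unknown' and 'similarity' samplers
    nlp = spacy.load('en_core_web_md')

    def predictor(texts):
        # Stand-in black-box classifier: label 1 if the text contains 'good', else 0.
        return np.array([int('good' in t.lower()) for t in texts])

    explainer = AnchorText(
        predictor=predictor,
        sampling_strategy='unknown',  # replace words by UNK tokens
        nlp=nlp,
        sample_proba=0.5,             # sampler kwarg, see DEFAULT_SAMPLING_UNKNOWN
    )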

compare_labels(samples)[source]

Compute the agreement between the classifier prediction on the instance to be explained and the predictions on a set of samples which have a subset of features fixed to a given value (i.e. compute the precision of anchors).

Parameters:

samples (ndarray) – Samples whose labels are to be compared with the instance label.

Return type:

ndarray

Returns:

A numpy boolean array indicating whether the prediction was the same as the instance label.

explain(text, threshold=0.95, delta=0.1, tau=0.15, batch_size=100, coverage_samples=10000, beam_size=1, stop_on_first=True, max_anchor_size=None, min_samples_start=100, n_covered_ex=10, binary_cache_size=10000, cache_margin=1000, verbose=False, verbose_every=1, **kwargs)[source]

Explain instance and return anchor with metadata.

Parameters:
  • text (str) – Text instance to be explained.

  • threshold (float) – Minimum anchor precision threshold. The algorithm tries to find an anchor that maximizes the coverage under the precision constraint. The precision constraint is formally defined as \(P(prec(A) \ge t) \ge 1 - \delta\), where \(A\) is an anchor, \(t\) is the threshold parameter, \(\delta\) is the delta parameter, and \(prec(\cdot)\) denotes the precision of an anchor. In other words, we are seeking an anchor whose precision is greater than or equal to the given threshold with a confidence of at least 1 - delta. A higher value guarantees that the anchors are faithful to the model, but also leads to more computation time. Note that there are cases in which the precision constraint cannot be satisfied due to the quantile-based discretisation of the numerical features. If that is the case, the best (i.e. highest coverage) non-eligible anchor is returned.

  • delta (float) – Significance threshold. 1 - delta represents the confidence threshold for the anchor precision (see threshold) and the selection of the best anchor candidate in each iteration (see tau).

  • tau (float) – Multi-armed bandit parameter used to select candidate anchors in each iteration. The multi-armed bandit algorithm tries to find, within a tolerance tau, the beam_size most promising (i.e. highest precision) candidate anchor(s) from a list of proposed anchors. Formally, when beam_size=1, the multi-armed bandit algorithm seeks to find an anchor \(A\) such that \(P(prec(A) \ge prec(A^\star) - \tau) \ge 1 - \delta\), where \(A^\star\) is the anchor with the highest true precision (which we don’t know), \(\tau\) is the tau parameter, \(\delta\) is the delta parameter, and \(prec(\cdot)\) denotes the precision of an anchor. In other words, in each iteration, the algorithm returns with a probability of at least 1 - delta an anchor \(A\) with a precision within an error tolerance of tau from the precision of the highest true precision anchor \(A^\star\). A bigger value for tau means faster convergence but also looser anchor conditions.

  • batch_size (int) – Batch size used for sampling. The Anchor algorithm will query the black-box model in batches of size batch_size. A larger batch_size gives more confidence in the anchor, again at the expense of computation time since it involves more model prediction calls.

  • coverage_samples (int) – Number of samples used to estimate coverage during the anchor search.

  • beam_size (int) – Number of candidate anchors selected by the multi-armed bandit algorithm in each iteration from a list of proposed anchors. A bigger beam width can lead to a better overall anchor (i.e. prevents the algorithm from getting stuck in a local maximum) at the expense of more computation time.

  • stop_on_first (bool) – If True, the beam search algorithm will return the first anchor that satisfies the probability constraint.

  • max_anchor_size (Optional[int]) – Maximum number of features to include in an anchor.

  • min_samples_start (int) – Number of samples used for anchor search initialisation.

  • n_covered_ex (int) – How many examples where the anchor applies to store for each anchor sampled during the search (examples where the prediction on the perturbed sample both agrees and disagrees with the predicted label are stored).

  • binary_cache_size (int) – The anchor search pre-allocates binary_cache_size batches for storing the boolean arrays returned during sampling.

  • cache_margin (int) – When only max(cache_margin, batch_size) positions in the binary cache remain empty, a new cache of the same size is pre-allocated to continue buffering samples.

  • verbose (bool) – Display updates during the anchor search iterations.

  • verbose_every (int) – Frequency of displayed iterations during anchor search process.

  • **kwargs (Any) – Other keyword arguments passed to the anchor beam search and the text sampling and perturbation functions.

Return type:

Explanation

Returns:

Explanation object containing the anchor explaining the instance with additional metadata as attributes. Contains the following data-related attributes –

  • anchor : List[str] - a list of words in the proposed anchor.

  • precision : float - the fraction of sampled instances where the anchor holds that yield the same prediction as the original instance. The precision will always be greater than or equal to threshold for a valid anchor.

  • coverage : float - the fraction of sampled instances the anchor applies to.
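
Example – calling explain on the hypothetical explainer sketched above and reading the anchor-related attributes of the returned Explanation:

    explanation = explainer.explain('This is a good book .', threshold=0.95)

    print(explanation.anchor)     # e.g. ['good'] - words forming the anchor
    print(explanation.precision)  # >= threshold for a valid anchor
    print(explanation.coverage)   # fraction of sampled instances the anchor applies to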

meta: dict

Object metadata.

model: spacy.language.Language | LanguageModel

Language model to be used.

perturbation: Any

Perturbation method.

reset_predictor(predictor)[source]

Resets the predictor function.

Parameters:

predictor (Callable) – New predictor function.

Return type:

None
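
Example – swapping in a new (hypothetical) predictor with the same List[str] -> np.ndarray signature, continuing the sketch above:

    def new_predictor(texts):
        # Another stand-in black-box model.
        return np.array([int('great' in t.lower()) for t in texts])

    explainer.reset_predictor(new_predictor)  # subsequent explanations query the new model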

sampler(anchor, num_samples, compute_labels=True)[source]

Generate perturbed samples while keeping the features at the positions specified in anchor unchanged.

Parameters:
  • anchor (Tuple[int, tuple]) –

    • int - the position of the anchor in the input batch.

    • tuple - the anchor itself, a list of words to be kept unchanged.

  • num_samples (int) – Number of generated perturbed samples.

  • compute_labels (bool) – If True, an array of comparisons between predictions on perturbed samples and instance to be explained is returned.

Return type:

Union[List[Union[ndarray, float, int]], List[ndarray]]

Returns:

  • If compute_labels=True, a list containing the following is returned –

    • covered_true - perturbed examples where the anchor applies and the model prediction on perturbation is the same as the instance prediction.

    • covered_false - perturbed examples where the anchor applies and the model prediction is NOT the same as the instance prediction.

    • labels - num_samples ints indicating whether the prediction on the perturbed sample matches (1) the label of the instance to be explained or not (0).

    • data - Matrix with 1s and 0s indicating whether a word in the text has been perturbed for each sample.

    • -1.0 - indicates exact coverage is not computed for this algorithm.

    • anchor[0] - position of anchor in the batch request.

  • Otherwise, a list containing the data matrix only is returned.

alibi.explainers.anchors.anchor_text.DEFAULT_SAMPLING_LANGUAGE_MODEL = {'batch_size_lm': 32, 'filling': 'parallel', 'frac_mask_templates': 0.1, 'punctuation': '!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~', 'sample_proba': 0.5, 'sample_punctuation': False, 'stopwords': [], 'temperature': 1.0, 'top_n': 100, 'use_proba': False}

Default perturbation options for 'language_model' sampling

  • 'filling' : str - filling method for language models. Allowed values: 'parallel', 'autoregressive'. The 'parallel' method corresponds to a single forward pass through the language model; the masked words are sampled independently, according to the selected probability distribution (see top_n, temperature, use_proba). The 'autoregressive' method fills in the words one at a time. This corresponds to multiple forward passes through the language model, which is computationally expensive.

  • 'sample_proba' : float - probability of a word to be masked.

  • 'top_n' : int - number of similar words to sample for perturbations.

  • 'temperature' : float - sample weight hyper-parameter if use_proba equals True.

  • 'use_proba' : bool - whether to sample according to the predicted words distribution. If set to False, the top_n words are sampled uniformly at random.

  • 'frac_mask_templates' : float - fraction of the number of samples used to determine how many mask templates to generate. Each sampling call generates int(frac_mask_templates * num_samples) masking templates. A lower fraction corresponds to a lower computation time, since the batch fed to the language model is smaller. After the word distributions have been predicted for each mask template, a total of num_samples samples are generated by sampling evenly from each template. Note that a lower fraction may result in less diverse samples. sample_proba=1 corresponds to masking every word, in which case only one masking template is constructed. With filling='autoregressive', num_samples masking templates are generated regardless of the value of frac_mask_templates.

  • 'batch_size_lm' : int - batch size used for the language model forward pass.

  • 'punctuation' : str - string of punctuation not to be masked.

  • 'stopwords' : List[str] - list of words not to be masked.

  • 'sample_punctuation' : bool - whether to sample punctuation to fill the masked words. If False, the punctuation defined in punctuation will not be sampled.
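
Example – a sketch of the 'language_model' strategy using the DistilbertBaseUncased wrapper from alibi.utils.lang_model. The predictor is the hypothetical one from the construction example; the HuggingFace weights are downloaded on first use:

    from alibi.explainers import AnchorText
    from alibi.utils.lang_model import DistilbertBaseUncased

    language_model = DistilbertBaseUncased()

    explainer = AnchorText(
        predictor=predictor,                 # hypothetical black-box predictor
        sampling_strategy='language_model',
        language_model=language_model,
        filling='parallel',                  # single forward pass per mask template
        sample_proba=0.5,
        frac_mask_templates=0.1,
        top_n=20,
    )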

alibi.explainers.anchors.anchor_text.DEFAULT_SAMPLING_SIMILARITY = {'sample_proba': 0.5, 'temperature': 1.0, 'top_n': 100, 'use_proba': False}

Default perturbation options for 'similarity' sampling

  • 'sample_proba' : float - probability of a word to be masked.

  • 'top_n' : int - number of similar words to sample for perturbations.

  • 'temperature' : float - sample weight hyper-parameter if use_proba=True.

  • 'use_proba' : bool - whether to sample according to the words similarity.
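
Example – a sketch of the 'similarity' strategy, passing sampler kwargs at construction time. nlp must be a spaCy model with word vectors (e.g. en_core_web_md) and predictor is the hypothetical one from the construction example:

    explainer = AnchorText(
        predictor=predictor,             # hypothetical black-box predictor
        sampling_strategy='similarity',
        nlp=nlp,                         # spaCy model with word vectors
        sample_proba=0.5,
        top_n=20,
        temperature=0.2,
        use_proba=True,                  # sample similar words according to their similarity scores
    )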

alibi.explainers.anchors.anchor_text.DEFAULT_SAMPLING_UNKNOWN = {'sample_proba': 0.5}

Default perturbation options for 'unknown' sampling

  • 'sample_proba' : float - probability of a word to be masked.