alibi.explainers.shap_wrappers module

alibi.explainers.shap_wrappers.DISTRIBUTED_OPTS: Dict = {'batch_size': 1, 'n_cpus': None}

Default distributed options for KernelShap:

  • 'n_cpus' : int - number of CPUs available to parallelize explanations. Performance is significantly boosted when the number specified represents physical CPUs, but only small (nonlinear) gains are observed when virtual CPUs are specified. If set to None, the code will run sequentially.

  • 'batch_size': int - how many instances are explained in the same remote process at once. The shap library implementation of KernelShap is not vectorised, so no significant gains are made by specifying batches. See blog post for batch size experiment results. If set to None, an input array is split in (roughly) equal parts and distributed across the available CPUs.
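
For example, a minimal sketch of overriding these defaults (the toy predict_fn below is a placeholder for a real model, and distributed execution assumes ray is installed):

import numpy as np
from alibi.explainers import KernelShap

def predict_fn(x: np.ndarray) -> np.ndarray:
    # toy model returning one output per instance, in margin space
    return x.sum(axis=1, keepdims=True)

# parallelise over 4 CPUs, sending 10 instances to each remote process at a time
explainer = KernelShap(
    predict_fn,
    distributed_opts={'n_cpus': 4, 'batch_size': 10},
)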

class alibi.explainers.shap_wrappers.KernelExplainerWrapper(*args, **kwargs)[source]

Bases: KernelExplainer

A wrapper around shap.KernelExplainer that supports:

  • fixing the seed when instantiating the KernelExplainer in a separate process.

  • passing a batch index to the explainer so that a parallel explainer pool can return batches in arbitrary order.

__init__(*args, **kwargs)[source]
Parameters:
  • *args – Positional arguments for the shap.KernelExplainer constructor.

  • **kwargs – Keyword arguments for the shap.KernelExplainer constructor.

get_explanation(X, **kwargs)[source]

Wrapper around shap.KernelExplainer.shap_values that allows calling the method with a tuple containing a batch index and a batch of instances.

Parameters:
  • X (Union[Tuple[int, ndarray], ndarray]) – When called from a distributed context, it is a tuple containing a batch index and a batch to be explained. Otherwise, it is an array of instances to be explained.

  • **kwargs – shap.KernelExplainer.shap_values keyword arguments.

Return type:

Union[Tuple[int, ndarray], Tuple[int, List[ndarray]], ndarray, List[ndarray]]

return_attribute(name)[source]

Returns an attribute specified by its name. Used in a distributed context where the actor properties cannot be accessed using the dot syntax.

Return type:

Any
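
A minimal sketch of the batched call pattern (the toy predictor, background data and batch below are illustrative; the exact shape of the returned shap values depends on the model output):

import numpy as np
from alibi.explainers.shap_wrappers import KernelExplainerWrapper

def predict_fn(x: np.ndarray) -> np.ndarray:
    return x.sum(axis=1, keepdims=True)  # toy single-output model

background = np.zeros((5, 3))
wrapper = KernelExplainerWrapper(predict_fn, background, seed=0)

# passing (batch_index, batch) lets a parallel pool reassemble results in order
batch_idx, shap_values = wrapper.get_explanation((0, np.ones((2, 3))), nsamples=100)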

class alibi.explainers.shap_wrappers.KernelShap(predictor, link='identity', feature_names=None, categorical_names=None, task='classification', seed=None, distributed_opts=None)[source]

Bases: Explainer, FitMixin

__init__(predictor, link='identity', feature_names=None, categorical_names=None, task='classification', seed=None, distributed_opts=None)[source]

A wrapper around the shap.KernelExplainer class. It extends the current shap library functionality by allowing the user to specify variable groups in order to treat one-hot encoded categorical variables as one during sampling. The user can also specify whether to aggregate the shap value estimates for the encoded levels of categorical variables as an optional argument to explain, if grouping arguments are not passed to fit.

Parameters:
  • predictor (Callable[[ndarray], ndarray]) – A callable that takes a samples x features array as input and outputs a samples x n_outputs array of model outputs. The n_outputs outputs should represent the model output in margin space. If the model outputs probabilities, then the link should be set to 'logit' to ensure correct force plots.

  • link (str) –

    Valid values are 'identity' or 'logit'. A generalized linear model link to connect the feature importance values to the model output. Since the feature importance values, \(\phi\), sum up to the model output, it often makes sense to connect them to the output with a link function where \(link(output - expected\_value) = sum(\phi)\). Therefore, for a model which outputs probabilities, link='logit' makes the feature effects have log-odds (evidence) units and link='identity' means that the feature effects have probability units. Please see this example for an in-depth discussion about the semantics of explaining the model in the probability or margin space.

  • feature_names (Union[List[str], Tuple[str], None]) – Used to infer group names when categorical data is treated by grouping and the group_names input to fit is not specified, assuming it has the same length as the groups argument of the fit method. It is also used to compute the names field, which appears as a key in each of the values of explanation.data['raw']['importances'].

  • categorical_names (Optional[Dict[int, List[str]]]) – Keys are feature column indices in the background_data matrix (see fit). Each value contains strings with the names of the categories for the feature. Used to select the method for background data summarisation (if specified, subsampling is performed as opposed to k-means clustering). In the future it may be used for visualisation.

  • task (str) – Can have values 'classification' and 'regression'. It is only used to set the contents of explanation.data['raw']['prediction'].

  • seed (Optional[int]) – Fixes the random number stream, which influences which subsets are sampled during shap value estimation.

  • distributed_opts (Optional[Dict]) – A dictionary that controls the algorithm distributed execution. See alibi.explainers.shap_wrappers.DISTRIBUTED_OPTS documentation for details.
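
As an illustration, a classifier whose predict_proba method outputs probabilities can be wrapped as follows (the dataset and model are arbitrary choices for this sketch):

from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from alibi.explainers import KernelShap

data = load_wine()
clf = LogisticRegression(max_iter=1000).fit(data.data, data.target)

# predict_proba returns probabilities, so link='logit' connects the shap
# values to the model output in log-odds units
explainer = KernelShap(
    clf.predict_proba,
    link='logit',
    feature_names=list(data.feature_names),
    task='classification',
    seed=0,
)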

explain(X, summarise_result=False, cat_vars_start_idx=None, cat_vars_enc_dim=None, **kwargs)[source]

Explains the instances in the array X.

Parameters:
  • X (Union[ndarray, DataFrame, spmatrix]) – Instances to be explained.

  • summarise_result (bool) – Specifies whether the shap values corresponding to dimensions of encoded categorical variables should be summed so that a single shap value is returned for each categorical variable. Both the start indices of the categorical variables (cat_vars_start_idx) and the encoding dimensions (cat_vars_enc_dim) have to be specified.

  • cat_vars_start_idx (Optional[Sequence[int]]) – The start indices of the categorical variables. If specified, cat_vars_enc_dim should also be specified.

  • cat_vars_enc_dim (Optional[Sequence[int]]) – The length of the encoding dimension for each categorical variable. If specified, cat_vars_start_idx should also be specified.

  • **kwargs

    Keyword arguments specifying explain behaviour. Valid arguments are:

    • nsamples - controls the number of predictor calls and therefore runtime.

    • l1_reg - the algorithm is exponential in the feature dimension. If set to 'auto', the algorithm will first run a feature selection step to select the top features, provided the fraction of sampled subsets of missing features is less than 0.2 of the total number of subsets. The Akaike Information Criterion is used in this case. See our examples for more details about available settings for this parameter. Note that by first running a feature selection step, the Shapley values of the remaining features will differ from those estimated on the entire feature set.

    For more details, please see the shap library documentation.

Return type:

Explanation

Returns:

explanation – An explanation object containing the shap values and prediction in the data field, along with a meta field containing additional data. See usage at KernelSHAP examples for details.
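
A hypothetical usage sketch (the column layout is invented for illustration: pretend the first two columns one-hot encode a single categorical variable):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from alibi.explainers import KernelShap

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=500).fit(X, y)

explainer = KernelShap(clf.predict_proba, link='logit')
explainer.fit(X[:50])

explanation = explainer.explain(
    X[:5],
    summarise_result=True,
    cat_vars_start_idx=[0],  # first column of the (pretend) encoded variable
    cat_vars_enc_dim=[2],    # number of encoded dimensions for that variable
    nsamples=200,            # fewer predictor calls, higher estimate variance
)
shap_values = explanation.shap_values  # one array per model output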

fit(background_data, summarise_background=False, n_background_samples=300, group_names=None, groups=None, weights=None, **kwargs)[source]

This takes a background dataset (usually a subsample of the training set) as input, along with several user-specified options, and initialises a KernelShap explainer. The runtime of the algorithm depends on the number of samples and the number of features in this dataset. To reduce the size of the dataset, use the summarise_background option together with n_background_samples. To reduce the feature dimensionality, encoded categorical variables can be treated as one during the feature perturbation process; this decreases the effective feature dimensionality, can reduce the variance of the shap value estimates and slightly reduces the number of calls to the predictor. Further runtime savings can be achieved by changing the nsamples parameter in the call to explain. Runtime reduction comes with an accuracy trade-off, so it is best to experiment with a runtime reduction method and understand result stability before using the system.

Parameters:
  • background_data (Union[ndarray, spmatrix, DataFrame, Data]) – Data used to estimate feature contributions and baseline values for force plots. The rows of the background data should represent samples and the columns features.

  • summarise_background (Union[bool, str]) – A large background dataset impacts the runtime and memory footprint of the algorithm. By setting this argument to True, only n_background_samples from the provided data are selected. If group_names or groups arguments are specified, the algorithm assumes that the data contains categorical variables so the records are selected uniformly at random. Otherwise, shap.kmeans (a wrapper around sklearn k-means implementation) is used for selection. If set to 'auto', a default of KERNEL_SHAP_BACKGROUND_THRESHOLD samples is selected.

  • n_background_samples (int) – The number of samples to keep in the background dataset if summarise_background=True.

  • groups (Optional[List[Union[Tuple[int], List[int]]]]) – A list containing sub-lists specifying the indices of features belonging to the same group.

  • group_names (Union[List[str], Tuple[str], None]) – If specified, this array is used to treat groups of features as one during feature perturbation. This feature can be useful, for example, to treat encoded categorical variables as one and can result in computational savings (this may require adjusting the nsamples parameter).

  • weights (Union[List[float], Tuple[float], ndarray, None]) – A sequence or array of weights. This is used only if grouping is specified and assigns a weight to each point in the dataset.

  • **kwargs – Expected keyword arguments include keep_index (bool), which should be used if a data frame containing an index column is passed to the algorithm.

Return type:

KernelShap
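
For instance, grouping can be specified at fit time roughly as follows (the groups and names are hypothetical; treating columns 0 and 1 as one feature only makes sense if they encode the same variable):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from alibi.explainers import KernelShap

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=500).fit(X, y)
explainer = KernelShap(clf.predict_proba, link='logit')

explainer.fit(
    X,
    summarise_background=True,  # subsampling is used since groups are given
    n_background_samples=50,
    groups=[[0, 1], [2], [3]],  # columns 0 and 1 are perturbed together
    group_names=['sepal', 'petal_length', 'petal_width'],
)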

reset_predictor(predictor)[source]

Resets the prediction function.

Parameters:

predictor (Callable) – New prediction function.

Return type:

None

class alibi.explainers.shap_wrappers.TreeShap(predictor, model_output='raw', feature_names=None, categorical_names=None, task='classification', seed=None)[source]

Bases: Explainer, FitMixin

__init__(predictor, model_output='raw', feature_names=None, categorical_names=None, task='classification', seed=None)[source]

A wrapper around the shap.TreeExplainer class. It adds the following functionality:

  1. Input summarisation options to allow control over background dataset size and hence runtime.

  2. Output summarisation for sklearn models with one-hot encoded categorical variables.

Users are strongly encouraged to familiarise themselves with the algorithm by reading the method overview in the documentation.

Parameters:
  • predictor (Any) – A fitted model to be explained. XGBoost, LightGBM, CatBoost and most tree-based scikit-learn models are supported. In the future, Pyspark could also be supported. Please open an issue if this is a use case for you.

  • model_output (str) –

    Supported values are: 'raw', 'probability', 'probability_doubled', 'log_loss':

    • 'raw' - the raw model output, which varies by task, is explained. This option should always be used if fit is called without arguments. It should also be set to compute shap interaction values. For regression models it is the standard output; for binary classification in XGBoost it is the log odds ratio.

    • 'probability' - the probability output is explained. This option should only be used if fit was called with the background_data argument set. The effect of specifying this parameter is that the shap library will use this information to transform the shap values computed in margin space (i.e., using the raw output) to shap values that sum to the probability output of the model, plus the model expected output probability. This requires knowledge of the type of output of predictor, which is inferred by the shap library from the model type (e.g., most sklearn models, with the exception of sklearn.tree.DecisionTreeClassifier, sklearn.ensemble.RandomForestClassifier and sklearn.ensemble.ExtraTreesClassifier, output logits) or on the basis of the mapping implemented in the shap.TreeEnsemble constructor. Only trees that output log odds and probabilities are currently supported.

    • 'probability_doubled' - used for binary classification problems in situations where the model outputs the logits/probabilities for the positive class but shap values for both outcomes are desired. This option should be used only if fit was called with the background_data argument set. In this case the expected value for the negative class is 1 - expected_value for the positive class, and the shap values for the negative class are the negatives of the positive class shap values. As before, the explanation happens in the margin space, and the shap values are subsequently adjusted to convert the model output to probabilities. The same considerations as for 'probability' apply to this output type too.

    • 'log_loss' - logarithmic loss is explained. This option should be used only if fit was called with the background_data argument set, and it requires specifying the labels, y, when calling explain. If the objective is squared error, then the transformation \((output - y)^2\) is applied. For the binary cross-entropy objective, the transformation \(\log(1 + \exp(output)) - y \cdot output\) with \(y \in \{0, 1\}\) is applied. Currently only binary cross-entropy and squared error losses can be explained.

  • feature_names (Union[List[str], Tuple[str], None]) – Used to compute the names field, which appears as a key in each of the values of the importances sub-field of the response raw field.

  • categorical_names (Optional[Dict[int, List[str]]]) – Keys are feature column indices. Each value contains strings with the names of the categories for the feature. Used to select the method for background data summarisation (if specified, subsampling is performed as opposed to k-means clustering). In the future it may be used for visualisation.

  • task (str) – Can have values 'classification' and 'regression'. It is only used to set the contents of the prediction field in the data['raw'] response field.

Notes

Tree SHAP is an additive attribution method so it is best suited to explaining output in margin space (the entire real line). For discussion related to explaining models in output vs probability space, please consult this resource.
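
A minimal instantiation sketch using a tree model from scikit-learn (the model and data are arbitrary choices; 'raw' output pairs with calling fit without a background dataset):

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from alibi.explainers import TreeShap

data = load_breast_cancer()
model = GradientBoostingClassifier(n_estimators=50).fit(data.data, data.target)

# margin-space explanation with the path-dependent algorithm
explainer = TreeShap(
    model,
    model_output='raw',
    feature_names=list(data.feature_names),
    task='classification',
)
explainer.fit()  # no background data: path-dependent feature perturbation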

explain(X, y=None, interactions=False, approximate=False, check_additivity=True, tree_limit=None, summarise_result=False, cat_vars_start_idx=None, cat_vars_enc_dim=None, **kwargs)[source]

Explains the instances in X. y should be passed if the model loss function is to be explained, which can be useful to understand how various features affect model performance over time. This is only possible if the explainer has been fitted with a background dataset and requires setting model_output='log_loss'.

Parameters:
  • X (Union[ndarray, DataFrame, Pool]) – Instances to be explained.

  • y (Optional[ndarray]) – Labels corresponding to rows of X. Should be passed only if a background dataset was passed to the fit method.

  • interactions (bool) – If True, the shap value for every feature of every instance in X is decomposed into X.shape[1] - 1 shap value interactions and one main effect. This is only supported if fit is called with background_dataset=None.

  • approximate (bool) –

    If True, an approximation to the shap values that does not account for feature order is computed. This was proposed by Ando Saabas here. Check this resource for more details. This option is currently only supported for xgboost and sklearn models.

  • check_additivity (bool) – If True, output correctness is ensured if model_output='raw' has been passed to the constructor.

  • tree_limit (Optional[int]) – Explain the output of a subset of the first tree_limit trees in an ensemble model.

  • summarise_result (bool) – This should be set to True only when some of the columns in X represent encoded dimensions of a categorical variable and one single shap value per categorical variable is desired. Both cat_vars_start_idx and cat_vars_enc_dim should be specified as detailed below to allow this.

  • cat_vars_start_idx (Optional[Sequence[int]]) – The start indices of the categorical variables.

  • cat_vars_enc_dim (Optional[Sequence[int]]) – The length of the encoding dimension for each categorical variable.

Return type:

Explanation

Returns:

explanation – An Explanation object containing the shap values and prediction in the data field, along with a meta field containing additional data. See usage at TreeSHAP examples for details.
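
As a sketch, shap interaction values can be requested as follows (the setup mirrors the constructor example above; the shap_interaction_values field name in the response data is assumed):

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from alibi.explainers import TreeShap

data = load_breast_cancer()
model = GradientBoostingClassifier(n_estimators=20).fit(data.data, data.target)

explainer = TreeShap(model, model_output='raw', task='classification')
explainer.fit()  # interactions require fitting without background data

explanation = explainer.explain(data.data[:5], interactions=True)
# array(s) of shape (n_instances, n_features, n_features), one per model output
interactions = explanation.shap_interaction_values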

fit(background_data=None, summarise_background=False, n_background_samples=1000, **kwargs)[source]

This function instantiates an explainer which can then be used to explain instances using the explain method. If no background dataset is passed, the explainer uses the path-dependent feature perturbation algorithm to compute the shap values. As such, only the raw model output can be explained, and this should be reflected by passing model_output='raw' when instantiating the explainer. If a background dataset is passed, the interventional feature perturbation algorithm is used. Using this algorithm, probability outputs can also be explained. Additionally, if the model_output='log_loss' option is passed to the explainer constructor, then the model loss function can be explained by passing the labels as the y argument to the explain method. A limited number of loss functions are supported, as detailed in the constructor documentation.

Parameters:
  • background_data (Union[ndarray, DataFrame, None]) – Data used to estimate feature contributions and baseline values for force plots. The rows of the background data should represent samples and the columns features.

  • summarise_background (Union[bool, str]) – A large background dataset may impact the runtime and memory footprint of the algorithm. By setting this argument to True, only n_background_samples from the provided data are selected. If the categorical_names argument has been passed to the constructor, subsampling of the data is used. Otherwise, shap.kmeans (a wrapper around the sklearn k-means implementation) is used for selection. If set to 'auto', a default of TREE_SHAP_BACKGROUND_WARNING_THRESHOLD samples is selected.

  • n_background_samples (int) – The number of samples to keep in the background dataset if summarise_background=True.

Return type:

TreeShap
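
A sketch of fitting with a summarised background dataset to enable the interventional algorithm (whether model_output='probability' is supported depends on the model type, as discussed in the constructor documentation):

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from alibi.explainers import TreeShap

data = load_breast_cancer()
model = GradientBoostingClassifier(n_estimators=20).fit(data.data, data.target)

explainer = TreeShap(model, model_output='probability', task='classification')
explainer.fit(
    data.data,
    summarise_background=True,  # shap.kmeans is used (no categorical_names given)
    n_background_samples=100,
)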

reset_predictor(predictor)[source]

Resets the predictor.

Parameters:

predictor (Any) – New predictor.

Return type:

None

alibi.explainers.shap_wrappers.rank_by_importance(shap_values, feature_names=None)[source]

Given the shap values estimated for a multi-output model, this function ranks features according to their importance. The importance of a feature is the average magnitude of its shap values across the instances explained.

Parameters:
  • shap_values (List[ndarray]) – Each element corresponds to a samples x features array of shap values corresponding to each model output.

  • feature_names (Union[List[str], Tuple[str], None]) – Each element is the name of the column with the corresponding index in each of the arrays in the shap_values list.

Return type:

Dict

Returns:

importances

A dictionary of the form:

{
    '0': {'ranked_effect': array([0.2, 0.5, ...]), 'names': ['feat_3', 'feat_5', ...]},
    '1': {'ranked_effect': array([0.3, 0.2, ...]), 'names': ['feat_6', 'feat_1', ...]},
    ...
    'aggregated': {'ranked_effect': array([0.9, 0.7, ...]), 'names': ['feat_3', 'feat_6', ...]}
}

The keys of the first level represent the index of the model output. The feature effects in ranked_effect and the corresponding feature names in names are sorted from highest (most important) to lowest (least important). The values in the aggregated field are obtained by summing the shap values for all the model outputs and then computing the effects. Given an output, the effects are defined as the average magnitude of the shap values across the instances to be explained.
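
A small self-contained usage sketch (the shap values and feature names are made up to show the input and output shapes):

import numpy as np
from alibi.explainers.shap_wrappers import rank_by_importance

# two model outputs, two explained instances, three features
shap_values = [
    np.array([[0.1, -0.5, 0.2], [0.3, -0.1, 0.0]]),   # output 0
    np.array([[-0.2, 0.4, 0.1], [0.0, 0.2, -0.3]]),   # output 1
]
importances = rank_by_importance(shap_values, feature_names=['f0', 'f1', 'f2'])

print(importances['0']['names'])  # features sorted by mean |shap value| for output 0
print(importances['aggregated'])  # ranking computed over the summed outputs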

alibi.explainers.shap_wrappers.sum_categories(values, start_idx, enc_feat_dim)[source]

This function is used to reduce specified slices in a two- or three-dimensional array.

For two-dimensional values arrays, for each entry in start_idx the function sums the k columns that follow it, where k is the corresponding entry in the enc_feat_dim sequence. The columns whose indices are not in start_idx are left unchanged. This arises when the slices contain the shap values for each dimension of an encoded categorical variable and a single shap value per variable is desired.

For three-dimensional values arrays, the reduction is applied to each rank-2 subarray, first along the column dimension and then along the row dimension. This arises when summarising shap interaction values. Each rank-2 array is an E x E matrix of shap interaction values, where E is the dimension of the data after one-hot encoding. Applying the reduction yields a rank-2 array of dimension F x F, where F is the number of features (i.e., the feature dimension of the data matrix before encoding). This transformation returns a single value describing the interaction of categorical features i and j and a single value describing the interaction of j and i.

Parameters:
  • values (ndarray) – A two or three dimensional array to be reduced, as described above.

  • start_idx (Sequence[int]) – The start indices of the columns to be summed.

  • enc_feat_dim (Sequence[int]) – The number of columns to be summed, one for each start index.

Returns:

new_values – An array whose columns have been summed according to the entries in start_idx and enc_feat_dim.
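
A small usage sketch for the two-dimensional case (the column layout is invented for illustration):

import numpy as np
from alibi.explainers.shap_wrappers import sum_categories

# 2 instances x 6 columns: columns 0-2 encode one categorical variable,
# columns 3-4 encode another, and column 5 is numerical
values = np.arange(12, dtype=float).reshape(2, 6)
reduced = sum_categories(values, start_idx=[0, 3], enc_feat_dim=[3, 2])
print(reduced.shape)  # (2, 3): one column per variable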