This page was generated from notebooks/explainer_examples.ipynb.

Example Model Explanations with Seldon

Seldon Core supports various out-of-the-box explainers that leverage the Alibi ML explainability open source library.

In this notebook we show how you can use the pre-packaged explainer functionality that simplifies the creation of advanced AI model explainers.

Seldon provides the following out-of-the-box pre-packaged explainers:

  • Anchor Tabular Explainer

    • A black box explainer that uses the anchor technique for tabular data

    • It answers the question of which features were the most “powerful” or “important” for a tabular prediction

  • Anchor Image Explainer

    • A black box explainer that uses the anchor technique for image data

    • It answers the question of which pixels were the most “powerful” or “important” for an image prediction

  • Anchor Text Explainer

    • A black box explainer that uses the anchor technique for text data

    • It answers the question of which tokens were the most “powerful” or “important” for a text prediction

  • Kernel Shap Explainer

    • A black box explainer that uses the Kernel SHAP technique for tabular data

    • It provides positive and negative feature attributions that contributed to the predictions

  • Integrated Gradient Explainer

    • A white box explainer that uses the integrated gradients technique for differentiable models such as deep neural networks

    • It provides positive and negative feature attributions that contributed to the predictions

  • Tree Shap Explainer

    • A white box explainer that uses the TreeShap technique for tree-based models

    • It provides positive and negative feature attributions that contributed to the predictions
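
All of these explainers are exposed through the same deployment pattern: an explainer section is added to a predictor, and Seldon serves explanations alongside the model's predictions endpoint. As a sketch of the REST URL convention used throughout this notebook (assuming the localhost:8003 ingress port-forward described below and the income deployment from the first example):

import requests

deployment, namespace, predictor = "income", "seldon", "default"
payload = {"data": {"ndarray": [[39, 7, 1, 1, 1, 1, 4, 1, 2174, 0, 40, 9]]}}

# Predictions are served under the deployment name...
pred_url = f"http://localhost:8003/seldon/{namespace}/{deployment}/api/v1.0/predictions"
# ...while the pre-packaged explainer lives under <deployment>-explainer/<predictor>.
explain_url = f"http://localhost:8003/seldon/{namespace}/{deployment}-explainer/{predictor}/api/v1.0/explain"

print(requests.post(pred_url, json=payload).json())
print(requests.post(explain_url, json=payload).json()["data"]["anchor"])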

Running this notebook

Running this notebook should install the required package dependencies; if not, please install them manually:
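
For example, a minimal sketch of such an install (the package list is an assumption based on what this notebook imports; pin versions to match your Python and Alibi setup):

!pip install seldon-core alibi shap numpy matplotlib tensorflow scikit-learn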

Setup Seldon Core

Follow the instructions to Setup Cluster with Ambassador Ingress and Install Seldon Core.

Then port-forward to that ingress on localhost:8003 in a separate terminal either with:

  • Ambassador: kubectl port-forward $(kubectl get pods -n seldon -l app.kubernetes.io/name=ambassador -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:8080

  • Istio: kubectl port-forward $(kubectl get pods -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system 8003:8080

Create Namespace for experimentation

We will first create the seldon namespace, where we will deploy all our models.

[ ]:
!kubectl create namespace seldon

Then we will set the current context to use the seldon namespace so that all our commands run there by default (instead of in the default namespace).

[ ]:
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
[ ]:
import json

Income Prediction Model with Anchors Explainer

You can train the model and explainer used here yourself by following the full example in the Alibi Anchor Explanations for Income Notebook in the Alibi project documentation.

This example also shows how you can specify the number of replicas for your explainer.

Note that we used Python 3.7 and Alibi version 0.6.0.
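
For reference, the saved explainer at the modelUri below is an Alibi AnchorTabular explainer fitted against the classifier's predict function. A minimal sketch of how such an artifact is produced (the RandomForest here is a stand-in; the linked notebook builds a preprocessing pipeline around the model):

from alibi.datasets import fetch_adult
from alibi.explainers import AnchorTabular
from sklearn.ensemble import RandomForestClassifier

# Load the encoded adult/income dataset shipped with Alibi.
adult = fetch_adult()
X, y = adult.data, adult.target

# Stand-in classifier trained on the ordinally encoded features.
clf = RandomForestClassifier(n_estimators=50).fit(X, y)

# The explainer only needs black box access to a predict function.
explainer = AnchorTabular(clf.predict, adult.feature_names, categorical_names=adult.category_map)
explainer.fit(X, disc_perc=[25, 50, 75])

# An anchor is the smallest set of feature conditions that "locks in" the prediction.
print(explainer.explain(X[0], threshold=0.95).anchor)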

[ ]:
%%writefile resources/income_explainer.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: income
spec:
  name: income
  annotations:
    seldon.io/rest-timeout: "100000"
  predictors:
  - graph:
      children: []
      implementation: SKLEARN_SERVER
      modelUri: gs://seldon-models/sklearn/income/model-0.23.2
      name: classifier
    explainer:
      type: AnchorTabular
      modelUri: gs://seldon-models/sklearn/income/explainer-py37-0.6.0
      replicas: 2
    name: default
    replicas: 1
[ ]:
!kubectl apply -f resources/income_explainer.yaml
[ ]:
!kubectl wait --for condition=ready --timeout=300s sdep --all -n seldon
[ ]:
import numpy as np

from seldon_core.seldon_client import SeldonClient

sc = SeldonClient(
    deployment_name="income",
    namespace="seldon",
    gateway="ambassador",
    gateway_endpoint="localhost:8003",
)

Use the Python client library to get a prediction.

[ ]:
data = np.array([[39, 7, 1, 1, 1, 1, 4, 1, 2174, 0, 40, 9]])
r = sc.predict(data=data)
print(r.response)

Use curl to get a prediction.

[ ]:
!curl -d '{"data": {"ndarray":[[39, 7, 1, 1, 1, 1, 4, 1, 2174, 0, 40, 9]]}}' \
   -X POST http://localhost:8003/seldon/seldon/income/api/v1.0/predictions \
   -H "Content-Type: application/json"

Use the Python client library to get an explanation.

[ ]:
data = np.array([[39, 7, 1, 1, 1, 1, 4, 1, 2174, 0, 40, 9]])
explanation = sc.explain(deployment_name="income", predictor="default", data=data)
print(explanation.response["data"]["anchor"])

Use curl to get an explanation.

[ ]:
!curl -s -X POST -H 'Content-Type: application/json' \
    -d '{"data": {"names": ["text"], "ndarray": [[52,  4,  0,  2,  8,  4,  2,  0,  0,  0, 60, 9]]}}' \
    http://localhost:8003/seldon/seldon/income-explainer/default/api/v1.0/explain | jq ".data.anchor"
[ ]:
!kubectl delete -f resources/income_explainer.yaml

Movie Sentiment Model

You can train the model used here yourself by following the full Anchor explanations for movie sentiment example from the Alibi documentation.
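
For reference, the served model is a simple bag-of-words sentiment classifier. A sketch of roughly how the linked Alibi example trains it (not the exact script behind the modelUri below):

from alibi.datasets import fetch_movie_sentiment
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Movie review sentences with positive/negative labels, as used by the Alibi example.
movies = fetch_movie_sentiment()
X, y = movies.data, movies.target

# Bag-of-words features feeding a logistic regression classifier.
clf = Pipeline([
    ("vectorizer", CountVectorizer(min_df=1)),
    ("classifier", LogisticRegression(max_iter=1000)),
])
clf.fit(X, y)
print(clf.predict(["this film has great actors"]))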

[ ]:
%%writefile resources/moviesentiment_explainer.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: movie
spec:
  name: movie
  annotations:
    seldon.io/rest-timeout: "100000"
  predictors:
  - graph:
      children: []
      implementation: SKLEARN_SERVER
      modelUri: "gs://seldon-models/v1.12.0-dev/sklearn/moviesentiment"
      name: classifier
    explainer:
      type: AnchorText
    name: default
    replicas: 1
[ ]:
!kubectl apply -f resources/moviesentiment_explainer.yaml
[ ]:
!kubectl wait --for condition=ready --timeout=300s sdep --all -n seldon
[ ]:
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=movie -o jsonpath='{.items[0].metadata.name}')
[ ]:
!kubectl rollout status deploy/movie-default-explainer
[ ]:
import numpy as np

from seldon_core.seldon_client import SeldonClient

sc = SeldonClient(
    deployment_name="movie",
    namespace="seldon",
    gateway_endpoint="localhost:8003",
    payload_type="ndarray",
)
[ ]:
!curl -d '{"data": {"ndarray":["This film has great actors"]}}' \
   -X POST http://localhost:8003/seldon/seldon/movie/api/v1.0/predictions \
   -H "Content-Type: application/json"
[ ]:
data = np.array(["this film has great actors"])
r = sc.predict(data=data)
print(r)
assert r.success
[ ]:
!curl -s -d '{"data": {"ndarray":["a visually exquisite but narratively opaque and emotionally vapid experience of style and mystification"]}}' \
   -X POST http://localhost:8003/seldon/seldon/movie-explainer/default/api/v1.0/explain \
   -H "Content-Type: application/json" | jq ".data.anchor"
[ ]:
data = np.array(
    [
        "a visually exquisite but narratively opaque and emotionally vapid experience of style and mystification"
    ]
)
explanation = sc.explain(predictor="default", data=data)
print(explanation.response["data"]["anchor"])
[ ]:
!kubectl delete -f resources/moviesentiment_explainer.yaml

Tensorflow CIFAR10 Model

A full Kubeflow example, including training of the model and explainer, can be found in the Kubeflow Pipelines project examples.
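
The explainer artifact referenced in the modelUri below is an Alibi AnchorImage explainer fitted against the classifier's predict function. A minimal sketch, assuming model is the CIFAR10 Keras classifier loaded a few cells further down:

from alibi.explainers import AnchorImage

# Black box access: only a predict function and the image shape are needed.
explainer = AnchorImage(
    model.predict,
    image_shape=(32, 32, 3),
    segmentation_fn="slic",
    segmentation_kwargs={"n_segments": 15, "compactness": 20, "sigma": 0.5},
)

# The anchor is the minimal set of superpixels that fixes the prediction,
# which is what the explain endpoint returns and what is plotted below.
explanation = explainer.explain(X_test[0], threshold=0.95)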

[ ]:
%%writefile resources/cifar10_explainer.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: cifar10-classifier
spec:
  protocol: tensorflow
  annotations:
    seldon.io/rest-timeout: "100000"
  predictors:
  - graph:
      implementation: TENSORFLOW_SERVER
      modelUri: gs://seldon-models/tfserving/cifar10/resnet32
      name: cifar10-classifier
      logger:
         mode: all
    explainer:
      type: AnchorImages
      modelUri: gs://seldon-models/tfserving/cifar10/explainer-py36-0.5.2
    name: default
    replicas: 1
[ ]:
!kubectl apply -f resources/cifar10_explainer.yaml
[ ]:
!kubectl wait --for condition=ready --timeout=300s sdep --all -n seldon
[ ]:
import os

import matplotlib.pyplot as plt
import tensorflow as tf

url = "https://storage.googleapis.com/seldon-models/alibi-detect/classifier/"
path_model = os.path.join(url, "cifar10", "resnet32", "model.h5")
save_path = tf.keras.utils.get_file("resnet32", path_model)
model = tf.keras.models.load_model(save_path)

train, test = tf.keras.datasets.cifar10.load_data()
X_train, y_train = train
X_test, y_test = test

X_train = X_train.astype("float32") / 255
X_test = X_test.astype("float32") / 255
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
class_names = [
    "airplane",
    "automobile",
    "bird",
    "cat",
    "deer",
    "dog",
    "frog",
    "horse",
    "ship",
    "truck",
]
[ ]:
import json
from subprocess import PIPE, Popen, run

import numpy as np

idx = 12
test_example = X_test[idx : idx + 1].tolist()
payload = '{"instances":' + f"{test_example}" + " }"
cmd = f"""curl -s -d '{payload}' \
   http://localhost:8003/seldon/seldon/cifar10-classifier/v1/models/cifar10-classifier/:predict \
   -H "Content-Type: application/json"
"""
ret = Popen(cmd, shell=True, stdout=PIPE)
raw = ret.stdout.read().decode("utf-8")
print(raw)
res = json.loads(raw)
arr = np.array(res["predictions"])
X = X_test[idx].reshape(1, 32, 32, 3)
plt.imshow(X.reshape(32, 32, 3))
plt.axis("off")
plt.show()
print("class:", class_names[y_test[idx][0]])
print("prediction:", class_names[arr[0].argmax()])
[ ]:
test_example = X_test[idx : idx + 1].tolist()
payload = '{"instances":' + f"{test_example}" + " }"
cmd = f"""curl -s -d '{payload}' \
   http://localhost:8003/seldon/seldon/cifar10-classifier-explainer/default/v1/models/cifar10-classifier:explain \
   -H "Content-Type: application/json"
"""
ret = Popen(cmd, shell=True, stdout=PIPE)
raw = ret.stdout.read().decode("utf-8")
explanation = json.loads(raw)
arr = np.array(explanation["data"]["anchor"])
plt.imshow(arr)
[ ]:
# or using non-standard seldon extension

test_example = X_test[idx : idx + 1].tolist()
payload = '{"instances":' + f"{test_example}" + " }"
cmd = f"""curl -s -d '{payload}' \
   http://localhost:8003/seldon/seldon/cifar10-classifier-explainer/default/v1/models/:explain \
   -H "Content-Type: application/json"
"""
ret = Popen(cmd, shell=True, stdout=PIPE)
raw = ret.stdout.read().decode("utf-8")
explanation = json.loads(raw)
arr = np.array(explanation["data"]["anchor"])
plt.imshow(arr)
[ ]:
!kubectl delete -f resources/cifar10_explainer.yaml

Wine Prediction Model with Shap Explainer

You can train the model and explainer used here yourself by following the full example in the Kernel SHAP explanation for multinomial logistic regression models in the Alibi project documentation.

Note that we used Python 3.6 and Alibi version 0.5.5.
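
The served explainer is an Alibi KernelShap explainer wrapped around the classifier's decision_function, which is why the SKLEARN_SERVER below is configured with the method: decision_function parameter. A rough sketch of how such an explainer is built (using the wine data and scaler prepared in the next few cells):

from alibi.explainers import KernelShap
from sklearn.linear_model import LogisticRegression

# Multinomial logistic regression on the standardised wine features.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train_norm, y_train)

# Kernel SHAP explains the margin scores, hence decision_function and a logit link.
explainer = KernelShap(clf.decision_function, link="logit", feature_names=feature_names)
explainer.fit(X_train_norm[:100])

# Per-class positive/negative feature attributions for a single instance.
explanation = explainer.explain(X_test_norm[:1])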

[ ]:
import shap

shap.initjs()
[ ]:
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split

wine = load_wine()
data = wine.data
target = wine.target
target_names = wine.target_names
feature_names = wine.feature_names
X_train, X_test, y_train, y_test = train_test_split(
    data,
    target,
    test_size=0.2,
    random_state=0,
)
print("Training records: {}".format(X_train.shape[0]))
print("Testing records: {}".format(X_test.shape[0]))
[ ]:
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler().fit(X_train)
X_train_norm = scaler.transform(X_train)
X_test_norm = scaler.transform(X_test)
[ ]:
%%writefile resources/wine_explainer.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: wine
spec:
  annotations:
    seldon.io/rest-timeout: "100000"
  predictors:
  - graph:
      children: []
      implementation: SKLEARN_SERVER
      modelUri: gs://seldon-models/sklearn/wine/model-py36-0.23.2
      name: classifier
      parameters:
        - name: method
          type: STRING
          value: decision_function
    explainer:
      type: KernelShap
      modelUri: gs://seldon-models/sklearn/wine/kernel_shap_py36_alibi_0.5.5
    name: default
    replicas: 1
[ ]:
!kubectl apply -f resources/wine_explainer.yaml
[ ]:
!kubectl wait --for condition=ready --timeout=300s sdep --all -n seldon
[ ]:
import numpy as np

from seldon_core.seldon_client import SeldonClient

sc = SeldonClient(
    deployment_name="wine",
    namespace="seldon",
    gateway="ambassador",
    gateway_endpoint="localhost:8003",
)

Use the Python client library to get a prediction.

[ ]:
data = np.array(
    [
        [
            -0.24226334,
            0.26757916,
            0.42085937,
            0.7127641,
            0.84067236,
            -1.27747161,
            -0.60582812,
            -0.9706341,
            -0.5873972,
            2.42611713,
            -2.06608025,
            -1.55017035,
            -0.86659858,
        ]
    ]
)
r = sc.predict(data=data)
print(r.response)
class_idx = np.argmax(np.array(r.response["data"]["tensor"]["values"]))

Use curl to get a prediction.

[ ]:
!curl -d '{"data": {"ndarray":[[-0.24226334,  0.26757916,  0.42085937,  0.7127641 ,  0.84067236, -1.27747161, -0.60582812, -0.9706341 , -0.5873972 ,  2.42611713, -2.06608025, -1.55017035, -0.86659858]]}}' \
   -X POST http://localhost:8003/seldon/seldon/wine/api/v1.0/predictions \
   -H "Content-Type: application/json"

Use the Python client library to get an explanation.

[ ]:
import json

data = np.array(
    [
        [
            -0.24226334,
            0.26757916,
            0.42085937,
            0.7127641,
            0.84067236,
            -1.27747161,
            -0.60582812,
            -0.9706341,
            -0.5873972,
            2.42611713,
            -2.06608025,
            -1.55017035,
            -0.86659858,
        ]
    ]
)
explanation = sc.explain(deployment_name="wine", predictor="default", data=data)
explanation = explanation.response
expStr = json.dumps(explanation)
[ ]:
from alibi.api.interfaces import Explanation

explanation = Explanation.from_json(expStr)
[ ]:
explanation.shap_values = np.array(explanation.shap_values)
explanation.raw["instances"] = np.array(explanation.raw["instances"])
[ ]:
idx = 0
shap.force_plot(
    explanation.expected_value[class_idx],
    explanation.shap_values[class_idx][idx, :],
    explanation.raw["instances"][idx][None, :],
    explanation.feature_names,
)

Use curl to get an explanation.

[ ]:
!curl -s -X POST -H 'Content-Type: application/json' \
    -d '{"data": {"names": ["text"], "ndarray": [[-0.24226334,  0.26757916,  0.42085937,  0.7127641 ,  0.84067236, -1.27747161, -0.60582812, -0.9706341 , -0.5873972 ,  2.42611713, -2.06608025, -1.55017035, -0.86659858]]}}' \
    http://localhost:8003/seldon/seldon/wine-explainer/default/api/v1.0/explain | jq .
[ ]:
!kubectl delete -f resources/wine_explainer.yaml

MNIST Model with Integrated Gradients Explainer

You can train the model and explainer used here yourself by following the full Integrated gradients for MNIST example in the Alibi project documentation.

Note that we used Python 3.6 and Alibi version 0.5.2.
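
Unlike the anchor explainers above, Integrated Gradients is a white box technique: the saved artifact at the modelUri is essentially the Keras model plus an Alibi IntegratedGradients configuration. A sketch of the underlying call, assuming model is a trained Keras MNIST classifier (not loaded in this notebook) and X_test_sample is the batch prepared below:

from alibi.explainers import IntegratedGradients

# White box: the explainer needs gradients, so it wraps the Keras model directly.
ig = IntegratedGradients(model, n_steps=50, method="gausslegendre")

# Attribute each pixel's contribution towards the predicted class of each sample.
preds = model.predict(X_test_sample).argmax(axis=1)
explanation = ig.explain(X_test_sample, baselines=None, target=preds)
attrs = explanation.attributions[0]  # shape (nb_samples, 28, 28, 1)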

[ ]:
%%writefile resources/mnist_rest_explainer.yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: tfserving
spec:
  name: mnist
  predictors:
  - graph:
      children: []
      implementation: TENSORFLOW_SERVER
      modelUri: gs://seldon-models/tfserving/mnist-model
      name: mnist-model
      parameters:
        - name: signature_name
          type: STRING
          value: predict_images
        - name: model_name
          type: STRING
          value: mnist-model
    explainer:
      type: IntegratedGradients
      modelUri: gs://seldon-models/keras/mnist
    name: default
    replicas: 1
[ ]:
!kubectl apply -f resources/mnist_rest_explainer.yaml
[ ]:
!kubectl wait --for condition=ready --timeout=300s sdep --all -n seldon
[ ]:
import numpy as np
import tensorflow as tf
from tensorflow.keras.utils import to_categorical

train, test = tf.keras.datasets.mnist.load_data()
X_train, y_train = train
X_test, y_test = test
test_labels = y_test.copy()
train_labels = y_train.copy()

X_train = X_train.reshape(-1, 28, 28, 1).astype("float64") / 255
X_test = X_test.reshape(-1, 28, 28, 1).astype("float64") / 255
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
[ ]:
nb_samples = 10
X_test_sample = X_test[:nb_samples]
[ ]:
import json

d = {
    "data": {"tensor": {"shape": [10, 784], "values": X_test_sample.flatten().tolist()}}
}
with open("input.json", "w") as f:
    f.write(json.dumps(d))
[ ]:
res=!curl -s -H 'Content-Type: application/json' \
    -d @./input.json \
    http://localhost:8003/seldon/seldon/tfserving/api/v1.0/predictions
res=json.loads(res[0])
[ ]:
predictions = np.array(res["data"]["tensor"]["values"]).reshape(
    res["data"]["tensor"]["shape"]
)
predictions = predictions.argmax(axis=1)
[ ]:
import json

d = {
    "data": {
        "tensor": {
            "shape": X_test_sample.shape,
            "values": X_test_sample.flatten().tolist(),
        }
    }
}
with open("input.json", "w") as f:
    f.write(json.dumps(d))
[ ]:
res=!curl -s -H 'Content-Type: application/json' \
    -d @./input.json \
    http://localhost:8003/seldon/seldon/tfserving-explainer/default/api/v1.0/explain
res=json.loads(res[0])
[ ]:
attrs = np.array(res["data"]["attributions"][0])
[ ]:
import matplotlib.pyplot as plt

fig, ax = plt.subplots(nrows=3, ncols=4, figsize=(10, 7))
image_ids = [0, 1, 9]
cmap_bound = np.abs(attrs[[0, 1, 9]]).max()

for row, image_id in enumerate(image_ids):
    # original images
    ax[row, 0].imshow(X_test[image_id].squeeze(), cmap="gray")
    ax[row, 0].set_title(f"Prediction: {predictions[image_id]}")

    # attributions
    attr = attrs[image_id]
    im = ax[row, 1].imshow(
        attr.squeeze(), vmin=-cmap_bound, vmax=cmap_bound, cmap="PiYG"
    )

    # positive attributions
    attr_pos = attr.clip(0, 1)
    im_pos = ax[row, 2].imshow(
        attr_pos.squeeze(), vmin=-cmap_bound, vmax=cmap_bound, cmap="PiYG"
    )

    # negative attributions
    attr_neg = attr.clip(-1, 0)
    im_neg = ax[row, 3].imshow(
        attr_neg.squeeze(), vmin=-cmap_bound, vmax=cmap_bound, cmap="PiYG"
    )

ax[0, 1].set_title("Attributions")
ax[0, 2].set_title("Positive attributions")
ax[0, 3].set_title("Negative attributions")

for ax in fig.axes:
    ax.axis("off")

fig.colorbar(im, cax=fig.add_axes([0.95, 0.25, 0.03, 0.5]));
[ ]:
!kubectl delete -f resources/mnist_rest_explainer.yaml

XGBoost Model with TreeShap Explainer

You can train the model and explainer used here yourself by following the full Explaining Tree Models with Interventional Feature Perturbation Tree SHAP example in the Alibi project documentation.

Note that we used Python 3.7 and Alibi version 0.6.0.
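
TreeShap is also a white box explainer, so the artifact at the modelUri below is fitted directly against the XGBoost model rather than a predict endpoint. A rough sketch, assuming model is the trained XGBoost classifier and X_train/X_batch are encoded adult-dataset arrays as in the linked Alibi example:

from alibi.explainers import TreeShap

# White box: the explainer is handed the tree model itself.
explainer = TreeShap(model, model_output="raw", task="classification")

# Interventional feature perturbation requires a background dataset at fit time.
explainer.fit(X_train[:100])

# Shapley value attributions (positive and negative) for a batch of instances.
explanation = explainer.explain(X_batch)
shap_values = explanation.shap_values[0]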

[ ]:
%%writefile resources/income_explainer.yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: income
spec:
  annotations:
    seldon.io/rest-timeout: "100000"
  predictors:
  - graph:
      children: []
      implementation: XGBOOST_SERVER
      modelUri: gs://seldon-models/xgboost/adult/model_1.0.2
      name: income-model
    explainer:
      type: TreeShap
      modelUri: gs://seldon-models/xgboost/adult/tree_shap_py37_alibi_0.6.0
    name: default
    replicas: 1
[ ]:
!kubectl apply -f resources/income_explainer.yaml
[ ]:
!kubectl wait --for condition=ready --timeout=300s sdep --all -n seldon
[ ]:
import numpy as np

from seldon_core.seldon_client import SeldonClient

sc = SeldonClient(
    deployment_name="income",
    namespace="seldon",
    gateway="istio",
    gateway_endpoint="localhost:8003",
)

Use the Python client library to get a prediction.

[ ]:
data = np.array([[52, 4, 0, 2, 8, 4, 2, 0, 0, 0, 60, 9]])
r = sc.predict(data=data)
print(r.response)

Use the Python client library to get an explanation.

[ ]:
from alibi.datasets import fetch_adult

adult = fetch_adult()
data = adult.data
feature_names = adult.feature_names
[ ]:
import json
import time

# data = np.array([[52,  4,  0,  2,  8,  4,  2,  0,  0,  0, 60,  9]])
start = time.time()
res = sc.explain(deployment_name="income", predictor="default", data=data[0:1000])
end = time.time()
print("Elapsed time:", end - start)
explanation = res.response
explanationStr = json.dumps(explanation)
[ ]:
from alibi.api.interfaces import Explanation

explanation = Explanation.from_json(explanationStr)
[ ]:
explanation.shap_values = np.array(explanation.shap_values)
explanation.raw["instances"] = np.array(explanation.raw["instances"])
[ ]:
def decode_data(X, feature_names, category_map):
    """
    Given an encoded data matrix `X` returns a matrix where the
    categorical levels have been replaced by human readable categories.
    """

    # expect 2D array
    # if len(X.shape) == 1:
    #    X = X.reshape(1, -1)

    X_new = np.zeros(X.shape, dtype=object)
    # Check if a column is categorical and replace it with values from category map
    for idx, name in enumerate(feature_names):
        categories = category_map.get(str(idx), None)
        if categories:
            for j, category in enumerate(categories):
                encoded_vals = X[:, idx] == j
                X_new[encoded_vals, idx] = category
        else:
            X_new[:, idx] = X[:, idx]

    return X_new
[ ]:
decoded_features = decode_data(
    data, explanation.feature_names, explanation.categorical_names
)
[ ]:
import shap

shap.initjs()
[ ]:
shap.force_plot(
    explanation.expected_value,  # 0 is a class index but we have single-output model
    explanation.shap_values[0],
    decoded_features,
    feature_names,
)
[ ]:
!kubectl delete -f resources/income_explainer.yaml

Experimental: XGBoost Model with GPU TreeShap Explainer

You can train the model and explainer used here yourself by following the full Explaining Tree Models with Interventional Feature Perturbation Tree SHAP example in the Alibi project documentation.

Note that we used Python 3.8.5 and Alibi master to fit the GPU-based TreeShap model.

  • You will need a cluster with GPUs. This has been tested on GKE with NVIDIA Tesla P100 GPUs; a quick check for GPU nodes is shown below.
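
Before deploying, you can sanity-check that your nodes advertise GPU resources (they will only do so if the NVIDIA device plugin is installed):

!kubectl describe nodes | grep -i "nvidia.com/gpu"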

[ ]:
%%writefile resources/income_gpu_explainer.yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: incomegpu
spec:
  annotations:
    seldon.io/rest-timeout: "100000"
  predictors:
  - graph:
      children: []
      implementation: XGBOOST_SERVER
      modelUri: gs://seldon-models/xgboost/adult/model_1.0.2
      name: income-model
    explainer:
      type: TreeShap
      modelUri: gs://seldon-models/xgboost/adult/tree_shap_gpu
      containerSpec:
        name: explainer
        image: seldonio/alibiexplainer-gpu:1.7.0-dev
        resources:
          limits:
            nvidia.com/gpu: 1
    name: default
    replicas: 1
[ ]:
!kubectl apply -f resources/income_gpu_explainer.yaml
[ ]:
!kubectl wait --for condition=ready --timeout=300s sdep --all -n seldon
[ ]:
import numpy as np

from seldon_core.seldon_client import SeldonClient

sc = SeldonClient(
    deployment_name="incomegpu",
    namespace="seldon",
    gateway="istio",
    gateway_endpoint="localhost:8003",
)

Use the Python client library to get a prediction.

[ ]:
data = np.array([[52, 4, 0, 2, 8, 4, 2, 0, 0, 0, 60, 9]])
r = sc.predict(data=data)
print(r.response)

Use the Python client library to get an explanation.

[ ]:
from alibi.datasets import fetch_adult

adult = fetch_adult()
data = adult.data
[ ]:
import time

start = time.time()
res = sc.explain(deployment_name="incomegpu", predictor="default", data=data[0:1000])
end = time.time()
print("Elapsed time:", end - start)

Running this test on P100 GPUs on GKE, we see at least a 15x speed-up over the CPU example above.

[ ]:
from alibi.api.interfaces import Explanation

if res.success:
    print("Successful explanation")
    explanation = res.response
    explanationStr = json.dumps(explanation)
    explanation = Explanation.from_json(explanationStr)

    explanation.shap_values = np.array(explanation.shap_values)
    explanation.raw["instances"] = np.array(explanation.raw["instances"])
else:
    explanation = None
    print("Explanation not successful: are you running on GPU enabled cluster?")
[ ]:
def decode_data(X, feature_names, category_map):
    """
    Given an encoded data matrix `X` returns a matrix where the
    categorical levels have been replaced by human readable categories.
    """

    # expect 2D array
    if len(X.shape) == 1:
        X = X.reshape(1, -1)

    X_new = np.zeros(X.shape, dtype=object)
    # Check if a column is categorical and replace it with values from category map
    for idx, name in enumerate(feature_names):
        categories = category_map.get(str(idx), None)
        if categories:
            for j, category in enumerate(categories):
                encoded_vals = X[:, idx] == j
                X_new[encoded_vals, idx] = category
        else:
            X_new[:, idx] = X[:, idx]

    return X_new
[ ]:
import shap

shap.initjs()
[ ]:
if explanation is not None:
    decoded_features = decode_data(
        data, explanation.feature_names, explanation.categorical_names
    )
    shap.force_plot(
        explanation.expected_value[
            0
        ],  # 0 is a class index but we have single-output model
        explanation.shap_values[0],
        decoded_features,
        explanation.feature_names,
    )
[ ]:
!kubectl delete -f resources/income_gpu_explainer.yaml

Triton CIFAR10 Model

[ ]:
%%writefile resources/cifar10_explainer.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: cifar10-classifier
spec:
  protocol: kfserving
  annotations:
    seldon.io/rest-timeout: "100000"
  predictors:
  - graph:
      implementation: TRITON_SERVER
      modelUri: gs://seldon-models/triton/tf_cifar10
      name: cifar10
      logger:
         mode: all
    explainer:
      type: AnchorImages
      modelUri: gs://seldon-models/tfserving/cifar10/explainer-py36-0.5.2
    name: default
    replicas: 1
[ ]:
!kubectl apply -f resources/cifar10_explainer.yaml
[ ]:
!kubectl wait --for condition=ready --timeout=300s sdep --all -n seldon
[ ]:
import os

import matplotlib.pyplot as plt
import tensorflow as tf

url = "https://storage.googleapis.com/seldon-models/alibi-detect/classifier/"
path_model = os.path.join(url, "cifar10", "resnet32", "model.h5")
save_path = tf.keras.utils.get_file("resnet32", path_model)
model = tf.keras.models.load_model(save_path)

train, test = tf.keras.datasets.cifar10.load_data()
X_train, y_train = train
X_test, y_test = test

X_train = X_train.astype("float32") / 255
X_test = X_test.astype("float32") / 255
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
class_names = [
    "airplane",
    "automobile",
    "bird",
    "cat",
    "deer",
    "dog",
    "frog",
    "horse",
    "ship",
    "truck",
]
[ ]:
import json
from subprocess import PIPE, Popen, run

import numpy as np

idx = 12
test_example = X_test[idx : idx + 1].tolist()
payload = (
    '{"inputs":[{"name":"input_1","datatype":"FP32","shape":[1, 32, 32, 3],"data":'
    + f"{test_example}"
    + "}]}"
)
cmd = f"""curl -d '{payload}' \
   http://localhost:8003/seldon/seldon/cifar10-classifier/v2/models/cifar10/infer \
   -H "Content-Type: application/json"
"""
ret = Popen(cmd, shell=True, stdout=PIPE)
raw = ret.stdout.read().decode("utf-8")
res = json.loads(raw)
arr = np.array(res["outputs"][0]["data"])
X = X_test[idx].reshape(1, 32, 32, 3)
plt.imshow(X.reshape(32, 32, 3))
plt.axis("off")
plt.show()
print("class:", class_names[y_test[idx][0]])
print("prediction:", class_names[arr.argmax()])
[ ]:
test_example = X_test[idx : idx + 1].tolist()
payload = (
    '{"inputs":[{"name":"input_1","datatype":"FP32","shape":[1, 32, 32, 3],"data":'
    + f"{test_example}"
    + "}]}"
)
cmd = f"""curl -d '{payload}' \
   http://localhost:8003/seldon/seldon/cifar10-classifier-explainer/default/v2/models/cifar10/explain \
   -H "Content-Type: application/json"
"""
ret = Popen(cmd, shell=True, stdout=PIPE)
raw = ret.stdout.read().decode("utf-8")
explanation = json.loads(raw)
arr = np.array(explanation["data"]["anchor"])
plt.imshow(arr)
[ ]:
!kubectl delete -f resources/cifar10_explainer.yaml