This page was generated from notebooks/explainer_examples.ipynb.

Example Model Explanations with Seldon

Seldon Core supports various out-of-the-box explainers that leverage the Alibi machine learning explainability open source library.

In this notebook we show how to use the pre-packaged explainer functionality, which simplifies deploying advanced model explainers alongside your models.

Seldon provides the following out-of-the-box pre-packaged explainers (a minimal local sketch of the anchor technique is shown after this list):

  • Anchor Tabular Explainer
    • A black-box explainer that uses the anchor technique for tabular data
    • It answers the question of which features were most "important" for a tabular prediction
  • Anchor Image Explainer
    • A black-box explainer that uses the anchor technique for image data
    • It answers the question of which pixels were most "important" for an image prediction
  • Anchor Text Explainer
    • A black-box explainer that uses the anchor technique for text data
    • It answers the question of which tokens were most "important" for a text prediction
  • Kernel SHAP Explainer
    • A black-box explainer that uses the Kernel SHAP technique for tabular data
    • It provides positive and negative feature attributions that contributed to the prediction
  • Integrated Gradients Explainer
    • A white-box explainer that uses the Integrated Gradients technique for Keras models
    • It provides importance values for each feature
  • Tree SHAP Explainer
    • A white-box explainer that uses the Tree SHAP technique for tree-based models
    • It provides positive and negative feature attributions that contributed to the prediction
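To make the anchor technique above concrete, here is a minimal local sketch using the Alibi library directly on a toy scikit-learn model. This is illustrative only: the pre-packaged explainers load a pre-fitted explainer from the modelUri in the SeldonDeployment instead of fitting one in the notebook, and the RandomForest model and wine dataset below are stand-ins chosen for the example.

from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

# Toy model: the anchor technique only needs a black-box predict function
wine = load_wine()
clf = RandomForestClassifier(random_state=0).fit(wine.data, wine.target)

explainer = AnchorTabular(lambda x: clf.predict(x), feature_names=wine.feature_names)
explainer.fit(wine.data, disc_perc=(25, 50, 75))

explanation = explainer.explain(wine.data[0], threshold=0.95)
print("Anchor:   ", " AND ".join(explanation.anchor))
print("Precision:", explanation.precision)
print("Coverage: ", explanation.coverage)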

Running this notebook

For the image model examples in this notebook you will need the following Python packages; if they are not already installed in your notebook environment, install them with pip:

  • Pillow (pip install Pillow)
  • matplotlib (pip install matplotlib)
  • tensorflow (pip install tensorflow)

You will also need to start Jupyter with settings to allow for large payloads, for example:

jupyter notebook --NotebookApp.iopub_data_rate_limit=1000000000

Setup Seldon Core

Follow the instructions to Setup Cluster with Ambassador Ingress and Install Seldon Core.

Then port-forward to that ingress on localhost:8003 in a separate terminal either with:

  • Ambassador: kubectl port-forward $(kubectl get pods -n seldon -l app.kubernetes.io/name=ambassador -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:8080

  • Istio: kubectl port-forward $(kubectl get pods -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system 8003:80

Create Namespace for experimentation

We will first create the seldon namespace, where we will deploy all our models.

[1]:
!kubectl create namespace seldon
Error from server (AlreadyExists): namespaces "seldon" already exists

Then we will set the current context to use the seldon namespace, so that all our commands run there by default (instead of in the default namespace).

[2]:
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
Context "kind-kind" modified.

Income Prediction Model with Anchors Explainer

You can train the model and explainer used here yourself by following the full Anchor Explanations for Income notebook in the Alibi project documentation.

Note that we used Python 3.6 and Alibi version 0.5.2.

[3]:
%%writefile resources/income_explainer.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: income
spec:
  name: income
  annotations:
    seldon.io/rest-timeout: "100000"
  predictors:
  - graph:
      children: []
      implementation: SKLEARN_SERVER
      modelUri: gs://seldon-models/sklearn/income/model-0.23.2
      name: classifier
    explainer:
      type: AnchorTabular
      modelUri: gs://seldon-models/sklearn/income/explainer-py36-0.5.2
    name: default
    replicas: 1
Overwriting resources/income_explainer.yaml
[4]:
!kubectl apply -f resources/income_explainer.yaml
seldondeployment.machinelearning.seldon.io/income created
[5]:
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=income -o jsonpath='{.items[0].metadata.name}')
Waiting for deployment "income-default-0-classifier" rollout to finish: 0 of 1 updated replicas are available...
deployment "income-default-0-classifier" successfully rolled out
[6]:
!kubectl rollout status deploy/income-default-explainer
Waiting for deployment "income-default-explainer" rollout to finish: 0 of 1 updated replicas are available...
deployment "income-default-explainer" successfully rolled out
[7]:
from seldon_core.seldon_client import SeldonClient
import numpy as np
sc = SeldonClient(deployment_name="income",namespace="seldon", gateway="ambassador", gateway_endpoint="localhost:8003")

Use the Python client library to get a prediction.

[8]:
data = np.array([[39, 7, 1, 1, 1, 1, 4, 1, 2174, 0, 40, 9]])
r = sc.predict(data=data)
print(r.response)
{'data': {'names': ['t:0', 't:1'], 'tensor': {'shape': [1, 2], 'values': [0.8585304277244477, 0.14146957227555243]}}, 'meta': {}}

Use curl to get a prediction.

[9]:
!curl -d '{"data": {"ndarray":[[39, 7, 1, 1, 1, 1, 4, 1, 2174, 0, 40, 9]]}}' \
   -X POST http://localhost:8003/seldon/seldon/income/api/v1.0/predictions \
   -H "Content-Type: application/json"
{"data":{"names":["t:0","t:1"],"ndarray":[[0.8585304277244477,0.14146957227555243]]},"meta":{}}

Use the Python client library to get an explanation.

[10]:
data = np.array([[39, 7, 1, 1, 1, 1, 4, 1, 2174, 0, 40, 9]])
explanation = sc.explain(deployment_name="income", predictor="default", data=data)
print(explanation.response["data"]["anchor"])
['Marital Status = Never-Married']
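The same response should also carry the anchor's precision and coverage estimates; the field names below are assumed from the Alibi explanation schema (the same fields appear in the raw JSON responses shown later in this notebook).

print(explanation.response["data"].get("precision"))
print(explanation.response["data"].get("coverage"))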

Use curl to get an explanation.

[11]:
!curl -X POST -H 'Content-Type: application/json' \
    -d '{"data": {"names": ["text"], "ndarray": [[52,  4,  0,  2,  8,  4,  2,  0,  0,  0, 60, 9]]}}' \
    http://localhost:8003/seldon/seldon/income-explainer/default/api/v1.0/explain | jq ".data.anchor"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1715  100  1624  100    91   8837    495 --:--:-- --:--:-- --:--:--  8826
[
  "Marital Status = Separated"
]
[12]:
!kubectl delete -f resources/income_explainer.yaml
seldondeployment.machinelearning.seldon.io "income" deleted

Movie Sentiment Model

You can train the model used here yourself by following the full Anchor explanations for movie sentiment example in the Alibi documentation.

[13]:
%%writefile resources/moviesentiment_explainer.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: movie
spec:
  name: movie
  annotations:
    seldon.io/rest-timeout: "100000"
  predictors:
  - graph:
      children: []
      implementation: SKLEARN_SERVER
      modelUri: gs://seldon-models/sklearn/moviesentiment
      name: classifier
    explainer:
      type: AnchorText
    name: default
    replicas: 1
Overwriting resources/moviesentiment_explainer.yaml
[14]:
!kubectl apply -f resources/moviesentiment_explainer.yaml
seldondeployment.machinelearning.seldon.io/movie created
[15]:
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=movie -o jsonpath='{.items[0].metadata.name}')
Waiting for deployment "movie-default-0-classifier" rollout to finish: 0 of 1 updated replicas are available...
deployment "movie-default-0-classifier" successfully rolled out
[16]:
!kubectl rollout status deploy/movie-default-explainer
deployment "movie-default-explainer" successfully rolled out
[17]:
from seldon_core.seldon_client import SeldonClient
import numpy as np
sc = SeldonClient(deployment_name="movie", namespace="seldon", gateway_endpoint="localhost:8003", payload_type='ndarray')
[18]:
!curl -d '{"data": {"ndarray":["This film has great actors"]}}' \
   -X POST http://localhost:8003/seldon/seldon/movie/api/v1.0/predictions \
   -H "Content-Type: application/json"
{"data":{"names":["t:0","t:1"],"ndarray":[[0.21266916924914636,0.7873308307508536]]},"meta":{}}
[19]:
data = np.array(['this film has great actors'])
r = sc.predict(data=data)
print(r)
assert(r.success==True)
Success:True message:
Request:
meta {
}
data {
  ndarray {
    values {
      string_value: "this film has great actors"
    }
  }
}

Response:
{'data': {'names': ['t:0', 't:1'], 'ndarray': [[0.21266916924914636, 0.7873308307508536]]}, 'meta': {}}
[20]:
!curl -s -d '{"data": {"ndarray":["a visually exquisite but narratively opaque and emotionally vapid experience of style and mystification"]}}' \
   -X POST http://localhost:8003/seldon/seldon/movie-explainer/default/api/v1.0/explain \
   -H "Content-Type: application/json" | jq ".data.anchor"
[
  "emotionally",
  "vapid"
]
[21]:
data = np.array(['a visually exquisite but narratively opaque and emotionally vapid experience of style and mystification'])
explanation = sc.explain(predictor="default", data=data)
print(explanation.response["data"]["anchor"])
['emotionally', 'vapid']
[22]:
!kubectl delete -f resources/moviesentiment_explainer.yaml
seldondeployment.machinelearning.seldon.io "movie" deleted

Tensorflow CIFAR10 Model

A full Kubeflow example with training of the model and explainer can be found in the Kubeflow Pipelines project examples.

[1]:
%%writefile resources/cifar10_explainer.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: cifar10-classifier
spec:
  protocol: tensorflow
  annotations:
    seldon.io/rest-timeout: "100000"
  predictors:
  - componentSpecs:
    graph:
      implementation: TENSORFLOW_SERVER
      modelUri: gs://seldon-models/tfserving/cifar10/resnet32
      name: cifar10-classifier
      logger:
         mode: all
    explainer:
      type: AnchorImages
      modelUri: gs://seldon-models/tfserving/cifar10/explainer-py36-0.5.2
    name: default
    replicas: 1
Overwriting resources/cifar10_explainer.yaml
[2]:
!kubectl apply -f resources/cifar10_explainer.yaml
seldondeployment.machinelearning.seldon.io/cifar10-classifier created
[4]:
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=cifar10-classifier -o jsonpath='{.items[0].metadata.name}')
deployment "cifar10-classifier-default-0-cifar10-classifier" successfully rolled out
[5]:
!kubectl rollout status deploy/cifar10-classifier-default-explainer
deployment "cifar10-classifier-default-explainer" successfully rolled out
[6]:
import tensorflow as tf
import matplotlib.pyplot as plt
import os

url = 'https://storage.googleapis.com/seldon-models/alibi-detect/classifier/'
path_model = os.path.join(url, "cifar10", "resnet32", 'model.h5')
save_path = tf.keras.utils.get_file("resnet32", path_model)
model = tf.keras.models.load_model(save_path)

train, test = tf.keras.datasets.cifar10.load_data()
X_train, y_train = train
X_test, y_test = test

X_train = X_train.astype('float32') / 255
X_test = X_test.astype('float32') / 255
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']
(50000, 32, 32, 3) (50000, 1) (10000, 32, 32, 3) (10000, 1)
[7]:
from subprocess import run, Popen, PIPE
import json
import numpy as np
idx=12
test_example=X_test[idx:idx+1].tolist()
payload='{"instances":'+f"{test_example}"+' }'
cmd=f"""curl -d '{payload}' \
   http://localhost:8003/seldon/seldon/cifar10-classifier/v1/models/cifar10-classifier/:predict \
   -H "Content-Type: application/json"
"""
ret = Popen(cmd, shell=True,stdout=PIPE)
raw = ret.stdout.read().decode("utf-8")
print(raw)
res=json.loads(raw)
arr=np.array(res["predictions"])
X = X_test[idx].reshape(1, 32, 32, 3)
plt.imshow(X.reshape(32, 32, 3))
plt.axis('off')
plt.show()
print("class:",class_names[y_test[idx][0]])
print("prediction:",class_names[arr[0].argmax()])
{
    "predictions": [[8.98417127e-08, 1.35163679e-12, 5.20754609e-13, 9.01404201e-05, 4.04729e-12, 0.999909759, 9.77382086e-09, 1.30629796e-09, 5.39957488e-12, 3.7917457e-14]
    ]
}
[Figure: the CIFAR-10 test image (a dog) sent for prediction]
class: dog
prediction: dog
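As an alternative sketch to shelling out to curl, the same Tensorflow-protocol endpoint can be called with the requests library, reusing the X_test, idx and class_names variables defined above (requests is assumed to be installed in the notebook environment).

import requests
import numpy as np

# Same endpoint as the curl command above (Tensorflow protocol :predict)
url = "http://localhost:8003/seldon/seldon/cifar10-classifier/v1/models/cifar10-classifier/:predict"
resp = requests.post(url, json={"instances": X_test[idx:idx+1].tolist()})
preds = np.array(resp.json()["predictions"])
print("prediction:", class_names[preds[0].argmax()])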
[8]:
test_example=X_test[idx:idx+1].tolist()
payload='{"instances":'+f"{test_example}"+' }'
cmd=f"""curl -d '{payload}' \
   http://localhost:8003/seldon/seldon/cifar10-classifier-explainer/default/v1/models/cifar10-classifier:explain \
   -H "Content-Type: application/json"
"""
ret = Popen(cmd, shell=True,stdout=PIPE)
raw = ret.stdout.read().decode("utf-8")
explanation = json.loads(raw)
arr = np.array(explanation["data"]["anchor"])
plt.imshow(arr)
[8]:
<matplotlib.image.AxesImage at 0x7f11753fd128>
[Figure: anchor segmentation returned by the explainer for the CIFAR-10 test image]
[9]:
!kubectl delete -f resources/cifar10_explainer.yaml
seldondeployment.machinelearning.seldon.io "cifar10-classifier" deleted

Wine Prediction Model with Shap Explainer

You can train the model and explainer used here yourself by following the full Kernel SHAP explanation for multinomial logistic regression models example in the Alibi project documentation.

Note that we used Python 3.6 and Alibi version 0.5.2.

[49]:
import shap
shap.initjs()
[50]:
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
wine = load_wine()
data = wine.data
target = wine.target
target_names = wine.target_names
feature_names  = wine.feature_names
X_train, X_test, y_train, y_test = train_test_split(data,
                                                    target,
                                                    test_size=0.2,
                                                    random_state=0,
                                                   )
print("Training records: {}".format(X_train.shape[0]))
print("Testing records: {}".format(X_test.shape[0]))
Training records: 142
Testing records: 36
[51]:
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
X_train_norm = scaler.transform(X_train)
X_test_norm = scaler.transform(X_test)
[52]:
%%writefile resources/wine_explainer.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: wine
spec:
  annotations:
    seldon.io/rest-timeout: "100000"
  predictors:
  - graph:
      children: []
      implementation: SKLEARN_SERVER
      modelUri: gs://seldon-models/sklearn/wine/model-py36-0.23.2
      name: classifier
      parameters:
        - name: method
          type: STRING
          value: decision_function
    explainer:
      type: KernelShap
      modelUri: gs://seldon-models/sklearn/wine/kernel_shap_py36_alibi_0.5.5
    name: default
    replicas: 1
Overwriting resources/wine_explainer.yaml
[53]:
!kubectl apply -f resources/wine_explainer.yaml
seldondeployment.machinelearning.seldon.io/wine created
[54]:
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=wine -o jsonpath='{.items[0].metadata.name}')
Waiting for deployment "wine-default-0-classifier" rollout to finish: 0 of 1 updated replicas are available...
deployment "wine-default-0-classifier" successfully rolled out
[55]:
!kubectl rollout status deploy/wine-default-explainer
deployment "wine-default-explainer" successfully rolled out
[56]:
from seldon_core.seldon_client import SeldonClient
import numpy as np
sc = SeldonClient(deployment_name="wine",namespace="seldon", gateway="ambassador", gateway_endpoint="localhost:8003")

Use the Python client library to get a prediction.

[57]:
data = np.array([[-0.24226334,  0.26757916,  0.42085937,  0.7127641 ,  0.84067236,
       -1.27747161, -0.60582812, -0.9706341 , -0.5873972 ,  2.42611713,
       -2.06608025, -1.55017035, -0.86659858]])
r = sc.predict(data=data)
print(r.response)
class_idx = np.argmax(np.array(r.response["data"]["tensor"]["values"]))
{'data': {'names': ['t:0', 't:1', 't:2'], 'tensor': {'shape': [1, 3], 'values': [-0.203700284044519, 0.8934751316557469, 2.2237213335499804]}}, 'meta': {}}
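To map the predicted index back to a wine class name, you can reuse the target_names array loaded from scikit-learn earlier in this section (a small convenience check, not part of the deployed model).

print("predicted class:", target_names[class_idx])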

Use curl to get a prediction.

[58]:
!curl -d '{"data": {"ndarray":[[-0.24226334,  0.26757916,  0.42085937,  0.7127641 ,  0.84067236, -1.27747161, -0.60582812, -0.9706341 , -0.5873972 ,  2.42611713, -2.06608025, -1.55017035, -0.86659858]]}}' \
   -X POST http://localhost:8003/seldon/seldon/wine/api/v1.0/predictions \
   -H "Content-Type: application/json"
{"data":{"names":["t:0","t:1","t:2"],"ndarray":[[-0.203700284044519,0.8934751316557469,2.2237213335499804]]},"meta":{}}

Use the Python client library to get an explanation.

[59]:
import json
data = np.array([[-0.24226334,  0.26757916,  0.42085937,  0.7127641 ,  0.84067236,
       -1.27747161, -0.60582812, -0.9706341 , -0.5873972 ,  2.42611713,
       -2.06608025, -1.55017035, -0.86659858]])
explanation = sc.explain(deployment_name="wine", predictor="default", data=data)
explanation = explanation.response
expStr = json.dumps(explanation)
[60]:
from alibi.api.interfaces import Explanation
explanation = Explanation.from_json(expStr)
[61]:
explanation.shap_values = np.array(explanation.shap_values)
explanation.raw["instances"] = np.array(explanation.raw["instances"])
[62]:
idx=0
shap.force_plot(
    explanation.expected_value[class_idx],
    explanation.shap_values[class_idx][idx, :],
    explanation.raw['instances'][idx][None, :],
    explanation.feature_names,
)
[62]:
[SHAP force plot for the explained instance; the interactive JavaScript visualization is not rendered in this static page.]
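If the interactive plot cannot be rendered, a plain-text summary is a workable fallback: the top-ranked features for the predicted class can be read from the importances block of the explanation (the same field names appear in the raw JSON response below).

# Top 5 features for the predicted class, by absolute SHAP attribution
imp = explanation.raw["importances"][str(class_idx)]
for name, effect in list(zip(imp["names"], imp["ranked_effect"]))[:5]:
    print(f"{name}: {effect:.4f}")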

Use curl to get an explanation.

[63]:
!curl -X POST -H 'Content-Type: application/json' \
    -d '{"data": {"names": ["text"], "ndarray": [[-0.24226334,  0.26757916,  0.42085937,  0.7127641 ,  0.84067236, -1.27747161, -0.60582812, -0.9706341 , -0.5873972 ,  2.42611713, -2.06608025, -1.55017035, -0.86659858]]}}' \
    http://localhost:8003/seldon/seldon/wine-explainer/default/api/v1.0/explain | jq .
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4129  100  3916  100   213   1267     68  0:00:03  0:00:03 --:--:--  1267
{
  "meta": {
    "name": "KernelShap",
    "type": [
      "blackbox"
    ],
    "task": "classification",
    "explanations": [
      "local",
      "global"
    ],
    "params": {
      "link": "identity",
      "group_names": null,
      "grouped": false,
      "groups": null,
      "weights": null,
      "summarise_background": false,
      "summarise_result": false,
      "transpose": false,
      "kwargs": {}
    }
  },
  "data": {
    "shap_values": [
      [
        [
          -0.018454421208892513,
          0.012763470836013313,
          -0.001740270040221814,
          -0.07633537428284093,
          0.006251732078452754,
          -0.13734297799429607,
          -0.1184209545879712,
          0.016528383221730947,
          -0.035244307767622385,
          -0.1230174198090298,
          -0.14524487323369617,
          -0.2507145522127333,
          -0.1309476564266916
        ]
      ],
      [
        [
          0.015452322719423872,
          -0.03832005916153608,
          -0.04544081251256327,
          -7.582651671816931e-05,
          -0.04651784436924722,
          0.01980318229602418,
          -0.05109329519459854,
          0.0071260827408027305,
          -0.03975300296252354,
          -0.16144118472393876,
          -0.15110853673546232,
          -0.1006346397643918,
          0.06837621797012206
        ]
      ],
      [
        [
          0.005403884749745513,
          0.020371174031396766,
          0.04217363001738739,
          0.07712885770720579,
          0.03400723775835046,
          0.1216306489876593,
          0.16988456143104136,
          -0.02030618995668898,
          0.07351750311197458,
          0.2879617844043466,
          0.29473542234118644,
          0.3536965903131064,
          0.06890108397640105
        ]
      ]
    ],
    "expected_value": [
      0.7982189373832798,
      1.4171025278703537,
      0.6946151446768677
    ],
    "categorical_names": {},
    "feature_names": [
      "alcohol",
      "malic_acid",
      "ash",
      "alcalinity_of_ash",
      "magnesium",
      "total_phenols",
      "flavanoids",
      "nonflavanoid_phenols",
      "proanthocyanins",
      "color_intensity",
      "hue",
      "od280/od315_of_diluted_wines",
      "proline"
    ],
    "raw": {
      "raw_prediction": [
        [
          -0.203700284044519,
          0.8934751316557469,
          2.2237213335499804
        ]
      ],
      "prediction": [
        2
      ],
      "instances": [
        [
          -0.24226334,
          0.26757916,
          0.42085937,
          0.7127641,
          0.84067236,
          -1.27747161,
          -0.60582812,
          -0.9706341,
          -0.5873972,
          2.42611713,
          -2.06608025,
          -1.55017035,
          -0.86659858
        ]
      ],
      "importances": {
        "0": {
          "ranked_effect": [
            0.2507145522127333,
            0.14524487323369617,
            0.13734297799429607,
            0.1309476564266916,
            0.1230174198090298,
            0.1184209545879712,
            0.07633537428284093,
            0.035244307767622385,
            0.018454421208892513,
            0.016528383221730947,
            0.012763470836013313,
            0.006251732078452754,
            0.001740270040221814
          ],
          "names": [
            "od280/od315_of_diluted_wines",
            "hue",
            "total_phenols",
            "proline",
            "color_intensity",
            "flavanoids",
            "alcalinity_of_ash",
            "proanthocyanins",
            "alcohol",
            "nonflavanoid_phenols",
            "malic_acid",
            "magnesium",
            "ash"
          ]
        },
        "1": {
          "ranked_effect": [
            0.16144118472393876,
            0.15110853673546232,
            0.1006346397643918,
            0.06837621797012206,
            0.05109329519459854,
            0.04651784436924722,
            0.04544081251256327,
            0.03975300296252354,
            0.03832005916153608,
            0.01980318229602418,
            0.015452322719423872,
            0.0071260827408027305,
            7.582651671816931e-05
          ],
          "names": [
            "color_intensity",
            "hue",
            "od280/od315_of_diluted_wines",
            "proline",
            "flavanoids",
            "magnesium",
            "ash",
            "proanthocyanins",
            "malic_acid",
            "total_phenols",
            "alcohol",
            "nonflavanoid_phenols",
            "alcalinity_of_ash"
          ]
        },
        "2": {
          "ranked_effect": [
            0.3536965903131064,
            0.29473542234118644,
            0.2879617844043466,
            0.16988456143104136,
            0.1216306489876593,
            0.07712885770720579,
            0.07351750311197458,
            0.06890108397640105,
            0.04217363001738739,
            0.03400723775835046,
            0.020371174031396766,
            0.02030618995668898,
            0.005403884749745513
          ],
          "names": [
            "od280/od315_of_diluted_wines",
            "hue",
            "color_intensity",
            "flavanoids",
            "total_phenols",
            "alcalinity_of_ash",
            "proanthocyanins",
            "proline",
            "ash",
            "magnesium",
            "malic_acid",
            "nonflavanoid_phenols",
            "alcohol"
          ]
        },
        "aggregated": {
          "ranked_effect": [
            0.7050457822902315,
            0.5910888323103449,
            0.5724203889373152,
            0.3393988112136111,
            0.27877680927797954,
            0.2682249583732147,
            0.1535400585067649,
            0.1485148138421205,
            0.08935471257017247,
            0.08677681420605043,
            0.07145470402894616,
            0.04396065591922266,
            0.0393106286780619
          ],
          "names": [
            "od280/od315_of_diluted_wines",
            "hue",
            "color_intensity",
            "flavanoids",
            "total_phenols",
            "proline",
            "alcalinity_of_ash",
            "proanthocyanins",
            "ash",
            "magnesium",
            "malic_acid",
            "nonflavanoid_phenols",
            "alcohol"
          ]
        }
      }
    }
  }
}
[64]:
!kubectl delete -f resources/wine_explainer.yaml
seldondeployment.machinelearning.seldon.io "wine" deleted

MNIST Model with Integrated Gradients Explainer

You can train the model and explainer used here yourself by following the full Integrated gradients for MNIST example in the Alibi project documentation.

Note that we used Python 3.6 and Alibi version 0.5.2.

[65]:
%%writefile resources/mnist_rest_explainer.yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: tfserving
spec:
  name: mnist
  predictors:
  - graph:
      children: []
      implementation: TENSORFLOW_SERVER
      modelUri: gs://seldon-models/tfserving/mnist-model
      name: mnist-model
      parameters:
        - name: signature_name
          type: STRING
          value: predict_images
        - name: model_name
          type: STRING
          value: mnist-model
    explainer:
      type: IntegratedGradients
      modelUri: gs://seldon-models/keras/mnist
    name: default
    replicas: 1
Overwriting resources/mnist_rest_explainer.yaml
[66]:
!kubectl apply -f resources/mnist_rest_explainer.yaml
seldondeployment.machinelearning.seldon.io/tfserving created
[67]:
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=tfserving -o jsonpath='{.items[0].metadata.name}')
Waiting for deployment "tfserving-default-0-mnist-model" rollout to finish: 0 of 1 updated replicas are available...
deployment "tfserving-default-0-mnist-model" successfully rolled out
[68]:
import tensorflow as tf
from tensorflow.keras.utils import to_categorical
import numpy as np
train, test = tf.keras.datasets.mnist.load_data()
X_train, y_train = train
X_test, y_test = test
test_labels = y_test.copy()
train_labels = y_train.copy()

X_train = X_train.reshape(-1, 28, 28, 1).astype('float64') / 255
X_test = X_test.reshape(-1, 28, 28, 1).astype('float64') / 255
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
(60000, 28, 28, 1) (60000, 10) (10000, 28, 28, 1) (10000, 10)
[69]:
nb_samples = 10
X_test_sample = X_test[:nb_samples]
[70]:
import json
d = {"data": {"tensor":{"shape":[10,784],"values":X_test_sample.flatten().tolist()}}}
with open("input.json","w") as f:
    f.write(json.dumps(d))
[71]:
res=!curl -s -H 'Content-Type: application/json' \
    -d @./input.json \
    http://localhost:8003/seldon/seldon/tfserving/api/v1.0/predictions
res=json.loads(res[0])
[72]:
predictions = np.array(res["data"]["tensor"]["values"]).reshape(res["data"]["tensor"]["shape"])
predictions = predictions.argmax(axis=1)
[73]:
predictions
[73]:
array([7, 2, 1, 0, 4, 1, 4, 9, 6, 9])
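As a quick sanity check (not in the original flow), you can compare these predictions with the ground-truth labels for the same ten test images, reusing the test_labels and nb_samples variables from the cells above.

print("labels:     ", test_labels[:nb_samples])
print("predictions:", predictions)
print("accuracy:   ", (predictions == test_labels[:nb_samples]).mean())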
[74]:
import json
d = {"data": {"tensor":{"shape":X_test_sample.shape,"values":X_test_sample.flatten().tolist()}}}
with open("input.json","w") as f:
    f.write(json.dumps(d))
[75]:
res=!curl -s -H 'Content-Type: application/json' \
    -d @./input.json \
    http://localhost:8003/seldon/seldon/tfserving-explainer/default/api/v1.0/explain
res=json.loads(res[0])
[76]:
attrs = np.array(res["data"]["attributions"])
[77]:
import matplotlib.pyplot as plt
import numpy as np
fig, ax = plt.subplots(nrows=3, ncols=4, figsize=(10, 7))
image_ids = [0, 1, 9]
cmap_bound = np.abs(attrs[[0, 1, 9]]).max()

for row, image_id in enumerate(image_ids):
    # original images
    ax[row, 0].imshow(X_test[image_id].squeeze(), cmap='gray')
    ax[row, 0].set_title(f'Prediction: {predictions[image_id]}')

    # attributions
    attr = attrs[image_id]
    im = ax[row, 1].imshow(attr.squeeze(), vmin=-cmap_bound, vmax=cmap_bound, cmap='PiYG')

    # positive attributions
    attr_pos = attr.clip(0, 1)
    im_pos = ax[row, 2].imshow(attr_pos.squeeze(), vmin=-cmap_bound, vmax=cmap_bound, cmap='PiYG')

    # negative attributions
    attr_neg = attr.clip(-1, 0)
    im_neg = ax[row, 3].imshow(attr_neg.squeeze(), vmin=-cmap_bound, vmax=cmap_bound, cmap='PiYG')

ax[0, 1].set_title('Attributions');
ax[0, 2].set_title('Positive attributions');
ax[0, 3].set_title('Negative attributions');

for ax in fig.axes:
    ax.axis('off')

fig.colorbar(im, cax=fig.add_axes([0.95, 0.25, 0.03, 0.5]));
[Figure: test images 0, 1 and 9 with their predictions and the overall, positive and negative attributions]
[78]:
!kubectl delete -f resources/mnist_rest_explainer.yaml
seldondeployment.machinelearning.seldon.io "tfserving" deleted

XGBoost Model with TreeShap Explainer

You can train the model and explainer used here yourself by following the full Explaining Tree Models with Interventional Feature Perturbation Tree SHAP example in the Alibi project documentation.

Note that we used Python 3.6 and Alibi version 0.5.2.

[79]:
%%writefile resources/income_explainer.yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: income
spec:
  predictors:
  - graph:
      children: []
      implementation: XGBOOST_SERVER
      modelUri: gs://seldon-models/xgboost/adult/model_1.0.2
      name: income-model
    explainer:
      type: TreeShap
      modelUri: gs://seldon-models/xgboost/adult/tree_shap_py368_alibi_0.5.5
    name: default
    replicas: 1
Overwriting resources/income_explainer.yaml
[80]:
!kubectl apply -f resources/income_explainer.yaml
seldondeployment.machinelearning.seldon.io/income created
[81]:
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=income -o jsonpath='{.items[0].metadata.name}')
Waiting for deployment "income-default-0-income-model" rollout to finish: 0 of 1 updated replicas are available...
deployment "income-default-0-income-model" successfully rolled out
[82]:
from seldon_core.seldon_client import SeldonClient
import numpy as np
sc = SeldonClient(deployment_name="income",namespace="seldon", gateway="ambassador", gateway_endpoint="localhost:8003")

Use the Python client library to get a prediction.

[83]:
data = np.array([[52,  4,  0,  2,  8,  4,  2,  0,  0,  0, 60,  9]])
r = sc.predict(data=data)
print(r.response)
{'data': {'names': [], 'tensor': {'shape': [1], 'values': [-1.2381880283355713]}}, 'meta': {}}
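The XGBoost server returns a single raw score here rather than class probabilities. Assuming the model was exported to output raw margins (which the negative value above suggests, but which this notebook does not confirm), the logistic function maps it to an approximate probability of the positive class:

import numpy as np

# Only valid if the returned value is a raw margin; skip this if the model already outputs probabilities
margin = r.response["data"]["tensor"]["values"][0]
prob_positive = 1.0 / (1.0 + np.exp(-margin))
print(prob_positive)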

Use the Python client library to get an explanation.

[84]:
import json
data = np.array([[52,  4,  0,  2,  8,  4,  2,  0,  0,  0, 60,  9]])
res = sc.explain(deployment_name="income", predictor="default", data=data)
explanation = res.response
explanationStr = json.dumps(explanation)
[85]:
from alibi.api.interfaces import Explanation
explanation = Explanation.from_json(explanationStr)
[86]:
explanation.shap_values = np.array(explanation.shap_values)
explanation.raw["instances"] = np.array(explanation.raw["instances"])
[87]:
def decode_data(X, feature_names, category_map):
    """
    Given an encoded data matrix `X`, return a matrix where the
    categorical levels have been replaced by human-readable categories.
    """

    # expect 2D array
    if len(X.shape) == 1:
        X = X.reshape(1, -1)

    X_new = np.zeros(X.shape, dtype=object)
    # Check if a column is categorical and replace it with values from category map
    for idx, name in enumerate(feature_names):
        categories = category_map.get(str(idx), None)
        if categories:
            for j, category in enumerate(categories):
                encoded_vals = X[:, idx] == j
                X_new[encoded_vals, idx] = category
        else:
            X_new[:, idx] = X[:, idx]

    return X_new
[88]:
decoded_features = decode_data(data,explanation.feature_names,explanation.categorical_names)
[89]:
import shap
shap.initjs()
[90]:
shap.force_plot(
    explanation.expected_value[0],  # index 0: this model has a single output
    explanation.shap_values[0][0, :],
    decoded_features,
    explanation.feature_names,
)
[90]:
[SHAP force plot for the explained instance; the interactive JavaScript visualization is not rendered in this static page.]
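As a plain-text alternative to the force plot, each feature can be paired with its SHAP attribution directly; the indexing below mirrors the force_plot call above (shap_values[0] for the model's single output, row 0 for the single instance sent).

for name, value in zip(explanation.feature_names, explanation.shap_values[0][0, :]):
    print(f"{name}: {value:+.4f}")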
[91]:
!kubectl delete -f resources/income_explainer.yaml
seldondeployment.machinelearning.seldon.io "income" deleted