This page was generated from examples/feedback/metrics-server/README.ipynb.
Stateful Model Feedback Metrics Server¶
In this example we will add statistical performance metrics capabilities by leveraging the Seldon metrics server.
Dependencies:
- Seldon Core installed
- Ingress provider (Istio or Ambassador)
An easy way to set these up is to run examples/centralized-logging/full-kind-setup.sh
and then:
helm delete seldon-core-loadtesting
helm delete seldon-single-model
Then, in a separate terminal, port-forward the ingress to localhost:8003 with one of:
Ambassador:
kubectl port-forward $(kubectl get pods -n seldon -l app.kubernetes.io/name=ambassador -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:8080
Istio:
kubectl port-forward -n istio-system svc/istio-ingressgateway 8003:80
[1]:
!kubectl create namespace seldon || echo "namespace already created"
Error from server (AlreadyExists): namespaces "seldon" already exists
namespace already created
[2]:
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
Context "kind-ansible" modified.
[3]:
!mkdir -p config
Create a simple model¶
We create a multiclass classification model - an iris classifier.
The iris classifier takes an input array of four features and returns a prediction over the three iris classes.
The prediction can be returned as a numeric class label or as a probability array.
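For reference, requests to the deployed model use the Seldon v1 payload format: a `data.ndarray` holding one row per sample. A minimal sketch of building such a payload in Python (the feature values are illustrative, not real iris measurements):

```python
import json

# Build a Seldon v1 prediction payload: one row of four iris features
# (sepal length, sepal width, petal length, petal width - values illustrative).
payload = {"data": {"ndarray": [[1, 2, 3, 4]]}, "meta": {"puid": "hello"}}
body = json.dumps(payload)

# Round-trip to confirm the structure the server expects:
# a list of rows, each with four feature values.
parsed = json.loads(body)
assert len(parsed["data"]["ndarray"][0]) == 4
print(body)
```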
[4]:
%%bash
kubectl apply -f - << END
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: multiclass-model
spec:
  predictors:
  - graph:
      children: []
      implementation: SKLEARN_SERVER
      modelUri: gs://seldon-models/v1.19.0-dev/sklearn/iris
      name: classifier
      logger:
        url: http://seldon-multiclass-model-metrics.seldon.svc.cluster.local:80/
        mode: all
    name: default
    replicas: 1
END
seldondeployment.machinelearning.seldon.io/multiclass-model created
[5]:
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=multiclass-model -o jsonpath='{.items[0].metadata.name}')
Waiting for deployment "multiclass-model-default-0-classifier" rollout to finish: 0 of 1 updated replicas are available...
deployment "multiclass-model-default-0-classifier" successfully rolled out
Send test request¶
[8]:
res=!curl -X POST "http://localhost:8003/seldon/seldon/multiclass-model/api/v1.0/predictions" \
-H "Content-Type: application/json" -d '{"data": { "ndarray": [[1,2,3,4]]}, "meta": { "puid": "hello" }}'
print(res)
import json
j=json.loads(res[-1])
assert(len(j["data"]["ndarray"][0])==3)
[' % Total % Received % Xferd Average Speed Time Time Time Current', ' Dload Upload Total Spent Left Speed', '', ' 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0', '100 266 100 202 100 64 15538 4923 --:--:-- --:--:-- --:--:-- 20461', '{"data":{"names":["t:0","t:1","t:2"],"ndarray":[[0.0006985194531162835,0.00366803903943666,0.995633441507447]]},"meta":{"puid":"hello","requestPath":{"classifier":"seldonio/sklearnserver:1.12.0-dev"}}}']
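The response above returns a probability per class; the predicted class is the argmax of that array. A small sketch parsing the response body shown above (meta fields trimmed for brevity):

```python
import json

# Response body as returned above: one probability per iris class.
body = ('{"data":{"names":["t:0","t:1","t:2"],"ndarray":'
        '[[0.0006985194531162835,0.00366803903943666,0.995633441507447]]},'
        '"meta":{"puid":"hello"}}')
resp = json.loads(body)
probs = resp["data"]["ndarray"][0]

# The predicted class is the index with the highest probability.
predicted = max(range(len(probs)), key=probs.__getitem__)
print(resp["data"]["names"][predicted], predicted)  # t:2 2
```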
Metrics Server¶
You can create a Kubernetes deployment of the metrics server as follows:
[9]:
%%writefile config/multiclass-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: seldon-multiclass-model-metrics
  namespace: seldon
  labels:
    app: seldon-multiclass-model-metrics
spec:
  replicas: 1
  selector:
    matchLabels:
      app: seldon-multiclass-model-metrics
  template:
    metadata:
      labels:
        app: seldon-multiclass-model-metrics
    spec:
      securityContext:
        runAsUser: 8888
      containers:
      - name: user-container
        image: seldonio/alibi-detect-server:1.19.0-dev
        imagePullPolicy: IfNotPresent
        args:
        - --model_name
        - multiclassserver
        - --http_port
        - '8080'
        - --protocol
        - seldonfeedback.http
        - --storage_uri
        - "adserver.cm_models.multiclass_one_hot.MulticlassOneHot"
        - --reply_url
        - http://message-dumper.default
        - --event_type
        - io.seldon.serving.feedback.metrics
        - --event_source
        - io.seldon.serving.feedback
        - MetricsServer
        env:
        - name: "SELDON_DEPLOYMENT_ID"
          value: "multiclass-model"
        - name: "PREDICTIVE_UNIT_ID"
          value: "classifier"
        - name: "PREDICTIVE_UNIT_IMAGE"
          value: "seldonio/alibi-detect-server:1.19.0-dev"
        - name: "PREDICTOR_ID"
          value: "default"
---
apiVersion: v1
kind: Service
metadata:
  name: seldon-multiclass-model-metrics
  namespace: seldon
  labels:
    app: seldon-multiclass-model-metrics
spec:
  selector:
    app: seldon-multiclass-model-metrics
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Overwriting config/multiclass-deployment.yaml
[10]:
!kubectl apply -n seldon -f config/multiclass-deployment.yaml
deployment.apps/seldon-multiclass-model-metrics created
service/seldon-multiclass-model-metrics created
[11]:
!kubectl rollout status deploy/seldon-multiclass-model-metrics
deployment "seldon-multiclass-model-metrics" successfully rolled out
[12]:
import time
time.sleep(20)
Send feedback¶
[13]:
res=!curl -X POST "http://localhost:8003/seldon/seldon/multiclass-model/api/v1.0/feedback" \
-H "Content-Type: application/json" \
-d '{"response": {"data": {"ndarray": [[0.0006985194531162841,0.003668039039435755,0.9956334415074478]]}}, "truth":{"data": {"ndarray": [[0,0,1]]}}}'
print(res)
import json
j=json.loads(res[-1])
assert("data" in j)
[' % Total % Received % Xferd Average Speed Time Time Time Current', ' Dload Upload Total Spent Left Speed', '', ' 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0', '100 252 100 108 100 144 9000 12000 --:--:-- --:--:-- --:--:-- 21000', '{"data":{"tensor":{"shape":[0]}},"meta":{"requestPath":{"classifier":"seldonio/sklearnserver:1.12.0-dev"}}}']
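Each feedback event pairs the model's response with a one-hot truth label. The following is a rough illustration of the kind of per-event accuracy bookkeeping the multiclass one-hot metrics server performs; it is a sketch, not the server's actual implementation:

```python
# Feedback payload as sent above: model probabilities plus one-hot truth.
feedback = {
    "response": {"data": {"ndarray": [[0.0007, 0.0037, 0.9956]]}},
    "truth": {"data": {"ndarray": [[0, 0, 1]]}},
}

def argmax(row):
    """Index of the largest value in a row."""
    return max(range(len(row)), key=row.__getitem__)

# Compare the predicted class against the one-hot encoded truth.
pred = argmax(feedback["response"]["data"]["ndarray"][0])
truth = argmax(feedback["truth"]["data"]["ndarray"][0])
correct = int(pred == truth)
print(f"predicted={pred} truth={truth} correct={correct}")
```

From counts like `correct` accumulated over many feedback events, the server can expose aggregate metrics such as accuracy to Prometheus-style scrapers.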
[14]:
import time
time.sleep(3)
Check that metrics are recorded¶
[15]:
res=!kubectl logs $(kubectl get pods -l app=seldon-multiclass-model-metrics \
-n seldon -o jsonpath='{.items[0].metadata.name}') | grep "PROCESSING Feedback Event"
print(res)
assert(len(res)>0)
['[I 211208 11:08:09 cm_model:99] PROCESSING Feedback Event.']
Cleanup¶
[19]:
!kubectl delete -n seldon -f config/multiclass-deployment.yaml
deployment.apps "seldon-multiclass-model-metrics" deleted
service "seldon-multiclass-model-metrics" deleted
[20]:
!kubectl delete sdep multiclass-model
seldondeployment.machinelearning.seldon.io "multiclass-model" deleted