Scikit-Learn Iris Model using customData

  • Wrap a scikit-learn python model for use as a prediction microservice in seldon-core

    • Run locally on Docker to test

    • Deploy on seldon-core running on a Kubernetes cluster


Dependencies

  • s2i

  • Seldon Core v1.0.3+ installed

  • pip install scikit-learn seldon-core protobuf grpcio grpcio-tools

Train locally

[ ]:
import joblib  # sklearn.externals.joblib was removed in recent scikit-learn releases
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

def main():
    clf = LogisticRegression()
    p = Pipeline([('clf', clf)])
    print('Training model...')
    p.fit(X, y)
    print('Model trained!')

    filename_p = 'IrisClassifier.sav'
    print('Saving model in %s' % filename_p)
    joblib.dump(p, filename_p)
    print('Model saved!')

if __name__ == "__main__":
    print('Loading iris data set...')
    iris = datasets.load_iris()
    X, y = iris.data, iris.target
    print('Dataset loaded!')
    main()
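
For reference, the fitted pipeline exposes one probability per class via predict_proba, which is exactly what the IrisPredictResponse fields (setosa, versicolor, virginica) will carry later. A standalone sanity check (this retrains a throwaway model rather than loading IrisClassifier.sav):

```python
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

iris = datasets.load_iris()
p = Pipeline([('clf', LogisticRegression(max_iter=200))])
p.fit(iris.data, iris.target)

# One row of measurements in, one probability per class out
probas = p.predict_proba([[5.1, 3.5, 1.4, 0.2]])[0]
print(probas.shape)  # (3,) -> setosa, versicolor, virginica
print(probas.sum())  # probabilities sum to 1
```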

Custom Protobuf Specification

First, we’ll need to define our custom protobuf specification for the request and response payloads so that it can be used to serialize the data sent to and from the model.

[ ]:
%%writefile iris.proto

syntax = "proto3";

package iris;

message IrisPredictRequest {
    float sepal_length = 1;
    float sepal_width = 2;
    float petal_length = 3;
    float petal_width = 4;
}

message IrisPredictResponse {
    float setosa = 1;
    float versicolor = 2;
    float virginica = 3;
}

Custom Protobuf Compilation

We will need to compile our custom protobuf for python so that we can unpack the customData field passed to our predict method later on.
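The customData field on a SeldonMessage is a google.protobuf.Any, so the request and response are moved in and out of it with Pack and Unpack. A quick illustration using a stock wrapper type, since our iris_pb2 module only exists after the compilation step below:

```python
from google.protobuf import any_pb2
from google.protobuf.wrappers_pb2 import FloatValue

# Pack a concrete message into an Any, as the client will do with IrisPredictRequest
inner = FloatValue(value=7.233)
holder = any_pb2.Any()
holder.Pack(inner)

# Unpack on the receiving side; Unpack returns False on a type mismatch
out = FloatValue()
ok = holder.Unpack(out)
print(ok, out.value)
```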

[ ]:
!python -m grpc_tools.protoc --python_out=./ --proto_path=. iris.proto
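
The s2i build below also needs the model wrapper file itself, which this page does not show. What follows is a minimal sketch of what it could look like; the predict_raw hook of the seldon-core Python wrapper and the IrisClassifier file/class name are assumptions, not taken from the source:

```python
# IrisClassifier.py -- hypothetical wrapper; the class name must match the
# MODEL_NAME the s2i image is configured with
import joblib
import numpy as np
from google.protobuf import any_pb2

from iris_pb2 import IrisPredictRequest, IrisPredictResponse
from seldon_core.proto import prediction_pb2


class IrisClassifier:
    def __init__(self):
        self.model = joblib.load("IrisClassifier.sav")

    def predict_raw(self, request):
        # Unpack the custom request from the SeldonMessage customData field
        iris_request = IrisPredictRequest()
        request.customData.Unpack(iris_request)

        data = np.array([[
            iris_request.sepal_length,
            iris_request.sepal_width,
            iris_request.petal_length,
            iris_request.petal_width,
        ]])
        probas = self.model.predict_proba(data)[0]

        # Pack the custom response back into customData
        iris_response = IrisPredictResponse(
            setosa=probas[0], versicolor=probas[1], virginica=probas[2]
        )
        custom_data = any_pb2.Any()
        custom_data.Pack(iris_response)
        return prediction_pb2.SeldonMessage(customData=custom_data)
```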

gRPC test

Wrap model using s2i
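
s2i reads build-time settings from an .s2i/environment file in the build directory. The values below are assumptions based on the standard seldon-core Python wrapper conventions (MODEL_NAME must match the wrapper class name, and API_TYPE is set to GRPC since this example serves gRPC):

```shell
mkdir -p .s2i
cat > .s2i/environment <<'EOF'
MODEL_NAME=IrisClassifier
API_TYPE=GRPC
SERVICE_TYPE=MODEL
PERSISTENCE=0
EOF
cat .s2i/environment
```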

[ ]:
!s2i build . seldonio/seldon-core-s2i-python37-ubi8:1.7.0-dev seldonio/sklearn-iris-customdata:0.1

Serve the model locally

[ ]:
!docker run --name "iris_predictor" -d --rm -p 5000:5000 seldonio/sklearn-iris-customdata:0.1

Test using custom protobuf payload

[ ]:
from iris_pb2 import IrisPredictRequest, IrisPredictResponse
from seldon_core.proto import prediction_pb2, prediction_pb2_grpc
import grpc

channel = grpc.insecure_channel("localhost:5000")
stub = prediction_pb2_grpc.ModelStub(channel)

iris_request = IrisPredictRequest(
    sepal_length=7.233, sepal_width=4.652, petal_length=7.39, petal_width=0.324
)

# Pack the custom request into the SeldonMessage customData field
seldon_request = prediction_pb2.SeldonMessage()
seldon_request.customData.Pack(iris_request)

response = stub.Predict(seldon_request)

# Unpack the customData field of the response into our custom message
iris_response = IrisPredictResponse()
response.customData.Unpack(iris_response)
print(iris_response)

Stop serving model

[ ]:
!docker rm iris_predictor --force

Setup Seldon Core

Use the setup notebook to set up Seldon Core with an ingress - either Ambassador or Istio

Then port-forward to that ingress on localhost:8003 in a separate terminal either with:

  • Ambassador: kubectl port-forward $(kubectl get pods -n seldon -l app.kubernetes.io/name=ambassador -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:8080

  • Istio: kubectl port-forward $(kubectl get pods -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system 8003:80

[ ]:
!kubectl create namespace seldon
[ ]:
!kubectl config set-context $(kubectl config current-context) --namespace=seldon

Deploy your Seldon Model

We first create a configuration file:

[ ]:
%%writefile sklearn_iris_customdata_deployment.yaml

apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: seldon-deployment-example
spec:
  name: sklearn-iris-deployment
  predictors:
  - componentSpecs:
    - spec:
        containers:
        - image: groszewn/sklearn-iris-customdata:0.1
          imagePullPolicy: IfNotPresent
          name: sklearn-iris-classifier
    graph:
      children: []
      endpoint:
        type: GRPC
      name: sklearn-iris-classifier
      type: MODEL
    name: sklearn-iris-predictor
    replicas: 1

Run the model in our cluster

Apply the Seldon Deployment configuration file we just created

[ ]:
!kubectl create -f sklearn_iris_customdata_deployment.yaml

Check that the model has been deployed

[ ]:
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=seldon-deployment-example -o jsonpath='{.items[0].metadata.name}')

Test by sending prediction calls

We send an IrisPredictRequest via the customData field and unpack the IrisPredictResponse that the model returns.

[ ]:
iris_request = IrisPredictRequest(
    sepal_length=7.233, sepal_width=4.652, petal_length=7.39, petal_width=0.324
)

# Pack the custom request into the SeldonMessage customData field
seldon_request = prediction_pb2.SeldonMessage()
seldon_request.customData.Pack(iris_request)

channel = grpc.insecure_channel("localhost:8003")
stub = prediction_pb2_grpc.SeldonStub(channel)

metadata = [("seldon", "seldon-deployment-example"), ("namespace", "seldon")]

response = stub.Predict(request=seldon_request, metadata=metadata)

# Unpack the customData field of the response into our custom message
iris_response = IrisPredictResponse()
response.customData.Unpack(iris_response)
print(iris_response)

Cleanup our deployment

[ ]:
!kubectl delete -f sklearn_iris_customdata_deployment.yaml