Canary Rollout with Seldon and Ambassador

Prerequisites

You will need a Kubernetes cluster with kubectl configured, Helm, and the seldon_core Python package for sending prediction requests.

Creating a Kubernetes Cluster

Follow the Kubernetes documentation to create a cluster.

Once created, ensure kubectl is authenticated against the running cluster.
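
For example, you can confirm that the expected context is active and that the cluster is reachable:

kubectl config current-context
kubectl get nodes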

Setup

[1]:
!kubectl create namespace seldon
namespace/seldon created
[2]:
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
Context "minikube" modified.
[3]:
!kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
clusterrolebinding.rbac.authorization.k8s.io/kube-system-cluster-admin created

Install Helm

[4]:
!kubectl -n kube-system create sa tiller
!kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
!helm init --service-account tiller
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
$HELM_HOME has been configured at /home/clive/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
[5]:
!kubectl rollout status deploy/tiller-deploy -n kube-system
Waiting for deployment "tiller-deploy" rollout to finish: 0 of 1 updated replicas are available...
deployment "tiller-deploy" successfully rolled out

Start seldon-core

[6]:
!helm install ../../../helm-charts/seldon-core-operator --name seldon-core --set usageMetrics.enabled=true --namespace seldon-system
NAME:   seldon-core
LAST DEPLOYED: Sun Jun 30 17:10:55 2019
NAMESPACE: seldon-system
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRole
NAME                          AGE
seldon-operator-manager-role  1s

==> v1/ClusterRoleBinding
NAME                                 AGE
seldon-operator-manager-rolebinding  1s

==> v1/ConfigMap
NAME                     DATA  AGE
seldon-spartakus-config  3     1s

==> v1/Pod(related)
NAME                                         READY  STATUS             RESTARTS  AGE
seldon-operator-controller-manager-0         0/1    ContainerCreating  0         1s
seldon-spartakus-volunteer-5866b6df59-vd58f  0/1    ContainerCreating  0         1s

==> v1/Secret
NAME                                   TYPE    DATA  AGE
seldon-operator-webhook-server-secret  Opaque  0     1s

==> v1/Service
NAME                                        TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)  AGE
seldon-operator-controller-manager-service  ClusterIP  10.107.70.152  <none>       443/TCP  1s

==> v1/ServiceAccount
NAME                              SECRETS  AGE
seldon-core-seldon-core-operator  1        1s
seldon-spartakus-volunteer        1        1s

==> v1/StatefulSet
NAME                                READY  AGE
seldon-operator-controller-manager  0/1    1s

==> v1beta1/ClusterRole
NAME                        AGE
seldon-spartakus-volunteer  1s

==> v1beta1/ClusterRoleBinding
NAME                        AGE
seldon-spartakus-volunteer  1s

==> v1beta1/CustomResourceDefinition
NAME                                         AGE
seldondeployments.machinelearning.seldon.io  1s

==> v1beta1/Deployment
NAME                        READY  UP-TO-DATE  AVAILABLE  AGE
seldon-spartakus-volunteer  0/1    1           0          1s


NOTES:
NOTES: TODO


[7]:
!kubectl rollout status statefulset.apps/seldon-operator-controller-manager -n seldon-system
partitioned roll out complete: 1 new pods have been updated...
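
You can also confirm that the SeldonDeployment CRD installed by the chart is registered:

kubectl get crd seldondeployments.machinelearning.seldon.io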

Setup Ingress

Please note: There are reported gRPC issues with Ambassador (see https://github.com/SeldonIO/seldon-core/issues/473).

[8]:
!helm install stable/ambassador --name ambassador --set crds.keep=false
NAME:   ambassador
LAST DEPLOYED: Sun Jun 30 17:12:39 2019
NAMESPACE: seldon
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME        READY  UP-TO-DATE  AVAILABLE  AGE
ambassador  0/3    3           0          1s

==> v1/Pod(related)
NAME                         READY  STATUS             RESTARTS  AGE
ambassador-778b689797-kzj5f  0/1    ContainerCreating  0         1s
ambassador-778b689797-r8mqj  0/1    ContainerCreating  0         1s
ambassador-778b689797-wjm2k  0/1    ContainerCreating  0         1s

==> v1/Service
NAME               TYPE          CLUSTER-IP      EXTERNAL-IP  PORT(S)                     AGE
ambassador         LoadBalancer  10.108.250.98   <pending>    80:30114/TCP,443:32175/TCP  1s
ambassador-admins  ClusterIP     10.109.203.166  <none>       8877/TCP                    1s

==> v1/ServiceAccount
NAME        SECRETS  AGE
ambassador  1        1s

==> v1beta1/ClusterRole
NAME        AGE
ambassador  1s

==> v1beta1/ClusterRoleBinding
NAME        AGE
ambassador  1s

==> v1beta1/CustomResourceDefinition
NAME                                AGE
authservices.getambassador.io       1s
mappings.getambassador.io           1s
modules.getambassador.io            1s
ratelimitservices.getambassador.io  1s
tcpmappings.getambassador.io        1s
tlscontexts.getambassador.io        1s
tracingservices.getambassador.io    1s


NOTES:
Congratuations! You've successfully installed Ambassador.

For help, visit our Slack at https://d6e.co/slack or view the documentation online at https://www.getambassador.io.

To get the IP address of Ambassador, run the following commands:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
     You can watch the status of by running 'kubectl get svc -w  --namespace seldon ambassador'

  On GKE/Azure:
  export SERVICE_IP=$(kubectl get svc --namespace seldon ambassador -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

  On AWS:
  export SERVICE_IP=$(kubectl get svc --namespace seldon ambassador -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

  echo http://$SERVICE_IP:

[9]:
!kubectl rollout status deployment.apps/ambassador
Waiting for deployment "ambassador" rollout to finish: 0 of 3 updated replicas are available...
Waiting for deployment "ambassador" rollout to finish: 1 of 3 updated replicas are available...
Waiting for deployment "ambassador" rollout to finish: 2 of 3 updated replicas are available...
deployment "ambassador" successfully rolled out

Port Forward to Ambassador

kubectl port-forward $(kubectl get pods -n seldon -l app.kubernetes.io/name=ambassador -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:8080

Launch main model

We will create a very simple SeldonDeployment using the dummy model image seldonio/mock_classifier:1.0. The deployment is named example.

[10]:
!pygmentize model.json
{
    "apiVersion": "machinelearning.seldon.io/v1alpha2",
    "kind": "SeldonDeployment",
    "metadata": {
        "labels": {
            "app": "seldon"
        },
        "name": "example"
    },
    "spec": {
        "name": "production-model",
        "predictors": [
            {
                "componentSpecs": [{
                    "spec": {
                        "containers": [
                            {
                                "image": "seldonio/mock_classifier:1.0",
                                "imagePullPolicy": "IfNotPresent",
                                "name": "classifier"
                            }
                        ],
                        "terminationGracePeriodSeconds": 1
                    }}
                                  ],
                "graph":
                {
                    "children": [],
                    "name": "classifier",
                    "type": "MODEL",
                    "endpoint": {
                        "type": "REST"
                    }},
                "name": "main",
                "replicas": 1
            }
        ]
    }
}
[22]:
!kubectl create -f model.json
seldondeployment.machinelearning.seldon.io/example created
[24]:
!kubectl rollout status deploy/canary-example-main-7cd068f
Waiting for deployment "canary-example-main-7cd068f" rollout to finish: 0 of 1 updated replicas are available...
deployment "canary-example-main-7cd068f" successfully rolled out

Get predictions

[25]:
from seldon_core.seldon_client import SeldonClient
sc = SeldonClient(deployment_name="example",namespace="seldon")
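
The client's default endpoint is expected to line up with the port-forward above; if you forwarded Ambassador to a different local port, you can point the client at it explicitly (a minimal sketch, assuming the gateway_endpoint argument of your seldon_core version):

from seldon_core.seldon_client import SeldonClient

# point the client at the locally forwarded Ambassador port from the step above
sc = SeldonClient(deployment_name="example", namespace="seldon",
                  gateway_endpoint="localhost:8003")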

REST Request

[26]:
r = sc.predict(gateway="ambassador",transport="rest")
print(r)
Success:True message:
Request:
data {
  tensor {
    shape: 1
    shape: 1
    values: 0.6010146752277817
  }
}

Response:
meta {
  puid: "s4rqgg7i9cu4a1emd59m01rujd"
  requestPath {
    key: "classifier"
    value: "seldonio/mock_classifier:1.0"
  }
}
data {
  names: "proba"
  tensor {
    shape: 1
    shape: 1
    values: 0.08983493916158691
  }
}
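
Equivalently, you can send a raw REST request through the port-forward with curl. This assumes the standard Seldon Core prediction path exposed via Ambassador, /seldon/<namespace>/<deployment-name>/api/v0.1/predictions, which may differ between Seldon Core versions:

curl -s -H "Content-Type: application/json" \
    -d '{"data":{"tensor":{"shape":[1,1],"values":[0.5]}}}' \
    http://localhost:8003/seldon/seldon/example/api/v0.1/predictions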

gRPC Request

[36]:
r = sc.predict(gateway="ambassador",transport="grpc")
print(r)
Success:True message:
Request:
data {
  tensor {
    shape: 1
    shape: 1
    values: 0.9518124169318304
  }
}

Response:
meta {
  puid: "mplvldoiter3si62gulsmahs58"
  requestPath {
    key: "classifier"
    value: "seldonio/mock_classifier_rest:1.1"
  }
}
data {
  names: "proba"
  tensor {
    shape: 1
    shape: 1
    values: 0.12294266589479223
  }
}

Launch Canary

We will now extend the existing deployment and add a new predictor as a canary, using a new model image seldonio/mock_classifier_rest:1.1. We set traffic values to split requests 75/25 between the main and canary predictors.

[29]:
!pygmentize canary.json
{
    "apiVersion": "machinelearning.seldon.io/v1alpha2",
    "kind": "SeldonDeployment",
    "metadata": {
        "labels": {
            "app": "seldon"
        },
        "name": "example"
    },
    "spec": {
        "name": "canary-example",
        "predictors": [
            {
                "componentSpecs": [{
                    "spec": {
                        "containers": [
                            {
                                "image": "seldonio/mock_classifier:1.0",
                                "imagePullPolicy": "IfNotPresent",
                                "name": "classifier"
                            }
                        ],
                        "terminationGracePeriodSeconds": 1
                    }}
                                  ],
                "graph":
                {
                    "children": [],
                    "name": "classifier",
                    "type": "MODEL",
                    "endpoint": {
                        "type": "REST"
                    }},
                "name": "main",
                "replicas": 1,
                "traffic": 75
            },
            {
                "componentSpecs": [{
                    "spec": {
                        "containers": [
                            {
                                "image": "seldonio/mock_classifier_rest:1.1",
                                "imagePullPolicy": "IfNotPresent",
                                "name": "classifier"
                            }
                        ],
                        "terminationGracePeriodSeconds": 1
                    }}
                                  ],
                "graph":
                {
                    "children": [],
                    "name": "classifier",
                    "type": "MODEL",
                    "endpoint": {
                        "type": "REST"
                    }},
                "name": "canary",
                "replicas": 1,
                "traffic": 25
            }
        ]
    }
}
[30]:
!kubectl apply -f canary.json
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
seldondeployment.machinelearning.seldon.io/example configured
[33]:
!kubectl rollout status deploy/canary-example-main-7cd068f
!kubectl rollout status deploy/canary-example-canary-4c8805f
deployment "canary-example-main-7cd068f" successfully rolled out
deployment "canary-example-canary-4c8805f" successfully rolled out

Show that our REST requests are now split, with roughly 25% going to the canary.

[42]:
from collections import defaultdict
counts = defaultdict(int)
n = 100
for i in range(n):
    r = sc.predict(gateway="ambassador",transport="rest")
    counts[r.response.meta.requestPath["classifier"]] += 1
for k in counts:
    print(k,(counts[k]/float(n))*100,"%")

seldonio/mock_classifier:1.0 81.0 %
seldonio/mock_classifier_rest:1.1 19.0 %

Now let's test gRPC.

[37]:
counts = defaultdict(int)
n = 100
for i in range(n):
    r = sc.predict(gateway="ambassador",transport="grpc")
    counts[r.response.meta.requestPath["classifier"]] += 1
for k in counts:
    print(k,(counts[k]/float(n))*100,"%")

seldonio/mock_classifier:1.0 75.0 %
seldonio/mock_classifier_rest:1.1 25.0 %
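
If the canary looks healthy, you could shift more traffic to it by raising its traffic value and re-applying canary.json. To roll back so all traffic goes to the main predictor again, re-apply the original manifest, or delete the SeldonDeployment to clean up entirely:

# roll back: all traffic returns to the main predictor
kubectl apply -f model.json

# or tear down the whole SeldonDeployment
kubectl delete -f model.json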