Canary Rollout with Seldon and Ambassador

Prerequisites

You will need a Kubernetes cluster with kubectl configured, Helm, pygmentize, and the seldon_core Python package installed locally.

Creating a Kubernetes Cluster

Follow the Kubernetes documentation to create a cluster.

Once created, ensure kubectl is authenticated against the running cluster.
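For example, you can confirm that kubectl is pointing at the new cluster with:

kubectl cluster-info
kubectl get nodes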

Setup

[2]:
!kubectl create namespace seldon
namespace/seldon created
[3]:
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
Context "minikube" modified.
[4]:
!kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
clusterrolebinding.rbac.authorization.k8s.io/kube-system-cluster-admin created

Install Helm

[5]:
!kubectl -n kube-system create sa tiller
!kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
!helm init --service-account tiller
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
$HELM_HOME has been configured at /home/clive/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
[6]:
!kubectl rollout status deploy/tiller-deploy -n kube-system
Waiting for deployment "tiller-deploy" rollout to finish: 0 of 1 updated replicas are available...
deployment "tiller-deploy" successfully rolled out

Start seldon-core

[15]:
!helm install ../../../helm-charts/seldon-core-operator --name seldon-core --set usageMetrics.enabled=true --namespace seldon-system
NAME:   seldon-core
LAST DEPLOYED: Tue Apr 16 09:05:16 2019
NAMESPACE: seldon-system
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/ClusterRoleBinding
NAME                        AGE
seldon-spartakus-volunteer  0s

==> v1/ClusterRoleBinding
seldon-operator-manager-rolebinding  0s

==> v1beta1/Deployment
NAME                        DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
seldon-spartakus-volunteer  1        0        0           0          0s

==> v1/StatefulSet
NAME                                DESIRED  CURRENT  AGE
seldon-operator-controller-manager  1        1        0s

==> v1beta1/ClusterRole
NAME                        AGE
seldon-spartakus-volunteer  0s

==> v1/Service
NAME                                        TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)  AGE
seldon-operator-controller-manager-service  ClusterIP  10.104.156.241  <none>       443/TCP  0s

==> v1/ServiceAccount
NAME                        SECRETS  AGE
seldon-spartakus-volunteer  1        0s

==> v1/Pod(related)
NAME                                  READY  STATUS             RESTARTS  AGE
seldon-operator-controller-manager-0  0/1    ContainerCreating  0         0s

==> v1/Secret
NAME                                   TYPE    DATA  AGE
seldon-operator-webhook-server-secret  Opaque  0     0s

==> v1/ConfigMap
NAME                     DATA  AGE
seldon-spartakus-config  3     0s

==> v1beta1/CustomResourceDefinition
NAME                                         AGE
seldondeployments.machinelearning.seldon.io  0s

==> v1/ClusterRole
seldon-operator-manager-role  0s


NOTES:
NOTES: TODO


[16]:
!kubectl rollout status statefulset.apps/seldon-operator-controller-manager -n seldon-system
partitioned roll out complete: 1 new pods have been updated...

Setup Ingress

There are gRPC issues with the latest Ambassador, so we recommend version 0.40.2 until these are fixed.

[17]:
!helm install stable/ambassador --name ambassador --set image.tag=0.40.2
NAME:   ambassador
LAST DEPLOYED: Tue Apr 16 09:05:42 2019
NAMESPACE: seldon
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME               TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)                     AGE
ambassador-admins  ClusterIP     10.99.202.178  <none>       8877/TCP                    0s
ambassador         LoadBalancer  10.109.52.98   <pending>    80:31387/TCP,443:31577/TCP  0s

==> v1/Deployment
NAME        DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
ambassador  3        3        3           0          0s

==> v1/Pod(related)
NAME                         READY  STATUS             RESTARTS  AGE
ambassador-5b89d44544-jk68t  0/1    ContainerCreating  0         0s
ambassador-5b89d44544-kf6zh  0/1    ContainerCreating  0         0s
ambassador-5b89d44544-smz5f  0/1    ContainerCreating  0         0s

==> v1/ServiceAccount
NAME        SECRETS  AGE
ambassador  1        0s

==> v1beta1/ClusterRole
NAME        AGE
ambassador  0s

==> v1beta1/ClusterRoleBinding
NAME        AGE
ambassador  0s


NOTES:
Congratuations! You've successfully installed Ambassador.

For help, visit our Slack at https://d6e.co/slack or view the documentation online at https://www.getambassador.io.

To get the IP address of Ambassador, run the following commands:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
     You can watch the status of by running 'kubectl get svc -w  --namespace seldon ambassador'

  On GKE/Azure:
  export SERVICE_IP=$(kubectl get svc --namespace seldon ambassador -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

  On AWS:
  export SERVICE_IP=$(kubectl get svc --namespace seldon ambassador -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

  echo http://$SERVICE_IP:

[18]:
!kubectl rollout status deployment.apps/ambassador
Waiting for deployment "ambassador" rollout to finish: 0 of 3 updated replicas are available...
Waiting for deployment "ambassador" rollout to finish: 1 of 3 updated replicas are available...
Waiting for deployment "ambassador" rollout to finish: 2 of 3 updated replicas are available...
deployment "ambassador" successfully rolled out

Port Forward to Ambassador

Run the following in a separate terminal so the client below can reach Ambassador on localhost:8003:

kubectl port-forward $(kubectl get pods -n seldon -l app.kubernetes.io/name=ambassador -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:8080
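
If you want to confirm the port-forward is working before sending predictions, any HTTP response from the forwarded port (even a 404) shows that Ambassador is reachable. A minimal sketch, assuming the requests package is available in your notebook environment:

import requests

# Any HTTP status code confirms the port-forward reaches Ambassador;
# a connection error means the forward is not running.
resp = requests.get("http://localhost:8003/")
print(resp.status_code)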

Launch main model

We will create a very simple Seldon Deployment with a dummy model image seldonio/mock_classifier:1.0. This deployment is named example.

[19]:
!pygmentize model.json
{
    "apiVersion": "machinelearning.seldon.io/v1alpha2",
    "kind": "SeldonDeployment",
    "metadata": {
        "labels": {
            "app": "seldon"
        },
        "name": "example"
    },
    "spec": {
        "name": "production-model",
        "predictors": [
            {
                "componentSpecs": [{
                    "spec": {
                        "containers": [
                            {
                                "image": "seldonio/mock_classifier:1.0",
                                "imagePullPolicy": "IfNotPresent",
                                "name": "classifier"
                            }
                        ],
                        "terminationGracePeriodSeconds": 1
                    }}
                                  ],
                "graph":
                {
                    "children": [],
                    "name": "classifier",
                    "type": "MODEL",
                    "endpoint": {
                        "type": "REST"
                    }},
                "name": "single",
                "replicas": 1
            }
        ]
    }
}
[20]:
!kubectl create -f model.json
seldondeployment.machinelearning.seldon.io/example created
[21]:
!kubectl rollout status deploy/production-model-single-7cd068f
Waiting for deployment "production-model-single-7cd068f" rollout to finish: 0 of 1 updated replicas are available...
deployment "production-model-single-7cd068f" successfully rolled out

Get predictions

[22]:
from seldon_core.seldon_client import SeldonClient
sc = SeldonClient(deployment_name="example",namespace="seldon")
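
The client defaults to contacting Ambassador on localhost:8003, which should match the port-forward above. If you forwarded a different local port, you can point the client at it explicitly; a minimal sketch using the gateway_endpoint argument of SeldonClient:

sc = SeldonClient(
    deployment_name="example",
    namespace="seldon",
    gateway_endpoint="localhost:8003",  # host:port of the Ambassador port-forward
)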

REST Request

[23]:
r = sc.predict(gateway="ambassador",transport="rest")
print(r)
Success:True message:
Request:
data {
  tensor {
    shape: 1
    shape: 1
    values: 0.7187140576488892
  }
}

Response:
meta {
  puid: "9kgmhbs29ip5fkbh2o4pvatf9j"
  requestPath {
    key: "classifier"
    value: "seldonio/mock_classifier:1.0"
  }
}
data {
  names: "proba"
  tensor {
    shape: 1
    shape: 1
    values: 0.0999344962270818
  }
}
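
The client sends a random payload by default (the values shown above). A minimal sketch of supplying your own input instead, assuming the data argument of predict accepts a numpy array as in recent seldon_core releases:

import numpy as np

# Send an explicit 1x1 tensor rather than the random default
r = sc.predict(gateway="ambassador", transport="rest", data=np.array([[0.5]]))
print(r)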

gRPC Request

[24]:
r = sc.predict(gateway="ambassador",transport="grpc")
print(r)
Success:True message:
Request:
data {
  tensor {
    shape: 1
    shape: 1
    values: 0.7613858961910068
  }
}

Response:
meta {
  puid: "4am0cd7h6gg6jgjo3s2oju0oqk"
  requestPath {
    key: "classifier"
    value: "seldonio/mock_classifier:1.0"
  }
}
data {
  names: "proba"
  tensor {
    shape: 1
    shape: 1
    values: 0.10383878514662628
  }
}

Launch Canary

We will now create a new graph for our canary with a new model, seldonio/mock_classifier_rest:1.1. To make it a canary of the original example deployment we add two annotations:

"annotations": {
        "seldon.io/ambassador-weight":"25",
        "seldon.io/ambassador-service-name":"example"
    },

The first says we want to route 25% of the traffic to this canary. The second says we want to use example as our service endpoint rather than the default, which would be the deployment name (here example-canary). This ensures the Ambassador mapping applies to the same prefix as the original deployment.
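
Once the canary is deployed (below), you can shift more or less traffic to it by editing the weight and re-applying the manifest; a sketch, assuming the Seldon operator reconciles the updated annotation. For example, change the annotation in canary.json to

    "seldon.io/ambassador-weight":"50",

and re-apply with

kubectl apply -f canary.json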

[25]:
!pygmentize canary.json
{
    "apiVersion": "machinelearning.seldon.io/v1alpha2",
    "kind": "SeldonDeployment",
    "metadata": {
        "labels": {
            "app": "seldon"
        },
        "name": "example-canary"
    },
    "spec": {
        "name": "canary-model",
        "annotations": {
            "seldon.io/ambassador-weight":"25",
            "seldon.io/ambassador-service-name":"example"
        },
        "predictors": [
            {
                "componentSpecs": [{
                    "spec": {
                        "containers": [
                            {
                                "image": "seldonio/mock_classifier_rest:1.1",
                                "imagePullPolicy": "IfNotPresent",
                                "name": "classifier"
                            }
                        ],
                        "terminationGracePeriodSeconds": 1
                    }}
                                  ],
                "graph":
                {
                    "children": [],
                    "name": "classifier",
                    "type": "MODEL",
                    "endpoint": {
                        "type": "REST"
                    }},
                "name": "single",
                "replicas": 1
            }
        ]
    }
}
[26]:
!kubectl create -f canary.json
seldondeployment.machinelearning.seldon.io/example-canary created
[27]:
!kubectl rollout status deploy/canary-model-single-4c8805f
Waiting for deployment "canary-model-single-4c8805f" rollout to finish: 0 of 1 updated replicas are available...
deployment "canary-model-single-4c8805f" successfully rolled out

Show that our REST requests are now split, with roughly 25% going to the canary.

[28]:
from collections import defaultdict
counts = defaultdict(int)
n = 100
for i in range(n):
    r = sc.predict(gateway="ambassador",transport="rest")
    counts[r.response.meta.requestPath["classifier"]] += 1
for k in counts:
    print(k,(counts[k]/float(n))*100,"%")

seldonio/mock_classifier:1.0 77.0 %
seldonio/mock_classifier_rest:1.1 23.0 %

Now let's test gRPC.

[31]:
counts = defaultdict(int)
n = 100
for i in range(n):
    r = sc.predict(gateway="ambassador",transport="grpc")
    counts[r.response.meta.requestPath["classifier"]] += 1
for k in counts:
    print(k,(counts[k]/float(n))*100,"%")

seldonio/mock_classifier:1.0 80.0 %
seldonio/mock_classifier_rest:1.1 20.0 %