Autoscaling Seldon Deployments

Prerequisites

You will need:

  • A Git clone of Seldon Core

  • A running Kubernetes cluster with kubectl authenticated

    • The cluster should have Heapster and metrics-server running in the kube-system namespace (a quick check is shown after this list)
    • For Minikube, run:
    minikube addons enable metrics-server
    minikube addons enable heapster
    
  • The seldon-core Python package (pip install seldon-core)

  • The Helm client
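
To confirm the metrics pipeline is available before continuing, you can check that metrics-server is running and that node metrics are being reported. This is a quick sanity check; the label selector below assumes the standard metrics-server deployment:

    # verify the metrics-server pod is up
    kubectl get pods -n kube-system -l k8s-app=metrics-server
    # if metrics are flowing, this prints CPU/memory usage per node
    kubectl top nodes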

Creating a Kubernetes Cluster

Follow the Kubernetes documentation to create a cluster.

Once the cluster is created, ensure kubectl is authenticated against it; a quick check is shown below.
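
A quick way to confirm kubectl is pointed at the right cluster:

    # show which context kubectl will use
    kubectl config current-context
    # confirm the API server is reachable
    kubectl cluster-info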

Setup

[1]:
!kubectl create namespace seldon
namespace/seldon created
[2]:
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
Context "gke_seldon-demos_europe-west4-c_standard-cluster-1" modified.
[3]:
!kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
clusterrolebinding.rbac.authorization.k8s.io/kube-system-cluster-admin created

Install Helm

[4]:
!kubectl -n kube-system create sa tiller
!kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
!helm init --service-account tiller
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
$HELM_HOME has been configured at /home/clive/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
[5]:
!kubectl rollout status deploy/tiller-deploy -n kube-system
deployment "tiller-deploy" successfully rolled out

Start seldon-core

[6]:
!helm install ../../../helm-charts/seldon-core-operator --name seldon-core --set usageMetrics.enabled=true --namespace seldon-system
NAME:   seldon-core
LAST DEPLOYED: Thu Aug 29 13:09:08 2019
NAMESPACE: seldon-system
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRole
NAME                          AGE
seldon-operator-manager-role  2s

==> v1/ClusterRoleBinding
NAME                                 AGE
seldon-operator-manager-rolebinding  1s

==> v1/ConfigMap
NAME                     DATA  AGE
seldon-config            1     2s
seldon-spartakus-config  1     2s

==> v1/Pod(related)
NAME                                         READY  STATUS             RESTARTS  AGE
seldon-operator-controller-manager-0         0/1    ContainerCreating  0         1s
seldon-spartakus-volunteer-5b568c587b-ww66l  0/1    ContainerCreating  0         1s

==> v1/Secret
NAME                                   TYPE    DATA  AGE
seldon-operator-webhook-server-secret  Opaque  0     2s

==> v1/Service
NAME                                        TYPE       CLUSTER-IP   EXTERNAL-IP  PORT(S)  AGE
seldon-operator-controller-manager-service  ClusterIP  10.0.22.99   <none>       443/TCP  1s
webhook-server-service                      ClusterIP  10.0.20.211  <none>       443/TCP  1s

==> v1/ServiceAccount
NAME                              SECRETS  AGE
seldon-core-seldon-core-operator  1        2s
seldon-spartakus-volunteer        1        2s

==> v1/StatefulSet
NAME                                READY  AGE
seldon-operator-controller-manager  0/1    1s

==> v1beta1/ClusterRole
NAME                        AGE
seldon-spartakus-volunteer  2s

==> v1beta1/ClusterRoleBinding
NAME                        AGE
seldon-spartakus-volunteer  1s

==> v1beta1/CustomResourceDefinition
NAME                                         AGE
seldondeployments.machinelearning.seldon.io  2s

==> v1beta1/Deployment
NAME                        READY  UP-TO-DATE  AVAILABLE  AGE
seldon-spartakus-volunteer  0/1    1           0          1s


NOTES:
NOTES: TODO


[7]:
!kubectl rollout status statefulset.apps/seldon-operator-controller-manager -n seldon-system
Waiting for 1 pods to be ready...
partitioned roll out complete: 1 new pods have been updated...
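
As an optional check, you can confirm that the SeldonDeployment CRD was registered and that the operator pod is running:

    kubectl get crd seldondeployments.machinelearning.seldon.io
    kubectl get pods -n seldon-system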

Setup Ingress

Please note: There are reported gRPC issues with ambassador (see https://github.com/SeldonIO/seldon-core/issues/473).

[8]:
!helm install stable/ambassador --name ambassador --set crds.keep=false
NAME:   ambassador
LAST DEPLOYED: Thu Aug 29 13:10:03 2019
NAMESPACE: seldon
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME        READY  UP-TO-DATE  AVAILABLE  AGE
ambassador  0/3    3           0          1s

==> v1/Pod(related)
NAME                         READY  STATUS             RESTARTS  AGE
ambassador-684d6f8cd9-cfxwc  0/1    ContainerCreating  0         1s
ambassador-684d6f8cd9-lxwcd  0/1    ContainerCreating  0         1s
ambassador-684d6f8cd9-ncv8b  0/1    ContainerCreating  0         1s

==> v1/Service
NAME              TYPE          CLUSTER-IP   EXTERNAL-IP  PORT(S)                     AGE
ambassador        LoadBalancer  10.0.21.195  <pending>    80:30644/TCP,443:32376/TCP  1s
ambassador-admin  ClusterIP     10.0.30.220  <none>       8877/TCP                    1s

==> v1/ServiceAccount
NAME        SECRETS  AGE
ambassador  1        1s

==> v1beta1/ClusterRole
NAME             AGE
ambassador       1s
ambassador-crds  1s

==> v1beta1/ClusterRoleBinding
NAME             AGE
ambassador       1s
ambassador-crds  1s

==> v1beta1/CustomResourceDefinition
NAME                                          AGE
authservices.getambassador.io                 1s
consulresolvers.getambassador.io              1s
kubernetesendpointresolvers.getambassador.io  1s
kubernetesserviceresolvers.getambassador.io   1s
mappings.getambassador.io                     1s
modules.getambassador.io                      1s
ratelimitservices.getambassador.io            1s
tcpmappings.getambassador.io                  1s
tlscontexts.getambassador.io                  1s
tracingservices.getambassador.io              1s


NOTES:
Congratuations! You've successfully installed Ambassador.

For help, visit our Slack at https://d6e.co/slack or view the documentation online at https://www.getambassador.io.

To get the IP address of Ambassador, run the following commands:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
     You can watch the status of by running 'kubectl get svc -w  --namespace seldon ambassador'

  On GKE/Azure:
  export SERVICE_IP=$(kubectl get svc --namespace seldon ambassador -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

  On AWS:
  export SERVICE_IP=$(kubectl get svc --namespace seldon ambassador -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

  echo http://$SERVICE_IP:

[9]:
!kubectl rollout status deployment.apps/ambassador
Waiting for deployment "ambassador" rollout to finish: 0 of 3 updated replicas are available...
Waiting for deployment "ambassador" rollout to finish: 1 of 3 updated replicas are available...
Waiting for deployment "ambassador" rollout to finish: 2 of 3 updated replicas are available...
deployment "ambassador" successfully rolled out

Create model with autoscaler

To create a model with a HorizontalPodAutoscaler there are two steps:

  1. If you are scaling on a standard metric such as cpu or memory, ensure the container has a resource request for that metric, e.g.:
"resources": {
   "requests": {
      "cpu": "0.5"
   }
}
  2. Add an HPA spec referring to this Deployment, e.g.:
"hpaSpec":
       {
       "minReplicas": 1,
       "maxReplicas": 3,
       "metrics":
           [ {
           "type": "Resource",
           "resource": {
               "name": "cpu",
               "targetAverageUtilization": 10
           }
           }]
       },

The full SeldonDeployment spec is shown below.

[10]:
!pygmentize model_with_hpa.json
{
    "apiVersion": "machinelearning.seldon.io/v1alpha2",
    "kind": "SeldonDeployment",
    "metadata": {
        "name": "seldon-model"
    },
    "spec": {
        "name": "test-deployment",
        "oauth_key": "oauth-key",
        "oauth_secret": "oauth-secret",
        "predictors": [
            {
                "componentSpecs": [{
                    "spec": {
                        "containers": [
                            {
                                "image": "seldonio/mock_classifier:1.0",
                                "imagePullPolicy": "IfNotPresent",
                                "name": "classifier",
                                "resources": {
                                    "requests": {
                                        "cpu": "0.5"
                                    }
                                }
                            }
                        ],
                        "terminationGracePeriodSeconds": 1
                    },
                    "hpaSpec":
                    {
                        "minReplicas": 1,
                        "maxReplicas": 3,
                        "metrics":
                            [ {
                                "type": "Resource",
                                "resource": {
                                    "name": "cpu",
                                    "targetAverageUtilization": 10
                                }
                            }]
                    }
                }],
                "graph": {
                    "children": [],
                    "name": "classifier",
                    "endpoint": {
                        "type" : "REST"
                    },
                    "type": "MODEL"
                },
                "name": "example",
                "replicas": 1
            }
        ]
    }
}
[11]:
!kubectl create -f model_with_hpa.json
seldondeployment.machinelearning.seldon.io/seldon-model created
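
Before generating load, you can send a single test request through Ambassador to confirm the model responds. This sketch assumes the localhost:8003 port-forward described above; the URL follows the Seldon REST path /seldon/<namespace>/<deployment-name>/api/v0.1/predictions:

    curl -s -H 'Content-Type: application/json' \
        -d '{"data":{"ndarray":[[1.0]]}}' \
        http://localhost:8003/seldon/seldon/seldon-model/api/v0.1/predictions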

Create Load

[12]:
!kubectl label nodes $(kubectl get nodes -o jsonpath='{.items[0].metadata.name}') role=locust
node/gke-standard-cluster-1-default-pool-b1c35e14-rrbd labeled
[15]:
!helm install ../../../helm-charts/seldon-core-loadtesting --name loadtest  \
    --set locust.host=http://seldon-model-test-deployment-example:8000 \
    --set oauth.enabled=false \
    --set oauth.key=oauth-key \
    --set oauth.secret=oauth-secret \
    --set locust.hatchRate=1 \
    --set locust.clients=1 \
    --set loadtest.sendFeedback=0 \
    --set locust.minWait=0 \
    --set locust.maxWait=0 \
    --set replicaCount=1
NAME:   loadtest
LAST DEPLOYED: Thu Aug 29 13:17:11 2019
NAMESPACE: seldon
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                   READY  STATUS             RESTARTS  AGE
locust-master-1-znncw  0/1    ContainerCreating  0         0s
locust-slave-1-hnx8n   0/1    ContainerCreating  0         0s

==> v1/ReplicationController
NAME             DESIRED  CURRENT  READY  AGE
locust-master-1  1        1        0      0s
locust-slave-1   1        1        0      0s

==> v1/Service
NAME             TYPE      CLUSTER-IP   EXTERNAL-IP  PORT(S)                                       AGE
locust-master-1  NodePort  10.0.31.100  <none>       5557:32552/TCP,5558:32023/TCP,8089:32677/TCP  0s


After a few minutes you should see the model's deployment scaled up to 3 replicas. You can watch the HPA react as shown below.
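
To watch the HPA scale in real time (press Ctrl-C to stop watching):

    kubectl get hpa -w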

[16]:
!kubectl get pods,deployments,hpa
NAME                                                   READY   STATUS    RESTARTS   AGE
pod/ambassador-684d6f8cd9-cfxwc                        1/1     Running   0          10m
pod/ambassador-684d6f8cd9-lxwcd                        1/1     Running   0          10m
pod/ambassador-684d6f8cd9-ncv8b                        1/1     Running   0          10m
pod/locust-master-1-znncw                              1/1     Running   0          3m13s
pod/locust-slave-1-hnx8n                               1/1     Running   0          3m13s
pod/test-deployment-example-7cd068f-6cc64774ff-dtqwv   2/2     Running   0          2m40s
pod/test-deployment-example-7cd068f-6cc64774ff-gdkb8   2/2     Running   0          8m57s
pod/test-deployment-example-7cd068f-6cc64774ff-l4mn5   2/2     Running   0          5m11s

NAME                                                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/ambassador                        3         3         3            3           10m
deployment.extensions/test-deployment-example-7cd068f   3         3         3            3           8m57s

NAME                                                                  REFERENCE                                    TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/test-deployment-example-7cd068f   Deployment/test-deployment-example-7cd068f   51%/10%   1         3         3          8m57s

Remove Load

After 5-10 minutes you should see the deployment's replicas decrease back to 1.

[17]:
!helm delete loadtest --purge
release "loadtest" deleted
[19]:
!kubectl get pods,deployments,hpa
NAME                                                   READY   STATUS    RESTARTS   AGE
pod/ambassador-684d6f8cd9-cfxwc                        1/1     Running   0          16m
pod/ambassador-684d6f8cd9-lxwcd                        1/1     Running   0          16m
pod/ambassador-684d6f8cd9-ncv8b                        1/1     Running   0          16m
pod/test-deployment-example-7cd068f-6cc64774ff-gdkb8   2/2     Running   0          15m

NAME                                                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/ambassador                        3         3         3            3           16m
deployment.extensions/test-deployment-example-7cd068f   1         1         1            1           15m

NAME                                                                  REFERENCE                                    TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/test-deployment-example-7cd068f   Deployment/test-deployment-example-7cd068f   1%/10%    1         3         1          15m