This page was generated from examples/keda/keda_prom_auto_scale.ipynb.
Scale Seldon Deployments based on Prometheus Metrics.¶
This notebook shows how you can scale Seldon Deployments based on Prometheus metrics via KEDA.
KEDA is a Kubernetes-based Event Driven Autoscaler. With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed.
With KEDA support in Seldon, you can scale your Seldon deployments with any of the scalers listed here. In this example we will scale a Seldon deployment using Prometheus metrics.
Install Seldon Core¶
Install Seldon Core as described in docs
Make sure to add --set keda.enabled=true to the install command.
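For example, when installing the Seldon Core operator via Helm the flag can be passed as below (the chart repo and namespace follow the conventions used elsewhere in this notebook; adjust them to your setup):

```bash
helm install seldon-core seldon-core-operator \
    --repo https://storage.googleapis.com/seldon-charts \
    --namespace seldon-system \
    --set keda.enabled=true
```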
Install Seldon Core Analytic¶
seldon-core-analytics contains a Prometheus and Grafana installation with a basic Grafana dashboard showing the default Prometheus metrics that Seldon exposes for each deployed inference graph. Later we will use this Prometheus service to provide the metrics used to scale the Seldon models.
Install Seldon Core Analytics as described in docs
[ ]:
!helm install seldon-core-analytics ../../helm-charts/seldon-core-analytics -n seldon-system --wait
Install KEDA¶
[ ]:
!kubectl delete -f https://github.com/kedacore/keda/releases/download/v2.0.0/keda-2.0.0.yaml
!kubectl apply -f https://github.com/kedacore/keda/releases/download/v2.0.0/keda-2.0.0.yaml
[ ]:
!kubectl get pod -n keda
Create model with KEDA¶
To create a model with KEDA autoscaling you just need to add a kedaSpec section to the SeldonDeployment, e.g.:
kedaSpec:
  pollingInterval: 15  # Optional. Default: 30 seconds
  minReplicaCount: 1   # Optional. Default: 0
  maxReplicaCount: 5   # Optional. Default: 100
  triggers:
  - type: prometheus
    metadata:
      # Required
      serverAddress: http://seldon-core-analytics-prometheus-seldon.seldon-system.svc.cluster.local
      metricName: access_frequency
      threshold: '10'
      query: rate(seldon_api_executor_client_requests_seconds_count{seldon_app=~"seldon-model-example"}[10s])
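Under the hood KEDA drives a Kubernetes HPA, which scales the deployment roughly in proportion to the ratio between the observed metric and the threshold, clamped between minReplicaCount and maxReplicaCount. A minimal sketch of that rule (the function name and the simplifications are ours, not KEDA's API):

```python
import math


def desired_replicas(current_replicas, metric_value, threshold,
                     min_replicas=1, max_replicas=5):
    """Simplified HPA scaling rule: scale proportionally to the
    ratio between the observed metric and the target threshold."""
    if current_replicas == 0 and metric_value <= 0:
        # With minReplicaCount: 0, KEDA keeps the deployment at zero
        # until the metric activates. (Not used in this example.)
        return 0
    desired = math.ceil(max(current_replicas, 1) * metric_value / threshold)
    return max(min_replicas, min(max_replicas, desired))


print(desired_replicas(1, 40, 10))   # metric 4x over threshold -> 4 replicas
print(desired_replicas(4, 5, 10))    # metric under threshold -> scale down to 2
print(desired_replicas(1, 200, 10))  # clamped at maxReplicaCount -> 5
```

The real HPA algorithm also applies tolerances and stabilization windows, which is why scale-down in this notebook takes several minutes rather than happening immediately.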
The full SeldonDeployment spec is shown below.
[ ]:
VERSION = !cat ../../version.txt
VERSION = VERSION[0]
VERSION
[ ]:
%%writefile model_with_keda_prom.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: seldon-model
spec:
  name: test-deployment
  predictors:
  - componentSpecs:
    - spec:
        containers:
        - image: seldonio/mock_classifier:1.5.0-dev
          imagePullPolicy: IfNotPresent
          name: classifier
          resources:
            requests:
              cpu: '0.5'
      kedaSpec:
        pollingInterval: 15  # Optional. Default: 30 seconds
        minReplicaCount: 1   # Optional. Default: 0
        maxReplicaCount: 5   # Optional. Default: 100
        triggers:
        - type: prometheus
          metadata:
            # Required
            serverAddress: http://seldon-core-analytics-prometheus-seldon.seldon-system.svc.cluster.local
            metricName: access_frequency
            threshold: '10'
            query: rate(seldon_api_executor_client_requests_seconds_count{seldon_app=~"seldon-model-example"}[1m])
    graph:
      children: []
      endpoint:
        type: REST
      name: classifier
      type: MODEL
    name: example
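The trigger's query uses Prometheus rate(), which estimates the per-second increase of a counter over the lookback window (here 1m). A rough illustration of the idea, not Prometheus' exact extrapolating implementation:

```python
def simple_rate(samples):
    """Approximate per-second rate of a monotonically increasing counter
    from (timestamp_seconds, value) samples, like rate(counter[1m])."""
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    return (v1 - v0) / (t1 - t0)


# 600 requests counted over a 60s window -> 10 requests/second,
# which meets the threshold of '10' in the trigger above.
samples = [(0, 0), (30, 290), (60, 600)]
print(simple_rate(samples))  # 10.0
```

In other words, the deployment scales up once the request rate across the window exceeds roughly 10 requests per second per replica.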
[ ]:
!kubectl create -f model_with_keda_prom.yaml
[ ]:
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=seldon-model -o jsonpath='{.items[0].metadata.name}')
Create Load¶
We label some nodes for the loadtester. We label the first two nodes because on Kind the first node listed will be the master.
[ ]:
!kubectl label nodes $(kubectl get nodes -o jsonpath='{.items[0].metadata.name}') role=locust
!kubectl label nodes $(kubectl get nodes -o jsonpath='{.items[1].metadata.name}') role=locust
Before adding load to the model, there is only one replica:
[ ]:
!kubectl get deployment seldon-model-example-0-classifier
[ ]:
!helm install seldon-core-loadtesting seldon-core-loadtesting --repo https://storage.googleapis.com/seldon-charts \
--set locust.host=http://seldon-model-example:8000 \
--set oauth.enabled=false \
--set locust.hatchRate=1 \
--set locust.clients=1 \
--set loadtest.sendFeedback=0 \
--set locust.minWait=0 \
--set locust.maxWait=0 \
--set replicaCount=1
After a few minutes you should see the deployment scaled to 5 replicas.
[ ]:
import json
import time


def getNumberPods():
    dp = !kubectl get deployment seldon-model-example-0-classifier -o json
    dp = json.loads("".join(dp))
    return dp["status"]["replicas"]


scaled = False
for i in range(60):
    pods = getNumberPods()
    print(pods)
    if pods > 1:
        scaled = True
        break
    time.sleep(5)
assert scaled
[ ]:
!kubectl get deployment/seldon-model-example-0-classifier scaledobject/seldon-model-example-0-classifier
Remove Load¶
[ ]:
!helm delete seldon-core-loadtesting
After 5-10 minutes you should see the deployment's replica count decrease to 1.
[ ]:
!kubectl get pods,deployments,hpa,scaledobject
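If you want to wait for the scale-down programmatically rather than eyeballing the output, a small polling helper in the style of the scale-up check above can be used. This is a sketch of ours, not part of Seldon or KEDA; in a real cluster get_replicas would wrap the kubectl call used earlier:

```python
import time


def wait_for_replicas(get_replicas, target, timeout=600, interval=5):
    """Poll get_replicas() until it returns `target` or the timeout
    expires. Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_replicas() == target:
            return True
        time.sleep(interval)
    return False


# Example with a fake getter that "scales down" over successive polls:
counts = iter([5, 3, 1])
print(wait_for_replicas(lambda: next(counts), target=1, interval=0))  # True
```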
[ ]:
!kubectl delete -f model_with_keda_prom.yaml