Custom URL prefix with Seldon and Ambassador

This notebook shows how you can deploy Seldon Deployments with custom Ambassador configuration.

[2]:
from IPython.core.magic import register_line_cell_magic

@register_line_cell_magic
def writetemplate(line, cell):
    with open(line, 'w') as f:
        f.write(cell.format(**globals()))
[3]:
VERSION=!cat ../../../version.txt
VERSION=VERSION[0]
VERSION
[3]:
'1.5.0-dev'

Setup Seldon Core

Use the setup notebook to Setup Cluster with Ambassador Ingress and Install Seldon Core. Instructions are also available online.
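
For reference, a typical install with Helm 3 looks roughly like the cell below. This is only a sketch using the standard Seldon Core chart and the datawire Ambassador chart; the setup notebook remains the authoritative source for the exact values and namespaces.

[ ]:
# A minimal sketch, assuming Helm 3; see the setup notebook for the authoritative steps.
# Install the Seldon Core operator with the Ambassador integration enabled.
!kubectl create namespace seldon-system
!helm install seldon-core seldon-core-operator \
    --repo https://storage.googleapis.com/seldon-charts \
    --namespace seldon-system \
    --set ambassador.enabled=true

# Ambassador itself is installed separately from the datawire chart
# (the setup notebook lists the exact values used there).
!helm repo add datawire https://www.getambassador.io
!helm repo update
!helm install ambassador datawire/ambassador --set crds.keep=false

# Namespace used by the rest of this notebook.
!kubectl create namespace seldon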

Launch main model

We will create a very simple Seldon Deployment with the dummy model image seldonio/mock_classifier (tagged with the version read above). The deployment is named example-custom. We will add custom Ambassador config which sets the Ambassador prefix to /mycompany/ml/.

We must ensure we set the correct service endpoint. Seldon Core creates an endpoint of the form:

<deployment.name>-<predictor.name>.<namespace>:<port>

Where

  • <deployment.name> is the metadata name of the SeldonDeployment: example-custom below

  • <predictor.name> is the name of the predictor in the SeldonDeployment: single below

  • <namespace> is the namespace your SeldonDeployment is deployed to: seldon below

  • <port> is 8000 for REST or 5000 for gRPC

This is the value to use for service in the Ambassador config you create. So for the example below we have:

service: example-custom-single.seldon:8000
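
To make the mapping from the individual pieces to the final string concrete, here is a small sketch that assembles the service value from the names used in the deployment defined below:

[ ]:
# Assemble the Ambassador `service` value from its components.
# The names match the SeldonDeployment created in the next cell.
deployment_name = "example-custom"  # metadata.name of the SeldonDeployment
predictor_name = "single"           # name of the predictor
namespace = "seldon"                # namespace the deployment is created in
port = 8000                         # 8000 for REST, 5000 for gRPC

service = f"{deployment_name}-{predictor_name}.{namespace}:{port}"
print(service)  # example-custom-single.seldon:8000
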
[6]:
%%writetemplate model_custom_ambassador.yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  labels:
    app: seldon
  name: example-custom
spec:
  annotations:
    seldon.io/ambassador-config: 'apiVersion: ambassador/v1

      kind: Mapping

      name: seldon_example_rest_mapping

      prefix: /mycompany/ml/

      service: example-custom-single.seldon:8000

      timeout_ms: 3000'
  name: production-model
  predictors:
  - componentSpecs:
    - spec:
        containers:
        - image: seldonio/mock_classifier:{VERSION}
          imagePullPolicy: IfNotPresent
          name: classifier
        terminationGracePeriodSeconds: 1
    graph:
      children: []
      endpoint:
        type: REST
      name: classifier
      type: MODEL
    name: single
    replicas: 1

[7]:
!kubectl create -f model_custom_ambassador.yaml
seldondeployment.machinelearning.seldon.io/example-custom created
[8]:
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=example-custom -o jsonpath='{.items[0].metadata.name}')
Waiting for deployment "example-custom-single-0-classifier" rollout to finish: 0 of 1 updated replicas are available...
deployment "example-custom-single-0-classifier" successfully rolled out

Get predictions

[9]:
from seldon_core.seldon_client import SeldonClient
sc = SeldonClient(deployment_name="example-custom", namespace="seldon")

REST Request

[10]:
r = sc.predict(gateway="ambassador", transport="rest", gateway_prefix="/mycompany/ml")
assert r.success
print(r)
Success:True message:
Request:
meta {
}
data {
  tensor {
    shape: 1
    shape: 1
    values: 0.7361019060122931
  }
}

Response:
{'data': {'names': ['proba'], 'tensor': {'shape': [1, 1], 'values': [0.10150940716895476]}}, 'meta': {}}
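
The same prediction can also be made with a plain HTTP POST through the custom prefix. This sketch assumes Ambassador is reachable on localhost:8003 (the port-forward used in the setup notebook) and that the standard Seldon protocol path /api/v1.0/predictions applies; adjust both for your cluster.

[ ]:
import json

import requests

# Ambassador endpoint (assumed port-forward) + custom prefix + Seldon REST path.
url = "http://localhost:8003/mycompany/ml/api/v1.0/predictions"
payload = {"data": {"ndarray": [[1.0]]}}

response = requests.post(url, json=payload)
print(response.status_code)
print(json.dumps(response.json(), indent=2))
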
[11]:
!kubectl delete -f model_custom_ambassador.yaml