Iris Prediction Demo
We provide a demo for creating a multi-class classification predictive endpoint for the classic Iris task, using the dataset provided here.
The steps are:
- Download the static iris data and create JSON events
- Create predictive pipelines with XGBoost or Vowpal Wabbit
- Start runtime prediction microservices
- Integrate into Seldon Server
The code for creating the models and predictive pipeline can be found in python/docker/examples/iris.
You will need:
- A *nix-based system with some standard tools: make, wget, python
- A running Seldon server if you wish to do the final Seldon server integration step
The Iris data is provided as-is, so we download it and create a JSON dataset to allow us to get started easily. Alternatively, we could start the Seldon server and ingest the data via the /events endpoint.
Go to python/docker/examples/iris and run:
This will download the raw data and convert it into JSON, placing the JSON data in a Seldon-structured client/DAY folder.
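The conversion step can be sketched as follows. This is illustrative only: the real demo's Makefile handles the download and chooses the exact attribute names, so the column names and event layout below are assumptions.

```python
import json

# Assumed attribute names; the actual demo may name them differently.
COLUMNS = ["f1", "f2", "f3", "f4", "name"]

def csv_row_to_event(row):
    """Turn one raw iris CSV row into a JSON event string."""
    values = row.strip().split(",")
    event = dict(zip(COLUMNS, values))
    # The four measurements are numeric predictive features.
    for col in COLUMNS[:4]:
        event[col] = float(event[col])
    return json.dumps(event)

print(csv_row_to_event("5.1,3.5,1.4,0.2,Iris-setosa"))
```

Each output line is one JSON event, ready to be written into the client/DAY folder (or posted to the /events endpoint).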
For the iris dataset we create very simple pipelines that do the following tasks:
- Create an id feature from the name feature
- Create an SVMLight feature from the four core predictive features (for use by XGBoost)
- Build a model using XGBoost or Vowpal Wabbit.
Example code for the XGBoost pipeline is shown below:
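As a rough sketch of what the pipeline stages do (the function and feature names below are illustrative, not the Seldon pipeline API): an id feature is derived from the name feature, and the four core features are rendered as SVMLight lines that XGBoost can train on.

```python
def name_to_id(events, id_map):
    """Create an integer id feature from the name feature."""
    for e in events:
        e["nameId"] = id_map.setdefault(e["name"], len(id_map))
    return events

def to_svmlight(events, feature_cols):
    """Create SVMLight-format lines from the four core predictive
    features, suitable as training input for XGBoost."""
    lines = []
    for e in events:
        feats = " ".join(f"{i + 1}:{e[c]}" for i, c in enumerate(feature_cols))
        lines.append(f"{e['nameId']} {feats}")
    return lines

events = [{"f1": 5.1, "f2": 3.5, "f3": 1.4, "f4": 0.2, "name": "Iris-setosa"}]
id_map = {}
lines = to_svmlight(name_to_id(events, id_map), ["f1", "f2", "f3", "f4"])
print(lines[0])  # 0 1:5.1 2:3.5 3:1.4 4:0.2
```

The id_map built here is exactly the artifact a runtime predictor needs later, so a real pipeline saves it alongside the model.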
The various pipelines can be run as follows:
- Create an XGBoost pipeline:
- Create a VW pipeline:
The models for the pipelines are stored in the locations above.
Online Prediction Microservices
Now that we have built various models, we can run a real-time predictor as a microservice. It takes in raw features, runs our saved feature-extraction pipeline, and passes the resulting features to the runtime model, which scores them and returns a result.
The various services for each pipeline can be started as below:
- Run XGBoost microservice:
- Run VW microservice:
We can test the pipelines with:
- Send an example to XGBoost microservice:
- Send an example to VW microservice:
This uses curl to fire a JSON test set of features at the microservice; for the XGBoost microservice running on port 5001 this would be
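An equivalent request can also be made from Python; the /predict path and the payload field names here are illustrative assumptions, not the microservice's confirmed API:

```python
import json
import urllib.request

def build_predict_request(host, port, features):
    """Build a JSON prediction request (the endpoint path is assumed)."""
    return urllib.request.Request(
        f"http://{host}:{port}/predict",
        data=json.dumps(features).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_predict_request("localhost", 5001,
                            {"f1": 5.1, "f2": 3.5, "f3": 1.4, "f4": 0.2})
# urllib.request.urlopen(req) would send it to a running microservice
```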
The response should be like:
This shows the prediction to be “Iris-setosa”.
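Assuming the response carries per-class scores as a JSON object (the field names below are made up for illustration; the real service's JSON may differ), picking the winning class is a one-liner:

```python
# Hypothetical response shape with one score per class.
response = {"Iris-setosa": 0.9, "Iris-versicolor": 0.05, "Iris-virginica": 0.05}

# The predicted class is the key with the highest score.
predicted = max(response, key=response.get)
print(predicted)  # Iris-setosa
```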
We can now integrate our microservice(s) into the Seldon server. You will need a running Seldon server.
First add a new client “iris” to the server by editing the server_config.json.
Then run the setup script python initial_setup.py in the scripts folder. Note the JS consumer key that is provided.
Next update zookeeper with settings for the “iris” client so that prediction requests are sent to the Vowpal Wabbit microservice. From inside a zookeeper client run:
You should now be able to call Seldon API predict requests using the JS consumer key provided above.
This should give a response like: