TensorFlow Deep MNIST Advanced Tutorial
This tutorial will take you through creating a microservice that recognizes handwritten digits between 0 and 9, based on the CNN model from the TensorFlow Deep MNIST tutorial. We will go through every step needed to package your model into a fully functional Seldon microservice running in the cloud. If you are just interested in testing the prepackaged Docker image, check out this tutorial.
- Create the model
- Save it as a pipeline
- Create a script to start the microservice
- Create the Docker image
- Push the image to Docker Hub
- Create a Seldon Kubernetes cluster
- Launch the microservice on your cluster
- Test your service
Let’s start by creating a model with TensorFlow. We are going to implement the digit-recognition model from the TensorFlow advanced tutorial, using the Python APIs of TensorFlow and Seldon.
Here we define a convolutional neural network and train it on the MNIST dataset shipped with TensorFlow.
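The network below is a condensed sketch of that model, written against the TF1-style API used by the original Deep MNIST tutorial (placeholders and an explicit session); the variables x, y, sess and keep_prob defined here are the ones referenced later in this tutorial.

```python
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

x = tf.placeholder(tf.float32, [None, 784])   # flattened 28x28 input images
y_ = tf.placeholder(tf.float32, [None, 10])   # one-hot target labels
x_image = tf.reshape(x, [-1, 28, 28, 1])

# First convolutional layer: 32 5x5 filters, followed by 2x2 max pooling.
W_conv1 = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
b_conv1 = tf.Variable(tf.constant(0.1, shape=[32]))
h_conv1 = tf.nn.relu(tf.nn.conv2d(x_image, W_conv1,
                                  strides=[1, 1, 1, 1], padding='SAME') + b_conv1)
h_pool1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1],
                         strides=[1, 2, 2, 1], padding='SAME')

# Second convolutional layer: 64 5x5 filters, again with 2x2 max pooling.
W_conv2 = tf.Variable(tf.truncated_normal([5, 5, 32, 64], stddev=0.1))
b_conv2 = tf.Variable(tf.constant(0.1, shape=[64]))
h_conv2 = tf.nn.relu(tf.nn.conv2d(h_pool1, W_conv2,
                                  strides=[1, 1, 1, 1], padding='SAME') + b_conv2)
h_pool2 = tf.nn.max_pool(h_conv2, ksize=[1, 2, 2, 1],
                         strides=[1, 2, 2, 1], padding='SAME')

# Densely connected layer with dropout (keep_prob is fed at run time).
W_fc1 = tf.Variable(tf.truncated_normal([7 * 7 * 64, 1024], stddev=0.1))
b_fc1 = tf.Variable(tf.constant(0.1, shape=[1024]))
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# Readout layer producing the 10 class probabilities.
W_fc2 = tf.Variable(tf.truncated_normal([1024, 10], stddev=0.1))
b_fc2 = tf.Variable(tf.constant(0.1, shape=[10]))
y = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
for i in range(1000):  # the original tutorial trains for ~20000 iterations
    batch = mnist.train.next_batch(50)
    sess.run(train_step, feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
```

Dropout is active during training (keep_prob=0.5) but must be disabled at prediction time, which is why keep_prob is pinned to 1.0 when we wrap the model below.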
Now that we have a fully functional machine learning model, we are going to turn it into a pipeline and package it as a microservice using Seldon's tools. We can do all of this easily in Python:
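The snippet below sketches all three steps at once, assuming the sess, x, y and keep_prob variables from the training code above. The TensorFlowWrapper import path, the Pipeline_wrapper class and their argument names are assumptions about the seldon Python package; check them against your installed version.

```python
from sklearn.pipeline import Pipeline
from seldon.tensorflow_wrapper import TensorFlowWrapper  # import path is an assumption
import seldon.pipeline.util as sutl                      # module path is an assumption

# 1. Standardise the trained TensorFlow model behind a scikit-learn style interface.
tfw = TensorFlowWrapper(sess, tf_input=x, tf_output=y,
                        tf_constants=[(keep_prob, 1.0)])

# 2. Build a one-step scikit-learn pipeline around the wrapped model.
p = Pipeline([('deep_mnist', tfw)])

# 3. Save the pipeline to disk; the microservice is generated from this saved file.
sutl.Pipeline_wrapper().save_pipeline(p, "/home/seldon/deep_mnist_pipeline")
```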
Let’s look at each of these steps in more detail.
Here we use the TensorFlowWrapper class to standardise our model so that it can be integrated into a generic machine learning pipeline (next step). We need to pass the following as arguments:
- the TensorFlow session (sess)
- the input placeholder (x)
- the output tensor (y)
- a list of the variables that must take a constant value at prediction time (keep_prob must be set to 1.0)
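For example, the wrapping step might look like this; the keyword names are illustrative, so check the TensorFlowWrapper signature shipped with your seldon version:

```python
tfw = TensorFlowWrapper(sess,                             # the TensorFlow session
                        tf_input=x,                       # input placeholder
                        tf_output=y,                      # output tensor
                        tf_constants=[(keep_prob, 1.0)])  # fixed at prediction time
```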
Here we create a scikit-learn pipeline comprising a single step, which corresponds to our TensorFlow model. More information on predictive pipelines can be found here.
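A single-step pipeline wrapping the model (here called tfw, the wrapped TensorFlow model from the previous step) can be built with scikit-learn directly:

```python
from sklearn.pipeline import Pipeline

p = Pipeline([('deep_mnist', tfw)])  # one named step: the wrapped TensorFlow model
```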
Finally, we use Seldon's pipeline wrapper to save the pipeline to disk. The microservice will be generated from this saved file.
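A sketch of the save step follows; Pipeline_wrapper and save_pipeline are assumed names from the seldon package, and the output folder matches the /home/seldon path expected by the launch script later in this tutorial:

```python
import seldon.pipeline.util as sutl  # module path is an assumption

pw = sutl.Pipeline_wrapper()
pw.save_pipeline(p, "/home/seldon/deep_mnist_pipeline")
```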
We are going to write a simple bash script to launch the microservice, saved in a file called run_microservice.sh.
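A minimal run_microservice.sh could look like the following; the launcher module and its flags are assumptions, so check the microservice documentation of your seldon version:

```shell
#!/bin/bash
# Launch a Seldon microservice from the saved pipeline.
# The module name and flags below are illustrative, not the exact seldon CLI.
python -m seldon.microservice --pipeline /home/seldon/deep_mnist_pipeline \
    --model-name deep_mnist
```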
This uses Seldon's script to launch a microservice from the saved pipeline. You can try to run it, but be aware that it requires your pipeline to be saved in /home/seldon.
We are going to create a Docker image from our microservice. Here is the content of the Dockerfile:
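A Dockerfile along these lines covers the three steps described below; the base image name and file paths are assumptions consistent with the rest of this tutorial:

```dockerfile
# Base image with Seldon, Python and common ML libraries (including TensorFlow).
FROM seldonio/pyseldon

# Copy the saved pipeline and the launch script into the image.
COPY deep_mnist_pipeline /home/seldon/deep_mnist_pipeline
COPY run_microservice.sh /home/seldon/run_microservice.sh

# Launch the microservice when the container starts.
CMD ["/bin/bash", "/home/seldon/run_microservice.sh"]
```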
Let’s look at it line by line:
We are going to base our image on the pyseldon image, which has Seldon installed along with Python and a number of libraries, including TensorFlow.
We copy the pipeline and the script that starts the microservice.
And finally, we call our script to launch the microservice!
Now you can choose a name for your image (we went for deep_mnist) and ask Docker to build it using the following command:
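For example, with the seldonio repository name and version tag 1.0 used in this tutorial, run this from the directory containing the Dockerfile:

```shell
docker build -t seldonio/deep_mnist:1.0 .
```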
seldonio is the name of our Docker Hub repository (more about this in the next part) and 1.0 is the version tag of the image.
In order for your Kubernetes cluster on Google Cloud (or any cloud service) to find your Docker image, it needs to be pushed to Docker Hub. You can create a Docker Hub account easily here. Once you have a Docker Hub user ID, you can use the following commands to log in and push your image:
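The push uses the same repository and tag chosen at build time (substitute your own Docker Hub ID for seldonio):

```shell
docker login                          # enter your Docker Hub credentials
docker push seldonio/deep_mnist:1.0   # replace seldonio with your own Docker Hub id
```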
The full documentation for this step can be found here.
The first step is to create a client for your microservice. More information on Seldon clients can be found here. This can be done using seldon-cli as follows:
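The command is roughly as follows; the flag names are illustrative, so run `seldon-cli client --help` for the exact options, and note that the client name deep_mnist_client is just an example:

```shell
# Create a Seldon client backed by the ClientDB datasource.
seldon-cli client --action setup --db-name ClientDB --client-name deep_mnist_client
```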
This requires an existing datasource. ClientDB is a datasource created by Seldon on start-up, but you can use another one that you create with seldon-cli db.
Finally, you can launch your microservice using kubernetes/bin/start-microservice.
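The invocation looks roughly like this; the exact arguments (microservice name, image, endpoint type) are assumptions, so run the script without arguments to see its usage:

```shell
kubernetes/bin/start-microservice --type prediction --client deep_mnist_client \
    -i deep_mnist seldonio/deep_mnist:1.0 rest 1.0
```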
The microservice takes as input a vector of 784 floats corresponding to the pixels of a 28x28 image and returns a list of probabilities for each digit between 0 and 9. In order to test it, you can use the Flask webapp we have created for this purpose.
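As a sketch, you can also build and send a prediction request by hand. The JSON schema and endpoint URL below are assumptions that depend on your Seldon version, but the 784-float input shape is fixed by the model:

```python
import json
import urllib.request

def build_request(pixels):
    """Build a JSON prediction payload for a flattened 28x28 greyscale image."""
    pixels = list(pixels)
    if len(pixels) != 784:
        raise ValueError("expected 784 pixel values (28x28 image)")
    # The payload schema is an assumption; adapt it to your Seldon API version.
    return {"data": {"ndarray": [pixels]}}

# Example: an all-black image (pixel values normalised to [0, 1]).
payload = build_request([0.0] * 784)
body = json.dumps(payload).encode()

# Sending it to the microservice (the endpoint URL is a placeholder for your cluster):
# req = urllib.request.Request("http://<seldon-server>/api/v1/predict", data=body,
#                              headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read())
```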