Installing Seldon

Download Seldon

git clone -b v1.4.10

Create a Kubernetes Cluster

Seldon runs inside a Kubernetes cluster, so you need to follow the Kubernetes guides to create a cluster locally, on your own servers, or in the cloud. We support Kubernetes >= 1.6. Seldon should also run on Kubernetes >= 1.2, except for the GlusterFS persistent volume claim.

If you are testing Seldon on a single machine, you will need at least 6G of memory for your Kubernetes cluster. For single-machine exploration we suggest using minikube.

To create a Kubernetes cluster on Google Cloud you can follow our guidelines.

Create Kubernetes Configuration

Once you have a Kubernetes cluster, Seldon can be started as a series of containers that run within it. As a first step you have to create the required Kubernetes JSON files. A Makefile to create these can be found in kubernetes/conf. You create the configuration by calling:

make clean conf

This is sufficient for a single-node configuration with default settings. To customize settings, edit the Makefile or provide overrides when calling make.

You may optionally need to configure the following:

Memory requests and limits for Kubernetes

The default configuration sets memory requests and limits. If you run Seldon with Spark (2 workers) it will require 10G of memory. You can decrease the Spark memory requests in the Makefile, or you can run without Spark as discussed below.

For a production system you should carefully set the memory requests and limits based on the workload you expect to run.
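As a rough illustration (this fragment is a sketch, not copied from Seldon's generated conf files), memory requests and limits appear in a Kubernetes pod spec as follows; the values are placeholders you would tune for your workload:

```json
{
  "resources": {
    "requests": { "memory": "2Gi" },
    "limits": { "memory": "3Gi" }
  }
}
```

The request is what the scheduler reserves for the container; the limit is the hard cap, so the request should never exceed the limit.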

Grafana and Spark UI Passwords

Seldon will start a Grafana dashboard showing analytics about the runtime predictions, and will also provide access to the Spark UI for monitoring Spark jobs. These are password protected by default, with the initial passwords set in the configuration Makefile.

Please change these default passwords before creating the configuration.

Persistent Storage

Seldon uses a Kubernetes volume to store and share data between containers. A persistent volume claim is made to provide this storage. Out of the box we provide examples for two types of external storage.

You are free to add your own persistent volumes, including dynamic storage providers, that will satisfy the persistent volume claims made by the pods we use.
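For reference, a persistent volume claim of the kind described above looks roughly like the following in Kubernetes JSON; the claim name and storage size here are illustrative, not the ones Seldon's conf generates:

```json
{
  "apiVersion": "v1",
  "kind": "PersistentVolumeClaim",
  "metadata": { "name": "seldon-claim" },
  "spec": {
    "accessModes": [ "ReadWriteMany" ],
    "resources": { "requests": { "storage": "10Gi" } }
  }
}
```

Any persistent volume you supply must match the claim's access mode and offer at least the requested capacity for Kubernetes to bind them.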


GlusterFS works well in a production setting. For this you will need to have set up your own GlusterFS cluster; we provide some notes to help you. Our configuration assumes your GlusterFS volume is called gv0.

You will need to provide the IP addresses of two nodes in your GlusterFS cluster, e.g.:

 cd kubernetes/conf
 make clean conf GLUSTERFS_IP1=<ip of first node> GLUSTERFS_IP2=<ip of second node>
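For orientation, a GlusterFS-backed persistent volume in Kubernetes JSON looks along the lines of the sketch below; the volume and endpoints names are illustrative assumptions, while the gv0 path matches the assumption stated above:

```json
{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": { "name": "seldon-gluster-volume" },
  "spec": {
    "capacity": { "storage": "10Gi" },
    "accessModes": [ "ReadWriteMany" ],
    "glusterfs": {
      "endpoints": "glusterfs-cluster",
      "path": "gv0",
      "readOnly": false
    }
  }
}
```

The "endpoints" field names a Kubernetes Endpoints object, which is where the two GlusterFS node IP addresses you passed to make end up.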

Seldon API Endpoint

By default the Seldon API server endpoint is exposed as a Kubernetes NodePort on port 30015. If you run in the cloud you can change this to a LoadBalancer, e.g.:

 cd kubernetes/conf
 make clean conf SELDON_SERVICE_TYPE=LoadBalancer

External MySQL

By default Seldon starts a single MySQL server, using the persistent storage as its backing store. For production settings it is advisable to use an external database outside the cluster to ensure full data integrity. We provide Kubernetes configuration to replace the server running inside the cluster with a proxy to an external Google SQL server. You will need to follow the steps described here and then create the configuration with:

 cd kubernetes/conf
 make clean conf GOOGLE_SQL_INSTANCE=<google sql instance connection name>
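As a sketch of what the proxy side of this looks like, a Cloud SQL proxy container in a Kubernetes pod spec typically resembles the fragment below. The image and flags follow Google's Cloud SQL proxy conventions; this is an assumption about the generated conf, not a copy of it:

```json
{
  "name": "cloudsql-proxy",
  "image": "gcr.io/cloudsql-docker/gce-proxy:1.11",
  "command": [
    "/cloud_sql_proxy",
    "-instances=<google sql instance connection name>=tcp:3306"
  ]
}
```

The proxy listens on the standard MySQL port 3306 inside the cluster and forwards connections to the external Google SQL instance.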

Launch Seldon

The scripts seldon-up and seldon-down in kubernetes/bin start and stop Seldon; they should be on your PATH.

To launch Seldon with all components, run:

 seldon-up

To start with GlusterFS run


To start with GlusterFS and an external Google MySQL server, run as follows, replacing the DB proxy user and password as required from when you set up the connection to the external Google SQL server:


To shut down Seldon, run:

 seldon-down

The first time you run seldon-up it may take some time to complete as it will need to download all the images from DockerHub.

On successful completion you will have a standard Seldon installation with MySQL, memcached and Zookeeper running within the cluster, as well as a single Seldon API server and a Spark cluster. The appropriate seldon-cli commands will have been run to set up the default settings and a “test” client.

Next Steps


Check the reason it's not finishing using kubectl get all and kubectl get events.

If you plan to test Seldon on a non-local cluster, you will need to ensure your cluster is large enough to run all the Seldon services, or else disable the Kubernetes LimitRanger plugin. In the current version of Kubernetes, to disable this plugin edit <kubernetes>/cluster/<provider>/ and remove LimitRanger from the following line:


Check you have enough memory. At least 12G is needed to run everything locally on a single node with Spark running two workers. If you are using minikube, you can start a minikube Kubernetes cluster with 12G of memory with minikube start --memory=12000
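If you want to verify this before launching, a minimal preflight check along these lines can help. This is a sketch assuming a Linux host with /proc/meminfo; mem_check is a hypothetical helper, not part of Seldon:

```shell
# Check that the machine has at least the given number of GiB of RAM.
# Prints OK or WARN. Assumes Linux (/proc/meminfo).
mem_check() {
  required_kb=$(( $1 * 1024 * 1024 ))
  total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
  if [ "$total_kb" -ge "$required_kb" ]; then
    echo OK
  else
    echo WARN
  fi
}

mem_check 12   # 12G for a full single-node install with Spark (2 workers)
```

If this prints WARN, either add memory to the machine or reduce Seldon's footprint as described below.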

In addition, the following may help:

  1. Reduce the memory allocation for mysql and seldon-server pods before running seldon-up using the following commands:
 cd kubernetes/conf
 make clean conf MYSQL_RESOURCES='"requests":{ "memory" : "2Gi" }' SELDON_SERVER_RESOURCES='"requests":{ "memory" : "2Gi" }'

This command reduces the memory allocation for the mysql and seldon-server pods from the default 3GB each to 2GB each. If you just need to run the Reuters recommendation and Iris prediction examples, even 1 GB for mysql will work.

  2. Disable Spark, in case you do not intend to use it. To do so, invoke seldon-up like this:
SELDON_WITH_SPARK=false seldon-up

Removing Spark would allow you to run with 7G of memory.

If you are using a Vagrant VM to run your kubernetes cluster ensure it has enough memory available from the host machine.

Check you have enough memory. At least 6G is needed to run everything locally on a single node. If you are using minikube, you can start a minikube Kubernetes cluster with 6G of memory with minikube start --memory=6000

The first time you run seldon-up it will need to pull all the container images from Docker Hub. This may take some time on a slow internet connection.

If you are using minikube, first remove any old cluster using the minikube delete command.