MLflow Server

If you have trained an MLflow model, you can deploy one (or several) of its saved versions using Seldon's prepackaged MLflow server. During initialisation, the built-in reusable server creates the Conda environment specified in your conda.yaml file.


To use the built-in MLflow server, the following prerequisites must be met:

  • Your MLmodel artifact folder needs to be accessible remotely (e.g. as gs://seldon-models/mlflow/elasticnet_wine).
  • Your model needs to be compatible with the python_function flavour.
  • Your MLproject environment needs to be specified using Conda.

Conda environment creation

The MLflow built-in server creates the Conda environment specified in your MLmodel's conda.yaml file during initialisation. Note that this approach may slow down the startup of your Kubernetes SeldonDeployment considerably.

In some cases, it may be worth considering creating your own custom reusable server. For example, once the Conda environment is stable, you can build your own image with a fixed set of dependencies and re-use it across model versions that share the same pre-loaded environment.
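For reference, the conda.yaml that the server resolves typically looks along these lines (the environment name and package versions here are illustrative, not prescribed by Seldon):

```yaml
name: mlflow-env
channels:
  - defaults
dependencies:
  - python=3.8
  - pip
  - pip:
      - mlflow
      - scikit-learn==1.0.2
```

Pinning exact versions here keeps the environment reproducible, but each new pod still has to resolve and install it at startup, which is the cost discussed above.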


An example manifest for a saved wine-quality (ElasticNet) prediction model can be found below:

apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: mlflow
spec:
  name: wines
  predictors:
    - graph:
        children: []
        implementation: MLFLOW_SERVER
        modelUri: gs://seldon-models/mlflow/elasticnet_wine
        name: classifier
      name: default
      replicas: 1
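Once the deployment is running, the model is exposed through Seldon's prediction API. A minimal sketch of the request body follows, assuming placeholder values for the ingress host and namespace (the eleven feature values are illustrative wine-quality inputs, not real data):

```python
# Sketch of a request to the deployed model via Seldon's prediction API.
# <ingress-host> and <namespace> are placeholders for your own cluster.
import json

# One row of input features wrapped in Seldon's ndarray payload format.
payload = {"data": {"ndarray": [[7.4, 0.7, 0.0, 1.9, 0.076, 11.0, 34.0,
                                 0.9978, 3.51, 0.56, 9.4]]}}

url = ("http://<ingress-host>/seldon/<namespace>/mlflow"
       "/api/v1.0/predictions")

body = json.dumps(payload)
# POST `body` to `url` with Content-Type: application/json
```

The response carries the model's prediction in the same `data` envelope.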

You can also try out a worked notebook or check our talk at the Spark + AI Summit 2019.