If you have trained an MLflow model, you can deploy one (or several) of its
saved versions using Seldon's prepackaged MLflow server.
During initialisation, the built-in reusable server will create the Conda
environment specified in your model's conda.yaml file.
To use the built-in MLflow server, your saved model must therefore include a
conda.yaml file listing its dependencies.
Conda environment creation
The MLflow built-in server will create the Conda environment specified in your
conda.yaml file during initialisation.
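For reference, the conda.yaml file that MLflow saves alongside a model typically looks like the following sketch (the environment name and package versions here are illustrative, not taken from any particular model):

```yaml
name: mlflow-env
channels:
  - defaults
dependencies:
  - python=3.7
  - scikit-learn=0.23.2
  - pip
  - pip:
      - mlflow
```

The built-in server resolves and installs everything listed here when the pod starts, which is why a large dependency list translates directly into slower startup.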
Note that this approach may slow down your Kubernetes pod startup time
considerably.
In some cases, it may be worth considering creating your own custom reusable server. For example, when the Conda environment can be considered stable, you can build your own image with a fixed set of dependencies. This image can then be reused across different model versions, all sharing the same pre-loaded environment.
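A custom image of that kind could be sketched as below. Note that the base image name, tag, and environment layout are assumptions for illustration only; consult the Seldon Core documentation for the actual prepackaged MLflow server image before building on it:

```dockerfile
# Hypothetical base image; verify the real image name and tag
# for Seldon's prepackaged MLflow server before using this.
FROM seldonio/mlflowserver:latest

# Bake the model's dependencies into the image at build time,
# so no Conda environment needs to be created on pod startup.
COPY conda.yaml /tmp/conda.yaml
RUN conda env create -n mlflow -f /tmp/conda.yaml
```

Because the environment is created once at build time, every pod using this image skips the per-startup Conda resolution step described above.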
An example manifest for a saved ElasticNet wine quality model can be found below:
```yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: mlflow
spec:
  name: wines
  predictors:
    - graph:
        children: []
        implementation: MLFLOW_SERVER
        modelUri: gs://seldon-models/mlflow/elasticnet_wine
        name: classifier
      name: default
      replicas: 1
```
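Assuming Seldon Core is already installed in your cluster, the manifest above could be applied and queried roughly as follows. The namespace, ingress host, and input values are placeholders for illustration; adapt them to your own setup:

```shell
# Create the SeldonDeployment from the manifest above
kubectl apply -f mlflow-deployment.yaml

# Wait until the predictor pod is ready, then send a prediction
# request through Seldon's REST API. The URL path follows the
# /seldon/<namespace>/<deployment-name>/... convention; replace
# <INGRESS_HOST> with your ingress address. The eleven numbers
# below are placeholder wine-feature values.
curl -s -X POST \
    http://<INGRESS_HOST>/seldon/default/mlflow/api/v1.0/predictions \
    -H "Content-Type: application/json" \
    -d '{"data": {"ndarray": [[7.0, 0.27, 0.36, 20.7, 0.045, 45.0, 170.0, 1.001, 3.0, 0.45, 8.8]]}}'
```

The first request after deployment may take noticeably longer, since the built-in server creates the Conda environment from conda.yaml during initialisation.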