Seldon Core Language Wrappers¶
When you have a custom use case that our pre-packaged inference servers cannot cover, you can leverage our language wrappers to containerise your machine learning model and logic.
All our pre-packaged model servers are built using our language wrappers, which means that you can also build your own reusable inference server if required.
This page provides a high-level overview of the concepts and best practices when using the language wrappers.
Language Wrappers Available¶
The supported language wrappers, together with their current stability levels, are outlined below.
Graduated Language Wrappers¶
Below are the language wrappers that have been signed off as stable.
Any Python-based machine learning model can be containerised with our Python language wrapper, which exposes your logic through a simple Python class.
This is currently the most popular wrapper (followed by the Java wrapper), and it is used across a large number of use cases, serving custom logic with models trained using Keras, PyTorch, StatsModels, XGBoost, scikit-learn and even custom operating-system-based proprietary engines.
Please check the Python model wrapping section for more information on how to use it.
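As a minimal sketch of the "simple Python class" idea, the Python wrapper instantiates a user-supplied class and routes prediction requests to its `predict` method. The class name, file layout, and the toy linear model below are illustrative, not prescriptive; see the Python model wrapping section for the authoritative interface.

```python
import numpy as np


class MyModel:
    """A minimal model class in the shape the Python wrapper expects.

    The wrapper creates one instance of this class and calls `predict`
    for each inference request. The fixed linear transform below is a
    stand-in for loading real trained artifacts.
    """

    def __init__(self):
        # In a real model, load trained weights from disk here.
        self.coef = np.array([[1.0, 2.0]])

    def predict(self, X, features_names=None):
        # X arrives as an array-like payload; return predictions
        # in a form the wrapper can serialise back to the client.
        return np.dot(np.asarray(X), self.coef.T)
```

Once a class like this exists, the wrapper tooling (for example via source-to-image) builds it into a container image that serves the model over REST or gRPC, without any server code written by you.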
Incubating Language Wrappers¶
Below are the language wrappers that have not yet been signed off as graduated and stable, but which have an active roadmap and a defined path towards graduation.
The Java wrapper is currently used across a large number of critical environments; however, we require it to reach feature parity with the Python wrapper before graduation. You can follow progress through our GitHub issue #1344.
Please read the Java models wrapped using source-to-image section for further information on how it can be used.