Wrappers and SDKs¶
When you have a custom use-case that our pre-packaged inference servers cannot cover, you can use our language wrappers to containerise your machine learning model and logic.
All our pre-packaged model servers are built using our language wrappers, which means that you can also build your own reusable inference server if required.
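As a rough illustration of the idea, a language wrapper typically expects you to supply a model class exposing a prediction method, which the wrapper then serves over HTTP/gRPC once containerised. The sketch below is a minimal, hypothetical example of such a class; the class name, signature, and the fixed-weight "model" are illustrative assumptions, not the exact interface of any particular wrapper.

```python
from typing import List, Optional


class MyModel:
    """Hypothetical custom model class; a language wrapper would expose
    its predict method as an inference endpoint."""

    def __init__(self) -> None:
        # Load model artefacts here; a fixed weight stands in for a real model.
        self.weight = 2.0

    def predict(self, X: List[float], names: Optional[List[str]] = None) -> List[float]:
        # Custom inference logic: the wrapper calls this for each request.
        return [self.weight * x for x in X]


if __name__ == "__main__":
    model = MyModel()
    print(model.predict([1.0, 2.0, 3.0]))  # -> [2.0, 4.0, 6.0]
```

Because the serving machinery lives in the wrapper rather than in your class, the same pattern can be reused to build your own reusable inference server: package the class and its dependencies into a container image, and the wrapper handles the request handling and protocol details.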