# seldon-openvino

> :exclamation: This Helm chart is deprecated!

Proxy integration to deploy models optimized for Intel OpenVINO in Seldon Core v1.
## Usage

To use this chart, you will first need to add the seldonio Helm repo:

```bash
helm repo add seldonio https://storage.googleapis.com/seldon-charts
helm repo update
```
Once that's done, you should be able to render the inference graph template with:

```bash
helm template $MY_MODEL_NAME seldonio/seldon-openvino --namespace $MODELS_NAMESPACE
```

Note that you can also deploy the inference graph directly to your cluster with:

```bash
helm install $MY_MODEL_NAME seldonio/seldon-openvino --namespace $MODELS_NAMESPACE
```
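If you want to review the rendered manifests before deploying, one common pattern is to pipe the output of `helm template` into `kubectl apply`. This is a sketch, not part of the chart itself, and it assumes `kubectl` is already configured against the target cluster:

```bash
# Render the chart locally, then apply the resulting manifests
# (including the SeldonDeployment) to the cluster.
helm template $MY_MODEL_NAME seldonio/seldon-openvino \
  --namespace $MODELS_NAMESPACE | kubectl apply -f -
```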
## Source Code
## Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| `engine.env.SELDON_LOG_MESSAGES_EXTERNALLY` | bool | | |
| `engine.env.SELDON_LOG_REQUESTS` | bool | | |
| `engine.env.SELDON_LOG_RESPONSES` | bool | | |
| `engine.resources.requests.cpu` | string | | |
| `openvino.image` | string | | |
| `openvino.model.env.LOG_LEVEL` | string | | |
| `openvino.model.input` | string | | |
| `openvino.model.name` | string | | |
| `openvino.model.output` | string | | |
| `openvino.model.path` | string | | |
| `openvino.model.resources` | object | | |
| `openvino.model.src` | string | | |
| `openvino.model_volume` | string | | |
| `openvino.port` | int | | |
| `predictorLabels.fluentd` | string | | |
| `predictorLabels.version` | string | | |
| `sdepLabels.app` | string | | |
| `tfserving_proxy.image` | string | | |