diff --git a/examples/models/openvino_imagenet_ensemble/car.png b/examples/models/openvino_imagenet_ensemble/car.png
deleted file mode 100644
index f22d8d66fa..0000000000
Binary files a/examples/models/openvino_imagenet_ensemble/car.png and /dev/null differ
diff --git a/examples/models/openvino_imagenet_ensemble/input_images.txt b/examples/models/openvino_imagenet_ensemble/input_images.txt
new file mode 100644
index 0000000000..b3bfe87734
--- /dev/null
+++ b/examples/models/openvino_imagenet_ensemble/input_images.txt
@@ -0,0 +1,3 @@
+dog.jpeg 248
+zebra.jpeg 340
+pelican.jpeg 144
\ No newline at end of file
diff --git a/examples/models/openvino_imagenet_ensemble/openvino_imagenet_ensemble.ipynb b/examples/models/openvino_imagenet_ensemble/openvino_imagenet_ensemble.ipynb
index 0f0f02248e..9bb2359919 100644
--- a/examples/models/openvino_imagenet_ensemble/openvino_imagenet_ensemble.ipynb
+++ b/examples/models/openvino_imagenet_ensemble/openvino_imagenet_ensemble.ipynb
@@ -11,80 +11,6 @@
""
]
},
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Dependencies\n",
- "\n",
- " * Seldon-core (```pip install seldon-core```)\n",
- " * Numpy\n",
- " * Keras\n",
- " * Matplotlib\n",
- " * Tensorflow"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Download Squeezenet Model\n",
- "\n",
- "We will download a pre-trained and optimized model for OpenVINO CPU into a local folder."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 1,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "--2019-01-07 12:40:12-- https://s3-eu-west-1.amazonaws.com/seldon-public/openvino-squeeznet-model/squeezenet1.1.xml\n",
- "Resolving s3-eu-west-1.amazonaws.com (s3-eu-west-1.amazonaws.com)... 52.218.48.60\n",
- "Connecting to s3-eu-west-1.amazonaws.com (s3-eu-west-1.amazonaws.com)|52.218.48.60|:443... connected.\n",
- "HTTP request sent, awaiting response... 200 OK\n",
- "Length: 37345 (36K) [text/xml]\n",
- "Saving to: ‘models/squeezenet/1/squeezenet1.1.xml’\n",
- "\n",
- "models/squeezenet/1 100%[===================>] 36.47K --.-KB/s in 0.02s \n",
- "\n",
- "2019-01-07 12:40:12 (1.47 MB/s) - ‘models/squeezenet/1/squeezenet1.1.xml’ saved [37345/37345]\n",
- "\n",
- "--2019-01-07 12:40:12-- https://s3-eu-west-1.amazonaws.com/seldon-public/openvino-squeeznet-model/squeezenet1.1.mapping\n",
- "Resolving s3-eu-west-1.amazonaws.com (s3-eu-west-1.amazonaws.com)... 52.218.48.60\n",
- "Connecting to s3-eu-west-1.amazonaws.com (s3-eu-west-1.amazonaws.com)|52.218.48.60|:443... connected.\n",
- "HTTP request sent, awaiting response... 200 OK\n",
- "Length: 9318 (9.1K) [binary/octet-stream]\n",
- "Saving to: ‘models/squeezenet/1/squeezenet1.1.mapping’\n",
- "\n",
- "models/squeezenet/1 100%[===================>] 9.10K --.-KB/s in 0s \n",
- "\n",
- "2019-01-07 12:40:13 (109 MB/s) - ‘models/squeezenet/1/squeezenet1.1.mapping’ saved [9318/9318]\n",
- "\n",
- "--2019-01-07 12:40:13-- https://s3-eu-west-1.amazonaws.com/seldon-public/openvino-squeeznet-model/squeezenet1.1.bin\n",
- "Resolving s3-eu-west-1.amazonaws.com (s3-eu-west-1.amazonaws.com)... 52.218.48.60\n",
- "Connecting to s3-eu-west-1.amazonaws.com (s3-eu-west-1.amazonaws.com)|52.218.48.60|:443... connected.\n",
- "HTTP request sent, awaiting response... 200 OK\n",
- "Length: 4941984 (4.7M) [application/octet-stream]\n",
- "Saving to: ‘models/squeezenet/1/squeezenet1.1.bin’\n",
- "\n",
- "models/squeezenet/1 100%[===================>] 4.71M 11.4MB/s in 0.4s \n",
- "\n",
- "2019-01-07 12:40:13 (11.4 MB/s) - ‘models/squeezenet/1/squeezenet1.1.bin’ saved [4941984/4941984]\n",
- "\n"
- ]
- }
- ],
- "source": [
- "!mkdir -p models/squeezenet/1 && \\\n",
- " wget -O models/squeezenet/1/squeezenet1.1.xml https://s3-eu-west-1.amazonaws.com/seldon-public/openvino-squeeznet-model/squeezenet1.1.xml && \\\n",
- " wget -O models/squeezenet/1/squeezenet1.1.mapping https://s3-eu-west-1.amazonaws.com/seldon-public/openvino-squeeznet-model/squeezenet1.1.mapping && \\\n",
- " wget -O models/squeezenet/1/squeezenet1.1.bin https://s3-eu-west-1.amazonaws.com/seldon-public/openvino-squeeznet-model/squeezenet1.1.bin "
- ]
- },
{
"cell_type": "markdown",
"metadata": {},
@@ -221,7 +147,9 @@
{
"cell_type": "code",
"execution_count": 8,
- "metadata": {},
+ "metadata": {
+ "collapsed": true
+ },
"outputs": [
{
"name": "stdout",
@@ -367,20 +295,33 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Mount local folder onto minikube for HostPath\n",
- "Run in the current folder:\n",
- "```\n",
- "minikube mount ./models:/opt/ml\n",
- "```\n",
- "\n",
- "This will allow the model folder containing the Squeezenet model to be accessed. For production deployments you would use a NFS volume."
+ "## Build Seldon base images with OpenVINO"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "!cd ../../../wrappers/s2i/python_openvino && cp ../python/s2i . && \\\n",
+ "docker build -f Dockerfile_openvino_base --build-arg http_proxy=$http_proxy --build-arg https_proxy=$https_proxy -t seldon_openvino_base:latest ."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Build Combiner and Transformer Images"
+ "## Build Model, Combiner and Transformer Images"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "!cd resources/model && s2i build -E environment_grpc . seldon_openvino_base:latest seldon-openvino-prediction:0.1"
]
},
{
@@ -398,13 +339,15 @@
}
],
"source": [
- "!eval $(minikube docker-env) && cd resources/combiner && s2i build -E environment_grpc . seldonio/seldon-core-s2i-python36:0.5-SNAPSHOT seldonio/imagenet_combiner:0.1"
+ "!cd resources/combiner && s2i build -E environment_grpc . seldon_openvino_base:latest /imagenet_combiner:0.1"
]
},
{
"cell_type": "code",
"execution_count": 50,
- "metadata": {},
+ "metadata": {
+ "collapsed": true
+ },
"outputs": [
{
"name": "stdout",
@@ -438,14 +381,15 @@
}
],
"source": [
- "!eval $(minikube docker-env) && cd resources/transformer && s2i build -E environment_grpc . seldonio/seldon-core-s2i-python36:0.5-SNAPSHOT seldonio/imagenet_transformer:0.1"
+ "!cd resources/transformer && s2i build -E environment_grpc . seldon_openvino_base:latest seldonio/imagenet_transformer:0.1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Deploy Seldon Intel OpenVINO Graph"
+ "## Deploy Seldon Intel OpenVINO Graph\n",
+ "\n"
]
},
{
@@ -590,314 +534,24 @@
}
],
"source": [
- "get_graph(\"seldon_openvino_ensemble.json\")"
+ "get_graph(\"seldon_ov_predict_ensemble.json\")"
]
},
{
"cell_type": "code",
- "execution_count": 13,
+ "execution_count": null,
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "{\r\n",
- " \u001b[34;01m\"apiVersion\"\u001b[39;49;00m: \u001b[33m\"machinelearning.seldon.io/v1alpha2\"\u001b[39;49;00m,\r\n",
- " \u001b[34;01m\"kind\"\u001b[39;49;00m: \u001b[33m\"SeldonDeployment\"\u001b[39;49;00m,\r\n",
- " \u001b[34;01m\"metadata\"\u001b[39;49;00m: {\r\n",
- " \u001b[34;01m\"labels\"\u001b[39;49;00m: {\r\n",
- " \u001b[34;01m\"app\"\u001b[39;49;00m: \u001b[33m\"seldon\"\u001b[39;49;00m\r\n",
- " },\r\n",
- " \u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"openvino-model\"\u001b[39;49;00m,\r\n",
- "\t\u001b[34;01m\"namespace\"\u001b[39;49;00m: \u001b[33m\"seldon\"\u001b[39;49;00m\t\r\n",
- " },\r\n",
- " \u001b[34;01m\"spec\"\u001b[39;49;00m: {\r\n",
- " \u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"openvino\"\u001b[39;49;00m,\r\n",
- " \u001b[34;01m\"predictors\"\u001b[39;49;00m: [\r\n",
- " {\r\n",
- " \u001b[34;01m\"componentSpecs\"\u001b[39;49;00m: [{\r\n",
- " \u001b[34;01m\"spec\"\u001b[39;49;00m: {\r\n",
- " \u001b[34;01m\"containers\"\u001b[39;49;00m: [\r\n",
- " {\r\n",
- " \u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"imagenet-itransformer\"\u001b[39;49;00m,\t\t\t\t\r\n",
- " \u001b[34;01m\"image\"\u001b[39;49;00m: \u001b[33m\"seldonio/imagenet_transformer:0.1\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\u001b[34;01m\"env\"\u001b[39;49;00m: [\r\n",
- "\t\t\t\t {\r\n",
- "\t\t\t\t\t\u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"TRACING\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t\u001b[34;01m\"value\"\u001b[39;49;00m: \u001b[33m\"1\"\u001b[39;49;00m\r\n",
- "\t\t\t\t },\r\n",
- "\t\t\t\t {\r\n",
- "\t\t\t\t\t\u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"JAEGER_AGENT_HOST\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t\u001b[34;01m\"value\"\u001b[39;49;00m: \u001b[33m\"jaeger-agent\"\u001b[39;49;00m\r\n",
- "\t\t\t\t }\r\n",
- "\t\t\t\t]\r\n",
- " },\r\n",
- " {\r\n",
- " \u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"imagenet-otransformer\"\u001b[39;49;00m,\t\t\t\t\r\n",
- " \u001b[34;01m\"image\"\u001b[39;49;00m: \u001b[33m\"seldonio/imagenet_transformer:0.1\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\u001b[34;01m\"env\"\u001b[39;49;00m: [\r\n",
- "\t\t\t\t {\r\n",
- "\t\t\t\t\t\u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"TRACING\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t\u001b[34;01m\"value\"\u001b[39;49;00m: \u001b[33m\"1\"\u001b[39;49;00m\r\n",
- "\t\t\t\t },\r\n",
- "\t\t\t\t {\r\n",
- "\t\t\t\t\t\u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"JAEGER_AGENT_HOST\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t\u001b[34;01m\"value\"\u001b[39;49;00m: \u001b[33m\"jaeger-agent\"\u001b[39;49;00m\r\n",
- "\t\t\t\t }\r\n",
- "\t\t\t\t]\t\t\t\t\r\n",
- " },\r\n",
- " {\r\n",
- " \u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"imagenet-combiner\"\u001b[39;49;00m,\t\t\t\t\r\n",
- " \u001b[34;01m\"image\"\u001b[39;49;00m: \u001b[33m\"seldonio/imagenet_combiner:0.1\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\u001b[34;01m\"env\"\u001b[39;49;00m: [\r\n",
- "\t\t\t\t {\r\n",
- "\t\t\t\t\t\u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"TRACING\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t\u001b[34;01m\"value\"\u001b[39;49;00m: \u001b[33m\"1\"\u001b[39;49;00m\r\n",
- "\t\t\t\t },\r\n",
- "\t\t\t\t {\r\n",
- "\t\t\t\t\t\u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"JAEGER_AGENT_HOST\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t\u001b[34;01m\"value\"\u001b[39;49;00m: \u001b[33m\"jaeger-agent\"\u001b[39;49;00m\r\n",
- "\t\t\t\t }\r\n",
- "\t\t\t\t]\t\t\t\t\r\n",
- " },\r\n",
- " {\r\n",
- " \u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"tfserving-proxy1\"\u001b[39;49;00m,\t\t\t\t\r\n",
- " \u001b[34;01m\"image\"\u001b[39;49;00m: \u001b[33m\"seldonio/tfserving-proxy:0.2\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\u001b[34;01m\"env\"\u001b[39;49;00m: [\r\n",
- "\t\t\t\t {\r\n",
- "\t\t\t\t\t\u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"TRACING\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t\u001b[34;01m\"value\"\u001b[39;49;00m: \u001b[33m\"1\"\u001b[39;49;00m\r\n",
- "\t\t\t\t },\r\n",
- "\t\t\t\t {\r\n",
- "\t\t\t\t\t\u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"JAEGER_AGENT_HOST\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t\u001b[34;01m\"value\"\u001b[39;49;00m: \u001b[33m\"jaeger-agent\"\u001b[39;49;00m\r\n",
- "\t\t\t\t }\r\n",
- "\t\t\t\t]\t\t\t\t\t\t\t\t\r\n",
- " },\r\n",
- " {\r\n",
- " \u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"tfserving-proxy2\"\u001b[39;49;00m,\t\t\t\r\n",
- " \u001b[34;01m\"image\"\u001b[39;49;00m: \u001b[33m\"seldonio/tfserving-proxy:0.2\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\u001b[34;01m\"env\"\u001b[39;49;00m: [\r\n",
- "\t\t\t\t {\r\n",
- "\t\t\t\t\t\u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"TRACING\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t\u001b[34;01m\"value\"\u001b[39;49;00m: \u001b[33m\"1\"\u001b[39;49;00m\r\n",
- "\t\t\t\t },\r\n",
- "\t\t\t\t {\r\n",
- "\t\t\t\t\t\u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"JAEGER_AGENT_HOST\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t\u001b[34;01m\"value\"\u001b[39;49;00m: \u001b[33m\"jaeger-agent\"\u001b[39;49;00m\r\n",
- "\t\t\t\t }\r\n",
- "\t\t\t\t]\t\t\t\t\t\t\t\t\r\n",
- " },\r\n",
- "\t\t\t {\r\n",
- "\t\t\t\t\u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"openvino-model-server1\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\u001b[34;01m\"image\"\u001b[39;49;00m: \u001b[33m\"intelaipg/openvino-model-server:0.2\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\u001b[34;01m\"command\"\u001b[39;49;00m: [\r\n",
- "\t\t\t\t \u001b[33m\"/ie-serving-py/start_server.sh\"\u001b[39;49;00m\r\n",
- "\t\t\t\t],\r\n",
- "\t\t\t\t\u001b[34;01m\"args\"\u001b[39;49;00m: [\r\n",
- "\t\t\t\t \u001b[33m\"ie_serving\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t \u001b[33m\"model\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t \u001b[33m\"--model_path\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t \u001b[33m\"/opt/ml/squeezenet\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t \u001b[33m\"--model_name\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t \u001b[33m\"squeezenet1.1\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t \u001b[33m\"--port\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t \u001b[33m\"8001\"\u001b[39;49;00m\r\n",
- "\t\t\t\t],\r\n",
- "\t\t\t\t\u001b[34;01m\"ports\"\u001b[39;49;00m: [\r\n",
- "\t\t\t\t {\r\n",
- "\t\t\t\t\t\u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"grpc\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t\u001b[34;01m\"containerPort\"\u001b[39;49;00m: \u001b[34m8001\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t\u001b[34;01m\"protocol\"\u001b[39;49;00m: \u001b[33m\"TCP\"\u001b[39;49;00m\r\n",
- "\t\t\t\t }\r\n",
- "\t\t\t\t],\r\n",
- "\t\t\t\t\u001b[34;01m\"env\"\u001b[39;49;00m: [\r\n",
- "\t\t\t\t {\r\n",
- "\t\t\t\t\t\u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"LOG_LEVEL\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t\u001b[34;01m\"value\"\u001b[39;49;00m: \u001b[33m\"DEBUG\"\u001b[39;49;00m\r\n",
- "\t\t\t\t }\r\n",
- "\t\t\t\t],\r\n",
- "\t\t\t\t\u001b[34;01m\"resources\"\u001b[39;49;00m: {},\r\n",
- "\t\t\t\t\u001b[34;01m\"volumeMounts\"\u001b[39;49;00m: [\r\n",
- "\t\t\t\t {\r\n",
- "\t\t\t\t\t\u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"modelstore\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t\u001b[34;01m\"mountPath\"\u001b[39;49;00m: \u001b[33m\"/opt/ml\"\u001b[39;49;00m\r\n",
- "\t\t\t\t }\r\n",
- "\t\t\t\t]\r\n",
- "\t\t\t },\r\n",
- "\t\t\t {\r\n",
- "\t\t\t\t\u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"openvino-model-server2\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\u001b[34;01m\"image\"\u001b[39;49;00m: \u001b[33m\"intelaipg/openvino-model-server:0.2\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\u001b[34;01m\"command\"\u001b[39;49;00m: [\r\n",
- "\t\t\t\t \u001b[33m\"/ie-serving-py/start_server.sh\"\u001b[39;49;00m\r\n",
- "\t\t\t\t],\r\n",
- "\t\t\t\t\u001b[34;01m\"args\"\u001b[39;49;00m: [\r\n",
- "\t\t\t\t \u001b[33m\"ie_serving\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t \u001b[33m\"model\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t \u001b[33m\"--model_path\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t \u001b[33m\"/opt/ml/squeezenet\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t \u001b[33m\"--model_name\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t \u001b[33m\"squeezenet1.1\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t \u001b[33m\"--port\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t \u001b[33m\"8002\"\u001b[39;49;00m\r\n",
- "\t\t\t\t],\r\n",
- "\t\t\t\t\u001b[34;01m\"ports\"\u001b[39;49;00m: [\r\n",
- "\t\t\t\t {\r\n",
- "\t\t\t\t\t\u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"grpc\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t\u001b[34;01m\"containerPort\"\u001b[39;49;00m: \u001b[34m8002\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t\u001b[34;01m\"protocol\"\u001b[39;49;00m: \u001b[33m\"TCP\"\u001b[39;49;00m\r\n",
- "\t\t\t\t }\r\n",
- "\t\t\t\t],\r\n",
- "\t\t\t\t\u001b[34;01m\"env\"\u001b[39;49;00m: [\r\n",
- "\t\t\t\t {\r\n",
- "\t\t\t\t\t\u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"LOG_LEVEL\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t\u001b[34;01m\"value\"\u001b[39;49;00m: \u001b[33m\"DEBUG\"\u001b[39;49;00m\r\n",
- "\t\t\t\t }\r\n",
- "\t\t\t\t],\r\n",
- "\t\t\t\t\u001b[34;01m\"resources\"\u001b[39;49;00m: {},\r\n",
- "\t\t\t\t\u001b[34;01m\"volumeMounts\"\u001b[39;49;00m: [\r\n",
- "\t\t\t\t {\r\n",
- "\t\t\t\t\t\u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"modelstore\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t\u001b[34;01m\"mountPath\"\u001b[39;49;00m: \u001b[33m\"/opt/ml\"\u001b[39;49;00m\r\n",
- "\t\t\t\t }\r\n",
- "\t\t\t\t]\r\n",
- "\t\t\t }\r\n",
- "\t\t\t],\r\n",
- "\t\t\t\u001b[34;01m\"terminationGracePeriodSeconds\"\u001b[39;49;00m: \u001b[34m1\u001b[39;49;00m,\r\n",
- "\t\t\t\u001b[34;01m\"volumes\"\u001b[39;49;00m: [\r\n",
- "\t\t\t {\r\n",
- "\t\t\t\t\u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"modelstore\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\u001b[34;01m\"volumeSource\"\u001b[39;49;00m: {\r\n",
- "\t\t\t\t \u001b[34;01m\"persistentVolumeClaim\"\u001b[39;49;00m: {\r\n",
- "\t\t\t\t\t\u001b[34;01m\"claimName\"\u001b[39;49;00m: \u001b[33m\"model-store-pvc\"\u001b[39;49;00m\r\n",
- "\t\t\t\t }\r\n",
- "\t\t\t\t}\r\n",
- "\t\t\t }\r\n",
- "\t\t\t]\r\n",
- "\t\t }\r\n",
- "\t\t}],\r\n",
- " \u001b[34;01m\"graph\"\u001b[39;49;00m: {\r\n",
- "\t\t \u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"imagenet-otransformer\"\u001b[39;49;00m,\r\n",
- "\t\t \u001b[34;01m\"endpoint\"\u001b[39;49;00m: { \u001b[34;01m\"type\"\u001b[39;49;00m : \u001b[33m\"GRPC\"\u001b[39;49;00m },\r\n",
- "\t\t \u001b[34;01m\"type\"\u001b[39;49;00m: \u001b[33m\"OUTPUT_TRANSFORMER\"\u001b[39;49;00m,\r\n",
- "\t\t \u001b[34;01m\"children\"\u001b[39;49;00m: [\r\n",
- "\t\t\t{\r\n",
- "\r\n",
- "\t\t \u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"imagenet-itransformer\"\u001b[39;49;00m,\r\n",
- "\t\t \u001b[34;01m\"endpoint\"\u001b[39;49;00m: { \u001b[34;01m\"type\"\u001b[39;49;00m : \u001b[33m\"GRPC\"\u001b[39;49;00m },\r\n",
- "\t\t \u001b[34;01m\"type\"\u001b[39;49;00m: \u001b[33m\"TRANSFORMER\"\u001b[39;49;00m,\r\n",
- "\t\t \u001b[34;01m\"children\"\u001b[39;49;00m: [\r\n",
- "\t\t\t{\r\n",
- "\t\t\t \u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"imagenet-combiner\"\u001b[39;49;00m,\r\n",
- "\t\t\t \u001b[34;01m\"endpoint\"\u001b[39;49;00m: { \u001b[34;01m\"type\"\u001b[39;49;00m : \u001b[33m\"GRPC\"\u001b[39;49;00m },\r\n",
- "\t\t\t \u001b[34;01m\"type\"\u001b[39;49;00m: \u001b[33m\"COMBINER\"\u001b[39;49;00m,\r\n",
- "\t\t\t \u001b[34;01m\"children\"\u001b[39;49;00m: [\r\n",
- "\t\t\t\t{\r\n",
- "\t\t\t\t \u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"tfserving-proxy1\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t \u001b[34;01m\"endpoint\"\u001b[39;49;00m: { \u001b[34;01m\"type\"\u001b[39;49;00m : \u001b[33m\"GRPC\"\u001b[39;49;00m },\r\n",
- "\t\t\t\t \u001b[34;01m\"type\"\u001b[39;49;00m: \u001b[33m\"MODEL\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t \u001b[34;01m\"children\"\u001b[39;49;00m: [],\r\n",
- "\t\t\t\t \u001b[34;01m\"parameters\"\u001b[39;49;00m:\r\n",
- "\t\t\t\t [\r\n",
- "\t\t\t\t\t{\r\n",
- "\t\t\t\t\t \u001b[34;01m\"name\"\u001b[39;49;00m:\u001b[33m\"grpc_endpoint\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t \u001b[34;01m\"type\"\u001b[39;49;00m:\u001b[33m\"STRING\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t \u001b[34;01m\"value\"\u001b[39;49;00m:\u001b[33m\"localhost:8001\"\u001b[39;49;00m\r\n",
- "\t\t\t\t\t},\r\n",
- "\t\t\t\t\t{\r\n",
- "\t\t\t\t\t \u001b[34;01m\"name\"\u001b[39;49;00m:\u001b[33m\"model_name\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t \u001b[34;01m\"type\"\u001b[39;49;00m:\u001b[33m\"STRING\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t \u001b[34;01m\"value\"\u001b[39;49;00m:\u001b[33m\"squeezenet1.1\"\u001b[39;49;00m\r\n",
- "\t\t\t\t\t},\r\n",
- "\t\t\t\t\t{\r\n",
- "\t\t\t\t\t \u001b[34;01m\"name\"\u001b[39;49;00m:\u001b[33m\"model_output\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t \u001b[34;01m\"type\"\u001b[39;49;00m:\u001b[33m\"STRING\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t \u001b[34;01m\"value\"\u001b[39;49;00m:\u001b[33m\"prob\"\u001b[39;49;00m\r\n",
- "\t\t\t\t\t},\r\n",
- "\t\t\t\t\t{\r\n",
- "\t\t\t\t\t \u001b[34;01m\"name\"\u001b[39;49;00m:\u001b[33m\"model_input\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t \u001b[34;01m\"type\"\u001b[39;49;00m:\u001b[33m\"STRING\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t \u001b[34;01m\"value\"\u001b[39;49;00m:\u001b[33m\"data\"\u001b[39;49;00m\r\n",
- "\t\t\t\t\t}\r\n",
- "\t\t\t\t ]\r\n",
- "\t\t\t\t},\r\n",
- "\t\t\t\t{\r\n",
- "\t\t\t\t \u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"tfserving-proxy2\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t \u001b[34;01m\"endpoint\"\u001b[39;49;00m: { \u001b[34;01m\"type\"\u001b[39;49;00m : \u001b[33m\"GRPC\"\u001b[39;49;00m },\r\n",
- "\t\t\t\t \u001b[34;01m\"type\"\u001b[39;49;00m: \u001b[33m\"MODEL\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t \u001b[34;01m\"children\"\u001b[39;49;00m: [],\r\n",
- "\t\t\t\t \u001b[34;01m\"parameters\"\u001b[39;49;00m:\r\n",
- "\t\t\t\t [\r\n",
- "\t\t\t\t\t{\r\n",
- "\t\t\t\t\t \u001b[34;01m\"name\"\u001b[39;49;00m:\u001b[33m\"grpc_endpoint\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t \u001b[34;01m\"type\"\u001b[39;49;00m:\u001b[33m\"STRING\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t \u001b[34;01m\"value\"\u001b[39;49;00m:\u001b[33m\"localhost:8002\"\u001b[39;49;00m\r\n",
- "\t\t\t\t\t},\r\n",
- "\t\t\t\t\t{\r\n",
- "\t\t\t\t\t \u001b[34;01m\"name\"\u001b[39;49;00m:\u001b[33m\"model_name\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t \u001b[34;01m\"type\"\u001b[39;49;00m:\u001b[33m\"STRING\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t \u001b[34;01m\"value\"\u001b[39;49;00m:\u001b[33m\"squeezenet1.1\"\u001b[39;49;00m\r\n",
- "\t\t\t\t\t},\r\n",
- "\t\t\t\t\t{\r\n",
- "\t\t\t\t\t \u001b[34;01m\"name\"\u001b[39;49;00m:\u001b[33m\"model_output\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t \u001b[34;01m\"type\"\u001b[39;49;00m:\u001b[33m\"STRING\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t \u001b[34;01m\"value\"\u001b[39;49;00m:\u001b[33m\"prob\"\u001b[39;49;00m\r\n",
- "\t\t\t\t\t},\r\n",
- "\t\t\t\t\t{\r\n",
- "\t\t\t\t\t \u001b[34;01m\"name\"\u001b[39;49;00m:\u001b[33m\"model_input\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t \u001b[34;01m\"type\"\u001b[39;49;00m:\u001b[33m\"STRING\"\u001b[39;49;00m,\r\n",
- "\t\t\t\t\t \u001b[34;01m\"value\"\u001b[39;49;00m:\u001b[33m\"data\"\u001b[39;49;00m\r\n",
- "\t\t\t\t\t}\r\n",
- "\t\t\t\t ]\r\n",
- "\t\t\t\t}\t\t\t\r\n",
- "\t\t\t ]\r\n",
- "\t\t\t}\r\n",
- "\t\t ]\r\n",
- "\t\t\t}\r\n",
- "\t\t ]\r\n",
- "\t\t},\r\n",
- " \u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"openvino\"\u001b[39;49;00m,\r\n",
- " \u001b[34;01m\"replicas\"\u001b[39;49;00m: \u001b[34m1\u001b[39;49;00m,\r\n",
- "\t\t\u001b[34;01m\"srvOrchSpec\"\u001b[39;49;00m : {\r\n",
- "\t\t \u001b[34;01m\"env\"\u001b[39;49;00m: [\r\n",
- "\t\t\t{\r\n",
- "\t\t\t \u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"TRACING\"\u001b[39;49;00m,\r\n",
- "\t\t\t \u001b[34;01m\"value\"\u001b[39;49;00m: \u001b[33m\"1\"\u001b[39;49;00m\r\n",
- "\t\t\t},\r\n",
- "\t\t\t{\r\n",
- "\t\t\t \u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"JAEGER_AGENT_HOST\"\u001b[39;49;00m,\r\n",
- "\t\t\t \u001b[34;01m\"value\"\u001b[39;49;00m: \u001b[33m\"jaeger-agent\"\u001b[39;49;00m\r\n",
- "\t\t\t},\r\n",
- "\t\t\t{\r\n",
- "\t\t\t \u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"JAEGER_AGENT_PORT\"\u001b[39;49;00m,\r\n",
- "\t\t\t \u001b[34;01m\"value\"\u001b[39;49;00m: \u001b[33m\"5775\"\u001b[39;49;00m\r\n",
- "\t\t\t},\r\n",
- "\t\t\t{\r\n",
- "\t\t\t \u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"JAEGER_SAMPLER_TYPE\"\u001b[39;49;00m,\r\n",
- "\t\t\t \u001b[34;01m\"value\"\u001b[39;49;00m: \u001b[33m\"const\"\u001b[39;49;00m\r\n",
- "\t\t\t},\r\n",
- "\t\t\t{\r\n",
- "\t\t\t \u001b[34;01m\"name\"\u001b[39;49;00m: \u001b[33m\"JAEGER_SAMPLER_PARAM\"\u001b[39;49;00m,\r\n",
- "\t\t\t \u001b[34;01m\"value\"\u001b[39;49;00m: \u001b[33m\"1\"\u001b[39;49;00m\r\n",
- "\t\t\t}\r\n",
- "\t\t ]\t\t\t\t\r\n",
- "\t\t}\r\n",
- " }\r\n",
- " ]\r\n",
- " }\r\n",
- "}\r\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
- "!pygmentize seldon_openvino_ensemble.json"
+ "!pygmentize seldon_ov_predict_ensemble.json"
]
},
{
"cell_type": "code",
"execution_count": 51,
- "metadata": {},
+ "metadata": {
+ "collapsed": false
+ },
"outputs": [
{
"name": "stdout",
@@ -910,153 +564,34 @@
}
],
"source": [
- "!kubectl apply -f pvc.json\n",
- "!kubectl apply -f seldon_openvino_ensemble.json"
+ "!kubectl apply -f seldon_ov_predict_ensemble.json"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Serve Requests\n",
+ "## Testing the pipeline\n",
"\n",
- "**Ensure you port forward ambassador:**\n",
+ "Expose ambassador API endpoint outside of the Kubernetes cluster or connect to it via `kubectl port-forward`.\n",
"\n",
- "```\n",
- "kubectl port-forward $(kubectl get pods -n seldon -l service=ambassador -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:8080\n",
- "```"
+ "Install python dependencies:"
]
},
{
"cell_type": "code",
- "execution_count": 30,
+ "execution_count": null,
"metadata": {},
"outputs": [],
"source": [
- "import tensorflow as tf\n",
- "from seldon_core.proto import prediction_pb2\n",
- "from seldon_core.proto import prediction_pb2_grpc\n",
- "import grpc\n",
- "\n",
- "def grpc_request_ambassador_tensor(deploymentName,namespace,endpoint=\"localhost:8004\",data=None):\n",
- " datadef = prediction_pb2.DefaultData(\n",
- " names = 'x',\n",
- " tftensor = tf.make_tensor_proto(data)\n",
- " )\n",
- " request = prediction_pb2.SeldonMessage(data = datadef)\n",
- " channel = grpc.insecure_channel(endpoint)\n",
- " stub = prediction_pb2_grpc.SeldonStub(channel)\n",
- " if namespace is None:\n",
- " metadata = [('seldon',deploymentName)]\n",
- " else:\n",
- " metadata = [('seldon',deploymentName),('namespace',namespace)]\n",
- " response = stub.Predict(request=request,metadata=metadata)\n",
- " return response\n",
- "\n",
- "def grpc_request_ambassador_bindata(deploymentName,namespace,endpoint=\"localhost:8004\",data=None):\n",
- " request = prediction_pb2.SeldonMessage(binData = data)\n",
- " channel = grpc.insecure_channel(endpoint)\n",
- " stub = prediction_pb2_grpc.SeldonStub(channel)\n",
- " if namespace is None:\n",
- " metadata = [('seldon',deploymentName)]\n",
- " else:\n",
- " metadata = [('seldon',deploymentName),('namespace',namespace)]\n",
- " response = stub.Predict(request=request,metadata=metadata)\n",
- " return response\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 56,
- "metadata": {
- "scrolled": true
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "135.026\n",
- "Eskimo dog, husky\n"
- ]
- }
- ],
- "source": [
- "%matplotlib inline\n",
- "import numpy as np\n",
- "from keras.applications.imagenet_utils import preprocess_input, decode_predictions\n",
- "from keras.preprocessing import image\n",
- "import sys\n",
- "import json\n",
- "import matplotlib.pyplot as plt\n",
- "import datetime\n",
- "API_AMBASSADOR=\"localhost:8003\"\n",
- "\n",
- "def getImage(path):\n",
- " img = image.load_img(path, target_size=(227, 227))\n",
- " x = image.img_to_array(img)\n",
- " plt.imshow(x/255.)\n",
- " x = np.expand_dims(x, axis=0)\n",
- " x = preprocess_input(x)\n",
- " return x\n",
- "\n",
- "def getImageRaw(path):\n",
- " img = image.load_img(path, target_size=(227, 227))\n",
- " x = image.img_to_array(img)\n",
- " plt.imshow(x/255.)\n",
- " return x\n",
- "\n",
- "def getImageBytes(path):\n",
- " with open(path, mode='rb') as file: \n",
- " fileContent = file.read()\n",
- " return fileContent\n",
- "\n",
- "#X = getImage(\"car.png\")\n",
- "#X = X.transpose((0,3,1,2))\n",
- "\n",
- "#X = getImageRaw(\"car.png\")\n",
- "#X = getImageRaw(\"dog.jpeg\")\n",
- "#print(X.shape)\n",
- "X = getImageBytes(\"dog.jpeg\")\n",
- "start_time = datetime.datetime.now()\n",
- "response = grpc_request_ambassador_bindata(\"openvino-model\",\"seldon\",API_AMBASSADOR,data=X)\n",
- "end_time = datetime.datetime.now()\n",
- "duration = (end_time - start_time).total_seconds() * 1000\n",
- "print(duration)\n",
- "\n",
- "print(response.strData)\n"
+ "!pip install -y sedon-core grpcio"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "Send multiple requests to get average response time."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 58,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "130.20825\n"
- ]
- }
- ],
- "source": [
- "durations = []\n",
- "for i in range(100):\n",
- " X = getImageBytes(\"dog.jpeg\")\n",
- " start_time = datetime.datetime.now()\n",
- " response = grpc_request_ambassador_bindata(\"openvino-model\",\"seldon\",API_AMBASSADOR,data=X)\n",
- " end_time = datetime.datetime.now()\n",
- " duration = (end_time - start_time).total_seconds() * 1000\n",
- " durations.append(duration)\n",
- "print(sum(durations)/float(len(durations)))"
+ "Optionally expand `input_images.txt` to include bigger of a complete imagenet dataset in the same format: path to the image separated by the imagenet class number."
]
},
{
@@ -1064,7 +599,9 @@
"execution_count": null,
"metadata": {},
"outputs": [],
- "source": []
+ "source": [
+ "!python seldon_grpc_client.py"
+ ]
}
],
"metadata": {
diff --git a/examples/models/openvino_imagenet_ensemble/pelican.jpeg b/examples/models/openvino_imagenet_ensemble/pelican.jpeg
new file mode 100644
index 0000000000..149fc6f656
Binary files /dev/null and b/examples/models/openvino_imagenet_ensemble/pelican.jpeg differ
diff --git a/examples/models/openvino_imagenet_ensemble/pvc.json b/examples/models/openvino_imagenet_ensemble/pvc.json
deleted file mode 100644
index 159bddcce3..0000000000
--- a/examples/models/openvino_imagenet_ensemble/pvc.json
+++ /dev/null
@@ -1,43 +0,0 @@
----
-{
- "kind": "PersistentVolume",
- "apiVersion": "v1",
- "metadata": {
- "name": "hostpath-pvc"
- },
- "spec": {
- "capacity": {
- "storage": "1Gi"
- },
- "hostPath": {
- "path": "/opt/ml",
- "type": ""
- },
- "accessModes": [
- "ReadWriteOnce"
- ],
- "persistentVolumeReclaimPolicy": "Retain",
- "storageClassName": "manual"
- }
-}
-
----
-{
- "kind": "PersistentVolumeClaim",
- "apiVersion": "v1",
- "metadata": {
- "name": "model-store-pvc"
- },
- "spec": {
- "accessModes": [
- "ReadWriteOnce"
- ],
- "resources": {
- "requests": {
- "storage": "1Gi"
- }
- },
- "storageClassName": "manual"
- }
-}
-
diff --git a/examples/models/openvino_imagenet_ensemble/resources/combiner/ImageNetCombiner.py b/examples/models/openvino_imagenet_ensemble/resources/combiner/ImageNetCombiner.py
index c6c2a118ce..9fe61dfb51 100644
--- a/examples/models/openvino_imagenet_ensemble/resources/combiner/ImageNetCombiner.py
+++ b/examples/models/openvino_imagenet_ensemble/resources/combiner/ImageNetCombiner.py
@@ -1,10 +1,11 @@
import logging
+import numpy as np
logger = logging.getLogger(__name__)
class ImageNetCombiner(object):
def aggregate(self, Xs, features_names):
print("ImageNet Combiner aggregate called")
- logger.info(Xs)
- return (Xs[0]+Xs[1])/2.0
+ logger.debug(Xs)
+ return (np.reshape(Xs[0],(1,-1)) + np.reshape(Xs[1], (1,-1)))/2.0
diff --git a/examples/models/openvino_imagenet_ensemble/resources/combiner/Makefile b/examples/models/openvino_imagenet_ensemble/resources/combiner/Makefile
index 09b08c4508..d881143b27 100644
--- a/examples/models/openvino_imagenet_ensemble/resources/combiner/Makefile
+++ b/examples/models/openvino_imagenet_ensemble/resources/combiner/Makefile
@@ -1,4 +1,4 @@
build:
- s2i build -E environment_grpc . seldonio/seldon-core-s2i-python36:0.4-SNAPSHOT seldonio/imagenet_combiner:0.1
+ s2i build -E environment_grpc . seldon_openvino_base:latest seldonio/imagenet_combiner:0.1
diff --git a/examples/models/openvino_imagenet_ensemble/resources/combiner/README.md b/examples/models/openvino_imagenet_ensemble/resources/combiner/README.md
new file mode 100644
index 0000000000..f0fa036a85
--- /dev/null
+++ b/examples/models/openvino_imagenet_ensemble/resources/combiner/README.md
@@ -0,0 +1,7 @@
+# Combiner component for two models' results
+
+
+### Building
+```bash
+s2i build -E environment_grpc . seldon_openvino_base:latest seldonio/imagenet_combiner:0.1
+```
diff --git a/examples/models/openvino_imagenet_ensemble/resources/model/Prediction.py b/examples/models/openvino_imagenet_ensemble/resources/model/Prediction.py
new file mode 100644
index 0000000000..cacf54dbcb
--- /dev/null
+++ b/examples/models/openvino_imagenet_ensemble/resources/model/Prediction.py
@@ -0,0 +1,100 @@
+import numpy as np
+import logging
+import datetime
+import os
+import sys
+from urllib.parse import urlparse
+import boto3
+from google.cloud import storage
+from openvino.inference_engine import IENetwork, IEPlugin
+
+
+def get_logger(name):
+ logger = logging.getLogger(name)
+ log_formatter = logging.Formatter("%(asctime)s - %(name)s - "
+ "%(levelname)s - %(message)s")
+ logger.setLevel('DEBUG')
+
+ console_handler = logging.StreamHandler()
+ console_handler.setFormatter(log_formatter)
+ logger.addHandler(console_handler)
+
+ return logger
+
+logger = get_logger(__name__)
+
+
+def gs_download_file(path):
+ if path is None:
+ return None
+ parsed_path = urlparse(path)
+ bucket_name = parsed_path.netloc
+ file_path = parsed_path.path[1:]
+ gs_client = storage.Client()
+ bucket = gs_client.get_bucket(bucket_name)
+ blob = bucket.blob(file_path)
+ tmp_path = os.path.join('/tmp', file_path.split(os.sep)[-1])
+ blob.download_to_filename(tmp_path)
+ return tmp_path
+
+
+def s3_download_file(path):
+ if path is None:
+ return None
+ s3_endpoint = os.getenv('S3_ENDPOINT')
+ s3_client = boto3.client('s3', endpoint_url=s3_endpoint)
+ parsed_path = urlparse(path)
+ bucket_name = parsed_path.netloc
+ file_path = parsed_path.path[1:]
+ tmp_path = os.path.join('/tmp', file_path.split(os.sep)[-1])
+ s3_transfer = boto3.s3.transfer.S3Transfer(s3_client)
+ s3_transfer.download_file(bucket_name, file_path, tmp_path)
+ return tmp_path
+
+
+def GetLocalPath(requested_path):
+ parsed_path = urlparse(requested_path)
+ if parsed_path.scheme == '':
+ return requested_path
+ elif parsed_path.scheme == 'gs':
+ return gs_download_file(path=requested_path)
+ elif parsed_path.scheme == 's3':
+ return s3_download_file(path=requested_path)
+
+
+class Prediction(object):
+ def __init__(self):
+ try:
+ xml_path = os.environ["XML_PATH"]
+ bin_path = os.environ["BIN_PATH"]
+
+ except KeyError:
+ print("Please set the environment variables XML_PATH, BIN_PATH")
+ sys.exit(1)
+
+ xml_local_path = GetLocalPath(xml_path)
+ bin_local_path = GetLocalPath(bin_path)
+ print('path object', xml_local_path)
+
+ CPU_EXTENSION = os.getenv('CPU_EXTENSION', "/usr/local/lib/libcpu_extension.so")
+
+ plugin = IEPlugin(device='CPU', plugin_dirs=None)
+ if CPU_EXTENSION:
+ plugin.add_cpu_extension(CPU_EXTENSION)
+ net = IENetwork(model=xml_local_path, weights=bin_local_path)
+ self.input_blob = next(iter(net.inputs))
+ self.out_blob = next(iter(net.outputs))
+ self.batch_size = net.inputs[self.input_blob].shape[0]
+ self.inputs = net.inputs
+ self.outputs = net.outputs
+ self.exec_net = plugin.load(network=net, num_requests=self.batch_size)
+
+
+ def predict(self,X,feature_names):
+ start_time = datetime.datetime.now()
+ results = self.exec_net.infer(inputs={self.input_blob: X})
+ predictions = results[self.out_blob]
+ end_time = datetime.datetime.now()
+ duration = (end_time - start_time).total_seconds() * 1000
+ logger.debug("Processing time: {:.2f} ms".format(duration))
+ return predictions.astype(np.float64)
+
diff --git a/examples/models/openvino_imagenet_ensemble/resources/model/README.md b/examples/models/openvino_imagenet_ensemble/resources/model/README.md
new file mode 100644
index 0000000000..994e69511e
--- /dev/null
+++ b/examples/models/openvino_imagenet_ensemble/resources/model/README.md
@@ -0,0 +1,41 @@
+# OpenVINO prediction component
+
+Model configuration is implemented using environment variables:
+
+`XML_PATH` - s3, gs or local path to the OpenVINO model xml file
+
+`BIN_PATH` - s3, gs or local path to the OpenVINO model bin file
+
+When using GCS, make sure you also set the GOOGLE_APPLICATION_CREDENTIALS variable and mount the corresponding token file.
+
+When using S3 or Minio storage, add the appropriate environment variables with the credentials.
+
+The component executes the inference operation. Processing time is included in the component's debug logs.
+
+Model input and output tensors are determined automatically. The model is assumed to have exactly one input tensor and one output tensor.
+
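The path resolution in `Prediction.py` dispatches on the URL scheme. A minimal sketch (the download stub here stands in for the real google-cloud-storage / boto3 helpers, which fetch the file to `/tmp`):

```python
import os
from urllib.parse import urlparse

def get_local_path(requested_path,
                   download=lambda p: os.path.join('/tmp', p.rsplit('/', 1)[-1])):
    # Mirrors GetLocalPath in Prediction.py: local paths pass through;
    # gs:// and s3:// paths are downloaded to /tmp (stubbed here, handled
    # by google-cloud-storage / boto3 in the real component).
    scheme = urlparse(requested_path).scheme
    if scheme in ('gs', 's3'):
        return download(requested_path)
    return requested_path

print(get_local_path('/models/squeezenet1.1.xml'))  # local path returned unchanged
```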
+### Building example:
+
+```bash
+s2i build -E environment_grpc . seldon_openvino_base:latest seldon-openvino-prediction:0.1
+```
+The base image `seldon_openvino_base:latest` should be created according to this [procedure](../../../../../wrappers/s2i/python_openvino)
+
+
+### Local testing example:
+
+```bash
+docker run -it -v $GOOGLE_APPLICATION_CREDENTIALS:/etc/gcp.json -e GOOGLE_APPLICATION_CREDENTIALS=/etc/gcp.json \
+ -e XML_PATH=gs://inference-eu/models_zoo/resnet_V1_50/resnet_V1_50.xml \
+ -e BIN_PATH=gs://inference-eu/models_zoo/resnet_V1_50/resnet_V1_50.bin \
+ seldon-openvino-prediction:0.1
+starting microservice
+2019-02-05 11:13:32,045 - seldon_core.microservice:main:261 - INFO: Starting microservice.py:main
+2019-02-05 11:13:32,047 - seldon_core.microservice:main:292 - INFO: Annotations: {}
+path object /tmp/resnet_V1_50.xml
+ net = IENetwork(model=xml_local_path, weights=bin_local_path)
+2019-02-05 11:14:19,870 - seldon_core.microservice:main:354 - INFO: Starting servers
+2019-02-05 11:14:19,906 - seldon_core.microservice:grpc_prediction_server:333 - INFO: GRPC microservice Running on port 5000
+```
+
+
+
diff --git a/examples/models/openvino_imagenet_ensemble/resources/model/environment_grpc b/examples/models/openvino_imagenet_ensemble/resources/model/environment_grpc
new file mode 100644
index 0000000000..43c84db3fc
--- /dev/null
+++ b/examples/models/openvino_imagenet_ensemble/resources/model/environment_grpc
@@ -0,0 +1,5 @@
+MODEL_NAME=Prediction
+API_TYPE=GRPC
+SERVICE_TYPE=MODEL
+PERSISTENCE=0
+
diff --git a/examples/models/openvino_imagenet_ensemble/resources/model/requirements.txt b/examples/models/openvino_imagenet_ensemble/resources/model/requirements.txt
new file mode 100644
index 0000000000..e931a7a540
--- /dev/null
+++ b/examples/models/openvino_imagenet_ensemble/resources/model/requirements.txt
@@ -0,0 +1,2 @@
+google-cloud-storage==1.13.0
+boto3==1.9.34
\ No newline at end of file
diff --git a/examples/models/openvino_imagenet_ensemble/resources/transformer/ImageNetTransformer.py b/examples/models/openvino_imagenet_ensemble/resources/transformer/ImageNetTransformer.py
index ce49e070a6..a65007c267 100644
--- a/examples/models/openvino_imagenet_ensemble/resources/transformer/ImageNetTransformer.py
+++ b/examples/models/openvino_imagenet_ensemble/resources/transformer/ImageNetTransformer.py
@@ -1,11 +1,10 @@
import numpy as np
-from keras.applications.imagenet_utils import preprocess_input, decode_predictions
-from keras.preprocessing import image
from seldon_core.proto import prediction_pb2
import tensorflow as tf
import logging
-import sys
-import io
+import datetime
+import cv2
+import os
logger = logging.getLogger(__name__)
@@ -14,26 +13,48 @@ def __init__(self, metrics_ok=True):
print("Init called")
f = open('imagenet_classes.json')
self.cnames = eval(f.read())
-
+ self.size = int(os.getenv('SIZE', 224))
+ self.dtype = os.getenv('DTYPE', 'float')
+ self.classes = int(os.getenv('CLASSES', 1000))
+
+ def crop_resize(self, img,cropx,cropy):
+ y,x,c = img.shape
+ if y < cropy:
+ img = cv2.resize(img, (x, cropy))
+ y = cropy
+ if x < cropx:
+ img = cv2.resize(img, (cropx,y))
+ x = cropx
+ startx = x//2-(cropx//2)
+ starty = y//2-(cropy//2)
+ return img[starty:starty+cropy,startx:startx+cropx,:]
+
def transform_input_grpc(self, request):
- logger.debug("Transform called")
- b = io.BytesIO(request.binData)
- img = image.load_img(b, target_size=(227, 227))
- X = image.img_to_array(img)
- X = np.expand_dims(X, axis=0)
- X = preprocess_input(X)
- X = X.transpose((0,3,1,2))
+ logger.info("Transform called")
+ start_time = datetime.datetime.now()
+ X = np.frombuffer(request.binData, dtype=np.uint8)
+ X = cv2.imdecode(X, cv2.IMREAD_COLOR) # BGR format
+ X = self.crop_resize(X, self.size, self.size)
+ X = X.astype(self.dtype)
+ X = X.transpose(2,0,1).reshape(1,3,self.size,self.size)
+ logger.info("Shape: %s; Dtype: %s; Min: %s; Max: %s",X.shape,X.dtype,np.amin(X),np.amax(X))
+ jpeg_time = datetime.datetime.now()
+ jpeg_duration = (jpeg_time - start_time).total_seconds() * 1000
+ logger.info("jpeg preprocessing: %s ms", jpeg_duration)
datadef = prediction_pb2.DefaultData(
names = 'x',
tftensor = tf.make_tensor_proto(X)
)
+ end_time = datetime.datetime.now()
+ duration = (end_time - start_time).total_seconds() * 1000
+ logger.info("Total transformation: %s ms", duration)
request = prediction_pb2.SeldonMessage(data = datadef)
return request
def transform_output_grpc(self, request):
- logger.debug("Transform output called")
+ logger.info("Transform output called")
result = tf.make_ndarray(request.data.tftensor)
- result = result.reshape(1,1000)
+ result = result.reshape(1,self.classes)
single_result = result[[0],...]
ma = np.argmax(single_result)
diff --git a/examples/models/openvino_imagenet_ensemble/resources/transformer/Makefile b/examples/models/openvino_imagenet_ensemble/resources/transformer/Makefile
index 91062b05f5..0aa81684e0 100644
--- a/examples/models/openvino_imagenet_ensemble/resources/transformer/Makefile
+++ b/examples/models/openvino_imagenet_ensemble/resources/transformer/Makefile
@@ -1,3 +1,3 @@
build:
- s2i build -E environment_grpc . seldonio/seldon-core-s2i-python36:0.5-SNAPSHOT seldonio/imagenet_transformer:0.1
+ s2i build -E environment_grpc . seldon_openvino_base:latest seldonio/imagenet_transformer:0.1
diff --git a/examples/models/openvino_imagenet_ensemble/resources/transformer/README.md b/examples/models/openvino_imagenet_ensemble/resources/transformer/README.md
new file mode 100644
index 0000000000..7f2a962c2b
--- /dev/null
+++ b/examples/models/openvino_imagenet_ensemble/resources/transformer/README.md
@@ -0,0 +1,24 @@
+## Transformer component
+
+Example implementation of data transformation tasks.
+
+The input transformation function accepts the binary representation of JPEG content as input.
+It performs the following operations:
+- convert the compressed JPEG content to a numpy array (BGR format)
+- crop/resize the image to the square shape set in the environment variable `SIZE` (224 by default)
+- transpose the data to NCHW layout
+
+
+The output transformation function consumes the output of ImageNet classification models.
+It converts the array of probabilities for each ImageNet class into a class name
+and returns a 'human readable' string with the most likely class name.
+The function uses the `CLASSES` environment variable to define the expected number of classes in the output.
+Depending on the model this can be 1000 (the default value) or 1001.
+
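The input preprocessing can be sketched with plain numpy (a minimal sketch of the center-crop and NCHW transpose only, omitting the cv2 JPEG decode and the upscale branch used when the image is smaller than the crop):

```python
import numpy as np

SIZE = 224  # matches the SIZE environment variable default

def center_crop_nchw(img, size=SIZE):
    # img: HWC uint8 array (as produced by cv2.imdecode, BGR order).
    # Center-crop to (size, size), then transpose HWC -> CHW and add
    # the batch dimension, giving the NCHW layout the model expects.
    y, x, _ = img.shape
    sx, sy = x // 2 - size // 2, y // 2 - size // 2
    cropped = img[sy:sy + size, sx:sx + size, :]
    return cropped.astype('float').transpose(2, 0, 1).reshape(1, 3, size, size)

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a decoded JPEG
print(center_crop_nchw(frame).shape)  # (1, 3, 224, 224)
```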
+
+### Building example:
+```bash
+s2i build -E environment_grpc . seldon_openvino_base:latest seldonio/imagenet_transformer:0.1
+```
+
+The base image `seldon_openvino_base:latest` should be created according to this [procedure](../../../../../wrappers/s2i/python_openvino)
\ No newline at end of file
diff --git a/examples/models/openvino_imagenet_ensemble/resources/transformer/environment_grpc b/examples/models/openvino_imagenet_ensemble/resources/transformer/environment_grpc
index 8fb23e694e..bc883fb1fd 100644
--- a/examples/models/openvino_imagenet_ensemble/resources/transformer/environment_grpc
+++ b/examples/models/openvino_imagenet_ensemble/resources/transformer/environment_grpc
@@ -2,3 +2,4 @@ MODEL_NAME=ImageNetTransformer
API_TYPE=GRPC
SERVICE_TYPE=TRANSFORMER
PERSISTENCE=0
+
diff --git a/examples/models/openvino_imagenet_ensemble/resources/transformer/requirements.txt b/examples/models/openvino_imagenet_ensemble/resources/transformer/requirements.txt
index b75cccccae..e69de29bb2 100644
--- a/examples/models/openvino_imagenet_ensemble/resources/transformer/requirements.txt
+++ b/examples/models/openvino_imagenet_ensemble/resources/transformer/requirements.txt
@@ -1,3 +0,0 @@
-numpy>=1.8.2
-keras
-pillow
diff --git a/examples/models/openvino_imagenet_ensemble/seldon_grpc_client.py b/examples/models/openvino_imagenet_ensemble/seldon_grpc_client.py
new file mode 100644
index 0000000000..44c79135ef
--- /dev/null
+++ b/examples/models/openvino_imagenet_ensemble/seldon_grpc_client.py
@@ -0,0 +1,60 @@
+from seldon_core.proto import prediction_pb2
+from seldon_core.proto import prediction_pb2_grpc
+import grpc
+import datetime
+
+
+API_AMBASSADOR="localhost:8080"
+
+
+def grpc_request_ambassador_bindata(deploymentName,namespace,endpoint="localhost:8004",data=None):
+ request = prediction_pb2.SeldonMessage(binData = data)
+ channel = grpc.insecure_channel(endpoint)
+ stub = prediction_pb2_grpc.SeldonStub(channel)
+ if namespace is None:
+ metadata = [('seldon',deploymentName)]
+ else:
+ metadata = [('seldon',deploymentName),('namespace',namespace)]
+ response = stub.Predict(request=request,metadata=metadata)
+ return response
+
+
+def getImageBytes(path):
+ with open(path, mode='rb') as file:
+ fileContent = file.read()
+ return fileContent
+
+
+fc = open('imagenet_classes.json')
+cnames = eval(fc.read())
+print(type(cnames))
+
+input_images = "input_images.txt"
+with open(input_images) as f:
+ lines = f.readlines()
+
+i = 0
+matched = 0
+durations = []
+for j in range(20): # repeat the sequence of requests
+ for line in lines:
+ path, label = line.strip().split(" ")
+ X = getImageBytes(path)
+ start_time = datetime.datetime.now()
+ response = grpc_request_ambassador_bindata("openvino-model","default",API_AMBASSADOR,data=X)
+ end_time = datetime.datetime.now()
+ duration = (end_time - start_time).total_seconds() * 1000
+ durations.append(duration)
+ print("Duration",duration)
+ i += 1
+ if response.strData == cnames[int(label)]:
+ matched += 1
+print("average duration:",sum(durations)/float(len(durations)))
+print("average accuracy:",matched/float(len(durations))*100)
\ No newline at end of file
diff --git a/examples/models/openvino_imagenet_ensemble/seldon_ov_predict_ensemble.json b/examples/models/openvino_imagenet_ensemble/seldon_ov_predict_ensemble.json
new file mode 100644
index 0000000000..ec67c2d336
--- /dev/null
+++ b/examples/models/openvino_imagenet_ensemble/seldon_ov_predict_ensemble.json
@@ -0,0 +1,235 @@
+{
+ "apiVersion": "machinelearning.seldon.io/v1alpha2",
+ "kind": "SeldonDeployment",
+ "metadata": {
+ "labels": {
+ "app": "seldon"
+ },
+ "name": "openvino-model",
+ "namespace": "seldon"
+ },
+ "spec": {
+ "name": "openvino",
+ "predictors": [
+ {
+ "componentSpecs": [{
+ "spec": {
+ "containers": [
+ {
+ "name": "imagenet-itransformer",
+ "image": "seldon/imagenet_transformer:0.1",
+ "env": [
+ {
+ "name": "TRACING",
+ "value": "1"
+ },
+ {
+ "name": "JAEGER_AGENT_HOST",
+ "value": "jaeger-agent"
+ }
+ ]
+ },
+ {
+ "name": "imagenet-otransformer",
+ "image": "seldon/imagenet_transformer:0.1",
+ "env": [
+ {
+ "name": "TRACING",
+ "value": "1"
+ },
+ {
+ "name": "JAEGER_AGENT_HOST",
+ "value": "jaeger-agent"
+ }
+ ]
+ },
+ {
+ "name": "imagenet-combiner",
+ "image": "seldon/imagenet_combiner:0.1",
+ "env": [
+ {
+ "name": "TRACING",
+ "value": "1"
+ },
+ {
+ "name": "JAEGER_AGENT_HOST",
+ "value": "jaeger-agent"
+ }
+ ]
+ },
+ {
+ "name": "prediction1",
+ "image": "seldon/seldon-openvino-prediction:0.1",
+ "resources": {
+ "requests": {
+ "cpu": "1"
+ },
+ "limits": {
+ "cpu": "32"
+ }
+ },
+ "env": [
+ {
+ "name": "XML_PATH",
+ "value": "gs://inference-eu/serving/densenet_169_int8/1/densenet_169_i8.xml"
+ },
+ {
+ "name": "BIN_PATH",
+ "value": "gs://inference-eu/serving/densenet_169_int8/1/densenet_169_i8.bin"
+ },
+ {
+ "name": "KMP_SETTINGS",
+ "value": "1"
+ },
+ {
+ "name": "KMP_AFFINITY",
+ "value": "granularity=fine,verbose,compact,1,0"
+ },
+ {
+ "name": "KMP_BLOCKTIME",
+ "value": "1"
+ },
+ {
+ "name": "OMP_NUM_THREADS",
+ "value": "16"
+ },
+ {
+ "name": "http_proxy",
+ "value": ""
+ },
+ {
+ "name": "https_proxy",
+ "value": ""
+ },
+ {
+ "name": "TRACING",
+ "value": "1"
+ },
+ {
+ "name": "JAEGER_AGENT_HOST",
+ "value": "jaeger-agent"
+ }
+ ]
+ },
+ {
+ "name": "prediction2",
+ "image": "seldon/seldon-openvino-prediction:0.1",
+ "resources": {
+ "requests": {
+ "cpu": "1"
+ },
+ "limits": {
+ "cpu": "32"
+ }
+ },
+ "env": [
+ {
+ "name": "XML_PATH",
+ "value": "gs://inference-eu/serving/resnet_50_int8/1/resnet_50_i8.xml"
+ },
+ {
+ "name": "BIN_PATH",
+ "value": "gs://inference-eu/serving/resnet_50_int8/1/resnet_50_i8.bin"
+ },
+ {
+ "name": "KMP_SETTINGS",
+ "value": "1"
+ },
+ {
+ "name": "KMP_AFFINITY",
+ "value": "granularity=fine,verbose,compact,1,0"
+ },
+ {
+ "name": "KMP_BLOCKTIME",
+ "value": "1"
+ },
+ {
+ "name": "OMP_NUM_THREADS",
+ "value": "16"
+ },
+ {
+ "name": "http_proxy",
+ "value": ""
+ },
+ {
+ "name": "https_proxy",
+ "value": ""
+ },
+ {
+ "name": "TRACING",
+ "value": "1"
+ },
+ {
+ "name": "JAEGER_AGENT_HOST",
+ "value": "jaeger-agent"
+ }
+ ]
+ }
+ ],
+ "terminationGracePeriodSeconds": 1
+ }
+ }],
+ "graph": {
+ "name": "imagenet-otransformer",
+ "endpoint": { "type" : "GRPC" },
+ "type": "OUTPUT_TRANSFORMER",
+ "children": [
+ {
+
+ "name": "imagenet-itransformer",
+ "endpoint": { "type" : "GRPC" },
+ "type": "TRANSFORMER",
+ "children": [
+ {
+ "name": "imagenet-combiner",
+ "endpoint": { "type" : "GRPC" },
+ "type": "COMBINER",
+ "children": [
+ {
+ "name": "prediction1",
+ "endpoint": { "type" : "GRPC" },
+ "type": "MODEL",
+ "children": []
+ },
+ {
+ "name": "prediction2",
+ "endpoint": { "type" : "GRPC" },
+ "type": "MODEL",
+ "children": []
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ },
+ "name": "openvino",
+ "replicas": 0,
+ "svcOrchSpec" : {
+ "env": [
+ {
+ "name": "TRACING",
+ "value": "1"
+ },
+ {
+ "name": "JAEGER_AGENT_HOST",
+ "value": "jaeger-agent"
+ },
+ {
+ "name": "JAEGER_AGENT_PORT",
+ "value": "5775"
+ },
+ {
+ "name": "JAEGER_SAMPLER_TYPE",
+ "value": "const"
+ },
+ {
+ "name": "JAEGER_SAMPLER_PARAM",
+ "value": "1"
+ }
+ ]
+ }
+ }
+ ]
+ }
+}
diff --git a/examples/models/openvino_imagenet_ensemble/seldon_openvino_ensemble.json b/examples/models/openvino_imagenet_ensemble/seldon_ovms_ensemble.json
similarity index 98%
rename from examples/models/openvino_imagenet_ensemble/seldon_openvino_ensemble.json
rename to examples/models/openvino_imagenet_ensemble/seldon_ovms_ensemble.json
index 2303741e16..226ea6f224 100644
--- a/examples/models/openvino_imagenet_ensemble/seldon_openvino_ensemble.json
+++ b/examples/models/openvino_imagenet_ensemble/seldon_ovms_ensemble.json
@@ -87,7 +87,7 @@
},
{
"name": "openvino-model-server1",
- "image": "intelaipg/openvino-model-server:0.2",
+ "image": "intelaipg/openvino-model-server:latest",
"command": [
"/ie-serving-py/start_server.sh"
],
@@ -124,7 +124,7 @@
},
{
"name": "openvino-model-server2",
- "image": "intelaipg/openvino-model-server:0.2",
+ "image": "intelaipg/openvino-model-server:latest",
"command": [
"/ie-serving-py/start_server.sh"
],
diff --git a/examples/models/openvino_imagenet_ensemble/zebra.jpeg b/examples/models/openvino_imagenet_ensemble/zebra.jpeg
new file mode 100644
index 0000000000..d31d346c76
Binary files /dev/null and b/examples/models/openvino_imagenet_ensemble/zebra.jpeg differ
diff --git a/wrappers/s2i/python/README.md b/wrappers/s2i/python/README.md
index 85cf050caa..4464081131 100644
--- a/wrappers/s2i/python/README.md
+++ b/wrappers/s2i/python/README.md
@@ -14,5 +14,3 @@ e.g. from 0.3-SNAPSHOT to release 0.3 and create 0.4-SNAPSHOT
-
-
diff --git a/wrappers/s2i/python_openvino/Dockerfile_openvino_base b/wrappers/s2i/python_openvino/Dockerfile_openvino_base
new file mode 100644
index 0000000000..5990110dd5
--- /dev/null
+++ b/wrappers/s2i/python_openvino/Dockerfile_openvino_base
@@ -0,0 +1,79 @@
+FROM intelpython/intelpython3_core as DEV
+RUN apt-get update && apt-get install -y \
+ curl \
+ ca-certificates \
+ libgfortran3 \
+ vim \
+ build-essential \
+ cmake \
+ wget \
+ libssl-dev \
+ git \
+ libboost-regex-dev \
+ gcc-multilib \
+ g++-multilib \
+ libgtk2.0-dev \
+ pkg-config \
+ unzip \
+ automake \
+ libtool \
+ autoconf \
+ libpng-dev \
+ libcairo2-dev \
+ libpango1.0-dev \
+ libglib2.0-dev \
+ libswscale-dev \
+ libavcodec-dev \
+ libavformat-dev \
+ libgstreamer1.0-0 \
+ gstreamer1.0-plugins-base \
+ libusb-1.0-0-dev \
+ libopenblas-dev
+
+ARG DLDT_DIR=/dldt-2018_R5
+RUN git clone --depth=1 -b 2018_R5 https://github.com/opencv/dldt.git ${DLDT_DIR} && \
+ cd ${DLDT_DIR} && git submodule init && git submodule update --recursive && \
+ rm -Rf .git && rm -Rf model-optimizer
+
+WORKDIR ${DLDT_DIR}
+RUN curl -L -o ${DLDT_DIR}/mklml_lnx_2019.0.1.20180928.tgz https://github.com/intel/mkl-dnn/releases/download/v0.17.2/mklml_lnx_2019.0.1.20180928.tgz && \
+ tar -xzf ${DLDT_DIR}/mklml_lnx_2019.0.1.20180928.tgz && rm ${DLDT_DIR}/mklml_lnx_2019.0.1.20180928.tgz
+WORKDIR ${DLDT_DIR}/inference-engine
+RUN mkdir build && cd build && cmake -DGEMM=MKL -DMKLROOT=${DLDT_DIR}/mklml_lnx_2019.0.1.20180928 -DENABLE_MKL_DNN=ON -DCMAKE_BUILD_TYPE=Release ..
+RUN cd build && make -j4
+RUN pip install cython numpy && mkdir ie_bridges/python/build && cd ie_bridges/python/build && \
+ cmake -DInferenceEngine_DIR=${DLDT_DIR}/inference-engine/build -DPYTHON_EXECUTABLE=`which python` -DPYTHON_LIBRARY=/opt/conda/lib/libpython3.6m.so -DPYTHON_INCLUDE_DIR=/opt/conda/include/python3.6m .. && \
+ make -j4
+
+FROM intelpython/intelpython3_core as PROD
+
+LABEL io.openshift.s2i.scripts-url="image:///s2i/bin"
+
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ curl \
+ ca-certificates \
+ build-essential \
+ python3-setuptools \
+ vim
+
+COPY --from=DEV /dldt-2018_R5/inference-engine/bin/intel64/Release/lib/*.so /usr/local/lib/
+COPY --from=DEV /dldt-2018_R5/inference-engine/ie_bridges/python/bin/intel64/Release/python_api/python3.6/openvino/ /usr/local/lib/openvino/
+COPY --from=DEV /dldt-2018_R5/mklml_lnx_2019.0.1.20180928/lib/lib*.so /usr/local/lib/
+ENV LD_LIBRARY_PATH=/usr/local/lib
+ENV PYTHONPATH=/usr/local/lib
+
+RUN conda create --name myenv -y
+ENV PATH /opt/conda/envs/myenv/bin:$PATH
+RUN conda install -y tensorflow opencv && conda clean -a -y
+WORKDIR /microservice
+
+RUN pip install wheel
+RUN pip install seldon-core
+RUN pip install --upgrade setuptools
+
+COPY ./s2i/bin/ /s2i/bin
+
+EXPOSE 5000
diff --git a/wrappers/s2i/python_openvino/README.md b/wrappers/s2i/python_openvino/README.md
new file mode 100644
index 0000000000..d55bf01506
--- /dev/null
+++ b/wrappers/s2i/python_openvino/README.md
@@ -0,0 +1,44 @@
+# Seldon base image with python and OpenVINO inference engine
+
+## Building
+```bash
+
+cp ../python/s2i .
+docker build -f Dockerfile_openvino_base --build-arg http_proxy=$http_proxy --build-arg https_proxy=$https_proxy \
+-t seldon_openvino_base:latest .
+```
+## Usage
+
+This base image can be used to build Seldon components in exactly the same way as with the standard Seldon base images.
+Use the s2i tool as documented [here](https://github.com/SeldonIO/seldon-core/blob/master/docs/wrappers/python.md).
+An example is presented below:
+
+```bash
+s2i build . seldon_openvino_base:latest {component_image_name}
+```
+
+## References
+
+[OpenVINO toolkit](https://software.intel.com/en-us/openvino-toolkit)
+
+[OpenVINO API docs](https://software.intel.com/en-us/articles/OpenVINO-InferEngine#inpage-nav-9)
+
+[Seldon pipeline example](../../../examples/models/openvino_imagenet_ensemble)
+
+
+## Notes
+
+Besides the OpenVINO inference engine Python API, this Seldon base image also contains several other useful components:
+- Intel-optimized Python distribution
+- Intel-optimized OpenCV package
+- Intel-optimized TensorFlow with the MKL engine
+- Configured conda package manager
+
+If you use this component to run inference operations using TensorFlow with MKL, make sure you also
+configure the following environment variables:
+
+`KMP_AFFINITY`=granularity=fine,verbose,compact,1,0
+
+`KMP_BLOCKTIME`=1
+
+`OMP_NUM_THREADS`={number of CPU cores}
\ No newline at end of file
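For example, the variables could be exported as follows before starting the component (the `OMP_NUM_THREADS` value depends on your host; 16 here is only an illustration):

```shell
export KMP_AFFINITY="granularity=fine,verbose,compact,1,0"
export KMP_BLOCKTIME=1
export OMP_NUM_THREADS=16   # set to the number of physical CPU cores
```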