
Commit f0a94d4

Eric Meadows committed: Remove autoformatting by using nano
Parent: fea32a5

File tree: 2 files changed (+54, -74 lines)

doc/source/python/python_wrapping_docker.md

Lines changed: 46 additions & 50 deletions
````diff
@@ -1,19 +1,19 @@
 # Packaging a Python model for Seldon Core using Docker
 
+
 In this guide, we illustrate the steps needed to wrap your own python model in a docker image ready for deployment with Seldon Core using Docker.
 
 ## Step 1 - Create your source code
 
 You will need:
 
-- A python file with a class that runs your model
-- A requirements.txt with a seldon-core entry
+* A python file with a class that runs your model
+* A requirements.txt with a seldon-core entry
 
 We will go into detail for each of these steps:
 
 ### Python file
-
-Your source code should contain a python file which defines a class of the same name as the file. For example, looking at our skeleton python model file at `wrappers/s2i/python/test/model-template-app/MyModel.py`:
+Your source code should contain a python file which defines a class of the same name as the file. For example, looking at our skeleton python model file at ```wrappers/s2i/python/test/model-template-app/MyModel.py```:
 
 ```python
 class MyModel(object):
````
````diff
@@ -40,13 +40,12 @@ class MyModel(object):
         return X
 ```
 
-- The file is called MyModel.py and it defines a class MyModel
-- The class contains a predict method that takes an array (numpy) X and feature_names and returns an array of predictions.
-- You can add any required initialization inside the class init method.
-- Your return array should be at least 2-dimensional.
+* The file is called MyModel.py and it defines a class MyModel
+* The class contains a predict method that takes an array (numpy) X and feature_names and returns an array of predictions.
+* You can add any required initialization inside the class init method.
+* Your return array should be at least 2-dimensional.
 
 ### requirements.txt
-
 Populate a requirements.txt with any software dependencies your code requires. At a minimum the file should contain:
 
 ```
````
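Read as a whole, the model file this hunk describes can be sketched as below. This is a minimal illustration of the documented contract (class name matches the file name, `predict` takes `X` and `feature_names`, returns an at-least-2-D array), not the repo's actual template; the `bias` attribute is an invented stand-in for real initialization.

```python
import numpy as np


class MyModel(object):
    """Skeleton model; the file must be named MyModel.py to match the class."""

    def __init__(self):
        # Any required initialization (e.g. loading weights) goes here.
        self.bias = 1.0

    def predict(self, X, feature_names=None):
        """Return predictions for X; the result should be at least 2-D."""
        X = np.atleast_2d(X)  # guarantee a 2-dimensional return array
        return X + self.bias
```

Note the `np.atleast_2d` call, which keeps the return value 2-dimensional even when a 1-D input arrives, as the bullet list above requires.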
````diff
@@ -73,20 +72,19 @@ ENV PERSISTENCE 0
 CMD exec seldon-core-microservice $MODEL_NAME $API_TYPE --service-type $SERVICE_TYPE --persistence $PERSISTENCE
 ```
 
-## Step 3 - Build your image
 
-Use `docker build . -t $ORG/$MODEL_NAME:$TAG` to create your Docker image from source code. A simple name can be used but convention is to use the ORG/IMAGE:TAG format.
+## Step 3 - Build your image
+Use ```docker build . -t $ORG/$MODEL_NAME:$TAG``` to create your Docker image from source code. A simple name can be used but convention is to use the ORG/IMAGE:TAG format.
 
 ## Using with Keras/Tensorflow Models
 
 To ensure Keras models with the Tensorflow backend work correctly you may need to call `_make_predict_function()` on your model after it is loaded. This is because Flask may call the prediction request in a separate thread from the one that initialised your model. See the [keras issue](https://github.com/keras-team/keras/issues/6462) for further discussion.
 
 ## Environment Variables
-
 The required environment variables understood by the builder image are explained below. You can provide them in the Dockerfile or as `-e` parameters to `docker run`.
 
-### MODEL_NAME
 
+### MODEL_NAME
 The name of the class containing the model. Also the name of the python file which will be imported.
 
 ### API_TYPE
````
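As a quick illustration of how these variables reach the microservice, the wrapper process reads them from the environment at startup, roughly as in the sketch below. The variable names come from the doc being diffed; the defaults shown here are illustrative assumptions, not the wrapper's documented defaults.

```python
import os

# Names from the Environment Variables section; defaults are assumptions.
MODEL_NAME = os.environ.get("MODEL_NAME", "MyModel")
API_TYPE = os.environ.get("API_TYPE", "REST")
SERVICE_TYPE = os.environ.get("SERVICE_TYPE", "MODEL")
PERSISTENCE = int(os.environ.get("PERSISTENCE", "0"))  # doc says 0 or 1
```

Passing `-e MODEL_NAME=MyModel` to `docker run` (or an `ENV` line in the Dockerfile, as in Step 2 above) overrides each value.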
````diff
@@ -97,11 +95,11 @@ API type to create. Can be REST or GRPC
 
 The service type being created. Available options are:
 
-- MODEL
-- ROUTER
-- TRANSFORMER
-- COMBINER
-- OUTLIER_DETECTOR
+* MODEL
+* ROUTER
+* TRANSFORMER
+* COMBINER
+* OUTLIER_DETECTOR
 
 ### PERSISTENCE
 
````
````diff
@@ -111,51 +109,49 @@ Set either to 0 or 1. Default is 0. If set to 1 then your model will be saved pe
 
 See [Flask - Builtin Configuration Values](https://flask.palletsprojects.com/config/#builtin-configuration-values) for possible configurations; the following are configurable when prefixed with the `FLASK_` string (e.g. `FLASK_JSON_SORT_KEYS` translates to `JSON_SORT_KEYS` in Flask):
 
-- DEBUG
-- EXPLAIN_TEMPLATE_LOADING
-- JSONIFY_PRETTYPRINT_REGULAR
-- JSON_SORT_KEYS
-- PROPAGATE_EXCEPTIONS
-- PRESERVE_CONTEXT_ON_EXCEPTION
-- SESSION_COOKIE_HTTPONLY
-- SESSION_COOKIE_SECURE
-- SESSION_REFRESH_EACH_REQUEST
-- TEMPLATES_AUTO_RELOAD
-- TESTING
-- TRAP_HTTP_EXCEPTIONS
-- TRAP_BAD_REQUEST_ERRORS
-- USE_X_SENDFILE
+* DEBUG
+* EXPLAIN_TEMPLATE_LOADING
+* JSONIFY_PRETTYPRINT_REGULAR
+* JSON_SORT_KEYS
+* PROPAGATE_EXCEPTIONS
+* PRESERVE_CONTEXT_ON_EXCEPTION
+* SESSION_COOKIE_HTTPONLY
+* SESSION_COOKIE_SECURE
+* SESSION_REFRESH_EACH_REQUEST
+* TEMPLATES_AUTO_RELOAD
+* TESTING
+* TRAP_HTTP_EXCEPTIONS
+* TRAP_BAD_REQUEST_ERRORS
 
 ## Creating different service types
 
 ### MODEL
 
-- [A minimal skeleton for model source code](https://github.com/SeldonIO/seldon-core/tree/master/wrappers/s2i/python/test/model-template-app)
-- [Example model notebooks](../examples/notebooks.html)
+* [A minimal skeleton for model source code](https://github.com/SeldonIO/seldon-core/tree/master/wrappers/s2i/python/test/model-template-app)
+* [Example model notebooks](../examples/notebooks.html)
 
 ### ROUTER
-
-- [Description of routers in Seldon Core](../analytics/routers.html)
-- [A minimal skeleton for router source code](https://github.com/SeldonIO/seldon-core/tree/master/wrappers/s2i/python/test/router-template-app)
+* [Description of routers in Seldon Core](../analytics/routers.html)
+* [A minimal skeleton for router source code](https://github.com/SeldonIO/seldon-core/tree/master/wrappers/s2i/python/test/router-template-app)
 
 ### TRANSFORMER
 
-- [A minimal skeleton for transformer source code](https://github.com/SeldonIO/seldon-core/tree/master/wrappers/s2i/python/test/transformer-template-app)
-- [Example transformers](https://github.com/SeldonIO/seldon-core/tree/master/examples/transformers)
+* [A minimal skeleton for transformer source code](https://github.com/SeldonIO/seldon-core/tree/master/wrappers/s2i/python/test/transformer-template-app)
+* [Example transformers](https://github.com/SeldonIO/seldon-core/tree/master/examples/transformers)
+
 
 ## Advanced Usage
 
 ### Model Class Arguments
-
-You can add arguments to your component which will be populated from the `parameters` defined in the SeldonDeloyment when you deploy your image on Kubernetes. For example, our [Python TFServing proxy](https://github.com/SeldonIO/seldon-core/tree/master/integrations/tfserving) has the class init method signature defined as below:
+You can add arguments to your component which will be populated from the ```parameters``` defined in the SeldonDeloyment when you deploy your image on Kubernetes. For example, our [Python TFServing proxy](https://github.com/SeldonIO/seldon-core/tree/master/integrations/tfserving) has the class init method signature defined as below:
 
 ```python
 class TfServingProxy(object):
 
     def __init__(self,rest_endpoint=None,grpc_endpoint=None,model_name=None,signature_name=None,model_input=None,model_output=None):
 ```
 
-These arguments can be set when deploying in a Seldon Deployment. An example can be found in the [MNIST TFServing example](https://github.com/SeldonIO/seldon-core/blob/master/examples/models/tfserving-mnist/tfserving-mnist.ipynb) where the arguments are defined in the [SeldonDeployment](https://github.com/SeldonIO/seldon-core/blob/master/examples/models/tfserving-mnist/mnist_tfserving_deployment.json.template) which is partly show below:
+These arguments can be set when deploying in a Seldon Deployment. An example can be found in the [MNIST TFServing example](https://github.com/SeldonIO/seldon-core/blob/master/examples/models/tfserving-mnist/tfserving-mnist.ipynb) where the arguments are defined in the [SeldonDeployment](https://github.com/SeldonIO/seldon-core/blob/master/examples/models/tfserving-mnist/mnist_tfserving_deployment.json.template) which is partly show below:
 
 ```
 "graph": {
````
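For illustration, each `parameters` entry in the SeldonDeployment carries a name, a value, and a type, and ends up as a keyword argument to the class `__init__`. The sketch below shows that mapping with a hypothetical `params_to_kwargs` helper; the cast table and helper name are assumptions for illustration, not Seldon Core's actual code.

```python
def params_to_kwargs(parameters):
    """Hypothetical helper: turn a SeldonDeployment-style parameters list
    into __init__ keyword arguments, casting string values by declared type."""
    casts = {
        "STRING": str,
        "INT": int,
        "FLOAT": float,
        "BOOL": lambda v: str(v).lower() == "true",
    }
    return {p["name"]: casts[p["type"]](p["value"]) for p in parameters}


class TfServingProxy(object):
    # Signature from the doc above; body here is a stub for illustration.
    def __init__(self, rest_endpoint=None, grpc_endpoint=None, model_name=None,
                 signature_name=None, model_input=None, model_output=None):
        self.rest_endpoint = rest_endpoint
        self.model_name = model_name
```

Usage sketch: `TfServingProxy(**params_to_kwargs([{"name": "model_name", "value": "mnist-model", "type": "STRING"}]))`.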
````diff
@@ -193,13 +189,14 @@ These arguments can be set when deploying in a Seldon Deployment. An example can
 },
 ```
 
-The allowable `type` values for the parameters are defined in the [proto buffer definition](https://github.com/SeldonIO/seldon-core/blob/44f7048efd0f6be80a857875058d23efc4221205/proto/seldon_deployment.proto#L117-L131).
 
-### Custom Metrics
+The allowable ```type``` values for the parameters are defined in the [proto buffer definition](https://github.com/SeldonIO/seldon-core/blob/44f7048efd0f6be80a857875058d23efc4221205/proto/seldon_deployment.proto#L117-L131).
 
-`from version 0.3`
 
-To add custom metrics to your response you can define an optional method `metrics` in your class that returns a list of metric dicts. An example is shown below:
+### Custom Metrics
+```from version 0.3```
+
+To add custom metrics to your response you can define an optional method ```metrics``` in your class that returns a list of metric dicts. An example is shown below:
 
 ```python
 class MyModel(object):
````
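As a sketch of the `metrics` contract this hunk documents, a class returning a list of metric dicts might look like the following. The `COUNTER`/`GAUGE`/`TIMER` types and the `type`/`key`/`value` keys follow Seldon Core's custom-metrics documentation; the metric names themselves are invented examples.

```python
class MyModel(object):
    def predict(self, X, feature_names=None):
        return X

    def metrics(self):
        # Each dict carries a metric type, a key, and a numeric value;
        # the wrapper collects these after every prediction request.
        return [
            {"type": "COUNTER", "key": "mycounter", "value": 1},
            {"type": "GAUGE", "key": "mygauge", "value": 100},
            {"type": "TIMER", "key": "mytimer", "value": 20.2},
        ]
```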
````diff
@@ -215,11 +212,10 @@ For more details on custom metrics and the format of the metric dict see [here](
 
 There is an [example notebook illustrating a model with custom metrics in python](../examples/custom_metrics.html).
 
-### Custom Request Tags
-
-`from version 0.3`
+### Custom Meta Data
+```from version 0.3```
 
-To add custom request tags data you can add an optional method `tags` which can return a dict of custom meta tags as shown in the example below:
+To add custom meta data you can add an optional method ```tags``` which can return a dict of custom meta tags as shown in the example below:
 
 ```python
 class MyModel(object):
````
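A sketch of the optional `tags` method this hunk renames to "Custom Meta Data": the method returns a plain dict of custom meta tags. The tag names below are invented examples.

```python
class MyModel(object):
    def predict(self, X, feature_names=None):
        return X

    def tags(self):
        # The returned dict is attached to the response as custom meta tags.
        return {"model_version": "v1", "experiment": "baseline"}
```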

python/seldon_core/wrapper.py

Lines changed: 8 additions & 24 deletions
````diff
@@ -21,9 +21,7 @@
 logger = logging.getLogger(__name__)
 
 PRED_UNIT_ID = os.environ.get("PREDICTIVE_UNIT_ID", "0")
-METRICS_ENDPOINT = os.environ.get(
-    "PREDICTIVE_UNIT_METRICS_ENDPOINT", "/metrics"
-)
+METRICS_ENDPOINT = os.environ.get("PREDICTIVE_UNIT_METRICS_ENDPOINT", "/metrics")
 
 
 def get_rest_microservice(user_model, seldon_metrics):
````
````diff
@@ -131,9 +129,7 @@ def HealthPing():
     @app.route("/health/status", methods=["GET"])
     def HealthStatus():
         logger.debug("REST Health Status Request")
-        response = seldon_core.seldon_methods.health_status(
-            user_model, seldon_metrics
-        )
+        response = seldon_core.seldon_methods.health_status(user_model, seldon_metrics)
         logger.debug("REST Health Status Response: %s", response)
         return jsonify(response)
 
````
````diff
@@ -216,9 +212,7 @@ def __init__(self, user_model, seldon_metrics):
         self.user_model = user_model
         self.seldon_metrics = seldon_metrics
 
-        self.metadata_data = seldon_core.seldon_methods.init_metadata(
-            user_model
-        )
+        self.metadata_data = seldon_core.seldon_methods.init_metadata(user_model)
 
     def Predict(self, request_grpc, context):
         return seldon_core.seldon_methods.predict(
````
````diff
@@ -260,28 +254,20 @@ def ModelMetadata(self, request_grpc, context):
 
     def GraphMetadata(self, request_grpc, context):
         """GraphMetadata method of rpc Seldon service"""
-        raise NotImplementedError(
-            "GraphMetadata not available on the Model level."
-        )
+        raise NotImplementedError("GraphMetadata not available on the Model level.")
 
 
-def get_grpc_server(
-    user_model, seldon_metrics, annotations={}, trace_interceptor=None
-):
+def get_grpc_server(user_model, seldon_metrics, annotations={}, trace_interceptor=None):
     seldon_model = SeldonModelGRPC(user_model, seldon_metrics)
     options = []
     if ANNOTATION_GRPC_MAX_MSG_SIZE in annotations:
         max_msg = int(annotations[ANNOTATION_GRPC_MAX_MSG_SIZE])
-        logger.info(
-            "Setting grpc max message and receive length to %d", max_msg
-        )
+        logger.info("Setting grpc max message and receive length to %d", max_msg)
         options.append(("grpc.max_message_length", max_msg))
         options.append(("grpc.max_send_message_length", max_msg))
         options.append(("grpc.max_receive_message_length", max_msg))
 
-    server = grpc.server(
-        futures.ThreadPoolExecutor(max_workers=10), options=options
-    )
+    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10), options=options)
 
     if trace_interceptor:
         from grpc_opentracing.grpcext import intercept_server
````
````diff
@@ -291,9 +277,7 @@ def get_grpc_server(
     prediction_pb2_grpc.add_GenericServicer_to_server(seldon_model, server)
     prediction_pb2_grpc.add_ModelServicer_to_server(seldon_model, server)
     prediction_pb2_grpc.add_TransformerServicer_to_server(seldon_model, server)
-    prediction_pb2_grpc.add_OutputTransformerServicer_to_server(
-        seldon_model, server
-    )
+    prediction_pb2_grpc.add_OutputTransformerServicer_to_server(seldon_model, server)
     prediction_pb2_grpc.add_CombinerServicer_to_server(seldon_model, server)
     prediction_pb2_grpc.add_RouterServicer_to_server(seldon_model, server)
     prediction_pb2_grpc.add_SeldonServicer_to_server(seldon_model, server)
````
