
Commit

Remove autoformatting by using nano
Eric Meadows committed Aug 14, 2020
1 parent fea32a5 commit f0a94d4
Showing 2 changed files with 54 additions and 74 deletions.
96 changes: 46 additions & 50 deletions doc/source/python/python_wrapping_docker.md
# Packaging a Python model for Seldon Core using Docker


In this guide, we illustrate the steps needed to wrap your own Python model in a Docker image ready for deployment with Seldon Core.

## Step 1 - Create your source code

You will need:

- A python file with a class that runs your model
- A requirements.txt with a seldon-core entry

We will go into detail for each of these steps:

### Python file

Your source code should contain a python file which defines a class of the same name as the file. For example, looking at our skeleton python model file at `wrappers/s2i/python/test/model-template-app/MyModel.py`:

```python
class MyModel(object):
    def __init__(self):
        # Add any required initialization for your model here
        pass

    def predict(self, X, feature_names):
        # X: numpy array of features; feature_names: list of column names
        return X
```

- The file is called MyModel.py and it defines a class MyModel
- The class contains a predict method that takes an array (numpy) X and feature_names and returns an array of predictions.
- You can add any required initialization inside the class init method.
- Your return array should be at least 2-dimensional.
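The contract described in the bullets above can be exercised locally as a quick sanity check. This is an illustrative snippet, not part of Seldon Core; it assumes numpy is installed:

```python
import numpy as np

class MyModel(object):
    def __init__(self):
        # Any one-off initialization (e.g. loading model weights) goes here
        pass

    def predict(self, X, feature_names):
        # Identity model: echo the input back unchanged
        return X

model = MyModel()
X = np.array([[1.0, 2.0]])
result = model.predict(X, ["f1", "f2"])
print(result.ndim)  # 2 -- the returned array should be at least 2-dimensional
```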

### requirements.txt

Populate a requirements.txt with any software dependencies your code requires. At a minimum the file should contain:

```
seldon-core
```

## Step 2 - Define the Dockerfile

Define a Dockerfile in the same directory as your source code. The exact contents will depend on your model; a typical Dockerfile installs your requirements, sets the environment variables described below, and ends with the command that starts the microservice, for example:

```
FROM python:3.7-slim
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000

# Define environment variables
ENV MODEL_NAME MyModel
ENV API_TYPE REST
ENV SERVICE_TYPE MODEL
ENV PERSISTENCE 0

CMD exec seldon-core-microservice $MODEL_NAME $API_TYPE --service-type $SERVICE_TYPE --persistence $PERSISTENCE
```

## Step 3 - Build your image

Use `docker build . -t $ORG/$MODEL_NAME:$TAG` to create your Docker image from source code. A simple name can be used but convention is to use the ORG/IMAGE:TAG format.
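A minimal sketch of the build command, using hypothetical values for the ORG/IMAGE:TAG convention (the `echo` is only so the sketch runs without Docker; in practice you would execute the `docker build` command directly):

```shell
# Hypothetical example values -- substitute your own org, image name and tag
ORG=seldonio
MODEL_NAME=mymodel
TAG=0.1
IMAGE="$ORG/$MODEL_NAME:$TAG"
echo "docker build . -t $IMAGE"
```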

## Using with Keras/Tensorflow Models

To ensure Keras models with the Tensorflow backend work correctly you may need to call `_make_predict_function()` on your model after it is loaded. This is because Flask may call the prediction request in a separate thread from the one that initialised your model. See the [keras issue](https://github.com/keras-team/keras/issues/6462) for further discussion.

## Environment Variables

The required environment variables understood by the builder image are explained below. You can provide them in the Dockerfile or as `-e` parameters to `docker run`.

### MODEL_NAME

The name of the class containing the model. Also the name of the python file which will be imported.
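The naming convention can be illustrated with a small sketch (a hypothetical loader, not Seldon's actual code): the value of MODEL_NAME is used both as the module to import and as the class to instantiate.

```python
import importlib
import os
import sys
import tempfile

def load_model(model_name, search_path):
    # Hypothetical loader: import <model_name>.py and instantiate class <model_name>
    sys.path.insert(0, search_path)
    module = importlib.import_module(model_name)
    return getattr(module, model_name)()

# Create a throwaway MyModel.py to demonstrate the naming convention
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "MyModel.py"), "w") as f:
    f.write(
        "class MyModel(object):\n"
        "    def predict(self, X, feature_names):\n"
        "        return X\n"
    )

model = load_model("MyModel", tmp)
print(type(model).__name__)  # MyModel
```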

### API_TYPE
API type to create. Can be REST or GRPC.

The service type being created. Available options are:

- MODEL
- ROUTER
- TRANSFORMER
- COMBINER
- OUTLIER_DETECTOR

### PERSISTENCE

Set either to 0 or 1. Default is 0. If set to 1 then your model will be saved periodically to Redis and can be restored if your app restarts.

See [Flask - Builtin Configuration Values](https://flask.palletsprojects.com/config/#builtin-configuration-values) for possible configurations; the following are configurable when prefixed with the `FLASK_` string (e.g. `FLASK_JSON_SORT_KEYS` translates to `JSON_SORT_KEYS` in Flask):

- DEBUG
- EXPLAIN_TEMPLATE_LOADING
- JSONIFY_PRETTYPRINT_REGULAR
- JSON_SORT_KEYS
- PROPAGATE_EXCEPTIONS
- PRESERVE_CONTEXT_ON_EXCEPTION
- SESSION_COOKIE_HTTPONLY
- SESSION_COOKIE_SECURE
- SESSION_REFRESH_EACH_REQUEST
- TEMPLATES_AUTO_RELOAD
- TESTING
- TRAP_HTTP_EXCEPTIONS
- TRAP_BAD_REQUEST_ERRORS
- USE_X_SENDFILE
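The prefix-stripping convention described above can be sketched as follows (illustrative only; Seldon's actual implementation may differ):

```python
def flask_config_from_env(environ):
    # Keep only FLASK_-prefixed variables and strip the prefix,
    # e.g. FLASK_JSON_SORT_KEYS -> JSON_SORT_KEYS
    prefix = "FLASK_"
    return {
        key[len(prefix):]: value
        for key, value in environ.items()
        if key.startswith(prefix)
    }

cfg = flask_config_from_env({"FLASK_JSON_SORT_KEYS": "False", "SERVICE_TYPE": "MODEL"})
print(cfg)  # {'JSON_SORT_KEYS': 'False'}
```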

## Creating different service types

### MODEL

- [A minimal skeleton for model source code](https://github.com/SeldonIO/seldon-core/tree/master/wrappers/s2i/python/test/model-template-app)
- [Example model notebooks](../examples/notebooks.html)

### ROUTER

- [Description of routers in Seldon Core](../analytics/routers.html)
- [A minimal skeleton for router source code](https://github.com/SeldonIO/seldon-core/tree/master/wrappers/s2i/python/test/router-template-app)

### TRANSFORMER

- [A minimal skeleton for transformer source code](https://github.com/SeldonIO/seldon-core/tree/master/wrappers/s2i/python/test/transformer-template-app)
- [Example transformers](https://github.com/SeldonIO/seldon-core/tree/master/examples/transformers)


## Advanced Usage

### Model Class Arguments

You can add arguments to your component which will be populated from the `parameters` defined in the SeldonDeployment when you deploy your image on Kubernetes. For example, our [Python TFServing proxy](https://github.com/SeldonIO/seldon-core/tree/master/integrations/tfserving) has the class init method signature defined as below:

```python
class TfServingProxy(object):

    def __init__(self, rest_endpoint=None, grpc_endpoint=None, model_name=None, signature_name=None, model_input=None, model_output=None):
```

These arguments can be set when deploying in a Seldon Deployment. An example can be found in the [MNIST TFServing example](https://github.com/SeldonIO/seldon-core/blob/master/examples/models/tfserving-mnist/tfserving-mnist.ipynb) where the arguments are defined in the [SeldonDeployment](https://github.com/SeldonIO/seldon-core/blob/master/examples/models/tfserving-mnist/mnist_tfserving_deployment.json.template), which is partly shown below:

```
"graph": {
    ...
},
```
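The mapping from `parameters` entries to init keyword arguments can be sketched like this (a hypothetical helper, not Seldon's actual code; the real set of allowable `type` values is given by the proto definition linked in this section):

```python
# Hypothetical helper showing how a SeldonDeployment "parameters" list
# could be turned into keyword arguments for the model's __init__.
CASTS = {"STRING": str, "INT": int, "FLOAT": float}

def kwargs_from_parameters(parameters):
    # Each entry carries a name, a declared type and a string value
    return {p["name"]: CASTS[p["type"]](p["value"]) for p in parameters}

params = [
    {"name": "grpc_endpoint", "type": "STRING", "value": "localhost:8080"},
    {"name": "model_input", "type": "STRING", "value": "images"},
]
print(kwargs_from_parameters(params))
```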

The allowable `type` values for the parameters are defined in the [proto buffer definition](https://github.com/SeldonIO/seldon-core/blob/44f7048efd0f6be80a857875058d23efc4221205/proto/seldon_deployment.proto#L117-L131).

### Custom Metrics

`from version 0.3`

To add custom metrics to your response you can define an optional method `metrics` in your class that returns a list of metric dicts. An example is shown below:

```python
class MyModel(object):

    def metrics(self):
        # Example metric dict; a COUNTER metric is incremented by `value`
        return [{"type": "COUNTER", "key": "mycounter", "value": 1}]
```

For more details on custom metrics and the format of the metric dict, see the custom metrics documentation.

There is an [example notebook illustrating a model with custom metrics in python](../examples/custom_metrics.html).
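As an illustration of the metric-dict shape, a simple validity check might look like this (the field names and the COUNTER/GAUGE/TIMER types are taken from the Seldon custom-metrics docs; the helper itself is hypothetical):

```python
def is_valid_metric(metric):
    # A metric dict needs a known type, a string key and a numeric value
    return (
        metric.get("type") in {"COUNTER", "GAUGE", "TIMER"}
        and isinstance(metric.get("key"), str)
        and isinstance(metric.get("value"), (int, float))
    )

print(is_valid_metric({"type": "COUNTER", "key": "mycounter", "value": 1}))  # True
print(is_valid_metric({"type": "HISTOGRAM", "key": "h", "value": 1}))        # False
```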

### Custom Request Tags

`from version 0.3`

To add custom request tags you can add an optional method `tags` which can return a dict of custom meta tags, as shown in the example below:

```python
class MyModel(object):

    def tags(self):
        # Example: attach a custom tag to the response meta data
        return {"mytag": "mytagvalue"}
```
32 changes: 8 additions & 24 deletions python/seldon_core/wrapper.py
@@ -21,9 +21,7 @@
logger = logging.getLogger(__name__)

PRED_UNIT_ID = os.environ.get("PREDICTIVE_UNIT_ID", "0")
-METRICS_ENDPOINT = os.environ.get(
-    "PREDICTIVE_UNIT_METRICS_ENDPOINT", "/metrics"
-)
+METRICS_ENDPOINT = os.environ.get("PREDICTIVE_UNIT_METRICS_ENDPOINT", "/metrics")


def get_rest_microservice(user_model, seldon_metrics):
@@ -131,9 +129,7 @@ def HealthPing():
@app.route("/health/status", methods=["GET"])
def HealthStatus():
logger.debug("REST Health Status Request")
-        response = seldon_core.seldon_methods.health_status(
-            user_model, seldon_metrics
-        )
+        response = seldon_core.seldon_methods.health_status(user_model, seldon_metrics)
logger.debug("REST Health Status Response: %s", response)
return jsonify(response)

@@ -216,9 +212,7 @@ def __init__(self, user_model, seldon_metrics):
self.user_model = user_model
self.seldon_metrics = seldon_metrics

-        self.metadata_data = seldon_core.seldon_methods.init_metadata(
-            user_model
-        )
+        self.metadata_data = seldon_core.seldon_methods.init_metadata(user_model)

def Predict(self, request_grpc, context):
return seldon_core.seldon_methods.predict(
@@ -260,28 +254,20 @@

def GraphMetadata(self, request_grpc, context):
"""GraphMetadata method of rpc Seldon service"""
-        raise NotImplementedError(
-            "GraphMetadata not available on the Model level."
-        )
+        raise NotImplementedError("GraphMetadata not available on the Model level.")


-def get_grpc_server(
-    user_model, seldon_metrics, annotations={}, trace_interceptor=None
-):
+def get_grpc_server(user_model, seldon_metrics, annotations={}, trace_interceptor=None):
seldon_model = SeldonModelGRPC(user_model, seldon_metrics)
options = []
if ANNOTATION_GRPC_MAX_MSG_SIZE in annotations:
max_msg = int(annotations[ANNOTATION_GRPC_MAX_MSG_SIZE])
-        logger.info(
-            "Setting grpc max message and receive length to %d", max_msg
-        )
+        logger.info("Setting grpc max message and receive length to %d", max_msg)
options.append(("grpc.max_message_length", max_msg))
options.append(("grpc.max_send_message_length", max_msg))
options.append(("grpc.max_receive_message_length", max_msg))

-    server = grpc.server(
-        futures.ThreadPoolExecutor(max_workers=10), options=options
-    )
+    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10), options=options)

if trace_interceptor:
from grpc_opentracing.grpcext import intercept_server
@@ -291,9 +277,7 @@ def get_grpc_server(
prediction_pb2_grpc.add_GenericServicer_to_server(seldon_model, server)
prediction_pb2_grpc.add_ModelServicer_to_server(seldon_model, server)
prediction_pb2_grpc.add_TransformerServicer_to_server(seldon_model, server)
-    prediction_pb2_grpc.add_OutputTransformerServicer_to_server(
-        seldon_model, server
-    )
+    prediction_pb2_grpc.add_OutputTransformerServicer_to_server(seldon_model, server)
prediction_pb2_grpc.add_CombinerServicer_to_server(seldon_model, server)
prediction_pb2_grpc.add_RouterServicer_to_server(seldon_model, server)
prediction_pb2_grpc.add_SeldonServicer_to_server(seldon_model, server)
