Commit c6c6aea

Update version to 0.17.0
1 parent 03144e6 commit c6c6aea

25 files changed: +68 -76 lines changed

README.md (+4 -11)

````diff
@@ -1,14 +1,7 @@
-<!-- Delete on release branches -->
-<img src='https://s3-us-west-2.amazonaws.com/cortex-public/logo.png' height='88'>
-
 # Machine learning model serving infrastructure
 
 <br>
 
-<!-- Delete on release branches -->
-<!-- CORTEX_VERSION_README_MINOR -->
-[install](https://cortex.dev/install)[docs](https://cortex.dev)[examples](https://github.com/cortexlabs/cortex/tree/0.16/examples)[we're hiring](https://angel.co/cortex-labs-inc/jobs)[chat with us](https://gitter.im/cortexlabs/cortex)<br><br>
-
 <!-- Set header Cache-Control=no-cache on the S3 object metadata (see https://help.github.com/en/articles/about-anonymized-image-urls) -->
 ![Demo](https://d1zqebknpdh033.cloudfront.net/demo/gif/v0.13_2.gif)
 
@@ -32,7 +25,7 @@
 
 <!-- CORTEX_VERSION_README_MINOR -->
 ```bash
-$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.16/get-cli.sh)"
+$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.17/get-cli.sh)"
 ```
 
 ### Implement your predictor
@@ -152,6 +145,6 @@ Cortex is an open source alternative to serving models with SageMaker or buildin
 ## Examples
 
 <!-- CORTEX_VERSION_README_MINOR x3 -->
-* [Image classification](https://github.com/cortexlabs/cortex/tree/0.16/examples/tensorflow/image-classifier): deploy an Inception model to classify images.
-* [Search completion](https://github.com/cortexlabs/cortex/tree/0.16/examples/pytorch/search-completer): deploy Facebook's RoBERTa model to complete search terms.
-* [Text generation](https://github.com/cortexlabs/cortex/tree/0.16/examples/pytorch/text-generator): deploy Hugging Face's DistilGPT2 model to generate text.
+* [Image classification](https://github.com/cortexlabs/cortex/tree/0.17/examples/tensorflow/image-classifier): deploy an Inception model to classify images.
+* [Search completion](https://github.com/cortexlabs/cortex/tree/0.17/examples/pytorch/search-completer): deploy Facebook's RoBERTa model to complete search terms.
+* [Text generation](https://github.com/cortexlabs/cortex/tree/0.17/examples/pytorch/text-generator): deploy Hugging Face's DistilGPT2 model to generate text.
````

build/build-image.sh (+1 -1)

````diff
@@ -19,7 +19,7 @@ set -euo pipefail
 
 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.17.0
 
 slim="false"
 while [[ $# -gt 0 ]]; do
````

build/cli.sh (+1 -1)

````diff
@@ -19,7 +19,7 @@ set -euo pipefail
 
 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.17.0
 
 arg1=${1:-""}
 upload="false"
````

build/push-image.sh (+1 -1)

````diff
@@ -17,7 +17,7 @@
 
 set -euo pipefail
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.17.0
 
 slim="false"
 while [[ $# -gt 0 ]]; do
````
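The three build scripts above receive the identical one-line change: `CORTEX_VERSION=master` becomes `CORTEX_VERSION=0.17.0`. A release step could apply that substitution mechanically; here is a minimal sketch (the `pin_version` helper is hypothetical, not part of the Cortex repo):

```python
import re

def pin_version(script_text: str, version: str) -> str:
    """Replace the development placeholder with a concrete release version."""
    # Anchor on whole lines so unrelated occurrences of "master" stay untouched.
    return re.sub(r"^CORTEX_VERSION=master$",
                  f"CORTEX_VERSION={version}",
                  script_text,
                  flags=re.MULTILINE)

script = 'set -euo pipefail\n\nCORTEX_VERSION=master\n\nslim="false"\n'
print(pin_version(script, "0.17.0"))
```

Run against each of `build/build-image.sh`, `build/cli.sh`, and `build/push-image.sh`, this reproduces the diffs shown above.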

docs/cluster-management/config.md (+17 -17)

````diff
@@ -2,7 +2,7 @@
 
 The Cortex cluster may be configured by providing a configuration file to `cortex cluster up` or `cortex cluster configure` via the `--config` flag (e.g. `cortex cluster up --config cluster.yaml`). Below is the schema for the cluster configuration file, with default values shown (unless otherwise specified):
 
-<!-- CORTEX_VERSION_MINOR -->
+<!-- CORTEX_VERSION_MINOR x2 -->
 ```yaml
 # cluster.yaml
 
@@ -68,31 +68,31 @@ log_group: cortex
 tags: # <string>: <string> map of key/value pairs
 
 # whether to use spot instances in the cluster (default: false)
-# see https://cortex.dev/v/master/cluster-management/spot-instances for additional details on spot configuration
+# see https://cortex.dev/v/0.17/cluster-management/spot-instances for additional details on spot configuration
 spot: false
 
-# see https://cortex.dev/v/master/guides/subdomain-https-setup for instructions on how to set up HTTPS for APIs
+# see https://cortex.dev/v/0.17/guides/subdomain-https-setup for instructions on how to set up HTTPS for APIs
 ssl_certificate_arn: # if empty, APIs will still be accessible via HTTPS (in addition to HTTP), but will not use a trusted certificate
 ```
 
 The default docker images used for your Predictors are listed in the instructions for [system packages](../deployments/system-packages.md), and can be overridden in your [API configuration](../deployments/api-configuration.md).
 
-The docker images used by the Cortex cluster can also be overriden, although this is not common. They can be configured by adding any of these keys to your cluster configuration file (default values are shown):
+The docker images used by the Cortex cluster can also be overridden, although this is not common. They can be configured by adding any of these keys to your cluster configuration file (default values are shown):
 
 <!-- CORTEX_VERSION_BRANCH_STABLE -->
 ```yaml
 # docker image paths
-image_operator: cortexlabs/operator:master
-image_manager: cortexlabs/manager:master
-image_downloader: cortexlabs/downloader:master
-image_request_monitor: cortexlabs/request-monitor:master
-image_cluster_autoscaler: cortexlabs/cluster-autoscaler:master
-image_metrics_server: cortexlabs/metrics-server:master
-image_nvidia: cortexlabs/nvidia:master
-image_fluentd: cortexlabs/fluentd:master
-image_statsd: cortexlabs/statsd:master
-image_istio_proxy: cortexlabs/istio-proxy:master
-image_istio_pilot: cortexlabs/istio-pilot:master
-image_istio_citadel: cortexlabs/istio-citadel:master
-image_istio_galley: cortexlabs/istio-galley:master
+image_operator: cortexlabs/operator:0.17.0
+image_manager: cortexlabs/manager:0.17.0
+image_downloader: cortexlabs/downloader:0.17.0
+image_request_monitor: cortexlabs/request-monitor:0.17.0
+image_cluster_autoscaler: cortexlabs/cluster-autoscaler:0.17.0
+image_metrics_server: cortexlabs/metrics-server:0.17.0
+image_nvidia: cortexlabs/nvidia:0.17.0
+image_fluentd: cortexlabs/fluentd:0.17.0
+image_statsd: cortexlabs/statsd:0.17.0
+image_istio_proxy: cortexlabs/istio-proxy:0.17.0
+image_istio_pilot: cortexlabs/istio-pilot:0.17.0
+image_istio_citadel: cortexlabs/istio-citadel:0.17.0
+image_istio_galley: cortexlabs/istio-galley:0.17.0
 ```
````
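All thirteen `image_*` defaults above follow one pattern: a config key `image_<name>` (hyphens become underscores) maps to `cortexlabs/<name>:<version>`. A small sketch that regenerates the override map for a given release (the helper name is illustrative, not from the Cortex codebase):

```python
CLUSTER_IMAGES = [
    "operator", "manager", "downloader", "request-monitor",
    "cluster-autoscaler", "metrics-server", "nvidia", "fluentd",
    "statsd", "istio-proxy", "istio-pilot", "istio-citadel", "istio-galley",
]

def image_overrides(version: str) -> dict:
    # Key: "image_" prefix with hyphens turned into underscores.
    # Value: Docker Hub path with the version tag.
    return {f"image_{name.replace('-', '_')}": f"cortexlabs/{name}:{version}"
            for name in CLUSTER_IMAGES}

overrides = image_overrides("0.17.0")
print(overrides["image_request_monitor"])  # cortexlabs/request-monitor:0.17.0
```

This is exactly the transformation the commit applies by hand: every tag moves from `master` to `0.17.0`.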

docs/cluster-management/install.md (+3 -3)

````diff
@@ -8,7 +8,7 @@
 
 <!-- CORTEX_VERSION_MINOR -->
 ```bash
-$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/master/get-cli.sh)"
+$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.17/get-cli.sh)"
 ```
 
 Continue to [deploy an example](#deploy-an-example) below.
@@ -26,7 +26,7 @@ To use GPU nodes, you may need to subscribe to the [EKS-optimized AMI with GPU S
 <!-- CORTEX_VERSION_MINOR -->
 ```bash
 # install the CLI on your machine
-$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/master/get-cli.sh)"
+$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.17/get-cli.sh)"
 
 # provision infrastructure on AWS and spin up a cluster
 $ cortex cluster up
@@ -37,7 +37,7 @@ $ cortex cluster up
 <!-- CORTEX_VERSION_MINOR -->
 ```bash
 # clone the Cortex repository
-$ git clone -b master https://github.com/cortexlabs/cortex.git
+$ git clone -b 0.17 https://github.com/cortexlabs/cortex.git
 
 # navigate to the TensorFlow iris classification example
 $ cd cortex/examples/tensorflow/iris-classifier
````
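Note that the commit uses two version strings: documentation links, install URLs, and git branches use the minor version (`0.17`, the `CORTEX_VERSION_MINOR` placeholder), while Docker image tags use the full version (`0.17.0`, the `CORTEX_VERSION_BRANCH_STABLE` placeholder). A sketch of the derivation (the function name is hypothetical):

```python
def minor_version(version: str) -> str:
    """Drop the patch component: '0.17.0' -> '0.17'."""
    return ".".join(version.split(".")[:2])

version = "0.17.0"
# URLs like the get-cli.sh install command are pinned to the minor version.
cli_url = (f"https://raw.githubusercontent.com/cortexlabs/cortex/"
           f"{minor_version(version)}/get-cli.sh")
print(cli_url)
```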

docs/cluster-management/update.md (+1 -1)

````diff
@@ -22,7 +22,7 @@ cortex cluster configure
 cortex cluster down
 
 # update your CLI
-bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/master/get-cli.sh)"
+bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.17/get-cli.sh)"
 
 # confirm version
 cortex version
````

docs/deployments/deployment.md (+1 -1)

````diff
@@ -63,4 +63,4 @@ deleting my-api
 <!-- CORTEX_VERSION_MINOR -->
 * [Tutorial](../../examples/sklearn/iris-classifier/README.md) provides a step-by-step walkthough of deploying an iris classifier API
 * [CLI documentation](../miscellaneous/cli.md) lists all CLI commands
-* [Examples](https://github.com/cortexlabs/cortex/tree/master/examples) demonstrate how to deploy models from common ML libraries
+* [Examples](https://github.com/cortexlabs/cortex/tree/0.17/examples) demonstrate how to deploy models from common ML libraries
````

docs/deployments/exporting.md (+7 -7)

````diff
@@ -11,7 +11,7 @@ Here are examples for some common ML libraries:
 The recommended approach is export your PyTorch model with [torch.save()](https://pytorch.org/docs/stable/torch.html?highlight=save#torch.save). Here is PyTorch's documentation on [saving and loading models](https://pytorch.org/tutorials/beginner/saving_loading_models.html).
 
 <!-- CORTEX_VERSION_MINOR -->
-[examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/pytorch/iris-classifier) exports its trained model like this:
+[examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.17/examples/pytorch/iris-classifier) exports its trained model like this:
 
 ```python
 torch.save(model.state_dict(), "weights.pth")
@@ -22,7 +22,7 @@ torch.save(model.state_dict(), "weights.pth")
 It may also be possible to export your PyTorch model into the ONNX format using [torch.onnx.export()](https://pytorch.org/docs/stable/onnx.html#torch.onnx.export).
 
 <!-- CORTEX_VERSION_MINOR -->
-For example, if [examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/pytorch/iris-classifier) were to export the model to ONNX, it would look like this:
+For example, if [examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.17/examples/pytorch/iris-classifier) were to export the model to ONNX, it would look like this:
 
 ```python
 placeholder = torch.randn(1, 4)
@@ -50,7 +50,7 @@ A TensorFlow `SavedModel` directory should have this structure:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Most of the TensorFlow examples use this approach. Here is the relevant code from [examples/tensorflow/sentiment-analyzer](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/sentiment-analyzer):
+Most of the TensorFlow examples use this approach. Here is the relevant code from [examples/tensorflow/sentiment-analyzer](https://github.com/cortexlabs/cortex/blob/0.17/examples/tensorflow/sentiment-analyzer):
 
 ```python
 import tensorflow as tf
@@ -88,14 +88,14 @@ aws s3 cp bert.zip s3://my-bucket/bert.zip
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-[examples/tensorflow/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/iris-classifier) also use the `SavedModel` approach, and includes a Python notebook demonstrating how it was exported.
+[examples/tensorflow/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.17/examples/tensorflow/iris-classifier) also use the `SavedModel` approach, and includes a Python notebook demonstrating how it was exported.
 
 ### Other model formats
 
 There are other ways to export Keras or TensorFlow models, and as long as they can be loaded and used to make predictions in Python, they will be supported by Cortex.
 
 <!-- CORTEX_VERSION_MINOR -->
-For example, the `crnn` API in [examples/tensorflow/license-plate-reader](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/license-plate-reader) uses this approach.
+For example, the `crnn` API in [examples/tensorflow/license-plate-reader](https://github.com/cortexlabs/cortex/blob/0.17/examples/tensorflow/license-plate-reader) uses this approach.
 
 ## Scikit-learn
 
@@ -104,7 +104,7 @@ For example, the `crnn` API in [examples/tensorflow/license-plate-reader](https:
 Scikit-learn models are typically exported using `pickle`. Here is [Scikit-learn's documentation](https://scikit-learn.org/stable/modules/model_persistence.html).
 
 <!-- CORTEX_VERSION_MINOR -->
-[examples/sklearn/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/sklearn/iris-classifier) uses this approach. Here is the relevant code:
+[examples/sklearn/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.17/examples/sklearn/iris-classifier) uses this approach. Here is the relevant code:
 
 ```python
 pickle.dump(model, open("model.pkl", "wb"))
@@ -157,7 +157,7 @@ model.save_model("model.bin")
 It is also possible to export an XGBoost model to the ONNX format using [onnxmltools](https://github.com/onnx/onnxmltools).
 
 <!-- CORTEX_VERSION_MINOR -->
-[examples/xgboost/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/xgboost/iris-classifier) uses this approach. Here is the relevant code:
+[examples/xgboost/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.17/examples/xgboost/iris-classifier) uses this approach. Here is the relevant code:
 
 ```python
 from onnxmltools.convert import convert_xgboost
````
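The `pickle.dump(model, open("model.pkl", "wb"))` pattern quoted in the Scikit-learn hunk above round-trips any Python model object. A self-contained sketch with a stand-in model, so it runs without scikit-learn installed (`StubModel` is illustrative, not from the Cortex examples):

```python
import io
import pickle

class StubModel:
    """Stand-in for a trained estimator with a predict() method."""
    def predict(self, rows):
        return [sum(row) for row in rows]

model = StubModel()

# Serialize to bytes, as pickle.dump would write to model.pkl.
buf = io.BytesIO()
pickle.dump(model, buf)

# Load it back and predict, as a Predictor would at serving time.
buf.seek(0)
loaded = pickle.load(buf)
print(loaded.predict([[1, 2], [3, 4]]))  # [3, 7]
```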

docs/deployments/predictors.md (+10 -10)

````diff
@@ -74,10 +74,10 @@ The `payload` parameter is parsed according to the `Content-Type` header in the
 ### Examples
 
 <!-- CORTEX_VERSION_MINOR -->
-Many of the [examples](https://github.com/cortexlabs/cortex/tree/master/examples) use the Python Predictor, including all of the PyTorch examples.
+Many of the [examples](https://github.com/cortexlabs/cortex/tree/0.17/examples) use the Python Predictor, including all of the PyTorch examples.
 
 <!-- CORTEX_VERSION_MINOR -->
-Here is the Predictor for [examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/tree/master/examples/pytorch/iris-classifier):
+Here is the Predictor for [examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/tree/0.17/examples/pytorch/iris-classifier):
 
 ```python
 import re
@@ -155,7 +155,7 @@ xgboost==1.0.2
 ```
 
 <!-- CORTEX_VERSION_MINOR x2 -->
-The pre-installed system packages are listed in [images/python-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-predictor-cpu/Dockerfile) (for CPU) or [images/python-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-predictor-gpu/Dockerfile) (for GPU).
+The pre-installed system packages are listed in [images/python-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.17/images/python-predictor-cpu/Dockerfile) (for CPU) or [images/python-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.17/images/python-predictor-gpu/Dockerfile) (for GPU).
 
 If your application requires additional dependencies, you can install additional [Python packages](python-packages.md) and [system packages](system-packages.md).
 
@@ -190,7 +190,7 @@ class TensorFlowPredictor:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.17/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 For proper separation of concerns, it is recommended to use the constructor's `config` paramater for information such as configurable model parameters or download links for initialization files. You define `config` in your [API configuration](api-configuration.md), and it is passed through to your Predictor's constructor.
 
@@ -199,10 +199,10 @@ The `payload` parameter is parsed according to the `Content-Type` header in the
 ### Examples
 
 <!-- CORTEX_VERSION_MINOR -->
-Most of the examples in [examples/tensorflow](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow) use the TensorFlow Predictor.
+Most of the examples in [examples/tensorflow](https://github.com/cortexlabs/cortex/tree/0.17/examples/tensorflow) use the TensorFlow Predictor.
 
 <!-- CORTEX_VERSION_MINOR -->
-Here is the Predictor for [examples/tensorflow/iris-classifier](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/iris-classifier):
+Here is the Predictor for [examples/tensorflow/iris-classifier](https://github.com/cortexlabs/cortex/tree/0.17/examples/tensorflow/iris-classifier):
 
 ```python
 labels = ["setosa", "versicolor", "virginica"]
@@ -235,7 +235,7 @@ tensorflow==2.1.0
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-The pre-installed system packages are listed in [images/tensorflow-predictor/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/tensorflow-predictor/Dockerfile).
+The pre-installed system packages are listed in [images/tensorflow-predictor/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.17/images/tensorflow-predictor/Dockerfile).
 
 If your application requires additional dependencies, you can install additional [Python packages](python-packages.md) and [system packages](system-packages.md).
 
@@ -270,7 +270,7 @@ class ONNXPredictor:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/0.17/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 For proper separation of concerns, it is recommended to use the constructor's `config` paramater for information such as configurable model parameters or download links for initialization files. You define `config` in your [API configuration](api-configuration.md), and it is passed through to your Predictor's constructor.
 
@@ -279,7 +279,7 @@ The `payload` parameter is parsed according to the `Content-Type` header in the
 ### Examples
 
 <!-- CORTEX_VERSION_MINOR -->
-[examples/xgboost/iris-classifier](https://github.com/cortexlabs/cortex/tree/master/examples/xgboost/iris-classifier) uses the ONNX Predictor:
+[examples/xgboost/iris-classifier](https://github.com/cortexlabs/cortex/tree/0.17/examples/xgboost/iris-classifier) uses the ONNX Predictor:
 
 ```python
 labels = ["setosa", "versicolor", "virginica"]
@@ -316,7 +316,7 @@ requests==2.23.0
 ```
 
 <!-- CORTEX_VERSION_MINOR x2 -->
-The pre-installed system packages are listed in [images/onnx-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-predictor-cpu/Dockerfile) (for CPU) or [images/onnx-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-predictor-gpu/Dockerfile) (for GPU).
+The pre-installed system packages are listed in [images/onnx-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.17/images/onnx-predictor-cpu/Dockerfile) (for CPU) or [images/onnx-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.17/images/onnx-predictor-gpu/Dockerfile) (for GPU).
 
 If your application requires additional dependencies, you can install additional [Python packages](python-packages.md) and [system packages](system-packages.md).
 
````
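The Predictor interface described in the hunks above (a constructor that receives `config` from the API configuration, and a `predict()` that receives the parsed request payload) can be sketched without any Cortex machinery; the threshold logic here is hypothetical, not from the examples:

```python
class PythonPredictor:
    def __init__(self, config):
        # config is defined in the API configuration file and passed through
        # to the constructor; model parameters typically arrive this way.
        self.threshold = config.get("threshold", 0.5)

    def predict(self, payload):
        # payload is parsed from the request body per its Content-Type header.
        score = float(payload["score"])
        return {"label": "positive" if score >= self.threshold else "negative"}

predictor = PythonPredictor({"threshold": 0.7})
print(predictor.predict({"score": 0.9}))  # {'label': 'positive'}
```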
