README.md

@@ -152,6 +145,6 @@ Cortex is an open source alternative to serving models with SageMaker or buildin
## Examples

<!-- CORTEX_VERSION_README_MINOR x3 -->
-* [Image classification](https://github.com/cortexlabs/cortex/tree/0.16/examples/tensorflow/image-classifier): deploy an Inception model to classify images.
-* [Search completion](https://github.com/cortexlabs/cortex/tree/0.16/examples/pytorch/search-completer): deploy Facebook's RoBERTa model to complete search terms.
-* [Text generation](https://github.com/cortexlabs/cortex/tree/0.16/examples/pytorch/text-generator): deploy Hugging Face's DistilGPT2 model to generate text.
+* [Image classification](https://github.com/cortexlabs/cortex/tree/0.17/examples/tensorflow/image-classifier): deploy an Inception model to classify images.
+* [Search completion](https://github.com/cortexlabs/cortex/tree/0.17/examples/pytorch/search-completer): deploy Facebook's RoBERTa model to complete search terms.
+* [Text generation](https://github.com/cortexlabs/cortex/tree/0.17/examples/pytorch/text-generator): deploy Hugging Face's DistilGPT2 model to generate text.
docs/cluster-management/config.md (+17 -17)
@@ -2,7 +2,7 @@

The Cortex cluster may be configured by providing a configuration file to `cortex cluster up` or `cortex cluster configure` via the `--config` flag (e.g. `cortex cluster up --config cluster.yaml`). Below is the schema for the cluster configuration file, with default values shown (unless otherwise specified):

-<!-- CORTEX_VERSION_MINOR -->
+<!-- CORTEX_VERSION_MINOR x2 -->
```yaml
# cluster.yaml

@@ -68,31 +68,31 @@ log_group: cortex
tags: # <string>: <string> map of key/value pairs

# whether to use spot instances in the cluster (default: false)
-# see https://cortex.dev/v/master/cluster-management/spot-instances for additional details on spot configuration
+# see https://cortex.dev/v/0.17/cluster-management/spot-instances for additional details on spot configuration
spot: false

-# see https://cortex.dev/v/master/guides/subdomain-https-setup for instructions on how to set up HTTPS for APIs
+# see https://cortex.dev/v/0.17/guides/subdomain-https-setup for instructions on how to set up HTTPS for APIs
ssl_certificate_arn: # if empty, APIs will still be accessible via HTTPS (in addition to HTTP), but will not use a trusted certificate
```

The default docker images used for your Predictors are listed in the instructions for [system packages](../deployments/system-packages.md), and can be overridden in your [API configuration](../deployments/api-configuration.md).

-The docker images used by the Cortex cluster can also be overriden, although this is not common. They can be configured by adding any of these keys to your cluster configuration file (default values are shown):
+The docker images used by the Cortex cluster can also be overridden, although this is not common. They can be configured by adding any of these keys to your cluster configuration file (default values are shown):
docs/deployments/exporting.md (+7 -7)
@@ -11,7 +11,7 @@ Here are examples for some common ML libraries:

The recommended approach is to export your PyTorch model with [torch.save()](https://pytorch.org/docs/stable/torch.html?highlight=save#torch.save). Here is PyTorch's documentation on [saving and loading models](https://pytorch.org/tutorials/beginner/saving_loading_models.html).

<!-- CORTEX_VERSION_MINOR -->
-[examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/pytorch/iris-classifier) exports its trained model like this:
+[examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.17/examples/pytorch/iris-classifier) exports its trained model like this:
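The export snippet itself is collapsed in this diff view. As a minimal sketch of a `torch.save()` export (the network definition and file name here are illustrative, not taken from the example):

```python
import torch
import torch.nn as nn

# stand-in model; the iris-classifier example defines its own network
model = nn.Sequential(nn.Linear(4, 3))

# save the trained weights (the state_dict) to disk
torch.save(model.state_dict(), "weights.pth")
```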
It may also be possible to export your PyTorch model into the ONNX format using [torch.onnx.export()](https://pytorch.org/docs/stable/onnx.html#torch.onnx.export).

<!-- CORTEX_VERSION_MINOR -->
-For example, if [examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/pytorch/iris-classifier) were to export the model to ONNX, it would look like this:
+For example, if [examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.17/examples/pytorch/iris-classifier) were to export the model to ONNX, it would look like this:

```python
placeholder = torch.randn(1, 4)
```
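The rest of that block is cut off by the diff; a minimal sketch of the full export (assuming `model` is the trained PyTorch model and that it takes four input features, as the iris dataset does) might be:

```python
import torch

# dummy input matching the model's expected shape (batch of 1, 4 features)
placeholder = torch.randn(1, 4)

# trace `model` with the placeholder input and write the ONNX graph to disk
torch.onnx.export(model, placeholder, "model.onnx")
```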
@@ -50,7 +50,7 @@ A TensorFlow `SavedModel` directory should have this structure:
```

<!-- CORTEX_VERSION_MINOR -->
-Most of the TensorFlow examples use this approach. Here is the relevant code from [examples/tensorflow/sentiment-analyzer](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/sentiment-analyzer):
+Most of the TensorFlow examples use this approach. Here is the relevant code from [examples/tensorflow/sentiment-analyzer](https://github.com/cortexlabs/cortex/blob/0.17/examples/tensorflow/sentiment-analyzer):
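The sentiment-analyzer snippet is collapsed in this view; as a hedged sketch (the model and export path below are illustrative stand-ins), a `SavedModel` export in TensorFlow 2 can be as simple as:

```python
import tensorflow as tf

# stand-in Keras model; the sentiment-analyzer example builds its own
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# write the model to a SavedModel directory (version subdirectory "1")
model.save("export/1", save_format="tf")
```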
-[examples/tensorflow/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/iris-classifier) also uses the `SavedModel` approach, and includes a Python notebook demonstrating how it was exported.
+[examples/tensorflow/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.17/examples/tensorflow/iris-classifier) also uses the `SavedModel` approach, and includes a Python notebook demonstrating how it was exported.

### Other model formats

There are other ways to export Keras or TensorFlow models, and as long as they can be loaded and used to make predictions in Python, they will be supported by Cortex (one such alternative is sketched below).

<!-- CORTEX_VERSION_MINOR -->
-For example, the `crnn` API in [examples/tensorflow/license-plate-reader](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/license-plate-reader) uses this approach.
+For example, the `crnn` API in [examples/tensorflow/license-plate-reader](https://github.com/cortexlabs/cortex/blob/0.17/examples/tensorflow/license-plate-reader) uses this approach.
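A hedged sketch of one such alternative format, assuming a Keras model (a single HDF5 file rather than a `SavedModel` directory; the model and file name are illustrative, and this is not necessarily what the license-plate-reader example does):

```python
import tensorflow as tf

# stand-in Keras model
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# export architecture + weights as a single HDF5 file
model.save("model.h5")
```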
## Scikit-learn
@@ -104,7 +104,7 @@ For example, the `crnn` API in [examples/tensorflow/license-plate-reader](https:

Scikit-learn models are typically exported using `pickle`. Here is [Scikit-learn's documentation](https://scikit-learn.org/stable/modules/model_persistence.html).

<!-- CORTEX_VERSION_MINOR -->
-[examples/sklearn/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/sklearn/iris-classifier) uses this approach. Here is the relevant code:
+[examples/sklearn/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.17/examples/sklearn/iris-classifier) uses this approach. Here is the relevant code:

```python
pickle.dump(model, open("model.pkl", "wb"))
```
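For context, a self-contained sketch of that export (the classifier and training data here are illustrative stand-ins for the example's own):

```python
import pickle

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# train a stand-in classifier on the iris dataset
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# serialize the fitted model; the context manager closes the file
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)
```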
@@ -157,7 +157,7 @@ model.save_model("model.bin")

It is also possible to export an XGBoost model to the ONNX format using [onnxmltools](https://github.com/onnx/onnxmltools).

<!-- CORTEX_VERSION_MINOR -->
-[examples/xgboost/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/xgboost/iris-classifier) uses this approach. Here is the relevant code:
+[examples/xgboost/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.17/examples/xgboost/iris-classifier) uses this approach. Here is the relevant code:
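The relevant code is collapsed here; a hedged sketch of an onnxmltools conversion (the input name, shape, and file name are illustrative, and the stand-in classifier is fitted on iris just so the snippet runs):

```python
import onnxmltools
import xgboost as xgb
from onnxmltools.convert.common.data_types import FloatTensorType
from sklearn.datasets import load_iris

# fit a stand-in XGBoost classifier on the iris dataset
X, y = load_iris(return_X_y=True)
model = xgb.XGBClassifier().fit(X, y)

# declare the input signature (one float32 row of 4 features) and convert
onnx_model = onnxmltools.convert_xgboost(model, initial_types=[("input", FloatTensorType([1, 4]))])
onnxmltools.utils.save_model(onnx_model, "model.onnx")
```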
docs/deployments/predictors.md (+10 -10)
@@ -74,10 +74,10 @@ The `payload` parameter is parsed according to the `Content-Type` header in the
### Examples

<!-- CORTEX_VERSION_MINOR -->
-Many of the [examples](https://github.com/cortexlabs/cortex/tree/master/examples) use the Python Predictor, including all of the PyTorch examples.
+Many of the [examples](https://github.com/cortexlabs/cortex/tree/0.17/examples) use the Python Predictor, including all of the PyTorch examples.

<!-- CORTEX_VERSION_MINOR -->
-Here is the Predictor for [examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/tree/master/examples/pytorch/iris-classifier):
+Here is the Predictor for [examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/tree/0.17/examples/pytorch/iris-classifier):

```python
import re
```
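The Predictor itself is collapsed in this view; a minimal sketch of the Python Predictor interface (the body is an illustrative placeholder, not the example's actual logic):

```python
import re

class PythonPredictor:
    def __init__(self, config):
        # `config` is passed through from the API configuration;
        # a real predictor would download and load its model here
        self.model = None  # placeholder

    def predict(self, payload):
        # `payload` is the parsed request body; preprocess, run the model,
        # and postprocess here (token counting is just a stand-in)
        tokens = re.findall(r"\w+", str(payload))
        return {"token_count": len(tokens)}
```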
@@ -155,7 +155,7 @@ xgboost==1.0.2
```

<!-- CORTEX_VERSION_MINOR x2 -->
-The pre-installed system packages are listed in [images/python-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-predictor-cpu/Dockerfile) (for CPU) or [images/python-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-predictor-gpu/Dockerfile) (for GPU).
+The pre-installed system packages are listed in [images/python-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.17/images/python-predictor-cpu/Dockerfile) (for CPU) or [images/python-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.17/images/python-predictor-gpu/Dockerfile) (for GPU).

If your application requires additional dependencies, you can install additional [Python packages](python-packages.md) and [system packages](system-packages.md).
@@ -190,7 +190,7 @@ class TensorFlowPredictor:
```

<!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.17/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.

For proper separation of concerns, it is recommended to use the constructor's `config` parameter for information such as configurable model parameters or download links for initialization files. You define `config` in your [API configuration](api-configuration.md), and it is passed through to your Predictor's constructor.
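A minimal sketch of this pattern (the pre/postprocessing hooks are illustrative):

```python
class TensorFlowPredictor:
    def __init__(self, tensorflow_client, config):
        # keep the client as an instance variable, as recommended above
        self.client = tensorflow_client
        self.config = config

    def predict(self, payload):
        # optional preprocessing of the parsed JSON payload would go here
        prediction = self.client.predict(payload)
        # optional postprocessing of the prediction would go here
        return prediction
```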
@@ -199,10 +199,10 @@ The `payload` parameter is parsed according to the `Content-Type` header in the
### Examples

<!-- CORTEX_VERSION_MINOR -->
-Most of the examples in [examples/tensorflow](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow) use the TensorFlow Predictor.
+Most of the examples in [examples/tensorflow](https://github.com/cortexlabs/cortex/tree/0.17/examples/tensorflow) use the TensorFlow Predictor.

<!-- CORTEX_VERSION_MINOR -->
-Here is the Predictor for [examples/tensorflow/iris-classifier](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/iris-classifier):
+Here is the Predictor for [examples/tensorflow/iris-classifier](https://github.com/cortexlabs/cortex/tree/0.17/examples/tensorflow/iris-classifier):

```python
labels = ["setosa", "versicolor", "virginica"]
```
@@ -235,7 +235,7 @@ tensorflow==2.1.0
```

<!-- CORTEX_VERSION_MINOR -->
-The pre-installed system packages are listed in [images/tensorflow-predictor/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/tensorflow-predictor/Dockerfile).
+The pre-installed system packages are listed in [images/tensorflow-predictor/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.17/images/tensorflow-predictor/Dockerfile).

If your application requires additional dependencies, you can install additional [Python packages](python-packages.md) and [system packages](system-packages.md).
@@ -270,7 +270,7 @@ class ONNXPredictor:
```

<!-- CORTEX_VERSION_MINOR -->
-Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/0.17/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.

For proper separation of concerns, it is recommended to use the constructor's `config` parameter for information such as configurable model parameters or download links for initialization files. You define `config` in your [API configuration](api-configuration.md), and it is passed through to your Predictor's constructor.
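And the analogous sketch for the ONNX Predictor (the payload key and dtype are illustrative assumptions, not part of the documented interface):

```python
import numpy as np

class ONNXPredictor:
    def __init__(self, onnx_client, config):
        # keep the client as an instance variable, as recommended above
        self.client = onnx_client

    def predict(self, payload):
        # illustrative preprocessing: coerce the parsed JSON payload into
        # the float32 array the ONNX Runtime session expects
        model_input = np.asarray(payload["input"], dtype=np.float32)
        return self.client.predict(model_input)
```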
@@ -279,7 +279,7 @@ The `payload` parameter is parsed according to the `Content-Type` header in the
### Examples

<!-- CORTEX_VERSION_MINOR -->
-[examples/xgboost/iris-classifier](https://github.com/cortexlabs/cortex/tree/master/examples/xgboost/iris-classifier) uses the ONNX Predictor:
+[examples/xgboost/iris-classifier](https://github.com/cortexlabs/cortex/tree/0.17/examples/xgboost/iris-classifier) uses the ONNX Predictor:

```python
labels = ["setosa", "versicolor", "virginica"]
```
@@ -316,7 +316,7 @@ requests==2.23.0
```

<!-- CORTEX_VERSION_MINOR x2 -->
-The pre-installed system packages are listed in [images/onnx-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-predictor-cpu/Dockerfile) (for CPU) or [images/onnx-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-predictor-gpu/Dockerfile) (for GPU).
+The pre-installed system packages are listed in [images/onnx-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.17/images/onnx-predictor-cpu/Dockerfile) (for CPU) or [images/onnx-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.17/images/onnx-predictor-gpu/Dockerfile) (for GPU).

If your application requires additional dependencies, you can install additional [Python packages](python-packages.md) and [system packages](system-packages.md).