# provision infrastructure on AWS and spin up a cluster
$ cortex cluster up
@@ -140,8 +136,8 @@ The CLI sends configuration and code to the cluster every time you run `cortex d
 ## Examples of Cortex deployments

 <!-- CORTEX_VERSION_README_MINOR x5 -->
-* [Sentiment analysis](https://github.com/cortexlabs/cortex/tree/0.13/examples/tensorflow/sentiment-analyzer): deploy a BERT model for sentiment analysis.
-* [Image classification](https://github.com/cortexlabs/cortex/tree/0.13/examples/tensorflow/image-classifier): deploy an Inception model to classify images.
-* [Search completion](https://github.com/cortexlabs/cortex/tree/0.13/examples/pytorch/search-completer): deploy Facebook's RoBERTa model to complete search terms.
-* [Text generation](https://github.com/cortexlabs/cortex/tree/0.13/examples/pytorch/text-generator): deploy Hugging Face's DistilGPT2 model to generate text.
-* [Iris classification](https://github.com/cortexlabs/cortex/tree/0.13/examples/sklearn/iris-classifier): deploy a scikit-learn model to classify iris flowers.
+* [Sentiment analysis](https://github.com/cortexlabs/cortex/tree/0.14/examples/tensorflow/sentiment-analyzer): deploy a BERT model for sentiment analysis.
+* [Image classification](https://github.com/cortexlabs/cortex/tree/0.14/examples/tensorflow/image-classifier): deploy an Inception model to classify images.
+* [Search completion](https://github.com/cortexlabs/cortex/tree/0.14/examples/pytorch/search-completer): deploy Facebook's RoBERTa model to complete search terms.
+* [Text generation](https://github.com/cortexlabs/cortex/tree/0.14/examples/pytorch/text-generator): deploy Hugging Face's DistilGPT2 model to generate text.
+* [Iris classification](https://github.com/cortexlabs/cortex/tree/0.14/examples/sklearn/iris-classifier): deploy a scikit-learn model to classify iris flowers.
docs/deployments/onnx.md (+2 -2)
@@ -67,7 +67,7 @@ You can log information about each request by adding a `?debug=true` parameter t
 An ONNX Predictor is a Python class that describes how to serve your ONNX model to make predictions.

 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides an `onnx_client` and a config object to initialize your implementation of the ONNX Predictor class. The `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session and helps make predictions using your model. Once your implementation of the ONNX Predictor class has been initialized, the replica is available to serve requests. Upon receiving a request, your implementation's `predict()` function is called with the JSON payload and is responsible for returning a prediction or batch of predictions. Your `predict()` function should call `onnx_client.predict()` to make an inference against your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides an `onnx_client` and a config object to initialize your implementation of the ONNX Predictor class. The `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/0.14/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session and helps make predictions using your model. Once your implementation of the ONNX Predictor class has been initialized, the replica is available to serve requests. Upon receiving a request, your implementation's `predict()` function is called with the JSON payload and is responsible for returning a prediction or batch of predictions. Your `predict()` function should call `onnx_client.predict()` to make an inference against your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.

 ## Implementation
@@ -133,6 +133,6 @@ requests==2.22.0
 ```

 <!-- CORTEX_VERSION_MINOR x2 -->
-The pre-installed system packages are listed in the [onnx-serve Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-serve/Dockerfile) (for CPU) or the [onnx-serve-gpu Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-serve-gpu/Dockerfile) (for GPU).
+The pre-installed system packages are listed in the [onnx-serve Dockerfile](https://github.com/cortexlabs/cortex/tree/0.14/images/onnx-serve/Dockerfile) (for CPU) or the [onnx-serve-gpu Dockerfile](https://github.com/cortexlabs/cortex/tree/0.14/images/onnx-serve-gpu/Dockerfile) (for GPU).

 If your application requires additional dependencies, you can [install additional Python packages](../dependency-management/python-packages.md) or [install additional system packages](../dependency-management/system-packages.md).
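The ONNX Predictor interface described in this file's first hunk can be sketched as follows. This is a minimal illustration under stated assumptions: the class and method signatures follow the interface as documented (`__init__` receives `onnx_client` and `config`, `predict()` receives the JSON payload), while the payload structure (an `"input"` key) is a hypothetical choice for the example.

```python
# Minimal sketch of an ONNX Predictor, per the interface described above.
# The payload key "input" is an illustrative assumption; onnx_client is
# supplied by Cortex when the class is initialized.
class ONNXPredictor:
    def __init__(self, onnx_client, config):
        self.client = onnx_client  # wraps an ONNX Runtime session
        self.config = config

    def predict(self, payload):
        # preprocess the JSON payload into model input
        model_input = [payload["input"]]
        # run inference against the exported ONNX model
        prediction = self.client.predict(model_input)
        # postprocess here if needed, then return the prediction
        return prediction
```

Preprocessing and postprocessing both live inside `predict()`, which keeps the replica stateless between requests apart from what `__init__` sets up.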
docs/deployments/python.md (+1 -1)
@@ -171,6 +171,6 @@ xgboost==0.90
 ```

 <!-- CORTEX_VERSION_MINOR x2 -->
-The pre-installed system packages are listed in the [python-serve Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-serve/Dockerfile) (for CPU) or the [python-serve-gpu Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-serve-gpu/Dockerfile) (for GPU).
+The pre-installed system packages are listed in the [python-serve Dockerfile](https://github.com/cortexlabs/cortex/tree/0.14/images/python-serve/Dockerfile) (for CPU) or the [python-serve-gpu Dockerfile](https://github.com/cortexlabs/cortex/tree/0.14/images/python-serve-gpu/Dockerfile) (for GPU).

 If your application requires additional dependencies, you can [install additional Python packages](../dependency-management/python-packages.md) or [install additional system packages](../dependency-management/system-packages.md).
docs/deployments/tensorflow.md (+2 -2)
@@ -68,7 +68,7 @@ You can log information about each request by adding a `?debug=true` parameter t
 A TensorFlow Predictor is a Python class that describes how to serve your TensorFlow model to make predictions.

 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` and a config object to initialize your implementation of the TensorFlow Predictor class. The `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container via gRPC to make predictions using your model. Once your implementation of the TensorFlow Predictor class has been initialized, the replica is available to serve requests. Upon receiving a request, your implementation's `predict()` function is called with the JSON payload and is responsible for returning a prediction or batch of predictions. Your `predict()` function should call `tensorflow_client.predict()` to make an inference against your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides a `tensorflow_client` and a config object to initialize your implementation of the TensorFlow Predictor class. The `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.14/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container via gRPC to make predictions using your model. Once your implementation of the TensorFlow Predictor class has been initialized, the replica is available to serve requests. Upon receiving a request, your implementation's `predict()` function is called with the JSON payload and is responsible for returning a prediction or batch of predictions. Your `predict()` function should call `tensorflow_client.predict()` to make an inference against your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.

 ## Implementation
@@ -128,6 +128,6 @@ tensorflow==2.1.0
 ```

 <!-- CORTEX_VERSION_MINOR -->
-The pre-installed system packages are listed in the [tf-api Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/tf-api/Dockerfile).
+The pre-installed system packages are listed in the [tf-api Dockerfile](https://github.com/cortexlabs/cortex/tree/0.14/images/tf-api/Dockerfile).

 If your application requires additional dependencies, you can [install additional Python packages](../dependency-management/python-packages.md) or [install additional system packages](../dependency-management/system-packages.md).
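The TensorFlow Predictor interface described in this file's first hunk can be sketched the same way; here `tensorflow_client.predict()` forwards the input to a TensorFlow Serving container over gRPC. This is a hedged sketch: the `__init__`/`predict()` signatures follow the documented interface, but the `"labels"` config key and the `"class_ids"` response field are illustrative assumptions about one particular classifier, not guaranteed shapes.

```python
# Minimal sketch of a TensorFlow Predictor, per the interface described above.
# The "labels" config key and the "class_ids" response field are illustrative
# assumptions; tensorflow_client is supplied by Cortex at initialization and
# proxies inference to TensorFlow Serving via gRPC.
class TensorFlowPredictor:
    def __init__(self, tensorflow_client, config):
        self.client = tensorflow_client
        self.labels = config.get("labels", [])

    def predict(self, payload):
        # tensorflow_client.predict() sends the payload to TF Serving
        prediction = self.client.predict(payload)
        # postprocess: map a predicted class index to a human-readable label
        class_id = int(prediction["class_ids"][0])
        return self.labels[class_id] if self.labels else class_id
```

The structure mirrors the ONNX case; only the client's transport differs (a gRPC call to a sidecar TF Serving container rather than an in-process ONNX Runtime session).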
docs/packaging-models/tensorflow.md (+1 -1)
@@ -1,7 +1,7 @@
 # TensorFlow

 <!-- CORTEX_VERSION_MINOR -->
-Export your trained model and upload the export directory, or a checkpoint directory containing the export directory (which is usually the case if you used `estimator.train_and_evaluate`). An example is shown below (here is the [complete example](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/sentiment-analyzer)):
+Export your trained model and upload the export directory, or a checkpoint directory containing the export directory (which is usually the case if you used `estimator.train_and_evaluate`). An example is shown below (here is the [complete example](https://github.com/cortexlabs/cortex/blob/0.14/examples/tensorflow/sentiment-analyzer)):
examples/tensorflow/image-classifier/inception.ipynb (+1 -1)
@@ -204,7 +204,7 @@
 },
 "source": [
 "<!-- CORTEX_VERSION_MINOR -->\n",
-"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/image-classifier) for how to deploy the model as an API."
+"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/0.14/examples/tensorflow/image-classifier) for how to deploy the model as an API."
examples/tensorflow/iris-classifier/tensorflow.ipynb (+1 -1)
@@ -289,7 +289,7 @@
 },
 "source": [
 "<!-- CORTEX_VERSION_MINOR -->\n",
-"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/iris-classifier) for how to deploy the model as an API."
+"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/0.14/examples/tensorflow/iris-classifier) for how to deploy the model as an API."
examples/tensorflow/sentiment-analyzer/bert.ipynb (+1 -1)
@@ -1000,7 +1000,7 @@
 },
 "source": [
 "<!-- CORTEX_VERSION_MINOR -->\n",
-"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/sentiment-analyzer) for how to deploy the model as an API."
+"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/0.14/examples/tensorflow/sentiment-analyzer) for how to deploy the model as an API."
examples/tensorflow/text-generator/gpt-2.ipynb (+2 -2)
@@ -346,7 +346,7 @@
 },
 "source": [
 "<!-- CORTEX_VERSION_MINOR x2 -->\n",
-"We also need to upload `vocab.bpe` and `encoder.json`, so that the [encoder](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/text-generator/encoder.py) in the [Predictor](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/text-generator/predictor.py) can encode the input text before making a request to the model."
+"We also need to upload `vocab.bpe` and `encoder.json`, so that the [encoder](https://github.com/cortexlabs/cortex/blob/0.14/examples/tensorflow/text-generator/encoder.py) in the [Predictor](https://github.com/cortexlabs/cortex/blob/0.14/examples/tensorflow/text-generator/predictor.py) can encode the input text before making a request to the model."
 ]
 },
 {
@@ -376,7 +376,7 @@
 },
 "source": [
 "<!-- CORTEX_VERSION_MINOR -->\n",
-"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/text-generator) for how to deploy the model as an API."
+"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/0.14/examples/tensorflow/text-generator) for how to deploy the model as an API."
examples/xgboost/iris-classifier/xgboost.ipynb (+1 -1)
@@ -237,7 +237,7 @@
 },
 "source": [
 "<!-- CORTEX_VERSION_MINOR -->\n",
-"That's it! See the [example](https://github.com/cortexlabs/cortex/tree/master/examples/xgboost/iris-classifier) for how to deploy the model as an API."
+"That's it! See the [example](https://github.com/cortexlabs/cortex/tree/0.14/examples/xgboost/iris-classifier) for how to deploy the model as an API."