
Commit e22985f

Support installing cortex pkg in API & aggregate internal modules into a pkg (#1709)

1 parent b20ee8a commit e22985f

74 files changed: +330 −216 lines
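
The headline change: the `cortex` Python package (the client, now at `pkg/cortex/client`) becomes installable inside an API container, and the serving-side internals are aggregated into an installable package of their own under `pkg/cortex/serve`. A rough sketch of what the first half enables; the environment name and method calls are illustrative, following this era's `docs/cli/python-client.md` (which `dev/generate_python_client_md.sh` below regenerates):

```python
# Sketch only: assumes `pip install cortex` has already run in the API image.
import cortex

# Connect to a configured Cortex environment ("aws" is an illustrative name).
cx = cortex.client("aws")

# Interact with the cluster programmatically instead of via the CLI.
print(cx.list_apis())
```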

Diff for: build/cli.sh

+1 −1

@@ -47,7 +47,7 @@ function build_and_upload() {
 }
 
 function build_python {
-  pushd $ROOT/pkg/workloads/cortex/client
+  pushd $ROOT/pkg/cortex/client
   python setup.py sdist
 
   if [ "$upload" == "true" ]; then

Diff for: dev/generate_python_client_md.sh

+2 −2

@@ -26,7 +26,7 @@ ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
 pip3 uninstall -y cortex
 
-cd $ROOT/pkg/workloads/cortex/client
+cd $ROOT/pkg/cortex/client
 
 pip3 install -e .
 

@@ -64,4 +64,4 @@ truncate -s -1 $ROOT/docs/cli/python-client.md
 sed -i "s/^## create\\\_api/## create\\\_api\n\n<!-- CORTEX_VERSION_MINOR -->/g" $ROOT/docs/cli/python-client.md
 
 pip3 uninstall -y cortex
-rm -rf $ROOT/pkg/workloads/cortex/client/cortex.egg-info
+rm -rf $ROOT/pkg/cortex/client/cortex.egg-info

Diff for: dev/python_version_test.sh

+2 −2

@@ -35,7 +35,7 @@ pip install requests
 export CORTEX_CLI_PATH=$ROOT/bin/cortex
 
 # install cortex
-cd $ROOT/pkg/workloads/cortex/client
+cd $ROOT/pkg/cortex/client
 pip install -e .
 
 # run script.py

@@ -44,4 +44,4 @@ python $ROOT/dev/deploy_test.py $2
 # clean up conda
 conda deactivate
 conda env remove -n env
-rm -rf $ROOT/pkg/workloads/cortex/client/cortex.egg-info
+rm -rf $ROOT/pkg/cortex/client/cortex.egg-info

Diff for: dev/versions.md

+2 −2

@@ -179,7 +179,7 @@ Note: it's ok if example training notebooks aren't upgraded, as long as the expo
 ## Python packages
 
 1. Update versions in `images/python-predictor-*/Dockerfile`, `images/tensorflow-predictor/Dockerfile`, and `images/onnx-predictor-*/Dockerfile`
-1. Update versions in `pkg/workloads/cortex/serve/*requirements.txt` and `pkg/workloads/cortex/downloader/requirements.txt`
+1. Update versions in `pkg/cortex/serve/*requirements.txt` and `pkg/cortex/downloader/requirements.txt`
 1. Update the versions listed in "Pre-installed packages" in `realtime-api/predictors.md` and `batch-api/predictors.md`
    * look at the diff carefully since some packages are not shown, and e.g. `tensorflow-cpu` -> `tensorflow`
    * be careful not to update any of the versions for Inferentia that are not latest in `images/python-predictor-inf/Dockerfile`

@@ -248,7 +248,7 @@ Note: overriding horizontal-pod-autoscaler-sync-period on EKS is currently not s
 1. Update the version in `images/statsd/Dockerfile`
 1. In this [GitHub Repo](https://github.com/aws-samples/amazon-cloudwatch-container-insights), set the tree to `master` and open [k8s-yaml-templates/cwagent-statsd/cwagent-statsd-daemonset.yaml](https://github.com/aws-samples/amazon-cloudwatch-container-insights/blob/master/k8s-yaml-templates/cwagent-statsd/cwagent-statsd-daemonset.yaml) and [k8s-yaml-templates/cwagent-statsd/cwagent-statsd-configmap.yaml](https://github.com/aws-samples/amazon-cloudwatch-container-insights/blob/master/k8s-yaml-templates/cwagent-statsd/cwagent-statsd-configmap.yaml)
 1. Update `statsd.yaml` as necessary (this wasn't copy-pasted, so you may need to check the diff intelligently)
-1. Update the datadog client version in `pkg/workloads/cortex/serve/requirements.txt`
+1. Update the datadog client version in `pkg/cortex/serve/requirements.txt`
 
 ## aws-iam-authenticator
 

Diff for: docs/workloads/batch/predictors.md

+2 −2

@@ -216,7 +216,7 @@ class TensorFlowPredictor:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 When multiple models are defined using the Predictor's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`).
 

@@ -297,7 +297,7 @@ class ONNXPredictor:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 When multiple models are defined using the Predictor's `models` field, the `onnx_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(model_input, "text-generator")`).
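
The paragraphs re-linked above define the Predictor–client contract. A minimal sketch of a batch `TensorFlowPredictor` using the multi-model form from that text ("text-generator" is the docs' own example name):

```python
class TensorFlowPredictor:
    def __init__(self, tensorflow_client, config):
        # Save the client so predict() can reach the TensorFlow Serving container.
        self.client = tensorflow_client

    def predict(self, payload):
        # With multiple models in the `models` field, pass the model name
        # as the second argument.
        return self.client.predict(payload, "text-generator")
```

The ONNX variant has the same shape: save `onnx_client` in the constructor and call `self.client.predict(model_input, "text-generator")`.
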
Diff for: docs/workloads/realtime/predictors.md

+3 −3

@@ -120,7 +120,7 @@ class PythonPredictor:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-When explicit model paths are specified in the Python predictor's API configuration, Cortex provides a `python_client` to your Predictor's constructor. `python_client` is an instance of [PythonClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/python.py) that is used to load model(s) (it calls the `load_model()` method of your predictor, which must be defined when using explicit model paths). It should be saved as an instance variable in your Predictor, and your `predict()` function should call `python_client.get_model()` to load your model for inference. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+When explicit model paths are specified in the Python predictor's API configuration, Cortex provides a `python_client` to your Predictor's constructor. `python_client` is an instance of [PythonClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/lib/client/python.py) that is used to load model(s) (it calls the `load_model()` method of your predictor, which must be defined when using explicit model paths). It should be saved as an instance variable in your Predictor, and your `predict()` function should call `python_client.get_model()` to load your model for inference. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 When multiple models are defined using the Predictor's `models` field, the `python_client.get_model()` method expects an argument `model_name` which must hold the name of the model that you want to load (for example: `self.client.get_model("text-generator")`). There is also an optional second argument to specify the model version.
 

@@ -261,7 +261,7 @@ class TensorFlowPredictor:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 When multiple models are defined using the Predictor's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`). There is also an optional third argument to specify the model version.
 

@@ -351,7 +351,7 @@ class ONNXPredictor:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 When multiple models are defined using the Predictor's `models` field, the `onnx_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(model_input, "text-generator")`). There is also an optional third argument to specify the model version.
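
The realtime Python predictor's `python_client` works differently from the other clients: it loads models rather than proxying inference. A minimal sketch assuming explicit model paths, where `load_model()` is required and `get_model()` fetches by name (pickle is only an illustrative serialization format):

```python
import pickle


class PythonPredictor:
    def __init__(self, python_client, config):
        # python_client loads models on demand by calling load_model() below.
        self.client = python_client

    def load_model(self, model_path):
        # Required when explicit model paths are configured in the API spec.
        with open(model_path, "rb") as f:
            return pickle.load(f)

    def predict(self, payload):
        # Fetch the named model; an optional second argument selects a version.
        model = self.client.get_model("text-generator")
        return model.predict(payload)
```

The realtime TensorFlow and ONNX clients follow the batch shape shown earlier, with an optional third `predict()` argument selecting a model version.
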
Diff for: images/downloader/Dockerfile

+9 −7

@@ -1,7 +1,5 @@
 FROM ubuntu:18.04
 
-ENV PYTHONPATH="${PYTHONPATH}:/src"
-
 RUN apt-get update -qq && apt-get install -y -q \
     curl \
     python3.6 \

@@ -12,12 +10,16 @@ RUN apt-get update -qq && apt-get install -y -q \
     pip install --upgrade pip && \
     rm -rf /root/.cache/pip*
 
-COPY pkg/workloads/cortex/downloader/requirements.txt /src/cortex/downloader/requirements.txt
-RUN pip install -r /src/cortex/downloader/requirements.txt && \
+COPY pkg/cortex/serve/cortex_internal.requirements.txt /src/cortex/serve/cortex_internal.requirements.txt
+
+RUN pip install --no-cache-dir \
+    -r /src/cortex/serve/cortex_internal.requirements.txt && \
     rm -rf /root/.cache/pip*
 
-COPY pkg/workloads/cortex/consts.py /src/cortex/
-COPY pkg/workloads/cortex/lib /src/cortex/lib
-COPY pkg/workloads/cortex/downloader /src/cortex/downloader
+COPY pkg/cortex/downloader /src/cortex/downloader
+
+COPY pkg/cortex/serve/ /src/cortex/serve
+RUN pip install --no-deps /src/cortex/serve/ && \
+    rm -rf /root/.cache/pip*
 
 ENTRYPOINT ["/usr/bin/python3.6", "/src/cortex/downloader/download.py"]
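
This diff shows the commit's core pattern: the `PYTHONPATH=/src` hack and the piecemeal `COPY` of `consts.py` and `lib` are replaced by copying `pkg/cortex/serve` and running `pip install --no-deps /src/cortex/serve/`, with dependencies pinned separately in `cortex_internal.requirements.txt`. For orientation, a hypothetical `setup.py` of the shape such an install expects; the real file lives under `pkg/cortex/serve` and is not shown in this commit view:

```python
# Hypothetical sketch; the actual pkg/cortex/serve/setup.py is not shown here.
# `pip install --no-deps /src/cortex/serve/` works against any file of this shape.
from setuptools import find_packages, setup

setup(
    name="cortex-internal",    # assumed distribution name
    version="0.0.0",           # illustrative
    packages=find_packages(),  # aggregates the internal modules (lib, serve, ...)
)
```

Installing with `--no-deps` keeps the installed dependency set exactly what the pinned requirements files resolved in the earlier layer, so the frequently-changing source layer stays cheap to rebuild and the build stays reproducible.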

Diff for: images/onnx-predictor-cpu/Dockerfile

+9 −10

@@ -22,7 +22,6 @@ ENV S6_BEHAVIOUR_IF_STAGE2_FAILS 2
 RUN locale-gen en_US.UTF-8
 ENV LANG=en_US.UTF-8 LANGUAGE=en_US.UTF-8 LC_ALL=en_US.UTF-8
 
-ENV PYTHONPATH="${PYTHONPATH}:/src"
 ENV PATH=/opt/conda/bin:$PATH
 ENV PYTHONVERSION=3.6.9
 

@@ -43,12 +42,14 @@ RUN /opt/conda/bin/conda create -n env -c conda-forge python=$PYTHONVERSION pip=
 ENV BASH_ENV=~/.env
 SHELL ["/bin/bash", "-c"]
 
-COPY pkg/workloads/cortex/serve/requirements.txt /src/cortex/serve/requirements.txt
-COPY pkg/workloads/cortex/serve/onnx-cpu.requirements.txt /src/cortex/serve/image.requirements.txt
+COPY pkg/cortex/serve/serve.requirements.txt /src/cortex/serve/serve.requirements.txt
+COPY pkg/cortex/serve/onnx-cpu.requirements.txt /src/cortex/serve/image.requirements.txt
+COPY pkg/cortex/serve/cortex_internal.requirements.txt /src/cortex/serve/cortex_internal.requirements.txt
+RUN pip install --no-cache-dir \
+    -r /src/cortex/serve/serve.requirements.txt \
+    -r /src/cortex/serve/image.requirements.txt \
+    -r /src/cortex/serve/cortex_internal.requirements.txt
 
-RUN pip install --no-cache-dir -r \
-    /src/cortex/serve/requirements.txt \
-    -r /src/cortex/serve/image.requirements.txt
 
 ARG SLIM="false"
 RUN test "${SLIM}" = "true" || ( \

@@ -74,10 +75,8 @@ RUN test "${SLIM}" = "true" || ( \
     && apt-get clean -qq && rm -rf /var/lib/apt/lists/* \
     )
 
-COPY pkg/workloads/cortex/consts.py /src/cortex
-COPY pkg/workloads/cortex/lib /src/cortex/lib
-COPY pkg/workloads/cortex/serve /src/cortex/serve
-
+COPY pkg/cortex/serve/ /src/cortex/serve
+RUN pip install --no-deps /src/cortex/serve/
 RUN mv /src/cortex/serve/init/bootloader.sh /etc/cont-init.d/bootloader.sh
 
 ENTRYPOINT ["/init"]

Diff for: images/onnx-predictor-gpu/Dockerfile

+9 −11

@@ -22,7 +22,6 @@ ENV S6_BEHAVIOUR_IF_STAGE2_FAILS 2
 RUN locale-gen en_US.UTF-8
 ENV LANG=en_US.UTF-8 LANGUAGE=en_US.UTF-8 LC_ALL=en_US.UTF-8
 
-ENV PYTHONPATH="${PYTHONPATH}:/src"
 ENV PATH=/opt/conda/bin:$PATH
 ENV PYTHONVERSION=3.6.9
 

@@ -43,12 +42,13 @@ RUN /opt/conda/bin/conda create -n env -c conda-forge python=$PYTHONVERSION pip=
 ENV BASH_ENV=~/.env
 SHELL ["/bin/bash", "-c"]
 
-COPY pkg/workloads/cortex/serve/requirements.txt /src/cortex/serve/requirements.txt
-COPY pkg/workloads/cortex/serve/onnx-gpu.requirements.txt /src/cortex/serve/image.requirements.txt
-
-RUN pip install --no-cache-dir -r \
-    /src/cortex/serve/requirements.txt \
-    -r /src/cortex/serve/image.requirements.txt
+COPY pkg/cortex/serve/serve.requirements.txt /src/cortex/serve/serve.requirements.txt
+COPY pkg/cortex/serve/onnx-gpu.requirements.txt /src/cortex/serve/image.requirements.txt
+COPY pkg/cortex/serve/cortex_internal.requirements.txt /src/cortex/serve/cortex_internal.requirements.txt
+RUN pip install --no-cache-dir \
+    -r /src/cortex/serve/serve.requirements.txt \
+    -r /src/cortex/serve/image.requirements.txt \
+    -r /src/cortex/serve/cortex_internal.requirements.txt
 
 ARG SLIM="false"
 RUN test "${SLIM}" = "true" || ( \

@@ -74,10 +74,8 @@ RUN test "${SLIM}" = "true" || ( \
     && apt-get clean -qq && rm -rf /var/lib/apt/lists/* \
     )
 
-COPY pkg/workloads/cortex/consts.py /src/cortex
-COPY pkg/workloads/cortex/lib /src/cortex/lib
-COPY pkg/workloads/cortex/serve /src/cortex/serve
-
+COPY pkg/cortex/serve/ /src/cortex/serve
+RUN pip install --no-deps /src/cortex/serve/
 RUN mv /src/cortex/serve/init/bootloader.sh /etc/cont-init.d/bootloader.sh
 
 ENTRYPOINT ["/init"]

Diff for: images/python-predictor-cpu/Dockerfile

+7 −8

@@ -22,7 +22,6 @@ ENV S6_BEHAVIOUR_IF_STAGE2_FAILS 2
 RUN locale-gen en_US.UTF-8
 ENV LANG=en_US.UTF-8 LANGUAGE=en_US.UTF-8 LC_ALL=en_US.UTF-8
 
-ENV PYTHONPATH="${PYTHONPATH}:/src"
 ENV PATH=/opt/conda/bin:$PATH
 ENV PYTHONVERSION=3.6.9
 

@@ -43,9 +42,11 @@ RUN /opt/conda/bin/conda create -n env -c conda-forge python=$PYTHONVERSION pip=
 ENV BASH_ENV=~/.env
 SHELL ["/bin/bash", "-c"]
 
-COPY pkg/workloads/cortex/serve/requirements.txt /src/cortex/serve/requirements.txt
-RUN pip install --no-cache-dir -r \
-    /src/cortex/serve/requirements.txt
+COPY pkg/cortex/serve/serve.requirements.txt /src/cortex/serve/serve.requirements.txt
+COPY pkg/cortex/serve/cortex_internal.requirements.txt /src/cortex/serve/cortex_internal.requirements.txt
+RUN pip install --no-cache-dir \
+    -r /src/cortex/serve/serve.requirements.txt \
+    -r /src/cortex/serve/cortex_internal.requirements.txt
 
 ARG SLIM="false"
 RUN test "${SLIM}" = "true" || ( \

@@ -92,10 +93,8 @@ RUN test "${SLIM}" = "true" || ( \
     xgboost==1.2.0 \
     )
 
-COPY pkg/workloads/cortex/consts.py /src/cortex
-COPY pkg/workloads/cortex/lib /src/cortex/lib
-COPY pkg/workloads/cortex/serve /src/cortex/serve
-
+COPY pkg/cortex/serve/ /src/cortex/serve
+RUN pip install --no-deps /src/cortex/serve/
 RUN mv /src/cortex/serve/init/bootloader.sh /etc/cont-init.d/bootloader.sh
 
 ENTRYPOINT ["/init"]

Diff for: images/python-predictor-gpu/Dockerfile

+7 −8

@@ -24,7 +24,6 @@ ENV S6_BEHAVIOUR_IF_STAGE2_FAILS 2
 RUN locale-gen en_US.UTF-8
 ENV LANG=en_US.UTF-8 LANGUAGE=en_US.UTF-8 LC_ALL=en_US.UTF-8
 
-ENV PYTHONPATH="${PYTHONPATH}:/src"
 ENV PATH=/opt/conda/bin:$PATH
 ENV PYTHONVERSION=3.6.9
 

@@ -45,9 +44,11 @@ RUN /opt/conda/bin/conda create -n env -c conda-forge python=$PYTHONVERSION pip=
 ENV BASH_ENV=~/.env
 SHELL ["/bin/bash", "-c"]
 
-COPY pkg/workloads/cortex/serve/requirements.txt /src/cortex/serve/requirements.txt
-RUN pip install --no-cache-dir -r \
-    /src/cortex/serve/requirements.txt
+COPY pkg/cortex/serve/serve.requirements.txt /src/cortex/serve/serve.requirements.txt
+COPY pkg/cortex/serve/cortex_internal.requirements.txt /src/cortex/serve/cortex_internal.requirements.txt
+RUN pip install --no-cache-dir \
+    -r /src/cortex/serve/serve.requirements.txt \
+    -r /src/cortex/serve/cortex_internal.requirements.txt
 
 ARG SLIM="false"
 RUN test "${SLIM}" = "true" || ( \

@@ -97,10 +98,8 @@ RUN test "${SLIM}" = "true" || ( \
     xgboost==1.2.0 \
     )
 
-COPY pkg/workloads/cortex/consts.py /src/cortex
-COPY pkg/workloads/cortex/lib /src/cortex/lib
-COPY pkg/workloads/cortex/serve /src/cortex/serve
-
+COPY pkg/cortex/serve/ /src/cortex/serve
+RUN pip install --no-deps /src/cortex/serve/
 RUN mv /src/cortex/serve/init/bootloader.sh /etc/cont-init.d/bootloader.sh
 
 ENTRYPOINT ["/init"]

Diff for: images/python-predictor-inf/Dockerfile

+6 −6

@@ -53,9 +53,9 @@ RUN /opt/conda/bin/conda create -n env -c conda-forge python=$PYTHONVERSION pip=
 ENV BASH_ENV=~/.env
 SHELL ["/bin/bash", "-c"]
 
-COPY pkg/workloads/cortex/serve/requirements.txt /src/cortex/serve/requirements.txt
-RUN pip install --no-cache-dir -r \
-    /src/cortex/serve/requirements.txt
+COPY pkg/cortex/serve/requirements.txt /src/cortex/serve/requirements.txt
+RUN pip install --no-cache-dir \
+    -r /src/cortex/serve/requirements.txt
 
 ARG SLIM="false"
 RUN test "${SLIM}" = "true" || ( \

@@ -102,9 +102,9 @@ RUN test "${SLIM}" = "true" || ( \
     torchvision==0.6.1 \
     )
 
-COPY pkg/workloads/cortex/consts.py /src/cortex
-COPY pkg/workloads/cortex/lib /src/cortex/lib
-COPY pkg/workloads/cortex/serve /src/cortex/serve
+COPY pkg/cortex/consts.py /src/cortex
+COPY pkg/cortex/lib /src/cortex/lib
+COPY pkg/cortex/serve /src/cortex/serve
 
 RUN mv /src/cortex/serve/init/bootloader.sh /etc/cont-init.d/bootloader.sh
