Add possibility to export environment variables with .env file #1147
Does this expose the environment variables to the tensorflow-serving image as well? I'm trying to read directly from S3 from tensorflow-serving and it isn't seeing my credentials.
@lminer only the predictor container picks up the environment variables from the `.env` file; they are not exported to the tensorflow-serving container.

Also, your API will inherit the cluster's AWS credentials (provided it's an AWS cluster; read this for more), so you don't have to export them again, unless these credentials are for S3 buckets that fall outside the API credentials' scope. Can you tell us more about this use case? I see that you want to read from an S3 bucket. If it's a model you want to download, Cortex already handles that awesomely. More on that here. Is this what you want? If not, can you tell us about your case?
@RobertLucian because of the gRPC limit on the amount of data that I can send to tensorflow-serving, I thought that instead I would simply send a link to an object in an S3 bucket and then read and write to S3 directly within the model itself. However, in order to do this, I need some way to get the AWS credentials to the model, and it appears as if this can only be done via environment variables.
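(For reference, the pattern described above looks roughly like the sketch below: the request carries an S3 URI instead of the tensor data, and the model code reads the object itself. The bucket and key are hypothetical. TensorFlow's built-in S3 filesystem, like boto3, takes its credentials from `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY`, which is why they must be visible inside the container doing the read.)

```python
import tensorflow as tf

def read_from_s3(s3_uri):
    # TensorFlow's S3 filesystem support (built into TF through 2.5) picks up
    # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment.
    with tf.io.gfile.GFile(s3_uri, "rb") as f:
        return f.read()

# Hypothetical object reference sent in the request instead of raw data:
data = read_from_s3("s3://my-bucket/inputs/sample.bin")
```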
@lminer I see. That would not be an optimal architecture. We have increased the priority of #1740 and it's the next thing for me to work on. Since this seems to be urgent for you, we will probably make a patch release or just create a customised image that works for you. Would that be acceptable?
That would be great. Thank you!
@lminer I'd just like to add that the AWS credential environment variables are populated in the TF serving container. In addition, you can add custom env vars in both the predictor container and the TF serving container by specifying:

```yaml
- name: iris-classifier
  kind: RealtimeAPI
  predictor:
    type: tensorflow
    path: predictor.py
    models:
      path: s3://cortex-examples/tensorflow/iris-classifier/nn/
    env:
      MY_VAR: my-value
```

That said, I agree with @RobertLucian that the best approach will be to address #1740.
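(To illustrate, a custom variable declared this way is then available via a standard environment lookup in the predictor code. A minimal sketch, following Cortex's TensorFlow predictor interface, with `MY_VAR` taken from the config above:)

```python
import os

class TensorFlowPredictor:
    def __init__(self, tensorflow_client, config):
        self.client = tensorflow_client
        # Populated from the `env` section of the API configuration.
        self.my_var = os.environ["MY_VAR"]

    def predict(self, payload):
        return self.client.predict(payload)
```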
Ah. Does this hold if you use a `.env` file too? I tried it with that and tensorflow-serving wasn't seeing my AWS creds.
@lminer it doesn't hold for the `.env` file; those variables are only exported to the predictor container.
@deliahu Are you sure that the env vars from the yaml file are propagated to the tensorflow-serving image? I'm still getting the error of AWS creds not being properly set. Moreover, when I attach to the tensorflow-serving docker container and run `env`, the variables aren't there.

Here's my cortex.yaml:

```yaml
- name: pasta
  kind: RealtimeAPI
  predictor:
    type: tensorflow
    path: serving/cortex_server.py
    models:
      path: ../pasta_bucket/
      signature_key: serving_default
    image: quay.io/cortexlabs/tensorflow-predictor:0.25.0
    tensorflow_serving_image: quay.io/robertlucian/cortex-tensorflow-serving-gpu-tf2.4:0.25.0
    env:
      AWS_ACCESS_KEY_ID: foo
      AWS_SECRET_ACCESS_KEY: bar
```

Here's the error:
@lminer that is weird, it seemed to work for me when I tried last night. What command are you using to connect to the container? I did something like exec-ing into the tensorflow-serving container, and then when I ran `env`, the environment variables were there.
I used something like the following:

```bash
docker exec -it elegant_hertz /bin/bash
```

The image I'm using right now is non-standard, so maybe that's it.
Oh I see, I think it's because I checked the container when it was running in the cluster, whereas you are just checking locally. If you deploy your API to the cluster, the env vars should be there (they get populated at runtime and are not baked into the image). |
@deliahu It would be nice, if possible, to have the env vars show up locally as well; it makes testing a lot easier. I have found that the env vars show up locally in the API image, just not the tensorflow-serving image.
@lminer yes, that makes sense. However, we have actually removed support for running Cortex locally in our latest release (v0.26). We've found that the best way to develop/test a predictor implementation locally is to import it in a separate Python file and call the `predict()` function directly.
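(Something along these lines; the module path, stub client, and payload below are hypothetical:)

```python
# local_test.py: hypothetical harness that imports the predictor class and
# calls predict() directly, without a Cortex runtime.
import os

# No Cortex runtime populates these locally, so set them yourself:
os.environ["AWS_ACCESS_KEY_ID"] = "foo"
os.environ["AWS_SECRET_ACCESS_KEY"] = "bar"

from serving.cortex_server import TensorFlowPredictor  # hypothetical module path

class StubTensorFlowClient:
    # Stand-in for the client that Cortex normally injects.
    def predict(self, model_input):
        return {"prediction": "stub"}

predictor = TensorFlowPredictor(StubTensorFlowClient(), config={})
print(predictor.predict({"key": "inputs/sample.bin"}))
```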
Description
Add support for exporting environment variables from an `.env` file placed in the root directory of a Cortex project.

Motivation

In case the user doesn't want to export environment variables using the `predictor.env` field in `cortex.yaml`. A reason for that could be to keep the `cortex.yaml` deployment clean.
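For illustration, the requested behavior might look something like the sketch below; the parsing logic is an assumption about how the feature could work, not Cortex's actual implementation:

```python
import os

def load_dotenv(path=".env"):
    # Hypothetical loader: read KEY=VALUE pairs from a .env file in the
    # project root and export them into the process environment.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()

# Example .env contents:
#   AWS_ACCESS_KEY_ID=foo
#   AWS_SECRET_ACCESS_KEY=bar
load_dotenv()
```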