Add keras example API for an autoencoder model #834
Conversation
This looks really awesome, thank you for contributing this! I did a quick pass, and everything looks good on first glance. I will do a more thorough review in the next day or two, and I'll upload the model to the cortex examples bucket and try running the API. I'll keep you posted on how it goes.
Hi Robert, great PR. I just did a pass and it looks amazing. I have a couple of nits and a proposal. Let me know what you think.
```python
# encode image
image_enc = base64.b64encode(byte_im).decode("utf-8")
image_dump = json.dumps({"img_pred": image_enc})
```
I don't think you need to do a `json.dumps` here. You should be able to return `{"img_pred": image_enc}`. Then in https://github.com/cortexlabs/cortex/pull/834/files#diff-4e5041cf7620e33613b081c648188d95R66, you wouldn't need to do a `json.loads`.
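To illustrate the suggestion, here is a sketch with stand-in values (`byte_im` below is fake data, not the PR's actual image bytes): both forms carry the same payload, but returning the dict directly spares the client a `json.loads` call.

```python
import base64
import json

# stand-in for the real image bytes produced by the model (illustrative)
byte_im = b"\x89PNG fake image bytes"

image_enc = base64.b64encode(byte_im).decode("utf-8")

# current approach in the PR: serialize to a JSON string,
# which forces the client to call json.loads on the response
image_dump = json.dumps({"img_pred": image_enc})

# suggested approach: return the dict directly and let the
# serving layer handle JSON serialization
response = {"img_pred": image_enc}

# both forms carry exactly the same data
assert json.loads(image_dump) == response
```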
Hmm, I can try this.
The model has been uploaded to the Cortex examples folder. I ran it and it works really well! I made a few recommendations. Let me know what you think.
```md
## Sample Prediction

### Dependencies
```
The dependencies section can be removed, since only `curl`, `sed`, and `base64` are being used, right?
This one slipped through. Thanks! It's going out.
```bash
curl "${ENDPOINT}" -X POST -H "Content-Type: application/json" -d '{"url":"'${IMAGE_URL}'"}' |
sed 's/"//g' |
base64 -do prediction.png
```
Although the `-o` flag is an option in OSX, I don't think it is an option in ubuntu's base64 binary. To be on the safe side, I recommend you replace `base64 -do predictor.png` with `base64 -d >> predictor.png`.
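The flag mismatch could also be sidestepped by decoding client-side in Python instead of shelling out to `base64`. A minimal sketch (the response contents and output file name here are illustrative, not the PR's actual values):

```python
import base64
import json

# illustrative stand-in for the JSON body returned by the endpoint
response_body = json.dumps(
    {"img_pred": base64.b64encode(b"fake png bytes").decode("utf-8")}
)

# decode the base64 payload and write the image to disk;
# this behaves the same on macOS and Linux, unlike base64's -o flag
png_bytes = base64.b64decode(json.loads(response_body)["img_pred"])
with open("prediction.png", "wb") as f:
    f.write(png_bytes)
```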
Hmm, I agree. I'll do the appropriate modification.
This PR contains an example project built with Keras. The model that I have implemented is an autoencoder. In this use case, the autoencoder is used to filter out the noise from text documents. The `cortex.yaml` config file uses the CPU for inference. Here's a before-and-after situation:
The model has to be uploaded to the `s3://cortex-examples/keras/document-denoiser` directory. Its name has to be `model.h5`. The model can be found here (temporarily).

For the loading function of the model, I had to resort to importing it this way

instead of

because I encountered the error specified in #831. This workaround is good for both types of hardware deployments: CPU or GPU.
checklist:
- `make test` and `make lint`
- `summary.md` (view in gitbook after merging)