Added SageMaker batch transform support. #317
Conversation
Job PR-317/1 is complete.
Job PR-317/2 is complete.
Job PR-317/3 is complete.
Codecov Report
@@ Coverage Diff @@
## master #317 +/- ##
===========================================
+ Coverage 40.32% 80.22% +39.89%
===========================================
Files 142 144 +2
Lines 8024 8183 +159
===========================================
+ Hits 3236 6565 +3329
+ Misses 4788 1618 -3170
src/gluonts/model/forecast.py
Outdated
output_types: Set[OutputType]
quantiles: List[str]  # FIXME: validate list elements
num_eval_samples: int = pydantic.Schema(100, alias="num_samples")
output_types: Set[OutputType] = {"qunatiles", "mean"}
Fix spelling of qunatiles to quantiles.
Oh, thanks!
Job PR-317/4 is complete.
Looks good overall.
@@ -0,0 +1,169 @@
# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Can we maybe put this in another file and just import the stuff we want to expose here? This has been our convention so far regarding __init__.py.
import numpy as np

def jsonify_floats(json_object):
We could subclass json.JSONEncoder -- but this is fine too.
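For reference, the JSONEncoder subclass the reviewer alludes to could look like the sketch below. The class name and the choice of representing non-finite floats as the strings "NaN"/"Infinity" are assumptions for illustration, not the PR's actual implementation.

```python
import json
import math


class FloatSafeEncoder(json.JSONEncoder):
    """Hypothetical sketch: replace non-finite floats before encoding,
    so the output stays strict JSON (no bare NaN/Infinity literals)."""

    def iterencode(self, o, _one_shot=False):
        # Floats are natively serializable, so overriding default() would
        # never fire for them; pre-processing the object tree does work.
        return super().iterencode(self._sanitize(o), _one_shot)

    def _sanitize(self, obj):
        if isinstance(obj, float):
            if math.isnan(obj):
                return "NaN"
            if math.isinf(obj):
                return "Infinity" if obj > 0 else "-Infinity"
            return obj
        if isinstance(obj, dict):
            return {k: self._sanitize(v) for k, v in obj.items()}
        if isinstance(obj, list):
            return [self._sanitize(v) for v in obj]
        return obj


print(json.dumps({"a": float("nan"), "b": 1.5}, cls=FloatSafeEncoder))
# → {"a": "NaN", "b": 1.5}
```

A recursive helper like jsonify_floats achieves the same effect without subclassing, which is presumably why the reviewer considered either approach fine.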
forecaster_type: Optional[Type[Union[Estimator, Predictor]]],
settings: Settings,
) -> Application:
    check_gpu_support()
I guess for supporting GPUs properly, we need a bit more logic for handling the number of processes.
Also, the server should probably be started with the following environment variables, to ensure that each server has one CPU and MXNet does not try to parallelize a single request:
OMP_NUM_THREADS=1
MXNET_ENGINE_TYPE=NaiveEngine
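The reviewer's suggested setup could be sketched as below; the commented-out serve command is an assumption, since the PR's actual entrypoint is not shown here.

```shell
# Pin each worker to a single CPU thread and disable MXNet's
# parallel execution engine, per the review suggestion.
export OMP_NUM_THREADS=1
export MXNET_ENGINE_TYPE=NaiveEngine
# python -m gluonts.shell  # hypothetical server start command
```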
Thanks. These changes were just moved around, but we should still do what you’ve suggested.
Job PR-317/5 is complete.
Force-pushed from 151fadb to 59271e0.
Issue #, if available:
Description of changes:
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
TODO:
To use batch inference, one has to set the environment variable INFERENCE_CONFIG to the right JSON string.
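For illustration only, setting the variable could look like this; the JSON keys shown are assumptions, since the PR description does not spell out the actual config schema.

```shell
# Hypothetical config: the keys below are illustrative assumptions,
# not the schema this PR actually parses.
export INFERENCE_CONFIG='{"num_eval_samples": 100, "quantiles": ["0.1", "0.5", "0.9"]}'
```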