Add an inference script providing both accuracy and benchmark results for the original wide_n_deep example #13895
Conversation
help='the model prefix')

# Related to feature engineering, please see preprocess in data.py
ADULT = {
These parts are shared with training. Could we put them in some util/model file?
move to config.py
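For illustration, a minimal sketch of what the shared config.py might look like, assuming the ADULT dict visible in the diff above; apart from 'hidden_units', the keys and values below are illustrative placeholders, not the actual contents:

# config.py -- hypothetical shared configuration used by both train.py and
# inference.py; only 'hidden_units' is taken from the visible diff, the other
# entries are illustrative placeholders.
ADULT = {
    'train': 'adult.data',        # placeholder data file name
    'test': 'adult.test',         # placeholder data file name
    'num_linear_features': 3000,  # placeholder feature count
    'hidden_units': [8, 50, 100],
}

Both scripts could then share the same feature-engineering constants via "from config import ADULT".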
help='loading the params of the corresponding training epoch.')
parser.add_argument('--batch-size', type=int, default=100,
help='number of examples per batch')
parser.add_argument('--accuracy', action='store_true', default=False,
I would like to have accuracy=True by default. Use --benchmark to do performance benchmarking, as we do in other scripts.
thanks for the comments, modified accordingly.
parser.add_argument('--verbose', action='store_true', default=False,
help='accuracy for each batch will be logged if set')
parser.add_argument('--cuda', action='store_true', default=False,
help='Train on GPU with CUDA')
Train? BTW, we use --gpu in other examples.
thanks, now using '--gpu'; the same change is also applied to train.py.
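For reference, a minimal sketch of the renamed flag and the context selection it would drive, assuming MXNet's standard mx.cpu()/mx.gpu() contexts; the exact wiring in inference.py may differ:

import argparse
import mxnet as mx

parser = argparse.ArgumentParser(description='wide & deep inference (sketch)')
parser.add_argument('--gpu', action='store_true', default=False,
                    help='run inference on GPU with CUDA')
args = parser.parse_args()

# Select the device context from the flag: first GPU if requested, CPU otherwise.
ctx = mx.gpu(0) if args.gpu else mx.cpu()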
@mxnet-label-bot add [pr-awaiting-review]
@@ -0,0 +1,121 @@
# Licensed to the Apache Software Foundation (ASF) under one |
Can we add a python3 shebang?
Hi Larroy,
Thanks for the review. I think this script can work with Python 2, so it might be fine without the python3 shebang?
Thanks.
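For reference, the shebang the reviewer is asking about would be a single line at the top of the file; the script can still be invoked explicitly with python2 if needed:

#!/usr/bin/env python3
# Alternatively, '#!/usr/bin/env python' picks up whichever interpreter is on PATH.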
'hidden_units': [8, 50, 100],
}

if __name__ == '__main__':
could we wrap it into a main function as:
if __name__ == '__main__':
    sys.exit(main())
Hi Larroy,
It seems sys is only supported by Python 3, which may introduce incompatibility with Python 2; meanwhile, I just followed the "structure" of the existing train.py to implement inference.py. Thanks.
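For reference, a minimal sketch of the main()-wrapper pattern suggested above; note that sys is part of the standard library in both Python 2 and Python 3, so the pattern itself is version-agnostic. The body of main() here is a placeholder, not the actual inference code:

import sys


def main():
    # Placeholder for the real work: argument parsing, data loading, inference.
    # Return 0 on success, non-zero on failure.
    print('running inference ...')
    return 0


if __name__ == '__main__':
    sys.exit(main())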
@szha, @TaoLv, @pengzhao-intel, @larroy, may I have your approval or comments on this PR?
help='number of examples per batch')
parser.add_argument('--accuracy', action='store_true', default=True,
help='run the script for inference accuracy, not set for benchmark.')
parser.add_argument('--benchmark', action='store_true', default=False,
How about only having --benchmark here and removing --accuracy? If --benchmark is given on the command line, it will do benchmarking with dummy data; otherwise the script will run with real data and print the final accuracy.
thanks for the review; agreed and revised accordingly.
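A minimal sketch of the resulting command-line logic, assuming a single --benchmark switch as agreed above; run_benchmark and run_accuracy are hypothetical placeholders, not functions from the actual script:

import argparse


def run_benchmark(args):
    # Hypothetical placeholder: time forward passes on dummy data.
    print('benchmarking with dummy data, batch size %d' % args.batch_size)


def run_accuracy(args):
    # Hypothetical placeholder: evaluate accuracy on the real test set.
    print('evaluating accuracy, batch size %d' % args.batch_size)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='wide & deep inference (sketch)')
    parser.add_argument('--batch-size', type=int, default=100,
                        help='number of examples per batch')
    parser.add_argument('--benchmark', action='store_true', default=False,
                        help='benchmark with dummy data; otherwise report accuracy on real data')
    args = parser.parse_args()
    if args.benchmark:
        run_benchmark(args)
    else:
        run_accuracy(args)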
@ZiyueHuang Could you help to review the change?
Please execute pylint on these files using ci/other/pylintrc in the incubator-mxnet folder. Something like this:
pylint --rcfile=ci/other/pylintrc --ignore-patterns=".*\.so$$,.*\.dll$$,.*\.dylib$$" example/sparse/wide_deep/*.py
data_iter = iter(eval_data)
if benchmark:
    logging.info('Inference benchmark started ...')
    nbatch = 0
nit: nbatch = 0 can be moved before the if...else
Thanks, revised accordingly
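For illustration, a minimal sketch of the restructuring from the nit above, assuming an eval_data iterator like the one in the diff; the loop bodies are placeholders for the actual forward passes:

import logging


def run(eval_data, benchmark=False):
    # Initialize the batch counter once, before the branch, as suggested.
    nbatch = 0
    data_iter = iter(eval_data)
    if benchmark:
        logging.info('Inference benchmark started ...')
        for batch in data_iter:
            nbatch += 1  # placeholder: timed forward pass on the batch
    else:
        for batch in data_iter:
            nbatch += 1  # placeholder: forward pass plus accuracy-metric update
    return nbatch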
LGTM
@juliusshufan please resolve other reviewers' comments.
@juliusshufan Thank you. LGTM now.
@vandanavk @larroy Please confirm if your comments are addressed.
@vandanavk @larroy Could you please check if your comments are resolved?
LGTM
Seems there are no other comments. Merging now.
…for original wide_n_deep example (apache#13895)
* Add an inference script that can provide both accuracy and benchmark results
* minor changes
* minor fix to keep a similar coding style as other examples
* fix typo
* remove code redundancy and other minor changes
* Addressing review comments and minor pylint fix
* remove parameter 'accuracy' to make the logic simpler
Description
Add a script for inference based on the saved model and parameter files produced during training.
The script can provide either accuracy or benchmark results, depending on the specified parameters.
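A hypothetical invocation, assuming the flag names discussed in the review above; exact defaults may differ from the committed script:

# accuracy on the real test set (default mode)
python inference.py --batch-size 100

# performance benchmarking with dummy data, on GPU
python inference.py --benchmark --gpu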
@TaoLv @pengzhao-intel
Checklist
Essentials
Please feel free to remove inapplicable items for your PR.
Changes
A new script file, example/sparse/wide_deep/inference.py.
The README is also slightly revised.
Comments