[Relay][Frontend][Onnx] Compare against onnxruntime more consistently during testing #7300
Conversation
@masahi @mbrookhart can you guys let me know what you think of this PR?
```python
# old: uniform samples cast to int32 collapse to all zeros (see the comment below)
input_data = np.random.uniform(size=input_size).astype("int32")
verify_with_ort_with_inputs(onnx_model, [input_data])
# new: keep float32 inputs and apply softmax before comparing outputs
input_data = np.random.uniform(size=input_size).astype("float32")
verify_with_ort_with_inputs(onnx_model, [input_data], apply_softmax=True)
```
This is a fun one that I wanted to point out. Previously we were casting inputs to int32; however, because they were generated with np.random.uniform, they were all being truncated to 0. Using genuinely non-zero inputs caused minor mismatches in the outputs due to numerical instability, but applying softmax (which torchvision models don't use by default) reduces the numerical difference to well below our test threshold.
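To make this concrete, here is a small self-contained snippet illustrating both effects; the shapes and toy logits are made up for demonstration and are not taken from the test suite:

```python
import numpy as np

# np.random.uniform draws floats in [0, 1), so casting to int32 truncates
# every value to 0 -- the old "random" integer inputs were all zeros.
input_size = (1, 3, 224, 224)  # illustrative shape
assert np.all(np.random.uniform(size=input_size).astype("int32") == 0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Two sets of logits that differ by a small numerical error: after softmax
# the discrepancy shrinks by roughly the magnitude of the probabilities,
# landing well below a typical 1e-5 test tolerance.
a = np.random.uniform(size=(1, 1000)).astype("float32")
b = a + np.float32(1e-4) * np.random.randn(1, 1000).astype("float32")
print(np.abs(a - b).max())                    # on the order of 1e-4
print(np.abs(softmax(a) - softmax(b)).max())  # orders of magnitude smaller
```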
thanks for the heroic effort 👍
… during testing (apache#7300) Co-authored-by: Josh Fromm <jwfromm@uw.edu>
I noticed that many of our onnx frontend tests compare against results produced by numpy rather than onnx itself. This is somewhat counterproductive because it makes assumptions about what onnx should do rather than checking what it actually does. I decided to go through all the tests in test_forward.py and make them use the new verify_with_ort helper function. This makes our testing suite more consistent and aligns the tests more closely with their intention. In the process of making this conversion, I discovered many bugs in the importer that are also fixed in this PR. Although this PR might be a little painful to review due to its scope, I think the result is an overall much cleaner and easier-to-maintain test suite.
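For context, below is a rough sketch of what an onnxruntime-backed checker in the spirit of verify_with_ort can look like. It is an approximation of the idea described above, not the exact helper added to test_forward.py in this PR; the function name, default tolerances, and graph-executor calls here are assumptions for illustration.

```python
# Rough sketch only -- the real helper in the PR handles many more options
# (targets, dtypes, multiple outputs). Names and defaults are assumptions.
import numpy as np
import onnxruntime
import tvm
import tvm.testing
from tvm import relay
from tvm.contrib import graph_executor

def verify_with_ort_sketch(onnx_model, input_data, rtol=1e-5, atol=1e-5,
                           apply_softmax=False):
    # Reference output: run the model through onnxruntime itself, rather than
    # re-deriving the expected result with numpy.
    sess = onnxruntime.InferenceSession(onnx_model.SerializeToString())
    names = [i.name for i in sess.get_inputs()]
    ort_out = sess.run(None, dict(zip(names, input_data)))[0]

    # TVM output: import through the Relay ONNX frontend, build, and run.
    shape_dict = {n: d.shape for n, d in zip(names, input_data)}
    mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm", params=params)
    dev = tvm.cpu(0)
    m = graph_executor.GraphModule(lib["default"](dev))
    for n, d in zip(names, input_data):
        m.set_input(n, d)
    m.run()
    tvm_out = m.get_output(0).numpy()

    if apply_softmax:
        # Optional softmax to damp small numerical differences in raw logits.
        ort_out = np.exp(ort_out) / np.exp(ort_out).sum(axis=-1, keepdims=True)
        tvm_out = np.exp(tvm_out) / np.exp(tvm_out).sum(axis=-1, keepdims=True)

    # Compare the TVM result against the onnxruntime reference.
    tvm.testing.assert_allclose(ort_out, tvm_out, rtol=rtol, atol=atol)
```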