[TEST][FLAKY] test_op_grad_level2.py::test_conv2d_grad.py #7010
Comments
cc @altanh @jroesch @antinucleon, it would be great if you could take a look |
I suspect some recent PR might have broken something, this is the error: Doesn't seem to me like a numerical issue with the gradient |
https://ci.tlcpack.ai/job/tvm/job/main/250/execution/node/233/log/ is another related failure |
I can't reproduce this locally on the current main branch |
Per discussion with @tkonolige, we're pretty sure the abort is being caused by |
The error message is:
PyTorch loading:
ONNX loading:
|
Relevant issue on onnxruntime GitHub: microsoft/onnxruntime#5369 |
It would be great to propose a fix, given that this flaky error happens quite frequently. Is this related to the fact that we are using PyTorch for gradient testing? Ideally we should move that to a separate test suite. By default, we should use numerical gradient checking that is independent from other frameworks |
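For reference, here is a minimal sketch of framework-independent numerical gradient checking using central differences, built only on numpy. The helper names (`numerical_grad`, `check_grads`) and the way the op under test is wrapped are assumptions for illustration, not TVM's actual testing API:

```python
# Minimal sketch of framework-independent numerical gradient checking.
# `f` stands for a scalar-valued wrapper around the op under test and
# `analytic_grads` for the gradients produced by the autodiff pass being
# verified; neither is a real TVM API, they are placeholders.
import numpy as np

def numerical_grad(f, inputs, eps=1e-4):
    """Central-difference gradients of scalar-valued f w.r.t. each input array."""
    grads = []
    for x in inputs:
        g = np.zeros_like(x)
        it = np.nditer(x, flags=["multi_index"])
        while not it.finished:
            idx = it.multi_index
            orig = x[idx]
            x[idx] = orig + eps
            plus = f(*inputs)
            x[idx] = orig - eps
            minus = f(*inputs)
            x[idx] = orig  # restore the perturbed element
            g[idx] = (plus - minus) / (2 * eps)
            it.iternext()
        grads.append(g)
    return grads

def check_grads(f, inputs, analytic_grads, rtol=1e-3, atol=1e-3):
    for num, ana in zip(numerical_grad(f, inputs), analytic_grads):
        np.testing.assert_allclose(ana, num, rtol=rtol, atol=atol)
```

For conv2d, `f` would wrap the compiled forward op and reduce its output to a scalar (e.g. a sum), with `analytic_grads` coming from the gradient pass under test; the tolerances would likely need tuning per op and dtype. |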
I agree. I think first we should address #7017 to confirm it's the same failure that is happening on CI, and then look into removing the dependencies. If we can't remove the dependency (like in the case of |
@tkonolige found that |
We should keep this issue but rename it to track the libomp dependency conflict, I think (or open a new one), since it might arise again in the future |
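As a diagnostic for the suspected libomp conflict, here is a hedged sketch (Linux only) that scans `/proc/self/maps` after importing both frameworks to see how many distinct OpenMP runtimes end up mapped into the process. It is an illustrative heuristic, not part of TVM or the CI scripts:

```python
# Sketch: list the OpenMP runtimes mapped into the current process (Linux only).
# Importing torch and onnxruntime together is the suspected trigger; this scan
# of /proc/self/maps is only a diagnostic heuristic.

def loaded_omp_libs():
    libs = set()
    with open("/proc/self/maps") as maps:
        for line in maps:
            fields = line.split()
            path = fields[-1] if fields else ""
            if any(name in path for name in ("libgomp", "libiomp", "libomp")):
                libs.add(path)
    return sorted(libs)

if __name__ == "__main__":
    import torch        # loads one OpenMP runtime
    import onnxruntime  # may load a second, conflicting copy
    print("\n".join(loaded_omp_libs()) or "no OpenMP runtime mapped")
```

Seeing more than one distinct OpenMP runtime in the output would be consistent with the abort described above; setting `KMP_DUPLICATE_LIB_OK=TRUE` is a commonly cited workaround, but it only suppresses the duplicate-runtime check rather than resolving the conflict. |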
Closing for now, as the original flaky issue is fixed.
https://ci.tlcpack.ai/job/tvm/job/main/245/execution/node/218/log/