transformations: (onnx) fix onnx.Relu lowering #2435
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@ Coverage Diff @@
##             main    #2435   +/- ##
=====================================
  Coverage   89.85%   89.85%
=====================================
  Files         351      351
  Lines       43757    43794   +37
  Branches     6523     6530    +7
=====================================
+ Hits        39317    39351   +34
- Misses       3483     3485    +2
- Partials      957      958    +1

☔ View full report in Codecov by Sentry.
// CHECK-NEXT: %2 = arith.maximumf %0, %res_relu_1 : f64
// CHECK-NEXT: linalg.yield %2 : f64
// CHECK-NEXT: } -> tensor<3x4xf64>
%t2 = "test.op"() : () -> (tensor<3x4xf32>)
Surely we should support all float types? Can you get the arg type from the parameter type in the rewrite pattern?
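The reviewer's suggestion can be illustrated with a simplified sketch (this is not the real xDSL rewrite-pattern API): instead of hardcoding f64 in the lowering, read the element type off the Relu operand's tensor type so that f32 inputs lower correctly too. The string-parsing helper below is purely hypothetical.

```python
# Hypothetical sketch: derive the element type from the operand's tensor
# type string (e.g. "tensor<3x4xf32>") rather than hardcoding f64.
def relu_lowering_element_type(operand_type: str) -> str:
    # The element type is the last 'x'-separated field inside the brackets.
    inner = operand_type[operand_type.index("<") + 1 : operand_type.rindex(">")]
    element_type = inner.split("x")[-1]
    # The arith.maximumf in the lowered linalg body must use this type.
    return element_type

print(relu_lowering_element_type("tensor<3x4xf32>"))  # f32
print(relu_lowering_element_type("tensor<3x4xf64>"))  # f64
```

In the actual rewrite pattern, the same information is available directly from the operand's type object, so no string parsing is needed.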
I would say let's duplicate this test, to make sure that we handle both f32 and f64?
@kayode-gif, when you have a moment, could you please merge this? If you feel like it, minimising the diff would be nice, but not necessary.

should be all good now
@@ -31,10 +43,10 @@
%res_gemm = "onnx.Gemm"(%t5, %t6, %t7) {onnx_node_name = "/Gemm", "alpha" = 1.000000e+00 : f32, "beta" = 1.000000e+00 : f32, "transA" = 0 : si64, "transB" = 1 : si64} : (tensor<1x320xf32>, tensor<50x320xf32>, tensor<50xf32>) -> tensor<1x50xf32>

// CHECK-NEXT: %t5, %t6, %t7 = "test.op"() : () -> (tensor<1x320xf32>, tensor<50x320xf32>, tensor<50xf32>)
// CHECK-NEXT: %3 = tensor.empty() : tensor<320x50xf32>
// CHECK-NEXT: %4 = linalg.transpose ins(%t6:tensor<50x320xf32>) outs(%3:tensor<320x50xf32>) permutation = [1, 0]
// CHECK-NEXT: %6 = tensor.empty() : tensor<320x50xf32>
In order to avoid these spurious changes, I'd recommend using regex patterns:
// CHECK-NEXT: %6 = tensor.empty() : tensor<320x50xf32>
// CHECK-NEXT: %{{.*}} = tensor.empty() : tensor<320x50xf32>
You can also pip install filecheckize and use it with --mlir-anonymize to convert MLIR IR to FileCheck tests, something like this:
xdsl-opt file.mlir -p my-awesome-pass | filecheckize --mlir-anonymize
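A rough illustration of what this anonymization amounts to (not filecheckize's actual implementation): concrete SSA value names such as %6 or %t6 are replaced with FileCheck wildcards, so renumbering caused by unrelated changes no longer breaks the test.

```python
import re

# Simplified sketch: swap concrete SSA value names for FileCheck wildcards.
def anonymize(line: str) -> str:
    return re.sub(r"%[A-Za-z0-9_]+", "%{{.*}}", line)

print(anonymize("// CHECK-NEXT: %6 = tensor.empty() : tensor<320x50xf32>"))
# // CHECK-NEXT: %{{.*}} = tensor.empty() : tensor<320x50xf32>
```

The real tool also handles captured names and other MLIR-specific details; this sketch only shows the basic idea.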
oops i merged it, ill keep these in mind for future, thank you
This PR fixes the issue where the affine map dimensions did not match the rank of the input operand.
Resolves #2432