
transformations: (onnx) fix lower onnx.Relu lowering #2435

Merged (9 commits, May 1, 2024)

Conversation

kayode-gif (Collaborator)

This PR fixes an issue in the onnx.Relu lowering where the affine map's dimension count did not match the rank of the input operand.
Resolves #2432
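For context, here is a minimal sketch (plain Python, not the actual xDSL API) of the invariant the fix restores: the identity indexing map used by the linalg lowering must have exactly as many dimensions as the input tensor's rank.

```python
def identity_affine_map(rank: int) -> str:
    """Build the textual form of an identity affine map whose dimension
    count matches the operand's rank, e.g. rank 2 gives
    affine_map<(d0, d1) -> (d0, d1)>."""
    dims = ", ".join(f"d{i}" for i in range(rank))
    return f"affine_map<({dims}) -> ({dims})>"

# A rank-2 input such as tensor<3x4xf64> needs a 2-D identity map:
print(identity_affine_map(2))
```

The bug report was that the map's dimension list was not derived from the operand's rank, so maps like this one went out of sync with the input type.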

@kayode-gif kayode-gif self-assigned this Apr 7, 2024
@kayode-gif kayode-gif changed the title from "fix onnx relu lowering" to "transformations: (onnx) fix lower onnx.Relu lowering" Apr 7, 2024
@kayode-gif kayode-gif requested a review from superlopuh April 7, 2024 19:46

codecov bot commented Apr 7, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 89.85%. Comparing base (8e589ab) to head (7216b53).
Report is 1 commit behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #2435   +/-   ##
=======================================
  Coverage   89.85%   89.85%           
=======================================
  Files         351      351           
  Lines       43757    43794   +37     
  Branches     6523     6530    +7     
=======================================
+ Hits        39317    39351   +34     
- Misses       3483     3485    +2     
- Partials      957      958    +1     

☔ View full report in Codecov by Sentry.

@kayode-gif kayode-gif added the "transformations" label (Changes or adds a transformation) Apr 7, 2024
// CHECK-NEXT: %2 = arith.maximumf %0, %res_relu_1 : f64
// CHECK-NEXT: linalg.yield %2 : f64
// CHECK-NEXT: } -> tensor<3x4xf64>
%t2 = "test.op"() : () -> (tensor<3x4xf32>)
Member

Surely we should support all float types? Can you get the arg type from the parameter type in the rewrite pattern?
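The reviewer's suggestion might look roughly like this (a toy model in plain Python; the class and attribute names are illustrative, not the real xDSL API): read the element type off the operand so the generated scalar ops use f32 for f32 inputs and f64 for f64 inputs, instead of hard-coding f64.

```python
from dataclasses import dataclass


@dataclass
class TensorType:
    """Toy stand-in for a ranked tensor type, e.g. tensor<3x4xf32>."""
    shape: tuple[int, ...]
    element_type: str  # e.g. "f32" or "f64"


def scalar_element_type(operand: TensorType) -> str:
    """Derive the scalar type for the zero constant and arith.maximumf
    from the operand's type rather than assuming f64."""
    return operand.element_type


# Works uniformly for any float element type:
print(scalar_element_type(TensorType((3, 4), "f32")))
```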

Member

I would say let's duplicate this test, to make sure that we handle both f32 and f64?

@superlopuh
Member

@kayode-gif, when you have a moment, could you please merge this? If you feel like it, minimising the diff would be nice, but not necessary

@kayode-gif
Collaborator Author

> @kayode-gif, when you have a moment, could you please merge this? If you feel like it, minimising the diff would be nice, but not necessary

Should be all good now.

@kayode-gif kayode-gif merged commit b4e21c8 into main May 1, 2024
10 checks passed
@kayode-gif kayode-gif deleted the oke/fix-relu-lowering branch May 1, 2024 12:48
@@ -31,10 +43,10 @@
%res_gemm= "onnx.Gemm"(%t5, %t6, %t7) {onnx_node_name = "/Gemm", "alpha" = 1.000000e+00 : f32, "beta" = 1.000000e+00 : f32, "transA" = 0 : si64, "transB" = 1 : si64}: (tensor<1x320xf32>, tensor<50x320xf32>, tensor<50xf32>) -> tensor<1x50xf32>

// CHECK-NEXT: %t5, %t6, %t7 = "test.op"() : () -> (tensor<1x320xf32>, tensor<50x320xf32>, tensor<50xf32>)
// CHECK-NEXT: %3 = tensor.empty() : tensor<320x50xf32>
// CHECK-NEXT: %4 = linalg.transpose ins(%t6:tensor<50x320xf32>) outs(%3:tensor<320x50xf32>) permutation = [1, 0]
// CHECK-NEXT: %6 = tensor.empty() : tensor<320x50xf32>
Member

In order to avoid these spurious changes, I'd recommend using regex patterns:

Suggested change
// CHECK-NEXT: %6 = tensor.empty() : tensor<320x50xf32>
// CHECK-NEXT: %{{.*}} = tensor.empty() : tensor<320x50xf32>

You can also pip install filecheckize and use it with --mlir-anonymize to convert MLIR IR into FileCheck tests, something like this:
xdsl-opt file.mlir -p my-awesome-pass | filecheckize --mlir-anonymize

Collaborator Author

Oops, I merged it already; I'll keep these in mind for the future, thank you.
