dynamic_reshape and dynamic_broadcast_in_dim can't be lowered to an upstream MLIR dialect #107
-
If you are coming from the old torch-mlir TorchScript path, those ops are genuinely hard to lower: that frontend makes insufficient guarantees about dimension relationships, which forces the backend to handle many combinations defensively. That matters because it is almost always essential to fuse these ops; when they don't fuse, performance suffers badly. That path is no longer being developed.

Current work is focused on the FX path, which has much stronger semantics suitable for dynamic-shape compilers: https://github.com/llvm/torch-mlir/blob/main/docs/roadmap.md#current-api-paths Compilers using that path assume the stronger semantics, particularly around the program forms that yield these ops. I don't think stablehlo ever defined its dynamic shape system in a similar fashion, so I'm not sure how to write good lowerings against the existing stablehlo definitions, but I'd advise finding out whether you can assume the stronger constraints in practice. If you can, such ops will most likely fuse trivially or be directly representable in linalg.
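To make "trivially fuse" concrete: under an FX-style guarantee that the broadcast is a pure expansion of a known operand, the dynamic broadcast disappears into the indexing maps of the consuming linalg op. A minimal sketch, assuming a rank-1 `%x` broadcast over the rows of a dynamically shaped `%y` (names and the add are illustrative, not what torch-mlir emits verbatim):

```mlir
#bcast = affine_map<(d0, d1) -> (d1)>
#ident = affine_map<(d0, d1) -> (d0, d1)>

// The broadcast of %x is expressed purely by #bcast; no
// dynamic_broadcast_in_dim op survives to be lowered.
func.func @fused_add(%x: tensor<?xf32>, %y: tensor<?x?xf32>) -> tensor<?x?xf32> {
  %c0 = arith.constant 0 : index
  %c1 = arith.constant 1 : index
  %d0 = tensor.dim %y, %c0 : tensor<?x?xf32>
  %d1 = tensor.dim %y, %c1 : tensor<?x?xf32>
  %init = tensor.empty(%d0, %d1) : tensor<?x?xf32>
  %r = linalg.generic
         {indexing_maps = [#bcast, #ident, #ident],
          iterator_types = ["parallel", "parallel"]}
         ins(%x, %y : tensor<?xf32>, tensor<?x?xf32>)
         outs(%init : tensor<?x?xf32>) {
  ^bb0(%a: f32, %b: f32, %o: f32):
    %s = arith.addf %a, %b : f32
    linalg.yield %s : f32
  } -> tensor<?x?xf32>
  return %r : tensor<?x?xf32>
}
```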
-
We seem to be in a bit of a bind. torch-mlir does have a stablehlo backend, and chlo-legalize-to-stablehlo solved some of the problems before, but some issues remain.
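For context, this is roughly the expansion chlo-legalize-to-stablehlo performs for an implicitly broadcasting op on dynamically shaped operands (a sketch; the exact attribute spelling varies across versions), and it is where the leftover dynamic_broadcast_in_dim comes from:

```mlir
// Before: an implicitly broadcasting chlo op.
%0 = chlo.broadcast_add %x, %c : (tensor<?x?xf32>, tensor<f32>) -> tensor<?x?xf32>

// After: the broadcast is made explicit, with the target
// shape computed at runtime.
%shape = shape.shape_of %x : tensor<?x?xf32> -> tensor<2xindex>
%b = "stablehlo.dynamic_broadcast_in_dim"(%c, %shape)
       {broadcast_dimensions = array<i64>}
     : (tensor<f32>, tensor<2xindex>) -> tensor<?x?xf32>
%1 = stablehlo.add %x, %b : tensor<?x?xf32>
```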
To give an e2e example:
HardswishModule_basic
Here are all the pipelines it runs.
What you can notice is that dynamic_broadcast_in_dim is still there, and dynamic_reshape is in the same situation. So I was wondering: can I add a new pass to stablehlo that lowers these two ops? Or do people have other approaches? Or should stablehlo.dynamic_reshape and stablehlo.dynamic_broadcast_in_dim be added to the interpreter?
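As one data point for what such a pass could target: upstream tensor.reshape also takes its target shape as a runtime operand, so dynamic_reshape looks like it could map across almost one-to-one. A sketch under that assumption (not an existing pass), while dynamic_broadcast_in_dim would still need the stronger constraints discussed above to become a linalg op:

```mlir
// stablehlo.dynamic_reshape: the target shape is an SSA operand.
%dst = "stablehlo.dynamic_reshape"(%src, %shape)
     : (tensor<?x?xf32>, tensor<2xindex>) -> tensor<?x?xf32>

// Upstream tensor.reshape has the same contract, so a direct
// rewrite seems plausible:
%dst2 = tensor.reshape %src(%shape)
      : (tensor<?x?xf32>, tensor<2xindex>) -> tensor<?x?xf32>
```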
I hope someone can help me. Thanks.