It looks like the main reason TorchToTMTensor requires a constant `dim` is that it uses the `dim` input to compute the shape of an intermediate value and of the output.
Here, at torch-mlir/lib/Conversion/TorchToTMTensor/TorchToTMTensor.cpp line 1532 (commit 34f6948), the TorchToTMTensor lowering checks that `dim` is a TorchConstantInt.
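For reference, constant-dim checks in torch-mlir conversion patterns usually take roughly this shape (a minimal sketch; the op type, accessor names, and failure message below are assumptions for illustration, not the exact code at that line):

```cpp
// Illustrative sketch of a constant-dim check in a torch-mlir conversion
// pattern. AtenCumsumOp is used only as an example of an op carrying a
// `dim` operand; the real pattern at the referenced line may differ.
LogicalResult
matchAndRewrite(AtenCumsumOp op, OpAdaptor adaptor,
                ConversionPatternRewriter &rewriter) const override {
  int64_t dim;
  // matchPattern succeeds only when `dim` is produced by torch.constant.int,
  // so a dim that arrives as a runtime value (as on the ONNX path) causes
  // the whole pattern to fail to match.
  if (!matchPattern(op.getDim(), m_TorchConstantInt(&dim)))
    return rewriter.notifyMatchFailure(op, "dim must be a constant int");
  // ... `dim` is then used to compute intermediate and output shapes ...
  return success();
}
```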
However, on the ONNX path the axis is not guaranteed to be constant, because ONNX allows the axis to be supplied as a tensor rather than a compile-time value.
This causes (at least part of) the lowering failure described in nod-ai/iree-amd-aie#103.