What happened?

For the given IR:
```mlir
module {
  func.func @main_graph(%arg0: !torch.vtensor<[1,3,224,224],f32>, %arg1: !torch.vtensor<[?,?,?,?],f32>) -> !torch.vtensor<[?,?,196,512],f32> attributes {torch.onnx_meta.ir_version = 8 : si64, torch.onnx_meta.opset_version = 21 : si64, torch.onnx_meta.producer_name = "pytorch", torch.onnx_meta.producer_version = "2.1.0"} {
    %40 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<1.0> : tensor<1x1x196x512xf32>} : () -> !torch.vtensor<[1,1,196,512],f32>
    %45 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<1.0> : tensor<512xf32>} : () -> !torch.vtensor<[512],f32>
    %876 = torch.operator "onnx.Add"(%arg1, %40) : (!torch.vtensor<[?,?,?,?],f32>, !torch.vtensor<[1,1,196,512],f32>) -> !torch.vtensor<[?,?,196,512],f32>
    %877 = torch.operator "onnx.LayerNormalization"(%876, %45, %45) {torch.onnx.axis = -1 : si64, torch.onnx.epsilon = 9.99999997E-7 : f32} : (!torch.vtensor<[?,?,196,512],f32>, !torch.vtensor<[512],f32>, !torch.vtensor<[512],f32>) -> !torch.vtensor<[?,?,196,512],f32>
    %2560 = torch.operator "onnx.Shape"(%877) : (!torch.vtensor<[?,?,196,512],f32>) -> !torch.vtensor<[4],si64>
    %2561 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__691> : tensor<si64>} : () -> !torch.vtensor<[],si64>
    %2562 = torch.operator "onnx.Gather"(%2560, %2561) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[4],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64>
    %2563 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__692> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
    %2565 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__693> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
    %2566 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__694> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
    %2567 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__695> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
    %2568 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__696> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
    %2569 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__697> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
    %2570 = torch.operator "onnx.Unsqueeze"(%2562, %2569) : (!torch.vtensor<[],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64>
    %2571 = torch.operator "onnx.Concat"(%2563, %2565, %2566, %2567, %2568, %2570) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[6],si64>
    %2572 = torch.operator "onnx.Reshape"(%877, %2571) {torch.onnx.allowzero = 0 : si64} : (!torch.vtensor<[?,?,196,512],f32>, !torch.vtensor<[6],si64>) -> !torch.vtensor<[?,1,1,14,14,512],f32>
    %2573 = torch.operator "onnx.Transpose"(%2572) {torch.onnx.perm = [0 : si64, 1 : si64, 3 : si64, 2 : si64, 4 : si64, 5 : si64]} : (!torch.vtensor<[?,1,1,14,14,512],f32>) -> !torch.vtensor<[?,1,14,1,14,512],f32>
    %2574 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__698> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
    %2576 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__699> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
    %2577 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__700> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
    %2578 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__701> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
    %2580 = torch.operator "onnx.Concat"(%2574, %2576, %2577, %2578) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[4],si64>
    %2581 = torch.operator "onnx.Reshape"(%2573, %2580) {torch.onnx.allowzero = 0 : si64} : (!torch.vtensor<[?,1,14,1,14,512],f32>, !torch.vtensor<[4],si64>) -> !torch.vtensor<[?,14,14,512],f32>
    %2582 = torch.operator "onnx.LayerNormalization"(%2581, %45, %45) {torch.onnx.axis = -1 : si64, torch.onnx.epsilon = 9.99999997E-7 : f32} : (!torch.vtensor<[?,14,14,512],f32>, !torch.vtensor<[512],f32>, !torch.vtensor<[512],f32>) -> !torch.vtensor<[?,14,14,512],f32>
    return %877 : !torch.vtensor<[?,?,196,512],f32>
  }
}

{-#
  dialect_resources: {
    builtin: {
      __690: "0x080000000000000000000000",
      __691: "0x080000000300000000000000",
      __692: "0x080000000000000000000000",
      __693: "0x080000000100000000000000",
      __694: "0x080000000100000000000000",
      __695: "0x080000000E00000000000000",
      __696: "0x080000000E00000000000000",
      __697: "0x080000000000000000000000",
      __698: "0x080000000000000000000000",
      __699: "0x080000000E00000000000000",
      __700: "0x080000000E00000000000000",
      __701: "0x080000000000000000000000"
    }
  }
#-}
```
compilation fails during generic vectorization with:

```
../model.torch_onnx.mlir:6:12: error: 'vector.transfer_write' op inferred mask type ('vector<1x1x8xi1>') and mask operand type ('vector<1x8x1xi1>') don't match
%877 = torch.operator "onnx.LayerNormalization"(%876, %45, %45) {torch.onnx.axis = -1 : si64, torch.onnx.epsilon = 9.99999997E-7 : f32} : (!torch.vtensor<[?,?,196,512],f32>, !torch.vtensor<[512],f32>, !torch.vtensor<[512],f32>) -> !torch.vtensor<[?,?,196,512],f32>
```
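For triage, most of the shape-manipulation ops are probably not needed: the diagnostic points at the first LayerNormalization, so a reduced module containing just the broadcasted Add feeding it may already reproduce the failure. An untested sketch, assembled only from the ops above:

```mlir
module {
  func.func @main_graph(%arg1: !torch.vtensor<[?,?,?,?],f32>) -> !torch.vtensor<[?,?,196,512],f32> attributes {torch.onnx_meta.ir_version = 8 : si64, torch.onnx_meta.opset_version = 21 : si64, torch.onnx_meta.producer_name = "pytorch", torch.onnx_meta.producer_version = "2.1.0"} {
    %40 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<1.0> : tensor<1x1x196x512xf32>} : () -> !torch.vtensor<[1,1,196,512],f32>
    %45 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<1.0> : tensor<512xf32>} : () -> !torch.vtensor<[512],f32>
    // Broadcasted add: fully dynamic LHS + static 1x1x196x512 RHS.
    %876 = torch.operator "onnx.Add"(%arg1, %40) : (!torch.vtensor<[?,?,?,?],f32>, !torch.vtensor<[1,1,196,512],f32>) -> !torch.vtensor<[?,?,196,512],f32>
    // The op the vectorizer fails on.
    %877 = torch.operator "onnx.LayerNormalization"(%876, %45, %45) {torch.onnx.axis = -1 : si64, torch.onnx.epsilon = 9.99999997E-7 : f32} : (!torch.vtensor<[?,?,196,512],f32>, !torch.vtensor<[512],f32>, !torch.vtensor<[512],f32>) -> !torch.vtensor<[?,?,196,512],f32>
    return %877 : !torch.vtensor<[?,?,196,512],f32>
  }
}
```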
If the op below is rewritten with static dims in place of the dynamic input/output dims, compilation succeeds:

```mlir
%876 = torch.operator "onnx.Add"(%arg1, %40) : (!torch.vtensor<[?,?,?,?],f32>, !torch.vtensor<[1,1,196,512],f32>) -> !torch.vtensor<[?,?,196,512],f32>
```
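For example, a statically shaped variant along these lines compiles cleanly (the concrete values below assume the two leading dynamic dims are 1 at runtime; they are a guess for illustration only):

```mlir
// Hypothetical static rewrite (illustrative, untested): every ? replaced by a
// presumed concrete value. The function signature and downstream users of
// %876 would need matching static types as well.
%876 = torch.operator "onnx.Add"(%arg1, %40) : (!torch.vtensor<[1,1,196,512],f32>, !torch.vtensor<[1,1,196,512],f32>) -> !torch.vtensor<[1,1,196,512],f32>
```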
Steps to reproduce your issue

Command:
```sh
iree-compile --iree-hal-target-backends=llvm-cpu --iree-llvmcpu-target-cpu=host -o abc.vmfb model.torch_onnx.mlir
```
The IR dump obtained with `--mlir-print-ir-after-all --mlir-print-ir-before-all --mlir-disable-threading --mlir-elide-elementsattrs-if-larger=4` is attached: dump.log
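Presumably the dump was produced by appending those flags to the same compile command and capturing stderr; a sketch of the full invocation:

```sh
# Untested reconstruction: MLIR IR dumps are written to stderr, so redirect it to a file.
iree-compile --iree-hal-target-backends=llvm-cpu --iree-llvmcpu-target-cpu=host \
  --mlir-print-ir-after-all --mlir-print-ir-before-all \
  --mlir-disable-threading --mlir-elide-elementsattrs-if-larger=4 \
  -o abc.vmfb model.torch_onnx.mlir 2> dump.log
```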
What component(s) does this issue relate to?

Compiler
Version information

IREE compiler version 3.1.0rc20241204 @ 939984c

Additional context

No response