model.torch_onnx.mlir:
module {
  func.func @main_graph(%arg0: !torch.vtensor<[],i1>, %arg1: !torch.vtensor<[1,1,?,?],si64>, %arg2: !torch.vtensor<[?,?],si64>) -> !torch.vtensor<[],si64> attributes {torch.onnx_meta.ir_version = 7 : si64, torch.onnx_meta.opset_version = 21 : si64, torch.onnx_meta.producer_name = "pytorch", torch.onnx_meta.producer_version = "2.7.0"} {
    %0 = torch.operator "onnx.If"(%arg0) : (!torch.vtensor<[],i1>) -> !torch.vtensor<[],si64> {
      %10198 = torch.operator "onnx.Identity"(%arg1) : (!torch.vtensor<[1,1,?,?],si64>) -> !torch.vtensor<[1,1,?,?],si64>
      torch.operator_terminator %10198 : !torch.vtensor<[1,1,?,?],si64>
    }, {
      %10264 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<0> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64>
      %10265 = torch.operator "onnx.Unsqueeze"(%arg2, %10264) : (!torch.vtensor<[?,?],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1,?,?],si64>
      torch.operator_terminator %10265 : !torch.vtensor<[1,?,?],si64>
    }
    %819 = torch.operator "onnx.Neg"(%0) : (!torch.vtensor<[],si64>) -> !torch.vtensor<[],si64>
    %820 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__124> : tensor<si64>} : () -> !torch.vtensor<[],si64>
    %821 = torch.operator "onnx.Add"(%819, %820) : (!torch.vtensor<[],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64>
    return %821 : !torch.vtensor<[],si64>
  }
}
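Note that the onnx.If above has branches returning tensors of different ranks (4-D vs. 3-D) while the If result itself is typed as a scalar si64, which already looks inconsistent. As a way to isolate the lowering, here is a minimal sketch, built with the standard onnx.helper API, that constructs a stand-alone If model with the same mismatched-rank branch pattern. All names (if_repro.onnx, cond, mask_4d, ids_2d) are made up for illustration, and it assumes an onnx package recent enough to accept opset 21:

  # Illustrative reproducer sketch, not extracted from the failing DeBERTa model.
  import onnx
  from onnx import TensorProto, helper

  # then-branch: pass a captured 4-D outer-scope tensor through Identity
  then_out = helper.make_tensor_value_info(
      "then_out", TensorProto.INT64, [1, 1, None, None])
  then_graph = helper.make_graph(
      nodes=[helper.make_node("Identity", ["mask_4d"], ["then_out"])],
      name="then_branch",
      inputs=[],
      outputs=[then_out],
  )

  # else-branch: unsqueeze a captured 2-D outer-scope tensor to 3-D
  axes = helper.make_tensor("axes", TensorProto.INT64, [1], [0])
  else_out = helper.make_tensor_value_info(
      "else_out", TensorProto.INT64, [1, None, None])
  else_graph = helper.make_graph(
      nodes=[
          helper.make_node("Constant", [], ["axes_c"], value=axes),
          helper.make_node("Unsqueeze", ["ids_2d", "axes_c"], ["else_out"]),
      ],
      name="else_branch",
      inputs=[],
      outputs=[else_out],
  )

  # The If node's branches disagree on rank, like the reported module.
  if_node = helper.make_node(
      "If", ["cond"], ["if_out"],
      then_branch=then_graph, else_branch=else_graph)

  graph = helper.make_graph(
      nodes=[if_node],
      name="main_graph",
      inputs=[
          helper.make_tensor_value_info("cond", TensorProto.BOOL, []),
          helper.make_tensor_value_info("mask_4d", TensorProto.INT64, [1, 1, None, None]),
          helper.make_tensor_value_info("ids_2d", TensorProto.INT64, [None, None]),
      ],
      # Leave the If output shape unspecified, since the branch ranks differ.
      outputs=[helper.make_tensor_value_info("if_out", TensorProto.INT64, None)],
  )

  model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 21)])
  onnx.checker.check_model(model)
  onnx.save(model, "if_repro.onnx")

Importing and compiling this small model with the same pipeline should show whether the onnx.If handling alone reproduces the failure, independent of the DeBERTa export.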
Steps to reproduce:
iree-compile model.torch_onnx.mlir --iree-hal-target-backends=llvm-cpu --iree-llvmcpu-target-cpu=host
Compilation should fail with the following error:
/home/vivekag/scratch/AIGShark/nodai/SharkTestSuite_vivekag/SHARK-TestSuite/alt_e2eshark/test-run/hf_1_microsoft_deberta_V1.0/model.torch_onnx.mlir:822:12: error: failed to legalize operation 'torch.operator' that was explicitly mar
  %818 = torch.operator "onnx.If"(%817) : (!torch.vtensor<[],i1>) -> !torch.vtensor<[],si64> {
         ^
/home/vivekag/scratch/AIGShark/nodai/SharkTestSuite_vivekag/SHARK-TestSuite/alt_e2eshark/test-run/hf_1_microsoft_deberta_V1.0/model.torch_onnx.mlir:822:12: note: see current operation: %2652 = "torch.operator"(%2651) <{name = "onnx.If"}> ({
  %13687 = "torch.operator"(%2078) <{name = "onnx.Identity"}> : (!torch.vtensor<[1,1,?,?],si64>) -> !torch.vtensor<[1,1,?,?],si64>
  "torch.operator_terminator"(%13687) : (!torch.vtensor<[1,1,?,?],si64>) -> ()
}, {
Note: The issue could also be during import of the model, which may be generating incorrect MLIR in the first place.
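To check whether the importer is at fault, the intermediate MLIR can be regenerated directly from the exported ONNX file and compared against the one above (the exact command line is an assumption based on the standard IREE ONNX importer, and model.onnx stands for whatever file the test exports):
iree-import-onnx model.onnx -o model.torch_onnx.mlir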
Tests failing:
hf_1_microsoft_deberta_V1.0
hf_1_microsoft_deberta_V1.1
hf_checkpoints_10_1_microsoft_deberta_V1.1_384
hf_checkpoints_1_16
hf_checkpoints_26_9_microsoft_deberta_21_9
hf_checkpoints_28_9_microsoft_deberta_V2
hf_checkpoints_28_9_microsoft_deberta_V4
hf_checkpoints_28_9_microsoft_deberta_V5
hf_checkpoints_29_9_microsoft_deberta_V1
hf_checkpoints_30_9_microsoft_deberta_V1.0_384
hf_checkpoints_3_14
hf_content
hf_deberta-base
hf_deberta_finetuned_pii
hf_deberta-large-mnli
hf_Debertalarg_model_multichoice_Version2
hf_deberta-v2-base-japanese
hf_deberta-v2-base-japanese-char-wwm
hf_deberta-v3-base
hf_deberta-v3-base-absa-v1.1
hf_deberta-v3-base_finetuned_ai4privacy_v2
hf_deberta-v3-base-injection
hf_DeBERTa-v3-base-mnli-fever-anli
hf_deberta-v3-base-squad2
hf_deberta-v3-base-zeroshot-v1.1-all-33
hf_deberta-v3-large
hf_deberta-v3-large_boolq
hf_deberta-v3-large-squad2
hf_deberta-v3-large_test
hf_deberta-v3-large_test_9e-6
hf_deberta-v3-small
hf_deberta-v3-xsmall
hf_llm-mdeberta-v3-swag
hf_mdeberta-v3-base
hf_mDeBERTa-v3-base-mnli-xnli
hf_mdeberta-v3-base-squad2
hf_mDeBERTa-v3-xnli-ft-bs-multiple-choice
hf_Medical-NER
hf_mxbai-rerank-base-v1
hf_mxbai-rerank-xsmall-v1
hf_nli-deberta-v3-base
hf_output
hf_piiranha-v1-detect-personal-information
hf_splinter-base
hf_splinter-base-qass