
Conversation

justinchuby
Collaborator

Also create default values for the constant folding pass

Signed-off-by: Justin Chu <justinchuby@users.noreply.github.com>

codecov bot commented Oct 3, 2025

❌ 31 Tests Failed:

Tests completed: 3983 | Failed: 31 | Passed: 3952 | Skipped: 5403
Failed test(s), by shortest run time (a minimal repro of the topological-sort error follows this list):
onnxscript.optimizer._constant_folding_test.FoldConstantsTest::test_fold_add
Stack Traces | 0.003s run time
onnxscript/optimizer/_constant_folding_test.py:38: in test_fold_add
    optimized = self._fold(model)
                ^^^^^^^^^^^^^^^^^
onnxscript/optimizer/_constant_folding_test.py:25: in _fold
    onnx.checker.check_model(ir.serde.serialize_model(model))
..../test/lib/python3.11.../site-packages/onnx/checker.py:179: in check_model
    C.check_model(
E   onnx.onnx_cpp2py_export.checker.ValidationError: Nodes in a graph must be topologically sorted, however input 'four' of node: 
E   name:  OpType: Mul
E    is not output of any previous nodes.
onnxscript.optimizer._constant_folding_test.FoldConstantsTest::test_fold_if_propagate
Stack Traces | 0.003s run time
onnxscript/optimizer/_constant_folding_test.py:156: in test_fold_if_propagate
    optimized = self._fold(model)
                ^^^^^^^^^^^^^^^^^
onnxscript/optimizer/_constant_folding_test.py:25: in _fold
    onnx.checker.check_model(ir.serde.serialize_model(model))
..../test/lib/python3.11.../site-packages/onnx/checker.py:179: in check_model
    C.check_model(
E   onnx.onnx_cpp2py_export.checker.ValidationError: Nodes in a graph must be topologically sorted, however input 'zero_cast' of node: 
E   name:  OpType: Equal
E    is not output of any previous nodes.
onnxscript.optimizer._constant_folding_test.FoldConstantsTest::test_fold_if_cond
Stack Traces | 0.004s run time
onnxscript/optimizer/_constant_folding_test.py:105: in test_fold_if_cond
    optimized = self._fold(model)
                ^^^^^^^^^^^^^^^^^
onnxscript/optimizer/_constant_folding_test.py:25: in _fold
    onnx.checker.check_model(ir.serde.serialize_model(model))
..../test/lib/python3.11.../site-packages/onnx/checker.py:179: in check_model
    C.check_model(
E   onnx.onnx_cpp2py_export.checker.ValidationError: Nodes in a graph must be topologically sorted, however input 'zero_cast' of node: 
E   name:  OpType: Equal
E    is not output of any previous nodes.
onnxscript.optimizer._constant_folding_test.FoldConstantsTest::test_fold_shape
Stack Traces | 0.004s run time
onnxscript/optimizer/_constant_folding_test.py:69: in test_fold_shape
    optimized = self._fold(model)
                ^^^^^^^^^^^^^^^^^
onnxscript/optimizer/_constant_folding_test.py:25: in _fold
    onnx.checker.check_model(ir.serde.serialize_model(model))
..../test/lib/python3.11.../site-packages/onnx/checker.py:179: in check_model
    C.check_model(
E   onnx.onnx_cpp2py_export.checker.ValidationError: Nodes in a graph must be topologically sorted, however input 'two_float' of node: 
E   name:  OpType: Add
E    is not output of any previous nodes.
onnxscript.optimizer._constant_folding_test.FoldConstantsTest::test_input_size_limit
Stack Traces | 0.004s run time
onnxscript/optimizer/_constant_folding_test.py:615: in test_input_size_limit
    optimized = self._fold(model, input_size_limit=256 * 256, output_size_limit=256 * 256)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
onnxscript/optimizer/_constant_folding_test.py:25: in _fold
    onnx.checker.check_model(ir.serde.serialize_model(model))
..../test/lib/python3.11.../site-packages/onnx/checker.py:179: in check_model
    C.check_model(
E   onnx.onnx_cpp2py_export.checker.ValidationError: Nodes in a graph must be topologically sorted, however input 'w_squared' of node: 
E   name:  OpType: Add
E    is not output of any previous nodes.
onnxscript.optimizer._constant_folding_test.FoldConstantsTest::test_shape_inference
Stack Traces | 0.005s run time
onnxscript/optimizer/_constant_folding_test.py:211: in test_shape_inference
    optimized = self._fold(model, onnx_shape_inference=True)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
onnxscript/optimizer/_constant_folding_test.py:25: in _fold
    onnx.checker.check_model(ir.serde.serialize_model(model))
..../test/lib/python3.11.../site-packages/onnx/checker.py:179: in check_model
    C.check_model(
E   onnx.onnx_cpp2py_export.checker.ValidationError: Nodes in a graph must be topologically sorted, however input 'cond' of node: 
E   name:  OpType: If
E    is not output of any previous nodes.
onnxscript.optimizer._constant_folding_test.FoldConstantsTest::test_fold_inside_if_branch
Stack Traces | 0.008s run time
onnxscript/optimizer/_constant_folding_test.py:130: in test_fold_inside_if_branch
    optimized = self._fold(model)
                ^^^^^^^^^^^^^^^^^
onnxscript/optimizer/_constant_folding_test.py:25: in _fold
    onnx.checker.check_model(ir.serde.serialize_model(model))
..../test/lib/python3.11.../site-packages/onnx/checker.py:179: in check_model
    C.check_model(
E   onnx.onnx_cpp2py_export.checker.ValidationError: Nodes in a graph must be topologically sorted, however input 'temp' of node: 
E   name:  OpType: Mul
E    is not output of any previous nodes.
E   
E   ==> Context: Bad node spec for node. Name:  OpType: If
onnxscript.optimizer._constant_folding_test.FoldConstantsTest::test_fold_shape_slice
Stack Traces | 0.008s run time
onnxscript/optimizer/_constant_folding_test.py:85: in test_fold_shape_slice
    optimized = self._fold(model)
                ^^^^^^^^^^^^^^^^^
onnxscript/optimizer/_constant_folding_test.py:25: in _fold
    onnx.checker.check_model(ir.serde.serialize_model(model))
..../test/lib/python3.11.../site-packages/onnx/checker.py:179: in check_model
    C.check_model(
E   onnx.onnx_cpp2py_export.checker.ValidationError: Nodes in a graph must be topologically sorted, however input 'two_float' of node: 
E   name:  OpType: Add
E    is not output of any previous nodes.
onnxscript.optimizer._constant_folding_test.FoldConstantsTest::test_fold_cast_like
Stack Traces | 0.009s run time
onnxscript/optimizer/_constant_folding_test.py:53: in test_fold_cast_like
    optimized = self._fold(model)
                ^^^^^^^^^^^^^^^^^
onnxscript/optimizer/_constant_folding_test.py:25: in _fold
    onnx.checker.check_model(ir.serde.serialize_model(model))
..../test/lib/python3.11.../site-packages/onnx/checker.py:179: in check_model
    C.check_model(
E   onnx.onnx_cpp2py_export.checker.ValidationError: Nodes in a graph must be topologically sorted, however input 'two_float' of node: 
E   name:  OpType: Add
E    is not output of any previous nodes.
onnxscript.optimizer._constant_folding_test.FoldConstantsTest::test_transpose_is_always_folded
Stack Traces | 0.012s run time
onnxscript/optimizer/_constant_folding_test.py:634: in test_transpose_is_always_folded
    optimized = self._fold(model, input_size_limit=1)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
onnxscript/optimizer/_constant_folding_test.py:25: in _fold
    onnx.checker.check_model(ir.serde.serialize_model(model))
..../test/lib/python3.11.../site-packages/onnx/checker.py:179: in check_model
    C.check_model(
E   onnx.onnx_cpp2py_export.checker.ValidationError: Graph output 'z' is not an output of any node in graph.
onnxscript.optimizer._function_folding_test.FunctionFoldingTest::test_fold_nested_if_function_succeeds
Stack Traces | 0.038s run time
onnxscript/optimizer/_function_folding_test.py:155: in test_fold_nested_if_function_succeeds
    self.assertNotIn("If", {n.op_type for n in optimized.graph})
E   AssertionError: 'If' unexpectedly found in {'If'}
onnxscript.rewriter.ort_fusions.gelu_test.GeluFusionTest::test_gelu_fusion
Stack Traces | 0.059s run time
.../rewriter/ort_fusions/gelu_test.py:44: in test_gelu_fusion
    original_output = test_utils.ort_run("Original", model, input)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../rewriter/ort_fusions/_test_utils.py:18: in ort_run
    session = onnxruntime.InferenceSession(
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:465: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:528: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E   onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid model. Node input 'const_cast' is not a graph input, initializer, or output of a previous node.
onnxscript.rewriter.ort_fusions.cos_sin_cache_test.TestCosSinCacheTransform::test_cos_sin_fusion_1_test_case_1
Stack Traces | 0.163s run time
..../test/lib/python3.11.../site-packages/parameterized/parameterized.py:620: in standalone_func
    return func(*(a + p.args), **p.kwargs, **kw)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../rewriter/ort_fusions/cos_sin_cache_test.py:45: in test_cos_sin_fusion
    original_outputs = ort_run("original", model, inputs)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../rewriter/ort_fusions/_test_utils.py:18: in ort_run
    session = onnxruntime.InferenceSession(
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:465: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:528: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E   onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid model. Node input 'inv_freq_3d' is not a graph input, initializer, or output of a previous node.
onnxscript.rewriter.ort_fusions.cos_sin_cache_test.TestCosSinCacheTransform::test_cos_sin_fusion_2_test_case_2
Stack Traces | 0.333s run time
..../test/lib/python3.11.../site-packages/parameterized/parameterized.py:620: in standalone_func
    return func(*(a + p.args), **p.kwargs, **kw)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../rewriter/ort_fusions/cos_sin_cache_test.py:45: in test_cos_sin_fusion
    original_outputs = ort_run("original", model, inputs)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../rewriter/ort_fusions/_test_utils.py:18: in ort_run
    session = onnxruntime.InferenceSession(
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:465: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:528: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E   onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid model. Node input 'inv_freq_3d' is not a graph input, initializer, or output of a previous node.
onnxscript.rewriter.ort_fusions.mha_test.TestMultiHeadAttention::test_whisper_encoder
Stack Traces | 0.41s run time
.../rewriter/ort_fusions/mha_test.py:58: in test_whisper_encoder
    original_outputs = ort_run("original", model, inputs)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../rewriter/ort_fusions/_test_utils.py:18: in ort_run
    session = onnxruntime.InferenceSession(
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:465: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:528: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E   onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid model. Node input 'encoder_encoder_conv1_bias_cast' is not a graph input, initializer, or output of a previous node.
onnxscript.rewriter.ort_fusions.gqa_test.GQAFusionTest::test_fusion
Stack Traces | 0.497s run time
.../rewriter/ort_fusions/gqa_test.py:355: in test_fusion
    session = ort.InferenceSession(
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:465: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:528: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E   onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid model. Node input 'int64_1_cast' is not a graph input, initializer, or output of a previous node.
onnxscript.rewriter.ort_fusions.skip_normalization_test.TestSkipNormalization::test_bart_encoder
Stack Traces | 2.09s run time
.../rewriter/ort_fusions/skip_normalization_test.py:71: in test_bart_encoder
    original_outputs = ort_run("original", model, inputs)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../rewriter/ort_fusions/_test_utils.py:18: in ort_run
    session = onnxruntime.InferenceSession(
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:465: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:528: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E   onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid model. Node input 'encoder_layernorm_embedding_weight' is not a graph input, initializer, or output of a previous node.
onnxscript.rewriter.ort_fusions.attention_test.TestAttentionFusion::test_whisper_encoder
Stack Traces | 4.14s run time
.../rewriter/ort_fusions/attention_test.py:170: in test_whisper_encoder
    original_outputs = ort_run("original", model, inputs)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../rewriter/ort_fusions/_test_utils.py:18: in ort_run
    session = onnxruntime.InferenceSession(
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:465: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:528: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E   onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid model. Node input 'encoder_encoder_conv1_bias_cast' is not a graph input, initializer, or output of a previous node.
onnxscript.rewriter.ort_fusions.mha_test.TestMultiHeadAttention::test_whisper_decoder
Stack Traces | 7.42s run time
.../rewriter/ort_fusions/mha_test.py:84: in test_whisper_decoder
    original_outputs = ort_run("original", model, inputs)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../rewriter/ort_fusions/_test_utils.py:18: in ort_run
    session = onnxruntime.InferenceSession(
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:465: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:528: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E   onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid model. Node input 'sym_size_int_42' is not a graph input, initializer, or output of a previous node.
onnxscript.rewriter.ort_fusions.skip_normalization_test.TestSkipNormalization::test_whisper_decoder
Stack Traces | 13.2s run time
.../rewriter/ort_fusions/skip_normalization_test.py:56: in test_whisper_decoder
    original_outputs = ort_run("original", model, inputs)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../rewriter/ort_fusions/_test_utils.py:18: in ort_run
    session = onnxruntime.InferenceSession(
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:465: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:528: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E   onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid model. Node input 'sym_size_int_42' is not a graph input, initializer, or output of a previous node.
onnxscript.rewriter.ort_fusions.mha_test.TestMultiHeadAttention::test_smollm
Stack Traces | 31.5s run time
.../rewriter/ort_fusions/mha_test.py:34: in test_smollm
    original_outputs = ort_run("original", model, inputs)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../rewriter/ort_fusions/_test_utils.py:18: in ort_run
    session = onnxruntime.InferenceSession(
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:465: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:528: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E   onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid model. Node input 'int64_16_cast' is not a graph input, initializer, or output of a previous node.
onnxscript.rewriter.ort_fusions.rms_normalization_test.TestRmsNormalization::test_smollm
Stack Traces | 66.2s run time
.../rewriter/ort_fusions/rms_normalization_test.py:19: in test_smollm
    original_outputs = ort_run("original", model, inputs)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../rewriter/ort_fusions/_test_utils.py:18: in ort_run
    session = onnxruntime.InferenceSession(
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:465: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:528: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E   onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid model. Node input 'const_13_cast' is not a graph input, initializer, or output of a previous node.
onnxscript.rewriter.ort_fusions.rotary_embedding_test.TestRotaryEmbedding::test_rotary_embedding_fusion_1_smollm_test_1
Stack Traces | 72.4s run time
..../test/lib/python3.11.../site-packages/parameterized/parameterized.py:620: in standalone_func
    return func(*(a + p.args), **p.kwargs, **kw)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../rewriter/ort_fusions/rotary_embedding_test.py:33: in test_rotary_embedding_fusion
    self.assertIn("RotaryEmbedding", op_types)
E   AssertionError: 'RotaryEmbedding' not found in ['Gather', 'ConstantOfShape', 'Trilu', 'Reshape', 'Unsqueeze', 'Add', 'Equal', 'Where', 'Unsqueeze', 'Cast', 'MatMul', 'Transpose', 'Concat', 'Cos', 'Sin', 'Pow', 'ReduceMean', 'Add', 'Sqrt', 'Reciprocal', 'Mul', 'Mul', 'MatMul', 'MatMul', 'MatMul', 'Reshape', 'Transpose', 'Reshape', 'Transpose', 'Reshape', 'Transpose', 'Unsqueeze', 'Unsqueeze', 'Mul', 'Slice', 'Slice', 'Neg', 'Concat', 'Mul', 'Add', 'Mul', 'Slice', 'Slice', 'Neg', 'Concat', 'Mul', 'Add', 'Reshape', 'Transpose', 'Reshape', 'Mul', 'Mul', 'MatMul', 'Add', 'Softmax', 'MatMul', 'Transpose', 'Reshape', 'MatMul', 'Add', 'Pow', 'ReduceMean', 'Add', 'Sqrt', 'Reciprocal', 'Mul', 'Mul', 'MatMul', 'Sigmoid', 'Mul', 'MatMul', 'Mul', 'MatMul', 'Add', 'Pow', 'ReduceMean', 'Add', 'Sqrt', 'Reciprocal', 'Mul', 'Mul', 'Transpose', 'MatMul']
onnxscript.rewriter.ort_fusions.fuse_xformers_test.TestFuseXformers::test_fuse_xformers
Stack Traces | 78s run time
.../rewriter/ort_fusions/fuse_xformers_test.py:19: in test_fuse_xformers
    original_outputs = ort_run("original", model, inputs)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../rewriter/ort_fusions/_test_utils.py:18: in ort_run
    session = onnxruntime.InferenceSession(
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:465: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:528: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E   onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid model. Node input 'const_13_cast' is not a graph input, initializer, or output of a previous node.
onnxscript.rewriter.ort_fusions.cos_sin_cache_test.TestCosSinCacheTransform::test_cos_sin_fusion_0_smollm_test_1
Stack Traces | 79.5s run time
..../test/lib/python3.11.../site-packages/parameterized/parameterized.py:620: in standalone_func
    return func(*(a + p.args), **p.kwargs, **kw)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../rewriter/ort_fusions/cos_sin_cache_test.py:45: in test_cos_sin_fusion
    original_outputs = ort_run("original", model, inputs)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../rewriter/ort_fusions/_test_utils.py:18: in ort_run
    session = onnxruntime.InferenceSession(
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:465: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:528: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E   onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid model. Node input 'const_13_cast' is not a graph input, initializer, or output of a previous node.
onnxscript.rewriter.ort_fusions.skip_normalization_test.TestSkipNormalization::test_smollm
Stack Traces | 112s run time
.../rewriter/ort_fusions/skip_normalization_test.py:26: in test_smollm
    original_outputs = ort_run("original", model, inputs)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../rewriter/ort_fusions/_test_utils.py:18: in ort_run
    session = onnxruntime.InferenceSession(
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:465: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:528: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E   onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid model. Node input 'const_13_cast' is not a graph input, initializer, or output of a previous node.
onnxscript.rewriter.ort_fusions.mha_test.TestMultiHeadAttention::test_phi2lm
Stack Traces | 260s run time
.../rewriter/ort_fusions/mha_test.py:106: in test_phi2lm
    self.assertEqual(
E   AssertionError: 0 != 1 : Expected exactly one MultiHeadAttention node after optimization
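
Most of the constant-folding checker failures above share one symptom: a node consumes a value (e.g. 'four', 'zero_cast', 'two_float') whose producing Constant node ends up after the consumer in graph.node, so onnx.checker rejects the serialized model. The snippet below is a minimal sketch, not taken from this PR, that reproduces and then clears the same ValidationError with plain onnx.helper; the names "x", "four", and "y" are illustrative only.

    import onnx
    from onnx import TensorProto, helper

    # Consumer placed before its producer, mimicking what the failing tests report.
    mul = helper.make_node("Mul", inputs=["x", "four"], outputs=["y"])
    const = helper.make_node(
        "Constant", inputs=[], outputs=["four"],
        value=helper.make_tensor("four", TensorProto.FLOAT, [], [4.0]),
    )
    x = helper.make_tensor_value_info("x", TensorProto.FLOAT, [])
    y = helper.make_tensor_value_info("y", TensorProto.FLOAT, [])

    bad = helper.make_model(helper.make_graph([mul, const], "g", [x], [y]))
    try:
        onnx.checker.check_model(bad)  # raises: nodes must be topologically sorted
    except onnx.checker.ValidationError as err:
        print("rejected:", err)

    good = helper.make_model(helper.make_graph([const, mul], "g", [x], [y]))
    onnx.checker.check_model(good)  # passes once the producer precedes the consumer

Presumably the fix in the pass itself is to insert any newly created Constant (or initializer) before its first use rather than appending it at the end of the node list; the sketch only illustrates the invariant the checker enforces.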
Full list of 6 ❄️ flaky test(s) (a small validation sketch follows these traces):
onnxscript.optimizer._constant_folding_test.FoldConstantsTest::test_node_is_folded_if_specified_as_should_fold

Flake rate in main: 94.59% (Passed 17 times, Failed 297 times)

Stack Traces | 0.002s run time
onnxscript/optimizer/_constant_folding_test.py:655: in test_node_is_folded_if_specified_as_should_fold
    optimized = self._fold(
onnxscript/optimizer/_constant_folding_test.py:25: in _fold
    onnx.checker.check_model(ir.serde.serialize_model(model))
..../test/lib/python3.11.../site-packages/onnx/checker.py:179: in check_model
    C.check_model(
E   onnx.onnx_cpp2py_export.checker.ValidationError: Graph output 'z' is not an output of any node in graph.
onnxscript.rewriter.ort_fusions.gelu_test.GeluFusionTest::test_gelu_erf_fusion

Flake rate in main: 78.87% (Passed 82 times, Failed 306 times)

Stack Traces | 0.038s run time
.../rewriter/ort_fusions/gelu_test.py:77: in test_gelu_erf_fusion
    original_output = test_utils.ort_run("Original", model, input)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../rewriter/ort_fusions/_test_utils.py:18: in ort_run
    session = onnxruntime.InferenceSession(
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:465: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
..../test/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:528: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E   onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid model. Node input '_sqrt_two_cast' is not a graph input, initializer, or output of a previous node.
onnxscript.rewriter.rules.common._redundant_scatter_nd_test.RedundantScatterNdTest::test_redundant_scatter_nd_dynamic_indices

Flake rate in main: 94.59% (Passed 17 times, Failed 297 times)

Stack Traces | 0.023s run time
.../rules/common/_redundant_scatter_nd_test.py:52: in test_redundant_scatter_nd_dynamic_indices
    self.assertEqual(count, 1)
E   AssertionError: 0 != 1
onnxscript.rewriter.rules.fusion._gqa_test.GQAFusionTest::test_basic_gqa_fusion

Flake rate in main: 95.16% (Passed 3 times, Failed 59 times)

Stack Traces | 0.1s run time
.../rules/fusion/_gqa_test.py:81: in test_basic_gqa_fusion
    self.assertGreater(count, 0, "GQA fusion should have occurred")
E   AssertionError: 0 not greater than 0 : GQA fusion should have occurred
onnxscript.tools.transformers_models.phi_test.TestExportPhi::test_phi_export_cpu

Flake rate in main: 95.74% (Passed 16 times, Failed 360 times)

Stack Traces | 39.2s run time
onnxscript/_internal/version_utils.py:94: in call_f
    return fct(self)
           ^^^^^^^^^
.../tools/transformers_models/phi_test.py:34: in test_phi_export_cpu
    sess = onnxruntime.InferenceSession(
..../test_torch_nightly/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:465: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
..../test_torch_nightly/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:528: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E   onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid model. Node input 'val_3' is not a graph input, initializer, or output of a previous node.
onnxscript.tools.transformers_models.phi_test.TestExportPhi::test_phi_export_cpu_export_api

Flake rate in main: 95.74% (Passed 16 times, Failed 360 times)

Stack Traces | 16.6s run time
onnxscript/_internal/version_utils.py:94: in call_f
    return fct(self)
           ^^^^^^^^^
.../tools/transformers_models/phi_test.py:54: in test_phi_export_cpu_export_api
    sess = onnxruntime.InferenceSession(
..../test_torch_nightly/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:465: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
..../test_torch_nightly/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:528: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E   onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid model. Node input 'val_3' is not a graph input, initializer, or output of a previous node.
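
The ONNX Runtime and checker failures above ("Node input ... is not a graph input, initializer, or output of a previous node", "Graph output 'z' is not an output of any node in graph") describe the same invariant from the runtime's side: every consumed or exported value must be a graph input, an initializer, or the output of an earlier node. Below is a minimal diagnostic sketch, assuming plain onnx protos and a hypothetical model path; it does not descend into If/Loop subgraphs or account for values captured from an outer scope, which several of the failing models use.

    import onnx

    def report_unproduced_values(graph: onnx.GraphProto) -> list[str]:
        problems = []
        # Values visible so far: graph inputs and initializers.
        available = {i.name for i in graph.input} | {init.name for init in graph.initializer}
        for node in graph.node:
            for name in node.input:
                if name and name not in available:  # empty name = omitted optional input
                    problems.append(f"{node.op_type}: input '{name}' has no earlier producer")
            available.update(name for name in node.output if name)
        for out in graph.output:
            if out.name not in available:
                problems.append(f"graph output '{out.name}' is not produced by any node")
        return problems

    model = onnx.load("model.onnx")  # hypothetical path
    for message in report_unproduced_values(model.graph):
        print(message)

Running a check like this on the optimized model before serialization would localize which rewrite dropped or reordered a producer, instead of surfacing it later through the checker or an InferenceSession constructor.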

To view more test analytics, go to the Test Analytics Dashboard

@justinchuby justinchuby added this to the 0.5.4 milestone Oct 7, 2025