
Test fft normalization #2209


Open · justinchuby wants to merge 1 commit into main

Conversation

@justinchuby (Collaborator) commented Apr 16, 2025

The numbers seem closer-ish for fft_c2r, but don't really match.
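For context, a minimal PyTorch sketch of where the normalization factor lives in each norm mode (assuming standard torch.fft semantics; note that for a c2r transform the n in question is the real output length, not the one-sided input length):

import torch

n = 16
x = torch.randn(n)

# The three torch.fft norm modes differ only in where the 1/n factor lives:
#   "backward" (default): forward unscaled, inverse scaled by 1/n
#   "forward":            forward scaled by 1/n, inverse unscaled
#   "ortho":              both directions scaled by 1/sqrt(n)
for norm in ("backward", "forward", "ortho"):
    roundtrip = torch.fft.irfft(torch.fft.rfft(x, norm=norm), n=n, norm=norm)
    torch.testing.assert_close(roundtrip, x)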

@justinchuby added the module: torchlib label (Related to the torch/aten function lib in development) on Apr 16, 2025
@Copilot (Copilot AI) left a comment


Copilot reviewed 1 out of 1 changed files in this pull request and generated no comments.

Comments suppressed due to low confidence (1)

onnxscript/function_libs/torch_lib/ops/fft.py:121

  • Verify that op.Shape returns a scalar value when using start and end to extract the dimension. If op.Shape returns a tensor, explicitly extract its value to ensure proper normalization scaling.
scale = (op.CastLike(last_dim_size, self)) / op.CastLike(
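For reference, ONNX's Shape-with-slicing semantics can be mimicked in NumPy (a sketch; shape_slice is a hypothetical stand-in, not an onnxscript API):

import numpy as np

def shape_slice(x, start, end):
    # Like ONNX Shape with start/end: returns a 1-D INT64 tensor
    # (here of length one), not a scalar.
    return np.asarray(x.shape[start:end], dtype=np.int64)

x = np.zeros((2, 3, 5), dtype=np.float32)
dim = shape_slice(x, 2, 3)        # array([5]) -- rank 1, one element
scaled = x / dim.astype(x.dtype)  # Div broadcasts, so the scaling still works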

@@ -14,7 +14,7 @@

 from typing import Optional, Sequence

-from onnxscript import INT64
+from onnxscript import INT64, ir

Check notice

Code scanning / CodeQL

Unused import (Note): Import of 'ir' is not used.

Copilot Autofix

To fix the issue, remove the unused import of ir from the onnxscript module. This cleans up the code and eliminates the unnecessary dependency. The change is on line 17, where the import statement is defined.

Suggested changeset 1
onnxscript/function_libs/torch_lib/ops/fft.py

Autofix patch
Run the following command in your local git repository to apply this patch:
cat << 'EOF' | git apply
diff --git a/onnxscript/function_libs/torch_lib/ops/fft.py b/onnxscript/function_libs/torch_lib/ops/fft.py
--- a/onnxscript/function_libs/torch_lib/ops/fft.py
+++ b/onnxscript/function_libs/torch_lib/ops/fft.py
@@ -16,3 +16,3 @@
 
-from onnxscript import INT64, ir
+from onnxscript import INT64
 from onnxscript.function_libs.torch_lib.registration import torch_op
EOF
codecov bot commented Apr 16, 2025

❌ 8 Tests Failed:

Tests completed | Failed | Passed | Skipped
14854           | 8      | 14846  | 2205
View the top 3 failed tests by shortest run time
onnxscript.backend.onnx_export_test.TestOnnxBackEnd::test_export2python_produces_correct_onnx_script_model_1271_test_tril
Stack Traces | 0.003s run time
onnxscript\backend\onnx_export_test.py:137: in extract_functions
    mod = importlib.import_module(import_name)
C:\hostedtoolcache\windows\Python\3.11.9\x64\Lib\importlib\__init__.py:126: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
E   ModuleNotFoundError: No module named 'tests.onnx_backend_test_code.test_tril'

The above exception was the direct cause of the following exception:
.nox\test_onnx_weekly\Lib\site-packages\parameterized\parameterized.py:620: in standalone_func
    return func(*(a + p.args), **p.kwargs, **kw)
onnxscript\backend\onnx_export_test.py:271: in test_export2python_produces_correct_onnx_script_model
    functions = extract_functions(backend_test.name, code, self.test_folder)
onnxscript\backend\onnx_export_test.py:139: in extract_functions
    raise AssertionError(
E   AssertionError: Unable to import 'tests.onnx_backend_test_code.test_tril' (e=No module named 'tests.onnx_backend_test_code.test_tril') (file: 'D:\\a\\onnxscript\\onnxscript\\tests\\onnx_backend_test_code\\test_tril.py', absolute path: 'D:\\a\\onnxscript\\onnxscript\\tests\\onnx_backend_test_code\\test_tril.py', current folder: D:\a\onnxscript\onnxscript
E   ---- CONTENT --
E   import numpy
E   from onnx import TensorProto
E   from onnx.helper import make_tensor
E   from onnxscript import script, external_tensor
E   from onnxscript.values import Opset
E   from onnxscript.onnx_types import INT64
E   from onnxscript.onnx_opset import opset14
E   
E   @script()
E   def bck_test_tril(x: INT64[4,5]) -> (INT64[4,5]):
E       y = opset14.Trilu(x, upper=0)
E       return y
onnxscript.backend.onnx_export_test.TestOnnxBackEnd::test_export2python_produces_correct_onnx_script_model_0124_test_bitwise_not_4d
Stack Traces | 0.004s run time
onnxscript\backend\onnx_export_test.py:137: in extract_functions
    mod = importlib.import_module(import_name)
C:\hostedtoolcache\windows\Python\3.11.9\x64\Lib\importlib\__init__.py:126: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
E   ModuleNotFoundError: No module named 'tests.onnx_backend_test_code.test_bitwise_not_4d'

The above exception was the direct cause of the following exception:
.nox\test_onnx_weekly\Lib\site-packages\parameterized\parameterized.py:620: in standalone_func
    return func(*(a + p.args), **p.kwargs, **kw)
onnxscript\backend\onnx_export_test.py:271: in test_export2python_produces_correct_onnx_script_model
    functions = extract_functions(backend_test.name, code, self.test_folder)
onnxscript\backend\onnx_export_test.py:139: in extract_functions
    raise AssertionError(
E   AssertionError: Unable to import 'tests.onnx_backend_test_code.test_bitwise_not_4d' (e=No module named 'tests.onnx_backend_test_code.test_bitwise_not_4d') (file: 'D:\\a\\onnxscript\\onnxscript\\tests\\onnx_backend_test_code\\test_bitwise_not_4d.py', absolute path: 'D:\\a\\onnxscript\\onnxscript\\tests\\onnx_backend_test_code\\test_bitwise_not_4d.py', current folder: D:\a\onnxscript\onnxscript
E   ---- CONTENT --
E   import numpy
E   from onnx import TensorProto
E   from onnx.helper import make_tensor
E   from onnxscript import script, external_tensor
E   from onnxscript.values import Opset
E   from onnxscript.onnx_types import UINT8
E   from onnxscript.onnx_opset import opset18
E   
E   @script()
E   def bck_test_bitwise_not_4d(x: UINT8[3,4,5,6]) -> (UINT8[3,4,5,6]):
E       bitwise_not = opset18.BitwiseNot(x)
E       return bitwise_not
onnxscript.backend.onnx_export_test.TestOnnxBackEnd::test_export2python_produces_correct_onnx_script_model_1153_test_slice_negative_axes
Stack Traces | 0.005s run time
onnxscript\backend\onnx_export_test.py:137: in extract_functions
    mod = importlib.import_module(import_name)
C:\hostedtoolcache\windows\Python\3.11.9\x64\Lib\importlib\__init__.py:126: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
E   ModuleNotFoundError: No module named 'tests.onnx_backend_test_code.test_slice_negative_axes'

The above exception was the direct cause of the following exception:
.nox\test_onnx_weekly\Lib\site-packages\parameterized\parameterized.py:620: in standalone_func
    return func(*(a + p.args), **p.kwargs, **kw)
onnxscript\backend\onnx_export_test.py:271: in test_export2python_produces_correct_onnx_script_model
    functions = extract_functions(backend_test.name, code, self.test_folder)
onnxscript\backend\onnx_export_test.py:139: in extract_functions
    raise AssertionError(
E   AssertionError: Unable to import 'tests.onnx_backend_test_code.test_slice_negative_axes' (e=No module named 'tests.onnx_backend_test_code.test_slice_negative_axes') (file: 'D:\\a\\onnxscript\\onnxscript\\tests\\onnx_backend_test_code\\test_slice_negative_axes.py', absolute path: 'D:\\a\\onnxscript\\onnxscript\\tests\\onnx_backend_test_code\\test_slice_negative_axes.py', current folder: D:\a\onnxscript\onnxscript
E   ---- CONTENT --
E   import numpy
E   from onnx import TensorProto
E   from onnx.helper import make_tensor
E   from onnxscript import script, external_tensor
E   from onnxscript.values import Opset
E   from onnxscript.onnx_types import FLOAT, INT64
E   from onnxscript.onnx_opset import opset13
E   
E   @script()
E   def bck_test_slice_negative_axes(x: FLOAT[20,10,5], starts: INT64[3], ends: INT64[3], axes: INT64[3]) -> (FLOAT[20,10,1]):
E       y = opset13.Slice(x, starts, ends, axes)
E       return y


@@ -14,7 +14,7 @@

 from typing import Optional, Sequence

-from onnxscript import INT64
+from onnxscript import INT64, ir

Check warning

Code scanning / lintrunner

PYLINT/W0611 Warning: Unused ir imported from onnxscript (unused-import). To disable, use # pylint: disable=unused-import

RUFF/F401 Warning: onnxscript.ir imported but unused. See https://docs.astral.sh/ruff/rules/unused-import
inverse=True,
onesided=False,
)
* scale
)
transformed = _fftn_onnx_normalization(
transformed,
@bmehta001 (Contributor) commented Apr 17, 2025

Perhaps replace line 137 (op.Shape...) with op.CastLike(last_dim_size, self) and then remove scale? Would that yield the same/better results?

@justinchuby (Collaborator Author) replied

I thought last_dim_size was op.Shape(transformed, start=dimension, end=dimension + 1)? Let me try.

@bmehta001 (Contributor) replied

Oh, never mind, sorry; you're completely right that op.Shape(transformed, start=dimension, end=dimension + 1) differs between lines 122 and 137. But your code made me realize that, without modifying anything else, line 137 could simply be replaced with last_dim_size to save a call to op.Shape.
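A toy illustration of that micro-simplification (NumPy stand-ins, not the actual torchlib code): reusing a size that was already computed avoids a second Shape call over the same dimension.

import numpy as np

transformed = np.zeros((2, 3, 16), dtype=np.float32)
dimension = 2

# Computed once, earlier in the function (paraphrased):
last_dim_size = np.asarray(transformed.shape[dimension:dimension + 1], dtype=np.int64)

# Later recomputed with a second Shape call (paraphrased); reusing
# last_dim_size here yields the same value and saves that call.
recomputed = np.asarray(transformed.shape[dimension:dimension + 1], dtype=np.int64)
assert (recomputed == last_dim_size).all()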

@bmehta001 (Contributor) left a comment

Perhaps simplify the normalization by passing in the dimension size you want directly to the normalization function. I think there is an underlying issue with my c2r implementation, though, because it should be able to support multiple axes. I can try to see if something like what is done in onnx/onnx#6016 will help.
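A hypothetical sketch of that refactor (not the actual _fftn_onnx_normalization signature): the caller passes the size of the transformed dimension in, instead of the helper re-deriving it.

import numpy as np

def normalize(transformed, dim_size, forward, ortho):
    # dim_size is supplied by the caller, so the helper needs no Shape call.
    scale = np.asarray(dim_size, dtype=transformed.dtype)
    if ortho:
        return transformed / np.sqrt(scale)   # 1/sqrt(n) in both directions
    if not forward:
        return transformed / scale            # "backward" norm: 1/n on the inverse
    return transformed                        # forward pass unscaled

x = np.random.rand(4, 16).astype(np.float32)
y = normalize(x, dim_size=x.shape[-1], forward=False, ortho=False)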

@github-project-automation github-project-automation bot moved this from Todo to In Progress in ONNX Script Review Board Apr 17, 2025
@justinchuby (Collaborator Author) commented:

> Perhaps simplify the normalization by passing in the dimension size you want directly to the normalization function. I think there is an underlying issue with my c2r implementation, though, because it should be able to support multiple axes. I can try to see if something like what is done in onnx/onnx#6016 will help.

👍 Looks like they recreated the conjugate part. It's like doubling the work that was meant to be saved, but I guess that's the best option right now?
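For illustration, a minimal PyTorch sketch (assuming standard torch.fft semantics) of recreating the conjugate half for an even-length c2r transform, i.e. the doubled work described above:

import torch

n = 16
signal = torch.randn(4, n)
half = torch.fft.rfft(signal)            # one-sided spectrum: n//2 + 1 bins

# Recreate the discarded conjugate half (bins n//2+1 .. n-1) and run a
# full complex ifft over all n bins.
tail = half[..., 1:-1].flip(-1).conj()
full = torch.cat([half, tail], dim=-1)
manual = torch.fft.ifft(full).real       # default "backward" norm: 1/n factor

torch.testing.assert_close(manual, torch.fft.irfft(half, n=n))
torch.testing.assert_close(manual, signal)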

Labels
module: torchlib Related to the torch/aten function lib in development
Projects
Status: In Progress

2 participants