Test fft normalization #2209
base: main
Conversation
Copilot reviewed 1 out of 1 changed files in this pull request and generated no comments.
Comments suppressed due to low confidence (1)
onnxscript/function_libs/torch_lib/ops/fft.py:121
- Verify that op.Shape returns a scalar value when using start and end to extract the dimension. If op.Shape returns a tensor, explicitly extract its value to ensure proper normalization scaling.
scale = (op.CastLike(last_dim_size, self)) / op.CastLike(
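For what it's worth, here is a minimal sketch of what Copilot seems to be suggesting (my own illustration, not code from this PR; the helper name _scalar_dim_size is hypothetical): Shape with start/end yields a rank-1 tensor holding one element, so squeezing it gives the scalar the division expects.

    from onnxscript import opset18 as op

    def _scalar_dim_size(x, dim: int):
        # Shape(x, start=dim, end=dim+1) returns a 1-D tensor with a single
        # element, not a scalar.
        size = op.Shape(x, start=dim, end=dim + 1)
        # Squeeze collapses it to a scalar so the CastLike/division that follows
        # broadcasts as intended.
        return op.Squeeze(size)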
@@ -14,7 +14,7 @@
 from typing import Optional, Sequence

-from onnxscript import INT64
+from onnxscript import INT64, ir
Check notice
Code scanning / CodeQL: Unused import
Copilot Autofix (AI, 6 days ago)
To fix the issue, remove the unused import of ir from the onnxscript module. This cleans up the code and eliminates the unnecessary dependency. The change should be made on line 17, where the import statement is defined.
@@ -16,3 +16,3 @@

-from onnxscript import INT64, ir
+from onnxscript import INT64
 from onnxscript.function_libs.torch_lib.registration import torch_op
❌ 8 Tests Failed (details are in the Test Analytics Dashboard).
Check warning
Code scanning / lintrunner: PYLINT/W0611 (unused-import), on the same "from onnxscript import INT64, ir" line. To disable, use # pylint: disable=unused-import
Check warning
Code scanning / lintrunner: RUFF/F401 (unused import), on the same line. See https://docs.astral.sh/ruff/rules/unused-import
        inverse=True,
        onesided=False,
    )
    * scale
)
transformed = _fftn_onnx_normalization(
    transformed,
Perhaps replace line 137 (op.Shape...) with op.CastLike(last_dim_size, self) and then remove scale? Would that yield the same or better results?
I thought last_dim_size was op.Shape(transformed, start=dimension, end=dimension + 1)? Let me try.
Oh, never mind, sorry, you're completely right that op.Shape(transformed, start=dimension, end=dimension + 1) is different between line 122 and line 137. But your code made me realize that, without modifying anything else, line 137 could perhaps be replaced directly with last_dim_size just to save a call to op.Shape.
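If I follow the thread correctly, the substitution being agreed on is just reusing the value that is already in hand (a sketch only; the surrounding expression on line 137 is not shown here, and "size" is just a placeholder name):

    # What line 137 computes, per the quotes above:
    size = op.Shape(transformed, start=dimension, end=dimension + 1)
    # At that point the value already equals last_dim_size, so the extra
    # Shape call can be dropped:
    size = last_dim_size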
Perhaps simplify the normalization by passing the dimension size you want directly to the normalization function. I think there is an underlying issue with my c2r implementation, though, because it should be able to support multiple axes. I can try to see if something like what is done in onnx/onnx#6016 will help.
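As a rough illustration of that idea (a hypothetical signature, not the helper currently in fft.py, and assuming ATen's 0 = none, 1 = 1/sqrt(n), 2 = 1/n normalization convention), the helper could take the size it needs directly instead of re-deriving it from the tensor:

    def _fftn_onnx_normalization(transformed, normalization: int, signal_size):
        # signal_size (e.g. last_dim_size in the c2r case) is supplied by the
        # caller, so the helper never has to call op.Shape on transformed.
        scale = op.CastLike(signal_size, transformed)
        if normalization == 1:  # assumed: scale by 1/sqrt(n)
            return transformed / op.Sqrt(scale)
        if normalization == 2:  # assumed: scale by 1/n
            return transformed / scale
        return transformed  # assumed: no scaling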
👍 Looks like they recreated the conjugate part. It's like doubling the work that was meant to be saved, but I guess that's the best option right now?
The numbers seem closer-ish for fft_c2r, but don't really match.
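For reference, a round-trip check like this (my own quick sketch, not a test from this PR) is what a correct c2r normalization should satisfy: irfft of an rfft recovers the original signal when n matches the input length.

    import torch

    x = torch.randn(8)
    spec = torch.fft.rfft(x)
    # Default "backward" norm: irfft applies the 1/n scaling.
    roundtrip = torch.fft.irfft(spec, n=x.shape[-1])
    print(torch.allclose(roundtrip, x, atol=1e-6))  # expected: True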