fix: Allow rank differences in aten.expand
#2234
Conversation
As such, the changes look good. But I think this was already taken care of in `slice/ops.py`. This condition was encountered in `aten::where` cases where the two inputs were not of equal rank. However, since there was no `aten.ops.expand` converter in Dynamo, it was picking up the `acc_ops` implementation. It's good that a test case for this has been added in `expand`.

We should keep either this implementation or the slice one, and change the call to expand elsewhere accordingly.
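For context, a minimal PyTorch illustration (plain `torch` ops, not the converter code itself) of the rank-difference behavior under discussion: `expand` may prepend leading dimensions, and `torch.where` broadcasts inputs of unequal rank, which is where this case was being hit.

```python
import torch

x = torch.randn(3, 1)          # rank 2
y = x.expand(2, 3, 4)          # rank 3: a leading dimension is prepended, then broadcast
print(y.shape)                 # torch.Size([2, 3, 4])

# torch.where broadcasts inputs of unequal rank, the situation the reviewer describes.
cond = torch.tensor([True, False, True, False])   # rank 1, shape (4,)
out = torch.where(cond, x, torch.zeros(2, 3, 4))  # broadcast to (2, 3, 4)
print(out.shape)               # torch.Size([2, 3, 4])
```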
Force-pushed from `5910cf2` to `5546ae7`
- Add support for `aten.expand.default` in Dynamo converter registry
- Build converter to support rank-padding for input Tensors, in line with the existing Torch behavior (see the sketch below)
- Add test case to validate new behavior, in addition to existing cases validating old behavior
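A hedged sketch of the rank-padding idea described in the commit message above: pad the input shape with leading 1s so its rank matches the target, mirroring `torch.Tensor.expand` semantics. This is a hypothetical helper, not the actual Torch-TensorRT converter code.

```python
def pad_and_expand_shape(input_shape, target_shape):
    """Compute the output shape of an expand that may increase rank."""
    rank_diff = len(target_shape) - len(input_shape)
    assert rank_diff >= 0, "expand cannot reduce rank"
    # Pad with leading 1s so both shapes have the same rank.
    padded = [1] * rank_diff + list(input_shape)
    out = []
    for in_dim, tgt_dim in zip(padded, target_shape):
        if tgt_dim == -1:                      # -1 keeps the input dimension, as in torch.expand
            out.append(in_dim)
        elif in_dim == 1 or in_dim == tgt_dim: # broadcast singleton dims, keep matching dims
            out.append(tgt_dim)
        else:
            raise ValueError(f"cannot expand dimension {in_dim} to {tgt_dim}")
    return out

print(pad_and_expand_shape((3, 1), (2, 3, 4)))   # [2, 3, 4]
```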
Force-pushed from `5546ae7` to `50a9ce3`
Thanks for the comments and suggestions. I have merged the implementations into one.
LGTM
Description
Add support for `aten.expand.default` in Dynamo converter registry.

Fixes #2183
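A hedged end-to-end sketch of the case this PR targets: a module whose `expand` call adds a leading dimension, compiled through the Dynamo path. It assumes the standard `torch_tensorrt.compile` entry point with `ir="dynamo"` and a CUDA device; it is not taken from the PR's test suite.

```python
import torch
import torch_tensorrt

class ExpandRankDiff(torch.nn.Module):
    def forward(self, x):
        # Input is rank 2 (3, 1); the target shape is rank 3, so the converter
        # must pad a leading dimension before broadcasting.
        return x.expand(2, 3, 4)

model = ExpandRankDiff().eval().cuda()
inputs = [torch.randn(3, 1).cuda()]
trt_model = torch_tensorrt.compile(model, ir="dynamo", inputs=inputs)
print(trt_model(*inputs).shape)   # torch.Size([2, 3, 4])
```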
Type of change
Checklist: