[Prim] add reduce_as op for paddle #63064
Conversation
Your PR was submitted successfully. Thank you for your contribution to the open-source project!
❌ The PR was not created using the PR template. You can refer to this Demo.
test/legacy_test/test_sum_as_op.py (outdated)
self.python_api = paddle.sum_as
self.public_python_api = paddle.sum_as
self.op_type = "sum_as"
self.prim_op_type = "prim"
This operator currently has no backward decomposition, so this cannot be set here.
Done
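Following the review comment above, here is a minimal sketch of the adjusted test setup (a hypothetical fragment assuming Paddle's usual OpTest conventions, not the PR's actual code), with `prim_op_type` left unset because the op has no backward decomposition yet:

```python
# Hypothetical setUp fragment, assuming Paddle's OpTest conventions.
# `self.prim_op_type = "prim"` is intentionally omitted: the op has no
# backward decomposition yet, so it must not be set.
class TestSumAsOp(OpTest):
    def setUp(self):
        self.python_api = paddle.sum_as
        self.public_python_api = paddle.sum_as
        self.op_type = "sum_as"
```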
Please add unit tests for dynamic shapes.
Since the test_assign_pos_op unit-test failure cannot be reproduced in the local CUDA 11.2 environment, the local CUDA 12.0 environment, or the CI environment, the implementation in this PR is being split into several sub-PRs to determine which change caused the test failure. These PRs do not need to be reviewed or merged. They are as follows: reduce_as forward computation: #63652
LGTM for op-benchmark ci
LGTM. Please also add the Chinese documentation in sync.
LGTM
'float64',
'int16',
'int32',
'int64',
Can we support other data types, such as complex64/128, uint8, and int8?
These will be added in a follow-up PR.
This reverts commit 38182e7.
PR Category
Others
PR Types
New features
Description
Since the backward decomposition of `broadcast`-type operators cannot adapt to dynamic shapes (an example for the static-shape case is as follows), and `reduce_dim` cannot be obtained at compile time under dynamic shapes, the `reduce_as` operator needs to be added; its pseudocode is as follows. With this operator, the dynamic-shape scenario can be written as follows. This PR implements the addition of the `reduce_as` op.
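The code examples referenced in the description above did not survive the page extraction. As a rough illustration only, here is a NumPy sketch (a hypothetical helper, not Paddle's actual implementation) of the semantics of such an op: sum `x` over the axes that were broadcast so that the result takes the shape of a target tensor, without needing the reduce axes to be known symbolically by the caller.

```python
import numpy as np

def reduce_as(x, target):
    """Sketch of reduce_as semantics: reduce x (by summation) to target's shape."""
    x_shape = list(x.shape)
    t_shape = list(target.shape)
    # Align shapes from the right, following standard broadcasting rules.
    pad = len(x_shape) - len(t_shape)
    padded_t = [1] * pad + t_shape
    # Axes where the target has size 1 (or no axis at all) were broadcast:
    # those are the axes to reduce over.
    axes = tuple(
        i for i, (xs, ts) in enumerate(zip(x_shape, padded_t))
        if ts == 1 and xs != 1
    )
    out = x.sum(axis=axes, keepdims=True) if axes else x
    # Drop the leading padded axes so the result matches target's rank.
    return out.reshape(t_shape)

x = np.ones((4, 3, 2))
y = np.zeros((3, 2))
print(reduce_as(x, y).shape)  # (3, 2); each element sums 4 broadcast copies
```

In a broadcast op's backward pass, the incoming gradient has the broadcast (output) shape; reducing it "as" the original input recovers the input's gradient without ever materializing `reduce_dim` at compile time.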