[0 Tensor support] support the 0d tensor for the cumsum #49518
Conversation
Your PR was submitted successfully. Thank you for your contribution to this open-source project!
❌ This PR was not created using the PR template. You can refer to this Demo.
@@ -86,6 +86,7 @@
    paddle.lgamma,
    paddle.poisson,
    paddle.bernoulli,
    paddle.cumsum,
Write this one separately: when axis is None it effectively flattens the input to 1-D first, which is a special case.
Also add a check on axis: for a 0-D input, axis may only be None, 0, or -1 (None triggers the flatten).
Expanded it into a separate test.
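As a point of comparison, NumPy's cumsum shows the same flatten-when-axis-is-None behavior described above (a NumPy analogy only, not Paddle's implementation):

```python
import numpy as np

x = np.arange(6).reshape(2, 3)

# With axis=None the input is flattened first, so the result is always 1-D.
out = np.cumsum(x)
assert out.shape == (6,)
assert out[-1] == 15  # sum of 0..5

# With an explicit axis, the input shape is preserved.
assert np.cumsum(x, axis=1).shape == (2, 3)
```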
out1.backward()
out2.backward()

self.assertEqual(out1.shape, [])
You could also test broadcasting by adding a case where x is N-D.
For the backward shape test, also check x.grad.shape.
The add_n API itself does not support broadcasting.
OK
paddle/phi/kernels/xpu/cum_kernel.cc
Outdated
@@ -30,6 +30,11 @@ void CumsumKernel(const Context& dev_ctx,
  using XPUType = typename XPUTypeTrait<T>::Type;
  dev_ctx.template Alloc<T>(out);

  if (x.dims().size() == 0) {
    phi::Copy(dev_ctx, x, dev_ctx.GetPlace(), false, out);
For XPU it should be written like this:
r = xpu::copy<XPUTypeT>(xpu_ctx,
                        d_qk_ptr,
                        d_src_mask_out_ptr,
                        batch_size * seq_len * seq_len * num_heads);
PADDLE_ENFORCE_XDNN_SUCCESS(r, "copy");
Done
paddle/phi/infermeta/multiary.cc
Outdated
@@ -360,7 +362,11 @@ void AddNInferMeta(const std::vector<const MetaTensor*>& x,
      }
    }
  }
  out->set_dims(in_dim);
  if (is_0d_tensor) {
    out->set_dims(phi::make_ddim({}));
With this change, is broadcasting still supported?
The add_n API itself does not support broadcasting.
OK
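To illustrate the point agreed on here, a minimal NumPy sketch of add_n-style semantics (the helper name and shape check are illustrative, not Paddle's API): all inputs must share one shape, and 0-D inputs yield a 0-D output:

```python
import numpy as np

def add_n(tensors):
    # add_n is an element-wise sum over same-shaped inputs; it does not broadcast.
    shapes = {t.shape for t in tensors}
    assert len(shapes) == 1, "add_n does not broadcast; all shapes must match"
    return np.sum(np.stack(tensors), axis=0)

a, b = np.array(1.5), np.array(2.5)
out = add_n([a, b])
assert out.shape == ()      # 0-D inputs produce a 0-D output
assert float(out) == 4.0
```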
@@ -87,6 +87,7 @@
    paddle.lgamma,
    paddle.poisson,
    paddle.bernoulli,
    paddle.cumsum,
Same as above.
Expanded it into a separate unit test.
paddle/phi/infermeta/unary.cc
Outdated
@@ -412,14 +412,18 @@ void CumInferMeta(const MetaTensor& x,
                  bool reverse,
                  MetaTensor* out) {
  auto x_dims = x.dims();
  auto x_dims_all = phi::product(x_dims);
The current logic is correct: with flatten, the result should always be 1-D.
Without flatten the output shape matches the input, but axis needs to be checked; the 0-D case is special, where axis must be in [-1, 0].
Added the axis check and unit tests.
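A minimal sketch of the axis rule discussed above (the function name and signature are illustrative, not Paddle's actual CumInferMeta):

```python
def check_cum_axis(ndim, axis):
    """Valid axis values for cumsum: for a 0-D input only None, 0, and -1
    are allowed (None flattens the input to 1-D first)."""
    if ndim == 0:
        return axis in (None, 0, -1)
    return axis is None or -ndim <= axis < ndim

assert check_cum_axis(0, None) and check_cum_axis(0, 0) and check_cum_axis(0, -1)
assert not check_cum_axis(0, 1)          # out of range for a 0-D input
assert check_cum_axis(2, 1) and not check_cum_axis(2, 2)
```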
paddle/phi/infermeta/multiary.cc
Outdated
@@ -311,6 +312,7 @@ void AddNInferMeta(const std::vector<const MetaTensor*>& x,
  }
  // for 0D tensor
  if (x_dim.size() == 0) {
Stylistically you could try removing this, though it may trigger legacy issues.
It does trigger legacy issues.
paddle/phi/infermeta/multiary.cc
Outdated
@@ -360,7 +362,11 @@ void AddNInferMeta(const std::vector<const MetaTensor*>& x,
      }
    }
  }
  out->set_dims(in_dim);
  if (is_0d_tensor) {
    out->set_dims(phi::make_ddim({}));
The add_n API itself does not support broadcasting.
OK
Branch updated from 59fcbe7 to 7d2c320 (Compare)
Branch updated from 3d4a803 to d204958 (Compare)
LGTM
PR types
Function optimization

PR changes
OPs

Describe
Support 0-D tensor for cumsum and add_n.