[Zero-Dim] Support output 0D for to_tensor. #52741
Conversation
Your PR was submitted successfully. Thank you for your contribution to this open source project!
@@ -58,6 +59,8 @@ def _apply_collective_grads(parameters, comm_group, bucket_size, scale=None):
     for coalesced_grad, _, _ in coalesced_grads_and_vars:
         # need to div nranks
         if scale is not None:
+            if np.isscalar(scale) and not isinstance(scale, str):
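For context, `np.isscalar` also returns `True` for Python strings, which is why the extra `isinstance` guard appears in the diff. A minimal sketch of the branching logic (the function name and the use of NumPy arrays as a stand-in for Paddle tensors are hypothetical, for illustration only):

```python
import numpy as np

def apply_scale(grad, scale):
    # Hypothetical sketch: plain numeric scalars take the direct division
    # path; anything else (e.g. a 0D tensor/array) falls through to
    # elementwise division. Note that np.isscalar("x") is True, so strings
    # must be excluded explicitly.
    if np.isscalar(scale) and not isinstance(scale, str):
        return grad / scale
    return np.divide(grad, scale)
```

Both branches produce the same values here; the distinction matters only for which kernel/path handles the division.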
elementwise_div should already support dividing by a 0D tensor. If we keep the original code here, will the later code break?
This one can keep the original code.
@@ -117,7 +117,7 @@ def recv_meta(self, group):

     def _send_dims_shape_dtype(self, tensor, group):
         # send len(shape)
-        dims = paddle.to_tensor(len(tensor.shape))
+        dims = paddle.to_tensor([len(tensor.shape)])
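The distinction the reviewers are discussing is shape `[1]` versus shape `[]`: after this PR, `paddle.to_tensor` on a Python scalar yields a 0D tensor, so wrapping the value in a list is needed to preserve the old 1D behavior. NumPy exhibits the same 0D-vs-1D semantics, shown here as an analogy:

```python
import numpy as np

# A scalar input produces a 0D array: shape (), ndim 0.
dims_0d = np.array(3)
# Wrapping it in a list keeps the old 1D behavior: shape (1,), ndim 1.
dims_1d = np.array([3])

print(dims_0d.shape, dims_0d.ndim)  # () 0
print(dims_1d.shape, dims_1d.ndim)  # (1,) 1
```

Downstream code that indexes the tensor (e.g. `dims[0]`) or relies on `len()` only works on the 1D form, which is why the diff adds the brackets.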
If this keeps the original code, will the change from 1D to 0D cause errors later?
Let's change it to this for now.
LGTM
d329327
LGTM
LGTM
* test=allcase (repeated ×26)
* fix doc errors, test=allcase
PR types
New features
PR changes
APIs
Description
pcard-66984
Support 0D Tensor output for API: to_tensor.