add alltoall api #32507
Conversation
Thanks for your contribution!
out = helper.create_variable_for_type_inference(
    dtype=in_tensor_list[0].dtype)
if in_dygraph_mode():
    core.ops.alltoall_(temp, out, 'use_calc_stream', use_calc_stream,
Need to remove `out` in the inplace strategy, and fix op_function_generator.cc accordingly.
LGTM
np_data2 = np.array([[19, 20, 21], [22, 23, 24]])
data1 = paddle.to_tensor(np_data1)
data2 = paddle.to_tensor(np_data2)
paddle.distributed.all_to_all([data1, data2], out_tensor_list)
You could also include the results of running this example in the docs.
Done.
LGTM
TODO: Fix docs
should be float16, float32, float64, int32 or int64.
out_tensor_list (Tensor): A list of output Tensors. The data type of its elements should be the same as the
    data type of the input Tensors.
group (Group): The group instance return by new_group or None for global default group.
group (Group, optional): The group instance returned by new_group, or None for the global default group. Default: None.
Same below.
Done.
LGTM
LGTM
9fa800d
LGTM
PR types
New features
PR changes
APIs
Describe
Add the paddle.distributed.alltoall API
How to use:
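The usage snippet of the PR body is cut off here. As a hedged, framework-free sketch of what the collective computes (the function `simulate_all_to_all` and its data are illustrative, not Paddle's implementation): each rank `i` sends the `j`-th tensor in its input list to rank `j`, so across all ranks the operation is a transpose of the per-rank lists of tensors.

```python
def simulate_all_to_all(per_rank_inputs):
    """Simulate all_to_all semantics on a single process.

    per_rank_inputs[i][j] is the tensor rank i sends to rank j.
    Returns per_rank_outputs, where per_rank_outputs[j][i] is what
    rank j receives from rank i (a transpose of the input lists).
    """
    nranks = len(per_rank_inputs)
    return [[per_rank_inputs[i][j] for i in range(nranks)]
            for j in range(nranks)]


# Two ranks, each sending one item to every rank (labels are illustrative):
inputs = [["r0->r0", "r0->r1"],   # what rank 0 sends
          ["r1->r0", "r1->r1"]]   # what rank 1 sends
outputs = simulate_all_to_all(inputs)
print(outputs[0])  # what rank 0 receives: ["r0->r0", "r1->r0"]
print(outputs[1])  # what rank 1 receives: ["r0->r1", "r1->r1"]
```

In the actual API, this exchange happens across processes, so each rank only sees its own row of `outputs`, filled into `out_tensor_list`.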