
add alltoall api #32507

Merged · merged 10 commits into PaddlePaddle:develop from the alltoall branch on Apr 27, 2021
Conversation

@sandyhouse commented Apr 24, 2021

PR types

New features

PR changes

APIs

Describe

Add the paddle.distributed.alltoall API.

How to use:

    import numpy as np
    import paddle
    from paddle.distributed import init_parallel_env

    # Run with two processes; each rank contributes a list of two tensors.
    init_parallel_env()
    out_tensor_list = []
    if paddle.distributed.ParallelEnv().rank == 0:
        np_data1 = np.array([[1, 2, 3], [4, 5, 6]])
        np_data2 = np.array([[7, 8, 9], [10, 11, 12]])
    else:
        np_data1 = np.array([[13, 14, 15], [16, 17, 18]])
        np_data2 = np.array([[19, 20, 21], [22, 23, 24]])
    data1 = paddle.to_tensor(np_data1)
    data2 = paddle.to_tensor(np_data2)
    # Scatter [data1, data2] across ranks and gather the received pieces
    # into out_tensor_list.
    paddle.distributed.alltoall([data1, data2], out_tensor_list)
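The expected exchange can be checked without a distributed runtime. The sketch below is illustrative only; world and alltoall_reference are hypothetical names that mimic alltoall semantics (rank r gathers the r-th tensor from every rank's input list):

    import numpy as np

    # Hypothetical per-rank inputs: world[r] is the in_tensor_list rank r sends.
    world = [
        [np.array([[1, 2, 3], [4, 5, 6]]),
         np.array([[7, 8, 9], [10, 11, 12]])],          # rank 0
        [np.array([[13, 14, 15], [16, 17, 18]]),
         np.array([[19, 20, 21], [22, 23, 24]])],       # rank 1
    ]

    def alltoall_reference(world, rank):
        # alltoall semantics: rank r receives the r-th entry of every rank's list.
        return [world[src][rank] for src in range(len(world))]

    # Rank 0 ends with [[1, 2, 3], [4, 5, 6]] and [[13, 14, 15], [16, 17, 18]];
    # rank 1 ends with [[7, 8, 9], [10, 11, 12]] and [[19, 20, 21], [22, 23, 24]].
    for rank in range(2):
        print(rank, alltoall_reference(world, rank))

The real example above needs two processes, e.g. started via python -m paddle.distributed.launch; the exact launch flags depend on your setup.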

@paddle-bot-old commented:

Thanks for your contribution!
Please wait for the result of CI first. See the Paddle CI Manual for details.

    out = helper.create_variable_for_type_inference(
        dtype=in_tensor_list[0].dtype)
    if in_dygraph_mode():
        core.ops.alltoall_(temp, out, 'use_calc_stream', use_calc_stream,

A Member commented:

need to remove out in the inplace strategy, and fix op_function_generator.cc

ForFishes previously approved these changes Apr 26, 2021

@ForFishes left a comment:

LGTM

    np_data2 = np.array([[19, 20, 21], [22, 23, 24]])
    data1 = paddle.to_tensor(np_data1)
    data2 = paddle.to_tensor(np_data2)
    paddle.distributed.all_to_all([data1, data2], out_tensor_list)

A Contributor commented:

Please also include the output of running this example in the docs.

The author (@sandyhouse) replied:

Done.

TCChenlong previously approved these changes Apr 26, 2021

@TCChenlong left a comment:

LGTM
TODO: Fix Docs

    should be float16, float32, float64, int32 or int64.
    out_tensor_list (Tensor): A list of output Tensors. The data type of its elements should be the same as the
        data type of the input Tensors.
    group (Group): The group instance return by new_group or None for global default group.

A Contributor commented:

group (Group, optional): The group instance returned by new_group, or None for the global default group. Default: None.
The same applies below.

The author (@sandyhouse) replied:

Done.
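To make the group argument concrete, here is a minimal sketch, assuming two ranks; it uses paddle.distributed.new_group to build an explicit communication group, while passing group=None (the default) would use the global default group. The signature follows the diff above (in_tensor_list, out_tensor_list, group); verify against the merged docs.

    import paddle
    import paddle.distributed as dist

    dist.init_parallel_env()
    # An explicit group over ranks 0 and 1; group=None would use the default group.
    group = dist.new_group(ranks=[0, 1])

    data1 = paddle.to_tensor([[1, 2, 3], [4, 5, 6]])
    data2 = paddle.to_tensor([[7, 8, 9], [10, 11, 12]])
    out_tensor_list = []
    dist.alltoall([data1, data2], out_tensor_list, group=group)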

wangxicoding previously approved these changes Apr 26, 2021
ForFishes previously approved these changes Apr 26, 2021

@ForFishes left a comment:

LGTM

XieYunshen previously approved these changes Apr 26, 2021
gongweibao previously approved these changes Apr 26, 2021

@gongweibao left a comment:

LGTM

lanxianghit previously approved these changes Apr 26, 2021
@sandyhouse dismissed stale reviews from wangxicoding and TCChenlong via 9fa800d April 26, 2021 15:10

@TCChenlong left a comment:

LGTM

@sandyhouse merged commit db41b74 into PaddlePaddle:develop Apr 27, 2021
@sandyhouse deleted the alltoall branch April 27, 2021 11:50
8 participants