Conversion rule: torch.nn.parallel.DistributedDataParallel #240
Conversation
Thanks for your contribution!
The torch build in CI is the CPU version, so some of the unit tests are prefixed with `_` to skip them.
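For context on the skip mechanism being referenced: pytest only collects functions whose names match `test_*`, so prefixing a case with `_` keeps it in the file but out of the CPU-only CI run. A minimal illustration (function names here are hypothetical, not from this PR):

```python
# Collected and run by pytest in CI.
def test_case_cpu_ok():
    assert 1 + 1 == 2


# NOT collected: the leading underscore breaks the `test_*` pattern,
# keeping this GPU/distributed-dependent case out of CPU-only CI.
def _test_case_needs_gpu():
    ...
```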
```python
# (diff excerpt; earlier lines of the enclosing call are not shown)
    world_size=1
)
model = torch.nn.Linear(1, 1, bias=False).cuda()
model = torch.nn.parallel.DistributedDataParallel(model)
```
Do these tests pass locally?
Yes, they do.
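For context, a runnable single-process version of the snippet under review might look like the following. Only `world_size=1` and the two `model` lines come from the diff; the backend, address, rank, and the final forward pass are illustrative assumptions:

```python
import torch
import torch.distributed as dist

# Single-process "distributed" setup so DDP can be exercised on one GPU.
# backend/init_method/rank are assumed values, not taken from the PR.
dist.init_process_group(
    backend="nccl",
    init_method="tcp://127.0.0.1:23456",
    rank=0,
    world_size=1,
)
model = torch.nn.Linear(1, 1, bias=False).cuda()
model = torch.nn.parallel.DistributedDataParallel(model)
result = model(torch.ones(1, 1).cuda())
```

With `world_size=1` DDP performs no real inter-process communication, so the case exercises the API surface (and hence the conversion rule) without requiring multiple GPUs.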
```python
# (diff excerpt; the body of _test_case_2 is not shown)
    obj.run(pytorch_code, ["result"])


def _test_case_2():
```
Do these tests pass locally?
Yes.
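For readers unfamiliar with the layout of these test files, here is a sketch of the harness the two hunks above belong to. The `apibase.APIBase` helper and the overall shape follow PaConvert's existing test convention as I understand it; treat the details as assumptions rather than the exact contents of this PR:

```python
import textwrap

from apibase import APIBase  # PaConvert test helper; name assumed from repo convention

obj = APIBase("torch.nn.parallel.DistributedDataParallel")


# Leading underscore keeps this GPU/distributed case out of CPU-only CI.
def _test_case_1():
    pytorch_code = textwrap.dedent(
        """
        import torch
        import torch.distributed as dist

        dist.init_process_group("nccl", init_method="tcp://127.0.0.1:23456",
                                rank=0, world_size=1)
        model = torch.nn.parallel.DistributedDataParallel(
            torch.nn.Linear(1, 1, bias=False).cuda()
        )
        result = model(torch.ones(1, 1).cuda())
        """
    )
    # The harness converts the PyTorch snippet to Paddle, runs both,
    # and compares the named output variables.
    obj.run(pytorch_code, ["result"])
```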
LGTM
PR Docs
The mapping document for torch.nn.parallel.DistributedDataParallel already exists.
PR APIs