Conversion rule No. 153 #181
Conversation
Thanks for your contribution!
The PR description needs to include a link to the corresponding docs PR.
@enkilee Hi, the unit tests did not pass. The CI only has a CPU environment and cannot run GPU-dependent tests; you can refer to how the other unit tests handle this.
Unit tests did not pass.
@enkilee A reminder that the unit-test issue here still needs fixing.
pytorch_code,
["result"],
unsupport=True,
reason="paddle does not support this function temporarily",
torch.distributed.all_gather_object was just implemented above — how can this be marked unsupport here?
dist.init_process_group("nccl", init_method='tcp://127.0.0.1:23456', rank=1, world_size=3)
This line cannot run.
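As the reviewer points out, this call blocks: with `world_size=3` and `rank=1`, `init_process_group` waits at the TCP store for ranks 0 and 2, which never arrive in a single test process. A minimal sketch of a form that can complete in one process, assuming torch (with the CPU-capable `gloo` backend) is installed — the helper name here is hypothetical, not part of the PR:

```python
# Guarded so the sketch is a no-op when torch is not installed.
try:
    import torch.distributed as dist
    HAVE_DIST = dist.is_available()
except ImportError:
    HAVE_DIST = False

def init_single_process_group():
    """Initialize a one-rank group whose TCP rendezvous can complete.

    With world_size=3 and rank=1, init_process_group blocks forever
    waiting for ranks 0 and 2; with world_size=1 and rank=0 the
    rendezvous finishes immediately. Returns True if a group was
    initialized, False otherwise.
    """
    if HAVE_DIST and not dist.is_initialized():
        dist.init_process_group(
            "gloo",                               # CPU backend; "nccl" needs GPUs
            init_method="tcp://127.0.0.1:23456",
            rank=0,
            world_size=1,
        )
        return True
    return False
```

Running the converted code on real multi-rank setups would still require launching one process per rank (e.g. via torchrun), which is beyond what a CPU-only CI unit test can do.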
This API is also supported — why are there `>>>` markers left after conversion? By rights everything should have been transcribed. `unsupport=True` means the converted code still contains `>>>` markers, and such a test cannot pass.
Got it, will fix right away.
pytorch_code = textwrap.dedent(
    """
    import torch
    import torch.distributed as dist
The CI environment only has CPU, so the unit test needs to check whether torch has CUDA available (e.g. `torch.cuda.is_available()`); otherwise it cannot pass.
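One common way to express that guard is to skip the GPU-dependent case entirely on CPU-only machines. A minimal sketch, assuming a unittest-style test; the `has_cuda` helper and the test-class name are illustrative, not PaConvert's actual test utilities:

```python
import unittest

def has_cuda():
    """Best-effort check: True only if torch is importable and sees a GPU."""
    try:
        import torch
        return torch.cuda.is_available()
    except ImportError:
        return False

class TestDistributedApi(unittest.TestCase):
    @unittest.skipUnless(has_cuda(), "requires a CUDA device; CI is CPU-only")
    def test_converted_code(self):
        # On a GPU machine, the converted paddle code would be exercised here.
        self.assertTrue(True)
```

Skipped tests count as successful, so the suite passes on CPU-only CI while still running the real check wherever a GPU is present.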
Got it.
PR Docs
PR APIs