[ Make FLAGS_einsum_opt as default ] Einsum memory optimization #43397
Conversation
Your PR was submitted successfully. Thank you for contributing to this open-source project!

If I mark the vector-of-tensor output as dispensable, the auto-generated dygraph API breaks: it cannot take the output size as a parameter. I have already verified this PR; it runs programs exported from both the v2.3.0 and develop branches.

Please @jiweibo confirm as well.
ctx->SetOutputsDim(x_grad_name, ctx->GetInputsDim(x_name));
ctx->ShareAllLoD(x_name, x_grad_name);
ctx->SetOutputsDim(x_grad_name, ctx->GetInputsDim("Operands"));
ctx->ShareAllLoD("Operands", x_grad_name);
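The lines above set each gradient output's dims to match the corresponding forward operand and share its LoD. The shape invariant they encode can be sketched in NumPy (a hedged illustration, not Paddle's implementation): for a matmul-style einsum, the gradient with respect to an operand is itself an einsum and always has that operand's shape.

```python
# Hedged sketch: why the grad InferMeta copies operand dims to the
# gradient outputs. For c = einsum('ij,jk->ik', a, b), the operand
# gradients are einsums over grad_c and the other operand, and their
# shapes always equal the forward operands' shapes.
import numpy as np

a = np.random.rand(2, 3)
b = np.random.rand(3, 4)
grad_c = np.ones((2, 4))  # upstream gradient, same shape as the output

grad_a = np.einsum('ik,jk->ij', grad_c, b)  # d c / d a
grad_b = np.einsum('ij,ik->jk', a, grad_c)  # d c / d b

assert grad_a.shape == a.shape  # matches SetOutputsDim(x_grad, dims of operand)
assert grad_b.shape == b.shape
```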
Why not use x_name directly here anymore?
#func : UnchangedMultiInferMeta
#param : [x]
#kernel :
#func : einsum_grad
Please remove this commented-out code.

It looks like AsIntermediate can also guarantee backward compatibility.
PR types
Others
PR changes
Others
Describe
Others
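This PR enables FLAGS_einsum_opt by default, an einsum memory optimization. A hedged NumPy sketch of the idea (the variable names and decomposition are illustrative, not Paddle's actual kernel code): a multi-operand einsum can be decomposed into pairwise contractions, and caching the intermediate result lets the backward pass reuse it instead of recomputing the first contraction.

```python
# Hedged sketch of the caching idea behind the einsum optimization:
# decompose a three-operand einsum into pairwise contractions and keep
# the intermediate for the backward pass.
import numpy as np

x = np.random.rand(2, 3)
y = np.random.rand(3, 4)
z = np.random.rand(4, 5)

# Forward: cache the intermediate of the first pairwise contraction.
cache_xy = np.einsum('ij,jk->ik', x, y)  # kept alive for backward
out = np.einsum('ik,kl->il', cache_xy, z)

# Sanity check: the decomposition matches the single fused einsum.
ref = np.einsum('ij,jk,kl->il', x, y, z)
assert np.allclose(out, ref)

# Backward w.r.t. z reuses the cached intermediate directly,
# avoiding a recomputation of einsum('ij,jk->ik', x, y).
grad_out = np.ones_like(out)
grad_z = np.einsum('ik,il->kl', cache_xy, grad_out)
assert grad_z.shape == z.shape
```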
TODO: