prepare_gradient_aggregation for non-leaf output of PartialProgramLayer #44893
Conversation
Your PR was submitted successfully. Thank you for your contribution to this open-source project!
for in_arg in op.input_arg_names:
    if in_arg == var.name:
        return True
return False
return var.name in op.input_arg_names
It seems the logic is no longer equivalent this way.
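For a flat list of strings, the loop and the reviewer's one-liner are in fact equivalent; a minimal sketch (the `op.input_arg_names` list is simulated with plain Python lists, which is an assumption about its type):

```python
def contains_loop(arg_names, name):
    # Loop-based membership check, as written in the PR.
    for in_arg in arg_names:
        if in_arg == name:
            return True
    return False

def contains_in(arg_names, name):
    # Reviewer's suggested one-liner.
    return name in arg_names

# Both checks agree on present and absent names.
args = ["x", "y@GRAD", "z"]
for probe in ["x", "y@GRAD", "missing"]:
    assert contains_loop(args, probe) == contains_in(args, probe)
```

The two differ only if `in_arg == var.name` relied on a custom `__eq__`; for Paddle's argument-name strings they behave identically.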
@@ -287,6 +287,63 @@ def _verify_program(self, main_program):

        return main_program

    def prepare_gradient_aggregation(self, main_program, target_program):
        # Why we need add Reverse gradient aggregation operation ?
Function comments are best written in the docstring format:
"""
xxxx
"""
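An illustrative sketch of the suggested style (the function body here is a placeholder, not the PR's implementation):

```python
def prepare_gradient_aggregation(main_program, target_program):
    """Insert gradient-aggregation ops for non-leaf outputs.

    Why we need reverse gradient aggregation is explained here,
    in a triple-quoted docstring, instead of a leading # comment.
    """
    return main_program
```

A docstring is introspectable (`help()`, `__doc__`) and picked up by documentation tooling, which a `#` comment is not.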
lambda x: any([
    out_arg == var_grad_name
    for out_arg in x[1].output_arg_names
]), enumerate(target_program.block(0).ops)))
Why use enumerate here?
The value produced by enumerate is the insertion idx, which is used later when inserting the Op.
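The pattern can be sketched with a stand-in op class (`FakeOp` is hypothetical; the real objects are Paddle operators): `enumerate` pairs each op with its position, so the filter result carries the index where a new op can later be inserted.

```python
class FakeOp:
    # Hypothetical stand-in exposing only output_arg_names.
    def __init__(self, output_arg_names):
        self.output_arg_names = output_arg_names

ops = [FakeOp(["a"]), FakeOp(["x@GRAD"]), FakeOp(["x@GRAD", "b"])]
var_grad_name = "x@GRAD"

# Same shape as the PR's filter: x is an (idx, op) pair.
found = list(filter(
    lambda x: any(out_arg == var_grad_name
                  for out_arg in x[1].output_arg_names),
    enumerate(ops)))

indices = [idx for idx, _ in found]
# Positions 1 and 2 produce x@GRAD, so both indices are kept.
```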
    return False

def _insert_aggregation_ops_for_var(target_program, var):
    var_grad_name = var.name + "@GRAD"
Better not to hard-code + "@GRAD" here; the framework has a unified API for the grad suffix.
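A framework-agnostic sketch of the point: route the suffix through one shared definition rather than repeating the literal (`GRAD_SUFFIX` and `grad_name` are assumed names for illustration; Paddle exposes the suffix through its own core API rather than this constant).

```python
# Single source of truth for the suffix, instead of scattering "@GRAD".
GRAD_SUFFIX = "@GRAD"

def grad_name(var_name):
    # Build the gradient variable name from the shared definition.
    return var_name + GRAD_SUFFIX

print(grad_name("fc_0.w_0"))  # fc_0.w_0@GRAD
```

If the framework ever changes the suffix, only the one definition needs updating.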
# len(finded_ops) may > 1, because we may have fill_constant op.
if len(finded_ops) == 0:
    return None
suffix = "@dy2static"
Wouldn't it be better to use var_name + _dy2static + grad_suffix here?
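A sketch of that naming composition (the suffix values and helper name are assumptions read off the diff, not framework constants):

```python
# Assumed suffix literals, taken from the diff under review.
DY2STATIC_SUFFIX = "@dy2static"
GRAD_SUFFIX = "@GRAD"

def aggregated_grad_name(var_name):
    # var_name + dy2static marker + grad suffix, per the suggestion.
    return var_name + DY2STATIC_SUFFIX + GRAD_SUFFIX

print(aggregated_grad_name("out"))  # out@dy2static@GRAD
```

Composing the name this way keeps the dy2static marker distinct from the gradient suffix, so either part can be changed independently.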
LGTM
PR types
Others
PR changes
Others
Describe