[Typing][B-40] Add type annotations for python/paddle/autograd/backward_mode.py
#66277
Conversation
Your PR was submitted successfully. Thank you for your contribution to this open source project!
def backward(
    tensors: list[paddle.Tensor],
    grad_tensors: list[paddle.Tensor] | None = None,
    retain_graph: bool | None = False,
) -> None:
A few small issues:
- It is better to use Tensor, imported under TYPE_CHECKING
- The list elements of grad_tensors can be None
- Why does retain_graph have None in its annotation?
Got it, I had misunderstood what "optional" means; fixed.
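Taken together, a minimal sketch of the signature shape the review is asking for could look like the following. This is only a sketch, not the final Paddle code; the TYPE_CHECKING import path and the "..." body are placeholders.

from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Imported only for type checkers; no runtime dependency is added here.
    from paddle import Tensor


def backward(
    tensors: list[Tensor],
    grad_tensors: list[Tensor | None] | None = None,  # list elements may be None
    retain_graph: bool = False,  # plain bool; the default is False, not None
) -> None:
    ...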
@@ -32,7 +38,7 @@ def backward(tensors, grad_tensors=None, retain_graph=False):
         If None, all the gradients of the ``tensors`` is the default value which is filled with 1.0.
         Defaults to None.

-    retain_graph(bool, optional): If False, the graph used to compute grads will be freed. If you would
+    retain_graph(bool|False, optional): If False, the graph used to compute grads will be freed. If you would
Suggested change:
-    retain_graph(bool|False, optional): If False, the graph used to compute grads will be freed. If you would
+    retain_graph(bool, optional): If False, the graph used to compute grads will be freed. If you would

There is no need to spell out the concrete default value in the type.
Got it, fixed.
def check_tensors(
    in_out_list: list[paddle.Tensor], name: str
) -> list[paddle.Tensor]:
Internal functions generally don't need annotations. If you do annotate this one, there is actually a small problem: in_out_list should be list[Tensor] | tuple[Tensor, ...] | Tensor, and the output can also be a tuple.
Got it, fixed, thanks for the reminder!
Please take care of the CLA signing: the email used for your git commits must match the one on your GitHub account, otherwise the PR cannot be merged. All of your PRs have this problem. Even if new commits use the correct email, the old commits will still block the CLA signing, so you need to rewrite the old commits (e.g. via rebase) and force push.
import paddle
from paddle.base import core, framework
from paddle.base.backward import gradients_with_optimizer  # noqa: F401

if TYPE_CHECKING:
    from .. import Tensor
Suggested change:
-    from .. import Tensor
+    from paddle import Tensor
Got it, fixed.
-def backward(tensors, grad_tensors=None, retain_graph=False):
+def backward(
+    tensors: list[Tensor],
+    grad_tensors: list[Tensor, None] | None = None,
Suggested change:
-    grad_tensors: list[Tensor, None] | None = None,
+    grad_tensors: list[Tensor | None] | None = None,

list takes only one generic type parameter.
Fixed.
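A side note on the generic: as far as type checkers are concerned, list takes exactly one type parameter, so "elements may be None" has to be written as a union inside the brackets. A tiny illustration, with hypothetical variable names and int standing in for Tensor so it runs without Paddle:

from __future__ import annotations

grads_ok: list[int | None] = [1, None, 3]    # a list whose elements may be None
# grads_bad: list[int, None] = [1, None, 3]  # flagged by type checkers: list expects a single type parameter

print(grads_ok)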
def check_tensors(
    in_out_list: list[Tensor] | tuple[Tensor, ...] | Tensor, name: str
) -> list[Tensor] | tuple[Tensor, ...]:
Suggested change:
-def check_tensors(
-    in_out_list: list[Tensor] | tuple[Tensor, ...] | Tensor, name: str
-) -> list[Tensor] | tuple[Tensor, ...]:
+def check_tensors(
+    in_out_list: Sequence[Tensor] | Tensor, name: str
+) -> Sequence[Tensor]:
Also, the isinstance(x, (paddle.Tensor, core.eager.Tensor)) calls on lines 104, 109, and 126 are a legacy pattern; paddle.Tensor and core.eager.Tensor are now the same thing, so they can be merged into isinstance(x, paddle.Tensor).
The above has been fixed and the code updated accordingly. Thanks for the suggestion!
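For illustration only, here is a simplified, self-contained sketch of the helper after both suggestions (a Sequence annotation plus the merged isinstance check). It shows the shape of the change, not the actual Paddle implementation:

from __future__ import annotations

from collections.abc import Sequence
from typing import TYPE_CHECKING

import paddle

if TYPE_CHECKING:
    from paddle import Tensor


def check_tensors(
    in_out_list: Sequence[Tensor] | Tensor, name: str
) -> Sequence[Tensor]:
    # Simplified sketch: normalize the input to a sequence of tensors.
    if isinstance(in_out_list, (list, tuple)):
        assert len(in_out_list) > 0, f"{name} cannot be empty"
        for each_var in in_out_list:
            # paddle.Tensor and core.eager.Tensor are the same class now,
            # so a single isinstance check covers both spellings.
            assert isinstance(
                each_var, paddle.Tensor
            ), f"Elements of {name} must be paddle.Tensor"
        return in_out_list
    assert isinstance(
        in_out_list, paddle.Tensor
    ), f"{name} must be a paddle.Tensor or a list/tuple of them"
    return [in_out_list]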
python/paddle/autograd/backward_mode.py
PR Category
User Experience
PR Types
Improvements
Description
Add type annotations for python/paddle/autograd/backward_mode.py.
Related links
@SigureMo @megemini
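For reference, a minimal usage sketch of the API being annotated (assumes a standard Paddle install; tensor values are purely illustrative):

import paddle

x = paddle.to_tensor([[1.0, 2.0], [3.0, 4.0]], stop_gradient=False)
y = x * x

# tensors is list[Tensor]; grad_tensors defaults to None (gradients filled with 1.0);
# retain_graph is a plain bool.
paddle.autograd.backward([y], grad_tensors=None, retain_graph=False)

print(x.grad)  # elementwise gradient of x * x, i.e. 2 * x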