
[Typing][C-68,C-77][BUAA] Add type annotations for python/paddle/nn/* #67186

Closed
wants to merge 4 commits

Conversation

lwkhahaha
Contributor

PR Category

User Experience

PR Types

Others

Description

Add type annotations for python/paddle/nn/*


paddle-bot bot commented Aug 8, 2024

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@luotao1 luotao1 added the contributor (external developers) and HappyOpenSource Pro (advanced "happy open source" program with more challenging tasks) labels Aug 8, 2024
@lwkhahaha
Contributor Author

[Screenshot 2024-08-09 13:23:45] Could you please advise how to resolve this error?

@megemini megemini left a comment

Please follow the PR writing requirements in #65008; otherwise this cannot be reviewed.


#67178 has already been merged, so remove this here.

quant_round_type: int = 1,
quant_max_bound: float = 127.0,
quant_min_bound: float = -127.0,
out_scale: Tensor = -1,

Suggested change
out_scale: Tensor = -1,
out_scale: float = -1,
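A minimal sketch of the reviewer's point: the default `-1` is a plain Python number, so annotating the parameter as `Tensor` is inconsistent; the annotation should describe what the default actually is. The function name below is illustrative, not Paddle's real signature.

```python
from __future__ import annotations

import typing


def quantized_op(out_scale: float = -1) -> float:
    """Illustrative only: the annotation now matches the numeric default."""
    return out_scale


# The resolved hint is float, consistent with the default value.
hints = typing.get_type_hints(quantized_op)
```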

quant_min_bound: float = -127.0,
out_scale: Tensor = -1,
compute_dtype: str = "default",
) -> Tensor:

Suggested change
) -> Tensor:
) -> tuple[Tensor, Tensor, Tensor, Tensor]:
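The suggested return annotation can be sketched like this; the `Tensor` stand-in and function name are illustrative, not Paddle's actual code:

```python
from __future__ import annotations

import typing


class Tensor:  # minimal stand-in for paddle.Tensor, illustration only
    pass


def fused_op(x: Tensor) -> tuple[Tensor, Tensor, Tensor, Tensor]:
    # An op that returns four tensors should annotate the full tuple,
    # not a bare Tensor.
    return x, x, x, x


# The resolved return hint is a 4-element tuple type.
ret = typing.get_type_hints(fused_op)["return"]
```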

Comment on lines +437 to +439
out_scale: int = -1,
compute_dtype: str = "default",
) -> Tensor:

Same as above.


CI error: compile options are needed. Following https://github.com/PaddlePaddle/Paddle/pull/67178/files, add in the example:

            >>> # doctest: +SKIP('Need compile flash attention')
            >>> # doctest: +REQUIRES(env:GPU)
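A hypothetical docstring sketch of where those directives sit in Paddle's example convention; the function itself is a placeholder:

```python
def flash_attention_example() -> None:
    """Placeholder showing where the doctest directives go.

    Examples:
        .. code-block:: python

            >>> # doctest: +SKIP('Need compile flash attention')
            >>> # doctest: +REQUIRES(env:GPU)
            >>> import paddle  # the real example code would follow here
    """


# The directives are plain docstring text that the doctest runner reads
# before deciding whether to execute the example at all.
```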

Comment on lines +53 to +63
dropout1_rate: float | None = 0.5,
dropout2_rate: float | None = 0.5,
activation: str | None = "relu",
ln1_epsilon: float | None = 1e-5,
ln2_epsilon: float | None = 1e-5,
pre_layer_norm: bool | None = False,
training: bool | None = True,
mode: str | None = 'upscale_in_train',
ring_id: int | None = -1,
add_residual: bool | None = True,
name: str | None = None,

Why is there `None` here?
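A sketch of the distinction the reviewer is drawing (function names are illustrative): `X | None` promises callers that `None` is accepted, which is misleading when the default is a concrete value and `None` is never handled.

```python
from __future__ import annotations


# Misleading: the default is 0.5 and None is never a valid input,
# yet the annotation invites callers to pass None.
def dropout_bad(rate: float | None = 0.5) -> float:
    return rate  # would break downstream if rate were actually None


# Correct: annotate exactly the accepted type.
def dropout_good(rate: float = 0.5) -> float:
    return rate


# `| None` belongs only where None is genuinely accepted and handled:
def op_name(name: str | None = None) -> str:
    return name if name is not None else "auto_generated"
```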

ln2_epsilon: float | None = 1e-5,
pre_layer_norm: bool | None = False,
training: bool | None = True,
mode: str | None = 'upscale_in_train',

Use `Literal` here.
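A sketch of applying `Literal` to such a parameter; the two mode strings match Paddle's documented dropout modes, but treat the exact signature as illustrative:

```python
from __future__ import annotations

from typing import Literal


def apply_dropout(
    mode: Literal['upscale_in_train', 'downscale_in_infer'] = 'upscale_in_train',
) -> str:
    # Literal lets type checkers reject any string outside the two
    # documented modes, instead of accepting an arbitrary str.
    return mode
```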

Comment on lines +336 to +340
dropout_rate: float | None = 0.5,
ln_epsilon: float | None = 1e-5,
training: bool | None = True,
mode: str | None = 'upscale_in_train',
name: str | None = None,

Same as above.

Comment on lines +523 to +531
dropout_rate: float | None = 0.5,
attn_dropout_rate: float | None = 0.5,
ln_epsilon: float | None = 1e-05,
training: bool | None = True,
mode: str | None = 'upscale_in_train',
ring_id: int | None = -1,
add_residual: bool | None = True,
num_heads: int | None = -1,
transpose_qkv_wb: bool | None = False,

Same as above.

@@ -962,41 +969,41 @@ def fused_multi_head_attention(


def fused_multi_transformer(

A few issues:

  • For `list[Tensor]|tuple[Tensor]`, use `Sequence[Tensor]` uniformly
  • Do not add `None` when the default is not `None` and `None` is not documented as acceptable
  • Use of `Literal`
  • This function needs `overload`, since its behavior depends on the value of `cache_kvs`
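One way to sketch the last two points together (names and return shapes are illustrative, not the real `fused_multi_transformer` signature): `Sequence[Tensor]` covers both lists and tuples, and `@overload` lets the declared return type follow `cache_kvs`.

```python
from __future__ import annotations

from collections.abc import Sequence
from typing import Any, overload

Tensor = Any  # stand-in for paddle.Tensor, illustration only


@overload
def fused_transformer(x: Tensor, cache_kvs: None = ...) -> Tensor: ...
@overload
def fused_transformer(
    x: Tensor, cache_kvs: Sequence[Tensor]
) -> tuple[Tensor, list[Tensor]]: ...


def fused_transformer(x, cache_kvs=None):
    # Runtime sketch: when caches are passed in, the op also returns
    # the caches, so the static return type differs per overload.
    if cache_kvs is None:
        return x
    return x, list(cache_kvs)
```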


paddle-ci-bot bot commented Aug 16, 2024

Sorry to inform you that 21229fd's CIs passed more than 7 days ago. To prevent PR conflicts, you need to re-run all CIs manually.

@SigureMo SigureMo changed the title [Typing][C-68,C-77,C-85][BUAA] Add type annotations for python/paddle/nn/* [Typing][C-68,C-77][BUAA] Add type annotations for python/paddle/nn/* Aug 17, 2024
@luotao1 luotao1 closed this Aug 26, 2024
Labels
contributor (external developers), HappyOpenSource Pro (advanced "happy open source" program with more challenging tasks)
4 participants