[Doc] Fix embedding doc; test=document_fix #66974

Merged: 2 commits, Aug 5, 2024
18 changes: 16 additions & 2 deletions python/paddle/nn/functional/input.py
@@ -132,7 +132,21 @@ def embedding_renorm_(
x: Tensor, weight: Tensor, max_norm: float, norm_type: float = 2.0
) -> Tensor:
r"""
Renormalize the embedding weight with respect to the provided :attr:`max_norm` and :attr:`norm_type` .

Note:
In dynamic graph mode, the input weight is updated in place, and the updated weight is returned.

Args:
x(Tensor): A Tensor with type int32/int64, which contains the id information. Each input id should
satisfy :math:`0 \leq id < weight.shape[0]` .
weight (Tensor): The lookup table parameter. A 2-D Tensor whose two dimensions are the size of the
dictionary of embeddings and the size of each embedding vector, respectively.
max_norm(float): The maximum norm for each embedding vector.
norm_type(float, optional): The p of the p-norm to compute for the max_norm option. Default: 2.0.

Returns:
Tensor, the updated weight. The data type is the same as :attr:`weight`.
"""
with paddle.set_grad_enabled(False):
unique_x = paddle.unique(x)
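The renorm step documented above can be sketched in plain Python (a hypothetical standalone helper, not Paddle's implementation): for each embedding row referenced by an id, compute its p-norm and, if the norm exceeds `max_norm`, scale the whole row by `max_norm / norm` in place.

```python
def renorm_rows(weight, ids, max_norm, norm_type=2.0):
    # weight: list of rows (lists of floats); ids: looked-up row indices.
    # Rows whose p-norm exceeds max_norm are rescaled in place, mirroring
    # the documented in-place update in dynamic graph mode.
    for i in set(ids):
        row = weight[i]
        norm = sum(abs(v) ** norm_type for v in row) ** (1.0 / norm_type)
        if norm > max_norm:
            scale = max_norm / norm
            weight[i] = [v * scale for v in row]
    return weight
```

Note that only rows actually referenced by an id are touched, which is why the real implementation first deduplicates the ids with `paddle.unique`.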
@@ -201,7 +215,7 @@ def embedding(
encounters :math:`padding\_idx` in id, and the padding data will not be updated while training.
If set to None, it has no effect on the output. Default: None.
max_norm(float, optional): If provided, each embedding vector whose norm exceeds :attr:`max\_norm`
is renormalized to have norm :attr:`max\_norm` . This updates the input embedding weight in place in dynamic graph mode. Default: None.
norm_type(float, optional): The p of the p-norm to compute for the max_norm option. Default: 2.0.
name(str|None, optional): For detailed information, please refer
to :ref:`api_guide_Name`. Usually name is no need to set and
3 changes: 3 additions & 0 deletions python/paddle/nn/layer/common.py
@@ -1701,6 +1701,9 @@ class Embedding(Layer):
to :math:`vocab\_size + padding\_idx` . It will output all-zero padding data whenever lookup
encounters :math:`padding\_idx` in id, and the padding data will not be updated while training.
If set to None, it has no effect on the output. Default: None.
max_norm(float, optional): If provided, each embedding vector whose norm exceeds :attr:`max\_norm`
is renormalized to have norm :attr:`max\_norm` . This updates the input embedding weight in place in dynamic graph mode. Default: None.
norm_type(float, optional): The p of the p-norm to compute for the max_norm option. Default: 2.0.
sparse(bool, optional): The flag indicating whether to use sparse update. This parameter only
affects the performance of the backwards gradient update. It is recommended to set
True because sparse update is faster. However, some optimizers do not support sparse update,
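As an illustration of the `padding_idx` semantics described in the docstrings above, here is a minimal pure-Python sketch (the helper name and list-based weights are hypothetical, not Paddle's API): each id selects one row of the weight, rows at `padding_idx` come back all-zero, and a negative `padding_idx` counts from the vocabulary size.

```python
def embedding_lookup(weight, ids, padding_idx=None):
    # Toy sketch of the documented lookup semantics (not Paddle's code).
    # A negative padding_idx is normalized to vocab_size + padding_idx,
    # and lookups hitting it return an all-zero vector.
    if padding_idx is not None and padding_idx < 0:
        padding_idx += len(weight)
    out = []
    for i in ids:
        if i == padding_idx:
            out.append([0.0] * len(weight[i]))
        else:
            out.append(list(weight[i]))
    return out
```

Because the zero rows are produced at lookup time, the weight row at `padding_idx` itself is never modified, which matches the note that padding data is not updated while training.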