[Docathon] Fix NO.8-NO.11 API label (PaddlePaddle#57614)
zade23 authored and jiahy0825 committed Oct 16, 2023
1 parent 7b273ef commit 838e6dc
Showing 13 changed files with 27 additions and 27 deletions.
2 changes: 1 addition & 1 deletion python/paddle/base/layers/math_op_patch.py
@@ -241,7 +241,7 @@ def place(self):
 def astype(self, dtype):
 """
 **Notes**:
-**The variable must be a** :ref:`api_base_Tensor`
+**The variable must be a** :ref:`api_paddle_Tensor`
 Cast a variable to a specified data type.
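The `astype` docstring above describes an element-wise cast to a target dtype that returns a new variable. As a toy illustration of that semantics (plain Python built-in types standing in for Paddle dtypes; this `astype` is a hypothetical helper, not Paddle's implementation):

```python
def astype(values, dtype):
    """Minimal stand-in for Tensor.astype: cast each element of
    `values` to `dtype` and return a new list; the input list is
    left untouched, mirroring astype's copy-on-cast behavior."""
    return [dtype(v) for v in values]

floats = [1.7, 2.2, -3.9]
ints = astype(floats, int)  # float->int cast truncates toward zero
# floats itself is unchanged; ints is a new list
```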
4 changes: 2 additions & 2 deletions python/paddle/incubate/optimizer/lars_momentum.py
@@ -50,8 +50,8 @@ class LarsMomentumOptimizer(Optimizer):
 Default None, meaning there is no regularization.
 grad_clip (GradientClipBase, optional): Gradient cliping strategy, it's an instance of
 some derived class of ``GradientClipBase`` . There are three cliping strategies
-( :ref:`api_base_clip_GradientClipByGlobalNorm` , :ref:`api_base_clip_GradientClipByNorm` ,
-:ref:`api_base_clip_GradientClipByValue` ). Default None, meaning there is no gradient clipping.
+( :ref:`api_paddle_nn_ClipGradByGlobalNorm` , :ref:`api_paddle_nn_ClipGradByNorm` ,
+:ref:`api_paddle_nn_ClipGradByValue` ). Default None, meaning there is no gradient clipping.
 name (str, optional): This parameter is used by developers to print debugging information. \
 For details, please refer to :ref:`api_guide_Name`. Default is None.
 exclude_from_weight_decay (list[str], optional): Name string of layers which will be exclude from lars weight decay. Default is None.
4 changes: 2 additions & 2 deletions python/paddle/incubate/optimizer/lbfgs.py
@@ -64,8 +64,8 @@ class LBFGS(Optimizer):
 Default None, meaning there is no regularization.
 grad_clip (GradientClipBase, optional): Gradient cliping strategy, it's an instance of \
 some derived class of ``GradientClipBase`` . There are three cliping strategies \
-( :ref:`api_base_clip_GradientClipByGlobalNorm` , :ref:`api_base_clip_GradientClipByNorm` , \
-:ref:`api_base_clip_GradientClipByValue` ). Default None, meaning there is no gradient clipping.
+( :ref:`api_paddle_nn_ClipGradByGlobalNorm` , :ref:`api_paddle_nn_ClipGradByNorm` , \
+:ref:`api_paddle_nn_ClipGradByValue` ). Default None, meaning there is no gradient clipping.
 name (str, optional): Normally there is no need for user to set this property.
 For more information, please refer to :ref:`api_guide_Name`.
 The default value is None.
8 changes: 4 additions & 4 deletions python/paddle/nn/clip.py
@@ -950,16 +950,16 @@ def set_gradient_clip(clip, param_list=None, program=None):
 and it may be removed in future releases, so it is not recommended.
 It is recommended to set ``grad_clip`` when initializing the ``optimizer`` ,
 this is a better method to clip gradient. There are three clipping strategies:
-:ref:`api_base_clip_GradientClipByGlobalNorm` , :ref:`api_base_clip_GradientClipByNorm` ,
-:ref:`api_base_clip_GradientClipByValue` .
+:ref:`api_paddle_nn_ClipGradByGlobalNorm` , :ref:`api_paddle_nn_ClipGradByNorm` ,
+:ref:`api_paddle_nn_ClipGradByValue` .
 To specify parameters that require gradient clip.
 Args:
 grad_clip (GradientClipBase, optional): Gradient cliping strategy, it's an instance of
 some derived class of ``GradientClipBase`` . There are three cliping strategies
-( :ref:`api_base_clip_GradientClipByGlobalNorm` , :ref:`api_base_clip_GradientClipByNorm` ,
-:ref:`api_base_clip_GradientClipByValue` ). Default value: None, and there is no
+( :ref:`api_paddle_nn_ClipGradByGlobalNorm` , :ref:`api_paddle_nn_ClipGradByNorm` ,
+:ref:`api_paddle_nn_ClipGradByValue` ). Default value: None, and there is no
 gradient clipping.
 param_list (list(Variable), optional): Parameters that require gradient clip.
 It can be a list of parameter or a list of parameter's name.
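The global-norm strategy referenced throughout this commit (now labeled `paddle.nn.ClipGradByGlobalNorm`) rescales all gradients jointly when their combined L2 norm exceeds a threshold. A minimal pure-Python sketch of that rule, assuming the standard formula (gradients are scaled by `clip_norm / global_norm` only when the bound is exceeded); this illustrates the math, not Paddle's implementation:

```python
import math

def clip_by_global_norm(grads, clip_norm):
    """Jointly rescale a list of gradient vectors so that their
    combined L2 norm does not exceed clip_norm. Gradients within
    the bound are returned unchanged."""
    global_norm = math.sqrt(sum(g * g for grad in grads for g in grad))
    if global_norm <= clip_norm:
        return grads
    scale = clip_norm / global_norm
    return [[g * scale for g in grad] for grad in grads]

# One gradient vector [3, 4] has global norm 5; with clip_norm=1
# every element is scaled by 1/5, preserving the direction.
clipped = clip_by_global_norm([[3.0, 4.0]], clip_norm=1.0)
```

By contrast, the by-norm variant rescales each gradient independently and the by-value variant clamps each element into a range; the joint rescaling above is what preserves the relative magnitudes across parameters.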
4 changes: 2 additions & 2 deletions python/paddle/optimizer/adadelta.py
@@ -61,8 +61,8 @@ class Adadelta(Optimizer):
 Default None, meaning there is no regularization.
 grad_clip (GradientClipBase, optional): Gradient cliping strategy, it's an instance of
 some derived class of ``GradientClipBase`` . There are three cliping strategies
-( :ref:`api_base_clip_GradientClipByGlobalNorm` , :ref:`api_base_clip_GradientClipByNorm` ,
-:ref:`api_base_clip_GradientClipByValue` ). Default None, meaning there is no gradient clipping.
+( :ref:`api_paddle_nn_ClipGradByGlobalNorm` , :ref:`api_paddle_nn_ClipGradByNorm` ,
+:ref:`api_paddle_nn_ClipGradByValue` ). Default None, meaning there is no gradient clipping.
 name (str, optional): The default value is None. Normally there is no need for user
 to set this property. For more information, please refer to
 :ref:`api_guide_Name` .
4 changes: 2 additions & 2 deletions python/paddle/optimizer/adam.py
@@ -79,8 +79,8 @@ class Adam(Optimizer):
 Default None, meaning there is no regularization.
 grad_clip (GradientClipBase, optional): Gradient cliping strategy, it's an instance of
 some derived class of ``GradientClipBase`` . There are three cliping strategies
-( :ref:`api_base_clip_GradientClipByGlobalNorm` , :ref:`api_base_clip_GradientClipByNorm` ,
-:ref:`api_base_clip_GradientClipByValue` ). Default None, meaning there is no gradient clipping.
+( :ref:`api_paddle_nn_ClipGradByGlobalNorm` , :ref:`api_paddle_nn_ClipGradByNorm` ,
+:ref:`api_paddle_nn_ClipGradByValue` ). Default None, meaning there is no gradient clipping.
 lazy_mode (bool, optional): The official Adam algorithm has two moving-average accumulators.
 The accumulators are updated at every step. Every element of the two moving-average
 is updated in both dense mode and sparse mode. If the size of parameter is very large,
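The `lazy_mode` context above distinguishes dense updates (every row of the moving averages is updated each step) from sparse updates (only rows that actually received a gradient). A simplified sketch of that difference with a single first-moment accumulator; names and the one-accumulator shape are assumptions for illustration, not Paddle's internals:

```python
def update_moments(moments, grads, beta, lazy_mode):
    """Update per-row moving averages m <- beta*m + (1-beta)*g.
    In lazy mode, rows with a zero gradient are skipped entirely,
    which saves work when most gradient rows are zero (sparse)."""
    out = list(moments)
    for i, g in enumerate(grads):
        if lazy_mode and g == 0.0:
            continue  # row left stale until it receives a gradient
        out[i] = beta * out[i] + (1 - beta) * g
    return out

dense = update_moments([1.0, 1.0], [0.0, 2.0], beta=0.9, lazy_mode=False)
lazy = update_moments([1.0, 1.0], [0.0, 2.0], beta=0.9, lazy_mode=True)
# dense decays row 0 toward zero; lazy leaves row 0 untouched
```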
4 changes: 2 additions & 2 deletions python/paddle/optimizer/adamax.py
@@ -74,8 +74,8 @@ class Adamax(Optimizer):
 Default None, meaning there is no regularization.
 grad_clip (GradientClipBase, optional): Gradient clipping strategy, it's an instance of
 some derived class of ``GradientClipBase`` . There are three clipping strategies
-( :ref:`api_base_clip_GradientClipByGlobalNorm` , :ref:`api_base_clip_GradientClipByNorm` ,
-:ref:`api_base_clip_GradientClipByValue` ). Default None, meaning there is no gradient clipping.
+( :ref:`api_paddle_nn_ClipGradByGlobalNorm` , :ref:`api_paddle_nn_ClipGradByNorm` ,
+:ref:`api_paddle_nn_ClipGradByValue` ). Default None, meaning there is no gradient clipping.
 name (str, optional): Normally there is no need for user to set this property.
 For more information, please refer to :ref:`api_guide_Name`.
 The default value is None.
4 changes: 2 additions & 2 deletions python/paddle/optimizer/adamw.py
@@ -77,8 +77,8 @@ class AdamW(Optimizer):
 Default: None.
 grad_clip (GradientClipBase, optional): Gradient clipping strategy, it's an instance of
 some derived class of ``GradientClipBase`` . There are three clipping strategies
-( :ref:`api_base_clip_GradientClipByGlobalNorm` , :ref:`api_base_clip_GradientClipByNorm` ,
-:ref:`api_base_clip_GradientClipByValue` ). Default None, meaning there is no gradient clipping.
+( :ref:`api_paddle_nn_ClipGradByGlobalNorm` , :ref:`api_paddle_nn_ClipGradByNorm` ,
+:ref:`api_paddle_nn_ClipGradByValue` ). Default None, meaning there is no gradient clipping.
 lazy_mode (bool, optional): The official Adam algorithm has two moving-average accumulators.
 The accumulators are updated at every step. Every element of the two moving-average
 is updated in both dense mode and sparse mode. If the size of parameter is very large,
4 changes: 2 additions & 2 deletions python/paddle/optimizer/lbfgs.py
@@ -346,8 +346,8 @@ class LBFGS(Optimizer):
 Default None, meaning there is no regularization.
 grad_clip (GradientClipBase, optional): Gradient cliping strategy, it's an instance of \
 some derived class of ``GradientClipBase`` . There are three cliping strategies \
-( :ref:`api_base_clip_GradientClipByGlobalNorm` , :ref:`api_base_clip_GradientClipByNorm` , \
-:ref:`api_base_clip_GradientClipByValue` ). Default None, meaning there is no gradient clipping.
+( :ref:`api_paddle_nn_ClipGradByGlobalNorm` , :ref:`api_paddle_nn_ClipGradByNorm` , \
+:ref:`api_paddle_nn_ClipGradByValue` ). Default None, meaning there is no gradient clipping.
 name (str, optional): Normally there is no need for user to set this property.
 For more information, please refer to :ref:`api_guide_Name`.
 The default value is None.
4 changes: 2 additions & 2 deletions python/paddle/optimizer/momentum.py
@@ -66,8 +66,8 @@ class Momentum(Optimizer):
 Default None, meaning there is no regularization.
 grad_clip (GradientClipBase, optional): Gradient clipping strategy, it's an instance of
 some derived class of ``GradientClipBase`` . There are three clipping strategies
-( :ref:`api_base_clip_GradientClipByGlobalNorm` , :ref:`api_base_clip_GradientClipByNorm` ,
-:ref:`api_base_clip_GradientClipByValue` ). Default None, meaning there is no gradient clipping.
+( :ref:`api_paddle_nn_ClipGradByGlobalNorm` , :ref:`api_paddle_nn_ClipGradByNorm` ,
+:ref:`api_paddle_nn_ClipGradByValue` ). Default None, meaning there is no gradient clipping.
 multi_precision (bool, optional): Whether to use multi-precision during weight updating. Default is false.
 rescale_grad (float, optional): Multiply the gradient with `rescale_grad` before updating. \
 Often choose to be ``1.0/batch_size``.
4 changes: 2 additions & 2 deletions python/paddle/optimizer/optimizer.py
@@ -115,8 +115,8 @@ class Optimizer:
 Default None, meaning there is no regularization.
 grad_clip (GradientClipBase, optional): Gradient cliping strategy, it's an instance of \
 some derived class of ``GradientClipBase`` . There are three cliping strategies \
-( :ref:`api_base_clip_GradientClipByGlobalNorm` , :ref:`api_base_clip_GradientClipByNorm` , \
-:ref:`api_base_clip_GradientClipByValue` ). Default None, meaning there is no gradient clipping.
+( :ref:`api_paddle_nn_ClipGradByGlobalNorm` , :ref:`api_paddle_nn_ClipGradByNorm` , \
+:ref:`api_paddle_nn_ClipGradByValue` ). Default None, meaning there is no gradient clipping.
 name (str, optional): Normally there is no need for user to set this property.
 For more information, please refer to :ref:`api_guide_Name`.
 The default value is None.
4 changes: 2 additions & 2 deletions python/paddle/optimizer/rmsprop.py
@@ -98,8 +98,8 @@ class RMSProp(Optimizer):
 Default None, meaning there is no regularization.
 grad_clip (GradientClipBase, optional): Gradient clipping strategy, it's an instance of
 some derived class of ``GradientClipBase`` . There are three clipping strategies
-( :ref:`api_base_clip_GradientClipByGlobalNorm` , :ref:`api_base_clip_GradientClipByNorm` ,
-:ref:`api_base_clip_GradientClipByValue` ). Default None, meaning there is no gradient clipping.
+( :ref:`api_paddle_nn_ClipGradByGlobalNorm` , :ref:`api_paddle_nn_ClipGradByNorm` ,
+:ref:`api_paddle_nn_ClipGradByValue` ). Default None, meaning there is no gradient clipping.
 name (str, optional): This parameter is used by developers to print debugging information.
 For details, please refer to :ref:`api_guide_Name`. Default is None.
4 changes: 2 additions & 2 deletions python/paddle/optimizer/sgd.py
@@ -47,8 +47,8 @@ class SGD(Optimizer):
 Default None, meaning there is no regularization.
 grad_clip (GradientClipBase, optional): Gradient clipping strategy, it's an instance of
 some derived class of ``GradientClipBase`` . There are three clipping strategies
-( :ref:`api_base_clip_GradientClipByGlobalNorm` , :ref:`api_base_clip_GradientClipByNorm` ,
-:ref:`api_base_clip_GradientClipByValue` ). Default None, meaning there is no gradient clipping.
+( :ref:`api_paddle_nn_ClipGradByGlobalNorm` , :ref:`api_paddle_nn_ClipGradByNorm` ,
+:ref:`api_paddle_nn_ClipGradByValue` ). Default None, meaning there is no gradient clipping.
 name (str, optional): The default value is None. Normally there is no need for user
 to set this property. For more information, please refer to
 :ref:`api_guide_Name` .
