fix en docs of some Apis (gradients, scope_guard, cuda_places, name_scope, device_guard, load_program_state, scale, ParamAttr and WeightNormParamAttr) #41604

Merged 15 commits on Apr 25, 2022
6 changes: 3 additions & 3 deletions python/paddle/fluid/backward.py
@@ -2021,7 +2021,6 @@ def calc_gradient(targets, inputs, target_gradients=None, no_grad_set=None):
@framework.static_only
def gradients(targets, inputs, target_gradients=None, no_grad_set=None):
"""
:api_attr: Static Graph

Backpropagate the gradients of targets to inputs.

@@ -2042,8 +2041,9 @@ def gradients(targets, inputs, target_gradients=None, no_grad_set=None):
will be None.

Examples:

.. code-block:: python

:name: code-example
import paddle
import paddle.nn.functional as F

@@ -2054,7 +2054,7 @@ def gradients(targets, inputs, target_gradients=None, no_grad_set=None):
y = paddle.static.nn.conv2d(x, 4, 1, bias_attr=False)
y = F.relu(y)
z = paddle.static.gradients([y], x)
print(z) # [var x@GRAD : fluid.VarType.LOD_TENSOR.shape(-1L, 2L, 8L, 8L).astype(VarType.FP32)]
print(z) # [var x@GRAD : LOD_TENSOR.shape(-1, 2, 8, 8).dtype(float32).stop_gradient(False)]
"""
check_type(targets, 'targets', (framework.Variable, list, tuple),
'paddle.static.gradients')
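For reference, here is a self-contained sketch of the `paddle.static.gradients` call that the docstring above documents. It follows the truncated example in the hunk; the data shape, the conv2d/relu layers and the printed output come from that snippet, while the `paddle.enable_static()` and `paddle.static.data` setup lines are an assumption about the elided part, not the verbatim docstring example.

.. code-block:: python

import paddle
import paddle.nn.functional as F

paddle.enable_static()

# Build a small static graph whose input requires a gradient.
x = paddle.static.data(name='x', shape=[None, 2, 8, 8], dtype='float32')
x.stop_gradient = False
y = paddle.static.nn.conv2d(x, 4, 1, bias_attr=False)
y = F.relu(y)

# Backpropagate y to x; the result is a list with one gradient Variable.
z = paddle.static.gradients([y], x)
print(z)  # [var x@GRAD : LOD_TENSOR.shape(-1, 2, 8, 8).dtype(float32).stop_gradient(False)]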
2 changes: 1 addition & 1 deletion python/paddle/fluid/executor.py
@@ -75,7 +75,6 @@ def _switch_scope(scope):
@signature_safe_contextmanager
def scope_guard(scope):
"""
:api_attr: Static Graph

This function switches scope through the Python `with` statement.
Scope records the mapping between variable names and variables ( :ref:`api_guide_Variable` ),
@@ -94,6 +93,7 @@ def scope_guard(scope):
None

Examples:

.. code-block:: python

import paddle
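A minimal sketch of the `scope_guard` usage described above, assuming Paddle's static mode; the elided docstring example is not reproduced verbatim, and the variable name "data" and the 2x2 tensor are illustrative only.

.. code-block:: python

import numpy
import paddle

paddle.enable_static()

new_scope = paddle.static.Scope()
# Inside the guard, global_scope() resolves to new_scope.
with paddle.static.scope_guard(new_scope):
    paddle.static.global_scope().var("data").get_tensor().set(
        numpy.ones((2, 2)), paddle.CPUPlace())

# The variable was created in new_scope, not in the default global scope.
print(numpy.array(new_scope.find_var("data").get_tensor()))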
12 changes: 8 additions & 4 deletions python/paddle/fluid/framework.py
@@ -690,7 +690,7 @@ def is_compiled_with_rocm():

def cuda_places(device_ids=None):
"""
**Note**:
Note:
For multi-card tasks, please use the `FLAGS_selected_gpus` environment variable to set the visible GPU devices.
The next version will fix the problem with the `CUDA_VISIBLE_DEVICES` environment variable.

@@ -715,6 +715,7 @@ def cuda_places(device_ids=None):
list of paddle.CUDAPlace: Created GPU place list.

Examples:

.. code-block:: python

import paddle
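# The lines below are a hedged sketch, not the elided docstring example:
# with device_ids=None, cuda_places() returns one paddle.CUDAPlace per GPU
# visible to the process (requires a CUDA build of Paddle).
gpu_places = paddle.static.cuda_places()
print(gpu_places)  # e.g. [CUDAPlace(0), CUDAPlace(1)]
# Passing explicit ids selects specific devices.
first_gpu_only = paddle.static.cuda_places(device_ids=[0])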
@@ -835,6 +836,7 @@ def cpu_places(device_count=None):
list of paddle.CPUPlace: Created list of CPU places.

Examples:

.. code-block:: python

import paddle
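# Hedged sketch, not the elided docstring example: cpu_places(device_count)
# returns `device_count` paddle.CPUPlace objects; with None it falls back to
# the CPU_NUM environment variable.
places = paddle.static.cpu_places(device_count=4)
print(places)  # [CPUPlace, CPUPlace, CPUPlace, CPUPlace]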
@@ -954,7 +956,6 @@ def name(self):
@signature_safe_contextmanager
def name_scope(prefix=None):
"""
:api_attr: Static Graph

Generate hierarchical name prefix for the operators in Static Graph.

@@ -967,6 +968,7 @@ def name_scope(prefix=None):
prefix(str, optional): prefix. Default is none.

Examples:

.. code-block:: python

import paddle
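# Hedged sketch, not the elided docstring example: name_scope only adds a
# prefix to operator names in static mode and never changes computation.
paddle.enable_static()
a = paddle.static.data(name="a", shape=[1], dtype='float32')
with paddle.static.name_scope("s1"):
    b = a + 1
    with paddle.static.name_scope("s2"):
        c = b * 2
# Operators created above are named with "s1/" and "s1/s2/" prefixes.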
@@ -6877,8 +6879,9 @@ def switch_device(device):
@signature_safe_contextmanager
def device_guard(device=None):
"""
**Notes**:
**The API only supports static mode.**

Note:
The API only supports static mode.

A context manager that specifies the device on which the OP will be placed.

@@ -6892,6 +6895,7 @@ def device_guard(device=None):
assigned devices.

Examples:

.. code-block:: python

import paddle
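# Hedged sketch, not the elided docstring example:
paddle.enable_static()
data = paddle.full(shape=[1, 3, 8, 8], fill_value=0.5, dtype='float32')
with paddle.static.device_guard("cpu"):
    # OPs created in this block are pinned to the CPU.
    shape = paddle.shape(data)
with paddle.static.device_guard("gpu"):
    # OPs created here run on the GPU if one is available, otherwise on CPU.
    out = paddle.reshape(data, shape=shape)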
2 changes: 1 addition & 1 deletion python/paddle/fluid/io.py
@@ -2154,7 +2154,6 @@ def set_var(var, ndarray):

def load_program_state(model_path, var_list=None):
"""
:api_attr: Static Graph

Load program state from a local file.

@@ -2169,6 +2168,7 @@ def load_program_state(model_path, var_list=None):
state_dict(dict): the dict that stores Parameter and optimizer information

Examples:

.. code-block:: python

import paddle
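# Hedged sketch, not the elided docstring example: build and save a tiny
# static program first, then read its raw state back as a dict of ndarrays.
# The "./temp_model" path is illustrative.
paddle.enable_static()
x = paddle.static.data(name="x", shape=[10, 10], dtype='float32')
y = paddle.static.nn.fc(x, 10)

exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(paddle.static.default_startup_program())

paddle.static.save(paddle.static.default_main_program(), "./temp_model")
program_state = paddle.static.load_program_state("./temp_model")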
4 changes: 2 additions & 2 deletions python/paddle/fluid/layers/nn.py
@@ -11803,8 +11803,7 @@ def _elementwise_op(helper):

def scale(x, scale=1.0, bias=0.0, bias_after_scale=True, act=None, name=None):
"""
Scale operator.


Apply scale and bias to the input Tensor as follows:

``bias_after_scale`` is True:
@@ -11829,6 +11828,7 @@ def scale(x, scale=1.0, bias=0.0, bias_after_scale=True, act=None, name=None):
Tensor: Output tensor of scale operator, with shape and data type same as input.

Examples:

.. code-block:: python

# scale as a float32 number
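# Hedged continuation, not the elided docstring snippet:
import paddle
data = paddle.arange(6, dtype="float32").reshape([2, 3])
# bias_after_scale defaults to True, so out = data * 2.0 + 1.0
out = paddle.scale(data, scale=2.0, bias=1.0)
print(out.numpy())  # [[ 1.  3.  5.] [ 7.  9. 11.]]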
27 changes: 14 additions & 13 deletions python/paddle/fluid/param_attr.py
@@ -30,16 +30,17 @@

class ParamAttr(object):
"""
Create an object to represent the attribute of a parameter. The attributes are:
name, initializer, learning rate, regularizer, trainable, gradient clip,
and model average.


Note:
``gradient_clip`` of ``ParamAttr`` HAS BEEN DEPRECATED since 2.0.
Please use ``need_clip`` in ``ParamAttr`` to specify the clip scope.
There are three clipping strategies: :ref:`api_paddle_nn_ClipGradByGlobalNorm` ,
:ref:`api_paddle_nn_ClipGradByNorm` , :ref:`api_paddle_nn_ClipGradByValue` .

Create an object to represent the attribute of a parameter. The attributes are:
name, initializer, learning rate, regularizer, trainable, gradient clip,
and model average.

Parameters:
name (str, optional): The parameter's name. Default None, meaning that the name
would be created automatically.
@@ -63,6 +64,7 @@ class ParamAttr(object):
ParamAttr Object.

Examples:

.. code-block:: python

import paddle
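# Hedged continuation, not the elided docstring example: a ParamAttr is
# passed to a layer to control the created parameter's name, learning rate,
# regularizer and trainability.
weight_attr = paddle.ParamAttr(name="fc_weight",
                               learning_rate=0.5,
                               regularizer=paddle.regularizer.L2Decay(1.0),
                               trainable=True)
print(weight_attr.name)  # fc_weight
linear = paddle.nn.Linear(10, 10, weight_attr=weight_attr)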
@@ -213,24 +215,22 @@ def _to_kwargs(self, with_initializer=False):

class WeightNormParamAttr(ParamAttr):
r"""
:api_attr: Static Graph

Note:
Please use 'paddle.nn.utils.weight_norm' in dygraph mode.


Note:
``gradient_clip`` of ``ParamAttr`` HAS BEEN DEPRECATED since 2.0.
Please use ``need_clip`` in ``ParamAttr`` to specify the clip scope.
There are three clipping strategies: :ref:`api_paddle_nn_ClipGradByGlobalNorm` ,
:ref:`api_paddle_nn_ClipGradByNorm` , :ref:`api_paddle_nn_ClipGradByValue` .

Parameter of weight Norm. Weight Norm is a reparameterization of the weight vectors
in a neural network that decouples the magnitude of those weight vectors from
their direction. Weight Norm has been implemented as discussed in this
paper: `Weight Normalization: A Simple Reparameterization to Accelerate
Training of Deep Neural Networks
<https://arxiv.org/pdf/1602.07868.pdf>`_.

Note:
``gradient_clip`` of ``ParamAttr`` HAS BEEN DEPRECATED since 2.0.
Please use ``need_clip`` in ``ParamAttr`` to specify the clip scope.
There are three clipping strategies: :ref:`api_paddle_nn_ClipGradByGlobalNorm` ,
:ref:`api_paddle_nn_ClipGradByNorm` , :ref:`api_paddle_nn_ClipGradByValue` .


Args:
dim(int, optional): Dimension over which to compute the norm. Dim is a non-negative
@@ -258,6 +258,7 @@ class WeightNormParamAttr(ParamAttr):
need_clip (bool, optional): Whether the parameter gradient needs to be clipped in the optimizer. Default is True.

Examples:

.. code-block:: python

import paddle
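# Hedged continuation, not the elided docstring example; WeightNormParamAttr
# only works in static mode, so enable it first. The layer sizes and the
# parameter name below are illustrative.
paddle.enable_static()
data = paddle.static.data(name="data", shape=[3, 32, 32], dtype="float32")
fc = paddle.static.nn.fc(x=data,
                         size=1000,
                         weight_attr=paddle.static.WeightNormParamAttr(
                             dim=None,
                             name='weight_norm_param',
                             initializer=paddle.nn.initializer.Constant(1.0),
                             learning_rate=1.0,
                             regularizer=paddle.regularizer.L2Decay(0.1),
                             trainable=True,
                             do_model_average=False,
                             need_clip=True))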