
Commit de6623f
aligning to Chinese document
Patrick-Star125 committed May 21, 2022
1 parent 0bde96b commit de6623f
Showing 2 changed files with 40 additions and 33 deletions.
23 changes: 15 additions & 8 deletions python/paddle/nn/functional/loss.py
@@ -2246,11 +2246,11 @@ def cosine_embedding_loss(input1, input2, label, margin=0, reduction='mean'):
         cos(x1, x2) = \frac{x1 \cdot{} x2}{\Vert x1 \Vert_2 * \Vert x2 \Vert_2}
 
     Parameters:
-        input1 (Tensor): 1D or 2D tensor with shape: [*, N], '*' means batch size, N the length of input array.
+        input1 (Tensor): tensor with shape: [N, M] or [M], 'N' means batch size, 'M' means the length of input array.
             Available dtypes are float32, float64.
-        input2 (Tensor): 1D or 2D tensor with shape: [*, N], '*' means batch size, N the length of input array.
+        input2 (Tensor): tensor with shape: [N, M] or [M], 'N' means batch size, 'M' means the length of input array.
             Available dtypes are float32, float64.
-        label (Tensor): 0D or 1D tensor. The target labels values should be numbers between -1 and 1.
+        label (Tensor): tensor with shape: [N] or [1]. The target label values should be -1 or 1.
             Available dtypes are int32, int64, float32, float64.
         margin (float, optional): Should be a number from :math:`-1` to :math:`1`,
             :math:`0` to :math:`0.5` is suggested. If :attr:`margin` is missing, the
@@ -2267,15 +2267,22 @@ def cosine_embedding_loss(input1, input2, label, margin=0, reduction='mean'):
     Examples:
         .. code-block:: python
             :name: code-example1
 
             import paddle
-            input1 = paddle.to_tensor([1.6, 1.2, -0.5], 'float32')
-            input2 = paddle.to_tensor([0.5, 0.5, -1.8], 'float32')
-            label = paddle.to_tensor([1], 'int64')
-            output = paddle.nn.functional.cosine_embedding_loss(input1, input2, label)
-            print(output) # output: [0.42310387]
+            input1 = paddle.to_tensor([[1.6, 1.2, -0.5], [3.2, 2.6, -5.8]], 'float32')
+            input2 = paddle.to_tensor([[0.5, 0.5, -1.8], [2.3, -1.4, 1.1]], 'float32')
+            label = paddle.to_tensor([1, -1], 'int64')
+            output = paddle.nn.functional.cosine_embedding_loss(input1, input2, label, margin=0.5, reduction='mean')
+            print(output) # [0.21155193]
+            output = paddle.nn.functional.cosine_embedding_loss(input1, input2, label, margin=0.5, reduction='sum')
+            print(output) # [0.42310387]
+            output = paddle.nn.functional.cosine_embedding_loss(input1, input2, label, margin=0.5, reduction='none')
+            print(output) # [0.42310387, 0. ]
     """
     if len(label.shape) != 1:
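To make the printed values easy to verify outside Paddle, here is a minimal NumPy sketch of the documented math. It assumes the standard piecewise definition of cosine embedding loss: 1 - cos(x1, x2) when label is 1, and max(0, cos(x1, x2) - margin) when label is -1. The helper name cosine_embedding_loss_ref is ours for illustration, not a Paddle API.

    import numpy as np

    def cosine_embedding_loss_ref(x1, x2, label, margin=0.0, reduction='mean'):
        # Cosine similarity along the feature axis: [N, M] -> [N].
        cos = (x1 * x2).sum(axis=-1) / (
            np.linalg.norm(x1, axis=-1) * np.linalg.norm(x2, axis=-1))
        # label == 1 penalizes dissimilar pairs; label == -1 penalizes
        # pairs whose similarity exceeds the margin.
        out = np.where(label == 1, 1.0 - cos, np.maximum(0.0, cos - margin))
        if reduction == 'mean':
            return out.mean()
        if reduction == 'sum':
            return out.sum()
        return out  # reduction == 'none'

    x1 = np.array([[1.6, 1.2, -0.5], [3.2, 2.6, -5.8]], dtype=np.float32)
    x2 = np.array([[0.5, 0.5, -1.8], [2.3, -1.4, 1.1]], dtype=np.float32)
    label = np.array([1, -1])

    print(cosine_embedding_loss_ref(x1, x2, label, 0.5, 'mean'))  # ~0.21155193
    print(cosine_embedding_loss_ref(x1, x2, label, 0.5, 'sum'))   # ~0.42310387
    print(cosine_embedding_loss_ref(x1, x2, label, 0.5, 'none'))  # ~[0.4231039, 0.]

Running it reproduces the three printed values in the example above.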
50 changes: 25 additions & 25 deletions python/paddle/nn/layer/loss.py
@@ -1322,17 +1322,9 @@ class CosineEmbeddingLoss(Layer):
     .. math::
         Out = max(0, cos(input1, input2)) - margin
 
-    If :attr:`reduction` set to ``'none'``, the interface will return the original loss `Out`.
-    If :attr:`reduction` set to ``'mean'``, the reduced mean loss is:
-    .. math::
-        Out = MEAN(Out)
-    If :attr:`reduction` set to ``'sum'``, the reduced sum loss is:
-    .. math::
-        Out = SUM(Out)
 
     The operator cos can be described as follow:
     .. math::
         cos(x1, x2) = \frac{x1 \cdot{} x2}{\Vert x1 \Vert_2 * \Vert x2 \Vert_2}
 
     Parameters:
         margin (float, optional): Should be a number from :math:`-1` to :math:`1`,
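The label = 1 branch of this piecewise definition sits above this hunk and is collapsed in this view. For reference, the standard cosine embedding loss that the example outputs in both files are consistent with is:

    Out = \begin{cases}
        1 - cos(input1, input2),               & label = 1 \\
        \max(0,\ cos(input1, input2) - margin), & label = -1
    \end{cases}

With margin = 0.5, the second example pair (cos(input1, input2) of roughly -0.13, label = -1) contributes exactly 0, which matches the printed [0.42310387, 0. ].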
@@ -1344,29 +1336,37 @@ class CosineEmbeddingLoss(Layer):
             elements in the output, ``'sum'``: the output will be summed.
 
     Shape:
-        input1 (Tensor): 1-D or 2-D tensor with shape: [*, N], * means batch_size which
-            could be emited when batch_size is 1, `N` means size of intpu.
-            Available dtypes are float32, float64.
-        input2 (Tensor): 1-D or 2-D tensor with shape: [*, N], * means batch_size which
-            could be emited when batch_size is 1, `N` means size of intpu.
-            Available dtypes are float32, float64.
-        label (Tensor): 0-D or 1-D tensor. The target labels which values should be
-            numbers between -1 and 1. Available dtypes are int32, int64, float32, float64.
-        output (Tensor): If ``reduction`` is ``'none'``, the shape of output is
-            same as ``label`` , else the shape of output is scalar.
+        input1 (Tensor): tensor with shape: [N, M] or [M], 'N' means batch size, 'M' means the length of input array.
+            Available dtypes are float32, float64.
+        input2 (Tensor): tensor with shape: [N, M] or [M], 'N' means batch size, 'M' means the length of input array.
+            Available dtypes are float32, float64.
+        label (Tensor): tensor with shape: [N] or [1]. The target label values should be -1 or 1.
+            Available dtypes are int32, int64, float32, float64.
+        output (Tensor): the cosine embedding loss of Tensor ``input1``, ``input2`` and ``label``.
+            If `reduction` is ``'none'``, the shape of the output loss is [N], the same as ``label``.
+            If `reduction` is ``'mean'`` or ``'sum'``, the shape of the output loss is [1].
     Examples:
         .. code-block:: python
             :name: code-example1
 
             import paddle
-            input1 = paddle.to_tensor([1.6, 1.2, -0.5], 'float32')
-            input2 = paddle.to_tensor([0.5, 0.5, -1.8], 'float32')
-            label = paddle.to_tensor([1], 'int32')
+            input1 = paddle.to_tensor([[1.6, 1.2, -0.5], [3.2, 2.6, -5.8]], 'float32')
+            input2 = paddle.to_tensor([[0.5, 0.5, -1.8], [2.3, -1.4, 1.1]], 'float32')
+            label = paddle.to_tensor([1, -1], 'int64')
             cosine_embedding_loss = paddle.nn.CosineEmbeddingLoss(margin=0.5, reduction='mean')
             output = cosine_embedding_loss(input1, input2, label)
-            print(output) # output: [0.42310387]
+            print(output) # [0.21155193]
+            cosine_embedding_loss = paddle.nn.CosineEmbeddingLoss(margin=0.5, reduction='sum')
+            output = cosine_embedding_loss(input1, input2, label)
+            print(output) # [0.42310387]
+            cosine_embedding_loss = paddle.nn.CosineEmbeddingLoss(margin=0.5, reduction='none')
+            output = cosine_embedding_loss(input1, input2, label)
+            print(output) # [0.42310387, 0. ]
     """

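A short usage note on the two forms documented above: the class form fixes margin and reduction at construction time, while the functional form takes them per call. The sketch below assumes the two compute identically, as the matching example outputs in the two docstrings suggest.

    import paddle
    import paddle.nn.functional as F

    input1 = paddle.to_tensor([[1.6, 1.2, -0.5], [3.2, 2.6, -5.8]], 'float32')
    input2 = paddle.to_tensor([[0.5, 0.5, -1.8], [2.3, -1.4, 1.1]], 'float32')
    label = paddle.to_tensor([1, -1], 'int64')

    # Class form: hyper-parameters are stored on the layer.
    loss_layer = paddle.nn.CosineEmbeddingLoss(margin=0.5, reduction='mean')
    out_layer = loss_layer(input1, input2, label)

    # Functional form: hyper-parameters are passed on each call.
    out_func = F.cosine_embedding_loss(input1, input2, label, margin=0.5, reduction='mean')

    print(out_layer) # [0.21155193]
    print(out_func)  # [0.21155193]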
