Mapping docs: torch.nn.functional.embedding #5928

Open · wants to merge 11 commits into develop
@@ -0,0 +1,42 @@
## [ torch has more parameters ] torch.nn.functional.binary_cross_entropy

### [torch.nn.functional.binary_cross_entropy](https://pytorch.org/docs/2.0/generated/torch.nn.functional.binary_cross_entropy.html?highlight=binary_cross_entropy#torch.nn.functional.binary_cross_entropy)

```python
torch.nn.functional.binary_cross_entropy(input, target, weight=None, size_average=None, reduce=None, reduction='mean')
```

### [paddle.nn.functional.binary_cross_entropy](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/functional/binary_cross_entropy_cn.html#binary-cross-entropy)

```python
paddle.nn.functional.binary_cross_entropy(input, label, weight=None, reduction='mean', name=None)
```

Both are functionally identical, but torch takes more parameters, as detailed below:
### Parameter differences
| PyTorch      | PaddlePaddle | Remarks                                                                       |
| ------------ | ------------ | ----------------------------------------------------------------------------- |
| input        | input        | The input Tensor.                                                              |
| target       | label        | The label Tensor.                                                              |
| weight       | weight       | Specifies the weight for each batch.                                           |
| size_average | -            | Deprecated; together with reduce, it determines how the loss is computed.      |
| reduce       | -            | Deprecated; together with size_average, it determines how the loss is computed. |
| reduction    | reduction    | How the output result is computed.                                             |


### Conversion example

```python
# Map PyTorch's size_average and reduce arguments to Paddle's reduction argument
if size_average is None:
    size_average = True
if reduce is None:
    reduce = True

if size_average and reduce:
    reduction = 'mean'
elif reduce:
    reduction = 'sum'
else:
    reduction = 'none'
```
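
Applying the mapping end to end, a minimal sketch (the tensor names `pred` and `gt` are illustrative, not from the original doc):

```python
# PyTorch
loss = torch.nn.functional.binary_cross_entropy(pred, gt, reduction='mean')

# Paddle
loss = paddle.nn.functional.binary_cross_entropy(pred, label=gt, reduction='mean')
```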
@@ -0,0 +1,33 @@
## [ torch has more parameters ]torch.nn.functional.embedding
### [torch.nn.functional.embedding](https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding.html?highlight=torch+nn+functional+embedding#torch.nn.functional.embedding)

```python
torch.nn.functional.embedding(input,
                              weight,
                              padding_idx=None,
                              max_norm=None,
                              norm_type=2.0,
                              scale_grad_by_freq=False,
                              sparse=False)
```
### [paddle.nn.functional.embedding](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/functional/embedding_cn.html#embedding)

```python
paddle.nn.functional.embedding(x,
                               weight,
                               padding_idx=None,
                               sparse=False,
                               name=None)
```

PyTorch supports more parameters than Paddle, as detailed below:
### Parameter mapping
| PyTorch            | PaddlePaddle | Remarks                                                      |
| ------------------ | ------------ | ------------------------------------------------------------ |
| input              | x            | The Tensor holding the ids to look up.                        |
| weight             | weight       | The Tensor holding the embedding weight parameters.           |
| padding_idx        | padding_idx  | The entry at this index and its corresponding gradient are filled with 0. |
| max_norm           | -            | If given, any embedding vector whose norm (computed with the p-norm specified by norm_type) exceeds max_norm is renormalized. Paddle has no such feature, and no conversion is available yet. |
| norm_type          | -            | The p of the p-norm computed for the max_norm option. Defaults to 2. Paddle has no such feature, and no conversion is available yet. |
| scale_grad_by_freq | -            | Whether to scale gradients by the frequency of the words in the mini-batch. Paddle has no such feature, and no conversion is available yet. |
| sparse             | sparse       | Whether to use sparse updates.                                |
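
### Conversion example

For the parameters both frameworks share, the call converts directly; a minimal sketch (the tensor names `ids` and `weight` are illustrative, following the signatures above):

```python
# PyTorch
out = torch.nn.functional.embedding(ids, weight, padding_idx=0, sparse=False)

# Paddle
out = paddle.nn.functional.embedding(ids, weight, padding_idx=0, sparse=False)
```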
@@ -1,4 +1,4 @@
## [ Parameters differ ]torch.nn.functional.fold

### [torch.nn.functional.fold](https://pytorch.org/docs/stable/generated/torch.nn.functional.fold.html?highlight=functional+fold#torch.nn.functional.fold)

@@ -1,25 +1,40 @@
## [ Parameters differ ]torch.nn.functional.unfold

**Collaborator:** Let's count this one as "parameters differ", since the supported types are different.

**Contributor Author:** done

### [torch.nn.functional.unfold](https://pytorch.org/docs/stable/generated/torch.nn.functional.unfold.html#torch.nn.functional.unfold)

```python
torch.nn.functional.unfold(input,
                           kernel_size,
                           dilation=1,
                           padding=0,
                           stride=1)
```

### [paddle.nn.functional.unfold](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/functional/unfold_cn.html#unfold)

```python
paddle.nn.functional.unfold(x,
                            kernel_sizes,
                            strides=1,
                            paddings=0,
                            dilations=1,
                            name=None)
```

The types accepted by the first four parameters differ between Paddle and PyTorch, as detailed below:
### Parameter mapping
| PyTorch     | PaddlePaddle | Remarks                                                      |
| ----------- | ------------ | ------------------------------------------------------------ |
| input       | x            | The input 4-D Tensor; only the parameter name differs.       |
| kernel_size | kernel_sizes | Size of the convolution kernel. PyTorch accepts int, tuple(int), or list(int); Paddle accepts only int or list(int). |
| dilation    | dilations    | Dilation of the convolution. PyTorch accepts int, tuple(int), or list(int); Paddle accepts only int or list(int). |
| padding     | paddings     | Padding of each dimension. PyTorch accepts int, tuple(int), or list(int); Paddle accepts only int or list(int). |
| stride      | strides      | Stride of the convolution. PyTorch accepts int, tuple(int), or list(int); Paddle accepts only int or list(int). |
### Conversion example
#### kernel_size: size of the convolution kernel

**Collaborator:** Before the conversion example, state which parameter is being converted.

**Contributor Author:** done

```python
# PyTorch
unfold = nn.functional.unfold(input, kernel_size=(2, 3))

# Paddle
unfold = nn.functional.unfold(input, kernel_sizes=[2, 3])
```
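
For completeness, a sketch of a call exercising all four renamed parameters (the values are illustrative, and tuples become lists on the Paddle side):

```python
# PyTorch
out = torch.nn.functional.unfold(x, kernel_size=(3, 3), dilation=1, padding=1, stride=2)

# Paddle
out = paddle.nn.functional.unfold(x, kernel_sizes=[3, 3], dilations=1, paddings=1, strides=2)
```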