
[PaddlePaddle Hackathon 2] Task 16: Add RRelu API #41823

Merged · 29 commits merged into PaddlePaddle:develop on May 31, 2022

Conversation

@thunder95 (Contributor) commented Apr 14, 2022

PR types

New features

PR changes

APIs

Describe

Completes task 16 of the second Hackathon batch: #40317
The RReLU activation function was proposed in Empirical Evaluation of Rectified Activations in Convolutional Network. Building on Leaky ReLU, it randomly samples the linear slope applied to each negative input position, which strengthens generalization within a given range.
RFC design doc: PaddlePaddle/community#71
Chinese docs: PaddlePaddle/docs#4725
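As described, RReLU keeps positive inputs unchanged and multiplies each negative input by a coefficient drawn from U(lower, upper) during training, collapsing to the fixed midpoint slope at inference. A NumPy reference sketch of that behavior (an illustration only, not the PR's actual kernel):

```python
import numpy as np

def rrelu_reference(x, lower=1 / 8, upper=1 / 3, training=True, rng=None):
    """Reference RReLU forward pass, following the cited paper."""
    if rng is None:
        rng = np.random.default_rng()
    if training:
        # One random slope per negative position, drawn from U(lower, upper).
        alpha = rng.uniform(lower, upper, size=x.shape)
    else:
        # At inference the slope collapses to the deterministic midpoint.
        alpha = (lower + upper) / 2.0
    return np.where(x >= 0, x, alpha * x)
```

This matches the defaults of the new API (lower=1/8, upper=1/3) as listed in the changed files below.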

@paddle-bot-old paddle-bot-old bot added contributor External developers status: proposed labels Apr 14, 2022
@paddle-bot-old

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@dingjiaweiww (Contributor)

Please pass CI first.

@@ -0,0 +1,153 @@
/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved.
Contributor:

2016 -> 2022; please check the other files yourself as well.

Contributor Author (@thunder95):

Done.

[ 6.0, 7.0, 8.0, 9.0]]]], 'float64')
x = paddle.to_tensor(data)
m = paddle.nn.RReLU(0.1, 0.3)
out = m(x)
Contributor:

The example code should look more formal; don't use a casual name like m. Ideally the variable name should convey what it represents.

Contributor Author (@thunder95):

Done.

public:
using framework::OperatorWithKernel::OperatorWithKernel;

void InferShape(framework::InferShapeContext* ctx) const override {
Contributor:

The forward infershape has already been moved into phi; please move the backward infershape over as well.

Contributor Author (@thunder95):

Done.


if (is_test) {
for (i = 0; i < numel; i++) {
T mid_val = static_cast<T>((lower + upper) / 2.0);
Contributor:

This mid_val computation can be moved outside the for loop.
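The point is standard loop-invariant hoisting: mid_val depends only on lower and upper, not on the loop index. Sketched in Python for illustration (the real code is the C++ kernel quoted above):

```python
def rrelu_test_mode(x, lower, upper):
    # mid_val does not depend on i, so compute it once before the loop
    # instead of recomputing it on every iteration.
    mid_val = (lower + upper) / 2.0
    out = [0.0] * len(x)
    for i, v in enumerate(x):
        out[i] = v if v >= 0 else v * mid_val
    return out
```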

Contributor Author (@thunder95):

Done.

ALL_LAYOUT,
phi::RReluKernel,
float,
phi::dtype::float16,
Contributor:

Please register the float16 CPU kernel as well.

Contributor Author (@thunder95):

Done.

self.x_shape = [2, 3, 4, 5]

def init_attr(self):
self.attrs = {'lower': self.lower, "upper": self.upper, "is_test": True}
Contributor:

The is_test: False case also needs to be tested.

Contributor Author (@thunder95):

Done. Since op_test checks outputs within a 1e-7 tolerance and the noise is random, I made the gap between lower and upper as small as possible for the test.

name="x", shape=self.x_np.shape, dtype="float64")
x_2 = paddle.fluid.data(
name="x2", shape=self.x_np.shape, dtype="float64")
out_1 = F.rrelu(x_1, self.lower_0, self.upper_0, training=False)
Contributor:

The training=True case also needs to be tested.

Contributor Author (@thunder95):

@zhiboniu During training both the noise and the output are random, yet op_test expects fixed outputs. Do you have any suggestions? Could lower and upper be set to the same value?

Contributor:

You could try fixing the seed to see whether the result becomes deterministic. If that still does not work, your current check that the output lies within [lower*x, upper*x] is also acceptable.
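A property-based check along those lines can be sketched in NumPy (function name hypothetical; this is not the PR's test code):

```python
import numpy as np

def check_rrelu_output(x, out, lower, upper, atol=1e-12):
    """Range check for a random RReLU output instead of exact comparison."""
    # Positive inputs must pass through unchanged.
    pos = x >= 0
    if not np.allclose(out[pos], x[pos]):
        return False
    # A negative input x is scaled by some a in [lower, upper], so the
    # output must lie between upper*x and lower*x (x < 0 flips the order).
    neg = ~pos
    lo, hi = x[neg] * upper, x[neg] * lower
    return bool(np.all((out[neg] >= lo - atol) & (out[neg] <= hi + atol)))
```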

Contributor Author (@thunder95):

Done.

Contributor (@zhwesky2010) left a comment:

1. The fix_seed parameter should not be needed, and neither should seed. The post-2.0 convention is to obtain the seed directly from the generator, rather than giving the OP its own seed attribute:

auto gen_cuda = ctx.GetGenerator();
uint64_t seed = seed_offset.first;
uint64_t offset = seed_offset.second;

2. For the GPU kernel, refer to the concise style in https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/exponential_op.cu#L29-L31:

using MT = typename kps::details::MPTypeTrait<T>::Type;
funcs::uniform_distribution<MT> dist;
funcs::uniform_real_transform<MT> trans(min, max);
funcs::distribution_and_transform<T>(dev_ctx, out, dist, trans);

Just call the helpers already wrapped in https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/phi/kernels/funcs/distribution_helper.h; do not use the thrust library, its performance is poor. The newer style in https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/phi/kernels/gpu/poisson_kernel.cu is also a good reference.
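The distribution-and-transform pattern referenced here amounts to drawing uniform samples in [0, 1) and affinely mapping them into [min, max). A NumPy sketch of the assumed semantics (the real helpers are the C++ templates named above):

```python
import numpy as np

def uniform_real_transform(u, lo, hi):
    # Affine map from [0, 1) samples to [lo, hi), mirroring what
    # funcs::uniform_real_transform is assumed to do in the C++ helper.
    return u * (hi - lo) + lo

# Draw the per-element RReLU slopes with the API's default bounds.
rng = np.random.default_rng(0)
alphas = uniform_real_transform(rng.random(1000), 1 / 8, 1 / 3)
```

Separating "distribution" from "transform" lets one generic sampling kernel serve many distributions, which is the design the reviewer is pointing at.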

@thunder95 (Contributor Author) commented May 5, 2022

Please remember to add the Chinese documentation.

@zhiboniu Added, but it did not pass CI; I am not sure why the parameter-name check failed.

@thunder95 (Contributor Author)

Please change the random-number generation to call the existing wrapped helpers (see current call sites for reference): funcs::uniform_distribution dist; funcs::uniform_real_transform trans(min, max); funcs::distribution_and_transform(dev_ctx, out, dist, trans);

Updated @zhiboniu

@thunder95 (Contributor Author)

(quoting @zhwesky2010's review above on dropping fix_seed/seed and using the funcs:: distribution helpers instead of thrust)

@zhouwei25 Thank you for the guidance; I have addressed the review comments. Please take a look and see whether these changes work.

@@ -436,6 +436,75 @@ def extra_repr(self):
name_str)


class RReLU(Layer):
"""
Contributor:

(screenshot of the rendered docs)

The formatting is messy; please fix it following the API documentation guidelines.

@@ -548,6 +548,102 @@ def prelu(x, weight, data_format="NCHW", name=None):
return out


def rrelu(x, lower=1. / 8., upper=1. / 3., training=True, name=None):
"""
Contributor:

(screenshot of the rendered docs)

Same issue: the formatting is messy.

Contributor Author (@thunder95):

I tried to fix it, but I do not know how to preview the result, so for now I have only checked it myself.

Contributor:

Both the Chinese and English previews are in the docs PR; see the documentation guide.

Contributor:

The English formula still renders with a slight error.

:name: RReLU-example

import paddle
import numpy as np
Contributor:

The code example does not need to import numpy; you can use paddle.to_tensor to generate the input data.

Contributor Author (@thunder95):

Updated.

"""
rrelu activation.

`Empirical Evaluation of Rectified Activations in Convolutional Network`: https://arxiv.org/abs/1505.00853
Contributor (@Ligoml, May 10, 2022):

  • Put the paper hyperlink on the paper title.
  • The description could include more introductory content, to be friendlier to users.
  • The English formula still has some rendering issues.

Contributor Author (@thunder95):

@Ligoml I have fixed these three points, but the docs preview page only shows the Chinese documentation for this API, not the English one. Did I miss a step, or do I just need to wait?

Contributor:

The English doc preview only appears after Paddle's PR-CI-Build finishes.

@zhwesky2010 (Contributor)

(quoting the earlier review on dropping fix_seed/seed and using the funcs:: distribution helpers, together with @thunder95's reply)

This implementation works; calling the shared random-number components in funcs directly is all that is needed.

@zhwesky2010
Contributor

LGTM

\right.

where :math:`x` is the input tensor,
:math:`lower` and :math:`upper` are the bounds of uniform distribution.
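For reference, the formula the docstring is presumably trying to render is the standard RReLU definition from the paper:

f(x) =
\begin{cases}
    x,   & x \ge 0 \\
    a x, & x < 0, \quad a \sim \mathrm{Uniform}(lower,\ upper)
\end{cases}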
Contributor:

(screenshot of the rendered English docs)

Much better than before, but still not quite right.

Contributor (@Ligoml, May 18, 2022):

The Chinese renders correctly; I will rebuild a preview shortly and take another look.

Contributor Author (@thunder95):

The formula was copied directly from the code into the Chinese docs, so the two are identical; I am not sure why the English version still has problems. @Ligoml

Contributor (@TCChenlong, May 20, 2022):

You could try adding an "r" prefix at the very beginning of the docstring (making it a raw string, so the backslashes in the math markup are not treated as escape sequences).
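The suggested fix works because a raw docstring preserves LaTeX backslashes verbatim. A minimal illustration (hypothetical function, not the actual PR code):

```python
def rrelu_doc_demo():
    r"""rrelu activation (illustration only).

    .. math::
        \text{rrelu}(x) =
        \begin{cases}
            x,   & x \ge 0 \\
            a x, & x < 0
        \end{cases}
    """

# With the r prefix, "\text" survives verbatim in __doc__; without it,
# Python would interpret backslash sequences and could corrupt the formula.
```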

Contributor (@Ligoml) left a comment:

LGTM for docs

@jeff41404 (Contributor)

The design of paddle.nn.functional.rrelu should also be added to the RFC.

@thunder95 (Contributor Author)

The design of paddle.nn.functional.rrelu should also be added to the RFC.

@jeff41404 done. PR link: PaddlePaddle/community#137

Contributor (@XiaoguangHu01) left a comment:

LGTM

@@ -0,0 +1,326 @@
# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
Contributor:

2018 -> 2022; you can follow up with another PR to update this.

Contributor Author (@thunder95):

OK.

@jeff41404 jeff41404 merged commit 21e1d10 into PaddlePaddle:develop May 31, 2022
fuyou765 pushed a commit to fuyou765/Paddle that referenced this pull request Jun 7, 2022
* rrelu core logic

* unregistered op kernel (unresolved)

* commit before merge

* enrich test cases

* fix bug in rrelu-sig

* fix tests in CPU environment

* fix typos

* fix code format

* try to fix test-case timeout issue

* optimize test cases

* remove seed, optimize random function

* update en doc for rrelu

* fix rrelu en docs, test=document_fix

* add paper link for en docs, test=document_fix

* update en doc

* add r,test=document_fix