[Hackathon No.50] Add float16 data type support for the Paddle lerp operator #50925
Changes from 5 commits
@@ -74,6 +74,68 @@ def init_shape(self):
        self.shape = [2, 1, 2, 5, 1, 5]


class TestLerpWithDim2Fp16(TestLerp):
    def init_shape(self):
        self.shape = [2, 50]

    def init_dtype(self):
        self.dtype = np.float16


class TestLerpWithDim3Fp16(TestLerp):
    def init_shape(self):
        self.shape = [2, 2, 25]

    def init_dtype(self):
        self.dtype = np.float16


class TestLerpWithDim4Fp16(TestLerp):
    def init_shape(self):
        self.shape = [2, 2, 5, 5]

    def init_dtype(self):
        self.dtype = np.float16


class TestLerpWithDim5Fp16(TestLerp):
    def init_shape(self):
        self.shape = [2, 1, 2, 5, 5]

    def init_dtype(self):
        self.dtype = np.float16


class TestLerpWithDim6Fp16(TestLerp):
    def init_shape(self):
        self.shape = [2, 1, 2, 5, 1, 5]

    def init_dtype(self):
        self.dtype = np.float16


class TestLerpWihFp16BroadXY(TestLerp):
    def setUp(self):
        self.op_type = "lerp"
        self.python_api = paddle.lerp
        x = np.arange(1.0, 201.0).astype(np.float16).reshape([2, 1, 2, 50])
        y = np.full(200, 10.0).astype(np.float16).reshape([2, 2, 1, 50])
        w = np.asarray([0.5]).astype(np.float16)
        self.inputs = {'X': x, 'Y': y, 'Weight': w}
        self.outputs = {'Out': x + w * (y - x)}


class TestLerpWithFp16BroadWToXY(TestLerp):
    def setUp(self):
        self.op_type = "lerp"
        self.python_api = paddle.lerp
        x = np.full(600, 2.5).astype(np.float16).reshape([50, 2, 2, 3])
        y = np.full(600, 1.0).astype(np.float16).reshape([50, 2, 2, 3])
        w = np.random.random([3]).astype(np.float16)
        self.inputs = {'X': x, 'Y': y, 'Weight': w}
        self.outputs = {'Out': x + w * (y - x)}
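As a quick sanity check on the reference formula used in these tests, the numpy expression `x + w * (y - x)` already performs the X/Y broadcast the test relies on. A minimal sketch (shapes copied from TestLerpWihFp16BroadXY above; not part of the PR itself):

```python
import numpy as np

# x broadcasts along dim 1, y along dim 2, giving a [2, 2, 2, 50] output.
x = np.arange(1.0, 201.0).astype(np.float16).reshape([2, 1, 2, 50])
y = np.full(200, 10.0).astype(np.float16).reshape([2, 2, 1, 50])
w = np.asarray([0.5]).astype(np.float16)

out = x + w * (y - x)  # the reference formula from self.outputs
print(out.shape)  # (2, 2, 2, 50)
print(out.dtype)  # float16
```

Because every operand is float16, the reference output stays float16, matching the dtype the kernel under test produces.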
Reviewer: Hello, the five cases above seem to differ from TestLerpWithFp16BroadWToXY only in shape? In each of them x and y have the same shape and w.shape=[1], so the coverage overlaps; keeping just one of the five cases or TestLerpWithFp16BroadWToXY should be enough. Also, couldn't TestLerpWihFp16BroadXY and TestLerpWithFp16BroadWToXY be simplified by just overriding init_shape and init_dtype?

Author: The five cases above are actually different from TestLerpWithFp16BroadWToXY. Because TestLerpWithFp16BroadWToXY has w.shape=[3], it takes the backward branch that broadcasts w, while the five cases above have w.shape=[1] and take the branch where w is a scalar.

Reviewer: Understood; then TestLerpWihFp16BroadXY and TestLerpWithFp16BroadWToXY can both stay.

Reviewer: Perhaps the unit tests could keep only three cases: TestLerpWihFp16BroadXY, TestLerpWithFp16BroadWToXY, and one where x and y share a shape with w.shape=[1]?

Author: OK, updated as suggested. I added init_wshape and init_xyshape to the base class; init_xyshape handles initialization when the shapes of x and y differ. I also used them to simplify the original FP32 TestLerpBroadXY and TestLerpBroadWToXY.
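The refactor the author describes could look roughly like the following. The hook names init_wshape and init_xyshape come from the comment above, but the class body here is a plain-numpy illustration of the pattern, not the actual OpTest subclass from the PR:

```python
import numpy as np

class LerpCase:
    """Hypothetical base case: inputs are built from overridable hooks,
    so broadcast variants only override init_xyshape / init_wshape."""

    def init_dtype(self):
        self.dtype = np.float64

    def init_shape(self):
        self.shape = [100]

    def init_xyshape(self):
        # Default: x and y share self.shape.
        self.xshape = self.shape
        self.yshape = self.shape

    def init_wshape(self):
        self.wshape = [1]  # scalar-weight branch

    def setup(self):
        self.init_dtype()
        self.init_shape()
        self.init_xyshape()
        self.init_wshape()
        x = np.arange(1.0, 101.0).astype(self.dtype).reshape(self.xshape)
        y = np.full(100, 10.0).astype(self.dtype).reshape(self.yshape)
        w = np.random.random(self.wshape).astype(self.dtype)
        self.out = x + w * (y - x)  # reference lerp


class LerpFp16BroadXYCase(LerpCase):
    """Fp16 broadcast variant: only the hooks change."""

    def init_dtype(self):
        self.dtype = np.float16

    def init_xyshape(self):
        self.xshape = [2, 1, 2, 25]
        self.yshape = [2, 2, 1, 25]
```

With this layout, the w.shape=[3] broadcast-weight case the author mentions would simply override init_wshape instead of duplicating setUp.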
class TestLerpBroadXY(TestLerp):
    def setUp(self):
        self.op_type = "lerp"
Reviewer: These five cases, apart from the number of dimensions, should not differ in the kernel being invoked. x and y have identical shapes, and the element count is 100 in every case, so from a coverage standpoint it is enough to keep only the largest one among tests with identical content. The original fp32 unit tests were written too verbosely; there is no need to keep exactly the same number of cases.

Author: The five cases take five different branches in the forward kernel. The FP32 tests were presumably written this way to verify correctness at each rank. Does this still need to change?

Reviewer: They all go through the same LerpFunction in the end. The many branches exist only because the Eigen implementation requires the dim value as a template parameter; the actual computation is identical across them.

Author: done
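The reviewer's point, that the rank-specific branches exist only for Eigen's compile-time dim template parameter and compute the same thing, can be illustrated with the numpy reference: reshaping the same 100 elements to the various test ranks leaves the lerp values unchanged. A small sketch (not from the PR):

```python
import numpy as np

x = np.arange(1.0, 101.0).astype(np.float16)
y = np.full(100, 10.0).astype(np.float16)
w = np.float16(0.5)

flat = x + w * (y - x)  # rank-1 reference

# The ranks exercised by the Dim2..Dim5 cases, all 100 elements.
for shape in ([2, 50], [2, 2, 25], [2, 2, 5, 5], [2, 1, 2, 5, 5]):
    out = x.reshape(shape) + w * (y.reshape(shape) - x.reshape(shape))
    # Identical values at every rank; only the shape differs.
    assert np.array_equal(out.ravel(), flat)
```

This is why one large-rank case gives the same numerical coverage as five, leaving only the genuinely distinct branches (X/Y broadcast, w broadcast, scalar w) worth separate tests.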