[Hackathon 5th No.33] Add atleast_1d / atleast_2d / atleast_3d APIs to Paddle #679
Conversation
Parameters:
- inputs: (Tensor|list(Tensor)) - One or more input Tensors. Supported data types: float32, float64, int32, int64.
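For context on this parameter, a minimal usage sketch is below. It assumes the API lands as paddle.atleast_1d / atleast_2d / atleast_3d, takes one or more Tensors as positional arguments, and returns a single Tensor for one input or a list of Tensors for several; the shapes in the comments follow the usual NumPy/PyTorch convention and should be checked against the merged API:

import paddle

x = paddle.to_tensor(1.23)               # 0-D scalar tensor, shape []
y = paddle.to_tensor([1.0, 2.0])         # 1-D tensor, shape [2]

# Single input: the scalar is promoted to rank 1.
print(paddle.atleast_1d(x).shape)        # expected: [1]

# Multiple inputs: a list of promoted tensors is returned.
res = paddle.atleast_2d(x, y)
print([t.shape for t in res])            # expected: [[1, 1], [1, 2]]

# atleast_3d pads up to three dimensions.
print(paddle.atleast_3d(x).shape)        # expected: [1, 1, 1]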
Supported data types: float32, float64, int32, int64.
Regarding data types: can float16, uint16, float32, float64, int8, int16, int32, int64, uint8, complex64, complex128, and bfloat16 all be supported?
Let me double-check this. Many of the APIs in manipulation.py list similar types (float32, float64, int32, int64), so this line was written to align with those methods ~
Once confirmed, I will also add these dtypes to the unit tests ~
Update 20231012
A quick test of data type support:
In [32]: atleast_1d(float16, uint16, float32, float64, int8, int16, int32, int64, uint8, complex64, complex128, bfloat16)
Out[32]:
[Tensor(shape=[1], dtype=float16, place=Place(cpu), stop_gradient=True,
[0.30004883]),
Tensor(shape=[1], dtype=bfloat16, place=Place(cpu), stop_gradient=True,
[23.]),
Tensor(shape=[1], dtype=float32, place=Place(cpu), stop_gradient=True,
[3.]),
Tensor(shape=[1], dtype=float64, place=Place(cpu), stop_gradient=True,
[23.]),
Tensor(shape=[1], dtype=int8, place=Place(cpu), stop_gradient=True,
[2]),
Tensor(shape=[1], dtype=int16, place=Place(cpu), stop_gradient=True,
[2]),
Tensor(shape=[1], dtype=int32, place=Place(cpu), stop_gradient=True,
[2]),
Tensor(shape=[1], dtype=int64, place=Place(cpu), stop_gradient=True,
[2]),
Tensor(shape=[1], dtype=uint8, place=Place(cpu), stop_gradient=True,
[2]),
Tensor(shape=[1], dtype=complex64, place=Place(cpu), stop_gradient=True,
[(1+1j)]),
Tensor(shape=[1], dtype=complex128, place=Place(cpu), stop_gradient=True,
[(1+1j)]),
Tensor(shape=[1], dtype=bfloat16, place=Place(cpu), stop_gradient=True,
[0.29882812])]
In [33]: atleast_2d(float16, uint16, float32, float64, int8, int16, int32, int64, uint8, complex64, complex128, bfloat16)
Out[33]:
[Tensor(shape=[1, 1], dtype=float16, place=Place(cpu), stop_gradient=True,
[[0.30004883]]),
Tensor(shape=[1, 1], dtype=bfloat16, place=Place(cpu), stop_gradient=True,
[[23.]]),
Tensor(shape=[1, 1], dtype=float32, place=Place(cpu), stop_gradient=True,
[[3.]]),
Tensor(shape=[1, 1], dtype=float64, place=Place(cpu), stop_gradient=True,
[[23.]]),
Tensor(shape=[1, 1], dtype=int8, place=Place(cpu), stop_gradient=True,
[[2]]),
Tensor(shape=[1, 1], dtype=int16, place=Place(cpu), stop_gradient=True,
[[2]]),
Tensor(shape=[1, 1], dtype=int32, place=Place(cpu), stop_gradient=True,
[[2]]),
Tensor(shape=[1, 1], dtype=int64, place=Place(cpu), stop_gradient=True,
[[2]]),
Tensor(shape=[1, 1], dtype=uint8, place=Place(cpu), stop_gradient=True,
[[2]]),
Tensor(shape=[1, 1], dtype=complex64, place=Place(cpu), stop_gradient=True,
[[(1+1j)]]),
Tensor(shape=[1, 1], dtype=complex128, place=Place(cpu), stop_gradient=True,
[[(1+1j)]]),
Tensor(shape=[1, 1], dtype=bfloat16, place=Place(cpu), stop_gradient=True,
[[0.29882812]])]
In [34]: atleast_3d(float16, uint16, float32, float64, int8, int16, int32, int64, uint8, complex64, complex128, bfloat16)
Out[34]:
[Tensor(shape=[1, 1, 1], dtype=float16, place=Place(cpu), stop_gradient=True,
[[[0.30004883]]]),
Tensor(shape=[1, 1, 1], dtype=bfloat16, place=Place(cpu), stop_gradient=True,
[[[23.]]]),
Tensor(shape=[1, 1, 1], dtype=float32, place=Place(cpu), stop_gradient=True,
[[[3.]]]),
Tensor(shape=[1, 1, 1], dtype=float64, place=Place(cpu), stop_gradient=True,
[[[23.]]]),
Tensor(shape=[1, 1, 1], dtype=int8, place=Place(cpu), stop_gradient=True,
[[[2]]]),
Tensor(shape=[1, 1, 1], dtype=int16, place=Place(cpu), stop_gradient=True,
[[[2]]]),
Tensor(shape=[1, 1, 1], dtype=int32, place=Place(cpu), stop_gradient=True,
[[[2]]]),
Tensor(shape=[1, 1, 1], dtype=int64, place=Place(cpu), stop_gradient=True,
[[[2]]]),
Tensor(shape=[1, 1, 1], dtype=uint8, place=Place(cpu), stop_gradient=True,
[[[2]]]),
Tensor(shape=[1, 1, 1], dtype=complex64, place=Place(cpu), stop_gradient=True,
[[[(1+1j)]]]),
Tensor(shape=[1, 1, 1], dtype=complex128, place=Place(cpu), stop_gradient=True,
[[[(1+1j)]]]),
Tensor(shape=[1, 1, 1], dtype=bfloat16, place=Place(cpu), stop_gradient=True,
[[[0.29882812]]])]
@luotao1 please review ~
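To make the dtype sweep above reproducible (and eventually fold it into the unit tests, as mentioned earlier), a sketch along the following lines could be used. It assumes paddle.to_tensor accepts these dtype strings on CPU; uint16 is left out because Paddle stores bfloat16 in a uint16 buffer, so a uint16 input may come back reported as bfloat16, as seen in the output above:

import paddle

# Dtypes the reviewer asked about (uint16 omitted, see note above).
DTYPES = ['float16', 'float32', 'float64', 'int8', 'int16', 'int32',
          'int64', 'uint8', 'complex64', 'complex128', 'bfloat16']

for dtype in DTYPES:
    x = paddle.to_tensor(2, dtype=dtype)   # 0-D tensor of the given dtype
    for fn, rank in [(paddle.atleast_1d, 1),
                     (paddle.atleast_2d, 2),
                     (paddle.atleast_3d, 3)]:
        y = fn(x)
        # Each call should pad the scalar up to the target rank
        # without changing its dtype.
        assert len(y.shape) == rank, (fn.__name__, dtype, y.shape)
        print(fn.__name__, dtype, '->', y.shape, y.dtype)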
… API (PaddlePaddle#679)
* [Add] add Hackathon 5th No.33 pfc
* [Fix] fix title
* [Change] dtype support
PR types
Others
PR changes
Docs
Description
[Hackathon 5th No.33] Add atleast_1d / atleast_2d / atleast_3d APIs to Paddle
Please review!