【PaddlePaddle Hackathon 4】add paddle softshrink op #15845
Conversation
```cpp
std::shared_ptr<ngraph::Node> adjusted_inputs_above_pos_lambda =
    std::make_shared<default_opset::Multiply>(std::make_shared<default_opset::Add>(data, negative_lambda),
                                              values_above_pos_lambda);
adjusted_inputs = std::make_shared<default_opset::Add>(adjusted_inputs, adjusted_inputs_below_neg_lambda);
```
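(For context: `negative_lambda` is presumably the constant -lambda and `values_above_pos_lambda` a 0/1 mask, so the Multiply/Add pair assembles x - lambda for the entries above the threshold and accumulates the partial result into `adjusted_inputs`.)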
I just wonder: can we use Select here?
https://github.com/openvinotoolkit/openvino/blob/master/docs/ops/condition/Select_1.md
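To make the suggestion concrete, here is a minimal sketch of a Select-based formulation. It assumes the `data`, `negative_lambda`, and `zero_node` names from the surrounding converter plus a hypothetical `positive_lambda` constant; this is an illustration, not the code in this PR:

```cpp
// Hypothetical Select-based softshrink:
//   out = x - lambda  if x >  lambda
//   out = x + lambda  if x < -lambda
//   out = 0           otherwise
auto above = std::make_shared<default_opset::Greater>(data, positive_lambda);
auto below = std::make_shared<default_opset::Less>(data, negative_lambda);
auto shifted_down = std::make_shared<default_opset::Subtract>(data, positive_lambda);
auto shifted_up = std::make_shared<default_opset::Add>(data, positive_lambda);
// Nested Select picks the shifted value in each band, zero elsewhere.
auto inner = std::make_shared<default_opset::Select>(below, shifted_up, zero_node);
auto output = std::make_shared<default_opset::Select>(above, shifted_down, inner);
```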
Yes, it can be implemented using Select, which looks more concise than the original implementation, but I wonder whether the elementwise-multiplication masking would be more efficient than Select?
My suggestion is to use benchmark_app to test the performance of a single-node graph and find out which implementation is best.
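For example (the model path and device are placeholders, assuming the single-op graph has been exported to OpenVINO IR):

```sh
# Profile a hypothetical single-node softshrink model with benchmark_app
benchmark_app -m softshrink.xml -d CPU -niter 1000
```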
```cpp
    output = std::make_shared<default_opset::Select>(zero_mask, output, zero_node);
} else {
    // Passing -lambd to an unsigned type constant will cause an overflow.
```
Per the Paddle spec, input x can only be float32 or float64. How can a test case trigger this else branch?
@ceciliapeng2011: the other Paddle 'softshrink' API defined here: https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Softshrink_cn.html#softshrink does not appear to place such a limitation on the input x type. To cover this API, I think we may still need the branch that handles the '-lambd to unsigned type' case.
Hi @AndPuQing: regarding @ceciliapeng2011's concern, could you please try to add such an unsigned-type test case using the paddle.nn.Softshrink() API? If such a test case is difficult to generate from the Paddle side, let's drop the unsigned input support (delete the else branch) this time, and add an input type check for this op: `PADDLE_OP_CHECK(node, input_element_type.is_signed(), "softshrink only supports signed input data type");`
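For reference, a minimal sketch of where that check could sit in the converter; the signature follows the frontend's usual pattern, but treat the details as assumptions:

```cpp
// Hypothetical placement of the suggested type check in the softshrink converter
NamedOutputs softshrink(const NodeContext& node) {
    auto data = node.get_input("X");
    const auto input_element_type = data.get_element_type();
    // Reject unsigned element types up front instead of special-casing
    // the '-lambd to unsigned type' overflow in an else branch.
    PADDLE_OP_CHECK(node,
                    input_element_type.is_signed(),
                    "softshrink only supports signed input data type");
    // ... rest of the conversion ...
}
```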
I tested the unsigned tensor input on the PaddlePaddle develop version; here is the result:
```python
>>> import paddle
>>> a = paddle.to_tensor([1], dtype='uint8')
>>> a
Tensor(shape=[1], dtype=uint8, place=Place(gpu:0), stop_gradient=True,
       [1])
>>> paddle.nn.functional.softshrink(a)
RuntimeError: (NotFound) The kernel with key (GPU, Undefined(AnyLayout), uint8) of kernel `softshrink` is not registered and fail to fallback to CPU one. Selected wrong DataType `uint8`. Paddle support following DataTypes: float64, float16, float32, bfloat16.
  [Hint: Expected kernel_iter != iter->second.end(), but received kernel_iter == iter->second.end().] (at /paddle/paddle/phi/core/kernel_factory.cc:259)
```
So I think we should just add the dtype check for now.
Details:
Add the softshrink op in the Paddle frontend.
Tickets:
Reference:
https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/nn/functional/softshrink_en.html
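For reference, the function this op implements, as defined in the Paddle docs above:

$$
\operatorname{softshrink}(x) =
\begin{cases}
x - \lambda, & x > \lambda \\
x + \lambda, & x < -\lambda \\
0, & \text{otherwise}
\end{cases}
$$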