【PaddlePaddle Hackathon 3】Add Paddle group_norm operator #12329
Conversation
Thanks for your contribution. Could you provide a screenshot of the unit test results?
auto shape = ov::opset6::Constant::create(ngraph::element::i64, Shape{3}, {0, num_groups, -1});
auto reshape_input = std::make_shared<ov::opset6::Reshape>(data, shape, true);
auto scale_ = ov::opset6::Constant::create(dtype, {1}, {1.0 * num_groups});
Please use the default_opset, not ov::opset6.
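A minimal sketch of the suggested change (assuming default_opset is the alias the Paddle frontend already defines for its default opset; data, num_groups, and dtype are as in the snippet above):

// Sketch only: same graph, but built from the frontend's default_opset alias.
auto shape = default_opset::Constant::create(ov::element::i64, Shape{3}, {0, num_groups, -1});
// special_zero = true keeps the batch dimension (the 0 in the pattern) unchanged.
auto reshape_input = std::make_shared<default_opset::Reshape>(data, shape, true);
auto scale_ = default_opset::Constant::create(dtype, {1}, {1.0 * num_groups});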
auto dtype = data.get_element_type();
auto num_groups = node.get_attribute<int32_t>("Groups");
auto epsilon = node.get_attribute<float>("Epsilon");
auto data_layout = node.get_attribute<std::string>("Data_layout");
Better to pass a default value to the get_attribute() function, in case it throws an error for models that don't set these attributes.
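A minimal sketch of that suggestion, assuming the get_attribute overload that takes a default value; the defaults mirror the Paddle group_norm documentation (Groups has no documented default, so it stays strict):

auto num_groups = node.get_attribute<int32_t>("Groups");
auto epsilon = node.get_attribute<float>("Epsilon", 1e-5f);                  // Paddle doc default
auto data_layout = node.get_attribute<std::string>("Data_layout", "NCHW");   // Paddle doc default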
The Code Style CI didn't pass. Does that mean I should reformat my code to the Google C++ standard?
You can format your code (in VS Code, click "Format Selection") and retrigger the CI.
Since your mapping is complicated, could you explain why you selected these OPs to build 'group_norm'?
Hi, let me explain my implementation point by point.
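At a high level, the mapping follows a reshape-plus-MVN pattern. A simplified sketch (illustrative only, not the exact PR code; default_opset is the frontend's opset alias, and num_groups/epsilon come from the attributes shown earlier):

// 1. Collapse each group into one row: [N, C, H, W] -> [N, G, C/G * H * W].
auto group_shape = default_opset::Constant::create(ov::element::i64, Shape{3}, {0, num_groups, -1});
auto grouped = std::make_shared<default_opset::Reshape>(data, group_shape, true);
// 2. Normalize each group to zero mean and unit variance along the collapsed axis.
auto axes = default_opset::Constant::create(ov::element::i64, Shape{1}, {2});
auto normalized = std::make_shared<default_opset::MVN>(grouped, axes, true, epsilon, ov::op::MVNEpsMode::INSIDE_SQRT);
// 3. Restore the original shape; per-channel scale and bias are applied afterwards.
auto orig_shape = std::make_shared<default_opset::ShapeOf>(data);
auto restored = std::make_shared<default_opset::Reshape>(normalized, orig_shape, true);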
Thanks for the explanation. Could you add this link to your Reference section? It is the documentation for the OP you are trying to enable: https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/static/nn/group_norm_cn.html#group-norm Thanks.
Reference updated.
@OpenVINO-dev-contest Could you please review?
Could you explain why you only deliver a single output mapping for this operation? It seems to have more than one output.
@OpenVINO-dev-contest Hi, I think the output placeholders ["Mean", "Variance"] are just for code alignment with the other normalization operators.
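For illustration (assuming the Paddle frontend's NamedOutputs convention; result is a placeholder name for the final node, not the actual PR variable):

// Only "Y" is mapped; "Mean" and "Variance" are declared by the Paddle op
// definition but are byproducts that inference consumers typically never read.
NamedOutputs outputs;
outputs["Y"] = {result};
return outputs;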
Thanks for the explanation.
@OpenVINO-dev-contest could you please take a look?
Hi, it seems scale_attr and bias_attr can be bool; will your test case cover that?
Update: 1. Python code of the unit test; 2. screenshot of the unit test results.
@luo-cheng2021 could you please take a look? Thanks!
The mapping is quite complicated; could you please add some comments in the code to explain the ideas? Thanks.
size_t rank_size = pshape.rank().get_length();
PADDLE_OP_CHECK(node, rank_size >= 2, "2-D and above tensors supported only");
if (data_layout == "NHWC") { |
What if the model doesn't have a defined layout?
In line 29, the default value of data_layout is "NCHW", which is the same as the Paddle doc describes.
@ilyachur Some Paddle operations have this data_layout attribute, while some do not. For NHWC layout, I think it is okay to use this layout conversion in the op mapper. Later, we could apply a model-level transformation to eliminate the conversions between operations.
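For example, a minimal sketch of such a conversion (assumptions: a Transpose to NCHW on the way in and its inverse on the way out; not the exact PR code):

// Convert NHWC input to NCHW before normalizing.
if (data_layout == "NHWC") {
    auto to_nchw = default_opset::Constant::create(ov::element::i64, Shape{4}, {0, 3, 1, 2});
    data = std::make_shared<default_opset::Transpose>(data, to_nchw);
}
// ... group normalization in NCHW ...
// A Transpose with order {0, 2, 3, 1} would restore NHWC on the output.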
Details:
Add the group_norm operation in the Paddle frontend.
Reference: https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/static/nn/group_norm_cn.html#group-norm
Unit tests passed.