
add spp (spatial pyramid pooling) op #6204

Merged
10 commits merged into PaddlePaddle:develop on Dec 18, 2017

Conversation

sweetsky0901 (Contributor) commented on Dec 4, 2017

fix #5988

sweetsky0901 changed the title from "My spp op" to "spp op pull request" on Dec 4, 2017
sweetsky0901 changed the title from "spp op pull request" to "add spp op" on Dec 4, 2017
sweetsky0901 changed the title from "add spp op" to "add spp(Spatial pyramid pooling ) op" on Dec 4, 2017
"M = C * H * W");
AddAttr<int>("pyramid_height", "(int), multi level pooling");
AddComment(R"DOC(
"Does spatial pyramid pooling on the input image by taking the max,
Contributor: The documentation is too sparse for a novice to understand easily.

Contributor Author: done

std::vector<int64_t> output_shape_vec({in_x->dims()[0], in_x->dims()[1]});
output_shape_vec.push_back(
(input_h - kernel_size_h + 2 * padding_h) / kernel_size_h + 1);
output_shape_vec.push_back(
Contributor: Doesn't (input_h - kernel_size_h + 2 * padding_h) / kernel_size_h + 1 equal bins? Why do we have to recalculate it?

Contributor Author: done

framework::DDim output_flatten_shape(
framework::make_ddim(output_flatten_shape_vec));
out_flatten_level.ShareDataWith(out_level);
out_flatten_level.Resize(output_flatten_shape);
NHZlX (Contributor) commented on Dec 12, 2017: Why not resize out_level directly instead of allocating a new variable out_flatten_level?

Contributor Author: done

framework::Tensor out_level;
framework::Tensor outgrad_level;
std::vector<int64_t> out_shape_vec({in_x->dims()[0], in_x->dims()[1]});
out_shape_vec.push_back(
Contributor: ditto

Contributor Author: done

sweetsky0901 (Contributor Author) left a comment: done


@wangkuiyi wangkuiyi mentioned this pull request Dec 14, 2017
NHZlX (Contributor) commented on Dec 15, 2017: Average pooling mode should also be supported.

@sweetsky0901 sweetsky0901 merged commit 7456d73 into PaddlePaddle:develop Dec 18, 2017
@sweetsky0901 sweetsky0901 deleted the my_spp_op branch December 18, 2017 05:57
Successfully merging this pull request may close these issues.

Spp Operator.