My unpool max 2d #5826
Conversation
paddle/operators/unpool_op.cc
Outdated
class Unpool2dOpMaker : public framework::OpProtoAndCheckerMaker {
 public:
  Unpool2dOpMaker(framework::OpProto* proto, \
It seems that we don't need the trailing \ here.
That was meant to join the lines into one. OK, I'll remove it.
paddle/operators/unpool_op.cc
Outdated
        "the number of channels, H and W is the height and "
        "width of feature.");
    AddAttr<std::vector<int>>("ksize",
        "(vector ), the unpooling window size(height, width) "
(vector ) => (vector)
size(height => size (height
done
paddle/operators/unpool_op.cc
Outdated
        "of unpooling operator.");
    AddAttr<std::vector<int>>("strides",
        "(vector, default:{1, 1}), "
        "strides(height, width) of unpooling operator.")
strides(height => strides (height
done
paddle/operators/unpool_op.cc
Outdated
                  framework::OpAttrChecker* op_checker)
      : OpProtoAndCheckerMaker(proto, op_checker) {
    AddInput("X",
        "(Tensor) The input tensor of unpool operator. "
I am not sure whether we should have the variable type in the parentheses. Is this a convention?
I'm not sure whether there is such a convention, or where to look it up, but the several ops I checked are all written this way.
paddle/operators/unpool_op.cc
Outdated
    AddAttr<std::string>("unpoolingtype",
        "(string), unpooling type, can be \"max\" for max-unpooling ")
        .InEnum({"max"});
    AddComment(R"DOC(
I'd thought that we no longer need that many "'s in C++'s DOC-string.
done
paddle/operators/unpool_op.h
Outdated
namespace paddle {
namespace operators {

using Tensor = framework::Tensor;
Because the using in a header file might break the existing namespace hierarchy if we include that header file (https://google.github.io/styleguide/cppguide.html#Namespaces), usually we don't do that.
Sorry, I really didn't pay attention to this; I just copied how other ops are written. I'll remove it.
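For illustration, a minimal sketch of the suggested fix (not the actual patch; class and variable names are just taken from this PR): drop the header-level alias and spell out framework::Tensor where it is used, so nothing is injected into every file that includes unpool_op.h.

template <typename Place, typename T>
class UnpoolKernel : public framework::OpKernel<T> {
 public:
  void Compute(const framework::ExecutionContext& context) const override {
    // Fully qualified name instead of a file-scope `using Tensor = framework::Tensor;`
    const framework::Tensor* in_x = context.Input<framework::Tensor>("X");
    framework::Tensor* out = context.Output<framework::Tensor>("Out");
    // ... unpooling computation unchanged ...
  }
};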
paddle/operators/unpool_op.h
Outdated
      set_zero(context.device_context(), out, static_cast<T>(0));
    }
    switch (ksize.size()) {
      case 2: {
There is only one case in this switch? It seems that we should use if instead of switch.
That was because I initially planned to add 3d support later. OK, I'll remove it for now.
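A minimal sketch of the suggested change (assuming the surrounding kernel body from this PR; the branch contents are elided):

    // With only 2-D supported, a plain if reads more directly than a
    // single-case switch; a 3-D path can later become an `else if`.
    if (ksize.size() == 2) {
      // ... 2-D max-unpooling path, unchanged from the current `case 2:` body ...
    }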
paddle/operators/unpool_op.h
Outdated
      zero(device_ctx, in_x_grad, static_cast<T>(0.0));
    }
    switch (ksize.size()) {
      case 2: {
Like in the above comment.
That was because I initially planned to add 3d support later. OK, I'll remove it for now.
paddle/operators/unpool_op.cc
Outdated
    std::vector<int> paddings = ctx->Attrs().Get<std::vector<int>>("paddings");

    PADDLE_ENFORCE(in_x_dims.size() == 4,
                   "Unpooling intput should be 4-D.");
should be 4-D => must be of 4-dimensional
done
paddle/operators/unpool_op.h
Outdated
    math::SetConstant<Place, T> zero;
    if (in_x_grad) {
      in_x_grad->mutable_data<T>(context.GetPlace());
      zero(device_ctx, in_x_grad, static_cast<T>(0.0));
0.0 => 0, in the case that T is int.
done
done thanks
paddle/operators/unpool_op.h
Outdated
      zero(device_ctx, in_x_grad, static_cast<T>(0.0));
    }
    switch (ksize.size()) {
      case 2: {
That was because I initially planned to add 3d support later. OK, I'll remove it for now.
paddle/operators/unpool_op.h
Outdated
    math::SetConstant<Place, T> zero;
    if (in_x_grad) {
      in_x_grad->mutable_data<T>(context.GetPlace());
      zero(device_ctx, in_x_grad, static_cast<T>(0.0));
done
paddle/operators/unpool_op.h
Outdated
      set_zero(context.device_context(), out, static_cast<T>(0));
    }
    switch (ksize.size()) {
      case 2: {
That was because I initially planned to add 3d support later. OK, I'll remove it for now.
paddle/operators/unpool_op.h
Outdated
namespace paddle {
namespace operators {

using Tensor = framework::Tensor;
Sorry, I really didn't pay attention to this; I just copied how other ops are written. I'll remove it.
paddle/operators/unpool_op.cc
Outdated
    std::vector<int> paddings = ctx->Attrs().Get<std::vector<int>>("paddings");

    PADDLE_ENFORCE(in_x_dims.size() == 4,
                   "Unpooling intput should be 4-D.");
done
paddle/operators/unpool_op.cc
Outdated
        "the number of channels, H and W is the height and "
        "width of feature.");
    AddAttr<std::vector<int>>("ksize",
        "(vector ), the unpooling window size(height, width) "
done
paddle/operators/unpool_op.cc
Outdated
class Unpool2dOpMaker : public framework::OpProtoAndCheckerMaker {
 public:
  Unpool2dOpMaker(framework::OpProto* proto, \
That was meant to join the lines into one. OK, I'll remove it.
paddle/operators/unpool_op.cc
Outdated
                  framework::OpAttrChecker* op_checker)
      : OpProtoAndCheckerMaker(proto, op_checker) {
    AddInput("X",
        "(Tensor) The input tensor of unpool operator. "
I'm not sure whether there is such a convention, or where to look it up, but the several ops I checked are all written this way.
paddle/operators/unpool_op.cc
Outdated
    AddAttr<std::string>("unpoolingtype",
        "(string), unpooling type, can be \"max\" for max-unpooling ")
        .InEnum({"max"});
    AddComment(R"DOC(
done
paddle/operators/unpool_op.cc
Outdated
    PADDLE_ENFORCE(ctx->HasInput("Y"), "Input(Y) of UnpoolOp"
                   "should not be null.");
    PADDLE_ENFORCE(ctx->HasOutput("Out"),
                   "Output(Out) of UnpoolOp should not be null.");
I didn't quite understand where else a space should be added here, sorry.
paddle/operators/math/unpooling.h
Outdated
#pragma once
#include "paddle/framework/tensor.h"
#include "paddle/platform/device_context.h"
#include "paddle/platform/hostdevice.h"
remove this unused header file.
done
paddle/operators/math/unpooling.h
Outdated
                  const framework::Tensor& indices,
                  framework::Tensor * input_grad,
                  const framework::Tensor& output,
                  const framework::Tensor& output_grad);
Put the input_grad as the last parameter.
https://google.github.io/styleguide/cppguide.html#Function_Parameter_Ordering
When defining a function, parameter order is: inputs, then outputs.
done
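A sketch of the reordered declaration under the Google rule quoted above (argument types are taken from this header; the final argument list may differ slightly):

template <typename Place, typename T>
class Unpool2dMaxGradFunctor {
 public:
  void operator()(const platform::DeviceContext& context,
                  const framework::Tensor& input,       // inputs first
                  const framework::Tensor& indices,
                  const framework::Tensor& output,
                  const framework::Tensor& output_grad,
                  framework::Tensor* input_grad);       // output parameter last
};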
paddle/operators/math/unpooling.cc
Outdated
      for (int i = 0; i < input_feasize; ++i) {
        int index = indices_data[i];
        PADDLE_ENFORCE(index < output_feasize, "err index in unpooling!");
        output_data[index] = input_data[i];
Why isn't this output_data[index] += input_data[i]; ? Also, Unpool2dMaxFunctor doesn't need to be written at all; you can just reuse MaxPool2dWithIndexGradFunctor directly.
Wouldn't that make the code structure a bit messy? I'd rather not do it that way.
 * All tensors are in NCHW format.
 */
template <typename T>
class Unpool2dMaxGradFunctor<platform::GPUPlace, T> {
Same as the CPU version: MaxPool2dWithIndexGradFunctor can be reused here too, and Unpool2dMaxFunctor can be removed.
Wouldn't that make the code structure a bit messy? I'd rather not do it that way.
paddle/operators/unpool_op.cc
Outdated
        "(vector defalut:{0,0}), "
        "paddings (height, width) of unpooling operator.")
        .SetDefault({0, 0});
    AddAttr<std::string>("unpoolingtype",
unpoolingtype -> unpooling_type
done
done thanks
paddle/operators/math/unpooling.cc
Outdated
      for (int i = 0; i < input_feasize; ++i) {
        int index = indices_data[i];
        PADDLE_ENFORCE(index < output_feasize, "err index in unpooling!");
        output_data[index] = input_data[i];
Wouldn't that make the code structure a bit messy? I'd rather not do it that way.
 * All tensors are in NCHW format.
 */
template <typename T>
class Unpool2dMaxGradFunctor<platform::GPUPlace, T> {
Wouldn't that make the code structure a bit messy? I'd rather not do it that way.
paddle/operators/math/unpooling.h
Outdated
#pragma once
#include "paddle/framework/tensor.h"
#include "paddle/platform/device_context.h"
#include "paddle/platform/hostdevice.h"
done
paddle/operators/math/unpooling.h
Outdated
                  const framework::Tensor& indices,
                  framework::Tensor * input_grad,
                  const framework::Tensor& output,
                  const framework::Tensor& output_grad);
done
paddle/operators/unpool_op.cc
Outdated
        "(vector defalut:{0,0}), "
        "paddings (height, width) of unpooling operator.")
        .SetDefault({0, 0});
    AddAttr<std::string>("unpoolingtype",
done
paddle/operators/math/unpooling.cc
Outdated
    int input_feasize = input_height * input_width;
    int output_feasize = output_height * output_width;
    const T* input_data = input.data<T>();
    const T * indices_data = indices.data<T>();
You specify the indices type as int16 in https://github.com/sweetsky0901/Paddle/blob/27cf7f3376e2005522dbba8c964cb2e449b46380/python/paddle/v2/fluid/tests/test_unpool_op.py#L56, so using const T * indices_data = indices.data<T>(); here must be wrong. Also, Paddle does not support int16 currently, so you must specify int32 in the test and use const int * indices_data = indices.data<int>(); or add a new type parameter to the template, like here.
done
The framework checks the input types by default; they must all be the same. If they are not, you need to override:
 protected:
  framework::OpKernelType GetKernelType(
      const framework::ExecutionContext& ctx) const override {
    return framework::OpKernelType(
        framework::ToDataType(ctx.Input<framework::Tensor>("X")->type()),
        ctx.device_context());
  }
Previously, to dodge this check, the indices were passed as float and then force-cast in the actual computation.
That's not it. When registering an op, two kernels are usually registered (double/float). At run time, which one is actually used (double or float)? That is decided by calling the GetKernelType function: if X is float, the float kernel is called; if it is double, the double kernel is called.
Is there documentation for this?
done thanks
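For context, a hedged sketch of how an op is typically registered with both element types, so the GetKernelType override above can pick between them at run time (the macro usage mirrors other operators of that era; the kernel class name here is illustrative):

// In unpool_op.cc (illustrative):
namespace ops = paddle::operators;
REGISTER_OP_CPU_KERNEL(
    unpool, ops::UnpoolKernel<paddle::platform::CPUPlace, float>,
    ops::UnpoolKernel<paddle::platform::CPUPlace, double>);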
paddle/operators/math/unpooling.cc
Outdated
    int input_feasize = input_height * input_width;
    int output_feasize = output_height * output_width;
    const T* input_data = input.data<T>();
    const T * indices_data = indices.data<T>();
done
The framework checks the input types by default; they must all be the same. If they are not, you need to override:
 protected:
  framework::OpKernelType GetKernelType(
      const framework::ExecutionContext& ctx) const override {
    return framework::OpKernelType(
        framework::ToDataType(ctx.Input<framework::Tensor>("X")->type()),
        ctx.device_context());
  }
Previously, to dodge this check, the indices were passed as float and then force-cast in the actual computation.
paddle/operators/math/unpooling.cc
Outdated
  for (int c = 0; c < output_channels; ++c) {
    for (int i = 0; i < input_feasize; ++i) {
      int index = indices_data[i];
      PADDLE_ENFORCE(index < output_feasize, "err index in unpooling!");
This check can be done outside the loop: convert indices to an EigenMatrix or EigenVector, call its maximum method to get the largest element of indices, and then check whether that is out of bounds. The CUDA implementation can do the same, which avoids the bounds check inside the CUDA kernel.
Calling maximum actually adds many extra comparisons; I'm not really in favor of writing it that way.
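For illustration only (the author declined the change above), a variant of the reviewer's idea that uses std::max_element from <algorithm> in place of Eigen's maximum: validate the whole index range once per feature map, then copy without a per-element check.

      // Hypothetical sketch; variable names follow the surrounding loop.
      PADDLE_ENFORCE(
          *std::max_element(indices_data, indices_data + input_feasize) < output_feasize,
          "err index in unpooling!");
      for (int i = 0; i < input_feasize; ++i) {
        int index = indices_data[i];
        output_data[index] = input_data[i];
      }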
paddle/operators/math/unpooling.cu
Outdated
                  const int channels,
                  T* output_data,
                  const int output_height,
                  const int output_width) {
Please note the order of parameters.
paddle/operators/unpool_op.cc
Outdated
        "(Tensor) The input tensor of unpool operator. "
        "The format of input tensor is NCHW. Where N is batch size, C is the "
        "number of channels, H and W is the height and width of feature.");
    AddInput("Y",
I don't think the name of the variable is intuitive. It can be changed to Masks or Indices.
agree with @chengduoZH
done
  }
};

int OutputSize(int input_size, int ksize, int padding, int stride) {
Please add inline.
Not changing this for now.
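A sketch of the requested change (the body is the output-size formula already used in this PR's Python test; adding inline is the only modification asked for):

inline int OutputSize(int input_size, int ksize, int padding, int stride) {
  // Inverse of the pooling size formula:
  //   output = (input - 1) * stride - 2 * padding + ksize
  return (input_size - 1) * stride - 2 * padding + ksize;
}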
paddle/operators/unpool_op.cc
Outdated
    PADDLE_ENFORCE(in_x_dims.size() == 4,
                   "Unpooling intput must be of 4-dimensional.");
    for (int i = 0; i < 4; ++i) {
You can directly use the "==" operator to compare in_x_dims and in_y_dims.
Lines 42 to 44 in 33fa2df:
  bool operator==(const Dim<i>& o) const {
    return (head == o.head) && (tail == o.tail);
  }
I'd prefer to wait until there is some documentation for the key data types and then change these in a batch.
PADDLE_ENFORCE_EQ(in_x_dims, in_y_dims); is enough.
done
    math::SetConstant<Place, T> zero;
    if (in_x_grad) {
      in_x_grad->mutable_data<T>(context.GetPlace());
      zero(device_ctx, in_x_grad, static_cast<T>(0));
You can directly use this function.
Is the way it's written now also acceptable?
paddle/operators/unpool_op.h
Outdated
        context.Input<framework::Tensor>(framework::GradVarName("Out"));
    framework::Tensor* in_x_grad =
        context.Output<framework::Tensor>(framework::GradVarName("X"));
    std::string unpooling_type = context.Attr<std::string>("unpoolingtype");
unpoolingtype -> unpooling_type
done
paddle/operators/unpool_op.h
Outdated
      in_x_grad->mutable_data<T>(context.GetPlace());
      zero(device_ctx, in_x_grad, static_cast<T>(0));
    }
    math::Unpool2dMaxGradFunctor<Place, T, T2> unpool2d_max_backward;
Pooling has maxPool and avgPool; currently Unpooling only has the max-pool upsampling. Should we also add the avg-pool upsampling? @qingqing01 @NHZlX
We can leave avg aside for now and add it when there is a need.
paddle/operators/math/unpooling.h
Outdated
namespace operators {
namespace math {

template <typename Place, typename T, typename T2>
typename T, typename T2 -> typename T1, typename T2
                hidx = (index - index % out_W) / out_W
                widx = index % out_W
                out[nidx, cidx, int(hidx), int(widx)] = \
                    input[nidx, cidx, h, w]
I think the Python code could be written a bit more simply.
Since this is test code, being correct is good enough for now.
done
paddle/operators/math/unpooling.cc
Outdated
    int input_feasize = input_height * input_width;
    int output_feasize = output_height * output_width;
    const T* input_data = input.data<T>();
    const T * indices_data = indices.data<T>();
Is there documentation for this?
paddle/operators/math/unpooling.cc
Outdated
  for (int c = 0; c < output_channels; ++c) {
    for (int i = 0; i < input_feasize; ++i) {
      int index = indices_data[i];
      PADDLE_ENFORCE(index < output_feasize, "err index in unpooling!");
Calling maximum actually adds many extra comparisons; I'm not really in favor of writing it that way.
  }
};

int OutputSize(int input_size, int ksize, int padding, int stride) {
Not changing this for now.
paddle/operators/unpool_op.cc
Outdated
    PADDLE_ENFORCE(in_x_dims.size() == 4,
                   "Unpooling intput must be of 4-dimensional.");
    for (int i = 0; i < 4; ++i) {
I'd prefer to wait until there is some documentation for the key data types and then change these in a batch.
    if (output_data) {
      math::SetConstant<Place, T> set_zero;
      set_zero(context.device_context(), out, static_cast<T>(0));
    }
I'm in the habit of checking every pointer; strictly speaking it should check the negated condition and throw an exception.
paddle/operators/unpool_op.h
Outdated
        context.Input<framework::Tensor>(framework::GradVarName("Out"));
    framework::Tensor* in_x_grad =
        context.Output<framework::Tensor>(framework::GradVarName("X"));
    std::string unpooling_type = context.Attr<std::string>("unpoolingtype");
done
    math::SetConstant<Place, T> zero;
    if (in_x_grad) {
      in_x_grad->mutable_data<T>(context.GetPlace());
      zero(device_ctx, in_x_grad, static_cast<T>(0));
Is the way it's written now also acceptable?
                hidx = (index - index % out_W) / out_W
                widx = index % out_W
                out[nidx, cidx, int(hidx), int(widx)] = \
                    input[nidx, cidx, h, w]
Since this is test code, being correct is good enough for now.
paddle/operators/unpool_op.cc
Outdated
ksize – Size of the max pooling window.
stride – Stride of the max pooling window.
"It is set to kernel_size by default.
padding – Padding that was added to the input"
Lines 41-48 already explain the meaning of ksize, stride, and padding, so there is no need to repeat them here. The doc could instead give references, the formula for computing the output shape, and so on. For example, see PyTorch: http://pytorch.org/docs/master/nn.html?highlight=unpool#torch.nn.MaxUnpool2d
done
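For reference, the output-shape formula from the linked PyTorch MaxUnpool2d doc, which matches the naive test in this PR:
H_out = (H_in - 1) * stride[0] - 2 * padding[0] + kernel_size[0]
W_out = (W_in - 1) * stride[1] - 2 * padding[1] + kernel_size[1]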
paddle/operators/unpool_op.cc
Outdated
    PADDLE_ENFORCE(in_x_dims.size() == 4,
                   "Unpooling intput must be of 4-dimensional.");
    for (int i = 0; i < 4; ++i) {
PADDLE_ENFORCE_EQ(in_x_dims, in_y_dims); is enough.
paddle/operators/math/unpooling.cu
Outdated
  int bsize = input_height * input_width * channels;
  int csize = input_height * input_width;
  int out_bsize = output_height * output_width * channels;
  int out_csize = output_height * output_width;
The naming feels a bit arbitrary. For example:
  int in_c_stride = input_height * input_width;
  int in_n_stride = in_c_stride * channels;
  int out_c_stride = output_height * output_width;
  int out_n_stride = out_c_stride * channels;
The data is NCHW, so n_stride and c_stride are used here; other better names are fine too, but bsize, csize, etc. feel a bit arbitrary.
done
paddle/operators/math/unpooling.cu
Outdated
    int cidx = boffset / csize;
    int out_offset = bidx * out_bsize + cidx * out_csize;
    int out_index = indices_data[i];
    PADDLE_ASSERT(out_index < (output_height * output_width));
PADDLE_ASSERT(out_index < out_c_stride);
done
paddle/operators/math/unpooling.cu
Outdated
  int bsize = input_height * input_width * channels;
  int csize = input_height * input_width;
  int out_bsize = output_height * output_width * channels;
  int out_csize = output_height * output_width;
Same as above.
done
paddle/operators/unpool_op.cc
Outdated
        "(Tensor) The input tensor of unpool operator. "
        "The format of input tensor is NCHW. Where N is batch size, C is the "
        "number of channels, H and W is the height and width of feature.");
    AddInput("Y",
agree with @chengduoZH
paddle/operators/unpool_op.cu.cc
Outdated
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
Don't indent this; see paddle/operators/accuracy_op.h for reference.
done
def unpool2dmax_forward_naive(input, indices, ksize, strides, paddings):
    s0, s1, s2, s3 = input.shape
    out_H=(s2 - 1) * strides[0] - 2 * paddings[0] + ksize[0]
    out_W=(s2 - 1) * strides[1] - 2 * paddings[1] + ksize[1]
Please install the format-checking tool!!! (I said this in the last PR as well.) The formatting here does not follow Python style: there should be spaces around =. Also, don't mix upper and lower case in Python; there are several such places in this Python file, please fix them all together.
I did install the tool!!!!! It just didn't catch this.
done
                arg = x_masked.argmax()
                indices[nidx, cidx, i, j] = \
                    (r_start + arg / self.ksize[1]) * W + \
                    c_start + arg % self.ksize[1]
You don't need to write the max-pooling forward yourself here; just call max_pool2D_forward_naive from python/paddle/v2/fluid/tests/test_pool_max_op.py.
Wouldn't the unit-test files then depend on each other?
                indices[nidx, cidx, i, j] = \
                    (r_start + arg / self.ksize[1]) * W + \
                    c_start + arg % self.ksize[1]
        output = self.Unpool2d_forward_naive(input, indices, self.ksize, \
self.Unpool2d_forward_naive -> unpool2d_forward_naive
remove line 72.
This is meant to stay compatible with a future avg test.
done
paddle/operators/math/unpooling.cu
Outdated
  int bsize = input_height * input_width * channels;
  int csize = input_height * input_width;
  int out_bsize = output_height * output_width * channels;
  int out_csize = output_height * output_width;
done
paddle/operators/math/unpooling.cu
Outdated
    int cidx = boffset / csize;
    int out_offset = bidx * out_bsize + cidx * out_csize;
    int out_index = indices_data[i];
    PADDLE_ASSERT(out_index < (output_height * output_width));
done
paddle/operators/math/unpooling.cu
Outdated
  int bsize = input_height * input_width * channels;
  int csize = input_height * input_width;
  int out_bsize = output_height * output_width * channels;
  int out_csize = output_height * output_width;
done
paddle/operators/math/unpooling.cu
Outdated
    int cidx = boffset / csize;
    int out_offset = bidx * out_bsize + cidx * out_csize;
    int out_index = indices_data[i];
    PADDLE_ASSERT(out_index < (output_height * output_width));
done
paddle/operators/math/unpooling.cu
Outdated
  const T2 * indices_data = indices.data<T2>();
  T* output_data = output->mutable_data<T>(context.GetPlace());
  int nthreads = batch_size * output_channels * input_height * input_width;
  int blocks = (nthreads + 1024 - 1) / 1024;
done
paddle/operators/unpool_op.cc
Outdated
    PADDLE_ENFORCE(in_x_dims.size() == 4,
                   "Unpooling intput must be of 4-dimensional.");
    for (int i = 0; i < 4; ++i) {
done
paddle/operators/unpool_op.cu.cc
Outdated
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
done
def unpool2dmax_forward_naive(input, indices, ksize, strides, paddings):
    s0, s1, s2, s3 = input.shape
    out_H=(s2 - 1) * strides[0] - 2 * paddings[0] + ksize[0]
    out_W=(s2 - 1) * strides[1] - 2 * paddings[1] + ksize[1]
I did install the tool!!!!! It just didn't catch this.
done
                indices[nidx, cidx, i, j] = \
                    (r_start + arg / self.ksize[1]) * W + \
                    c_start + arg % self.ksize[1]
        output = self.Unpool2d_forward_naive(input, indices, self.ksize, \
This is meant to stay compatible with a future avg test.
                arg = x_masked.argmax()
                indices[nidx, cidx, i, j] = \
                    (r_start + arg / self.ksize[1]) * W + \
                    c_start + arg % self.ksize[1]
Wouldn't the unit-test files then depend on each other?
I hope the next PR continues with support for 3D and also addresses the remaining issues from this PR.
                arg = x_masked.argmax()
                indices[nidx, cidx, i, j] = \
                    (r_start + arg / self.ksize[1]) * wsize + \
                    c_start + arg % self.ksize[1]
For the max-pooling computation here, please reuse the code from paddle/v2/fluid/tests/test_pool_max_op.py or paddle/v2/fluid/tests/test_pool2d_op.py.
resolve #5787
Currently only the 2d max unpooling is implemented.