Add CPU and GPU eigh op implementation #34990
Conversation
Thanks for your contribution!
@@ -946,53 +946,42 @@ def __check_input(x, vec):
 def matrix_power(x, n, name=None):
     r"""
     Computes the n-th power of a square matrix or a batch of square matrices.
Don't modify other APIs.
OK.
paddle/fluid/operators/eigh_helper.h
Outdated
  op_inputs[name] = name_vector;
}
auto op =
    framework::OpRegistry::CreateOp(type, op_inputs, op_outputs, attrs);
Does this create other ops inside the eigh operator? This approach is not recommended.
OK, working on the fix.
self.x_np = np.random.random(self.x_shape).astype(self.x_type)

def test_check_output(self):
    self.check_output(no_check_set=['Eigenvectors'])
Explanation for excluding 'Eigenvectors' here:
1. Eigenvalues are unique, but eigenvectors are not.
2. NumPy uses the LAPACK library, while Paddle currently uses the Eigen library.
3. Eigenvectors computed by Eigen can differ from LAPACK's by a sign. For example, Eigen may return [[2, 3], [4, 5]] while LAPACK returns [[-2, 3], [-4, 5]]: the first column differs in sign, but both results are correct.
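The sign ambiguity described above can be demonstrated with a short NumPy sketch (the matrix values are illustrative, not the actual Eigen/LAPACK outputs from this PR):

```python
import numpy as np

# If v is a unit eigenvector of A, then so is -v, so two backends
# (e.g. LAPACK vs. Eigen) may legitimately return opposite signs.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
w, v = np.linalg.eigh(A)                # LAPACK-backed reference result
v_flipped = v * np.array([-1.0, 1.0])   # flip the sign of column 0

# Both sign conventions satisfy A @ V = V @ diag(w).
assert np.allclose(A @ v, v * w)
assert np.allclose(A @ v_flipped, v_flipped * w)

# A sign-insensitive comparison still matches the two results.
assert np.allclose(np.abs(v), np.abs(v_flipped))
```

This is why comparing raw eigenvector entries against NumPy is not a valid check, while comparisons up to sign (or residual checks like `A @ v - v * w`) are.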
Please explain why the sign difference occurs, and also note that Eigenvectors is checked in other unit tests.
LGTM. This PR is mostly done, and other operators depend on it, so I suggest merging it first and addressing the review comments above in a follow-up PR.
std::vector<int64_t> v_dim = {input_dim[1]};
if (rank > 2) {
  v_dim = {batch_size, input_dim[1]};
}
There's no need for an if/else with two branches here.
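The branch-free version being suggested can be sketched in NumPy terms (the helper name is hypothetical, not the PR's actual C++ code): the eigenvalue output shape is always the input shape with the last axis dropped, for single and batched inputs alike.

```python
import numpy as np

def eigenvalue_shape(input_shape):
    # Works for (n, n) and (batch, n, n) alike: drop the last axis.
    return input_shape[:-1]

# Matches the shapes np.linalg.eigh actually produces.
single = np.eye(4)
batched = np.stack([np.eye(4)] * 8)
assert np.linalg.eigh(single)[0].shape == eigenvalue_shape(single.shape)
assert np.linalg.eigh(batched)[0].shape == eigenvalue_shape(batched.shape)
```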
OP_INOUT_CHECK(ctx->HasInput("Eigenvectors"), "Input", "Eigenvectors",
               "EighGrad");
OP_INOUT_CHECK(ctx->HasInputs(framework::GradVarName("Eigenvalues")),
               "Input", "Eigenvalues@GRAD", "EighGrad");
Then this should use ctx->HasInput rather than ctx->HasInputs (no 's')?
template <typename ValueType, typename T>
class EighGPUKernel : public framework::OpKernel<T> {
 public:
  void Compute(const framework::ExecutionContext &ctx) const override {
The implementation is identical to EighKernel; the GPU can register EighKernel directly, so there is no need to implement a separate EighGPUKernel.
void Compute(const framework::ExecutionContext& ctx) const override {
  auto& x_grad = *ctx.Output<framework::Tensor>(framework::GradVarName("X"));
  x_grad.mutable_data<T>(ctx.GetPlace());
  auto& output_w_var = *ctx.Input<Tensor>("Eigenvalues");
Why does the variable name have a _var suffix? Just use output_w?
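As background for the backward kernel quoted above: for a symmetric A = V diag(w) Vᵀ with distinct eigenvalues, the gradient of a single eigenvalue has the closed form ∂w_i/∂A = v_i v_iᵀ. A small NumPy finite-difference check of that identity (illustrative only, independent of the PR's EighGrad implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = (A + A.T) / 2                        # symmetric input
w, v = np.linalg.eigh(A)

i = 0
grad = np.outer(v[:, i], v[:, i])        # analytic d(w_i)/dA

# Finite-difference check along a random symmetric direction E:
# the directional derivative of w_i should equal <grad, E>.
E = rng.standard_normal((4, 4))
E = (E + E.T) / 2
eps = 1e-6
fd = (np.linalg.eigh(A + eps * E)[0][i]
      - np.linalg.eigh(A - eps * E)[0][i]) / (2 * eps)
assert np.isclose(fd, np.sum(grad * E), atol=1e-5)
```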
using EigenTensor = framework::EigenTensor<T, D, MajorType, IndexType>;
template <typename T, int MajorType = Eigen::RowMajor,
          typename IndexType = Eigen::DenseIndex>
using EigenVector = framework::EigenVector<T, MajorType, IndexType>;
Can these Eigen declarations be removed now?
}  // namespace math
}  // namespace operators
}  // namespace paddle
Is all of the code implemented in header files? That may slow down compilation.
// symmetric matrices, and uses the variable compute_vectors to
// control whether to return the eigenvectors.
template <typename DeviceContext, typename ValueType, typename T>
struct MatrixEighFunctorCPU {
Name both the CPU and GPU functors MatrixEighFunctor, and provide specializations for CPUDeviceContext and CUDADeviceContext.
}

// Support x and y are different data types
Tensor Div_(const Tensor& x, const Tensor& y) {
Function names shouldn't use a trailing _ to distinguish behavior. Also, couldn't this be implemented directly in Div by checking whether x and y have the same dtype? Though in fact there are requirements on the dtypes of x and y: one is real and the other is the corresponding complex type?
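For context on the mixed-dtype division discussed here: in the Hermitian case one operand is real (e.g. eigenvalue differences) and the other is the matching complex type. NumPy illustrates the intended type promotion (a behavioural sketch only, not the PR's Div_ implementation):

```python
import numpy as np

x = np.array([1.0 + 2.0j, 3.0 - 1.0j], dtype=np.complex128)  # complex
y = np.array([2.0, 4.0], dtype=np.float64)                    # real

out = x / y  # result is promoted to the complex type
assert out.dtype == np.complex128
assert np.allclose(out, [0.5 + 1.0j, 0.75 - 0.25j])
```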
  return out;
}

framework::Tensor Sub_(const framework::Tensor& x,
Why reimplement Sub? If it's because the original Sub doesn't call InverseFunctor, I'd consider that a bug in Sub, and the original Sub function should be fixed directly. Also, refer to Div for the Sub change; the GPU doesn't need InverseFunctor.
self.x_np = np.random.random(self.x_shape).astype(self.x_type)

def test_check_output(self):
    self.check_output(no_check_set=['Eigenvectors'])
Please explain why the sign difference occurs, and also note that Eigenvectors is checked in other unit tests.
LGTM for no_check_set
lgtm
lgtm
LGTM
LGTM
PR types
New features
PR changes
OPs
Describe
Add the paddle.linalg.eigh() API, with support for both CPU and GPU computation.
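For reference, the semantics of an eigh-style API can be sketched with NumPy (the snippet below uses NumPy only; paddle.linalg.eigh is the API this PR adds, and its exact signature should be checked against the Paddle docs):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, v = np.linalg.eigh(A)            # eigenvalues in ascending order

assert np.allclose(w, [1.0, 3.0])   # eigenvalues of [[2, 1], [1, 2]]
assert np.allclose(A @ v, v * w)    # A v_i = w_i v_i, column-wise
```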