[Prim] add reduce_as op for paddle #63064

Merged 42 commits on Apr 19, 2024
Changes from 14 commits
Commits (42)
f9ba2e5
add the sum_as op for paddle - part(forward)
zeroRains Mar 25, 2024
6ab282a
fix the test bug
zeroRains Mar 25, 2024
7b2b5d6
add the sum_as_grad, but there is a bug in the test
zeroRains Mar 27, 2024
f9a62ad
remove unnecessary args, but backward computing still has a bug
zeroRains Mar 28, 2024
b7760da
fix the Python op registration
zeroRains Mar 29, 2024
5c8bc4c
modify the test and add some cases
zeroRains Mar 30, 2024
a16be62
modify the description of python api
zeroRains Mar 30, 2024
da822f5
fix typo
zeroRains Mar 30, 2024
38bd332
fix the bug in the test written based on OpTest
zeroRains Mar 31, 2024
9e7fa93
remove the useless function in test
zeroRains Mar 31, 2024
d8c22f2
modify the size of the test tensor
zeroRains Mar 31, 2024
6101781
Update test/legacy_test/test_sum_as_op.py
cyber-pioneer Apr 1, 2024
91d6d2b
Update test/legacy_test/test_sum_as_op.py
cyber-pioneer Apr 1, 2024
11e96ed
fix code style
zeroRains Apr 1, 2024
6f4325d
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zeroRains Apr 4, 2024
4636403
add dynamic shape test and modify the doc
zeroRains Apr 4, 2024
89f0d8b
Update test_sum_as_op.py
zeroRains Apr 4, 2024
e7dd69a
Update test_sum_as_op.py
zeroRains Apr 4, 2024
1339391
fix the bug in convert_np_dtype_to_dtype_
zeroRains Apr 5, 2024
9c06455
Merge branch 'sum' of https://github.com/zeroRains/Paddle into sum
zeroRains Apr 5, 2024
76ff0e4
Update core.py
zeroRains Apr 5, 2024
05ad7c1
change the variable name
zeroRains Apr 7, 2024
ee8cb10
Merge branch 'sum' of https://github.com/zeroRains/Paddle into sum
zeroRains Apr 7, 2024
19a39ef
remove spaces
zeroRains Apr 7, 2024
b9b07cc
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zeroRains Apr 7, 2024
437a6cd
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zeroRains Apr 7, 2024
7141a70
add an assert for get_reduce_dims
zeroRains Apr 8, 2024
009cf48
fix the bug
zeroRains Apr 9, 2024
9fc3081
Update common_shape.h
zeroRains Apr 9, 2024
21157bc
modify sum_as to reduce_as
zeroRains Apr 9, 2024
c7ffb0a
Merge branch 'sum' of https://github.com/zeroRains/Paddle into sum
zeroRains Apr 9, 2024
c96d295
fix the file name
zeroRains Apr 9, 2024
287d4f1
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zeroRains Apr 13, 2024
d5d3d7b
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zeroRains Apr 14, 2024
2d83a1b
Merge commit 'refs/pull/63064/head' of https://github.com/PaddlePaddl…
cyber-pioneer Apr 16, 2024
c6f7e55
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zeroRains Apr 17, 2024
7f2168e
modify the test time
zeroRains Apr 17, 2024
702390a
Merge commit 'refs/pull/63064/head' of https://github.com/PaddlePaddl…
cyber-pioneer Apr 17, 2024
1ce4a5b
fix test case
cyber-pioneer Apr 18, 2024
d8ac719
fix the date
zeroRains Apr 18, 2024
754482f
Merge branch 'develop' into sum
zeroRains Apr 18, 2024
0dfe7fa
fix code style
zeroRains Apr 18, 2024
10 changes: 10 additions & 0 deletions paddle/phi/api/yaml/backward.yaml
@@ -2388,6 +2388,16 @@
kernel :
func : stanh_grad

- backward_op : sum_as_grad
forward : sum_as(Tensor x, Tensor y) -> Tensor(out)
args : (Tensor x, Tensor y, Tensor out_grad)
output : Tensor(x_grad)
infer_meta :
func : UnchangedInferMeta
param : [x]
kernel :
func : sum_as_grad

- backward_op : svd_grad
forward : svd (Tensor x, bool full_matrices = false) -> Tensor(u), Tensor(s), Tensor(vh)
args : (Tensor x, Tensor u, Tensor vh, Tensor s, Tensor u_grad, Tensor vh_grad, Tensor s_grad, bool full_matrices)
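As defined in the forward entry below, sum_as reduces x down to y's shape, so its gradient is simply out_grad broadcast back over the reduced dimensions; UnchangedInferMeta with param : [x] accordingly gives x_grad the same metadata as x. A minimal NumPy sketch of that shape contract, with hypothetical shapes (illustration only, not the kernel code):

import numpy as np

# Hypothetical shapes: x is (2, 3, 4) and y is (4,), so sum_as reduces dims 0 and 1.
x = np.random.rand(2, 3, 4)
out_grad = np.ones((4,))                     # gradient arriving with the output's shape

# The gradient of a sum over dims 0 and 1 is out_grad broadcast back over those dims.
x_grad = np.broadcast_to(out_grad, x.shape)
print(x_grad.shape)                          # (2, 3, 4), matching x as param : [x] implies
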
10 changes: 10 additions & 0 deletions paddle/phi/api/yaml/ops.yaml
@@ -2751,6 +2751,16 @@
func : stanh
backward : stanh_grad

- op : sum_as
args : (Tensor x, Tensor y)
output : Tensor(out)
Contributor (review comment): The input parameter y here only provides the output's dimension information, but the name y does not convey that meaning. Could it be changed to a more descriptive name such as target?

Contributor Author: Done

infer_meta :
func : SumAsInferMeta
kernel :
func : sum_as
data_type : x
backward : sum_as_grad

- op : svd
args : (Tensor x, bool full_matrices = false)
output : Tensor(u), Tensor(s), Tensor(vh)
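At this stage of the review the op is still called sum_as (commit 21157bc later renames it to reduce_as). Its semantics: y only supplies the target shape, and x is summed over the extra leading axes and over any axis whose extent differs from y, which is exactly the naming concern raised in the comment above. A rough NumPy reference, assuming the usual broadcast-compatible shapes (a sketch, not the registered kernel):

import numpy as np

def sum_as_reference(x, y):
    # Sum x over the axes that y does not have, or has with a different extent.
    diff = x.ndim - y.ndim
    axes = list(range(diff))                              # extra leading axes of x
    axes += [i for i in range(diff, x.ndim)
             if x.shape[i] != y.shape[i - diff]]          # mismatched (broadcast) axes
    return x.sum(axis=tuple(axes)).reshape(y.shape)

x = np.arange(24, dtype=np.float32).reshape(2, 3, 4)
y = np.zeros((3, 4), dtype=np.float32)
print(sum_as_reference(x, y).shape)                       # (3, 4), i.e. y's shape
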
12 changes: 12 additions & 0 deletions paddle/phi/infermeta/binary.cc
@@ -3047,6 +3047,18 @@ void SequenceMaskInferMeta(const MetaTensor& x,
y->set_dtype(out_dtype);
}

void SumAsInferMeta(const MetaTensor& x, const MetaTensor& y, MetaTensor* out) {
DataType out_dtype;
if (x.dtype() == DataType::BOOL || x.dtype() == DataType::INT32) {
out_dtype = DataType::INT64;
} else {
out_dtype = x.dtype();
}
out->set_dtype(out_dtype);
out->set_dims(y.dims());
out->set_layout(x.layout());
}

void SoftmaxMaskFuseInferMeta(const MetaTensor& x,
const MetaTensor& mask,
MetaTensor* out) {
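SumAsInferMeta takes the output dims from y and applies the same dtype promotion that paddle.sum applies by default: bool and int32 inputs produce an int64 output, everything else keeps x's dtype. A one-function sketch of that rule (illustration, not the C++ code):

def sum_as_out_dtype(x_dtype: str) -> str:
    # bool and int32 are promoted to int64; all other dtypes pass through.
    return "int64" if x_dtype in ("bool", "int32") else x_dtype

assert sum_as_out_dtype("bool") == "int64"
assert sum_as_out_dtype("int32") == "int64"
assert sum_as_out_dtype("float32") == "float32"
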
2 changes: 2 additions & 0 deletions paddle/phi/infermeta/binary.h
@@ -524,6 +524,8 @@ void ShuffleBatchInferMeta(const MetaTensor& x,

);

void SumAsInferMeta(const MetaTensor& x, const MetaTensor& y, MetaTensor* out);

void SoftmaxMaskFuseInferMeta(const MetaTensor& x,
const MetaTensor& mask,
MetaTensor* out);
59 changes: 59 additions & 0 deletions paddle/phi/kernels/cpu/sum_as_grad_kernel.cc
@@ -0,0 +1,59 @@

// Copyright (c) 2024 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "paddle/phi/kernels/sum_as_kernel.h"

#include "paddle/phi/core/device_context.h"
#include "paddle/phi/core/kernel_registry.h"
#include "paddle/phi/kernels/impl/reduce_grad.h"

namespace phi {

template <typename T, typename Context>
void SumAsGradKernel(const Context& dev_ctx,
const DenseTensor& x,
const DenseTensor& y,
const DenseTensor& out_grad,
DenseTensor* x_grad) {
auto reduce_dim = phi::funcs::GetReduceDims(x, y);
bool reduce_all = recompute_reduce_all(x, reduce_dim);
ReduceGradKernel<Context, T, funcs::SumGradFunctor, true>(dev_ctx,
x,
paddle::none,
out_grad,
reduce_dim,
false,
reduce_all,
x_grad);
}

} // namespace phi

PD_REGISTER_KERNEL(sum_as_grad,
CPU,
ALL_LAYOUT,
phi::SumAsGradKernel,
bool,
float,
double,
phi::dtype::float16,
phi::dtype::bfloat16,
int16_t,
int,
int64_t,
uint8_t,
int8_t) {
kernel->OutputAt(0).SetDataType(phi::DataType::UNDEFINED);
}
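The CPU backward path reuses ReduceGradKernel with SumGradFunctor over the axes returned by GetReduceDims, and the registration leaves the output dtype UNDEFINED so it is resolved at runtime rather than fixed per registered type. For intuition, a small finite-difference check in NumPy (hypothetical shapes, not the kernel) confirming that the gradient of such a reduction is a plain broadcast of out_grad:

import numpy as np

def sum_as_ref(x, y_shape):
    diff = x.ndim - len(y_shape)
    axes = tuple(range(diff)) + tuple(
        i for i in range(diff, x.ndim) if x.shape[i] != y_shape[i - diff])
    return x.sum(axis=axes).reshape(y_shape)

x = np.random.rand(2, 3)
y_shape = (3,)
eps = 1e-6

# With out_grad = 1 everywhere, the analytic x_grad is a broadcast of ones to x's shape.
analytic = np.broadcast_to(np.ones(y_shape), x.shape)
numeric = np.zeros_like(x)
for idx in np.ndindex(*x.shape):
    xp, xm = x.copy(), x.copy()
    xp[idx] += eps
    xm[idx] -= eps
    numeric[idx] = (sum_as_ref(xp, y_shape).sum() - sum_as_ref(xm, y_shape).sum()) / (2 * eps)
np.testing.assert_allclose(numeric, analytic, rtol=1e-4, atol=1e-4)
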
50 changes: 50 additions & 0 deletions paddle/phi/kernels/cpu/sum_as_kernel.cc
@@ -0,0 +1,50 @@

// Copyright (c) 2024 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "paddle/phi/kernels/sum_as_kernel.h"

#include "paddle/phi/core/device_context.h"
#include "paddle/phi/core/kernel_registry.h"
#include "paddle/phi/kernels/cpu/reduce.h"

namespace phi {

template <typename T, typename Context>
void SumAsKernel(const Context& dev_ctx,
const DenseTensor& x,
const DenseTensor& y,
DenseTensor* out) {
auto reduce_dim = phi::funcs::GetReduceDims(x, y);
bool reduce_all = recompute_reduce_all(x, reduce_dim);
phi::Reduce<CPUContext, T, phi::funcs::SumFunctor>(
dev_ctx, x, reduce_all, reduce_dim, false, out->type(), out);
}

} // namespace phi

PD_REGISTER_KERNEL(sum_as,
CPU,
ALL_LAYOUT,
phi::SumAsKernel,
bool,
float,
double,
phi::dtype::float16,
phi::dtype::bfloat16,
int16_t,
int,
int64_t,
uint8_t,
int8_t) {}
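The forward CPU kernel hands the axes from GetReduceDims to phi::Reduce with SumFunctor and keep_dim=false. Dropping the reduced axes works out because, in the broadcast-compatible case, every mismatched axis of y has extent 1, so the raw reduction already holds exactly y's element count and can carry the dims that SumAsInferMeta set on out. A NumPy illustration of that bookkeeping, with hypothetical shapes:

import numpy as np

# x is (4, 3, 5) and y is (4, 1, 5); GetReduceDims picks axis 1 (extent 3 vs 1).
x = np.random.rand(4, 3, 5)
y_shape = (4, 1, 5)

raw = x.sum(axis=1)            # keepdims=False drops the reduced axis -> shape (4, 5)
out = raw.reshape(y_shape)     # same 20 elements, viewed with y's dims (4, 1, 5)
print(raw.shape, out.shape)
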
18 changes: 18 additions & 0 deletions paddle/phi/kernels/funcs/common_shape.h
@@ -297,5 +297,23 @@ inline void FCOutputSize(const DDim &in_dims,
out_dims.push_back(w_dims1);
}

inline std::vector<int64_t> GetReduceDims(const DenseTensor &in,
const DenseTensor &out) {
std::vector<int64_t> reduce_dims;
auto in_dims = in.dims();
auto out_dims = out.dims();

int diff = in_dims.size() - out_dims.size();
for (int i = 0; i < diff; ++i) {
reduce_dims.push_back(i);
}
for (int i = diff; i < in_dims.size(); ++i) {
if (out_dims[i - diff] != in_dims[i]) {
reduce_dims.push_back(i);
}
}
return reduce_dims;
}

} // namespace funcs
} // namespace phi
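GetReduceDims collects two kinds of axes: the extra leading axes of in, and any remaining axis whose extent differs from the aligned axis of out. Note that nothing here checks that a mismatched axis of out actually has extent 1; a later commit (7141a70, "add an assert for get_reduce_dims") tightens this. A direct Python transcription for checking shapes by hand (a sketch, not the C++ helper):

def get_reduce_dims(in_shape, out_shape):
    diff = len(in_shape) - len(out_shape)
    dims = list(range(diff))                                   # extra leading axes
    dims += [i for i in range(diff, len(in_shape))
             if in_shape[i] != out_shape[i - diff]]            # mismatched axes
    return dims

print(get_reduce_dims((2, 3, 4), (4,)))       # [0, 1]
print(get_reduce_dims((5, 1, 6), (5, 1, 6)))  # []
print(get_reduce_dims((4, 3, 5), (3, 1)))     # [0, 2]
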
69 changes: 69 additions & 0 deletions paddle/phi/kernels/gpu/sum_as_grad_kernel.cu
@@ -0,0 +1,69 @@

// Copyright (c) 2024 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "paddle/phi/kernels/sum_as_grad_kernel.h"

#include "paddle/phi/backends/gpu/gpu_context.h"
#include "paddle/phi/core/kernel_registry.h"
#include "paddle/phi/kernels/funcs/reduce_function.h"
#include "paddle/phi/kernels/gpu/reduce_grad.h"

namespace phi {

template <typename T, typename Context>
void SumAsGradKernel(const Context& dev_ctx,
const DenseTensor& x,
const DenseTensor& y,
const DenseTensor& out_grad,
DenseTensor* x_grad) {
auto reduce_dim = phi::funcs::GetReduceDims(x, y);
bool reduce_all = recompute_reduce_all(x, reduce_dim);
auto update_dims = common::vectorize(x.dims());
for (auto i : reduce_dim) {
update_dims[i] = 1;
}

DenseTensor new_out_grad(out_grad.type());
new_out_grad.ShareDataWith(out_grad);
new_out_grad.Resize(common::make_ddim(update_dims));

dev_ctx.Alloc(x_grad, x.dtype());
using MPType = typename phi::dtype::MPTypeTrait<T>::Type;
phi::ReduceGrad<phi::kps::IdentityFunctor<T, MPType>>(
dev_ctx,
&new_out_grad,
x_grad,
out_grad.dtype(),
phi::kps::IdentityFunctor<T, MPType>());
}

} // namespace phi

PD_REGISTER_KERNEL(sum_as_grad,
GPU,
ALL_LAYOUT,
phi::SumAsGradKernel,
bool,
float,
double,
phi::dtype::float16,
phi::dtype::bfloat16,
int16_t,
int,
int64_t,
uint8_t,
int8_t) {
kernel->OutputAt(0).SetDataType(phi::DataType::UNDEFINED);
}
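The GPU backward path does the expansion in two steps: it first views out_grad with size-1 entries re-inserted at the reduced axes (the update_dims resize), then lets ReduceGrad with an IdentityFunctor broadcast that view up to x's shape. The same two steps in NumPy, with hypothetical shapes:

import numpy as np

x_shape = (2, 3, 4)
reduce_dims = [0, 2]                     # hypothetical result of GetReduceDims
out_grad = np.random.rand(3)             # gradient with the reduced axes removed

update_dims = list(x_shape)
for d in reduce_dims:
    update_dims[d] = 1                   # (1, 3, 1): the keep-dim view of out_grad

x_grad = np.broadcast_to(out_grad.reshape(update_dims), x_shape)
print(x_grad.shape)                      # (2, 3, 4)
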
49 changes: 49 additions & 0 deletions paddle/phi/kernels/gpu/sum_as_kernel.cu
@@ -0,0 +1,49 @@

// Copyright (c) 2024 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "paddle/phi/kernels/sum_as_kernel.h"

#include "paddle/phi/backends/gpu/gpu_context.h"
#include "paddle/phi/core/kernel_registry.h"
#include "paddle/phi/kernels/reduce_sum_kernel.h"

namespace phi {

template <typename T, typename Context>
void SumAsKernel(const Context& dev_ctx,
const DenseTensor& x,
const DenseTensor& y,
DenseTensor* out) {
auto reduce_dim = phi::funcs::GetReduceDims(x, y);
dev_ctx.template Alloc<T>(out);
phi::SumKernel<T, Context>(dev_ctx, x, reduce_dim, out->type(), false, out);
}

} // namespace phi

PD_REGISTER_KERNEL(sum_as,
GPU,
ALL_LAYOUT,
phi::SumAsKernel,
bool,
float,
double,
phi::dtype::float16,
phi::dtype::bfloat16,
int16_t,
int,
int64_t,
uint8_t,
int8_t) {}
31 changes: 31 additions & 0 deletions paddle/phi/kernels/sum_as_grad_kernel.h
@@ -0,0 +1,31 @@
// Copyright (c) 2024 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#pragma once

#include "paddle/phi/core/dense_tensor.h"
#include "paddle/phi/core/device_context.h"
#include "paddle/phi/kernels/funcs/common_shape.h"
#include "paddle/phi/kernels/funcs/reduce_functor.h"

namespace phi {

template <typename T, typename Context>
void SumAsGradKernel(const Context& dev_ctx,
const DenseTensor& x,
const DenseTensor& y,
const DenseTensor& out_grad,
DenseTensor* x_grad);

} // namespace phi
30 changes: 30 additions & 0 deletions paddle/phi/kernels/sum_as_kernel.h
@@ -0,0 +1,30 @@
// Copyright (c) 2024 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#pragma once

#include "paddle/phi/core/dense_tensor.h"
#include "paddle/phi/core/device_context.h"
#include "paddle/phi/kernels/funcs/common_shape.h"
#include "paddle/phi/kernels/funcs/reduce_functor.h"

namespace phi {

template <typename T, typename Context>
void SumAsKernel(const Context& dev_ctx,
const DenseTensor& x,
const DenseTensor& y,
DenseTensor* out);

} // namespace phi
2 changes: 2 additions & 0 deletions python/paddle/__init__.py
@@ -494,6 +494,7 @@
stanh,
subtract,
sum,
sum_as,
take,
tan,
tan_,
@@ -846,6 +847,7 @@
'ones',
'not_equal',
'sum',
'sum_as',
'nansum',
'nanmean',
'count_nonzero',
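With the symbol exported from paddle, end-to-end usage would look roughly like the following, assuming a build of this branch and the name used at this stage of the review (paddle.sum_as; the op is renamed to reduce_as in later commits):

import paddle

x = paddle.rand([2, 3, 4])
y = paddle.rand([4])
out = paddle.sum_as(x, y)   # sums x over dims 0 and 1 so the result matches y's shape
print(out.shape)            # [4]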