
[Prim] add reduce_as op for paddle #63064

Merged · 42 commits · Apr 19, 2024
f9ba2e5
add the sum_as op for paddle - part(forward)
zeroRains Mar 25, 2024
6ab282a
fix the test bug
zeroRains Mar 25, 2024
7b2b5d6
add the sum_as_grad, but the test still has a bug
zeroRains Mar 27, 2024
f9a62ad
remove unnecessary args, but backward computation still has a bug
zeroRains Mar 28, 2024
b7760da
fix the Python op registration
zeroRains Mar 29, 2024
5c8bc4c
modify the test and add some cases
zeroRains Mar 30, 2024
a16be62
modify the description of python api
zeroRains Mar 30, 2024
da822f5
fix typo
zeroRains Mar 30, 2024
38bd332
fix the bug in the test written based on OpTest
zeroRains Mar 31, 2024
9e7fa93
remove the unused function in the test
zeroRains Mar 31, 2024
d8c22f2
modify the size of the test tensor
zeroRains Mar 31, 2024
6101781
Update test/legacy_test/test_sum_as_op.py
cyber-pioneer Apr 1, 2024
91d6d2b
Update test/legacy_test/test_sum_as_op.py
cyber-pioneer Apr 1, 2024
11e96ed
fix code style
zeroRains Apr 1, 2024
6f4325d
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zeroRains Apr 4, 2024
4636403
add dynamic shape test and modify the doc
zeroRains Apr 4, 2024
89f0d8b
Update test_sum_as_op.py
zeroRains Apr 4, 2024
e7dd69a
Update test_sum_as_op.py
zeroRains Apr 4, 2024
1339391
fix the bug in convert_np_dtype_to_dtype_
zeroRains Apr 5, 2024
9c06455
Merge branch 'sum' of https://github.com/zeroRains/Paddle into sum
zeroRains Apr 5, 2024
76ff0e4
Update core.py
zeroRains Apr 5, 2024
05ad7c1
change the variable name
zeroRains Apr 7, 2024
ee8cb10
Merge branch 'sum' of https://github.com/zeroRains/Paddle into sum
zeroRains Apr 7, 2024
19a39ef
remove spaces
zeroRains Apr 7, 2024
b9b07cc
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zeroRains Apr 7, 2024
437a6cd
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zeroRains Apr 7, 2024
7141a70
add an assert for get_reduce_dims
zeroRains Apr 8, 2024
009cf48
fix the bug
zeroRains Apr 9, 2024
9fc3081
Update common_shape.h
zeroRains Apr 9, 2024
21157bc
rename sum_as to reduce_as
zeroRains Apr 9, 2024
c7ffb0a
Merge branch 'sum' of https://github.com/zeroRains/Paddle into sum
zeroRains Apr 9, 2024
c96d295
fix the file name
zeroRains Apr 9, 2024
287d4f1
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zeroRains Apr 13, 2024
d5d3d7b
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zeroRains Apr 14, 2024
2d83a1b
Merge commit 'refs/pull/63064/head' of https://github.com/PaddlePaddl…
cyber-pioneer Apr 16, 2024
c6f7e55
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zeroRains Apr 17, 2024
7f2168e
modify the test time
zeroRains Apr 17, 2024
702390a
Merge commit 'refs/pull/63064/head' of https://github.com/PaddlePaddl…
cyber-pioneer Apr 17, 2024
1ce4a5b
fix test case
cyber-pioneer Apr 18, 2024
d8ac719
fix the date
zeroRains Apr 18, 2024
754482f
Merge branch 'develop' into sum
zeroRains Apr 18, 2024
0dfe7fa
fix code style
zeroRains Apr 18, 2024
10 changes: 10 additions & 0 deletions paddle/phi/api/yaml/backward.yaml
@@ -1900,6 +1900,16 @@
func : reciprocal_grad
inplace : (out_grad -> x_grad)

- backward_op : reduce_as_grad
forward : reduce_as(Tensor x, Tensor target) -> Tensor(out)
args : (Tensor x, Tensor target, Tensor out_grad)
output : Tensor(x_grad)
infer_meta :
func : UnchangedInferMeta
param : [x]
kernel :
func : reduce_as_grad

- backward_op : relu6_grad
forward : relu6 (Tensor x) -> Tensor(out)
args : (Tensor out, Tensor out_grad)
10 changes: 10 additions & 0 deletions paddle/phi/api/yaml/ops.yaml
@@ -2298,6 +2298,16 @@
inplace : (x -> out)
backward : reciprocal_grad

- op : reduce_as
args : (Tensor x, Tensor target)
output : Tensor(out)
infer_meta :
func : ReduceAsInferMeta
kernel :
func : reduce_as
data_type : x
backward : reduce_as_grad

- op : reindex_graph
args : (Tensor x, Tensor neighbors, Tensor count, Tensor hashtable_value, Tensor hashtable_index)
output : Tensor(reindex_src), Tensor(reindex_dst), Tensor(out_nodes)
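The YAML entry only declares the interface: `reduce_as(x, target)` sums `x` down to `target`'s shape, broadcast-style. The semantics can be sketched as a NumPy reference implementation — the name `reduce_as_ref` and the use of NumPy are illustrative assumptions, not part of this PR:

```python
import numpy as np

def reduce_as_ref(x: np.ndarray, target: np.ndarray) -> np.ndarray:
    # Illustrative sketch of reduce_as: sum x over the leading axes that
    # target lacks, plus any aligned axis where target has extent 1 but
    # x does not (the inverse of a broadcast).
    diff = x.ndim - target.ndim
    axes = list(range(diff))
    for i, (xs, ts) in enumerate(zip(x.shape[diff:], target.shape)):
        if ts == 1 and xs != 1:
            axes.append(i + diff)
        elif ts != xs:
            raise ValueError(f"cannot reduce {x.shape} to {target.shape}")
    return x.sum(axis=tuple(axes)).reshape(target.shape)
```

For example, reducing a `(2, 3, 4)` tensor against a `(3, 4)` target sums over axis 0, and against a `(2, 1, 4)` target sums over axis 1 while keeping the size-1 axis.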
14 changes: 14 additions & 0 deletions paddle/phi/infermeta/binary.cc
@@ -3047,6 +3047,20 @@ void SequenceMaskInferMeta(const MetaTensor& x,
y->set_dtype(out_dtype);
}

void ReduceAsInferMeta(const MetaTensor& x,
const MetaTensor& target,
MetaTensor* out) {
DataType out_dtype;
if (x.dtype() == DataType::BOOL || x.dtype() == DataType::INT32) {
out_dtype = DataType::INT64;
} else {
out_dtype = x.dtype();
}
out->set_dtype(out_dtype);
out->set_dims(target.dims());
out->set_layout(x.layout());
}

void SoftmaxMaskFuseInferMeta(const MetaTensor& x,
const MetaTensor& mask,
MetaTensor* out) {
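`ReduceAsInferMeta` above promotes `bool` and `int32` inputs to `int64` (the accumulation dtype used by sum) and passes every other dtype through. A minimal sketch of that rule — the function name is illustrative, not Paddle API:

```python
def reduce_as_out_dtype(x_dtype: str) -> str:
    # Mirror of the dtype rule in ReduceAsInferMeta: bool and int32
    # accumulate into int64; all other dtypes are preserved.
    return "int64" if x_dtype in ("bool", "int32") else x_dtype
```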
4 changes: 4 additions & 0 deletions paddle/phi/infermeta/binary.h
@@ -524,6 +524,10 @@ void ShuffleBatchInferMeta(const MetaTensor& x,

);

void ReduceAsInferMeta(const MetaTensor& x,
const MetaTensor& target,
MetaTensor* out);

void SoftmaxMaskFuseInferMeta(const MetaTensor& x,
const MetaTensor& mask,
MetaTensor* out);
58 changes: 58 additions & 0 deletions paddle/phi/kernels/cpu/reduce_as_grad_kernel.cc
@@ -0,0 +1,58 @@
// Copyright (c) 2024 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "paddle/phi/kernels/reduce_as_kernel.h"

#include "paddle/phi/core/device_context.h"
#include "paddle/phi/core/kernel_registry.h"
#include "paddle/phi/kernels/impl/reduce_grad.h"

namespace phi {

template <typename T, typename Context>
void ReduceAsGradKernel(const Context& dev_ctx,
const DenseTensor& x,
const DenseTensor& target,
const DenseTensor& out_grad,
DenseTensor* x_grad) {
auto reduce_dim = phi::funcs::GetReduceDims(x, target);
bool reduce_all = recompute_reduce_all(x, reduce_dim);
ReduceGradKernel<Context, T, funcs::SumGradFunctor, true>(dev_ctx,
x,
paddle::none,
out_grad,
reduce_dim,
false,
reduce_all,
x_grad);
}

} // namespace phi

PD_REGISTER_KERNEL(reduce_as_grad,
CPU,
ALL_LAYOUT,
phi::ReduceAsGradKernel,
bool,
float,
double,
phi::dtype::float16,
phi::dtype::bfloat16,
int16_t,
int,
int64_t,
uint8_t,
int8_t) {
kernel->OutputAt(0).SetDataType(phi::DataType::UNDEFINED);
}
49 changes: 49 additions & 0 deletions paddle/phi/kernels/cpu/reduce_as_kernel.cc
@@ -0,0 +1,49 @@
// Copyright (c) 2024 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "paddle/phi/kernels/reduce_as_kernel.h"

#include "paddle/phi/core/device_context.h"
#include "paddle/phi/core/kernel_registry.h"
#include "paddle/phi/kernels/cpu/reduce.h"

namespace phi {

template <typename T, typename Context>
void ReduceAsKernel(const Context& dev_ctx,
const DenseTensor& x,
const DenseTensor& target,
DenseTensor* out) {
auto reduce_dim = phi::funcs::GetReduceDims(x, target);
bool reduce_all = recompute_reduce_all(x, reduce_dim);
phi::Reduce<CPUContext, T, phi::funcs::SumFunctor>(
dev_ctx, x, reduce_all, reduce_dim, false, out->type(), out);
}

} // namespace phi

PD_REGISTER_KERNEL(reduce_as,
CPU,
ALL_LAYOUT,
phi::ReduceAsKernel,
bool,
float,
double,
phi::dtype::float16,
phi::dtype::bfloat16,
int16_t,
int,
int64_t,
uint8_t,
int8_t) {}
32 changes: 32 additions & 0 deletions paddle/phi/kernels/funcs/common_shape.h
@@ -295,5 +295,37 @@ inline void FCOutputSize(const DDim &in_dims,
out_dims.push_back(w_dims1);
}

inline std::vector<int64_t> GetReduceDims(const DenseTensor &in,
const DenseTensor &out) {
std::vector<int64_t> reduce_dims;
auto in_dims = in.dims();
auto out_dims = out.dims();
int diff = in_dims.size() - out_dims.size();
for (int i = 0; i < diff; ++i) {
reduce_dims.push_back(i);
}
for (int i = 0; i < out_dims.size(); ++i) {
if (out_dims[i] == 1 && in_dims[i + diff] != 1) {
reduce_dims.push_back(i + diff);
} else {
PADDLE_ENFORCE_EQ(
in_dims[i + diff],
out_dims[i],
phi::errors::InvalidArgument(
"ReduceDims dimension mismatch. Operands could "
"not be broadcast together with the shape of in_dims = [%s] and "
"the shape of out_dims = [%s]. Received [%d] in X is not equal "
"to "
"[%d] in Y at i:%d.",
in_dims,
out_dims,
in_dims[i + diff],
out_dims[i],
i));
}
}
return reduce_dims;
}
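The axis computation in `GetReduceDims` can be transliterated into a few lines of Python for reference (a sketch, not part of the PR): every leading axis the output lacks is reduced, plus every aligned axis where the output extent is 1 and the input extent is not; any other mismatch raises the equivalent of the `PADDLE_ENFORCE_EQ` above.

```python
def get_reduce_dims(in_shape, out_shape):
    # Python transliteration of phi::funcs::GetReduceDims.
    diff = len(in_shape) - len(out_shape)
    dims = list(range(diff))  # leading axes absent from out_shape
    for i, o in enumerate(out_shape):
        if o == 1 and in_shape[i + diff] != 1:
            dims.append(i + diff)  # broadcast axis: reduce it
        elif in_shape[i + diff] != o:
            raise ValueError(f"cannot reduce {in_shape} to {out_shape}")
    return dims
```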

} // namespace funcs
} // namespace phi
68 changes: 68 additions & 0 deletions paddle/phi/kernels/gpu/reduce_as_grad_kernel.cu
@@ -0,0 +1,68 @@
// Copyright (c) 2024 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "paddle/phi/kernels/reduce_as_grad_kernel.h"

#include "paddle/phi/backends/gpu/gpu_context.h"
#include "paddle/phi/core/kernel_registry.h"
#include "paddle/phi/kernels/funcs/reduce_function.h"
#include "paddle/phi/kernels/gpu/reduce_grad.h"

namespace phi {

template <typename T, typename Context>
void ReduceAsGradKernel(const Context& dev_ctx,
const DenseTensor& x,
const DenseTensor& target,
const DenseTensor& out_grad,
DenseTensor* x_grad) {
auto reduce_dim = phi::funcs::GetReduceDims(x, target);
bool reduce_all = recompute_reduce_all(x, reduce_dim);
auto update_dims = common::vectorize(x.dims());
for (auto i : reduce_dim) {
update_dims[i] = 1;
}

DenseTensor new_out_grad(out_grad.type());
new_out_grad.ShareDataWith(out_grad);
new_out_grad.Resize(common::make_ddim(update_dims));

dev_ctx.Alloc(x_grad, x.dtype());
using MPType = typename phi::dtype::MPTypeTrait<T>::Type;
phi::ReduceGrad<phi::kps::IdentityFunctor<T, MPType>>(
dev_ctx,
&new_out_grad,
x_grad,
out_grad.dtype(),
phi::kps::IdentityFunctor<T, MPType>());
}

} // namespace phi

PD_REGISTER_KERNEL(reduce_as_grad,
GPU,
ALL_LAYOUT,
phi::ReduceAsGradKernel,
bool,
float,
double,
phi::dtype::float16,
phi::dtype::bfloat16,
int16_t,
int,
int64_t,
uint8_t,
int8_t) {
kernel->OutputAt(0).SetDataType(phi::DataType::UNDEFINED);
}
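In both grad kernels the backward pass is a broadcast: since d(sum)/dx is 1 everywhere, `out_grad` (shaped like `target`) is simply expanded back to `x`'s shape. A NumPy sketch of that semantics — `reduce_as_grad_ref` is an illustrative name, not Paddle API:

```python
import numpy as np

def reduce_as_grad_ref(x: np.ndarray, out_grad: np.ndarray) -> np.ndarray:
    # Gradient of reduce_as: broadcast out_grad back over the reduced
    # axes of x (prepend size-1 axes, then broadcast to x.shape).
    lead = (1,) * (x.ndim - out_grad.ndim)
    return np.broadcast_to(out_grad.reshape(lead + out_grad.shape), x.shape).copy()
```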
48 changes: 48 additions & 0 deletions paddle/phi/kernels/gpu/reduce_as_kernel.cu
@@ -0,0 +1,48 @@
// Copyright (c) 2024 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "paddle/phi/kernels/reduce_as_kernel.h"

#include "paddle/phi/backends/gpu/gpu_context.h"
#include "paddle/phi/core/kernel_registry.h"
#include "paddle/phi/kernels/reduce_sum_kernel.h"

namespace phi {

template <typename T, typename Context>
void ReduceAsKernel(const Context& dev_ctx,
const DenseTensor& x,
const DenseTensor& target,
DenseTensor* out) {
auto reduce_dim = phi::funcs::GetReduceDims(x, target);
dev_ctx.template Alloc<T>(out);
phi::SumKernel<T, Context>(dev_ctx, x, reduce_dim, out->type(), false, out);
}

} // namespace phi

PD_REGISTER_KERNEL(reduce_as,
GPU,
ALL_LAYOUT,
phi::ReduceAsKernel,
bool,
float,
double,
phi::dtype::float16,
phi::dtype::bfloat16,
int16_t,
int,
int64_t,
uint8_t,
int8_t) {}
31 changes: 31 additions & 0 deletions paddle/phi/kernels/reduce_as_grad_kernel.h
@@ -0,0 +1,31 @@
// Copyright (c) 2024 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#pragma once

#include "paddle/phi/core/dense_tensor.h"
#include "paddle/phi/core/device_context.h"
#include "paddle/phi/kernels/funcs/common_shape.h"
#include "paddle/phi/kernels/funcs/reduce_functor.h"

namespace phi {

template <typename T, typename Context>
void ReduceAsGradKernel(const Context& dev_ctx,
const DenseTensor& x,
const DenseTensor& target,
const DenseTensor& out_grad,
DenseTensor* x_grad);

} // namespace phi
30 changes: 30 additions & 0 deletions paddle/phi/kernels/reduce_as_kernel.h
@@ -0,0 +1,30 @@
// Copyright (c) 2024 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#pragma once

#include "paddle/phi/core/dense_tensor.h"
#include "paddle/phi/core/device_context.h"
#include "paddle/phi/kernels/funcs/common_shape.h"
#include "paddle/phi/kernels/funcs/reduce_functor.h"

namespace phi {

template <typename T, typename Context>
void ReduceAsKernel(const Context& dev_ctx,
const DenseTensor& x,
const DenseTensor& target,
DenseTensor* out);

} // namespace phi