[Eager] Pylayer #39989

Merged: 76 commits, Mar 30, 2022
76 commits
8bf0344
Supported Complex2Real Conversion for Eager Dygraph
jim19930609 Feb 24, 2022
10645f7
Supported Complex2Real Conversion for Eager Dygraph
jim19930609 Feb 24, 2022
b360c23
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
jim19930609 Feb 24, 2022
62c5d5e
Enabled complex type promotion test for matmul_v2
jim19930609 Feb 24, 2022
ea46995
pylayer, test=develop
wanghuancoder Feb 25, 2022
884dddb
Fix CI issues
jim19930609 Feb 25, 2022
9f0bf2b
Merged develop branch
jim19930609 Feb 26, 2022
753798e
Support initializing specific grad tensors to zero for selected opera…
jim19930609 Feb 27, 2022
03c6f20
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
wanghuancoder Feb 28, 2022
530fa56
finish forward, test=develop
wanghuancoder Mar 1, 2022
24dbb6e
create grad node finish, test=develop
wanghuancoder Mar 1, 2022
d98e938
Merged adj_edges_ with GradSlotMeta
jim19930609 Mar 2, 2022
4855da1
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
jim19930609 Mar 2, 2022
1ded93a
Fixed minor issue
jim19930609 Mar 2, 2022
e478404
Merge develop
jim19930609 Mar 3, 2022
d07580e
backward finish, start dbg, test=develop
wanghuancoder Mar 3, 2022
bb5c5bc
Adjusted num runs
jim19930609 Mar 3, 2022
e641d8b
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
jim19930609 Mar 3, 2022
8d76a7e
fix some bug, and merge develop, test=develop
wanghuancoder Mar 3, 2022
3cb3c8a
Recovered Eager performance tests configurations
jim19930609 Mar 3, 2022
9942837
Recovered Eager performance tests configurations
jim19930609 Mar 3, 2022
96b3a42
finish, test=develop
wanghuancoder Mar 4, 2022
c7688d0
polish, test=develop
wanghuancoder Mar 4, 2022
59d0850
polish, test=develop
wanghuancoder Mar 4, 2022
b661be5
refine, test=develop
wanghuancoder Mar 4, 2022
0b3f6e5
eager, test=develop
wanghuancoder Mar 4, 2022
36f084b
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
wanghuancoder Mar 4, 2022
6e06997
Adjusted performance tests configurations
jim19930609 Mar 7, 2022
489e146
Fixed Minor Issues with performance tests
jim19930609 Mar 5, 2022
802a860
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
wanghuancoder Mar 7, 2022
428d455
merge pr 39878
wanghuancoder Mar 7, 2022
c7b600e
[Phi] Fix macro name typo
Aurelius84 Mar 7, 2022
d3e383b
Merge commit 'refs/pull/40204/head' of https://github.com/PaddlePaddl…
wanghuancoder Mar 7, 2022
2688122
support set_materialize_grads, test=develop
wanghuancoder Mar 7, 2022
c58de03
support mark_non_differentiable, test=develop
wanghuancoder Mar 8, 2022
0dfbb39
support once_differentiable, test=develop
wanghuancoder Mar 8, 2022
fb00410
refine, test=develop
wanghuancoder Mar 8, 2022
1c86cec
Merge branch 'develop' into pylayer
wanghuancoder Mar 10, 2022
8534ec8
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
wanghuancoder Mar 10, 2022
e5eb8e1
refine, test=develop
wanghuancoder Mar 10, 2022
cc67f30
Merge branch 'support_complex' of https://github.com/jim19930609/Padd…
jim19930609 Mar 15, 2022
489580e
Moved out Edge from GradSlotMeta
jim19930609 Mar 15, 2022
96d0960
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
jim19930609 Mar 16, 2022
a0a89db
Fixed issues from merge
jim19930609 Mar 16, 2022
b8538de
Fixed typo
jim19930609 Mar 16, 2022
27991c5
Merge branch 'support_complex' of https://github.com/jim19930609/Padd…
jim19930609 Mar 16, 2022
a25d534
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
jim19930609 Mar 17, 2022
ae44285
Addressed review comments
jim19930609 Mar 17, 2022
303f06d
Fixed merge issues
jim19930609 Mar 17, 2022
02efb72
Merge branch 'support_complex' of https://github.com/jim19930609/Padd…
jim19930609 Mar 17, 2022
91dbbe3
Fixed minor issues
jim19930609 Mar 17, 2022
bcb7137
Merge branch 'support_complex' of https://github.com/jim19930609/Padd…
jim19930609 Mar 17, 2022
1410253
Fixed minor issue
jim19930609 Mar 18, 2022
908a9a6
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
jim19930609 Mar 18, 2022
a08c83d
merge pr39963, test=develop
wanghuancoder Mar 21, 2022
7559ccf
merge, test=develop
wanghuancoder Mar 21, 2022
8ba3c04
merge, test=develop
wanghuancoder Mar 21, 2022
970581c
refine, test=develop
wanghuancoder Mar 21, 2022
bca12a1
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
jim19930609 Mar 21, 2022
3a7715c
refine, test=develop
wanghuancoder Mar 21, 2022
17aff34
refine, test=develop
wanghuancoder Mar 21, 2022
b8c311c
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
jim19930609 Mar 21, 2022
ed54418
Fixed major issues and enabled auto_prune test cases
jim19930609 Mar 22, 2022
4e31a54
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
jim19930609 Mar 22, 2022
154fdd6
Fixed issues from merge
jim19930609 Mar 22, 2022
7eb8252
Merged develop
jim19930609 Mar 22, 2022
dcbd991
merge PR39963, test=develop
wanghuancoder Mar 23, 2022
d96f201
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
wanghuancoder Mar 23, 2022
f7fc963
refine, test=develop
wanghuancoder Mar 23, 2022
abc1eee
refine, test=develop
wanghuancoder Mar 23, 2022
7fed773
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
wanghuancoder Mar 23, 2022
7eee5f8
refine, test=develop
wanghuancoder Mar 23, 2022
73b946a
refine, test=develop
wanghuancoder Mar 25, 2022
2036eca
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
wanghuancoder Mar 28, 2022
9121bd3
refine, test=develop
wanghuancoder Mar 28, 2022
4444d85
Merge branch 'develop' into pylayer
wanghuancoder Mar 28, 2022
5 changes: 3 additions & 2 deletions paddle/fluid/eager/CMakeLists.txt
@@ -11,8 +11,9 @@ endif()
add_subdirectory(api)
add_subdirectory(accumulation)
add_subdirectory(custom_operator)


if(NOT ((NOT WITH_PYTHON) AND ON_INFER))
add_subdirectory(pylayer)
endif()
cc_library(grad_node_info SRCS grad_node_info.cc DEPS phi_api phi_tensor)
cc_library(grad_tensor_holder SRCS grad_tensor_holder.cc DEPS grad_node_info gradient_accumulator)

1 change: 0 additions & 1 deletion paddle/fluid/eager/grad_node_info.h
@@ -147,7 +147,6 @@ class GradNodeBase {
size_t slot_rank);
void SetGradOutMeta(const paddle::experimental::Tensor& fwd_in,
size_t slot_rank);

/**
* Default setters for Grad in/out meta; these should be used for some special
* Nodes which will not be created by users
1 change: 1 addition & 0 deletions paddle/fluid/eager/pylayer/CMakeLists.txt
@@ -0,0 +1 @@
cc_library(py_layer_node SRCS py_layer_node.cc DEPS phi phi_api grad_node_info)
159 changes: 159 additions & 0 deletions paddle/fluid/eager/pylayer/py_layer_node.cc
@@ -0,0 +1,159 @@
// Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "paddle/fluid/eager/pylayer/py_layer_node.h"
#include "paddle/fluid/eager/eager_tensor.h"

#include "paddle/phi/api/all.h"
#include "paddle/phi/core/dense_tensor.h"

#include "paddle/fluid/platform/device_context.h"
#include "paddle/fluid/platform/enforce.h"
#include "paddle/fluid/platform/errors.h"
#include "paddle/fluid/pybind/eager.h"
#include "paddle/fluid/pybind/eager_utils.h"

#include "glog/logging.h"
#pragma GCC diagnostic ignored "-Wattributes"
#include "pybind11/pytypes.h"

namespace egr {
std::vector<std::vector<paddle::experimental::Tensor>> GradNodePyLayer::
operator()(
std::vector<std::vector<paddle::experimental::Tensor>>& grads, // NOLINT
bool create_graph) {
VLOG(3) << "Running Eager Backward Node: " << name();

std::vector<std::vector<paddle::experimental::Tensor>> hooked_grads =
GradNodePyLayer::ApplyGradientHooks(grads);

paddle::pybind::PyLayerObject* ctx =
reinterpret_cast<paddle::pybind::PyLayerObject*>(ctx_);

PADDLE_ENFORCE_EQ(ctx->forward_output_tensor_is_duplicable.size(),
grads.size(),
paddle::platform::errors::InvalidArgument(
"%s's grad input size(%s) mast be equal with it's "
"forward's output size(%s).",
name(), grads.size(),
ctx->forward_output_tensor_is_duplicable.size()));

auto backward_args = PyTuple_New(grads.size());
for (size_t i = 0; i < grads.size(); i++) {
if (ctx->forward_output_tensor_is_duplicable[i]) {
PyObject* pylist = PyList_New((Py_ssize_t)grads[i].size());
for (size_t j = 0; j < grads[i].size(); j++) {
if (ctx->materialize_grads && !grads[i][j].initialized()) {
paddle::experimental::Tensor tensor_tmp;
auto dense_tensor = std::make_shared<phi::DenseTensor>();
dense_tensor->set_meta(forward_outputs_meta_[i][j]);
tensor_tmp.set_impl(dense_tensor);
PyList_SET_ITEM(
pylist, static_cast<Py_ssize_t>(j),
paddle::pybind::ToPyObject(paddle::experimental::zeros_like(
tensor_tmp, tensor_tmp.dtype(),
forward_outputs_place_[i][j])));
} else {
PyList_SET_ITEM(pylist, static_cast<Py_ssize_t>(j),
paddle::pybind::ToPyObject(grads[i][j], true));
}
}
PyTuple_SET_ITEM(backward_args, i, pylist);
} else {
if (ctx->materialize_grads && !grads[i][0].initialized()) {
paddle::experimental::Tensor tensor_tmp;
auto dense_tensor = std::make_shared<phi::DenseTensor>();
dense_tensor->set_meta(forward_outputs_meta_[i][0]);
tensor_tmp.set_impl(dense_tensor);
PyTuple_SET_ITEM(
backward_args, i,
paddle::pybind::ToPyObject(paddle::experimental::zeros_like(
tensor_tmp, tensor_tmp.dtype(), forward_outputs_place_[i][0])));
} else {
PyTuple_SET_ITEM(backward_args, i,
paddle::pybind::ToPyObject(grads[i][0], true));
}
}
}

VLOG(6) << "PyLayer backward args is ready, begin call user's backward "
"function...";

auto backward_fn =
PyObject_GetAttrString(reinterpret_cast<PyObject*>(ctx), "backward");
if (!backward_fn) {
PADDLE_THROW(paddle::platform::errors::InvalidArgument(
"Get backward function faild."));
}
auto outputs = PyObject_CallObject(backward_fn, backward_args);
if (!outputs) {
PADDLE_THROW(paddle::platform::errors::External(
pybind11::detail::error_string().c_str()));
}

outputs_ = outputs;

VLOG(6) << "PyLayer backward function finish...";

PyObject* outputs_tuple = nullptr;
if (PyTuple_Check(outputs)) {
outputs_tuple = outputs;
} else {
outputs_tuple = PyTuple_New(1);
Py_INCREF(outputs);
PyTuple_SET_ITEM(outputs_tuple, 0, outputs);
}

size_t outputs_size = PyTuple_GET_SIZE(outputs_tuple);

if (outputs_size > ctx->forward_input_tensor_is_duplicable.size()) {
PADDLE_THROW(paddle::platform::errors::InvalidArgument(
"The number of outputs of `PyLayer.backward` should be %d, but "
"received %d.",
ctx->forward_input_tensor_is_duplicable.size(), outputs_size));
}

std::vector<std::vector<paddle::experimental::Tensor>> grad_out;
grad_out.reserve(ctx->forward_input_tensor_is_duplicable.size());
for (size_t i = 0; i < ctx->forward_input_tensor_is_duplicable.size(); i++) {
if (i < outputs_size) {
PyObject* obj = PyTuple_GET_ITEM(outputs_tuple, i);
if (this->OutputMeta()[i][0].IsStopGradient()) {
PADDLE_ENFORCE_EQ(
obj, Py_None,
paddle::platform::errors::InvalidArgument(
"%s's backward function should return None at %d position, "
"because it's forward Tensor's stopgradient is true.",
name(), i));
grad_out.push_back({});
} else {
if (ctx->forward_input_tensor_is_duplicable[i]) {
grad_out.push_back(paddle::pybind::GetTensorListFromPyObject(obj));
} else {
grad_out.push_back({paddle::pybind::GetTensorFromPyObject(obj)});
}
}
} else {
PADDLE_ENFORCE_EQ(
this->OutputMeta()[i][0].IsStopGradient(), true,
paddle::platform::errors::InvalidArgument(
"%s's backward function should not return empyt at %d position.",
name(), i));
grad_out.push_back({});
}
}

return grad_out;
}
} // namespace egr
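
For context, GradNodePyLayer::operator() above is the C++ half of the eager PyLayer mechanism: it packs the incoming grads into a Python tuple and calls the user-defined backward on the stored context object. The following is a minimal sketch of the user-facing side, assuming the public paddle.autograd.PyLayer interface (forward/backward staticmethods, ctx.save_for_backward, ctx.saved_tensor, and PyLayer.apply); it is illustrative and not code from this PR.

import paddle
from paddle.autograd import PyLayer

class Tanh(PyLayer):
    @staticmethod
    def forward(ctx, x):
        y = paddle.tanh(x)
        ctx.save_for_backward(y)  # stash tensors needed by backward
        return y

    @staticmethod
    def backward(ctx, dy):
        # Invoked from GradNodePyLayer::operator() during backprop;
        # must return one grad per differentiable forward input.
        y, = ctx.saved_tensor()
        return dy * (1.0 - paddle.square(y))

x = paddle.randn([4])
x.stop_gradient = False
y = Tanh.apply(x)
y.sum().backward()
print(x.grad)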
82 changes: 82 additions & 0 deletions paddle/fluid/eager/pylayer/py_layer_node.h
@@ -0,0 +1,82 @@
// Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#pragma once

#include <Python.h>

#include "paddle/fluid/eager/autograd_meta.h"
#include "paddle/fluid/eager/grad_node_info.h"
#include "paddle/fluid/eager/hooks.h"
#include "paddle/phi/core/compat/convert_utils.h"
#include "paddle/phi/core/tensor_meta.h"

namespace egr {

class GradNodePyLayer : public GradNodeBase {
public:
GradNodePyLayer(PyObject* ctx, size_t bwd_in_slot_num,
size_t bwd_out_slot_num)
: GradNodeBase(bwd_in_slot_num, bwd_out_slot_num) {
ctx_ = ctx;
}

~GradNodePyLayer() override { Py_DECREF(ctx_); };

virtual std::vector<std::vector<paddle::experimental::Tensor>> operator()(
std::vector<std::vector<paddle::experimental::Tensor>>& grads, // NOLINT
bool create_graph = false) override;

void ClearTensorWrappers() override { VLOG(6) << "Do nothing here now"; }

bool IsTensorWrappersCleared() override {
VLOG(6) << "Do nothing here now";
return false;
}

std::string name() {
return "GradNodePyLayer_" + std::string(Py_TYPE(ctx_)->tp_name);
}

// used by paddle.grad to get the result
PyObject* GetMutableOutputs() { return outputs_; }

void SaveForwardOutputsMeta(
const std::vector<std::vector<paddle::experimental::Tensor*>>&
outputs_tensor) {
forward_outputs_meta_.resize(outputs_tensor.size());
forward_outputs_place_.resize(outputs_tensor.size());
for (size_t i = 0; i < outputs_tensor.size(); i++) {
forward_outputs_meta_[i].reserve(outputs_tensor[i].size());
forward_outputs_place_[i].reserve(outputs_tensor[i].size());
for (auto tensor : outputs_tensor[i]) {
if (tensor->is_dense_tensor()) {
forward_outputs_meta_[i].push_back(
static_cast<phi::DenseTensor*>(tensor->impl().get())->meta());
} else {
forward_outputs_meta_[i].emplace_back();
}
forward_outputs_place_[i].emplace_back(tensor->inner_place());
}
}
}

private:
PyObject* ctx_{nullptr};
PyObject* outputs_{nullptr};
std::vector<std::vector<phi::DenseTensorMeta>> forward_outputs_meta_;
std::vector<std::vector<paddle::platform::Place>> forward_outputs_place_;
};

} // namespace egr
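
The forward_outputs_meta_ and forward_outputs_place_ caches let operator() materialize a zeros_like tensor with the matching shape, dtype, and place whenever an incoming grad is uninitialized and ctx->materialize_grads is set. On the Python side these behaviors are driven through the context methods named in the commit log (set_materialize_grads, mark_non_differentiable). A hedged sketch of typical usage, assuming the paddle.autograd.PyLayer / PyLayerContext API rather than quoting this PR:

import paddle
from paddle.autograd import PyLayer

class Double(PyLayer):
    @staticmethod
    def forward(ctx, x):
        # With materialize_grads off, uninitialized incoming grads are passed
        # to backward as None rather than zero-filled tensors.
        ctx.set_materialize_grads(False)
        y = x * 2.0
        mask = paddle.cast(x > 0, 'float32')
        ctx.mark_non_differentiable(mask)  # no gradient is expected for this output
        return y, mask

    @staticmethod
    def backward(ctx, dy, dmask):
        # One grad slot arrives per forward output; dmask belongs to the
        # non-differentiable mask and can be ignored.
        return dy * 2.0

x = paddle.arange(4, dtype='float32')
x.stop_gradient = False
y, mask = Double.apply(x)
y.sum().backward()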
33 changes: 33 additions & 0 deletions paddle/fluid/eager/utils.cc
@@ -90,6 +90,16 @@ std::vector<AutogradMeta*> EagerUtils::nullable_autograd_meta(
return metas;
}

std::vector<AutogradMeta*> EagerUtils::nullable_autograd_meta(
const std::vector<paddle::experimental::Tensor*>& targets) {
std::vector<AutogradMeta*> metas;
metas.reserve(targets.size());
for (const paddle::experimental::Tensor* t : targets) {
metas.emplace_back(nullable_autograd_meta(*t));
}
return metas;
}

std::vector<AutogradMeta*> EagerUtils::autograd_meta(
std::vector<paddle::experimental::Tensor>* targets) {
std::vector<AutogradMeta*> ret;
@@ -103,6 +113,19 @@ std::vector<AutogradMeta*> EagerUtils::autograd_meta(
return ret;
}

std::vector<AutogradMeta*> EagerUtils::autograd_meta(
std::vector<paddle::experimental::Tensor*>* targets) {
std::vector<AutogradMeta*> ret;
ret.reserve(targets->size());

// for autograd_meta we can tolerate nullptr.
for (size_t i = 0; i < targets->size(); i++) {
auto* p_autograd_meta = autograd_meta((*targets)[i]);
ret.emplace_back(p_autograd_meta);
}
return ret;
}

std::pair<size_t, size_t> EagerUtils::OutRankInfo(
const paddle::experimental::Tensor& target) {
return unsafe_autograd_meta(target)->OutRankInfo();
@@ -380,6 +403,16 @@ void EagerUtils::CheckAndRetainGrad(
}
}

void EagerUtils::CheckAndRetainGrad(
const std::vector<paddle::experimental::Tensor*>& tensors) {
if (FLAGS_retain_grad_for_all_tensor) {
for (auto& tensor : tensors) {
VLOG(6) << "RetainGradForTensor: " << tensor->name();
egr::egr_utils_api::RetainGradForTensor(*tensor);
}
}
}

std::shared_ptr<egr::GradNodeBase> EagerUtils::GetGradAccumulationNode(
const paddle::experimental::Tensor& tensor) {
auto* autograd_ptr = nullable_autograd_meta(tensor);
7 changes: 7 additions & 0 deletions paddle/fluid/eager/utils.h
@@ -98,6 +98,9 @@ class EagerUtils {
static std::vector<AutogradMeta*> autograd_meta(
std::vector<paddle::experimental::Tensor>* targets);

static std::vector<AutogradMeta*> autograd_meta(
std::vector<paddle::experimental::Tensor*>* targets);

static std::pair<size_t, size_t> OutRankInfo(
const paddle::experimental::Tensor& target);

@@ -125,6 +128,8 @@
paddle::optional<const paddle::experimental::Tensor&> target);
static std::vector<AutogradMeta*> nullable_autograd_meta(
const std::vector<paddle::experimental::Tensor>& targets);
static std::vector<AutogradMeta*> nullable_autograd_meta(
const std::vector<paddle::experimental::Tensor*>& targets);
static AutogradMeta* unsafe_autograd_meta(
const paddle::experimental::Tensor& target);
static std::vector<AutogradMeta*> unsafe_autograd_meta(
@@ -220,6 +225,8 @@
static void CheckAndRetainGrad(const paddle::experimental::Tensor& tensor);
static void CheckAndRetainGrad(
const std::vector<paddle::experimental::Tensor>& tensors);
static void CheckAndRetainGrad(
const std::vector<paddle::experimental::Tensor*>& tensors);
static std::shared_ptr<egr::GradNodeBase> GetGradAccumulationNode(
const paddle::experimental::Tensor& tensor);

4 changes: 2 additions & 2 deletions paddle/fluid/pybind/CMakeLists.txt
@@ -350,8 +350,8 @@ if(WITH_PYTHON)

if(NOT ((NOT WITH_PYTHON) AND ON_INFER))
cc_library(paddle_eager
SRCS eager.cc eager_functions.cc eager_method.cc eager_properties.cc eager_utils.cc
DEPS eager_api autograd_meta backward grad_node_info phi op_function_common final_dygraph_function final_dygraph_node dygraph_function dygraph_node accumulation_node global_utils utils python custom_operator custom_operator_node)
SRCS eager.cc eager_functions.cc eager_method.cc eager_properties.cc eager_utils.cc eager_py_layer.cc
DEPS eager_api autograd_meta backward grad_node_info phi op_function_common final_dygraph_function final_dygraph_node dygraph_function dygraph_node accumulation_node py_layer_node global_utils utils python custom_operator custom_operator_node)
add_dependencies(paddle_eager eager_codegen)
add_dependencies(paddle_eager eager_op_function_generator_cmd)
list(APPEND PYBIND_DEPS paddle_eager)
1 change: 1 addition & 0 deletions paddle/fluid/pybind/eager.cc
@@ -753,6 +753,7 @@ void BindEager(pybind11::module* module) {
}

BindFunctions(m.ptr());
BindEagerPyLayer(m.ptr());
BindEagerOpFunctions(&m);
}

20 changes: 20 additions & 0 deletions paddle/fluid/pybind/eager.h
@@ -14,11 +14,31 @@ limitations under the License. */
#include "pybind11/pybind11.h"
#include "pybind11/stl.h"

#include "paddle/fluid/eager/pylayer/py_layer_node.h"
#include "paddle/phi/core/dense_tensor.h"

namespace paddle {
namespace pybind {

typedef struct {
PyObject_HEAD paddle::experimental::Tensor tensor;
} TensorObject;

typedef struct {
PyObject_HEAD

PyObject* container;
PyObject* non_differentiable;
PyObject* dirty_tensors;
bool materialize_grads;
std::vector<bool> forward_input_tensor_is_duplicable;
std::vector<bool> forward_output_tensor_is_duplicable;
std::weak_ptr<egr::GradNodePyLayer> grad_node;
} PyLayerObject;

void BindEager(pybind11::module* m);
void BindFunctions(PyObject* module);
void BindEagerPyLayer(PyObject* module);

} // namespace pybind
} // namespace paddle