[Eager] Pylayer #39989
Merged

Commits (76):
8bf0344  Supported Complex2Real Conversion for Eager Dygraph (jim19930609)
10645f7  Supported Complex2Real Conversion for Eager Dygraph (jim19930609)
b360c23  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (jim19930609)
62c5d5e  Enabled complex type promotion test for matmul_v2 (jim19930609)
ea46995  pylayer, test=develop (wanghuancoder)
884dddb  Fix CI issues (jim19930609)
9f0bf2b  Merged develop branch (jim19930609)
753798e  Support initializing specific grad tensors to zero for selected opera… (jim19930609)
03c6f20  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (wanghuancoder)
530fa56  finish forward, test=develop (wanghuancoder)
24dbb6e  create grad node finish, test=develop (wanghuancoder)
d98e938  Merged adj_edges_ with GradSlotMeta (jim19930609)
4855da1  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (jim19930609)
1ded93a  Fixed monir issue (jim19930609)
e478404  Merge develop (jim19930609)
d07580e  backward finish, start dbg, test=develop (wanghuancoder)
bb5c5bc  Adjusted num runs (jim19930609)
e641d8b  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (jim19930609)
8d76a7e  fix some bug, and merge develop, test=develop (wanghuancoder)
3cb3c8a  Recovered Eager performance tests configurations (jim19930609)
9942837  Recovered Eager performance tests configurations (jim19930609)
96b3a42  finish, test=develop (wanghuancoder)
c7688d0  polish, test=develop (wanghuancoder)
59d0850  polish, test=develop (wanghuancoder)
b661be5  refine, test=develop (wanghuancoder)
0b3f6e5  eager, test=develop (wanghuancoder)
36f084b  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (wanghuancoder)
6e06997  Adjusted performance tests configurations (jim19930609)
489e146  Fixed Minor Issues with performance tests (jim19930609)
802a860  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (wanghuancoder)
428d455  merge pr 39878 (wanghuancoder)
c7b600e  [Phi] Fix macro name typo (Aurelius84)
d3e383b  Merge commit 'refs/pull/40204/head' of https://github.com/PaddlePaddl… (wanghuancoder)
2688122  support set_materialize_grads, test=develop (wanghuancoder)
c58de03  suppotr mark_non_differentiable, test=develop (wanghuancoder)
0dfbb39  support once_differentiable, test=develop (wanghuancoder)
fb00410  refine, test=develop (wanghuancoder)
1c86cec  Merge branch 'develop' into pylayer (wanghuancoder)
8534ec8  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (wanghuancoder)
e5eb8e1  refine, test=develop (wanghuancoder)
cc67f30  Merge branch 'support_complex' of https://github.com/jim19930609/Padd… (jim19930609)
489580e  Moved out Edge from GradSlotMeta (jim19930609)
96d0960  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (jim19930609)
a0a89db  Fixed issues from merge (jim19930609)
b8538de  Fixed typo (jim19930609)
27991c5  Merge branch 'support_complex' of https://github.com/jim19930609/Padd… (jim19930609)
a25d534  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (jim19930609)
ae44285  Addressed review comments (jim19930609)
303f06d  Fixed merge issues (jim19930609)
02efb72  Merge branch 'support_complex' of https://github.com/jim19930609/Padd… (jim19930609)
91dbbe3  Fixed minor issues (jim19930609)
bcb7137  Merge branch 'support_complex' of https://github.com/jim19930609/Padd… (jim19930609)
1410253  Fixed minor issue (jim19930609)
908a9a6  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (jim19930609)
a08c83d  merge pr39963, test=develop (wanghuancoder)
7559ccf  merge, test=develop (wanghuancoder)
8ba3c04  merge, test=develop (wanghuancoder)
970581c  refine, test=develop (wanghuancoder)
bca12a1  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (jim19930609)
3a7715c  refine, test=develop (wanghuancoder)
17aff34  refine, test=develop (wanghuancoder)
b8c311c  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (jim19930609)
ed54418  Fixed major issues and enabled auto_prune test cases (jim19930609)
4e31a54  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (jim19930609)
154fdd6  Fixed issues from merge (jim19930609)
7eb8252  Merged develop (jim19930609)
dcbd991  merge PR39963, test=develop (wanghuancoder)
d96f201  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (wanghuancoder)
f7fc963  refine, test=develop (wanghuancoder)
abc1eee  refine, test=develop (wanghuancoder)
7fed773  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (wanghuancoder)
7eee5f8  refine, test=develop (wanghuancoder)
73b946a  refine, test=develop (wanghuancoder)
2036eca  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (wanghuancoder)
9121bd3  refine, test=develop (wanghuancoder)
4444d85  Merge branch 'develop' into pylayer (wanghuancoder)
paddle/fluid/eager/grad_node_info.cc:

@@ -213,6 +213,49 @@ void GradNodeBase::SetGradInMeta(
  }
}

void GradNodeBase::SetGradInMeta(
    const std::vector<paddle::experimental::Tensor*>& fwd_out,
    size_t slot_rank) {
  size_t slot_size = fwd_out.size();
  PADDLE_ENFORCE_LE(
      slot_rank, (bwd_in_meta_.size() - 1),
      paddle::platform::errors::InvalidArgument(
          "Slot rank should be less than or equal to the size of "
          "bwd_in_meta_, since bwd_in_meta_ is designed to hold the same "
          "number of slots as backward inputs."));
  auto& metas = bwd_in_meta_.at(slot_rank);
  // Resize the meta vector up front to avoid repeated push_back
  metas.resize(slot_size);
  for (size_t i = 0; i < slot_size; i++) {
    auto& meta = metas[i];
    const auto& fwd_out_tensor = *fwd_out[i];
    auto* fwd_out_meta =
        egr::EagerUtils::nullable_autograd_meta(fwd_out_tensor);
    PADDLE_ENFORCE_NOT_NULL(fwd_out_meta,
                            paddle::platform::errors::PreconditionNotMet(
                                "SetGradInMeta should only be called when "
                                "autograd_meta is not null. If you hit this "
                                "error, it indicates a bug in the framework."));
    if (fwd_out_meta->StopGradient()) {
      // Only propagate stop_gradient when it is true; the default is
      // already false.
      meta.SetStopGradient(fwd_out_meta->StopGradient());
    }

    // Record TensorMeta
    if (phi::DenseTensor::classof(fwd_out_tensor.impl().get())) {
      // Only copy the meta, not the data
      phi::DenseTensor* dense_tensor =
          static_cast<phi::DenseTensor*>(fwd_out_tensor.impl().get());
      meta.SetTensorMeta(dense_tensor->meta());
      if (paddle::framework::IsComplexType(
              paddle::framework::TransToProtoVarType(dense_tensor->type()))) {
        need_complex_to_real_ = true;
      }
    }
  }
}
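For orientation, the user-visible behavior this stop_gradient recording supports can be sketched in Python; this is a minimal illustration and not part of the diff:

import paddle

x = paddle.to_tensor([1.0, 2.0])
x.stop_gradient = False           # x participates in autograd
y = paddle.to_tensor([3.0, 4.0])  # stop_gradient defaults to True

z = (x * y).sum()
z.backward()

print(x.grad)  # [3., 4.]: grad flows to x, whose meta records stop_gradient=False
print(y.grad)  # None: stop_gradient=True suppresses the gradient for y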
void GradNodeBase::SetGradOutMeta(const paddle::experimental::Tensor& fwd_in,
                                  size_t slot_rank) {
  auto* fwd_in_meta = egr::EagerUtils::nullable_autograd_meta(fwd_in);

@@ -300,6 +343,41 @@ void GradNodeBase::SetGradOutMeta(
  }
}

(Review thread on SetGradOutMeta) Reviewer: same. Author: I deleted it; the corresponding logic is now implemented in the PyLayer code.

void GradNodeBase::SetGradOutMeta(
    const std::vector<paddle::experimental::Tensor*>& fwd_in,
    size_t slot_rank) {
  size_t slot_size = fwd_in.size();
  PADDLE_ENFORCE_LE(
      slot_rank, (bwd_out_meta_.size() - 1),
      paddle::platform::errors::InvalidArgument(
          "Slot rank should be less than or equal to the size of "
          "bwd_out_meta_, since bwd_out_meta_ is designed to hold the same "
          "number of slots as backward outputs."));
  auto& metas = bwd_out_meta_.at(slot_rank);
  // Resize the meta vector up front to avoid repeated push_back
  metas.resize(slot_size);
  for (size_t i = 0; i < slot_size; i++) {
    const auto& fwd_in_tensor = *fwd_in[i];
    auto& meta = metas[i];
    auto* fwd_in_meta = egr::EagerUtils::nullable_autograd_meta(fwd_in_tensor);
    if (fwd_in_meta) {
      // Propagate stop_gradient from the forward input; the default is
      // already false.
      meta.SetStopGradient(fwd_in_meta->StopGradient());
    }

    // Record TensorMeta
    if (fwd_in_tensor.impl() && fwd_in_tensor.impl().get()) {
      if (phi::DenseTensor::classof(fwd_in_tensor.impl().get())) {
        // Only copy the meta, not the data
        phi::DenseTensor* dense_tensor =
            static_cast<phi::DenseTensor*>(fwd_in_tensor.impl().get());
        meta.SetTensorMeta(dense_tensor->meta());
      }
    }
  }
}
void GradNodeBase::SetDefaultGradInOutMeta() {
  PADDLE_ENFORCE((bwd_out_meta_.size() == 1) && (bwd_in_meta_.size() == 1),
                 paddle::platform::errors::PreconditionNotMet(
paddle/fluid/eager/pylayer/CMakeLists.txt (new file):

@@ -0,0 +1 @@
cc_library(py_layer_node SRCS py_layer_node.cc DEPS phi phi_api grad_node_info)
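For context, py_layer_node implements the backward node behind the eager-mode paddle.autograd.PyLayer API. A minimal sketch of the Python-side usage it serves (illustrative only, based on the documented public PyLayer API; not part of this diff):

import paddle
from paddle.autograd import PyLayer

class CusTanh(PyLayer):
    @staticmethod
    def forward(ctx, x):
        y = paddle.tanh(x)
        ctx.save_for_backward(y)  # stash tensors needed by backward
        return y

    @staticmethod
    def backward(ctx, dy):
        y, = ctx.saved_tensor()   # retrieve stashed tensors
        return dy * (1 - paddle.square(y))

x = paddle.randn([2, 3])
x.stop_gradient = False
y = CusTanh.apply(x)
y.sum().backward()   # GradNodePyLayer drives the call into CusTanh.backward
print(x.grad.shape)  # [2, 3]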
paddle/fluid/eager/pylayer/py_layer_node.cc (new file):

@@ -0,0 +1,159 @@
// Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "paddle/fluid/eager/pylayer/py_layer_node.h"
#include "paddle/fluid/eager/eager_tensor.h"

#include "paddle/phi/api/all.h"
#include "paddle/phi/core/dense_tensor.h"

#include "paddle/fluid/platform/device_context.h"
#include "paddle/fluid/platform/enforce.h"
#include "paddle/fluid/platform/errors.h"
#include "paddle/fluid/pybind/eager.h"
#include "paddle/fluid/pybind/eager_utils.h"

#include "glog/logging.h"
#pragma GCC diagnostic ignored "-Wattributes"
#include "pybind11/pytypes.h"

namespace egr {
std::vector<std::vector<paddle::experimental::Tensor>> GradNodePyLayer::
operator()(
    std::vector<std::vector<paddle::experimental::Tensor>>& grads,  // NOLINT
    bool create_graph) {
  VLOG(3) << "Running Eager Backward Node: " << name();

  std::vector<std::vector<paddle::experimental::Tensor>> hooked_grads =
      GradNodePyLayer::ApplyGradientHooks(grads);

  paddle::pybind::PyLayerObject* ctx =
      reinterpret_cast<paddle::pybind::PyLayerObject*>(ctx_);

  PADDLE_ENFORCE_EQ(ctx->forward_output_tensor_is_duplicable.size(),
                    grads.size(),
                    paddle::platform::errors::InvalidArgument(
                        "%s's grad input size(%s) must be equal to its "
                        "forward's output size(%s).",
                        name(), grads.size(),
                        ctx->forward_output_tensor_is_duplicable.size()));

  auto backward_args = PyTuple_New(grads.size());
  for (size_t i = 0; i < grads.size(); i++) {
    if (ctx->forward_output_tensor_is_duplicable[i]) {
      PyObject* pylist = PyList_New((Py_ssize_t)grads[i].size());
      for (size_t j = 0; j < grads[i].size(); j++) {
        if (ctx->materialize_grads && !grads[i][j].initialized()) {
          paddle::experimental::Tensor tensor_tmp;
          auto dense_tensor = std::make_shared<phi::DenseTensor>();
          dense_tensor->set_meta(forward_outputs_meta_[i][j]);
          tensor_tmp.set_impl(dense_tensor);
          PyList_SET_ITEM(
              pylist, static_cast<Py_ssize_t>(j),
              paddle::pybind::ToPyObject(paddle::experimental::zeros_like(
                  tensor_tmp, tensor_tmp.dtype(),
                  forward_outputs_place_[i][j])));
        } else {
          PyList_SET_ITEM(pylist, static_cast<Py_ssize_t>(j),
                          paddle::pybind::ToPyObject(grads[i][j], true));
        }
      }
      PyTuple_SET_ITEM(backward_args, i, pylist);
    } else {
      if (ctx->materialize_grads && !grads[i][0].initialized()) {
        paddle::experimental::Tensor tensor_tmp;
        auto dense_tensor = std::make_shared<phi::DenseTensor>();
        dense_tensor->set_meta(forward_outputs_meta_[i][0]);
        tensor_tmp.set_impl(dense_tensor);
        PyTuple_SET_ITEM(
            backward_args, i,
            paddle::pybind::ToPyObject(paddle::experimental::zeros_like(
                tensor_tmp, tensor_tmp.dtype(), forward_outputs_place_[i][0])));
      } else {
        PyTuple_SET_ITEM(backward_args, i,
                         paddle::pybind::ToPyObject(grads[i][0], true));
      }
    }
  }

  VLOG(6) << "PyLayer backward args are ready, begin calling user's backward "
             "function...";

  auto backward_fn =
      PyObject_GetAttrString(reinterpret_cast<PyObject*>(ctx), "backward");
  if (!backward_fn) {
    PADDLE_THROW(paddle::platform::errors::InvalidArgument(
        "Get backward function failed."));
  }
  auto outputs = PyObject_CallObject(backward_fn, backward_args);
  if (!outputs) {
    PADDLE_THROW(paddle::platform::errors::External(
        pybind11::detail::error_string().c_str()));
  }

  outputs_ = outputs;

  VLOG(6) << "PyLayer backward function finished...";

  PyObject* outputs_tuple = nullptr;
  if (PyTuple_Check(outputs)) {
    outputs_tuple = outputs;
  } else {
    outputs_tuple = PyTuple_New(1);
    Py_INCREF(outputs);
    PyTuple_SET_ITEM(outputs_tuple, 0, outputs);
  }

  size_t outputs_size = PyTuple_GET_SIZE(outputs_tuple);

  if (outputs_size > ctx->forward_input_tensor_is_duplicable.size()) {
    PADDLE_THROW(paddle::platform::errors::InvalidArgument(
        "The number of outputs of `PyLayer.backward` should be %d, but "
        "received %d.",
        ctx->forward_input_tensor_is_duplicable.size(), outputs_size));
  }

  std::vector<std::vector<paddle::experimental::Tensor>> grad_out;
  grad_out.reserve(ctx->forward_input_tensor_is_duplicable.size());
  for (size_t i = 0; i < ctx->forward_input_tensor_is_duplicable.size(); i++) {
    if (i < outputs_size) {
      PyObject* obj = PyTuple_GET_ITEM(outputs_tuple, i);
      if (this->OutputMeta()[i][0].IsStopGradient()) {
        PADDLE_ENFORCE_EQ(
            obj, Py_None,
            paddle::platform::errors::InvalidArgument(
                "%s's backward function should return None at position %d, "
                "because the corresponding forward Tensor's stop_gradient "
                "is true.",
                name(), i));
        grad_out.push_back({});
      } else {
        if (ctx->forward_input_tensor_is_duplicable[i]) {
          grad_out.push_back(paddle::pybind::GetTensorListFromPyObject(obj));
        } else {
          grad_out.push_back({paddle::pybind::GetTensorFromPyObject(obj)});
        }
      }
    } else {
      PADDLE_ENFORCE_EQ(
          this->OutputMeta()[i][0].IsStopGradient(), true,
          paddle::platform::errors::InvalidArgument(
              "%s's backward function should not return empty at position "
              "%d.",
              name(), i));
      grad_out.push_back({});
    }
  }

  return grad_out;
}
}  // namespace egr
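The stop_gradient checks above imply a contract for user code: backward must return None in any slot whose corresponding forward input had stop_gradient=True. A hedged sketch consistent with those checks (class and variable names are illustrative):

import paddle
from paddle.autograd import PyLayer

class MulByConst(PyLayer):
    @staticmethod
    def forward(ctx, x, c):
        ctx.save_for_backward(c)
        return x * c

    @staticmethod
    def backward(ctx, dy):
        c, = ctx.saved_tensor()
        # c keeps stop_gradient=True, so its grad slot must be None,
        # matching the IsStopGradient() enforcement in operator() above.
        return dy * c, None

x = paddle.ones([3])
x.stop_gradient = False
c = paddle.to_tensor(2.0)  # stop_gradient stays True
y = MulByConst.apply(x, c)
y.sum().backward()
print(x.grad)  # [2., 2., 2.]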
paddle/fluid/eager/pylayer/py_layer_node.h (new file):

@@ -0,0 +1,82 @@
// Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#pragma once

#include <Python.h>

#include "paddle/fluid/eager/autograd_meta.h"
#include "paddle/fluid/eager/grad_node_info.h"
#include "paddle/fluid/eager/hooks.h"
#include "paddle/phi/core/compat/convert_utils.h"
#include "paddle/phi/core/tensor_meta.h"

namespace egr {

class GradNodePyLayer : public GradNodeBase {
 public:
  GradNodePyLayer(PyObject* ctx, size_t bwd_in_slot_num,
                  size_t bwd_out_slot_num)
      : GradNodeBase(bwd_in_slot_num, bwd_out_slot_num) {
    ctx_ = ctx;
  }

  ~GradNodePyLayer() override { Py_DECREF(ctx_); }

  virtual std::vector<std::vector<paddle::experimental::Tensor>> operator()(
      std::vector<std::vector<paddle::experimental::Tensor>>& grads,  // NOLINT
      bool create_graph = false) override;

  void ClearTensorWrappers() override { VLOG(6) << "Do nothing here now"; }

  bool IsTensorWrappersCleared() override {
    VLOG(6) << "Do nothing here now";
    return false;
  }

  std::string name() {
    return "GradNodePyLayer_" + std::string(Py_TYPE(ctx_)->tp_name);
  }

  // for paddle.grad to fetch the backward results
  PyObject* GetMutableOutputs() { return outputs_; }

  void SaveForwardOutputsMeta(
      const std::vector<std::vector<paddle::experimental::Tensor*>>&
          outputs_tensor) {
    forward_outputs_meta_.resize(outputs_tensor.size());
    forward_outputs_place_.resize(outputs_tensor.size());
    for (size_t i = 0; i < outputs_tensor.size(); i++) {
      forward_outputs_meta_[i].reserve(outputs_tensor[i].size());
      forward_outputs_place_[i].reserve(outputs_tensor[i].size());
      for (auto tensor : outputs_tensor[i]) {
        if (tensor->is_dense_tensor()) {
          forward_outputs_meta_[i].push_back(
              static_cast<phi::DenseTensor*>(tensor->impl().get())->meta());
        } else {
          forward_outputs_meta_[i].emplace_back();
        }
        forward_outputs_place_[i].emplace_back(tensor->inner_place());
      }
    }
  }

 private:
  PyObject* ctx_{nullptr};
  PyObject* outputs_{nullptr};
  std::vector<std::vector<phi::DenseTensorMeta>> forward_outputs_meta_;
  std::vector<std::vector<paddle::platform::Place>> forward_outputs_place_;
};

}  // namespace egr
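SaveForwardOutputsMeta records each forward output's DenseTensorMeta and place so that operator() can materialize zero gradients with the right shape, dtype, and device for outputs that received no incoming gradient. A rough illustration, assuming materialize_grads defaults to on as the ctx->materialize_grads branch in operator() suggests:

import paddle
from paddle.autograd import PyLayer

class TwoOut(PyLayer):
    @staticmethod
    def forward(ctx, x):
        return x + 1.0, x * 2.0

    @staticmethod
    def backward(ctx, dy1, dy2):
        # Only the first output feeds the loss below; dy2 still arrives
        # as zeros, built from the meta and place saved at forward time.
        return dy1 + dy2

x = paddle.ones([2])
x.stop_gradient = False
y1, y2 = TwoOut.apply(x)
y1.sum().backward()
print(x.grad)  # [1., 1.]: dy2 was materialized as zeros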
Review comments:
Reviewer: maybe rename it to avoid misusing it.
Author: I deleted it; the corresponding logic is now implemented in the PyLayer code.