
move renorm op #44676

Merged (14 commits, Aug 2, 2022)

Conversation

seemingwang
Contributor

PR types

Function optimization

PR changes

Others

Describe

move renorm op
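
For context, renorm rescales each slice of a tensor along one axis so that its p-norm does not exceed a given max_norm. Below is a minimal standalone sketch of that computation (plain C++ with illustrative names; this is not the kernel code the PR moves):

```cpp
#include <cmath>
#include <vector>

// Renormalize each row of a row-major rows x cols matrix: any row whose
// p-norm exceeds max_norm is scaled so that its p-norm equals max_norm.
void RenormRows(std::vector<float>* data, int rows, int cols,
                float p, float max_norm) {
  for (int r = 0; r < rows; ++r) {
    float norm = 0.0f;
    for (int c = 0; c < cols; ++c) {
      norm += std::pow(std::fabs((*data)[r * cols + c]), p);
    }
    norm = std::pow(norm, 1.0f / p);
    if (norm > max_norm) {
      const float scale = max_norm / norm;  // shrink row back onto the p-norm ball
      for (int c = 0; c < cols; ++c) {
        (*data)[r * cols + c] *= scale;
      }
    }
  }
}
```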

paddle-bot bot commented Jul 27, 2022

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle-CI Manual for details.

Comment on lines 32 to 34
// #if defined(__NVCC__) || defined(__HIPCC__)
// #include "paddle/fluid/platform/device/gpu/gpu_primitives.h"
// #endif
Contributor

Please delete these unused comments.

Comment on lines 53 to 54
// auto& dev_ctx = ctx.template device_context<DeviceContext>();
// std::vector<int64_t> dim_index(dim_size, 0);
Contributor

These commented-out lines look like dead code; please delete them.

Comment on lines 58 to 59
// auto* out_data =
// out->mutable_data<T>(context.GetPlace(), size_t(numel * sizeof(T)));
Contributor

Same as above.

for (int i = 0; i < dim; i++) pre_mul *= input_dims[i];
pow_value.Resize(phi::make_ddim({pre_mul, dimension_each, dim_divisor}));
dim_value.Resize(phi::make_ddim({dimension_each}));
// pow_value.mutable_data<T>(context.GetPlace());
Contributor

Same as above.

Comment on lines 302 to 303
// out->Resize(phi::make_ddim(phi::vectorize(input_dims)));
// T* out_data = out->mutable_data<T>(context.GetPlace());
Contributor

Same as above.

Comment on lines 312 to 319
// std::vector<const framework::Tensor*> ins = {x};
// std::vector<framework::Tensor*> outs = {&pow_value};
// auto func = UnsignedPowFunctor<MT, T>(p);
// const auto& cuda_ctx =
// context.template device_context<platform::CUDADeviceContext>();

// paddle::operators::LaunchSameDimsElementwiseCudaKernel<T>(
// cuda_ctx, ins, &outs, func);
Contributor

Same as above.


#pragma once

#include "paddle/fluid/memory/buffer.h"
Contributor

Is this header actually used? If not, it can be removed.

Comment on lines +1755 to +1758
py_test_modules(test_renorm_op_without_eager MODULES test_renorm_op ENVS
FLAGS_enable_eager_mode=0)

set_tests_properties(test_renorm_op_without_eager PROPERTIES TIMEOUT 120)
Contributor

What is the reason for this change?

@@ -0,0 +1,390 @@
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
Contributor

As far as I can see, this file is only used by renorm, so it can be placed under impl/; funcs/ generally holds common functions shared across kernels.
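
To illustrate that convention (the file path and signature below are assumptions for illustration, not copied from this PR): a kernel-specific impl header under paddle/phi/kernels/impl/ typically just backs a phi kernel declaration, while funcs/ headers are included by several kernels.

```cpp
// Hypothetical sketch of a renorm-only helper header, e.g.
// paddle/phi/kernels/impl/renorm_impl.h; the exact signature is assumed.
#pragma once

#include "paddle/phi/core/dense_tensor.h"

namespace phi {

template <typename T, typename Context>
void RenormKernel(const Context& dev_ctx,
                  const DenseTensor& x,
                  float p,
                  int axis,
                  float max_norm,
                  DenseTensor* out);

}  // namespace phi
```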

Contributor

@XieYunshen left a comment

LGTM for set_tests_properties(test_renorm_op_without_eager PROPERTIES TIMEOUT 120)

@seemingwang seemingwang merged commit 669353c into PaddlePaddle:develop Aug 2, 2022