
[Unify Tensors PR #3]Port framework::Tensor members & interfaces to pten::DenseTensor, test=allcases #38473

Merged

Conversation

jim19930609 (Contributor)

PR types

New features

PR changes

Others

Describe

Port framework::Tensor members & interfaces to pten::DenseTensor

@paddle-bot-old

Thanks for your contribution!
Please wait for the result of CI first. See Paddle CI Manual for details.

meta_.offset = 0;
}

DenseTensor::DenseTensor(const paddle::framework::proto::VarType::Type& dtype) {
@Shixiaowei02 (Contributor) commented on Dec 28, 2021:

It is suggested not to use a third-party library's type here.

jim19930609 (Contributor, Author) replied:

Temporarily keeping the dependency on protobuf; the cleanup will be completed before the end of January. Already tracked in the TrickList.
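The decoupling suggested above can be sketched as translating the third-party (protobuf) dtype enum into an internal data type at the API boundary, so that core tensor headers no longer include proto types. The names below (`ProtoVarType`, `DataType`, `TransToInternalDType`) are illustrative stand-ins, not the actual Paddle API:

```cpp
// Hypothetical sketch: map the protobuf-generated dtype enum to an
// internal enum at the boundary, keeping proto types out of core headers.
enum class ProtoVarType { FP32 = 5, INT64 = 3 };  // stand-in for the proto enum
enum class DataType { UNDEFINED, FLOAT32, INT64 };

DataType TransToInternalDType(ProtoVarType t) {
  switch (t) {
    case ProtoVarType::FP32:
      return DataType::FLOAT32;
    case ProtoVarType::INT64:
      return DataType::INT64;
    default:
      return DataType::UNDEFINED;  // unknown proto value
  }
}
```

With such a translation layer, only the conversion function's translation unit needs the protobuf headers.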

return Split(split_size, axis);
}

void* DenseTensor::mutable_data(const paddle::platform::Place& place,
Contributor:

Same as above.


DenseTensor();

explicit DenseTensor(const paddle::framework::proto::VarType::Type& dtype);
Contributor:

Same as above.

jim19930609 (Contributor, Author) replied:

Temporarily keeping the dependency on protobuf; the cleanup will be completed before the end of January. Already tracked in the TrickList.


Will be adjusted/removed/moved in the near future
*/
public:
Contributor:

Please remove the EigenTensor / EigenMatrix / EigenVector friend classes in a follow-up change.

jim19930609 (Contributor, Author) replied:

Removed.


std::vector<DenseTensor> Chunk(int64_t chunks, int64_t axis) const;

paddle::framework::proto::VarType::Type type() const;
Contributor:

Same as above.

jim19930609 (Contributor, Author) replied:

Temporarily keeping the dependency on protobuf; the cleanup will be completed before the end of January. Already tracked in the TrickList.

@jim19930609 jim19930609 changed the title [Unify Tensors PR #3]Port framework::Tensor members & interfaces to pten::DenseTensor [Unify Tensors PR #3]Port framework::Tensor members & interfaces to pten::DenseTensor, test=allcases Dec 30, 2021
meta_.dims = dims;
mutable_data();
if (storage_ != nullptr) {
Contributor:

If a DenseTensor is default-constructed here and Resize is then called, doesn't the call have no actual effect, and without any notice? Suggest adding an error message for this case in a follow-up PR.

jim19930609 (Contributor, Author) replied:

A comment has been added to the definition of Resize. Essentially, this temporarily supports both usages:

  1. pten usage: the DenseTensor is initialized with an Allocator, so Resize not only updates dims but also allocates memory directly
  2. fluid usage: the DenseTensor is default-constructed and the place is unknown, so allocation is deferred until mutable_data(place) is called
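The two usages described above can be sketched with a simplified, hypothetical tensor class (the real pten::DenseTensor is far more involved; `SketchTensor` and its members are illustrative only):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified stand-in for a tensor with the two Resize behaviors.
struct SketchTensor {
  std::vector<int64_t> dims;
  std::vector<char> buffer;    // stands in for the Allocation
  bool has_allocator = false;  // pten-style construction sets this

  explicit SketchTensor(bool with_allocator = false)
      : has_allocator(with_allocator) {}

  std::size_t numel() const {
    if (dims.empty()) return 0;
    std::size_t n = 1;
    for (auto d : dims) n *= static_cast<std::size_t>(d);
    return n;
  }

  // pten usage: allocate immediately when an allocator is present.
  // fluid usage: only record dims; allocation is deferred.
  void Resize(std::vector<int64_t> new_dims) {
    dims = std::move(new_dims);
    if (has_allocator) buffer.resize(numel() * sizeof(float));
  }

  // fluid usage: deferred allocation happens on first mutable_data call.
  float* mutable_data() {
    if (buffer.size() < numel() * sizeof(float))
      buffer.resize(numel() * sizeof(float));
    return reinterpret_cast<float*>(buffer.data());
  }
};
```

Under this sketch, a pten-style tensor holds memory right after Resize, while a fluid-style tensor holds none until mutable_data is called, which is exactly why a silent no-op Resize on a default-constructed tensor deserves a diagnostic.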

size_t requested_size = 0);

template <typename T>
T* mutable_data(const DDim& dims,
Contributor:

In follow-up work, comments should be added to all methods that are kept only for compatibility and are not meant to be used by pten kernels, to avoid misuse when migrating kernels.

jim19930609 (Contributor, Author) replied:

The corresponding comments will be added in the next PR.
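Beyond plain comments, one way to make compat-only methods hard to misuse is the standard `[[deprecated]]` attribute, which surfaces a compiler warning at every call site. The class and method below are hypothetical illustrations, not Paddle code:

```cpp
#include <cstddef>

// Hypothetical sketch: marking a compatibility-only method so that any
// use in newly migrated pten kernels triggers a compile-time warning.
struct TensorCompat {
  // Kept only for fluid compatibility; new pten kernels should not call it.
  [[deprecated("fluid-compat only; prefer the pten-native API")]]
  void* mutable_data_compat() { return nullptr; }
};
```

This complements documentation comments: reviewers can grep for the attribute, and accidental uses show up in CI logs as warnings.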

data_ = holder;
}

std::shared_ptr<paddle::memory::Allocation> move_data_shared() {
Contributor:

Would it be better to return shared_ptr&& here?

jim19930609 (Contributor, Author) replied:

Thanks! A follow-up patch will be submitted for this change.
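For context on the review point: returning the shared_ptr by value after `std::move` already transfers ownership and leaves the member empty, without the dangling-reference risks that returning `shared_ptr&&` to a member can introduce. A minimal sketch (the `Holder` class is hypothetical, simplified from the `move_data_shared` pattern above):

```cpp
#include <memory>
#include <utility>

// Hypothetical holder illustrating ownership transfer out of a member.
struct Holder {
  std::shared_ptr<int> data_;

  // Returning by value with std::move: the caller receives ownership
  // and data_ is left null; the use count is never bumped.
  std::shared_ptr<int> move_data_shared() { return std::move(data_); }
};
```

Whether the by-value-with-move form or an rvalue-reference return is preferable depends on whether callers are expected to always take ownership; the follow-up patch mentioned above would settle that choice.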

@zhiqiu (Contributor) left a comment:

LGTM for allocshared

@jim19930609 jim19930609 merged commit dfdc996 into PaddlePaddle:develop Jan 4, 2022