remove detail from LODTensor #3364

Merged: 15 commits, merged into PaddlePaddle:develop on Aug 14, 2017
Conversation

@Superjomn (Contributor) commented on Aug 9, 2017:

fix: #3441

@Superjomn requested a review from @wangkuiyi on August 9, 2017, 13:14
namespace paddle {
namespace framework {

namespace detail {

using LOD = LODTensor::LOD;

Review comment (Collaborator):
No using in header files.
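
A minimal sketch of one way to follow this advice: keep the alias out of the shared header and declare it only inside the implementation file. The file name and surrounding namespaces below mirror the diff; this is illustrative, not the code from the PR.

// lod_tensor.cc (sketch): the alias is local to this translation unit,
// so headers that include lod_tensor.h never see it.
namespace paddle {
namespace framework {
namespace detail {
using LOD = LODTensor::LOD;
}  // namespace detail
}  // namespace framework
}  // namespace paddle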

* @level_begin: level to begin slice.
* @level_end: level to end slice.
*/
LOD SliceLOD(const LODTensor::LOD &lod, size_t level_begin, size_t level_end);

Review comment (Collaborator):
It seems more reasonable to make SliceLOD a method of class LOD: LOD::Slice.

* @elem_begin: element's index to begin slice.
* @elem_end: element's index to end slice.
*/
LOD SliceLOD(const LODTensor::LOD &lod, size_t level, size_t elem_begin,

Review comment (Collaborator):
It seems more reasonable if SliceLOD is a method of LOD: LOD::Slice.
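
A hedged sketch of what that suggestion could look like if LOD became a small class with its own Slice method. The member name levels_ and the vector-of-offset-vectors layout are assumptions for illustration, not the actual refactor:

#include <cstddef>
#include <vector>

class LOD {
 public:
  // Keep only the levels in [level_begin, level_end).
  LOD Slice(size_t level_begin, size_t level_end) const {
    LOD sliced;
    sliced.levels_.assign(levels_.begin() + level_begin,
                          levels_.begin() + level_end);
    return sliced;
  }

 private:
  std::vector<std::vector<size_t>> levels_;
};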


return LODTensor(new_tensor, new_lod);
LODTensor LODTensor::SliceLevels(size_t level_begin, size_t level_end) const {
auto new_lod = detail::SliceLOD(lod_start_pos_, level_begin, level_end);

Review comment (Collaborator):
It seems that if we can fix #3376, then L47 to L52 could be merged into one line?

return LODTensor(lod_.Slice(level_begin, level_end)).ShareDataWith<T>(*this);

auto sliced_tensor = tensor_->Slice<T>(start_idx, end_idx);
auto new_tensor = std::make_shared<Tensor>();
new_tensor->CopyFrom<T>(sliced_tensor, dst_place);
// slice elements just need to update LOD info, because offsets are not

Review comment (Collaborator):
slice elements ... => Slice elements ...

@@ -37,6 +38,12 @@ class Variable {
return static_cast<T*>(holder_->Ptr());
}

void CloneTensorType(const Variable& other) {

@wangkuiyi (Collaborator) commented on Aug 10, 2017:
template <typename T>
void Assign(const Variable& v) {
  // Reset the holder from another Variable's value.
  holder_.reset(new PlaceholderImpl<T>(v.Get<T>()));
}
template <typename T>
void Assign(T* v) {
  // Reset the holder from a raw pointer.
  holder_.reset(new PlaceholderImpl<T>(v));
}
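
If the suggested overloads were adopted, usage might look roughly like this. This is a hypothetical sketch: Tensor as the payload type and the ownership semantics of PlaceholderImpl are assumptions, not something confirmed in the PR.

// Hypothetical usage of the suggested Assign overloads.
Variable v1, v2;
v1.Assign<Tensor>(new Tensor());  // wrap a freshly created Tensor
v2.Assign<Tensor>(v1);            // rebuild v2's holder from v1's Tensor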

@Superjomn changed the title from "Tensor improve" to "remove detail from LODTensor" on Aug 12, 2017

@jacquesqiao (Member) left a comment:

LGTM! Follow-up PRs are needed to complete this.

@jacquesqiao merged commit 1ee633d into PaddlePaddle:develop on Aug 14, 2017
Successfully merging this pull request may close this issue: LODTensor merge details.