Dev add contiguous view ops #7503

Merged: 60 commits merged into master from dev_add_contiguous_view_ops on Mar 1, 2022
Changes from 7 commits

Commits:
880c677 view op (Flowingsun007, Feb 14, 2022)
7a7c71f narrow op (Flowingsun007, Feb 14, 2022)
3e65403 Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 15, 2022)
5ab765c squeeze unsqueeze op (Flowingsun007, Feb 15, 2022)
93528b0 Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 15, 2022)
0508fcd revert narrow (Flowingsun007, Feb 15, 2022)
179d1d0 refine (Flowingsun007, Feb 15, 2022)
149e65d refine (Flowingsun007, Feb 16, 2022)
eda2f18 Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 17, 2022)
1e3aebc refine (Flowingsun007, Feb 18, 2022)
1f4f20d format (Flowingsun007, Feb 18, 2022)
93b2cc5 Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 18, 2022)
f8aa516 Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 18, 2022)
5330327 Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 18, 2022)
e896e4f Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 19, 2022)
d935aeb Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 21, 2022)
e0d7437 Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 21, 2022)
1c4d27d refine (Flowingsun007, Feb 21, 2022)
3cd38df fix comments (Flowingsun007, Feb 21, 2022)
9827953 mrefine (Flowingsun007, Feb 21, 2022)
508a6dd Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 22, 2022)
e051495 refine (Flowingsun007, Feb 22, 2022)
8efe8d4 refine (Flowingsun007, Feb 22, 2022)
4a4b803 add todo (Flowingsun007, Feb 22, 2022)
5fe8df8 Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 22, 2022)
c6e3ac9 fix comment (Flowingsun007, Feb 22, 2022)
c54e447 use computeStride (Flowingsun007, Feb 22, 2022)
5cc546c Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 22, 2022)
ca4a73b Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 23, 2022)
060c09d auto format by CI (oneflow-ci-bot, Feb 23, 2022)
3844109 Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 24, 2022)
542f58a refine (Flowingsun007, Feb 24, 2022)
eb75c64 Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 24, 2022)
84e2f3f Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 24, 2022)
ec9b728 refine (Flowingsun007, Feb 24, 2022)
4632f16 Merge branch 'dev_add_contiguous_view_ops' of github.com:Oneflow-Inc/… (Flowingsun007, Feb 24, 2022)
563013e Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 25, 2022)
6b4badf Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 25, 2022)
85fc0ab Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 26, 2022)
ae3851f Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 28, 2022)
b27fdcf refine (Flowingsun007, Feb 28, 2022)
1e1d5c9 refine (Flowingsun007, Feb 28, 2022)
1a26fee refine (Flowingsun007, Feb 28, 2022)
efe7541 support scalar tensor view (Flowingsun007, Feb 28, 2022)
061ce6c Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 28, 2022)
311c325 refine (Flowingsun007, Feb 28, 2022)
4616828 refint (Flowingsun007, Feb 28, 2022)
c2c4618 Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Feb 28, 2022)
ee96906 Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Mar 1, 2022)
645ae80 Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Mar 1, 2022)
c46dbf4 auto format by CI (oneflow-ci-bot, Mar 1, 2022)
38be4ae Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Mar 1, 2022)
16e2cfc fix clang check (Flowingsun007, Mar 1, 2022)
d636809 auto format by CI (oneflow-ci-bot, Mar 1, 2022)
51f8830 refine (Flowingsun007, Mar 1, 2022)
fa1bb2e auto format by CI (oneflow-ci-bot, Mar 1, 2022)
9d08218 Merge branch 'master' into dev_add_contiguous_view_ops (Flowingsun007, Mar 1, 2022)
7615563 Merge branch 'master' into dev_add_contiguous_view_ops (oneflow-ci-bot, Mar 1, 2022)
958f5d4 Merge branch 'master' into dev_add_contiguous_view_ops (oneflow-ci-bot, Mar 1, 2022)
85a5069 Merge branch 'master' into dev_add_contiguous_view_ops (oneflow-ci-bot, Mar 1, 2022)
125 changes: 97 additions & 28 deletions oneflow/core/framework/tensor_methods.cpp
@@ -64,7 +64,6 @@ Maybe<Tensor> BasicView(const std::shared_ptr<Tensor>& input, const Shape& target_shape,

Maybe<Tensor> BasicView(const std::shared_ptr<Tensor>& input, const Shape& target_shape,
const Stride& target_stride, int64_t storage_offset) {
storage_offset = storage_offset + JUST(JUST(input->AsMirroredTensor())->storage_offset());
// TODO(): Check shape compatible.
auto device = JUST(input->device());
auto tensor_meta = std::make_shared<MirroredTensorMeta>(
@@ -86,38 +85,14 @@ Maybe<Tensor> BasicView(const std::shared_ptr<Tensor>& input, const Shape& target_shape,
return output;
}

Maybe<Tensor> Reshape(const std::shared_ptr<Tensor>& input, const Shape& shape) {
Maybe<Tensor> Reshape(const std::shared_ptr<Tensor>& input, const Shape& target_shape) {
if (!(input->is_eager() && input->is_local())) {
return Error::RuntimeError() << "view::Reshape(): input should be eager local tensor, but got "
<< (input->is_lazy() ? "lazy" : "consistent");
}
int need_infer_axis = -1;
size_t count = 1;
for (int i = 0; i < shape.NumAxes(); ++i) {
if (shape.At(i) < -1) {
return Error::RuntimeError() << "Invalid shape dimension " << shape.At(i);
} else if (shape.At(i) == -1) {
CHECK_EQ_OR_RETURN(need_infer_axis, -1)
<< "Shape " << shape.ToString() << " has more than 1 axis that needs to be infered.";
need_infer_axis = i;
} else {
count *= shape.At(i);
}
}

std::shared_ptr<Tensor> output;
size_t x_count = input->shape()->Count(0);
if (need_infer_axis == -1) {
CHECK_EQ_OR_RETURN(shape.Count(0), x_count);
output = JUST(BasicView(input, shape, 0));
} else {
Shape infered_shape = shape;
infered_shape.Set(need_infer_axis, x_count / count);
CHECK_EQ_OR_RETURN(infered_shape.Count(0), x_count)
<< "Shape " << shape.ToString() << " is invalid for input of shape "
<< input->shape()->ToString();
output = JUST(BasicView(input, infered_shape, 0));
}
int64_t storage_offset = JUST(JUST(input->AsMirroredTensor())->storage_offset());
std::shared_ptr<Tensor> output = JUST(BasicView(input, target_shape, storage_offset));

if (autograd::GradMode::is_enabled() && input->requires_grad()) {
Shape input_shape(input->shape()->dim_vec());
@@ -192,6 +167,100 @@ Maybe<Tensor> Slice(const std::shared_ptr<Tensor>& input, const std::vector<int64_t>& starts,
return output;
}


Maybe<Tensor> UnSqueeze(const std::shared_ptr<Tensor>& input, const int32_t& expand_dim) {
if (!(input->is_eager() && input->is_local())) {
return Error::RuntimeError()
<< "view::UnSqueeze(): input should be eager local tensor, but got "
<< (input->is_lazy() ? "lazy" : "consistent");
}

const auto& shape = input->shape();
const auto& strides = JUST(input->stride());
const auto& ndim = shape->NumAxes();

DimVector target_dim_vec(ndim + 1);
StrideVector target_stride_vec(ndim + 1);

int cnt = 0;
for (int i = 0; i < ndim; i++) {
if (i == expand_dim) { cnt++; }
target_dim_vec[cnt] = shape->At(i);
target_stride_vec[cnt] = strides->At(i);
cnt++;
}
target_dim_vec[expand_dim] = 1;
target_stride_vec[expand_dim] = strides->At(expand_dim);

int64_t storage_offset = JUST(JUST(input->AsMirroredTensor())->storage_offset());
std::shared_ptr<Tensor> output =
JUST(BasicView(input, Shape(target_dim_vec), Stride(target_stride_vec), storage_offset));

if (autograd::GradMode::is_enabled() && input->requires_grad()) {
auto backward_fn =
std::make_shared<std::function<Maybe<void>(const TensorTuple&, TensorTuple*, bool)>>(
[=](const TensorTuple& out_grads, TensorTuple* in_grads,
bool create_graph) -> Maybe<void> {
autograd::AutoGradMode mode(create_graph);
CHECK_EQ_OR_RETURN(out_grads.size(), 1);
in_grads->resize(1);
in_grads->at(0) = JUST(functional::Reshape(out_grads.at(0), *shape));
return Maybe<void>::Ok();
});
TensorTuple outputs{output};
JUST(GetThreadLocalAutogradEngine()->AddBackwardFuncPtr("view::unsqueeze_backward",
backward_fn, {input}, &outputs));
}
return output;
}

Maybe<Tensor> Squeeze(const std::shared_ptr<Tensor>& input,
const std::vector<int32_t>& squeeze_dims) {
if (!(input->is_eager() && input->is_local())) {
return Error::RuntimeError() << "view::Squeeze(): input should be eager local tensor, but got "
<< (input->is_lazy() ? "lazy" : "consistent");
}

const auto& shape = input->shape();
const auto& strides = JUST(input->stride());
const int64_t ndim = shape->NumAxes();

const int target_ndim = ndim - squeeze_dims.size();
DimVector target_dim_vec(target_ndim);
StrideVector target_stride_vec(target_ndim);

int cnt = 0;
for (int i = 0; i < ndim; i++) {
if (find(squeeze_dims.begin(), squeeze_dims.end(), i) == squeeze_dims.end()) {
target_dim_vec[cnt] = shape->At(i);
target_stride_vec[cnt] = strides->At(i);
cnt++;
}
}

int64_t storage_offset = JUST(JUST(input->AsMirroredTensor())->storage_offset());
std::shared_ptr<Tensor> output =
JUST(BasicView(input, Shape(target_dim_vec), Stride(target_stride_vec), storage_offset));

if (autograd::GradMode::is_enabled() && input->requires_grad()) {
auto backward_fn =
std::make_shared<std::function<Maybe<void>(const TensorTuple&, TensorTuple*, bool)>>(
[=](const TensorTuple& out_grads, TensorTuple* in_grads,
bool create_graph) -> Maybe<void> {
autograd::AutoGradMode mode(create_graph);
CHECK_EQ_OR_RETURN(out_grads.size(), 1);
in_grads->resize(1);
in_grads->at(0) = JUST(functional::ReshapeLike(out_grads.at(0), input));
return Maybe<void>::Ok();
});
TensorTuple outputs{output};
JUST(GetThreadLocalAutogradEngine()->AddBackwardFuncPtr("view::squeeze_backward", backward_fn,
{input}, &outputs));
}
return output;
}


} // namespace view
} // namespace one
} // namespace oneflow
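These view ops only rewrite shape/stride metadata; the output aliases the input's storage. As a worked illustration of the stride bookkeeping in view::UnSqueeze above, here is a minimal standalone sketch (not PR code): it uses plain std::vector in place of OneFlow's DimVector/StrideVector and, like the PR's loop, assumes 0 <= expand_dim < ndim.

```cpp
#include <cstdint>
#include <iostream>
#include <utility>
#include <vector>

// Inserting a size-1 axis never moves data: every surviving axis keeps its
// old stride, and the stride chosen for the new axis is irrelevant because
// an axis of extent 1 is never advanced.
std::pair<std::vector<int64_t>, std::vector<int64_t>> UnsqueezeMeta(
    const std::vector<int64_t>& dims, const std::vector<int64_t>& strides,
    int expand_dim) {
  const int ndim = static_cast<int>(dims.size());
  std::vector<int64_t> out_dims(ndim + 1), out_strides(ndim + 1);
  int cnt = 0;
  for (int i = 0; i < ndim; ++i) {
    if (i == expand_dim) { cnt++; }  // leave a slot for the new axis
    out_dims[cnt] = dims[i];
    out_strides[cnt] = strides[i];
    cnt++;
  }
  out_dims[expand_dim] = 1;
  // Reuse the stride that previously lived at expand_dim, as the PR does.
  out_strides[expand_dim] = strides[expand_dim];
  return {out_dims, out_strides};
}

int main() {
  // A contiguous (2, 3) tensor has strides (3, 1); unsqueezing dim 1 yields
  // shape (2, 1, 3) with strides (3, 1, 1) -- same storage, no copy.
  const auto [dims, strides] = UnsqueezeMeta({2, 3}, {3, 1}, /*expand_dim=*/1);
  for (size_t i = 0; i < dims.size(); ++i) {
    std::cout << dims[i] << "/" << strides[i] << " ";  // prints: 2/3 1/1 3/1
  }
  std::cout << "\n";
}
```

Squeeze is the inverse walk: it copies over only the dims/strides not listed in squeeze_dims, which is why its backward pass can simply ReshapeLike the gradient to the original tensor.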
10 changes: 9 additions & 1 deletion oneflow/core/framework/tensor_methods.h
@@ -29,13 +29,21 @@ Maybe<bool> IsContiguous(const std::shared_ptr<Tensor>& tensor);
namespace view {

Maybe<Tensor> BasicView(const std::shared_ptr<Tensor>& input, const Shape& target_shape,
const Stride& target_strides, int64_t storage_offset);
int64_t storage_offset);

Maybe<Tensor> BasicView(const std::shared_ptr<Tensor>& input, const Shape& target_shape,
const Stride& target_stride, int64_t storage_offset);

Maybe<Tensor> Reshape(const std::shared_ptr<Tensor>& input, const Shape& shape);

Maybe<Tensor> Slice(const std::shared_ptr<Tensor>& input, const std::vector<int64_t>& starts,
const std::vector<int64_t>& ends, const std::vector<int64_t>& steps);

Maybe<Tensor> UnSqueeze(const std::shared_ptr<Tensor>& input, const int32_t& expand_dim);

Maybe<Tensor> Squeeze(const std::shared_ptr<Tensor>& input,
const std::vector<int32_t>& squeeze_dims);

} // namespace view
} // namespace one
} // namespace oneflow
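For orientation, a hypothetical call site composing the two new declarations (a sketch, not code from the PR; RoundTripView is an invented name, and the Maybe/JUST machinery is the same one used throughout tensor_methods.cpp):

```cpp
#include "oneflow/core/framework/tensor_methods.h"

namespace oneflow {
namespace one {

// Hypothetical helper: drop the leading size-1 axis, then re-insert it.
// Both calls return views that alias the input's storage; only the
// shape/stride metadata (and possibly the storage offset) changes.
Maybe<Tensor> RoundTripView(const std::shared_ptr<Tensor>& x) {
  std::shared_ptr<Tensor> squeezed =
      JUST(view::Squeeze(x, /*squeeze_dims=*/{0}));
  return view::UnSqueeze(squeezed, /*expand_dim=*/0);
}

}  // namespace one
}  // namespace oneflow
```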
16 changes: 10 additions & 6 deletions oneflow/core/functional/functional_api.yaml
@@ -784,6 +784,12 @@
signature: "Tensor (Tensor input, Int32 dim) => Unsqueeze"
bind_python: True

- name: "squeeze"
signature: [
"Tensor (Tensor x, Int32List[1] dim=None) => Squeeze",
]
bind_python: True

- name: "exp"
signature: "Tensor (Tensor x) => Exp"
bind_python: True
@@ -1107,6 +1113,10 @@
signature: "Tensor (Tensor x, Shape shape) => Reshape"
bind_python: True

- name: "view"
signature: "Tensor (Tensor x, Shape shape) => View"
bind_python: True

- name: "slice_view_1d_contiguous"
signature: "Tensor (Tensor x, Int64 start, Int64 end) => SliceView1dContiguous"
bind_python: True
@@ -1139,12 +1149,6 @@
signature: "Void (Tensor ref, Tensor value, Int64List start, Int64List stop, Int64List step) => LogicalSliceAssign"
bind_python: True

- name: "squeeze"
signature: [
"Tensor (Tensor x, Int32List[1] dim=None) => Squeeze",
]
bind_python: True

- name: "copy"
signature: "Tensor (Tensor x, String device_type, Int64 device_id) => Copy"
bind_python: True
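Besides the Python bindings, these YAML entries also produce C++ functional wrappers. A sketch of a plausible C++ call to the generated functional::View (assumed signature, inferred from the "Tensor (Tensor x, Shape shape) => View" string; FlattenAsView and the header path are illustrative, not from the PR):

```cpp
#include "oneflow/core/functional/functional.h"  // assumed generated header

namespace oneflow {
namespace one {

// Illustrative only: view the tensor as 1-D. On a contiguous input this
// should return a view over the same storage rather than a copy.
Maybe<Tensor> FlattenAsView(const std::shared_ptr<Tensor>& x) {
  return functional::View(x, Shape({x->shape()->elem_cnt()}));
}

}  // namespace one
}  // namespace oneflow
```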