This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[MXNET-382] Shape and Size Operator #10889

Merged: 31 commits merged into apache:master on Jun 30, 2018

Conversation

@anirudhacharya (Member) commented May 10, 2018

Description

Shape and Size operators. Prerequisite for this issue: #10789

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, a README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

  • Shape Operator
  • Size Operator

@haojin2 @eric-haibin-lin @zheng-da

Anirudh Acharya added 2 commits May 10, 2018 11:12
[](const NodeAttrs& attrs) { return std::vector<uint32_t>(1, 1); })
.set_attr<FCompute>("FCompute<cpu>", ShapeCompute<cpu>)
.set_attr<nnvm::FInferShape>("FInferShape",
[](const nnvm::NodeAttrs& attrs,
Contributor:

Please use spaces instead

Member Author:

Yes, the CI failed due to cpplint. I will fix it.

TYPE_ASSIGN_CHECK(*out_attrs, 0, 0U);
return out_attrs->at(0) != -1;
})
.set_attr<nnvm::FGradient>("FGradient", MakeZeroGradNodes)
Contributor:

Shouldn't this have no gradient definition at all, since there is no appropriate gradient for it?

Member Author:

No, the shape and size operators will not have gradients, right?

Contributor:

Right, I'm asking the same thing. Why did you define FGradient attribute for the op?

@@ -388,6 +388,26 @@ void CastCompute(const nnvm::NodeAttrs& attrs,
});
}

template<typename xpu>
void ShapeCompute(const nnvm::NodeAttrs& attrs,
const OpContext& ctx,
Contributor:

alignment.

TShape in_shape = in_data.shape_;
MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
mxnet_op::Kernel<mshadow_op::identity_with_cast, xpu>::Launch(
s, in_data.ndim(), out_data.dptr<DType>(), in_shape.data());
Contributor:

Why DType for out_data? Isn't that int64 as in the description?

Member Author:

Yes, I will make this change.

const TBlob& in_data = inputs[0];
const TBlob& out_data = outputs[0];
mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
TShape in_shape = in_data.shape_;
Contributor:

const TShape&.

.describe("Returns a 1D int64 array containing the shape of data.")
.set_num_inputs(1)
.set_num_outputs(1)
.set_attr<nnvm::FInplaceIdentity>("FInplaceIdentity",
Contributor:

Why add this?

Member Author:

I will remove it.

TShape target_shape(1);
target_shape[0] = in_attrs->at(0).ndim();
SHAPE_ASSIGN_CHECK(*out_attrs, 0, target_shape);
return out_attrs->at(0).ndim() != 0U && out_attrs->at(0).Size() != 0U;
Contributor:

Use shape_is_none(out_attrs->at(0)).

Member Author:

Are you suggesting that I define a shape_is_none function and use it here?

Contributor:

It's already available.

std::vector<int>* out_attrs) {
CHECK_EQ(in_attrs->size(), 1U);
CHECK_EQ(out_attrs->size(), 1U);
TYPE_ASSIGN_CHECK(*out_attrs, 0, 0U);
Contributor:

Use the type enum variable name.

Member Author:

Okay, I will use mshadow::kInt64.
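A minimal sketch of what that fix might look like in the type-inference lambda, assuming the single output is always forced to int64 (the exact registration in the merged code may differ slightly):

// Hypothetical FInferType fragment for the shape/size operators: one input,
// one output, and the output type is always int64 regardless of the input type.
.set_attr<nnvm::FInferType>("FInferType",
  [](const nnvm::NodeAttrs& attrs,
     std::vector<int>* in_attrs,
     std::vector<int>* out_attrs) {
    CHECK_EQ(in_attrs->size(), 1U);
    CHECK_EQ(out_attrs->size(), 1U);
    // Named type enum instead of the magic constant 0U (mshadow::kFloat32).
    TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kInt64);
    return out_attrs->at(0) != -1;
  })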

@@ -77,6 +77,9 @@ NNVM_REGISTER_OP(_identity_with_attr_like_rhs)
NNVM_REGISTER_OP(reshape_like)
.set_attr<FCompute>("FCompute<gpu>", UnaryOp::IdentityCompute<gpu>);

NNVM_REGISTER_OP(shape)
Contributor:

Where does the name come from? This looks confusing and conflicts with the property name shape of NDArray in Python. Need to ensure the documentation can be rendered correctly.

Member Author:

What do you suggest the name of the operator should be? This is more or less the name used in a couple of other frameworks too.

Contributor:

Please make sure the doc page can be rendered correctly at least.

@zheng-da (Contributor):

The shape operator doesn't have a backward pass. My biggest concern is that any computation that uses this operator can't perform a backward computation.

});
}


Contributor:

Please get rid of one blank line here; C++ code uses only one blank line between functions.

@@ -399,6 +399,37 @@ NNVM_REGISTER_OP(reshape_like)
.add_argument("lhs", "NDArray-or-Symbol", "First input.")
.add_argument("rhs", "NDArray-or-Symbol", "Second input.");

NNVM_REGISTER_OP(shape)
.describe("Returns a 1D int64 array containing the shape of data.")
Contributor:

Would it be clearer if you add an example here?
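For illustration, the describe string could carry a small worked example, roughly like this (a sketch only; the operator's final name and wording were still under discussion at this point):

NNVM_REGISTER_OP(shape)
.describe(R"code(Returns a 1D int64 array containing the shape of data.

Example::

  shape([[1,2,3,4], [5,6,7,8]]) = [2,4]

)code" ADD_FILELINE)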

@anirudhacharya anirudhacharya requested a review from szha as a code owner May 11, 2018 02:16
@anirudhacharya anirudhacharya changed the title [MXNET-382] Shape Operator [MXNET-382] Shape and Size Operator May 11, 2018
@chinakook (Contributor):

That's great. Keras and TensorFlow both have this op.

@piiswrong (Contributor):

The names shape_op and size_op look ad hoc.

@anirudhacharya (Member, Author):

@piiswrong shape and size are already existing NDArray operations. I changed the names to prevent confusion. Some other names that come to mind are shape_operator, tensor_shape, and array_shape.

@reminisce (Contributor):

How about shape_nd and size_nd, which mimic the naming of gather_nd and scatter_nd?

SHAPE_ASSIGN_CHECK(*out_attrs, 0, target_shape);
return !shape_is_none(out_attrs->at(0));
})
.set_attr<nnvm::FInferType>("FInferType",
Member:

Not registering FGradient?

Member Author:

The shape operator is not differentiable. Check the conversation here - #10789 (comment)

CHECK_EQ(req.size(), 1U);
const TBlob& in_data = inputs[0];
const TBlob& out_data = outputs[0];
out_data.dptr<int64_t>()[0] = in_data.Size();
Member:

out_data holds a pointer to GPU memory. You need to explicitly use a kernel launch to set the value.

Member Author:

If I use a kernel launch then I will need a data buffer pointing to in_data.Size(). How would I get that? in_data.Size() is of type index_t, which does not have a data or dptr attribute.

Contributor:

You know the output size is only 1, so you can just use 1 for that.

Member Author:

mxnet_op::Kernel<mshadow_op::identity_with_cast, xpu>::Launch(s, 1U, out_data.dptr<int64_t>(), < what goes here? - in_data.Size()?? >);
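One way this can work, as the later revision of this PR suggests with its size_kernel: pass in_data.Size() to the launch by value as a kernel argument, so no device-side buffer is needed at all. A sketch under that assumption (the argument types are illustrative, not the exact merged code):

// Kernel whose Map receives the scalar size by value and writes it into the
// single-element int64 output; nothing has to be dereferenced from host memory.
struct size_kernel {
  MSHADOW_XINLINE static void Map(int i, int64_t *out, const index_t size) {
    out[i] = static_cast<int64_t>(size);
  }
};

// Launch with exactly one work item; in_data.Size() is copied into the kernel
// argument list by value, so this works for both the CPU and GPU paths.
mxnet_op::Kernel<size_kernel, xpu>::Launch(
    s, 1, out_data.dptr<int64_t>(), in_data.Size());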

@szha (Member) commented May 21, 2018:

Ping for another round of reviews.

@chinakook (Contributor):

If we have a symbol A with 4 dims, how can I get the size of dim 1 from the shape_nd(A) result?
It's straightforward in Keras and TensorFlow using K.shape(A)[1] and A.shape[1].
I think it's difficult to do this in MXNet's Symbol system.

@@ -96,6 +96,12 @@ struct identity_with_cast {
}
};

struct size_kernel {
MSHADOW_XINLINE static void Map(int i, int64_t *out, unsigned int in) {
out[0] = int64_t(in);
Contributor:

nit: static_cast<int64_t>

const TShape& in_shape = in_data.shape_;
MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
mxnet_op::Kernel<mshadow_op::identity_with_cast, xpu>::Launch(
s, in_data.ndim(), out_data.dptr<int64_t>(), in_shape.data());
Contributor:

in_shape.data is a pointer to CPU memory, which cannot be directly accessed on the GPU. You can use Shape<ndim> instead.

Contributor:

How come this is not caught by CI?

Member Author:

It did; the CI failed for the GPU tests. I need to fix it.

@piiswrong (Contributor):

shape_nd still sounds weird, as it's also available in Symbol.

BTW, I think these operators can be useful, but they won't solve issue #10789.

}

MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
mxnet_op::Kernel<mshadow_op::shape_kernel, gpu>::Launch(
Contributor:

Same as above: it's simply copying a tiny amount of data from a CPU array to a GPU array. Launching a kernel is expensive and a waste of resources; you can just call cudaMemcpyAsync to reduce the overhead.
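A rough sketch of that suggestion for the GPU path, assuming a small host-side staging buffer and the operator's CUDA stream (names and structure are illustrative, not the exact merged code):

// Hypothetical GPU shape compute fragment: stage the shape as int64 on the
// host, then copy it to the device instead of launching a kernel.
const TShape& in_shape = in_data.shape_;
const int ndim = in_data.ndim();
std::vector<int64_t> shape_host(ndim);
for (int i = 0; i < ndim; ++i) {
  shape_host[i] = static_cast<int64_t>(in_shape[i]);
}
mshadow::Stream<gpu> *s = ctx.get_stream<gpu>();
cudaStream_t stream = mshadow::Stream<gpu>::GetStream(s);
CUDA_CALL(cudaMemcpyAsync(out_data.dptr<int64_t>(), shape_host.data(),
                          ndim * sizeof(int64_t), cudaMemcpyHostToDevice,
                          stream));
// The staging buffer is a local vector, so block until the copy has finished
// before it goes out of scope.
CUDA_CALL(cudaStreamSynchronize(stream));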

const TBlob& out_data = outputs[0];
mshadow::Stream<gpu> *s = ctx.get_stream<gpu>();
const TShape& in_shape = in_data.shape_;
Shape<10> temp_shape;
Contributor:

What if ndim is greater than 10?

Actually, I agree with reminisce; you should use cudaMemcpy here so that you don't need this magic number.

Member Author:

Okay, I will fix it.

@anirudhacharya (Member, Author):

@reminisce @piiswrong ping for review.

@@ -92,7 +92,8 @@ MXNET_UNARY_MATH_OP(identity_grad, 1);
struct identity_with_cast {
template<typename DTypeIn, typename DTypeOut>
MSHADOW_XINLINE static void Map(int i, DTypeOut *out, DTypeIn *in) {
out[i] = DTypeOut(in[i]);
DTypeIn in_data = in[i];
Contributor:

What's the purpose of this change?

@@ -388,6 +388,20 @@ void CastCompute(const nnvm::NodeAttrs& attrs,
});
}

template<typename xpu>
void ShapeCompute(const nnvm::NodeAttrs& attrs,
Contributor:

You don't need to make templates for shape and size functions based on the type of device. CPU and GPU FCompute functions are defined respectively in .cc and .cu and don't share anything.
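In other words, the CPU path can be a plain, non-templated function in the .cc file, with the GPU counterpart registered separately from the .cu file. A sketch of the CPU side under that suggestion (illustrative, not the exact merged code):

// elemwise_unary_op_basic.cc: CPU-only shape compute, registered for cpu.
void ShapeComputeCPU(const nnvm::NodeAttrs& attrs,
                     const OpContext& ctx,
                     const std::vector<TBlob>& inputs,
                     const std::vector<OpReqType>& req,
                     const std::vector<TBlob>& outputs) {
  const TBlob& in_data = inputs[0];
  const TBlob& out_data = outputs[0];
  const TShape& in_shape = in_data.shape_;
  // The shape lives in host memory, so a simple loop fills the int64 output.
  int64_t* out_ptr = out_data.dptr<int64_t>();
  for (int i = 0; i < in_data.ndim(); ++i) {
    out_ptr[i] = static_cast<int64_t>(in_shape[i]);
  }
}

NNVM_REGISTER_OP(shape)
.set_attr<FCompute>("FCompute<cpu>", ShapeComputeCPU);

// The GPU version (e.g. a cudaMemcpyAsync-based ShapeComputeGPU) would live in
// the corresponding .cu file and be registered there with "FCompute<gpu>".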

@anirudhacharya (Member, Author) commented Jun 29, 2018:

@haojin2 @piiswrong can this be merged?

@@ -398,6 +398,98 @@ NNVM_REGISTER_OP(reshape_like)
.add_argument("lhs", "NDArray-or-Symbol", "First input.")
.add_argument("rhs", "NDArray-or-Symbol", "Second input.");

void ShapeComputeCPU(const nnvm::NodeAttrs& attrs,
const OpContext& ctx,
Contributor:

nit: alignments of all ComputeXPU functions

Anirudh Acharya added 2 commits June 29, 2018 13:53
@anirudhacharya (Member, Author):

@piiswrong @reminisce please merge this.

@eric-haibin-lin eric-haibin-lin merged commit 33022f8 into apache:master Jun 30, 2018
@anirudhacharya anirudhacharya deleted the shape branch June 30, 2018 01:16
XinYao1994 pushed a commit to XinYao1994/incubator-mxnet that referenced this pull request Aug 29, 2018
* Shape Operator

* cuda

* size op

* lint issues

* docs example

* add docs, change op name to avoid conflict, add convenience fluent method

* change name to _nd

* fix test cases, add new kernel

* test name fix.

* solve gpu memory problem for size and shape

* get rid of FIgnoreInputs attr of shape_nd

* op name change

* fix

* retrigger CI

* retrigger CI

* retrigger CI

* trigger CI

* fix comments

* cpplint

* nit

* trigger CI
9 participants