
[NDArray] Set NDArray::Container.shape_ in NDArray::FromDLPack #5301

Merged 1 commit on Apr 10, 2020

Conversation

@hlu1 (Contributor) commented Apr 10, 2020

In some cases, the shape array pointed to by DLTensor.shape may not be dynamically allocated, so it can cease to exist after the DLTensor is passed to NDArray. We can avoid this problem by setting NDArray::Container.shape_ at construction time and assigning it back to DLTensor.shape.
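A minimal sketch of the fix described above, using simplified stand-in types (these are illustrations, not the actual TVM source):

```cpp
#include <cstdint>
#include <memory>
#include <vector>

// Simplified stand-ins for TVM's types (illustration only).
struct DLTensor {
  void* data;
  int ndim;
  int64_t* shape;
};

struct NDArrayContainer {
  DLTensor dl_tensor;
  std::vector<int64_t> shape_;  // owned copy of the shape
};

// Sketch of the fix: copy the incoming shape into the container at
// construction time, then point dl_tensor.shape back at the owned copy,
// so the NDArray no longer depends on the lifetime of the exporter's
// shape array.
std::unique_ptr<NDArrayContainer> FromDLTensor(const DLTensor& src) {
  auto c = std::make_unique<NDArrayContainer>();
  c->dl_tensor = src;
  c->shape_.assign(src.shape, src.shape + src.ndim);
  c->dl_tensor.shape = c->shape_.data();
  return c;
}
```

After this, the shape remains valid even if the exporter's original shape array was a stack local that has since gone out of scope.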

@tqchen (Member) commented Apr 10, 2020

Thanks @hlu1. However, according to the DLPack convention, the DLManagedTensor won't be deleted until the deleter is called. Could it be that the exporter is violating the convention?

@hlu1 (Contributor, Author) commented Apr 10, 2020

I'm referring to the case where DLManagedTensor.dl_tensor.shape, which is an int64_t*, points to an array allocated on the stack rather than on the heap. People are usually careful to make sure the data is allocated on the heap and properly deleted in the deleter, but they forget to do the same for the shape array.
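For contrast, a hypothetical exporter (simplified structs, not the real DLPack or TVM headers) that follows the convention hlu1 describes: both the data and the shape array live in the heap-allocated context that the deleter frees, so neither dangles:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical simplified DLPack-style structs (illustration only).
struct DLTensorLite {
  void* data;
  int ndim;
  int64_t* shape;
};

struct DLManagedTensorLite {
  DLTensorLite dl_tensor;
  void* manager_ctx;
  void (*deleter)(DLManagedTensorLite* self);
};

// Heap-owned context: the shape lives here, not in a stack array.
struct ExportCtx {
  std::vector<float> data;
  std::vector<int64_t> shape;
};

DLManagedTensorLite* Export(std::vector<float> data,
                            std::vector<int64_t> shape) {
  auto* ctx = new ExportCtx{std::move(data), std::move(shape)};
  auto* mt = new DLManagedTensorLite;
  mt->dl_tensor = {ctx->data.data(),
                   static_cast<int>(ctx->shape.size()),
                   ctx->shape.data()};
  mt->manager_ctx = ctx;
  // The deleter frees both the data and the shape in one step.
  mt->deleter = [](DLManagedTensorLite* self) {
    delete static_cast<ExportCtx*>(self->manager_ctx);
    delete self;
  };
  return mt;
}
```

The bug this PR guards against is an exporter that instead fills `shape` from a stack-local `int64_t shape[N]`, which dangles as soon as the exporting function returns even though the data itself is heap-allocated.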

@hlu1 (Contributor, Author) commented Apr 10, 2020

Another thing that's related:

std::vector<int64_t> NDArray::Shape() const {
  return get_mutable()->shape_;
}

https://github.com/apache/incubator-tvm/blob/master/src/runtime/ndarray.cc#L251-L253

If we don't set shape_ in NDArray::FromDLPack, we'll need to fix NDArray::Shape() to make sure it returns the right shape info.
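That alternative fix would mean deriving the shape from the DLTensor fields rather than from Container.shape_. A minimal sketch of that logic, with a simplified stand-in struct (not the real DLPack header):

```cpp
#include <cstdint>
#include <vector>

// Simplified stand-in for DLTensor (illustration only).
struct DLTensorLite {
  int ndim;
  int64_t* shape;
};

// Sketch of the alternative: build the shape vector from the DLTensor
// fields instead of relying on Container.shape_ having been populated.
std::vector<int64_t> ShapeOf(const DLTensorLite& t) {
  return std::vector<int64_t>(t.shape, t.shape + t.ndim);
}
```

Setting shape_ at construction time (as this PR does) avoids needing this, and keeps NDArray::Shape() a cheap accessor.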

@tqchen (Member) commented Apr 10, 2020

Got it, thanks @hlu1!

@tqchen tqchen merged commit 4808235 into apache:master Apr 10, 2020