[NDArray] Set NDArray::Container.shape_ in NDArray::FromDLPack #5301
Conversation
Thanks @hlu1! However, according to the DLPack convention, the DLManagedTensor won't be deleted until the deleter is called. Could it be that the exporter violates the convention?
I'm referring to the case where DLManagedTensor.dl_tensor.shape, which is an int64_t*, points to an array allocated on the stack rather than on the heap. People are usually careful to make sure the data is allocated on the heap and properly freed in the deleter, but forget to do the same for the shape info.
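To make the hazard concrete, here is a minimal sketch of the exporter side. The struct definitions are simplified stand-ins for the real ones in dlpack.h, and `ExportBuggy`/`ExportCorrect` are hypothetical names, not TVM or DLPack APIs; the point is only to contrast a shape pointer into stack storage with one the deleter actually owns.

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>

// Simplified stand-ins for the DLPack structs (the real ones live in dlpack.h).
struct DLTensor {
  void* data;
  int ndim;
  int64_t* shape;
};

struct DLManagedTensor {
  DLTensor dl_tensor;
  void* manager_ctx;
  void (*deleter)(DLManagedTensor*);
};

// BUGGY exporter (hypothetical): data is heap-allocated and released in the
// deleter, but `shape` points into a local array that dies when the function
// returns -- the consumer sees a dangling shape pointer long before the
// deleter ever runs.
DLManagedTensor* ExportBuggy() {
  int64_t shape[2] = {2, 3};                 // stack storage: dangles after return!
  auto* mt = new DLManagedTensor();
  mt->dl_tensor.data = std::malloc(2 * 3 * sizeof(float));
  mt->dl_tensor.ndim = 2;
  mt->dl_tensor.shape = shape;               // dangling once ExportBuggy returns
  mt->manager_ctx = nullptr;
  mt->deleter = [](DLManagedTensor* self) {
    std::free(self->dl_tensor.data);         // frees data, but shape was never owned
    delete self;
  };
  return mt;
}

// Correct exporter (hypothetical): the shape array also lives on the heap and
// is released in the deleter, matching the DLPack convention that everything
// reachable from the DLManagedTensor stays valid until the deleter is called.
DLManagedTensor* ExportCorrect() {
  auto* mt = new DLManagedTensor();
  mt->dl_tensor.data = std::malloc(2 * 3 * sizeof(float));
  mt->dl_tensor.ndim = 2;
  mt->dl_tensor.shape = new int64_t[2]{2, 3};
  mt->manager_ctx = nullptr;
  mt->deleter = [](DLManagedTensor* self) {
    std::free(self->dl_tensor.data);
    delete[] self->dl_tensor.shape;
    delete self;
  };
  return mt;
}
```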
Another thing that's related:
https://github.com/apache/incubator-tvm/blob/master/src/runtime/ndarray.cc#L251-L253 If we don't set …
Got it, thanks @hlu1!
…m_data:master to master

* commit 'cd0d52daa6942bdafa9363ff6cfa3d25fcd5b8d6': (824 commits)
  - [Intrinsic] Add log1p, ldexp, atan2, hypot, nextafter, copysign (apache#5312)
  - [Rust][CI] Restore Rust CI (apache#5137)
  - Remove PrimExpr from String (apache#5311)
  - [Requantize] Cleanup and Optimize Lowering (apache#5286)
  - [IR][TRANSFORM] Enable CopyOnWrite for passes. (apache#5309)
  - [PYTORCH] Abs, Arange, Softplus ops (apache#5295)
  - [LLVM] Fix generation of LLVM intrinsics (apache#5282)
  - [BYOC] Add example of Composite + Annotate for DNNL fused op (apache#5272)
  - [Frontend][TensorFlow] Improve TensorFlow Static Shape Tensor Array (apache#5243)
  - [RUNTIME] Introduce RValue reference(move) support to TypedPackedFunc (apache#5271)
  - [RELAY][FRONTEND][CAFFE2] add Mul and ConvTranspose operator (apache#5302)
  - [BYOC] Refine AnnotateTarget and MergeCompilerRegion Passes (apache#5277)
  - [CI] Fix the hexagon string (apache#5304)
  - [Arith] linear system and equation solver (apache#5171)
  - [PYTORCH] Repeat, Reciprocal & Reshape Op support (apache#5280)
  - [FRONTEND][TENSORFLOW] Fix gather_nd indices (apache#5279)
  - Update device_annotation.cc (apache#5291)
  - [REFACTOR][IR] Move to runtime::String (apache#5276)
  - [NDArray] Set shape_ in NDArray::FromDLPack (apache#5301)
  - [RUNTIME] Initial implementation of Hexagon runtime support (apache#5252)
  - ...
In some cases, the shape info in DLTensor.shape might not be dynamically allocated and may cease to exist after the DLTensor is passed to NDArray. We can avoid this problem by setting NDArray::Container.shape_ at construction time and assigning it back to DLTensor.shape.
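The fix described above can be sketched on the importer side as follows. This is a minimal illustration, not TVM's actual implementation: the `DLTensor` struct is a simplified stand-in for the one in dlpack.h, and the `Container` class with its `shape_` member only mirrors the names in the PR description. The key idea is that the container copies the shape into storage it owns and repoints `dl_tensor.shape` at that copy, so the imported tensor no longer depends on the lifetime of the exporter's shape buffer.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Simplified stand-in for the DLPack struct (the real one lives in dlpack.h).
struct DLTensor {
  void* data;
  int ndim;
  int64_t* shape;
};

// Hypothetical container sketch: copy the shape at construction time and
// assign the owned copy back to dl_tensor.shape, so a stack-allocated shape
// on the exporter side cannot leave us with a dangling pointer.
class Container {
 public:
  explicit Container(const DLTensor& t) : dl_tensor_(t) {
    shape_.assign(t.shape, t.shape + t.ndim);  // own a copy of the shape
    dl_tensor_.shape = shape_.data();          // repoint at owned storage
  }
  const DLTensor& tensor() const { return dl_tensor_; }

 private:
  DLTensor dl_tensor_;
  std::vector<int64_t> shape_;  // owned copy; outlives the exporter's buffer
};
```

After construction, the exporter's original shape array can go away (or be overwritten) without affecting the imported tensor.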