MNT: Move constructors to a direct style #773
Conversation
Thanks @mtsokol! LGTM so far!

- Let's make all members of `Tensor` private.
- Let's store a list of `free`-able `memref` `str`s. So, for example, in `to_sps` let's store `["data"]` for `dense`; then we will need to call `libc.free` on `data.allocated`. This will apply only when we take ownership from MLIR.
- We need to store a reference to NumPy/SciPy objects when we create `Tensor`s from them, as we were doing before. The pattern for `_hold_self_ref_in_ret` will still apply in this case, except we can just use `Tensor._base`, which can be the SciPy or NumPy object, or `None`.
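The three suggestions above can be sketched together in a minimal toy class. This is not the actual `sparse` code; the field names (`_free_fields`, `_base`) follow the review comment, and `free` is a recording stand-in for `libc.free` so the sketch is runnable:

```python
freed = []

def free(ptr):
    # Stand-in for libc.free so this sketch is runnable and testable.
    freed.append(ptr)

class Tensor:
    def __init__(self, memrefs, free_fields, owns_memory, base=None):
        # All members private, per the review.
        self._memrefs = memrefs          # memref field name -> allocated pointer
        self._free_fields = free_fields  # e.g. ["data"] for a dense tensor
        self._owns_memory = owns_memory  # True only when ownership came from MLIR
        self._base = base                # SciPy/NumPy source object, or None

    def __del__(self):
        # Free only the buffers this Tensor took ownership of from MLIR;
        # tensors backed by a NumPy/SciPy object rely on _base refcounting.
        if self._owns_memory:
            for name in self._free_fields:
                free(self._memrefs[name])

t = Tensor({"data": 0x1000}, ["data"], owns_memory=True)
del t
print(freed)  # → [4096]
```

Holding `_base` means the NumPy/SciPy buffer cannot be garbage-collected while the `Tensor` aliases it, which replaces the previous `_hold_self_ref_in_ret` bookkeeping.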
CodSpeed Performance Report: Merging #773 will degrade performance by 51.96%.
@hameerabbasi I applied all your review comments.
A simplification to the memory logic: we can do most of this in `__del__`, check for `self._owns_memory`, and then call `libc.free` on the constituent `memref`s if so.
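A hedged sketch of that `__del__` simplification, this time calling the real `libc.free` through `ctypes`. `MemrefHandle` and its `allocated` field are hypothetical stand-ins for an MLIR memref descriptor and its allocated pointer; the project's actual field names may differ:

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.malloc.restype = ctypes.c_void_p
libc.free.argtypes = [ctypes.c_void_p]

class MemrefHandle:
    """Toy stand-in for an MLIR memref descriptor (hypothetical field name)."""
    def __init__(self, nbytes):
        self.allocated = libc.malloc(nbytes)

class Tensor:
    def __init__(self, memrefs, owns_memory):
        self._memrefs = memrefs
        self._owns_memory = owns_memory

    def __del__(self):
        # Free the constituent memref buffers only when this Tensor owns them.
        if self._owns_memory:
            for m in self._memrefs:
                if m.allocated:
                    libc.free(m.allocated)
                    m.allocated = None  # guard against double-free

m = MemrefHandle(64)
t = Tensor([m], owns_memory=True)
del t  # __del__ frees m's buffer and clears the pointer
```

Doing the cleanup in `__del__` centralizes the ownership check in one place instead of spreading `free` calls across the constructors.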
Thanks for the 🥚-cellent work, @mtsokol! LGTM.
Hi @hameerabbasi,

This PR moves MLIR constructors to the direct style (with `sparse-assembler{direct-out=true}`). I think it's more concise with this approach - `_constructors.py` LOC is reduced by 50%, to 250.

Now all Tensors created from NumPy arrays or SciPy sparse arrays are owned and freed by the Python side. The ones that are results of MLIR ops (like `add` right now) are allocated by MLIR, and `owns_memory` in the `Tensor` class is set to `False`.

I haven't figured out yet how to correctly free MLIR-allocated memrefs (tried with `libc.free`), as I think there's still a memory leak after `sparse.add(tensor_a, tensor_b)`.
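The ownership split described above can be sketched as two constructors. The names `from_numpy` and `from_mlir_result` are hypothetical, and a stdlib `array.array` stands in for a NumPy array so the sketch is self-contained:

```python
from array import array

class Tensor:
    """Sketch of the ownership split; names are illustrative, not the real API."""
    def __init__(self, data, owns_memory, base=None):
        self._data = data
        self._owns_memory = owns_memory
        self._base = base  # keeps the source object alive while we alias its buffer

    @classmethod
    def from_numpy(cls, arr):
        # The source object owns the buffer; holding _base means Python's
        # refcounting frees it once both the source and the Tensor are gone.
        return cls(arr, owns_memory=False, base=arr)

    @classmethod
    def from_mlir_result(cls, raw):
        # Per the PR description, MLIR allocated this buffer and owns_memory
        # is False - so nothing on the Python side frees it yet, which would
        # explain the leak observed after sparse.add(tensor_a, tensor_b).
        return cls(raw, owns_memory=False)

t = Tensor.from_numpy(array("d", [1.0, 2.0, 3.0]))
```

Under this reading, fixing the leak would mean flipping ownership of MLIR results to the Python side and freeing their memrefs on destruction, as suggested in the review above.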