Design cuda array allocation functions #55
EmilyBourne started this conversation in Polls · Replies: 2 comments
-
cc: @jalalium @smazouz42
-
The NumPy array creation types currently supported in the main repo are:
I think it would make sense to support most of these. I think the simplest to start with will be:
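The list referred to in this comment was not preserved in this export, so the following is NOT a reconstruction of it. Purely for context, a sketch of some widely used NumPy creation functions of the kind under discussion (an illustrative, hypothetical subset):

```python
# Illustrative subset of NumPy array creation functions (assumption:
# these are the kind of "creation types" the comment refers to; the
# actual list from the discussion is not shown here).
import numpy as np

a = np.ones(5)            # array of five 1.0s
b = np.zeros((2, 3))      # 2x3 array of 0.0s
c = np.full(4, 7.0)       # four elements, all 7.0
d = np.empty(3)           # uninitialised array of length 3
e = np.arange(0, 10, 2)   # evenly spaced values: 0, 2, 4, 6, 8
```

Each of these would need a CUDA-aware counterpart (or a location argument) under the proposals discussed below.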
-
In order to support CUDA arrays we must decide which allocation functions we want to support.
As described in #25, the aim for now is to create an internal CUDA module inside Pyccel that can only be used with Pyccel. This library will later be used to add support for other existing libraries (e.g. Numba, CuPy, ...).
@bauom suggested using an interface similar to NumPy's. See #37, #38, #39.
However, it is not clear how such an interface would handle the memory location. There are several possibilities, e.g.:
- separate function names per location (e.g. `cuda.host_ones`)
- an additional argument (e.g. `on_gpu`) which must be provided as a literal by the user

A choice should be made about this. Please answer the poll to indicate your preferred method, or leave a comment if you have an additional suggestion.
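To make the trade-off between the two interface shapes concrete, here is a pure-Python sketch. All names (`host_ones`, `device_ones`, the `on_gpu` flag) are hypothetical illustrations, not Pyccel's actual API, and plain dictionaries stand in for real allocations:

```python
# Sketch (assumptions only): two possible shapes for a CUDA-aware
# allocation interface. Dictionaries stand in for real CUDA arrays.

def host_ones(shape):
    """Style 1: memory location encoded in the function name."""
    return {"location": "host", "shape": shape}

def device_ones(shape):
    """Style 1: hypothetical device-side counterpart of host_ones."""
    return {"location": "device", "shape": shape}

def ones(shape, on_gpu=False):
    """Style 2: location selected by a flag argument.

    For ahead-of-time code generation, Pyccel would need `on_gpu`
    to be a compile-time literal so the allocation target is known
    when the C/CUDA code is generated.
    """
    return {"location": "device" if on_gpu else "host", "shape": shape}

# Both styles express the same two requests:
assert host_ones((4,)) == ones((4,))
assert device_ones((4,)) == ones((4,), on_gpu=True)
```

Style 1 keeps every call trivially analysable but doubles the number of function names; style 2 mirrors NumPy's signature more closely but imposes the literal-argument restriction noted above.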
Documentation describing these methods will need to be written. This can be done as the spoofed functions are created.
In order to do this, issues must be created to track which functions should be implemented. We can use this issue to keep a list of the desired allocation functions.
Allocation functions to support