Is your feature request related to a problem? Please describe.
It is important to ensure that all device memory allocated inside of cuDF functions is done through RMM.
It is easy to overlook this, e.g., by forgetting to pass the rmm::exec_policy to a Thrust algorithm that allocates temporary memory.
Describe the solution you'd like
It would be fairly easy to add this to our CI testing by writing an LD_PRELOAD library that intercepts cudaMalloc and throws an error if it is called more than once.
This would ensure that there is only a single cudaMalloc call for the pool allocation.
There are some things to be aware of with this solution:
We'd need to ensure the pool is sized such that it won't need to grow for the tests (see the sketch after this list)
It would assume we're using cudaMalloc as the upstream resource for the pool (and not cudaMallocManaged)
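On the first point, a minimal sketch of what the test setup could look like, assuming RMM's pool_memory_resource and set_current_device_resource APIs (the 4 GiB figure and the main() wrapper are placeholders, not a proposal for actual sizes):

```cpp
// Hypothetical test setup: one upfront cudaMalloc for the pool, then the pool never grows.
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/per_device_resource.hpp>
#include <rmm/mr/device/pool_memory_resource.hpp>

#include <cstddef>

int main()
{
  // Upstream resource that performs the single cudaMalloc for the pool.
  rmm::mr::cuda_memory_resource upstream;

  // Placeholder size: pick something known to cover the whole test suite.
  auto const pool_size = std::size_t{4} * 1024 * 1024 * 1024;  // 4 GiB

  // initial == maximum, so the pool can never call cudaMalloc a second time.
  rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource> pool{&upstream, pool_size, pool_size};
  rmm::mr::set_current_device_resource(&pool);

  // ... run the cuDF tests; any stray cudaMalloc would then trip the LD_PRELOAD guard ...
  return 0;
}
```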
The gist of doing LD_PRELOAD injection for cudaMalloc is:
Write an init function annotated with __attribute__((constructor)) to ensure it runs at load time
In the init, use dlsym(RTLD_NEXT, "cudaMalloc") to get a pointer to the real cudaMalloc function
Write an overload of cudaMalloc with an identical signature to the real one
In that overload, add a static counter so that the first call is simply forwarded through the previously stored function pointer and any later call raises an error (a sketch of this follows the list)
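Putting those steps together, a minimal sketch might look like the following (the file name, library name, and error message are illustrative, not the actual gist):

```cpp
// cudamalloc_guard.cpp -- hypothetical name; compile to a shared library and LD_PRELOAD it.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE  // for RTLD_NEXT
#endif
#include <cuda_runtime_api.h>
#include <dlfcn.h>

#include <cstdio>
#include <cstdlib>

namespace {
// Pointer to the real cudaMalloc, resolved once at load time.
cudaError_t (*real_cudaMalloc)(void**, size_t) = nullptr;

// Runs when the library is loaded, before main().
__attribute__((constructor)) void init()
{
  real_cudaMalloc = reinterpret_cast<cudaError_t (*)(void**, size_t)>(dlsym(RTLD_NEXT, "cudaMalloc"));
  if (real_cudaMalloc == nullptr) {
    std::fprintf(stderr, "cudamalloc_guard: could not resolve cudaMalloc: %s\n", dlerror());
    std::abort();
  }
}
}  // namespace

// Interposed cudaMalloc (C linkage comes from the declaration in cuda_runtime_api.h).
// The first call -- the pool allocation -- is forwarded; any later call aborts the test.
cudaError_t cudaMalloc(void** devPtr, size_t size)
{
  static int call_count = 0;
  if (++call_count > 1) {
    std::fprintf(stderr,
                 "cudamalloc_guard: cudaMalloc called %d times; only the initial pool allocation is expected\n",
                 call_count);
    std::abort();
  }
  return real_cudaMalloc(devPtr, size);
}
```

Building it with something like `g++ -shared -fPIC -I$CUDA_HOME/include -o libcudamalloc_guard.so cudamalloc_guard.cpp -ldl` and running each test binary under `LD_PRELOAD=./libcudamalloc_guard.so` would be the CI hook. Two caveats: the counter above is not atomic, so a robust version would want to increment it atomically, and the interposition only works if the tests link against the CUDA runtime dynamically, since LD_PRELOAD cannot intercept statically linked symbols.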
Here's an example of how I did this in the past for pthread_mutex_lock/unlock when I was experimenting with annotating the Python GIL with NVTX (it didn't work because NVTX calls pthread_mutex_lock internally and you end up with infinite recursion :( ):