
[FEA] Extend tests to ensure no memory is allocated outside of RMM #11546

Open
jrhemstad opened this issue Aug 16, 2022 · 2 comments
Labels
feature request (New feature or request), libcudf (Affects libcudf (C++/CUDA) code)

Comments

@jrhemstad
Contributor

Is your feature request related to a problem? Please describe.

It is important to ensure that all device memory allocated inside cuDF functions is allocated through RMM.

It is easy to overlook this, e.g., by forgetting to pass the rmm::exec_policy to a Thrust algorithm that allocates temporary memory.
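
For illustration (not part of the original issue), here is a minimal sketch of that distinction. rmm::exec_policy and rmm::device_uvector are real RMM types; the helper function name is hypothetical:

```cpp
#include <rmm/cuda_stream_view.hpp>
#include <rmm/device_uvector.hpp>
#include <rmm/exec_policy.hpp>

#include <thrust/sort.h>

// Hypothetical helper: passing rmm::exec_policy routes Thrust's temporary
// allocations through the current RMM device resource. Calling
// thrust::sort(thrust::device, ...) instead would let Thrust allocate its
// scratch space with cudaMalloc directly, bypassing RMM.
void sort_through_rmm(rmm::device_uvector<int>& vec, rmm::cuda_stream_view stream)
{
  thrust::sort(rmm::exec_policy(stream), vec.begin(), vec.end());
}
```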

Describe the solution you'd like

It would be fairly easy to add this to our CI testing by writing an LD_PRELOAD library that overloads cudaMalloc to throw an error if it is called more than once.

This would ensure that there is only a single cudaMalloc call for the pool allocation.

There are some things to be aware of with this solution:

  • We'd need to ensure the pool is sized such that it won't need to grow for the tests (see the sketch after this list)
  • It would assume we're using cudaMalloc as the upstream resource for the pool (and not cudaMallocManaged)
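
For the first caveat, a minimal sketch of what the test harness might do, assuming RMM's pool_memory_resource with cudaMalloc (rmm::mr::cuda_memory_resource) as the upstream; the 4 GiB size is an arbitrary placeholder, not a value from the issue:

```cpp
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/per_device_resource.hpp>
#include <rmm/mr/device/pool_memory_resource.hpp>

#include <cstddef>

int main()
{
  // Upstream resource the pool draws from -- this is the single cudaMalloc
  // call the preload library would allow.
  rmm::mr::cuda_memory_resource upstream;

  // Size the pool so it never needs to grow during the tests: initial size ==
  // maximum size. 4 GiB is a placeholder; pick a value large enough for the suite.
  std::size_t const pool_size = std::size_t{4} << 30;
  rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource> pool{&upstream, pool_size, pool_size};

  // All subsequent RMM allocations (and Thrust allocations made via
  // rmm::exec_policy) come out of this pool.
  rmm::mr::set_current_device_resource(&pool);

  // ... run the test suite ...
  return 0;
}
```
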
jrhemstad added the feature request and libcudf labels on Aug 16, 2022
@jrhemstad
Contributor Author

The gist of doing LD_PRELOAD injection for cudaMalloc is (a rough sketch follows the list):

  • Write an init function annotated with __attribute__((constructor)) to ensure it runs at load time
  • In the init function, use dlsym(RTLD_NEXT, "cudaMalloc") to get a pointer to the real cudaMalloc function
  • Write an overload of cudaMalloc with a signature identical to the real one
  • In that overload, add a static counter to ensure it is only called once, then simply invoke the previously stored function pointer
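
A minimal sketch of such a preload library following those steps; the file name, error message, and build line are assumptions of mine, not from the issue:

```cpp
// preload_cuda_malloc.cpp (hypothetical file name)
// Build: g++ -shared -fPIC preload_cuda_malloc.cpp -o libpreload_cuda_malloc.so -ldl
// Run:   LD_PRELOAD=./libpreload_cuda_malloc.so <test binary>
#ifndef _GNU_SOURCE
#define _GNU_SOURCE  // for RTLD_NEXT
#endif
#include <dlfcn.h>

#include <cuda_runtime_api.h>

#include <cstdio>
#include <cstdlib>

// Pointer to the real cudaMalloc, resolved when the library is loaded.
static cudaError_t (*real_cudaMalloc)(void**, size_t) = nullptr;

// The constructor attribute makes this run at load time, before main().
__attribute__((constructor)) static void init()
{
  real_cudaMalloc =
    reinterpret_cast<cudaError_t (*)(void**, size_t)>(dlsym(RTLD_NEXT, "cudaMalloc"));
}

// Same signature as the real cudaMalloc; this definition shadows it via LD_PRELOAD.
extern "C" cudaError_t cudaMalloc(void** ptr, size_t size)
{
  static int call_count = 0;
  if (++call_count > 1) {
    std::fprintf(stderr, "cudaMalloc called %d times; only the pool allocation is allowed\n",
                 call_count);
    std::abort();
  }
  return real_cudaMalloc(ptr, size);
}
```

Note that LD_PRELOAD interposition only catches symbols resolved through the dynamic linker, so this assumes the tests link the CUDA runtime dynamically; a statically linked cudart would not be intercepted.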

Here's an example of how I did this in the past for pthread_mutex_lock/unlock when I was experimenting with annotating the Python GIL with NVTX (it didn't work because NVTX calls pthread_mutex_lock internally and you end up with infinite recursion :( ):

https://github.com/jrhemstad/gil_preload/blob/main/gil_preload.cpp#L62-L80

@github-actions

This issue has been labeled inactive-30d due to no recent activity in the past 30 days. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm this issue still needs to be addressed. This issue will be labeled inactive-90d if there is no activity in the next 60 days.
