Add new Java API for raw host memory allocation #17197

Merged

Conversation

Contributor

@revans2 revans2 commented Oct 29, 2024

Description

This is the first patch in a series that should make all Java host memory allocations go through the DefaultHostMemoryAllocator unless another allocator is explicitly provided.

This makes it simpler to track and control host memory usage.
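
A minimal sketch of the intended shape (DefaultHostMemoryAllocator is named above, but the interface, the entry points, and the return types below are illustrative stand-ins, not the exact cuDF Java API):

```java
import java.nio.ByteBuffer;

// Illustrative stand-ins only; not the real cuDF Java signatures.
interface HostMemoryAllocator {
    ByteBuffer allocate(int bytes);
}

final class DefaultHostMemoryAllocator implements HostMemoryAllocator {
    private static final DefaultHostMemoryAllocator INSTANCE =
            new DefaultHostMemoryAllocator();

    static HostMemoryAllocator get() {
        return INSTANCE;
    }

    @Override
    public ByteBuffer allocate(int bytes) {
        // Stand-in for the real native host allocation.
        return ByteBuffer.allocateDirect(bytes);
    }
}

final class HostMemoryBuffer {
    // Callers that do not pass an allocator get the default one, so all
    // host allocations funnel through a single, trackable place.
    static ByteBuffer allocate(int bytes) {
        return allocate(bytes, DefaultHostMemoryAllocator.get());
    }

    static ByteBuffer allocate(int bytes, HostMemoryAllocator allocator) {
        return allocator.allocate(bytes);
    }
}
```

Funneling the no-argument path through a single default allocator instance is what makes centralized tracking and control of host memory usage possible.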

Checklist

  • I am familiar with the Contributing Guidelines.
  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.

Signed-off-by: Robert (Bobby) Evans <bobby@apache.org>
@revans2 revans2 added the 3 - Ready for Review (Ready for review by team), Java (Affects Java cuDF API), Spark (Functionality that helps Spark RAPIDS), improvement (Improvement / enhancement to an existing function), and non-breaking (Non-breaking change) labels Oct 29, 2024
@revans2 revans2 self-assigned this Oct 29, 2024
@revans2 revans2 requested a review from a team as a code owner October 29, 2024 15:49
Contributor Author

@revans2 revans2 commented Oct 29, 2024

/merge

@rapids-bot rapids-bot bot merged commit 63b773e into rapidsai:branch-24.12 Oct 29, 2024
85 checks passed
rapids-bot bot pushed a commit that referenced this pull request Nov 4, 2024
This is step 3 in a process of making Java host memory allocation pluggable under a single allocation API. That API is really only used for large memory allocations, which are the ones that matter.

This changes the most common Java host memory allocation API to call into the pluggable host memory allocation API. This had to be done in multiple steps because the Spark plugin code was calling into the common memory allocation API, so a plugin-provided allocator would end up calling itself recursively (the sketch after the step list below illustrates the hazard).

Step 1. Create a new API that will not be called recursively (#17197)
Step 2. Have the Java plugin use that new API instead of the old one to avoid any recursive invocations (NVIDIA/spark-rapids#11671)
Step 3. Update the common API to use the new backend (this)
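
A sketch of the recursion hazard the step ordering avoids, again with illustrative stand-ins rather than the real cuDF and Spark plugin classes:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative stand-ins only; not the real cuDF or Spark plugin classes.
interface HostMemoryAllocator {
    ByteBuffer allocate(int bytes);
}

final class HostAlloc {
    static volatile HostMemoryAllocator current; // pluggable allocator, if set

    // Step 1 (#17197): a raw entry point that allocates directly and can
    // never re-enter the pluggable layer.
    static ByteBuffer allocateRaw(int bytes) {
        return ByteBuffer.allocateDirect(bytes); // stand-in for native malloc
    }

    // Step 3 (#17204): the common API now routes through the pluggable
    // allocator instead of allocating directly itself.
    static ByteBuffer allocate(int bytes) {
        HostMemoryAllocator a = current;
        return a != null ? a.allocate(bytes) : allocateRaw(bytes);
    }
}

// Step 2 (NVIDIA/spark-rapids#11671): the plugin's allocator must call the
// raw API; calling HostAlloc.allocate() here would loop straight back into
// this method once Step 3 lands.
final class TrackingAllocator implements HostMemoryAllocator {
    private final AtomicLong usedBytes = new AtomicLong();

    @Override
    public ByteBuffer allocate(int bytes) {
        usedBytes.addAndGet(bytes); // track/control host memory usage
        return HostAlloc.allocateRaw(bytes);
    }
}
```

Once Step 3 routes the common API through the pluggable allocator, any plugin allocator still calling the common API would re-enter itself; Steps 1 and 2 ensure the plugin is already on the raw path before that happens.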

There are likely to be more steps after this that involve cleaning up and removing APIs that are no longer needed.

This is marked as breaking even though it does not break any APIs; it changes the semantics enough that it feels like a breaking change.

This is blocked and should not be merged until Step 2 is merged, to avoid breaking the Spark plugin.

Authors:
  - Robert (Bobby) Evans (https://github.com/revans2)

Approvers:
  - Nghia Truong (https://github.com/ttnghia)
  - Alessandro Bellina (https://github.com/abellina)

URL: #17204