
[FEA] Ability to train larger datasets from managed memory (at least for RF) #3538

Open

teju85 opened this issue Feb 22, 2021 · 4 comments

Labels: benchmarking, CUDA / C++ (CUDA issue), Cython / Python (Cython or Python issue), Experimental (experimental features), feature request, inactive-30d, inactive-90d, Perf (runtime performance of the underlying code)

Comments

@teju85
Member

teju85 commented Feb 22, 2021

Is your feature request related to a problem? Please describe.
If the dataset is too large to fit in GPU memory, we should provide an option for users to train an RF model while keeping the dataset entirely in managed memory.

This could be a more generic request covering all algorithms in cuML. However, to begin with, we could limit ourselves to the RF algorithm.

Additional context
We should study the performance implications of this and document any that we observe.
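The request can be sketched with today's Python packages. This is a hypothetical illustration, not an implemented cuML option: it assumes RMM's `managed_memory` flag for `rmm.reinitialize` and routes CuPy allocations through RMM so the training matrix lives in CUDA managed (unified) memory, which the driver can page between host and device.

```python
# Sketch only: assumes rmm, cupy, and cuml are installed on a CUDA machine.
# The dataset sizes below are placeholders chosen to exceed typical VRAM.
import rmm
import cupy as cp

# Back all device allocations with CUDA managed memory so the dataset
# can be larger than physical GPU memory (pages migrate on demand).
rmm.reinitialize(managed_memory=True)
cp.cuda.set_allocator(rmm.rmm_cupy_allocator)

from cuml.ensemble import RandomForestClassifier

# Training data allocated through the managed-memory resource above.
X = cp.random.random((10_000_000, 32), dtype=cp.float32)
y = cp.random.randint(0, 2, size=10_000_000).astype(cp.int32)

clf = RandomForestClassifier(n_estimators=100)
clf.fit(X, y)  # tree building pages managed memory in and out as needed
```

The open question in this issue is what the oversubscription costs look like in practice: managed memory trades capacity for page-migration overhead, so the benchmarking label here is the point, not an afterthought.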

@teju85 added the feature request, CUDA / C++, Cython / Python, Perf, benchmarking, and Experimental labels on Feb 22, 2021
@dantegd
Member

dantegd commented Feb 22, 2021

I believe @cjnolet has done some performance analysis of Naive Bayes with managed memory (correct me if I'm misremembering).

But it would be excellent to have a comprehensive list of results for different types of algorithms! Another useful addition coming very soon is FAISS 1.7 (#3509), which allows plugging in a memory manager, so we should be able to have a single pool end-to-end for algorithms that use FAISS as well.

@cjnolet
Member

cjnolet commented Feb 22, 2021

Here's the original issue about pluggable memory management filed with the FAISS team: facebookresearch/faiss#1203

@github-actions

This issue has been labeled inactive-30d due to no recent activity in the past 30 days. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm this issue still needs to be addressed. This issue will be labeled inactive-90d if there is no activity in the next 60 days.

@github-actions

This issue has been labeled inactive-90d due to no recent activity in the past 90 days. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm this issue still needs to be addressed.
