[FEA] Ability to train larger datasets from managed memory (at least for RF) #3538
Labels: benchmarking, CUDA / C++, Cython / Python, Experimental, feature request, inactive-30d, inactive-90d, Perf
Is your feature request related to a problem? Please describe.
If the dataset is too large to fit in GPU memory, we should provide an option for users to train an RF model while keeping the dataset entirely in managed memory.
This could be a more generic request for all algorithms in cuML; to begin with, however, we could limit the scope to the RF algorithm.
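A minimal sketch of what the requested workflow could look like, assuming RMM's `ManagedMemoryResource` is used to back cuML's device allocations (the dataset shape and hyperparameters are illustrative, not from this issue):

```python
import numpy as np
import rmm
from cuml.ensemble import RandomForestClassifier

# Back all RMM device allocations with CUDA managed (unified) memory, so
# buffers can exceed physical GPU memory and be paged on demand by the driver.
rmm.mr.set_current_device_resource(rmm.mr.ManagedMemoryResource())

# Illustrative dataset; in the scenario described here it would be larger
# than what fits in GPU memory.
n_rows, n_cols = 4_000_000, 32
X = np.random.random((n_rows, n_cols)).astype(np.float32)
y = np.random.randint(0, 2, size=n_rows).astype(np.int32)

# cuML copies the host arrays into device buffers allocated through RMM,
# which are now backed by managed memory.
clf = RandomForestClassifier(n_estimators=100)
clf.fit(X, y)
```

Because managed memory lets the CUDA driver migrate pages between host and device on demand, training can proceed even when the working set exceeds GPU memory, at the cost of page-fault traffic.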
Additional context
We should certainly study the performance implications of this and document any that we observe.
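One possible way to quantify the managed-memory overhead, sketched as an assumed approach (not an existing cuML benchmark): fit the same model twice on a dataset that still fits on the GPU, once with plain device memory and once with managed memory, so any slowdown can be attributed to the allocator rather than to paging.

```python
import time
import numpy as np
import rmm
from cuml.ensemble import RandomForestClassifier

def time_fit(mr, X, y):
    # Route cuML's device allocations through the given memory resource.
    rmm.mr.set_current_device_resource(mr)
    clf = RandomForestClassifier(n_estimators=50)
    start = time.perf_counter()
    clf.fit(X, y)
    return time.perf_counter() - start

# Illustrative sizes; deliberately small enough to fit in device memory.
X = np.random.random((1_000_000, 32)).astype(np.float32)
y = np.random.randint(0, 2, size=1_000_000).astype(np.int32)

t_device = time_fit(rmm.mr.CudaMemoryResource(), X, y)
t_managed = time_fit(rmm.mr.ManagedMemoryResource(), X, y)
print(f"device: {t_device:.2f}s, managed: {t_managed:.2f}s")
```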