
Multiple CPU interop fixes for serialization and cloning #6223

Status: Open · wants to merge 18 commits into branch-25.04
Conversation

@dantegd (Member) commented Jan 14, 2025

No description provided.

copy-pr-bot (bot) commented Jan 14, 2025

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

github-actions (bot) added the Cython / Python (Cython or Python issue) label on Jan 14, 2025
@dantegd added the bug (Something isn't working) and non-breaking (Non-breaking change) labels on Feb 7, 2025
@dantegd (Member, Author) commented Feb 7, 2025

/ok to test

Comment on lines 281 to 282:

```python
def __sklearn_clone__(self):
    return _clone_parametrized(self)
```
@betatim (Member) commented Feb 7, 2025

This fixes the problem of the cloned estimator not being a wrapped estimator. But it doesn't solve the problem that (some of) the hyperparameters get changed in the process. The other thing I don't like about this is that I don't understand why it solves the problem. `clone` should do exactly this already, at least that is what it looks like to me in https://github.com/scikit-learn/scikit-learn/blob/a4225f305a88eea7bababbfa2ff479a118406c93/sklearn/base.py#L93-L95
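For reference, the dispatch in `sklearn.base.clone` at the linked lines looks roughly like this (a paraphrase; the exact code may differ between scikit-learn versions):

```python
# Paraphrase of sklearn.base.clone's dispatch (sklearn >= 1.3);
# see the linked source for the authoritative version.
import inspect
from sklearn.base import _clone_parametrized

def clone(estimator, *, safe=True):
    # If the estimator defines __sklearn_clone__ (and is an instance,
    # not a class), clone() delegates to it...
    if hasattr(estimator, "__sklearn_clone__") and not inspect.isclass(estimator):
        return estimator.__sklearn_clone__()
    # ...otherwise it falls back to the default parametrized clone.
    return _clone_parametrized(estimator, safe=safe)
```

So calling `_clone_parametrized(self)` by hand should be equivalent to what `clone` already does for an estimator without a custom `__sklearn_clone__`.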

Hyper-parameter problem:

```python
from sklearn.decomposition import PCA
from sklearn import clone

pca = PCA(n_components=42, svd_solver="arpack")
pca2 = clone(pca)
print(pca2)
# -> wrapped <class 'sklearn.decomposition._pca.PCA'>
pca2.get_params()
# -> {'handle': <pylibraft.common.handle.Handle object at 0x7fa50b460c90>, 'verbose': 4, 'output_type': 'numpy', 'copy': True, 'iterated_power': 15, 'n_components': 42, 'svd_solver': 'full', 'tol': 1e-07, 'whiten': False, 'random_state': None}
```

The solver is set to "full" :-/ But this is already a problem before you clone the estimator (get_params returns the wrong thing even before cloning).

So I think we need to understand this a bit better. If get_params works correctly and clone gets passed the right thing (with the correct __class__), it should "just work" (tm).

Member:

One improvement would be to not delegate __sklearn_clone__ to the original estimator in

```python
if GlobalSettings().accelerator_active or self._experimental_dispatching:
```

Then we get the behaviour this PR adds "for free", without having to rely on private functions from scikit-learn.
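Concretely, the idea might look something like the sketch below. The class and method names are illustrative, not cuML's actual dispatch code (only the `_cpu_model` attribute name appears elsewhere in this thread):

```python
# Hypothetical sketch of "don't delegate __sklearn_clone__": the proxy
# forwards attribute lookups to the wrapped CPU estimator, but refuses
# to forward __sklearn_clone__, so sklearn.base.clone falls back to its
# default get_params -> type(self)(**params) path on the proxy itself.
class ProxyDispatchMixin:
    _never_delegate = frozenset({"__sklearn_clone__"})

    def _delegate(self, name):
        if name in self._never_delegate:
            # Pretend the attribute doesn't exist, so clone() uses its
            # default parametrized path instead of the CPU estimator's.
            raise AttributeError(name)
        return getattr(self._cpu_model, name)
```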

Member:

And probably the reason get_params returns the wrong things is that we use the one on the cuml estimator instead of the one on the scikit-learn estimator.

@dantegd (Member, Author):

The hyperparameter change you mention is not an error; it is the intentional change we make to translate the call from CPU to GPU (in the example above, svd_solver="arpack" is translated to "full").

The hyperparameters are now correct, and get_params is indeed returning the params of the proxy, which has the correct n_components=42.

The real question is the one you pose: what do we need get_params to return, and why? But the cloning is now working correctly; get_params is a separate question.
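For illustration, the CPU-to-GPU translation is conceptually a per-estimator mapping of unsupported values to supported ones. The table and helper below are hypothetical, not cuML's code; the only mapping evidenced in this thread is arpack -> "full" for PCA:

```python
# Hypothetical sketch of hyperparameter translation. The arpack -> "full"
# entry matches the behaviour shown earlier in this thread; any other
# entries would be estimator-specific.
_PCA_TRANSLATIONS = {
    "svd_solver": {"arpack": "full"},
}

def translate_params(params, table=_PCA_TRANSLATIONS):
    """Return a copy of params with GPU-unsupported values remapped."""
    out = dict(params)
    for name, mapping in table.items():
        if name in out and out[name] in mapping:
            out[name] = mapping[out[name]]
    return out
```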

@betatim (Member) commented Feb 13, 2025

I think get_params should return the variables and values that you'd expect as a scikit-learn user. This means the return value should be the same whether the accelerator is turned on or off.

My thinking is (see the sketch after this list):

  • the scikit-learn API is the boundary layer: below it we can do what we want to make the accelerator work, above it everything should look "just like scikit-learn"
  • if we return something different, we need to explain to users that we do this, why, etc.
  • the more we make different, even with good reason, the higher the chance that something somewhere that relies on this breaks
  • it will make the translator simpler to reason about, because its input will only ever be user-provided values (not sometimes user values and sometimes values that have already been translated)
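A minimal sketch of that contract, with hypothetical names: the proxy keeps the user-provided values verbatim and translates only internally, e.g. at fit time:

```python
# Hypothetical sketch, not cuML's implementation: user-facing params are
# stored untranslated; translation happens only inside fit().
class ProxyEstimator:
    def __init__(self, **kwargs):
        self._user_params = dict(kwargs)  # exactly what the user passed

    def get_params(self, deep=True):
        # Same values whether the accelerator is on or off.
        return dict(self._user_params)

    def set_params(self, **params):
        self._user_params.update(params)
        return self

    def fit(self, X, y=None):
        # Translate only here, so the translator's input is always
        # user-provided values (translate_params is the hypothetical
        # helper from the sketch above).
        gpu_params = translate_params(self._user_params)
        ...
```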

@dantegd (Member, Author):

Made the changes; get_params and set_params now behave as you describe. I also removed the usage of __sklearn_clone__, per the suggestion.

I still need to check the pytests before we merge, but another quick re-review would be useful, @betatim, to see if you find any other fundamental issues with the approach.

@dantegd (Member, Author):

Note that it's kinda late and I pushed some changes; hope I didn't break anything in the last couple of commits.

Member:

Looks good to me. Pushed a commit with two tests (credit for some of them to my friend Cursor).

@dantegd dantegd changed the base branch from branch-25.02 to branch-25.04 February 14, 2025 04:50
@viclafargue (Contributor) left a comment:

LGTM

@betatim (Member) commented Feb 14, 2025

/ok to test

@betatim betatim marked this pull request as ready for review February 14, 2025 13:41
@betatim betatim requested a review from a team as a code owner February 14, 2025 13:41
@betatim betatim requested review from teju85 and betatim February 14, 2025 13:41
Co-authored-by: Simon Adorf <sadorf@nvidia.com>
```diff
@@ -186,6 +239,7 @@ def test_defaults_args_only_methods():


 def test_kernel_ridge():
+    import cupy as cp
```
Member:

Why move this here? Maybe we should leave a comment for people from the future explaining why it can't be imported at the top of the file (or move it back if this was just for debugging).
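If the motivation is that importing cupy at module import time can fail on machines without a GPU, one common pattern (an assumption about the motivation, not necessarily what this PR does) is to let pytest skip cleanly:

```python
import pytest

def test_kernel_ridge():
    # Defer the cupy import into the test body and skip the test if
    # cupy is not importable, instead of failing at collection time.
    cp = pytest.importorskip("cupy")
    ...
```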

```python
km = cluster.KMeans(n_clusters=13)
ckm = cuml.KMeans.from_sklearn(km)

assert ckm.n_clusters == 13
```
Member:

Lucky number 13 :D

Member:

I think we should merge this PR. It improves and fixes several things.

We can keep improving the from_/as_sklearn round tripping. I think the test from https://github.com/rapidsai/cuml/pull/6342/files#r1963552769 still doesn't pass (even if you exclude the raft handle), but let's look at that in a new PR.
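For context, the round tripping in question has this shape (a sketch based on the from_sklearn/as_sklearn names used in this thread, not the exact test from #6342):

```python
from sklearn import cluster
import cuml

# Round-trip: sklearn estimator -> cuML -> back to sklearn. The
# expectation is that the user-visible hyperparameters survive.
km = cluster.KMeans(n_clusters=13)
ckm = cuml.KMeans.from_sklearn(km)
km2 = ckm.as_sklearn()

assert km2.get_params() == km.get_params()
```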

@betatim (Member) commented Feb 21, 2025

The two non-optional jobs both fail with this:

```
=========================== short test summary info ============================
FAILED test_basic_estimators.py::test_kernel_ridge - AssertionError: y_pred should be a np.ndarray, but is a <class 'cupy.ndarray'>
assert not True
 +  where True = isinstance(array([ 0.95477083, -0.99923366, -0.49547406, ..., -0.99986215,\n        0.91445195,  0.88462877]), <class 'cupy.ndarray'>)
 +    where <class 'cupy.ndarray'> = <module 'cupy' from '/opt/conda/envs/test/lib/python3.10/site-packages/cupy/__init__.py'>.ndarray
= 1 failed, 529 passed, 5 xfailed, 6 xpassed, 119 warnings in 96.05s (0:01:36) =
```

I added this test in #6327, where it passed. But I am wondering whether this test should always pass, or only if the accelerator is enabled (i.e. it should be skipped when the accelerator is disabled)? When I wrote the test I meant "when the accelerator is active, it should always output a NumPy array". I didn't think about the "accelerator off" case, so I'm happy to just skip this test in that case. I guess it depends on what the default output type is ("mirror"?) when the accelerator is off.
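If we go with skipping, it could look like the sketch below. Only `GlobalSettings().accelerator_active` is referenced earlier in this thread; the import path is an assumption and may differ in cuML:

```python
import pytest

def test_kernel_ridge():
    # Skip the numpy-output assertion when the accelerator is off; the
    # import path below is a guess at where GlobalSettings lives.
    from cuml.internals.global_settings import GlobalSettings
    if not GlobalSettings().accelerator_active:
        pytest.skip("accelerator disabled; output type is not forced to numpy")
    ...
```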

The failures in the optional jobs look more serious. They are of this type:

```
FAILED test_random_forest.py::test_create_classification_model[8-10-10-1.0] - AttributeError: _cpu_model
```

This happens for several estimators :-/

There is also a CUDA error, but maybe that is a spurious one?

@dantegd (Member, Author) commented Feb 21, 2025

@betatim it was late last night; the error in the optional jobs is a small one, pushing a fix.

Labels

bug (Something isn't working) · cuml-cpu · Cython / Python (Cython or Python issue) · non-breaking (Non-breaking change)
Projects
None yet
4 participants