ENH: fallback to stock scikit-learn after `onedal` fail #1907
Conversation
@samir-nasibli do you need clarification on the background of this oddity in the codebase before we close this PR? It comes from these steps in the DAAL codebase: https://github.com/oneapi-src/oneDAL/blob/main/cpp/daal/src/algorithms/service_kernel_math.h#L721
@icfaust thank you! I am definitely not going to close this PR; some cases still need to be addressed.
I'll wait to review until you mark it ready; I'm curious to see whether you plan to refactor the fallback mechanism or fix a bug in the PLU factorization on the oneDAL side first.
Please read the short description of the PR more carefully. When the proposal is expanded and ready for discussion/review, the PR will be marked ready for review.
/intelci: run
Linear regression fails all tests on both GPU and CPU, which requires additional investigation. I will separate the deprecation and the changes for LogReg into a separate PR, since that one eventually passed all tests. See #1996
Statistics on usage of the onedal4py backend, and of sklearn after the onedal4py backend, should be investigated. The fallback mechanism itself for sklearn_after_onedal will be updated via config_context. Follow-up tickets will be created.
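To illustrate the config_context mechanism mentioned above, here is a minimal, self-contained sketch of how a scikit-learn-style configuration context can temporarily override a setting and restore it on exit. This is not the sklearnex implementation; the `allow_sklearn_after_onedal` key simply mirrors the flag proposed in this PR.

```python
# Minimal sketch of a scikit-learn-style config context (illustrative only).
import threading
from contextlib import contextmanager

# Hypothetical default: fallback to stock scikit-learn is allowed.
_default_config = {"allow_sklearn_after_onedal": True}
_local = threading.local()


def _get_store():
    # Thread-local storage so concurrent threads see independent configs.
    if not hasattr(_local, "config"):
        _local.config = _default_config.copy()
    return _local.config


def get_config():
    # Return a copy so callers cannot mutate the live config.
    return _get_store().copy()


def set_config(**kwargs):
    # Only override keys explicitly passed as non-None.
    _get_store().update({k: v for k, v in kwargs.items() if v is not None})


@contextmanager
def config_context(**kwargs):
    # Apply overrides for the duration of the block, then restore.
    old = get_config()
    set_config(**kwargs)
    try:
        yield
    finally:
        _get_store().clear()
        _get_store().update(old)
```

With this pattern, `config_context(allow_sklearn_after_onedal=False)` disables the flag only inside the `with` block, which is the behavior the PR's `test_config_context_works` suite would exercise.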
/intelci: run |
…mir-nasibli/scikit-learn-intelex into dep/skl_fallback_after_onedal
/intelci: run |
This is impacted by #2083, which would mean only the LinearRegression GPU and LogisticRegression GPU estimators are influenced. Is it worth having a new config keyword for these specific cases? @Alexsandruss should weigh in. I'm still in favor of closing this PR. I would say the removal can occur naturally when these issues are addressed in the individual estimators.
Forgot to mention @avolkov-intel for comment on this as well.
This PR doesn't change anything by default. Only in testing are we removing these.
fixed set_config for the config default values
/intelci: run
def test_set_config_works():
    """Test validates that the config settings were applied correctly by
    set_config.
    """
These tests will require generalization and parameterization in the future.
Thank you @Alexsandruss. Please suggest inputs; I will cover them in a ticket.
…ion#1907) * ENH: fallback to stock `scikit-learn` after `onedal` fail * fix `allow_sklearn_after_onedal` setting * covered config_context with more tests * fixed set_config for the config default values
Description
In several estimators there is a second-level fallback flow that is neither obvious to nor controlled by the user. This PR proposes adding control over the fallback to stock scikit-learn after the `onedal` backend, for the case of a runtime error in `onedal` backend computations. `config_context` is extended with an `allow_sklearn_after_onedal` parameter that controls this behavior. By default, falling back to stock scikit-learn after a runtime error in the `onedal` backend is allowed. In `sklearnex` testing and sklearn conformance testing this fallback is blocked, so that only the onedal backend is checked.
Done: covered by the `test_config_context_works` and `test_set_config_works` test suites.
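The second-level fallback described above can be sketched in a few lines. This is an illustrative stand-in, not the actual sklearnex dispatch code; the function and parameter names (`dispatch`, `allow_sklearn_after_onedal`) are hypothetical, with the flag name taken from this PR.

```python
# Illustrative sketch: try the accelerated onedal backend first; on a
# RuntimeError, fall back to stock scikit-learn only when allowed.
def dispatch(onedal_impl, sklearn_impl, allow_sklearn_after_onedal=True):
    try:
        return onedal_impl()
    except RuntimeError:
        if allow_sklearn_after_onedal:
            # Second-level fallback: silently retry with stock sklearn.
            return sklearn_impl()
        # Fallback blocked (e.g. during sklearnex testing): surface the error.
        raise
```

Blocking the fallback in testing, as the PR does, makes backend failures visible instead of being silently masked by a correct-but-slow sklearn result.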