Let the DispReconstructor also compute a score for the sign prediction #2479
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@           Coverage Diff            @@
##             main    #2479   +/-   ##
=========================================
  Coverage   92.44%   92.44%
=========================================
  Files         234      234
  Lines       19900    19917    +17
=========================================
+ Hits        18396    18413    +17
  Misses       1504     1504

☔ View full report in Codecov by Sentry.
Changelog missing
@@ -681,12 +682,17 @@ def _predict(self, key, table):
        else:
            prediction[valid] = valid_norms

        prediction[valid] *= self._models[key][1].predict(X)
        sign_proba = self._models[key][1].predict_proba(X)[:, 0]
        # proba is [0 and 1] where 0 => very certain -1, 1 => very certain 1
I don't understand this explanation. Is it supposed to say that sign_proba goes from -1 to 1?
The classes predicted by sign are "1" and "-1", and the [:, 0] component of the output of predict_proba is the likelihood score for the "1" class. Therefore a score of 0 corresponds to a high likelihood of class "-1".
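For reference, here is a minimal, self-contained sketch (toy data and model, not the ctapipe implementation) of how the predict_proba column for a given class label can be looked up via classes_ in scikit-learn, which makes the meaning of a 0-to-1 sign score explicit:

```python
# Hedged sketch: a toy -1/+1 sign classifier, independent of ctapipe.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = np.where(X[:, 0] > 0, 1, -1)  # hypothetical sign labels

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Columns of predict_proba follow clf.classes_ (the sorted unique labels),
# so looking the class index up explicitly avoids ambiguity about [:, 0].
print(clf.classes_)  # [-1  1]
col_plus = np.flatnonzero(clf.classes_ == 1)[0]

# Score in [0, 1]: near 0 -> confident the sign is -1,
#                  near 1 -> confident the sign is +1.
sign_score = clf.predict_proba(X[:5])[:, col_plus]
print(sign_score)
```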
@@ -945,7 +964,7 @@ def _cross_validate_regressor(self, telescope_type, train, test):
         prediction, _ = regressor._predict(telescope_type, test)
         truth = test[regressor.target]
         r2 = r2_score(truth, prediction)
-        return prediction, truth, {"R^2": r2}
+        return {f"{regressor.prefix}_energy": prediction}, truth, {"R^2": r2}
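To illustrate the changed return value, here is a hedged sketch (made-up model and prefix, not ctapipe's CrossValidator): the prediction is keyed by a prefixed output column name instead of being returned as a bare array, so a caller can collect the outputs of several models into one table, while the R² metric dict is unchanged:

```python
# Sketch with hypothetical names; not the ctapipe implementation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X_train, X_test = rng.normal(size=(100, 2)), rng.normal(size=(20, 2))
coef = np.array([2.0, -1.0])
y_train, truth = X_train @ coef, X_test @ coef

model = LinearRegression().fit(X_train, y_train)
prediction = model.predict(X_test)
r2 = r2_score(truth, prediction)

prefix = "ExtraTreesRegressor"  # hypothetical stand-in for regressor.prefix
# Old style returned the bare array:  prediction, truth, {"R^2": r2}
# New style keys it by the output column name:
result = {f"{prefix}_energy": prediction}, truth, {"R^2": r2}
print(result[0].keys(), result[2])
```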
It looks like there isn't any test for CrossValidator? Not really something to resolve in this PR, I guess...
There is no direct test, yes. However, the code is covered by the tests for the train tools, which also inspect the cross-validation output.