Fix scoring kwargs call (#33)
Fix a bug when calling `OutcomeEvaluator` with custom metrics, caused by
misaligned positional arguments: the internal scoring calls now pass
explicit keyword arguments.
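
For illustration, a minimal, self-contained sketch of the bug class being fixed: a positional call silently binds arguments to the wrong parameters when the callee's parameter order differs from what the caller assumes. The signature below is hypothetical, not causallib's actual one; the diff below switches the real calls to keyword arguments for the same reason.

# Hypothetical scorer whose parameter order differs from the caller's
# assumption; not the actual causallib signature.
def score_binary_prediction(y_true, y_pred, metrics_to_evaluate=None, y_pred_proba=None):
    return {"metrics_to_evaluate": metrics_to_evaluate, "y_pred_proba": y_pred_proba}

y_true, prediction = [1, 0], [1, 1]
prediction_prob, metrics = [0.9, 0.6], {"roc_auc": None}

# Positional call: prediction_prob lands in metrics_to_evaluate and
# metrics lands in y_pred_proba -- no error raised, just wrong results.
print(score_binary_prediction(y_true, prediction, prediction_prob, metrics))

# Keyword call binds by name regardless of parameter order, and a typo
# in a name fails loudly with a TypeError instead of misbehaving.
print(score_binary_prediction(
    y_true=y_true,
    y_pred=prediction,
    y_pred_proba=prediction_prob,
    metrics_to_evaluate=metrics,
))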

Fix an issue with Travis failing to install dependencies in a Python 3.7
build (see the background sketch after these links):
* https://app.travis-ci.com/github/IBM/causallib/jobs/565683172
* pytest-dev/pytest#7371 (comment)
* tobinus/python-podgen#124
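
As background: `importlib.metadata` only entered the standard library in Python 3.8, so on 3.7 packaging tools such as pytest fall back to the third-party `importlib_metadata` backport, and a stale backport can break the whole install. The `.travis.yml` change below upgrades it before anything else. A minimal sketch of the version-gated import pattern involved, for illustration only:

import sys

# Python 3.8+ ships importlib.metadata in the stdlib; Python 3.7 needs
# the backport, which is why CI upgrades importlib-metadata up front.
if sys.version_info >= (3, 8):
    import importlib.metadata as metadata
else:
    import importlib_metadata as metadata  # third-party backport

print(metadata.version("pip"))  # e.g. "22.0.4"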


Co-authored-by: ehudkr <ehudkaravani@gmail.com>
yoavkt and ehudkr authored Apr 6, 2022
1 parent cd258bd commit 02f01c8
Showing 2 changed files with 12 additions and 2 deletions.
1 change: 1 addition & 0 deletions .travis.yml
@@ -11,6 +11,7 @@ before_script:
 - ./cc-test-reporter before-build
 install:
 - pip install --upgrade pip
+- pip install --upgrade importlib-metadata # Solve a python 3.7 install bug: https://app.travis-ci.com/github/IBM/causallib/jobs/566048347
 - pip install -r requirements.txt
 - pip install -r causallib/contrib/requirements.txt
 - pip install --upgrade pytest coverage
13 changes: 11 additions & 2 deletions causallib/evaluation/outcome_evaluator.py
@@ -139,9 +139,18 @@ def score_estimation(self, prediction, X, a_true, y_true, metrics_to_evaluate=None
     def _score_single(self, y_true, prediction, prediction_prob, outcome_is_binary, metrics_to_evaluate):
         """Score a single prediction based on whether `y_true` is classification or regression"""
         if outcome_is_binary:
-            score = self.score_binary_prediction(y_true, prediction, prediction_prob, metrics_to_evaluate)
+            score = self.score_binary_prediction(
+                y_true=y_true,
+                y_pred=prediction,
+                y_pred_proba=prediction_prob,
+                metrics_to_evaluate=metrics_to_evaluate
+            )
         else:
-            score = self.score_regression_prediction(y_true, prediction, metrics_to_evaluate)
+            score = self.score_regression_prediction(
+                y_true=y_true,
+                y_pred=prediction,
+                metrics_to_evaluate=metrics_to_evaluate
+            )
         # score = pd.DataFrame(score).T
         # score = score.apply(pd.to_numeric, errors="ignore") # change dtype of each column to numerical if possible.
         return score
