In AUC and AUCPR metrics, detect whether weights are per-instance or per-group #4216
Conversation
@xydrolase Any updates? How can I help?
Force-pushed from 51815a5 to af90762.
@trivialfis I noticed a lack of tests for ranking. And when I tried some weighted data, I got AUCPR > 1.0 :( Will need to look deeper...

It turns out that AUCPR is quite broken for the learning-to-rank task.

@trivialfis @RAMitchell I'm inclined to merge this pull request, and submit another pull request to fix the AUCPR metric. What do you think? See #4431

Don't merge this yet; #4436 should be merged first.
Overview
This is a proposed bugfix based on PR #3379 originally submitted by @ngoyal2707, which intends to change how the instance weight vector is defined.
For a training dataset of K groups and N total training examples/instances, @ngoyal2707's original PR requires a weight vector of length K. That is, each group has its own weight, and that weight is shared by all instances in the same group. While this scheme of specifying group weights works without issues in isolation, it conflicts with the computation of certain ranking metrics (e.g. `auc` and `aucpr`) in https://github.com/dmlc/xgboost/blob/master/src/metric/rank_metric.cc, which expect the weights to be populated for every training example.

Example
To demonstrate the bug, we can use the following example Python code:
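A minimal reproduction can be sketched as follows. The dataset is synthetic and the variable names are illustrative; the `xgb.train` call is shown in comments so the sketch itself stays self-contained:

```python
# Hypothetical reproduction sketch: per-group weights (length K) clash with
# per-instance metric code. Synthetic data; names are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_groups, group_size = 4, 5                # K = 4 groups, N = 20 instances
X = rng.normal(size=(n_groups * group_size, 3))
y = rng.integers(0, 2, size=n_groups * group_size)
group = np.full(n_groups, group_size)      # group sizes, one entry per group
weight = rng.uniform(1, 2, size=n_groups)  # length K: one weight per group

# With the per-group weight vector, training the ranking objective alone works:
#   import xgboost as xgb
#   dtrain = xgb.DMatrix(X, label=y, weight=weight)
#   dtrain.set_group(group)
#   xgb.train({"objective": "rank:pairwise"}, dtrain, num_boost_round=2)
# But adding "eval_metric": "auc" makes the metric code read weight[i] for
# every instance i < N, indexing past the end of the length-K weight vector.
assert len(weight) == n_groups
assert len(y) == n_groups * group_size
```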
The above code will run without issues. But if we add a ranking metric that also uses weights to the `eval_metric` parameter, like:

the `xgb.train` function call will fail with the following exception (most likely because the underlying C++ code accesses out-of-bounds memory):

Proposed fix
To resolve the issue demonstrated above, this PR proposes to change the calculation of the ranking objective in `rank_obj.cc` so that its use of weights is aligned with the implementation in `rank_metric.cc`.

Note that the alternative is to change how the weights are accessed in `rank_metric.cc`, so that when training a ranking model, the calculation of the AUC and AUC-PR metrics assumes that weights are provided on a per-group basis. However, a per-group weighting scheme reduces the overall flexibility of specifying weights.

ToDo
Update the documentation in https://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.DMatrix.set_weight so that it reflects the changes.
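For reference, the per-instance weight layout that `rank_metric.cc` expects, and that this PR aligns the objective with, can be sketched as follows. Variable names here are illustrative, not part of the xgboost API:

```python
# Sketch of per-instance weights (length N) derived from per-group
# weights (length K), under the assumption that every instance inherits
# its group's weight. Names are illustrative.
import numpy as np

group = np.array([5, 3, 4])               # sizes of K = 3 groups; N = 12
group_weight = np.array([1.0, 2.0, 0.5])  # one weight per group

# Repeat each group's weight once per instance in that group, yielding
# one weight entry per training example, as the metric code expects.
instance_weight = np.repeat(group_weight, group)

assert len(instance_weight) == group.sum()
```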