[R-package] added argument eval_train_metric to lgb.cv() (fixes #4911) #4918
Conversation
Thanks for picking this up!
Please do add a unit test covering this behavior. Let me know if you need help with it.
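Something along these lines could work as a starting point. This is just a sketch using the agaricus data bundled with the package; the test name and exact expectations are placeholders, not taken from this PR:

library(lightgbm)
library(testthat)

test_that("lgb.cv() records training metrics when eval_train_metric = TRUE", {
    data(agaricus.train, package = "lightgbm")
    dtrain <- lgb.Dataset(agaricus.train$data, label = agaricus.train$label)
    cvm <- lgb.cv(
        params = list(objective = "binary", metric = "binary_logloss")
        , data = dtrain
        , nrounds = 5L
        , nfold = 3L
        , verbose = -1L
        , eval_train_metric = TRUE
    )
    # with the new argument, per-iteration training results should appear
    # in record_evals alongside the validation results
    expect_true("train" %in% names(cvm$record_evals))
    expect_true("valid" %in% names(cvm$record_evals))
})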
Please re-generate the documentation.
sh build-cran-package.sh --no-build-vignettes
R CMD INSTALL --with-keep.source ./lightgbm_3.3.1.99.tar.gz
cd R-package
Rscript -e "roxygen2::roxygenize(load = 'installed')"
...to fix issues like the following in CI (e.g. in this build):
Codoc mismatches from documentation object 'lgb.cv':
lgb.cv
Code: function(params = list(), data, nrounds = 100L, nfold = 3L,
label = NULL, weight = NULL, obj = NULL, eval = NULL,
verbose = 1L, record = TRUE, eval_freq = 1L, showsd =
TRUE, eval_train_metric = FALSE, stratified = TRUE,
folds = NULL, init_model = NULL, colnames = NULL,
categorical_feature = NULL, early_stopping_rounds =
NULL, callbacks = list(), reset_data = FALSE,
serializable = TRUE)
Docs: function(params = list(), data, nrounds = 100L, nfold = 3L,
label = NULL, weight = NULL, obj = NULL, eval = NULL,
verbose = 1L, record = TRUE, eval_freq = 1L, showsd =
TRUE, stratified = TRUE, folds = NULL, init_model =
NULL, colnames = NULL, categorical_feature = NULL,
early_stopping_rounds = NULL, callbacks = list(),
reset_data = FALSE, serializable = TRUE)
Argument names in code not in docs:
eval_train_metric
Mismatches in argument names (first 3):
Position: 13 Code: eval_train_metric Docs: stratified
Position: 14 Code: stratified Docs: folds
Position: 15 Code: folds Docs: init_model
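The mismatch above arises because the new argument exists in the function signature but not in the generated .Rd file. Once a @param tag for it is present in the roxygen block of lgb.cv(), re-running the commands above regenerates matching documentation. The wording below is only a hypothetical sketch, not the text used in this PR:

#' @param eval_train_metric \code{boolean}, whether to also record and report the
#'   evaluation metric on the training data of each fold, in addition to the
#'   validation data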
thanks very much! The new test looks great! Just one more small suggestion.
Co-authored-by: James Lamb <jaylamb20@gmail.com>
looks great, thanks!
Thanks as always for the help @mayer79 !
Thanks go to you, sir :-)
Attempt to solve #4911
If we set eval_train_metric = TRUE in lgb.cv(), then we get output like

[1] "[50]: train's binary_logloss:0.248276+0.00109575 valid's binary_logloss:0.248655+0.00448549"

The resulting object cvm contains the information in the record_evals slot, and the average training performance of the best round can be found with something like

cvm$record_evals$train$binary_logloss$eval[[cvm$best_iter]]

There is no unit test yet, but that does not mean we shouldn't write one.
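For illustration, a self-contained sketch of the behavior described above, using the agaricus data bundled with the package; the parameter values are arbitrary choices, not taken from this PR:

library(lightgbm)

data(agaricus.train, package = "lightgbm")
dtrain <- lgb.Dataset(agaricus.train$data, label = agaricus.train$label)

cvm <- lgb.cv(
    params = list(objective = "binary", metric = "binary_logloss")
    , data = dtrain
    , nrounds = 50L
    , nfold = 3L
    , eval_train_metric = TRUE
)

# average training loss across folds at the best iteration, as described above
cvm$record_evals$train$binary_logloss$eval[[cvm$best_iter]]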