Compute metrics not on each iteration but with some fixed step #4107
Comments
@normanwang92 Hi! No, there is no such possibility in LightGBM. Custom objectives and metrics can be used only in a specific language wrapper. How do you think a custom function for the CLI should look?
What I really want is to apply metric_freq/eval_freq in lgb.train. What I found is that evaluating custom metrics at every single iteration slows down training quite a lot, and it seems to drag down the GPU utilization rate. I tried calling lgb.predict every x steps, but it is still quite inefficient even when you build the prediction incrementally; in some cases the incremental prediction takes longer than a grid search. It would be great to have an efficient way of computing custom evaluation metrics every x steps.
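[Editor's note: for concreteness, here is a minimal sketch of the incremental-prediction approach described above. It assumes LightGBM >= 3.1, where `Booster.predict` accepts `start_iteration`, and a regression objective whose raw scores add up across tree chunks; `X_train`, `y_train`, `X_valid`, `y_valid`, `my_metric`, and `period` are placeholder names, not part of the original comment.]

```python
import numpy as np
import lightgbm as lgb

def my_metric(y_true, y_pred):
    # stand-in for an expensive custom metric
    return float(np.mean((y_true - y_pred) ** 2))

period = 10
booster = lgb.Booster(
    params={"objective": "regression", "verbose": -1},
    train_set=lgb.Dataset(X_train, label=y_train),
)
raw = np.zeros(len(X_valid))
for i in range(100):
    booster.update()  # one boosting iteration
    if (i + 1) % period == 0:
        # predict only the `period` trees added since the last checkpoint
        raw += booster.predict(
            X_valid,
            start_iteration=i + 1 - period,
            num_iteration=period,
            raw_score=True,
        )
        print(f"iter {i + 1}: {my_metric(y_valid, raw):.6f}")
```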
Sure, but how would you like to define the custom metric? In the Python package, for example, you can pass in a Python function. How would you like to be able to define the custom metric for use with the CLI?
I'm not sure, really. In my case, all I'd like to have is an efficient way of evaluating custom metrics every x steps. If metric_freq can be made available in the Python API, then that solves my problem! The reason I asked this question is that metric_freq is CLI-only and I wasn't sure whether the CLI accepts a custom feval. I haven't really thought about how I'd define a custom metric for the CLI, sorry.
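[Editor's note: one user-side workaround that doesn't require modifying LightGBM itself is to wrap the custom feval so that its expensive body only runs every `period` calls, returning a cached result in between. This is a sketch, not an official API; `make_periodic_feval` is a hypothetical helper name.]

```python
def make_periodic_feval(feval, period):
    """Run the expensive custom metric only every `period` iterations.

    Between checkpoints the last computed result is returned, so callers
    that expect the usual (name, value, is_higher_better) tuple still work.
    LightGBM still calls the wrapper at every iteration; only the metric
    body is skipped. Assumes a single validation set, since the call
    counter would otherwise advance once per valid set per iteration.
    """
    state = {"calls": 0, "last": ("custom_metric", float("nan"), False)}

    def wrapped(preds, eval_data):
        state["calls"] += 1
        if state["calls"] % period == 0:
            state["last"] = feval(preds, eval_data)
        return state["last"]

    return wrapped
```

[It could then be passed as `feval=make_periodic_feval(my_metric, 10)` to `lgb.train`. One caveat: callbacks such as early stopping would see the stale cached value between checkpoints.]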
Here is the corresponding code for the training routine: LightGBM/python-package/lightgbm/engine.py, lines 239 to 256 at d6ebd06.
As a quick workaround, I think you can add a condition like `i % period == 0` to the following if statement: LightGBM/python-package/lightgbm/engine.py, line 253 at d6ebd06.
@normanwang92 Does the new issue heading reflect your real needs correctly?
It does, thank you! I'll def try it out!
Here's my attempt to modify lgb.train (by passing an extra evaluation-frequency argument): `def train(params, train_set, num_boost_round=100, ...` [rest of snippet truncated]
@normanwang92 In the above code, the iteration index `i` is zero-based, so the check should be `(i + 1) % period == 0` to evaluate every `period` iterations.
Closed in favor of #2302; we decided to keep all feature requests in one place. You are welcome to contribute this feature! Please re-open this issue (or post a comment if you are not the topic starter) if you are actively working on implementing it.
@jameslamb, @StrikerRUS I'm open to developing this. Just to double-check, are these the high-level changes needed?
if "period" in params:
callbacks_set.add(
callback.log_evaluation(
...
)
)
# check evaluation result.
if valid_sets is not None and (i + 1) % period == 0:
if is_valid_contain_train:
evaluation_result_list.extend(booster.eval_train(feval))
evaluation_result_list.extend(booster.eval_valid(feval)) |
@TremaMiguel I think we can reuse the `period` argument of the `log_evaluation` callback for this.
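[Editor's note: for context, a hedged usage sketch of that callback, assuming LightGBM >= 3.3, where `log_evaluation` is exposed at the top level, and assuming `params`, `train_set`, `valid_set`, and `my_metric` are already defined. Note that `period` currently only throttles how often results are printed; the metrics are still computed at every iteration, which is exactly the cost this issue asks to avoid.]

```python
import lightgbm as lgb

booster = lgb.train(
    params,
    train_set,
    num_boost_round=100,
    valid_sets=[valid_set],
    feval=my_metric,  # custom metric, still evaluated every iteration
    callbacks=[lgb.log_evaluation(period=10)],  # printed every 10 iterations
)
```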
OK, this issue is marked as uncompleted in #2302. Should it be marked as completed or removed from the list? Is there any open issue you'd suggest I work on?
@TremaMiguel I mean, this issue is still relevant and some users may benefit from it being implemented. I just said that we can take the existing `period` argument of the `log_evaluation` callback as a starting point.