Add Dart booster. #1220
Conversation
@@ -313,8 +352,9 @@ class GBTree : public GradientBooster {
    }
  }
  // commit new trees all at once
  inline void CommitModel(std::vector<std::unique_ptr<RegTree> >&& new_trees,
                          int bst_group) {
  inline virtual void
Remove `inline`; `inline` and `virtual` are conflicting with each other here.
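To illustrate the reviewer's suggestion, here is a minimal sketch of the commit hook declared `virtual` without `inline`. The `GBTree`/`RegTree` types and the `NumTrees` accessor below are simplified stand-ins for this sketch, not xgboost's real classes:

```cpp
// Sketch of the suggested change: CommitModel declared virtual (so a
// DART booster subclass can override it) without the redundant `inline`,
// which is implied for member functions defined in the class body.
#include <cstddef>
#include <iterator>
#include <memory>
#include <vector>

struct RegTree {};  // stand-in for xgboost's regression tree type

class GBTree {
 public:
  virtual ~GBTree() = default;
  // Overridable commit step; a DART booster could rescale tree weights
  // here before committing.
  virtual void CommitModel(std::vector<std::unique_ptr<RegTree>>&& new_trees,
                           int bst_group) {
    (void)bst_group;  // group handling elided in this sketch
    // commit new trees all at once
    trees_.insert(trees_.end(),
                  std::make_move_iterator(new_trees.begin()),
                  std::make_move_iterator(new_trees.end()));
  }
  std::size_t NumTrees() const { return trees_.size(); }

 private:
  std::vector<std::unique_ptr<RegTree>> trees_;
};
```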
@marugari Thanks for the updates. You do not need to open another PR; simply updating the contents of your branch will update this PR. I have made a few more comments. Please also fix the lint error as indicated in https://travis-ci.org/dmlc/xgboost/jobs/132091950 You can reproduce the style check locally by running make lint
Any updates on this?
Force-pushed from da6b5c4 to a99b7e8.
Update.
- type of sampling algorithm.
  - "uniform": dropped trees are selected uniformly.
  - "weighted": dropped trees are selected in proportion to weight.
* normalize_type [default=0]
default= "original"
I fixed it.
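The two sample_type options above can be sketched in a few lines. This is an illustrative toy version of "which trees get dropped each round", not xgboost's actual implementation; the function name and signature are mine:

```cpp
// Toy sketch of DART's two sample_type options: choose which of the
// existing trees to drop before training the next one.
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <random>
#include <string>
#include <vector>

// Returns indices of trees to drop this round.
std::vector<std::size_t> SelectDropped(const std::vector<double>& weights,
                                       double rate_drop,
                                       const std::string& sample_type,
                                       std::mt19937& rng) {
  const std::size_t n = weights.size();
  // drop at least one tree
  const std::size_t k =
      std::max<std::size_t>(1, static_cast<std::size_t>(rate_drop * n));
  std::vector<std::size_t> pool(n);
  std::iota(pool.begin(), pool.end(), 0);
  std::vector<double> w = weights;
  std::vector<std::size_t> dropped;
  for (std::size_t j = 0; j < k; ++j) {
    std::size_t pick;
    if (sample_type == "uniform") {
      // every remaining tree is equally likely to be dropped
      std::uniform_int_distribution<std::size_t> dist(0, pool.size() - 1);
      pick = dist(rng);
    } else {  // "weighted": probability proportional to tree weight
      std::discrete_distribution<std::size_t> dist(w.begin(), w.end());
      pick = dist(rng);
    }
    dropped.push_back(pool[pick]);
    pool.erase(pool.begin() + static_cast<std::ptrdiff_t>(pick));
    w.erase(w.begin() + static_cast<std::ptrdiff_t>(pick));
  }
  return dropped;
}
```

Sampling without replacement is done here by removing the picked index from the pool, so a tree cannot be dropped twice in the same round.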
One last thing: the name "original", and how the learning rate interacts with the normalization.
In the paper, the authors propose a normalize type ("original" in my implementation). I will consider appropriate names.
Does that mean we should automatically choose the current "learning_rate" by default?
If my calculation is correct, we should choose "learning_rate" as the normalize type.
Let us directly remove the normalize type for now, and go with your implementation by default then :)
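For context, a hedged sketch of the normalization arithmetic being discussed (the notation and derivation are mine, not from the thread):

```latex
% Let \eta be the learning rate, K the set of dropped trees with sum
% D = \sum_{i \in K} F_i, and \tilde{F} the new tree fit to the
% residual of the dropped trees, so \tilde{F} \approx D. Adding
% \eta\tilde{F} while also restoring the dropped trees overshoots the
% original contribution by a factor (1 + \eta); scaling both by a
% restores it:
\[
  a \Bigl( \sum_{i \in K} F_i + \eta \tilde{F} \Bigr)
  \approx a \, (1 + \eta) \, D \stackrel{!}{=} D
  \quad \Longrightarrow \quad a = \frac{1}{1 + \eta}.
\]
% The new tree thus enters with weight \eta/(1+\eta) and each dropped
% tree is rescaled by 1/(1+\eta), which is one reading of choosing
% "learning_rate" as the normalize type above.
```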
Please make the update and I will merge the changes in.
Thanks, this is merged!
@marugari Thanks for the great job of bringing in the DART trainer. To make it more well known and widely accessible to users, I would like to invite you to write a markdown guest blogpost introducing what DART is and how you can use it in xgboost, with a few code examples. We can post it on the DMLC website as well as in the XGBoost documents. Please let me know what you think about it, and let me know if you need help in reviewing it. You can first create a PR to https://github.com/dmlc/xgboost/tree/master/doc/tutorials
Thank you for your great support. I'm writing a Japanese blog post; documents will be submitted afterward.
I found my mistakes. I apologize for the trouble.
No problem, please feel free to open a PR to update the code.
@marugari Any updates on the possible guest blogpost?
#1199