# LightGBM and basic features
- Make sure that you put the correct paths to the data in neptune_random_search.yaml:L21. Consider your own ranges of hyperparameters and number of runs (a conceptual sketch of such a random-search loop is shown after this list), then run:

  `neptune run --config neptune_random_search.yaml main.py train_evaluate_predict --pipeline_name lightGBM`
- Make sure that you put the correct paths to the data in neptune.yaml:L21, then run:

  `neptune run --config neptune.yaml main.py train_evaluate_predict --pipeline_name lightGBM`
In both cases the pipeline is called `lightGBM`.
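For intuition, below is a minimal sketch of what a random-search loop over LightGBM hyperparameters does conceptually. It is a hypothetical illustration only: the parameter names, ranges, number of runs, and the AUC-on-a-holdout evaluation are assumptions, not the repository's actual configuration (which lives in neptune_random_search.yaml).

```python
# Hypothetical sketch of random search over LightGBM hyperparameters.
# Parameter ranges, number of runs and the AUC metric are illustrative assumptions.
import random

import lightgbm as lgb
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split


def random_search(X, y, n_runs=10, seed=1234):
    """Sample hyperparameter sets at random and keep the best one by validation AUC."""
    rng = random.Random(seed)
    X_train, X_valid, y_train, y_valid = train_test_split(
        X, y, test_size=0.2, random_state=seed, stratify=y)
    best_auc, best_params = 0.0, None
    for _ in range(n_runs):
        params = {
            "objective": "binary",
            "metric": "auc",
            "verbosity": -1,
            "learning_rate": 10 ** rng.uniform(-3, -1),  # log-uniform on [0.001, 0.1]
            "num_leaves": rng.randint(16, 128),
            "feature_fraction": rng.uniform(0.5, 1.0),
        }
        booster = lgb.train(params, lgb.Dataset(X_train, label=y_train),
                            num_boost_round=200)
        auc = roc_auc_score(y_valid, booster.predict(X_valid))
        if auc > best_auc:
            best_auc, best_params = auc, params
    return best_params, best_auc
```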
This first solution uses some of the features from `application_{train, test}.csv`, provided explicitly in `pipeline_config.py` as `CATEGORICAL_COLUMNS` and `NUMERICAL_COLUMNS`.
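To make this concrete, here is a hypothetical sketch of what such explicit column lists and a simple feature-selection step could look like. The column names are examples from the Home Credit application table; the actual lists in `pipeline_config.py` are longer and may differ.

```python
# Hypothetical illustration of feature lists like those in pipeline_config.py.
# The column names below are examples from application_{train, test}.csv;
# the actual CATEGORICAL_COLUMNS and NUMERICAL_COLUMNS in the repository differ.
import pandas as pd

CATEGORICAL_COLUMNS = ["CODE_GENDER", "NAME_CONTRACT_TYPE", "NAME_EDUCATION_TYPE"]
NUMERICAL_COLUMNS = ["AMT_CREDIT", "AMT_INCOME_TOTAL", "DAYS_BIRTH", "EXT_SOURCE_1"]


def select_basic_features(application: pd.DataFrame) -> pd.DataFrame:
    """Keep only the explicitly listed columns and encode categoricals as integer codes."""
    features = application[CATEGORICAL_COLUMNS + NUMERICAL_COLUMNS].copy()
    for col in CATEGORICAL_COLUMNS:
        features[col] = features[col].astype("category").cat.codes
    return features
```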
The model is LightGBM, defined in models.py:L7. It accepts parameters from neptune.yaml:L39 in the case of a single run, or from neptune_random_search.yaml:L39 in the case of random search.
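As a rough sketch, a parameter-driven LightGBM wrapper could look like the class below. This is an assumption-laden illustration, not the actual class in `models.py`; the constructor keywords stand in for whatever hyperparameters the YAML config supplies.

```python
# Minimal sketch of a LightGBM wrapper configured by a params dict.
# The real implementation in models.py differs; values here are placeholders.
import lightgbm as lgb


class LightGBMModel:
    """Thin wrapper around a LightGBM booster, configured by hyperparameters."""

    def __init__(self, **params):
        # In the pipeline these values would come from neptune.yaml (single run)
        # or be sampled from the ranges in neptune_random_search.yaml (random search).
        self.params = params
        self.booster = None

    def fit(self, X, y, num_boost_round=500):
        self.booster = lgb.train(self.params, lgb.Dataset(X, label=y),
                                 num_boost_round=num_boost_round)
        return self

    def predict(self, X):
        return self.booster.predict(X)
```

A single run would then construct it with the values read from the config, e.g. `LightGBMModel(objective="binary", metric="auc", learning_rate=0.05, num_leaves=32)` (placeholder values).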
We do simple prediction clipping, defined in postprocessing.py:L6.
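Prediction clipping just keeps the predicted probabilities away from the extremes; a minimal sketch is below, with placeholder bounds (the bounds actually used in `postprocessing.py` may differ).

```python
# Minimal sketch of prediction clipping; the 0.01 / 0.99 bounds are placeholders,
# not necessarily the values used in postprocessing.py.
import numpy as np


def clip_predictions(predictions, lower=0.01, upper=0.99):
    """Clip predicted probabilities away from 0 and 1."""
    return np.clip(predictions, lower, upper)
```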
Check our GitHub organization https://github.com/neptune-ml for more cool stuff 😃
Kamil & Kuba, core contributors
Solutions in this series:

- chestnut 🌰: LightGBM and basic features
- seedling 🌱: Sklearn and XGBoost algorithms and groupby features
- blossom 🌼: LightGBM on selected features
- tulip 🌷: LightGBM with smarter features
- sunflower 🌻: LightGBM clean dynamic features
- four leaf clover 🍀: Stacking by feature diversity and model diversity