Stacking by feature diversity and model diversity
We added model and feature diversity and used stacking to combine the results.
We refactored the feature engineering so that it extracts all the features from train/valid/test in one go and then splits them back by index (idx); see the sketch below.
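A minimal sketch of that pattern, assuming pandas DataFrames; the ratio feature and toy frames below are illustrative stand-ins for the real pipeline (AMT_CREDIT and AMT_INCOME_TOTAL are genuine Home Credit application columns, everything else is hypothetical):

```python
import numpy as np
import pandas as pd

def extract_features(df: pd.DataFrame) -> pd.DataFrame:
    # Hypothetical stand-in for the real feature-engineering pipeline:
    # a single ratio feature built from two application columns.
    out = df.copy()
    out["credit_to_income"] = out["AMT_CREDIT"] / out["AMT_INCOME_TOTAL"]
    return out

# Toy frames standing in for the real train/valid/test application tables.
rng = np.random.default_rng(0)
def toy_split(n: int) -> pd.DataFrame:
    return pd.DataFrame({
        "AMT_CREDIT": rng.uniform(1e5, 1e6, n),
        "AMT_INCOME_TOTAL": rng.uniform(2e4, 2e5, n),
    })

train, valid, test = toy_split(100), toy_split(20), toy_split(30)

# Extract features for all splits in one go...
full = pd.concat([train, valid, test], keys=["train", "valid", "test"])
features = extract_features(full)

# ...then divide the result back by index.
train_features = features.loc["train"]
valid_features = features.loc["valid"]
test_features = features.loc["test"]
```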
Exploratory data analysis notebooks:
- Application data -> eda-application.ipynb
- Installment Payments data -> eda-installments.ipynb
- POS Cash Balance data -> eda-pos_cash_balance.ipynb
First-level models and their cross-validation scores (see the out-of-fold sketch after this list):
- Logistic Regression: CV 0.749 (neptune experiment)
- Neural Network: CV 0.762 (neptune experiment)
- LightGBM on various feature subsets
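Each of those scores is computed on out-of-fold predictions. Here is a minimal sketch of producing one out-of-fold prediction column with scikit-learn, using synthetic data and logistic regression as a stand-in for any of the base models:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=500, n_features=20, random_state=42)

# Out-of-fold predictions: every row is predicted by a model that never saw
# it during training, so the column can safely feed a second-level stacker.
oof = np.zeros(len(y))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for train_idx, valid_idx in cv.split(X, y):
    model = LogisticRegression(max_iter=1000)
    model.fit(X[train_idx], y[train_idx])
    oof[valid_idx] = model.predict_proba(X[valid_idx])[:, 1]

print("CV AUC:", roc_auc_score(y, oof))
```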
Then we used stacking on all the out-of-fold predictions we had:
- stacking the out-of-fold predictions with LightGBM got CV 0.7972 (neptune experiment)
- out-of-fold predictions + features (specified in neptune_stacking.yaml) got CV 0.7975 (neptune experiment)
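A minimal sketch of that second level, with synthetic stand-ins for the out-of-fold columns and raw features (the real feature list lives in neptune_stacking.yaml; the LightGBM parameters here are illustrative):

```python
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic labels plus hypothetical out-of-fold prediction columns standing
# in for the first-level models (logistic regression, neural network,
# LightGBM on various feature subsets).
rng = np.random.default_rng(42)
n = 1000
y = rng.integers(0, 2, n)
oof_columns = np.column_stack([
    np.clip(0.3 * y + rng.uniform(0.0, 0.7, n), 0, 1) for _ in range(3)
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
stacker = lgb.LGBMClassifier(n_estimators=100, learning_rate=0.05)

# Stacking on the out-of-fold predictions alone.
auc_oof = cross_val_score(stacker, oof_columns, y, cv=cv,
                          scoring="roc_auc").mean()

# Stacking on out-of-fold predictions plus a handful of raw features.
raw_features = np.column_stack([0.5 * y + rng.normal(0, 1, n)
                                for _ in range(5)])
auc_oof_plus = cross_val_score(stacker,
                               np.hstack([oof_columns, raw_features]),
                               y, cv=cv, scoring="roc_auc").mean()

print(f"OOF only:           {auc_oof:.4f}")
print(f"OOF + raw features: {auc_oof_plus:.4f}")
```

The small lift from appending raw features to the prediction columns mirrors the 0.7972 -> 0.7975 gain reported above.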
Since the diagram below is quite wide (it uses multiple input files), here is a link to the larger version.
Check our GitHub organization https://github.com/neptune-ml for more cool stuff.
Kamil & Kuba, core contributors
Our solutions so far:
- chestnut 🌰: LightGBM and basic features
- seedling 🌱: Sklearn and XGBoost algorithms and groupby features
- blossom 🌼: LightGBM on selected features
- tulip 🌷: LightGBM with smarter features
- sunflower 🌻: LightGBM clean dynamic features
- four leaf clover 🍀: Stacking by feature diversity and model diversity