This paper was presented at BigDML, 2021.
The aim of this paper is to design lightweight, faster neural networks. Its objectives are:

- To evaluate a custom CNN against pre-trained architectures
- To evaluate augmentation versus imputation
- To fine-tune custom CNNs manually and with the Keras Tuner
The paper aims to establish the superiority of custom CNNs over pre-trained architectures, and of augmentation techniques over imputation techniques in the presence of missing values.
The dataset used is the Kaggle Facial Keypoints Detection (FKPD) dataset, which contains null values.
*Table: null vs. non-null value counts.*
Two imputation strategies were evaluated (a code sketch follows this list):

- Forward fill imputation
- KNN imputation
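A minimal sketch of the two strategies, assuming the Kaggle training CSV layout (an `Image` column plus keypoint coordinate columns); the neighbour count for KNN is illustrative, not the paper's setting:

```python
import pandas as pd
from sklearn.impute import KNNImputer

train = pd.read_csv("training.csv")          # Kaggle FKPD training file
keypoint_cols = train.columns.drop("Image")  # coordinate columns only

# Forward fill: propagate the last observed value down each column.
ffill_train = train.copy()
ffill_train[keypoint_cols] = ffill_train[keypoint_cols].ffill()

# KNN imputation: estimate each missing coordinate from the k most
# similar rows in keypoint space (k = 5 is illustrative).
knn_train = train.copy()
knn_train[keypoint_cols] = KNNImputer(n_neighbors=5).fit_transform(
    knn_train[keypoint_cols]
)
```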
The following augmentations were used in model building:

*Table: augmentations applied.*
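The paper's exact augmentation set is the one in the table above; the sketch below only illustrates two transforms commonly used for keypoint regression, assuming flat `[x1, y1, x2, y2, ...]` keypoint arrays and pixel values scaled to [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, keypoints):
    # Brightness jitter: pixel values change, keypoints are unaffected.
    image = np.clip(image * rng.uniform(0.8, 1.2), 0.0, 1.0)

    # Horizontal flip: mirror the image and the x-coordinates.
    # (For named left/right keypoints, the corresponding column pairs
    # must also be swapped; omitted here for brevity.)
    if rng.random() < 0.5:
        image = image[:, ::-1]
        keypoints = keypoints.copy()
        keypoints[0::2] = (image.shape[1] - 1) - keypoints[0::2]  # x values
    return image, keypoints
```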
Model building proceeded along two tracks: custom CNNs (manually tuned and Keras tuned) and pre-trained architectures.

*Table: custom tuned and Keras tuned CNNs vs. pre-trained architectures.*
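A sketch of how the Keras tuned CNN can be obtained with Keras Tuner, assuming the 96x96 grayscale FKPD inputs and 30 outputs (15 (x, y) keypoints); the layer stack and search space (filter counts, dense width, learning rate) are illustrative, not the paper's actual configuration:

```python
import keras_tuner as kt
from tensorflow import keras
from tensorflow.keras import layers

def build_model(hp):
    # Small convolutional regressor; 30 outputs = 15 (x, y) keypoints.
    model = keras.Sequential([
        layers.Input(shape=(96, 96, 1)),
        layers.Conv2D(hp.Choice("filters_1", [16, 32, 64]), 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(hp.Choice("filters_2", [32, 64, 128]), 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(hp.Int("dense_units", 64, 512, step=64), activation="relu"),
        layers.Dense(30),
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(hp.Choice("lr", [1e-2, 1e-3, 1e-4])),
        loss="mse",
        metrics=[keras.metrics.RootMeanSquaredError()],
    )
    return model

tuner = kt.RandomSearch(build_model, objective="val_loss", max_trials=10)
# tuner.search(x_train, y_train, validation_split=0.2, epochs=20)
```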
*Table: custom models vs. pre-trained models.*
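The transfer learning side can be sketched as below; the regression head and the fine-tuning depth are assumptions rather than the paper's exact setup, and the grayscale frames must be replicated to three channels for the ImageNet weights:

```python
from tensorflow import keras
from tensorflow.keras import layers

# MobileNetV2 backbone with ImageNet weights; grayscale FKPD images are
# assumed to be stacked to 3 channels before being fed in.
base = keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet"
)
base.trainable = True  # fine-tune the whole backbone

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(30),  # 15 (x, y) keypoint coordinates
])
model.compile(
    optimizer=keras.optimizers.Adam(1e-4),
    loss="mse",
    metrics=[keras.metrics.RootMeanSquaredError()],
)
```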
Custom models trained with augmentation achieve lower RMSE scores than those trained with imputation. Similarly, fine-tuned MobileNetV2 with augmentation attains the lowest RMSE both among the transfer learning models and relative to the custom models, suggesting that augmentation is a more robust technique than imputation alone.
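For reference, the RMSE used in all of these comparisons is computed over every predicted keypoint coordinate:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean squared error over all keypoint coordinates.
    diff = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.sqrt(np.mean(diff ** 2)))
```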
- The manually tuned custom model has lower CPU inference time than the Keras tuned model.
- MobileNetV2 yielded lower RMSE than the manually tuned custom model in spite of its larger parameter count, which can be attributed to the architectural efficiency of the MobileNet family.
*Table: model size and latency.*
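A sketch of how on-disk size and per-image CPU latency can be measured for each Keras model; the function names, input shape, and run count are illustrative:

```python
import os
import time
import numpy as np

def cpu_latency_ms(model, input_shape=(1, 96, 96, 1), n_runs=100):
    # Average single-image prediction time on CPU, in milliseconds.
    x = np.random.rand(*input_shape).astype("float32")
    model.predict(x, verbose=0)  # warm-up to exclude graph build time
    start = time.perf_counter()
    for _ in range(n_runs):
        model.predict(x, verbose=0)
    return 1000 * (time.perf_counter() - start) / n_runs

def model_size_mb(model, path="tmp_model.keras"):
    # Size of the saved model file (Keras v3 format) in megabytes.
    model.save(path)
    return os.path.getsize(path) / 1e6
```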