This repository contains code to visualize the double descent phenomenon and the Neural Tangent Kernel (NTK) linear approximation of deep neural networks on regression tasks. It can be run on your own dataset and with custom network architectures.
Double descent is a phenomenon in which performance first improves, then gets worse, and then improves again as model size, training time, or dataset size increases.
Here is an example of the double descent phenomenon, obtained from [1]:
For a neural network implementing a function f with parameters w, we define the NTK linear approximation [2] at an input x as:
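This is the standard first-order Taylor expansion of the network around its initial parameters $w_0$ [2]:

$$
f_{\mathrm{lin}}(x; w) = f(x; w_0) + \nabla_w f(x; w_0)^\top (w - w_0)
$$

As an illustration, a minimal sketch of how this linearization can be evaluated with PyTorch's `torch.func` API (PyTorch 2.x) is given below; the function and variable names are illustrative and not taken from this repository:

```python
import torch
from torch.func import functional_call, jvp

def ntk_linear_approximation(model, params0, params, x):
    """Evaluate f_lin(x; w) = f(x; w0) + <grad_w f(x; w0), w - w0>."""
    def f(p):
        # Run the model with an explicit dict of parameters p.
        return functional_call(model, p, (x,))

    # Direction in parameter space: (w - w0).
    delta = {k: params[k] - params0[k] for k in params0}

    # jvp returns f(x; w0) together with the Jacobian-vector product
    # grad_w f(x; w0) . (w - w0), computed in a single forward pass.
    f0, directional_derivative = jvp(f, (params0,), (delta,))
    return f0 + directional_derivative
```

Here `params0` and `params` are parameter dictionaries such as `dict(model.named_parameters())`, taken at initialization and at the current training step, respectively.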
| Dataset | Architecture | Optimizer | Epochs | Width interval | Data Link | Results |
|---|---|---|---|---|---|---|
| auto-mpg with 0% and 20% label noise | 2-hidden-layer neural network | Adam with LR=0.001 | 1000 | [1, 1300] | link | link |
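For reference, a minimal sketch of the kind of width sweep summarized in this table (2-hidden-layer ReLU network, Adam with LR=0.001, 1000 epochs, widths in [1, 1300]) could look like the following; it assumes the features and targets are already loaded as float32 NumPy arrays (with targets of shape `(n, 1)`), and all names are illustrative rather than taken from this repository:

```python
import numpy as np
import torch
import torch.nn as nn

def make_mlp(width, in_dim):
    # 2-hidden-layer ReLU network with a scalar output for regression.
    return nn.Sequential(
        nn.Linear(in_dim, width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, 1),
    )

def train_and_eval(width, X_train, y_train, X_test, y_test, epochs=1000, lr=1e-3):
    model = make_mlp(width, X_train.shape[1])
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    Xtr, ytr = torch.from_numpy(X_train), torch.from_numpy(y_train)
    Xte, yte = torch.from_numpy(X_test), torch.from_numpy(y_test)
    for _ in range(epochs):  # full-batch training
        optimizer.zero_grad()
        loss = loss_fn(model(Xtr), ytr)
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        return loss_fn(model(Xtr), ytr).item(), loss_fn(model(Xte), yte).item()

# Sweep hidden widths over [1, 1300] and record train/test MSE;
# plotting test MSE against width can reveal the double descent curve.
widths = [1, 2, 5, 10, 20, 50, 100, 200, 500, 1000, 1300]
errors = [train_and_eval(w, X_train, y_train, X_test, y_test) for w in widths]
```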
- Free software: MIT license
- Please contact us at engineer@hi-paris.fr
[1] Nakkiran, P., Kaplun, G., Bansal, Y., et al. Deep double descent: Where bigger models and more data hurt. Journal of Statistical Mechanics: Theory and Experiment, 2021, vol. 2021, no. 12, p. 124003.
[2] Jacot, A., Gabriel, F., and Hongler, C. Neural tangent kernel: Convergence and generalization in neural networks. Advances in Neural Information Processing Systems, 2018, vol. 31.