This repository contains a Jupyter notebook in which I build a very simple neural network to demonstrate the information bottleneck theory of deep learning proposed by Naftali Tishby. The information bottleneck method is rooted in information theory: it was first introduced in 1999 and has recently been applied to deep neural networks as an attempt to look inside the black box of deep learning.
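The demonstrations of this theory typically track how much information each hidden layer retains about the input and the label as training progresses. Below is a minimal sketch of the binning-based mutual information estimator commonly used to produce such "information plane" plots; it is not the notebook's actual code, and all function names, bin counts, and the toy data are illustrative assumptions.

```python
# Hedged sketch: estimate I(X; T) and I(T; Y) by discretizing hidden
# activations into bins and computing entropies of the empirical
# distributions over rows. Assumes tanh-like activations bounded in [-1, 1].
import numpy as np

def discretize(activations, n_bins=30):
    """Map continuous activations to integer bin indices, per sample."""
    bins = np.linspace(-1, 1, n_bins + 1)
    return np.digitize(activations, bins)

def entropy_of_rows(rows):
    """Shannon entropy (bits) of the empirical distribution over rows."""
    _, counts = np.unique(rows, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(a_rows, b_rows):
    """I(A; B) = H(A) + H(B) - H(A, B) from empirical row distributions."""
    joint = np.concatenate([a_rows, b_rows], axis=1)
    return entropy_of_rows(a_rows) + entropy_of_rows(b_rows) - entropy_of_rows(joint)

# Toy example: random inputs, one random hidden layer, random labels.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 12))      # binary input patterns
T = np.tanh(X @ rng.normal(size=(12, 8)))    # hidden-layer activations
Y = rng.integers(0, 2, size=(1000, 1))       # class labels

T_binned = discretize(T)
print("I(X;T) ~", mutual_information(X, T_binned))
print("I(T;Y) ~", mutual_information(T_binned, Y))
```

Repeating this estimate for every layer at many points during training, and plotting I(X;T) against I(T;Y), is what produces the characteristic fitting-then-compression trajectories discussed in the theory.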
For more information about the theory, please refer to their paper or this talk on YouTube. This article in Quanta Magazine also provides very useful background.