Avoiding Pathologies in Very Deep Networks

Experiment source code and LaTeX source for the paper: http://arxiv.org/pdf/1402.5836.pdf

Abstract:

Choosing appropriate architectures and regularization strategies for deep networks is crucial to good predictive performance. To shed light on this problem, we analyze the analogous problem of constructing useful priors on compositions of functions. Specifically, we study the deep Gaussian process, a type of infinitely-wide, deep neural network. We show that in standard architectures, the representational capacity of the network tends to capture fewer degrees of freedom as the number of layers increases, retaining only a single degree of freedom in the limit. We propose an alternate network architecture which does not suffer from this pathology. We also examine deep covariance functions, obtained by composing infinitely many feature transforms. Lastly, we characterize the class of models obtained by performing dropout on Gaussian processes.
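The experiments in this repository are written in MATLAB; as a self-contained illustration of the construction the abstract describes, here is a minimal NumPy sketch that draws prior samples from a one-dimensional deep GP by composing independent GP draws, with a flag for the input-connected variant the paper proposes. The squared-exponential kernel, unit lengthscale, grid size, jitter value, and the |dy/dx| summary are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

def se_kernel(a, b, lengthscale=1.0):
    """Squared-exponential covariance; a and b are (n, d) input arrays."""
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale ** 2)

def gp_draw(inputs, rng, jitter=1e-6):
    """One prior sample from GP(0, k_SE) evaluated at `inputs`, shape (n, d)."""
    K = se_kernel(inputs, inputs) + jitter * np.eye(len(inputs))
    return np.linalg.cholesky(K) @ rng.standard_normal(len(inputs))

def sample_deep_gp(x, depth, connect_input=False, seed=0):
    """Compose `depth` independent GP draws: y <- f_l(y), or y <- f_l(y, x)
    if connect_input is True, so each layer also sees the original input x."""
    rng = np.random.default_rng(seed)
    y = x.copy()
    for _ in range(depth):
        layer_in = np.stack([y, x], axis=1) if connect_input else y[:, None]
        y = gp_draw(layer_in, rng)
    return y

x = np.linspace(-5.0, 5.0, 400)
for depth in (1, 3, 10):
    for connect in (False, True):
        y = sample_deep_gp(x, depth, connect_input=connect)
        slope = np.abs(np.diff(y)) / (x[1] - x[0])
        # In the standard composition, deep samples tend to be mostly flat
        # with rare sharp jumps (small median slope, large max); connecting
        # the input keeps the samples varying across the whole domain.
        print(f"depth={depth:2d} connected={connect}: "
              f"median |dy/dx|={np.median(slope):.3f}, max={slope.max():.2f}")
```

The slope statistics are only a rough diagnostic of the collapse; the paper makes the point precisely by analyzing the distribution of the derivative of the composed function as depth grows.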

This paper appeared at the 2014 International Conference on Artificial Intelligence and Statistics (AISTATS), held in Reykjavik, Iceland.

Authors: David Duvenaud, Oren Rippel, Ryan P. Adams, and Zoubin Ghahramani

Feel free to email me with any questions at dduvenaud@seas.harvard.edu.
