\section{Introduction}
The key to convolutional neural networks (CNNs) lies in the way they employ convolution as a local and shift-invariant operation on Euclidean spaces, e.g.\ $\RR$ for audio or $\RR^2$ for images.
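For concreteness, on $\RR$ the convolution of a signal $f$ with a filter $g$ is
\[
(f * g)(x) = \int_{\RR} f(y)\, g(x - y)\, \mathrm{d}y,
\]
and shift-invariance means that translating the input translates the output: $(T_a f) * g = T_a (f * g)$, where $T_a f(x) = f(x - a)$.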
Recently, the concept of CNNs has been extended to more general spaces to exploit structures that may underlie the data: spherical convolutions for rotationally invariant data~\cite{cohen2018sphericalcnn,esteves2018sphericalcnn,defferrard2020deepsphere}, more general convolutions on homogeneous spaces~\cite{cohen2016groupnn,kondor2018groupnn,worrall2017harmonicnn}, and convolutions on graphs~\cite{bruna2014graphnn,defferrard2016convolutional}.
Graph neural networks (GNNs) have proven to be an effective tool that exploits the irregular structure of graphs to better learn interactions in the data~\cite{bronstein2017geometric,wu2020survey}.
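For instance, the spectral approach of~\cite{defferrard2016convolutional} filters a signal $x \in \RR^n$ on the vertices of a graph with Laplacian $L$ by a polynomial in $L$,
\[
x \mapsto \sum_{i=0}^{K} \theta_i L^i x,
\]
a local operation, since $(L^i x)_v$ depends only on vertices within $i$ hops of $v$.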
Although graphs are useful in describing complex systems of irregular relations in a variety of settings, they are intrinsically limited to modeling pairwise relationships. The advance of topological methods in machine learning~\cite{Gabrielsson2020topological,Hofer2019LearningRO,rieck2018neural}, and the earlier establishment of \emph{topological data analysis (TDA)}~\cite{carlsson2008,chazal2017,edelsbrunner2010computational,ghrist2008barcodes} as a field in its own right, have confirmed the usefulness of viewing data as topological spaces in general, and as simplicial complexes in particular. The latter can be thought of as a higher-dimensional analog of graphs~\cite{moore2012,patania2017}.

We take the view that structure is encoded in \emph{simplicial complexes}, and that these represent $n$-fold interactions. In this setting, we present \emph{simplicial neural networks (SNNs)}, a neural network framework that takes into account the locality of data living on a simplicial complex, in the same way a GNN does for graphs or a conventional CNN does for grids.
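To sketch how this locality can be made concrete, let $B_k$ denote the matrix of the $k$-th boundary operator of a simplicial complex. The $k$-th Hodge Laplacian is then
\[
L_k = B_k^{\mathsf{T}} B_k + B_{k+1} B_{k+1}^{\mathsf{T}},
\]
which for $k = 0$ reduces to the familiar graph Laplacian; polynomials in $L_k$ thus filter signals ($k$-cochains) defined on the $k$-simplices, in direct analogy with the graph filters above.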
Higher-order relational learning methods, of which hypergraph neural networks~\cite{feng2018hypergraphs} and motif-based GNNs~\cite{monti2018motif} are examples, have already proven useful in some applications, e.g.\ protein interactions~\cite{ze2020graph}. However, the mathematical theory underlying the notion of convolution in these approaches has no clear connection to the global topological structure of the space in question. This leads us to believe that our method, motivated by Hodge--de Rham theory, is far better suited to situations where topological structure is relevant, such as the processing of data that naturally exists as vector fields, or of data that is sensitive to the global structure of the underlying space.
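This connection to global structure can be made precise through discrete Hodge theory: the kernel of the Hodge Laplacian introduced above is isomorphic to the cohomology of the complex $K$,
\[
\ker L_k \cong H^k(K; \RR),
\]
so the harmonic components preserved by filters built from $L_k$ carry exactly the global topological information of the underlying space.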