MACHINE LEARNING

Why does Deep Learning work? - A perspective from Group Theory

Resource type
Authors/contributors
Paul, A.; Venkatasubramanian, S.
Title
Why does Deep Learning work? - A perspective from Group Theory
Abstract
Why does Deep Learning work? What representations does it capture? How do higher-order representations emerge? We study these questions from the perspective of group theory, thereby opening a new approach towards a theory of Deep Learning. One factor behind the recent resurgence of the subject is a key algorithmic step called pre-training: first search for a good generative model for the input samples, and repeat the process one layer at a time. We show deeper implications of this simple principle by establishing a connection with the interplay of orbits and stabilizers of group actions. Although the neural networks themselves may not form groups, we show the existence of "shadow" groups whose elements serve as close approximations. Over the shadow groups, the pre-training step, originally introduced as a mechanism to better initialize a network, becomes equivalent to a search for features with minimal orbits. Intuitively, these features are in a way the simplest, which explains why a deep learning network learns simple features first. Next, we show how the same principle, when repeated in the deeper layers, can capture higher-order representations, and why representation complexity increases as the layers get deeper.
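
To make the orbit/stabilizer language concrete, here is a toy sketch, not the paper's shadow-group construction: features are length-n binary vectors, the group is the cyclic group of coordinate shifts, and the orbit-stabilizer theorem |orbit| × |stabilizer| = |G| shows that the most symmetric features have the smallest orbits. The names `shift` and `orbit_and_stabilizer` are illustrative, not from the paper.

```python
# Toy illustration of the orbit/stabilizer idea (not the paper's actual
# construction): features are length-n binary vectors, the group is the
# cyclic group Z_n acting by coordinate shifts. The orbit-stabilizer
# theorem gives |orbit| * |stabilizer| = |G|, and the most symmetric
# ("simplest") features have the smallest orbits.

def shift(v, k):
    """Cyclically shift the tuple v by k positions."""
    return v[k:] + v[:k]

def orbit_and_stabilizer(v, n):
    """Orbit and stabilizer of v under Z_n acting by cyclic shifts."""
    orbit = {shift(v, k) for k in range(n)}
    stabilizer = [k for k in range(n) if shift(v, k) == v]
    return orbit, stabilizer

n = 6
for v in [(0,) * n, (0, 1) * (n // 2), (0, 0, 1, 0, 1, 1)]:
    orbit, stab = orbit_and_stabilizer(v, n)
    assert len(orbit) * len(stab) == n   # orbit-stabilizer theorem
    print(v, "orbit size:", len(orbit), "stabilizer size:", len(stab))
```

Running this, the constant vector has orbit size 1, the alternating vector orbit size 2, and a generic vector the full orbit of size 6, which matches the abstract's intuition that a search for minimal-orbit features favours the simplest ones.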
Publication
arXiv:1412.6621 [cs, stat]
Date
2015-02-28
Short Title
Why does Deep Learning work?
Accessed
2019-11-22T17:38:08Z
Library Catalog
Extra
arXiv: 1412.6621
Notes
Comment: 13 pages, 5 figures
Citation
Paul, A., & Venkatasubramanian, S. (2015). Why does Deep Learning work? - A perspective from Group Theory. arXiv:1412.6621 [cs, stat]. Retrieved from http://arxiv.org/abs/1412.6621
Attachment

Graph of references

(from Zotero to Gephi via Zotnet with this script)
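
The Zotnet script itself is only linked above and is not reproduced here. As a rough, hypothetical sketch of the same Zotero-to-Gephi pipeline, the following assumes the pyzotero client plus a Zotero user library ID and API key (placeholders below), and it writes node/edge CSV files that Gephi can import; the handling of "related items" via dc:relation URIs is an assumption about the library layout, not the Zotnet implementation.

```python
# Illustrative sketch only (not the Zotnet script referenced above):
# pull a Zotero library with pyzotero and dump its "related items" links
# as node/edge CSVs for Gephi. LIBRARY_ID and API_KEY are placeholders.
import csv
from pyzotero import zotero

zot = zotero.Zotero("LIBRARY_ID", "user", "API_KEY")
items = zot.everything(zot.items())

# Nodes: one row per item, keyed by the Zotero item key.
with open("nodes.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["Id", "Label"])
    for it in items:
        w.writerow([it["key"], it["data"].get("title", "")])

# Edges: Zotero stores related items as URIs under relations["dc:relation"];
# the trailing path component of each URI is the related item's key.
with open("edges.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["Source", "Target"])
    for it in items:
        rel = it["data"].get("relations", {}).get("dc:relation", [])
        for uri in ([rel] if isinstance(rel, str) else rel):
            w.writerow([it["key"], uri.rsplit("/", 1)[-1]])
```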