Conference / Meeting

Data-driven emergence of convolutional structure in neural networks

21 September 2022
Start time:
17:00
PovoZero - Via Sommarive 14, Povo (Trento)
Aula Fortuna – TIFPA
Organised by: 
Department of Physics
Intended audience: 
University community
Participation: 
Free admission
Speaker: 
Dr. Alessandro Ingrosso - International Centre for Theoretical Physics - Trieste

Exploiting invariances in the inputs is crucial for constructing efficient representations and accurate predictions in neural circuits. In neuroscience, translation invariance is at the heart of models of the visual system, while convolutional neural networks designed to exploit translation invariance triggered the first wave of deep learning successes. While the hallmark of convolutions, namely localised receptive fields that tile the input space, can be implemented with fully-connected neural networks, learning convolutions directly from inputs in a fully-connected network has so far proven elusive.

In this talk, I will show how initially fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs, resulting in localised, space-tiling receptive fields. Both translation invariance and non-trivial higher-order statistics are needed to learn convolutions from scratch. I will provide an analytical and numerical characterisation of the pattern-formation mechanism responsible for this phenomenon in a simple model, which results in an unexpected link between receptive field formation and the tensor decomposition of higher-order input correlations.
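For readers who want to experiment with the phenomenon described in the abstract, below is a minimal, hypothetical sketch (not the speaker's code): it trains a small fully-connected network on a discrimination task between two classes of translation-invariant, non-Gaussian inputs that differ only in their correlation length, then inspects the first-layer weights for localised receptive fields. The input model, network size, and training parameters are illustrative assumptions only.

import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
D = 64  # size of the 1D input "retina"

def make_inputs(n, xi):
    # Translation-invariant Gaussian field with correlation length xi on a ring,
    # passed through an elementwise nonlinearity so that the higher-order input
    # statistics become non-trivial (assumed, illustrative input model).
    idx = np.arange(D)
    dist = np.abs(idx[:, None] - idx[None, :])
    dist = np.minimum(dist, D - dist)                 # circular distance
    cov = np.exp(-dist**2 / (2 * xi**2))              # circulant covariance
    z = rng.multivariate_normal(np.zeros(D), cov, size=n)
    return np.sign(z) * np.abs(z) ** 0.5              # non-Gaussian marginals

# Discrimination task: two input classes differing only in correlation length.
X = np.vstack([make_inputs(2000, xi=1.0), make_inputs(2000, xi=3.0)]).astype(np.float32)
y = np.hstack([np.zeros(2000), np.ones(2000)]).astype(np.float32)

model = nn.Sequential(nn.Linear(D, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

Xt, yt = torch.from_numpy(X), torch.from_numpy(y).unsqueeze(1)
for epoch in range(200):                              # full-batch training
    opt.zero_grad()
    loss = loss_fn(model(Xt), yt)
    loss.backward()
    opt.step()

# The rows of the first weight matrix are the hidden units' receptive fields.
# If a convolutional structure emerges, their peaks should be localised and
# roughly tile the ring of input positions.
W = model[0].weight.detach().numpy()
print("sorted receptive-field peak positions:", np.sort(np.argmax(np.abs(W), axis=1)))

Whether localisation actually appears in such a small toy setting depends on the input statistics and training details; the talk's analysis relates this pattern-formation mechanism to the tensor decomposition of higher-order input correlations.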