Tuesday, 27 June 2017

How the brain learns to "listen" to faces

A study by CIMeC researchers, published in PNAS, could point to new directions for research on brain development and rehabilitation techniques for people with cochlear implants


Individuals born deaf are able to “listen to faces”. In their case, the region of the brain that detects auditory stimuli reorganizes and works in a way similar to that of the brain regions detecting visual stimuli. In this way, they obtain important identity information about the people they are talking to, such as their age, sex, feelings, emotions and intentions, which would usually pass through voice-processing channels.

This is what emerges from a study conducted by the Center for Mind/Brain Sciences (CIMeC) of the University of Trento and recently published in PNAS.

For the first time, the study demonstrates that these changes are not accidental, but are determined by intrinsic constraints with a genetic basis in human evolution. The brain, in other words, is both plastic and rigid at the same time.

«Our project addresses the nature-culture debate on the development of the human brain», commented Olivier Collignon, head of the project, member of CIMeC and Professor at the Université Catholique de Louvain (Belgium). «Neuroscientific studies have demonstrated that the human brain has an extraordinary ability to adapt to different situations over the course of life, but it is not clear to what extent this ability is determined by genetic constraints. What happens in individuals born deaf clearly demonstrates that brain plasticity can be limited by genetically determined specializations, as has already been shown in blind people».

The study also confirms that face and voice detection and processing in the human brain use shared mechanisms, even though they rely on distinct sensory channels.

There appears to be a privileged link between the visual and auditory systems, one that could date back to the early evolution and development of the human brain. This link allows deaf people to combine information on faces and voices to extract useful information about the identity and affective states of the people they are interacting with.

«It is probably because of this privileged link that the brain adapts to the inability to perceive vocal signals, modifying the areas of the brain specialized for voice perception so that they detect and process face information instead», explained Stefania Benetti, of CIMeC, lead author of the study.

Where could these new observations lead? Until now, neuroscientific studies have agreed that brain plasticity does not help hearing recovery, which is now possible thanks to a prosthetic solution known as the “cochlear implant”. In particular, it was assumed that once voice-sensitive areas had reorganized to perceive visual information, they could not revert to processing auditory information.

«In clinical and rehabilitation practice – explained Francesco Pavani, of CIMeC, co-author of the study – this led to a recommendation to strengthen hearing abilities (voice) and suppress visual ones (lip movements and facial expressions) in the rehabilitation of people with cochlear implants. The results of our study, however, challenge this recommendation to some extent. They suggest that these sensory channels, which have been interconnected since the early stages of brain development, could instead be used in rehabilitation practices to improve oral language processing through visual information».