A recent discovery explains how our brain handles conversations in noisy environments and could have a significant impact on the development of more effective hearing aids.
Vinay Raghavan, a researcher at Columbia University in New York, offered an interesting explanation of how the brain handles speech perception. According to him, the prevailing idea was that the brain processes only the voice of the person we are paying attention to.
However, Raghavan questions this notion, noting that when someone screams in a crowded place, we don't ignore them even when we're focused on someone else.
Experts investigate how the human brain processes voices
In the controlled study by Vinay Raghavan and his team, electrodes were implanted in the brains of seven individuals undergoing epilepsy surgery, allowing their brain activity to be monitored.
During this procedure, participants were exposed to a 30-minute audio clip with two voices superimposed.
The participants remained awake during the surgery and were instructed to alternate their attention between the two voices present in the audio. One of the voices was that of a man, while the other was that of a woman.
The overlapping voices spoke simultaneously at similar volumes, but at certain moments in the clip one voice was louder than the other, simulating the range of volumes found in background conversations in crowded environments.
The research team used the participants' brain-activity data to develop a model that predicted how the brain processes voices of different volumes, and how this varies depending on which voice the participant was instructed to focus on.
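The article does not detail the model itself, but one common technique in this line of research is stimulus reconstruction: fitting a decoder that predicts each talker's loudness envelope from recorded neural activity, where stronger reconstruction suggests stronger encoding of that talker. The following is a minimal sketch of that idea on synthetic data; the electrode count, ridge decoder, and envelope variables are illustrative assumptions, not the study's actual method.

```python
# Sketch of stimulus-reconstruction-style decoding: fit a linear model
# that predicts a talker's speech envelope from neural activity, then
# compare reconstruction accuracy for each talker. All shapes, names,
# and the synthetic data are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: activity from a handful of electrodes over time,
# plus loudness envelopes for the two talkers.
n_samples, n_electrodes = 5000, 16
neural = rng.standard_normal((n_samples, n_electrodes))
env_attended = neural @ rng.standard_normal(n_electrodes) \
    + 0.1 * rng.standard_normal(n_samples)
env_ignored = 0.3 * (neural @ rng.standard_normal(n_electrodes)) \
    + rng.standard_normal(n_samples)

def reconstruction_score(neural, envelope):
    """Fit a ridge decoder and return the correlation between the
    reconstructed and true envelope on held-out data."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        neural, envelope, test_size=0.25, random_state=0)
    decoder = Ridge(alpha=1.0).fit(X_tr, y_tr)
    return np.corrcoef(decoder.predict(X_te), y_te)[0, 1]

# A stronger reconstruction for one talker suggests that talker's
# speech is more robustly encoded in the recorded activity.
print("attended voice:", reconstruction_score(neural, env_attended))
print("ignored voice: ", reconstruction_score(neural, env_ignored))
```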
Study results
The results revealed that the louder of the two voices was encoded both in the primary auditory cortex, responsible for the conscious perception of sound, and in the secondary auditory cortex, responsible for more complex sound processing.
This finding was surprising: the participants had been instructed not to focus on the louder voice, yet the brain still processed this information in a meaningful way.
According to Raghavan, this study is groundbreaking in showing, through neuroscience, that the brain encodes speech information even when we are not paying active attention to it.
The discovery opens a new way to understand how the brain processes stimuli to which we are not directing our attention.
Traditionally, it has been believed that the brain selectively processes only those stimuli that we are consciously focused on. However, the results of this study challenge this view, demonstrating that the brain continues to encode information even when we are distracted or engaged in other tasks.
The results also revealed that the quieter voice was processed in the primary and secondary cortices only when participants were instructed to focus their attention on that specific voice.
Furthermore, the brain surprisingly took an additional 95 milliseconds to process that voice as speech, compared with when participants were told to focus on the louder voice.
Raghavan adds that the study's findings show the brain probably employs different mechanisms to encode and represent voices of different volumes during a conversation, an understanding that could be applied to the development of more effective hearing aids.
The researcher suggests that if it were possible to create a hearing aid capable of identifying which person the user is paying attention to, the device could increase the volume of only that specific person's voice.
A breakthrough of this caliber could significantly improve the listening experience in noisy environments, allowing the user to better focus on the sound source of interest.
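As a rough illustration of how such a device might work downstream of attention decoding, the sketch below assumes the talkers' signals have already been separated (source separation is itself a hard problem, not shown here) and that a decoder has identified which talker the user is attending to; it then simply boosts that talker before remixing. Every name, signal, and gain value here is hypothetical.

```python
# Sketch of attention-driven amplification for a hearing aid:
# boost the attended talker's separated signal before remixing.
import numpy as np

def remix(attended, ignored, attended_gain_db=9.0):
    """Amplify the attended talker relative to the ignored one."""
    gain = 10 ** (attended_gain_db / 20.0)  # dB -> linear amplitude
    mix = gain * attended + ignored
    # Normalize if needed to avoid clipping in the output stage.
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix

# Hypothetical one-second, 16 kHz snippets for each separated talker.
sr = 16000
t = np.arange(sr) / sr
voice_a = 0.3 * np.sin(2 * np.pi * 220 * t)  # stand-in for talker A
voice_b = 0.3 * np.sin(2 * np.pi * 330 * t)  # stand-in for talker B

# Suppose the attention decoder reports the user is listening to A.
output = remix(attended=voice_a, ignored=voice_b)
```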