Complex Sound Patterns Are Recognized by Newborn Brains

University of Vienna

Nonlinguistic Sounds Activate Language-Related Networks in the Brain

A team of researchers, including psycholinguist Jutta Mueller from the University of Vienna, has discovered that newborns are capable of learning complex sound sequences that follow language-like rules. This groundbreaking study provides long-sought evidence that the ability to perceive dependencies between non-adjacent acoustic signals is innate. The findings were recently published in the prestigious journal PLOS Biology.

It has long been known that babies can learn sequences of syllables or sounds that directly follow one another. However, human language often involves patterns that link elements which are not adjacent. For example, in the sentence "The tall woman who is hiding behind the tree calls herself Catwoman," the subject "The tall woman" is connected to the verb ending "-s," indicating third-person singular. Language development research suggests that children begin to master such rules in their native language by the age of two. However, learning experiments have shown that even infants as young as five months can detect rules between non-adjacent elements, not just in language but in non-linguistic sounds, such as tones. "Even our closest relatives, chimpanzees, can detect complex acoustic patterns when embedded in tones," says co-author Simon Townsend from the University of Zurich.

Pattern Recognition in Sounds is Innate

Although many previous studies suggested that the ability to recognize patterns between non-adjacent sounds is innate, clear-cut evidence was lacking until now. The international team of researchers provided this evidence by observing the brain activity of newborns and six-month-old infants as they listened to complex sound sequences. In their experiment, newborns only a few days old were exposed to sequences in which the first tone was linked to a non-adjacent third tone. After just six minutes of listening to two different types of sequences, the babies were presented with new sequences that followed the same pattern but at a different pitch. These new sequences were either correct or contained an error in the pattern. Using near-infrared spectroscopy to measure brain activity, the researchers found that the newborns' brains could distinguish between the correct and incorrect sequences.

Sounds Activate Language-Related Networks in the Brain

"The frontal cortex-the area of the brain located just behind the forehead-played a crucial role in newborns," explains Yasuyo Minagawa from Keio University in Tokyo. The strength of the frontal cortex's response to incorrect sound sequences was linked to the activation of a predominantly left-hemispheric network, which is also essential for language processing. Interestingly, six-month-old infants showed activation in this same language-related network when distinguishing between correct and incorrect sequences. The researchers concluded that complex sound patterns activate these language-related networks from the very beginning of life. Over the first six months, these networks become more stable and specialized.

Early Learning Experiences Are Key

"Our findings demonstrate that the brain is capable of responding to complex patterns, like those found in language, from day one," explains Jutta Mueller from the University of Vienna's Department of Linguistics. "The way brain regions connect during the learning process in newborns suggests that early learning experiences may be crucial for forming the networks that later support the processing of complex acoustic patterns."

These insights are key to understanding the role of environmental stimulation in early brain development. This is especially important in cases where stimulation is lacking, inadequate, or poorly processed, as in premature babies. The researchers also highlighted that their findings show how non-linguistic acoustic signals, like the tone sequences used in the study, can activate language-relevant brain networks. This opens up exciting possibilities for early intervention programs that could, for example, use musical stimulation to foster language development.

Original publication:

Lin Cai, Takeshi Arimitsu, Naomi Shinohara, Takao Takahashi, Yoko Hakuno, Masahiro Hata, Ei-ichi Hoshino, Stuart K. Watson, Simon W. Townsend, Jutta L. Mueller & Yasuyo Minagawa (2024). Functional reorganization of brain regions supporting artificial grammar learning across the first half year of life. PLOS Biology. https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3002610
