Affiliation:
1. Institute for Learning and Brain Sciences, University of Washington, Seattle, Washington, USA
2. Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington, USA
3. Department of Linguistics, University of Washington, Seattle, Washington, USA
Abstract
Infants are immersed in a world of sounds from the moment their auditory system becomes functional, and experience with the auditory world shapes how their brain processes sounds in their environment. Across cultures, speech and music are two dominant auditory signals in infants’ daily lives. Decades of research have repeatedly shown that both the quantity and quality of speech input play critical roles in infant language development. Less is known about the music input infants receive in their environment. This study is the first to compare music input to speech input across infancy by analyzing a longitudinal dataset of daylong audio recordings collected in English‐learning infants’ home environments at 6, 10, 14, 18, and 24 months of age. Using a crowdsourcing approach, 643 naïve listeners annotated 12,000 short snippets (10 s) randomly sampled from the recordings using Zooniverse, an online citizen‐science platform. Results show that infants overall receive significantly more speech input than music input, and that the gap widens as infants get older. At every age point, infants were exposed to more music from an electronic device than from an in‐person source; this pattern was reversed for speech. The percentage of input intended for infants remained the same over time for music, while that percentage significantly increased for speech. We propose possible explanations for the limited music input compared to speech input observed in the present (North American) dataset and discuss future directions. We also discuss the opportunities and caveats in using a crowdsourcing approach to analyze large audio datasets. A video abstract of this article can be viewed at https://youtu.be/lFj_sEaBMN4
Research Highlights
This study is the first to compare music input to speech input in infants’ natural home environment across infancy.
We utilized a crowdsourcing approach to annotate a longitudinal dataset of daylong audio recordings collected in North American home environments.
Our main results show that infants overall receive significantly more speech input than music input, and that this gap widens as infants get older.
Our results also show that the music input came largely from electronic devices and was not intended for the infants, a pattern opposite to that of speech input.
Funder
National Institutes of Health