Authors:
Phillips E. A. M., Goupil L., Ives J. E., Labendzki P., Whitehorn M., Marriott Haresign I., Wass S. V.
Abstract
Neural entrainment to slow modulations in the amplitude envelope of infant-directed speech is thought to drive early language learning. Most previous research examining speech-brain tracking in infants has been conducted in controlled experimental settings, which are far removed from the complex environments of everyday interactions. Although recent work has begun to investigate speech-brain tracking of naturalistic speech, it has relied on semi-structured paradigms in which infants listen to live adult speakers without engaging in free-flowing social interaction. Here, we test the applicability of multivariate temporal response function (mTRF) modelling for measuring speech-brain tracking during naturalistic, bidirectional free-play interactions of 9-12-month-olds with their caregivers. Using a backward modelling approach, we compare individual and generic training procedures and examine the effects of data quantity and quality on model fitting. We show that model fitting performs best with an individual approach trained on continuous segments of interaction data. Consistent with previous findings, individual models showed significant speech-brain tracking at delta modulation frequencies, but not at theta or alpha frequencies. These findings open up new methods for studying the interpersonal micro-processes that support early language learning. In future work, it will be important to develop a mechanistic framework for understanding how our brains track naturalistic speech during infancy.
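For readers unfamiliar with the backward (stimulus-reconstruction) mTRF approach referred to above, the sketch below illustrates the general idea: time-lagged EEG channels are regressed onto the speech amplitude envelope with ridge regression, and reconstruction accuracy on held-out data indexes speech-brain tracking. This is a minimal illustration only; the toy data, channel count, lag range, sampling rate, and ridge penalty are assumptions for demonstration and do not reproduce the authors' pipeline.

```python
# Minimal sketch of a backward (stimulus-reconstruction) TRF model.
# All data here are synthetic and all parameters are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge


def lag_matrix(eeg, lags):
    """Stack time-shifted EEG copies so row t holds eeg[t + lag] for each lag.

    Positive lags mean the EEG sample follows the speech sample being
    reconstructed, as expected for a neural response to the stimulus.
    """
    n_samples, n_channels = eeg.shape
    X = np.zeros((n_samples, n_channels * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(eeg, -lag, axis=0)
        # Zero out samples that wrapped around the array edges.
        if lag > 0:
            shifted[-lag:] = 0
        elif lag < 0:
            shifted[:-lag] = 0
        X[:, i * n_channels:(i + 1) * n_channels] = shifted
    return X


# --- toy data standing in for one dyad's interaction segment (assumed 100 Hz) ---
rng = np.random.default_rng(0)
fs = 100
eeg = rng.standard_normal((60 * fs, 32))      # 60 s of 32-channel EEG
envelope = rng.standard_normal(60 * fs)       # speech amplitude envelope

lags = range(0, int(0.5 * fs))                # 0-500 ms of EEG following the speech
X = lag_matrix(eeg, lags)

# Train on the first 40 s, evaluate on the held-out 20 s.
split = 40 * fs
model = Ridge(alpha=1e3)                      # ridge penalty chosen arbitrarily here
model.fit(X[:split], envelope[:split])
pred = model.predict(X[split:])

# Reconstruction accuracy: correlation between predicted and actual envelope.
r = np.corrcoef(pred, envelope[split:])[0, 1]
print(f"envelope reconstruction r = {r:.3f}")
```

In practice, established implementations of forward and backward TRF estimation (such as the mTRF-Toolbox) handle lag construction, regularisation tuning, and cross-validation; the code above only conveys the shape of the computation.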
Publisher
Cold Spring Harbor Laboratory