Abstract
Speech perception and memory for speech require active engagement. Gestural theories have emphasized mainly the effect of the speaker's movements on speech perception; they fail to address the effects of listener movement, treating communication only as a boundary condition constraining movement among interlocutors. The present work attempts to break new ground by using the multifractal geometry of physical movement as a common currency supporting both sides of the speaker-listener dyad. Participants self-paced their listening to a narrative and then completed a memory test querying their comprehension of the narrative and their ability to recognize words from the story. Multifractal evidence of nonlinear interactions across timescales predicted the fluency of speech perception. The self-pacing movements that enabled listeners to control the presentation of speech sounds constituted a rich exploratory process, and the multifractal nonlinearity of this exploration supported several aspects of memory for the perceived spoken language. These findings extend the role of multifractal geometry in the speaker's movements to the narrative case of speech perception. In addition to posing novel basic research questions, they make a compelling case for calibrating multifractal structure in text-to-speech synthesizers to improve the perception and memory of speech.
Publisher
Cold Spring Harbor Laboratory