Affiliation:
1. University of California, Santa Cruz
2. University of California, Santa Cruz
Abstract
We examined the processing of potential auditory and visual cues that differentiate statements from echoic questions. In Experiment 1, participants identified four natural-speech statement-question pairs, which were then analyzed to determine which characteristics were ecologically valid. These characteristics were tested in subsequent experiments to determine whether they were also functionally valid. In Experiment 2, the characteristics of the most discriminable utterance pair were successfully extended to the other utterance pairs. In Experiment 3, an auditory continuum (varying in F0, amplitude, and duration) was crossed with a visual continuum (varying in eyebrow raise and head tilt), using synthetic speech and a computer-animated head. In an expanded factorial design, participants judged five levels along each of these two continua between a prototypical statement and a prototypical question. Experiments 4 and 5 were unable to appreciably strengthen the weak visual effect relative to the strong auditory effect observed in Experiment 3. Overall, we found that both auditory and visual cues reliably conveyed statement and question intonation, were successfully synthesized, and generalized to other utterances. However, the weakness of the visual effect relative to the much stronger auditory effect precluded optimal integration and a conclusive, model-fitting-based examination of information processing.
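The abstract describes an expanded factorial design (five auditory levels crossed with five visual levels, plus unimodal presentations) and model-fitting of optimal audiovisual integration. The sketch below is not from the paper; it illustrates, under stated assumptions, how such a condition set could be enumerated and how a multiplicative integration rule of the kind commonly fit in this literature (e.g., the Fuzzy Logical Model of Perception) would predict a "question" response. The level indices and truth values are hypothetical.

```python
from itertools import product

# Hypothetical enumeration of an expanded factorial design:
# five steps along an auditory statement-to-question continuum
# crossed with five steps along a visual continuum, plus the
# auditory-only and visual-only (unimodal) conditions.
AUDITORY_LEVELS = [1, 2, 3, 4, 5]   # e.g., F0/amplitude/duration steps
VISUAL_LEVELS = [1, 2, 3, 4, 5]     # e.g., eyebrow-raise/head-tilt steps

bimodal = [("AV", a, v) for a, v in product(AUDITORY_LEVELS, VISUAL_LEVELS)]
auditory_only = [("A", a, None) for a in AUDITORY_LEVELS]
visual_only = [("V", None, v) for v in VISUAL_LEVELS]

conditions = bimodal + auditory_only + visual_only
print(len(conditions))  # 25 + 5 + 5 = 35 stimulus conditions


def question_prob(a_truth: float, v_truth: float) -> float:
    """Multiplicative (FLMP-style) integration for two alternatives:
    a_truth and v_truth are the degrees of support (0-1) that the
    auditory and visual cues each give to the 'question' alternative."""
    q = a_truth * v_truth
    s = (1.0 - a_truth) * (1.0 - v_truth)
    return q / (q + s)


# Example: clearly question-like audio paired with a weak visual cue.
print(round(question_prob(0.8, 0.55), 3))
```

Under this kind of rule, a weak visual cue (support near 0.5) barely shifts the predicted response away from what the auditory cue alone dictates, which is consistent with the difficulty of testing integration when one modality dominates.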
Subject
Speech and Hearing, Linguistics and Language, Sociology and Political Science, Language and Linguistics, General Medicine
Cited by
62 articles.