Affiliation:
1. Division of Head and Neck Surgery, UCLA School of Medicine, and Audiology and Speech Pathology, VA Medical Center, West Los Angeles
Abstract
Acoustic analysis is often favored over perceptual evaluation of voice because it is considered objective, and thus reliable. However, recent studies suggest this traditional bias is unwarranted. This study examined the relative reliability of human listeners and automatic systems for measuring perturbation in the evaluation of pathologic voices. Ten experienced listeners rated the roughness of 50 voice samples (ranging from normal to severely disordered) on a 75 mm visual analog scale. Rating reliability within and across listeners was compared to the reliability of jitter measures produced by several voice analysis systems (CSpeech, SoundScope, CSL, and an interactive hand-marking system). Results showed that, overall, listeners agreed as well as or better than the “objective” algorithms. Further, listeners disagreed in predictable ways, whereas the automatic algorithms differed in seemingly random fashion. Finally, listener reliability increased with severity of pathology, whereas the objective methods quickly broke down as severity increased. These findings suggest that listeners and analysis packages differ greatly in their measurement characteristics. Acoustic measures may have advantages over perceptual measures for discriminating among essentially normal voices; however, reliability is not a good reason for preferring acoustic measures of perturbation to perceptual measures.
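The abstract does not reproduce the specific perturbation algorithms used by the systems it compares; as a rough illustration of what a cycle-to-cycle jitter measure computes, the sketch below implements the commonly used “local jitter” definition (mean absolute difference between consecutive glottal periods divided by the mean period, expressed as a percentage). The function name and example data are illustrative assumptions, not taken from the study.

```python
import numpy as np

def local_jitter_percent(periods):
    """Local (cycle-to-cycle) jitter as a percentage.

    periods: consecutive glottal cycle durations in seconds.
    Computed as the mean absolute difference between consecutive
    periods divided by the mean period, times 100.
    """
    periods = np.asarray(periods, dtype=float)
    if periods.size < 2:
        raise ValueError("Need at least two cycle periods to compute jitter.")
    cycle_diffs = np.abs(np.diff(periods))  # |T_i - T_(i+1)| for each adjacent pair
    return 100.0 * cycle_diffs.mean() / periods.mean()

# Hypothetical near-periodic voice (~100 Hz) with slight cycle-to-cycle variation.
example_periods = [0.0100, 0.0101, 0.0099, 0.0102, 0.0100]
print(f"Local jitter: {local_jitter_percent(example_periods):.2f}%")
```

As the abstract notes, different analysis packages can diverge on such measures because they extract the cycle boundaries differently, especially as pathology (and thus aperiodicity) increases.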
Publisher
American Speech-Language-Hearing Association
Subject
Speech and Hearing, Linguistics and Language, Language and Linguistics
Cited by
136 articles.