Authors:
Duville Mathilde Marie, Alonso-Valerdi Luz María, Ibarra-Zarate David I.
Abstract
Artificial voices are nowadays embedded in our daily lives, with the latest neural voices approaching human voice consistency (naturalness). Nevertheless, the behavioral and neuronal correlates of the perception of less naturalistic emotional prosodies remain poorly understood. In this study, we explored the acoustic trends that define naturalness from human to synthesized voices. We then created naturalness-reduced emotional utterances by acoustically editing human voices. Finally, we used Event-Related Potentials (ERPs) to assess the time dynamics of emotional integration when listening to both human and synthesized voices in a sample of healthy adults. Additionally, listeners rated their perception of valence, arousal, discrete emotions, naturalness, and intelligibility. Synthesized voices were characterized by weaker lexical stress (i.e., a reduced difference between stressed and unstressed syllables within words) in terms of duration and median pitch modulation. In addition, spectral content was attenuated toward lower F2 and F3 frequencies and lower intensities for harmonics 1 and 4. Both psychometric and neuronal correlates were sensitive to the reduction in naturalness: (1) naturalness and intelligibility ratings dropped when emotional utterances were synthesized; (2) discrete emotion recognition was impaired as naturalness declined, consistent with the P200 and Late Positive Potential (LPP) components being less sensitive to emotional differentiation at lower naturalness; and (3) relative P200 and LPP amplitudes between prosodies were modulated by synthesis. Nevertheless, (4) valence and arousal perception was preserved at lower naturalness; (5) valence (arousal) ratings correlated negatively (positively) with Higuchi’s fractal dimension extracted from neuronal data under all naturalness perturbations; and (6) Inter-Trial Phase Coherence (ITPC) and standard deviation measurements revealed high inter-individual heterogeneity in emotion perception that was preserved as naturalness decreased. Notably, partial between-participant synchrony (low ITPC), together with high amplitude dispersion of ERPs at both early and late stages, pointed to heterogeneous emotional responses among subjects. In this study, we highlighted for the first time both the behavioral and the neuronal bases of emotional perception under acoustic naturalness alterations. Partial dependencies between ecological relevance and emotion understanding showed that synthesis modulated, but did not abolish, emotional integration.
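The abstract relies on two single-trial EEG measures: Higuchi’s fractal dimension (linked to valence and arousal ratings) and Inter-Trial Phase Coherence (used to quantify inter-individual heterogeneity). As a rough illustration of how these quantities are commonly computed, here is a minimal NumPy/SciPy sketch; it is not the authors’ exact pipeline, and the `k_max` value, array shapes, and simulated epochs are assumptions made for the example only.

```python
import numpy as np
from scipy.signal import hilbert


def higuchi_fd(x, k_max=8):
    """Estimate Higuchi's fractal dimension of a 1-D signal.

    k_max is an illustrative choice; studies tune it to the sampling rate.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    lk = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)              # subsampled series X_k^m
            diffs = np.abs(np.diff(x[idx])).sum() # curve length before scaling
            norm = (n - 1) / ((idx.size - 1) * k) # Higuchi normalisation
            lengths.append(diffs * norm / k)
        lk.append(np.mean(lengths))
    # The fractal dimension is the slope of log L(k) versus log(1/k).
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)), np.log(lk), 1)
    return slope


def itpc(phases):
    """Inter-Trial Phase Coherence from phase angles, shape (n_trials, n_times).

    Returns a value per time point: 1 = perfect phase alignment across trials,
    0 = uniformly scattered phases.
    """
    return np.abs(np.mean(np.exp(1j * phases), axis=0))


if __name__ == "__main__":
    # Hypothetical single-channel epochs: 30 trials x 500 samples of a 6 Hz
    # component buried in noise, standing in for real ERP data.
    rng = np.random.default_rng(0)
    t = np.linspace(-0.2, 0.8, 500)
    trials = np.sin(2 * np.pi * 6 * t) + rng.normal(0.0, 1.0, size=(30, t.size))

    print("Higuchi FD (trial 0):", round(higuchi_fd(trials[0]), 3))

    # Instantaneous phase via the analytic signal, then ITPC across trials.
    phases = np.angle(hilbert(trials, axis=1))
    print("Mean ITPC:", round(float(itpc(phases).mean()), 3))
```

In this reading, a low ITPC together with a high across-trial standard deviation is what the abstract describes as partial between-participant synchrony with heterogeneous individual responses.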
Funder
Consejo Nacional de Ciencia y Tecnología
Subject
Cellular and Molecular Neuroscience, Neuroscience (miscellaneous)
Cited by
3 articles.