Abstract
Background
Sentiment expression and detection are crucial for effective and empathetic human-robot interaction. Previous work in this field has often focused on non-verbal emotion expression, such as facial expressions and gestures; less is known about which specific prosodic speech elements are required in human-robot interaction. Our research question was: which prosodic elements are related to emotional speech in human-computer/robot interaction?
Methods
The scoping review was conducted in alignment with the Arksey and O'Malley methods. Literature was identified from the SCOPUS, IEEE Xplore, ACM Digital Library and PsycINFO databases in May 2021. After screening and de-duplication, data were extracted into an Excel coding sheet and summarised.
Results
Thirteen papers, published from 2012 to 2020, were included in the review. The most commonly used prosodic elements were tone/pitch (n = 8), loudness/volume (n = 6), speech speed (n = 4) and pauses (n = 3). Non-linguistic vocalisations (n = 1) were less frequently used. The prosodic elements were generally effective in helping to convey or detect emotion, but were less effective for negative sentiment (e.g., anger, fear, frustration, sadness and disgust).
Discussion
Future research should explore the effectiveness of commonly used prosodic elements (tone, loudness, speed and pauses) in emotional speech, using larger sample sizes and real-life interaction scenarios. The success of prosody in conveying negative sentiment to humans may be improved with additional non-verbal cues (e.g., coloured light or motion). More research is needed to determine how these cues may be combined with prosody and which combination is most effective in human-robot affective interaction.
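For context, the four prosodic elements the review found most common (pitch, volume, speed and pauses) are exactly the parameters exposed by the W3C SSML standard used by most text-to-speech engines. The minimal Python sketch below illustrates how a robot's utterance might be wrapped in SSML prosody markup for two emotions; the emotion-to-prosody mappings are illustrative assumptions for demonstration only, not parameters reported by any of the reviewed studies.

```python
# Illustrative sketch: map the review's four common prosodic elements
# (pitch, volume, speech rate, pauses) onto W3C SSML markup.
# The emotion -> prosody values below are assumed for demonstration,
# not drawn from the reviewed papers.
EMOTION_PROSODY = {
    "happy": {"pitch": "+15%", "rate": "110%", "volume": "loud", "pause_ms": 150},
    "sad":   {"pitch": "-10%", "rate": "80%",  "volume": "soft", "pause_ms": 600},
}

def to_ssml(text: str, emotion: str) -> str:
    """Wrap `text` in SSML <prosody> markup for the given emotion."""
    p = EMOTION_PROSODY[emotion]
    return (
        "<speak>"
        f'<prosody pitch="{p["pitch"]}" rate="{p["rate"]}" volume="{p["volume"]}">'
        f"{text}"
        f'<break time="{p["pause_ms"]}ms"/>'  # pause after the utterance
        "</prosody>"
        "</speak>"
    )

if __name__ == "__main__":
    print(to_ssml("I am glad to see you.", "happy"))
    print(to_ssml("I am sorry to hear that.", "sad"))
```

Any SSML-capable speech synthesiser could then render the same sentence with audibly different affect by varying only these four parameters.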
Funder
Ministry of Trade, Industry and Energy
University of Auckland
Publisher
Springer Science and Business Media LLC
Subject
General Computer Science, Human-Computer Interaction, Philosophy, Electrical and Electronic Engineering, Control and Systems Engineering, Social Psychology
Cited by
6 articles.