Affiliation:
1. Affective Engineering and Computer Arts Laboratory, Graduate School of Information Science and Engineering, Ritsumeikan University, 2-150 Iwakura-cho, Ibaraki 567-8570, Japan
Abstract
As social robots become more prevalent, they often employ non-speech sounds, alongside other modes of communication, to convey emotion and intention in increasingly complex visual and auditory environments. These non-speech sounds are usually tailor-made, and research into generating non-speech sounds that convey emotion has been limited. To enable social robots to use a large number of non-speech sounds in a natural and dynamic way, while expressing a wide range of emotions effectively, this work proposes an automatic sound-generation method based on a genetic algorithm, coupled with a random forest model trained on representative non-speech sounds to validate each generated sound's ability to express emotion. The generated sounds were evaluated in an experiment in which subjects rated their perceived valence and arousal. Statistically significant clusters of sounds in the valence-arousal space corresponded to different emotions, showing that the proposed method generates sounds that can readily be used in social robots.
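The pipeline the abstract describes — a genetic algorithm evolving sound-synthesis parameters, with a trained random forest scoring each candidate's predicted emotional effect — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the parameter encoding, population settings, and the toy `predict_emotion` stand-in (which in the paper would be the random forest trained on representative non-speech sounds) are all assumptions.

```python
import random

random.seed(0)  # reproducibility for this sketch

# Hypothetical encoding: each candidate sound is a vector of synthesis
# parameters (e.g. pitch, duration, modulation depth) normalised to [0, 1].
N_PARAMS = 5
POP_SIZE = 20
GENERATIONS = 30

def predict_emotion(params):
    """Stand-in for the trained random forest model: maps a parameter
    vector to a (valence, arousal) estimate. A fixed toy function here;
    in the paper this is learned from representative non-speech sounds."""
    valence = sum(params) / len(params)
    arousal = params[0] * params[1]
    return valence, arousal

def fitness(params, target=(0.8, 0.6)):
    """Negative squared distance to a target point in valence-arousal
    space; maximising this drives sounds toward the desired emotion."""
    v, a = predict_emotion(params)
    return -((v - target[0]) ** 2 + (a - target[1]) ** 2)

def crossover(a, b):
    """Single-point crossover of two parameter vectors."""
    point = random.randrange(1, N_PARAMS)
    return a[:point] + b[point:]

def mutate(params, rate=0.2):
    """Gaussian perturbation of each parameter with probability `rate`,
    clipped back into [0, 1]."""
    return [min(1.0, max(0.0, p + random.gauss(0, 0.1)))
            if random.random() < rate else p
            for p in params]

def evolve():
    """Elitist genetic algorithm: keep the best half, refill the
    population with mutated offspring of elite parents."""
    pop = [[random.random() for _ in range(N_PARAMS)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:POP_SIZE // 2]
        children = [mutate(crossover(random.choice(elite),
                                     random.choice(elite)))
                    for _ in range(POP_SIZE - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()  # parameter vector for the best-scoring candidate sound
```

Under this scheme, each target emotion corresponds to a point (or region) in valence-arousal space, and the evolved parameter vector would then drive a synthesizer to render the actual sound.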