Authors
Agnes Axelsson, Gabriel Skantze
Abstract
Feedback is an essential part of all communication, and agents communicating with humans must be able to both give and receive feedback in order to ensure mutual understanding. In this paper, we analyse multimodal feedback given by humans towards a robot presenting a piece of art in a shared environment, similar to a museum setting. The data analysed consists of video and audio recordings of 28 participants, richly annotated both in terms of multimodal cues (speech, gaze, head gestures, facial expressions, and body pose) and in terms of the polarity of any feedback (negative, positive, or neutral). We train statistical and machine learning models on the dataset, and find that random forest models and multinomial regression models perform well at predicting the polarity of the participants' reactions. An analysis of the different modalities shows that most of the information is carried by the participants' speech and head gestures, while much less is found in their facial expressions, body pose, and gaze. An analysis of the timing of the feedback shows that most feedback is given when the robot pauses (and thereby invites feedback), but that the exact timing of the feedback does not affect its meaning.
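To make the modelling approach in the abstract concrete, the following is a minimal sketch of training a random forest classifier and a multinomial logistic regression model to predict feedback polarity from multimodal feature vectors. This is not the authors' actual code: it assumes scikit-learn, and the feature matrix and its dimensions are hypothetical stand-ins for the annotated speech, gaze, head-gesture, facial-expression, and body-pose cues.

# Minimal sketch (assumptions: scikit-learn is available; all data and
# feature columns are illustrative placeholders, not the paper's dataset).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: one row per annotated feedback instance; columns stand
# in for speech, head-gesture, facial-expression, body-pose and gaze cues.
X = rng.normal(size=(280, 12))
# Polarity labels: 0 = negative, 1 = neutral, 2 = positive.
y = rng.integers(0, 3, size=280)

# Random forest over the full multimodal feature set.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
print("Random forest accuracy:", cross_val_score(forest, X, y, cv=5).mean())

# Multinomial logistic regression as the statistical counterpart.
logreg = LogisticRegression(max_iter=1000)
print("Multinomial regression accuracy:", cross_val_score(logreg, X, y, cv=5).mean())

A per-modality comparison like the one reported in the abstract could then be approximated by retraining on subsets of the columns (e.g. only the speech and head-gesture features) and comparing cross-validated scores.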
Funder
Stiftelsen för Strategisk Forskning
Subject
Computer Science Applications, Computer Vision and Pattern Recognition, Human-Computer Interaction, Computer Science (miscellaneous)
Cited by
3 articles.