Abstract
During communication, humans express their emotional states through various modalities (e.g., facial expressions and gestures) and estimate the emotional states of others by attending to multimodal signals. For a communication robot with limited resources to attend to such multimodal signals, the main challenge is selecting the most effective modalities among those expressed. In this study, we propose an active perception method that selects the most informative modalities using a criterion based on energy minimization. This energy-based model learns the probability of a network state through energy values, whereby a lower energy corresponds to a higher probability of the state. A multimodal deep belief network, which is an energy-based model, was employed to represent the relationships between emotional states and multimodal sensory signals. Compared with other active perception methods, the proposed approach achieved higher accuracy from limited information in several contexts associated with affective human–robot interaction. We present the differences and advantages of our method relative to other methods through mathematical formulations, for example, using information gain as a criterion. Further, we evaluate the performance of our method against active inference, which is based on the free-energy principle. We establish that our method performs better in tasks involving mutually correlated multimodal information.
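The selection criterion described above can be illustrated with a minimal sketch. The snippet below uses a single restricted Boltzmann machine (a building block of deep belief networks) with random, untrained parameters, three hypothetical modalities ("face", "voice", "gesture"), and a 0.5 imputation for unobserved units; all of these names and choices are illustrative assumptions, not the paper's actual architecture or data. The idea shown is only the criterion itself: reveal one candidate modality at a time and attend to the one whose observation minimizes the model's free energy (i.e., maximizes state probability).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative RBM parameters (random here; a real model would be trained).
n_visible, n_hidden = 6, 4
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b = np.zeros(n_visible)   # visible biases
c = np.zeros(n_hidden)    # hidden biases

def free_energy(v):
    """RBM free energy F(v) = -b.v - sum_j softplus(c_j + v.W_j).
    Lower F(v) corresponds to higher model probability of v."""
    return -v @ b - np.sum(np.logaddexp(0.0, c + v @ W))

# Hypothetical modalities, each owning two visible units.
modalities = {"face": [0, 1], "voice": [2, 3], "gesture": [4, 5]}
observation = rng.integers(0, 2, size=n_visible).astype(float)

def energy_if_attended(name):
    v = np.full(n_visible, 0.5)   # unobserved units imputed at max uncertainty
    idx = modalities[name]
    v[idx] = observation[idx]     # reveal only the attended modality
    return free_energy(v)

# Active perception: attend to the modality that minimizes free energy.
scores = {name: energy_if_attended(name) for name in modalities}
best = min(scores, key=scores.get)
print(best, scores)
```

With a trained model, the same loop would rank candidate modalities by how much each observation lowers the network's energy, which is the sense in which lower energy stands in for higher informativeness here.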
Subject
Artificial Intelligence, Computer Science Applications
Cited by
7 articles.