Affiliation:
1. Department of Informatics, University of Piraeus, Piraeus 185 34, Greece
2. Department of Restoration and Conservation of Cultural Heritage, Technological Educational Institute of the Ionian Islands, 2 Kalvou Sq., 291 00 Zakynthos, Greece
Abstract
In this paper, we present and discuss a novel approach for the integration of audio-lingual and visual-facial modalities in a bi-modal user interface for affect recognition. As is widely acknowledged, two or more modalities of interaction can provide complementary information with respect to affect recognition. However, satisfactory progress has not yet been achieved towards integrating these modalities, since the problem of combining them effectively is quite complicated. In our research, we combine the two modalities from the perspective of a human observer by employing a multi-criteria decision making theory for dynamic affect recognition of computer users. An important prerequisite of our approach is the specification of the strengths and weaknesses of each modality with respect to the recognition of six basic emotion states. These emotion states are happiness, sadness, surprise, anger, and disgust, as well as the emotionless state, which we refer to as neutral. To this end, we describe two empirical studies that we conducted with human users and human observers on the recognition of emotions from the audio-lingual and visual-facial modalities. The results of these studies were used to assign weights to the criteria for the application of the multi-criteria decision making theory. Moreover, the results of the empirical studies provide information that may be useful to other researchers in the field of affect recognition.
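To make the fusion step concrete, the sketch below illustrates one common multi-criteria decision making scheme, simple additive weighting, applied to per-emotion confidence scores from the two modalities. The abstract does not publish the paper's actual weights or aggregation rule, so the weight values, score values, and the function name fuse here are illustrative assumptions only.

    # A minimal sketch of bi-modal affect fusion via simple additive
    # weighting, one common multi-criteria decision making scheme.
    # All numeric weights below are hypothetical stand-ins for the
    # empirically derived strengths/weaknesses of each modality.

    EMOTIONS = ["neutral", "happiness", "sadness", "surprise", "anger", "disgust"]

    # Hypothetical per-emotion weight of the audio-lingual modality;
    # the visual-facial modality receives the complementary weight.
    AUDIO_WEIGHTS = {"neutral": 0.5, "happiness": 0.4, "sadness": 0.6,
                     "surprise": 0.3, "anger": 0.6, "disgust": 0.4}
    VISUAL_WEIGHTS = {e: 1.0 - w for e, w in AUDIO_WEIGHTS.items()}

    def fuse(audio_scores: dict, visual_scores: dict) -> str:
        """Return the emotion with the highest weighted sum of modality scores."""
        combined = {e: AUDIO_WEIGHTS[e] * audio_scores[e]
                       + VISUAL_WEIGHTS[e] * visual_scores[e]
                    for e in EMOTIONS}
        return max(combined, key=combined.get)

    # Example: audio weakly suggests anger, the face strongly suggests surprise.
    audio = {"neutral": 0.2, "happiness": 0.1, "sadness": 0.1,
             "surprise": 0.2, "anger": 0.3, "disgust": 0.1}
    visual = {"neutral": 0.1, "happiness": 0.1, "sadness": 0.05,
              "surprise": 0.6, "anger": 0.1, "disgust": 0.05}
    print(fuse(audio, visual))  # -> "surprise"

In this toy run, the visual-facial channel's high weight for surprise outweighs the audio channel's preference for anger, mirroring the idea that each modality should dominate for the emotions it recognizes most reliably.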
Publisher
World Scientific Pub Co Pte Lt
Subject
Artificial Intelligence
Cited by
17 articles.