Authors:
Martin Lukac, Gulnaz Zhambulova, Kamila Abdiyeva, Michael Lewis
Abstract
Human-machine communication can be substantially enhanced by high-quality, real-time recognition of spontaneous human emotional expressions. However, recognition of such expressions can be negatively impacted by factors such as sudden variations in lighting or intentional obfuscation. Reliable recognition is more substantively impeded by the observation that the presentation and meaning of emotional expressions can vary significantly with the culture of the expressor and the environment in which the emotions are expressed. For example, an emotion recognition model trained on a regionally specific database collected in North America might fail to recognize standard emotional expressions from another region, such as East Asia. To address the problem of regional and cultural bias in emotion recognition from facial expressions, we propose a meta-model that fuses multiple emotional cues and features. The proposed approach integrates image features, action units, micro-expressions, and macro-expressions into a multi-cues emotion model (MCAM). Each facial attribute incorporated into the model represents a specific category: fine-grained content-independent features, facial muscle movements, short-term facial expressions, and high-level facial expressions. The results of the proposed meta-classifier (MCAM) approach show that (a) successful classification of regional facial expressions is based on non-sympathetic features, (b) learning the emotional facial expressions of one regional group can confound the recognition of emotional expressions of other regional groups unless it is done from scratch, and (c) certain facial cues and features of the data-sets preclude the design of a perfect unbiased classifier. From these observations we posit that to learn certain regional emotional expressions, other regional expressions first have to be "forgotten".
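The abstract describes a meta-classifier that fuses scores from four cue categories (image features, action units, micro-expressions, macro-expressions). A minimal stacking sketch of that idea is shown below; it is not the authors' MCAM implementation, and the names `cue_scores` and `meta_classify`, the random stand-in scores, and the seven-class label set are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_EMOTIONS = 7   # assumed label set, e.g. the basic emotion categories
N_CUES = 4       # the four cue categories named in the abstract

def cue_scores(face):
    # Placeholder base classifiers: one score vector per cue category.
    # A real system would run a trained model per cue (image features,
    # action units, micro- and macro-expressions) on the input face.
    return [rng.random(N_EMOTIONS) for _ in range(N_CUES)]

def meta_classify(face, weights):
    # Meta-model: a learned linear fusion over the stacked cue scores.
    stacked = np.concatenate(cue_scores(face))   # shape (N_CUES * N_EMOTIONS,)
    logits = weights @ stacked                   # shape (N_EMOTIONS,)
    return int(np.argmax(logits))

# Random fusion weights for illustration; in stacking these would be
# trained on held-out outputs of the base classifiers.
weights = rng.random((N_EMOTIONS, N_CUES * N_EMOTIONS))
label = meta_classify(face=None, weights=weights)
assert 0 <= label < N_EMOTIONS
```

The design point is that the meta-layer sees all cue categories at once, so it can learn which cues are reliable for which regional group rather than committing to a single feature type.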
Publisher
Springer Science and Business Media LLC
Cited by: 3 articles.