Affiliation:
1. School of Mathematical and Computer Sciences, Heriot-Watt University, Dubai, UAE
Abstract
Expert systems are extensively used to make critical decisions involving emotion analysis in affective computing. The evolution of deep learning algorithms has improved the potential for extracting value from multimodal emotional data. However, these black-box algorithms often do not explain the heuristics behind processing the input features to achieve certain outputs. This study focuses on the risks of using black-box deep learning models for critical tasks such as emotion recognition, and describes why human-understandable interpretations of the workings of these models are extremely important. The study utilizes one of the largest multimodal datasets available, CMU-MOSEI. Many researchers have fed the pre-extracted features provided by the CMU Multimodal SDK into black-box deep learning models, making it difficult to interpret the contribution of individual features. This study describes the implications of significant features from the audio, video, and text modalities identified using XAI in multimodal emotion recognition, and the process of curating reduced-feature models with the Gradient SHAP XAI method. These reduced models, built from the most highly contributing features, achieve comparable and at times even better results than their corresponding all-feature models as well as the baseline model GraphMFN. The study reveals that carefully selecting significant features can filter out irrelevant ones and attenuate the noise or bias they introduce, improving the performance and efficiency of expert systems by making them transparent, easily interpretable, and trustworthy.
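The feature-selection workflow described above can be sketched in self-contained NumPy. Note the assumptions: the paper applies Gradient SHAP to deep models over CMU-MOSEI features (e.g., via a library such as Captum), whereas the toy scalar model, the finite-difference gradients, and the top-k cutoff below are illustrative stand-ins. Gradient SHAP averages gradients at random points interpolated between the input and sampled baselines, scaled by the input-baseline difference; features with the largest absolute attributions are then retained for the reduced model.

```python
import numpy as np

def gradient_shap(f, x, baselines, n_samples=50, eps=1e-5, seed=None):
    """Approximate Gradient SHAP attributions for a scalar-valued model f.

    Averages finite-difference gradients of f at random points on the
    paths between x and randomly chosen baselines, each scaled by
    (x - baseline), and returns the per-feature mean."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    attr = np.zeros(d)
    for _ in range(n_samples):
        b = baselines[rng.integers(len(baselines))]
        z = b + rng.random() * (x - b)        # random point on the path
        grad = np.zeros(d)
        for i in range(d):                    # central-difference gradient
            zp, zm = z.copy(), z.copy()
            zp[i] += eps
            zm[i] -= eps
            grad[i] = (f(zp) - f(zm)) / (2 * eps)
        attr += grad * (x - b)
    return attr / n_samples

# Hypothetical toy "model": only the first two features influence the output.
f = lambda v: 3.0 * v[0] - 2.0 * v[1]
x = np.array([1.0, 1.0, 1.0, 1.0])
baselines = np.zeros((5, 4))                  # all-zero reference inputs

attr = gradient_shap(f, x, baselines, n_samples=100, seed=0)
top_k = np.argsort(-np.abs(attr))[:2]         # indices of the most
                                              # influential features
```

For this linear toy model the attributions recover the weights exactly (3 and -2 on the first two features, 0 elsewhere), so `top_k` selects features 0 and 1; a reduced model would then be retrained on only those columns.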
Subject
Artificial Intelligence, Computational Theory and Mathematics, Theoretical Computer Science, Control and Systems Engineering
References (72 articles).
1. A2Zadeh. (2018). A2Zadeh/CMU-MultimodalSDK: CMU Multimodal SDK is a machine learning platform for the development of advanced multimodal models as well as for easily accessing and processing multimodal datasets.
2. Multimodal Video Sentiment Analysis Using Deep Learning Approaches, a Survey
3. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
4. Akhtar, M. (2019). Multi-task learning for multi-modal emotion recognition and sentiment analysis. Computation and Language.
5. Multimodal Language Analysis in the Wild: CMU-MOSEI Dataset and Interpretable Dynamic Fusion Graph
Cited by: 5 articles.