Abstract
Emotion recognition in conversations is an important step for virtual chatbots that require opinion-based feedback, such as in social media threads, online support, and many other applications. Current emotion-recognition-in-conversations models face issues such as: (a) loss of contextual information between two dialogues of a conversation, (b) failure to give appropriate importance to significant tokens in each utterance, and (c) inability to pass on emotional information from previous utterances. The proposed model, Advanced Contextual Feature Extraction (AdCOFE), addresses these issues by performing feature extraction using knowledge graphs, sentiment lexicons, and natural-language phrases at all levels (word and position embedding) of the utterances. Experiments on emotion-recognition-in-conversations datasets show that AdCOFE is beneficial in capturing emotions in conversations.
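The abstract mentions enriching word- and position-level embeddings with sentiment-lexicon signals. As a rough illustration only (not the authors' actual pipeline), the sketch below concatenates a word embedding, a transformer-style sinusoidal position encoding, and a lexicon sentiment score per token; the lexicon, embedding table, and dimensions are all hypothetical placeholders.

```python
import numpy as np

# Toy sentiment lexicon (hypothetical; the paper uses real lexicons and knowledge graphs)
SENTIMENT_LEXICON = {"great": 1.0, "sad": -1.0, "fine": 0.3}

def sinusoidal_position(pos, dim):
    """Transformer-style positional encoding for a single position."""
    enc = np.zeros(dim)
    for i in range(0, dim, 2):
        enc[i] = np.sin(pos / 10000 ** (i / dim))
        if i + 1 < dim:
            enc[i + 1] = np.cos(pos / 10000 ** (i / dim))
    return enc

def utterance_features(tokens, embed, dim=8):
    """Concatenate word embedding, position encoding, and lexicon score per token."""
    feats = []
    for pos, tok in enumerate(tokens):
        word_vec = embed.get(tok, np.zeros(dim))   # fall back to zeros for OOV tokens
        pos_vec = sinusoidal_position(pos, dim)
        senti = np.array([SENTIMENT_LEXICON.get(tok, 0.0)])
        feats.append(np.concatenate([word_vec, pos_vec, senti]))
    return np.stack(feats)

rng = np.random.default_rng(0)
embed = {w: rng.normal(size=8) for w in ["i", "feel", "great", "today"]}
X = utterance_features(["i", "feel", "great", "today"], embed)
print(X.shape)  # (4, 17): 4 tokens, 8 + 8 + 1 features each
```

Any downstream sequence model (e.g. an attention layer) can then weight the tokens whose combined features carry the strongest emotional signal.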