CNN-Based Models for Emotion and Sentiment Analysis Using Speech Data

Author:

Madan Anjum¹, Kumar Devender²

Affiliation:

1. NSUT, New Delhi, India

2. Information Technology, NSUT, New Delhi, India

Abstract

This study presents an in-depth Sentiment Analysis (SA) grounded in the emotions present in speech signals. Web-based applications of all kinds, from social media platforms and video-sharing sites to e-commerce services, now provide Human-Computer Interfaces (HCIs) that let users share their experiences in many forms, such as text, audio, video, and GIFs. The most natural and fundamental form of self-expression is speech. Speech-Based Sentiment Analysis (SBSA) is the task of gaining insight into speech signals; it classifies a statement as neutral, negative, or positive. Speech Emotion Recognition (SER), in contrast, categorizes speech signals into the emotions disgust, fear, sadness, anger, happiness, and neutral. It is necessary to recognize the sentiment along with the depth of the emotions in the speech signal. To this end, the proposed methodology defines a text-oriented SA model combining CNN and Bi-LSTM techniques with an embedding layer, applied to text obtained from speech signals, achieving an accuracy of 84.49%. The methodology also proposes an Emotion Analysis (EA) model based on a CNN that identifies the type of emotion present in the speech signal, with an accuracy of 95.12%. The presented architecture can also be applied to other domains such as product review systems, video recommendation systems, education, health, and security.
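The text-branch architecture the abstract describes (embedding layer, CNN, then Bi-LSTM, then a three-way sentiment classifier) can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: all hyperparameters (vocabulary size, embedding dimension, channel counts, kernel size) are assumptions, and the paper does not specify the layer ordering details reproduced here.

```python
# Illustrative sketch of a CNN + Bi-LSTM sentiment classifier over an
# embedding layer, as described in the abstract. Hyperparameters are
# assumed for demonstration only.
import torch
import torch.nn as nn

class CnnBiLstmSentiment(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128,
                 conv_channels=64, lstm_hidden=64, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # 1-D convolution over the token axis extracts local n-gram features
        self.conv = nn.Conv1d(embed_dim, conv_channels,
                              kernel_size=3, padding=1)
        self.relu = nn.ReLU()
        # Bi-LSTM captures longer-range context in both directions
        self.bilstm = nn.LSTM(conv_channels, lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * lstm_hidden, num_classes)

    def forward(self, token_ids):              # (batch, seq_len)
        x = self.embed(token_ids)              # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)                  # (batch, embed_dim, seq_len)
        x = self.relu(self.conv(x))            # (batch, conv_channels, seq_len)
        x = x.transpose(1, 2)                  # (batch, seq_len, conv_channels)
        _, (h_n, _) = self.bilstm(x)           # h_n: (2, batch, lstm_hidden)
        h = torch.cat([h_n[0], h_n[1]], dim=1) # concat final fwd/bwd states
        return self.fc(h)                      # (batch, num_classes) logits

model = CnnBiLstmSentiment()
logits = model(torch.randint(0, 10000, (4, 20)))  # 4 utterances, 20 tokens
print(logits.shape)  # torch.Size([4, 3]): one score per sentiment class
```

The three output logits correspond to the negative/neutral/positive classes named in the abstract; the paper's EA branch is analogous but uses a CNN over acoustic features and six emotion classes.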

Publisher

Association for Computing Machinery (ACM)

