Author:
Patel Pavitra, Chaudhari A. A., Pund M. A., Deshmukh D. H.
Abstract
Speech emotion recognition is an important problem affecting human-machine interaction. Automatic recognition of emotion in speech aims at identifying the underlying emotional state of a speaker from the speech signal. Gaussian mixture models (GMMs) and the minimum error rate classifier (i.e., the Bayesian optimal classifier) are popular and effective tools for speech emotion recognition. Typically, GMMs are used to model the class-conditional distributions of acoustic features, and their parameters are estimated with the expectation-maximization (EM) algorithm on a training data set. In this paper, we introduce a boosting algorithm for reliably and accurately estimating the class-conditional GMMs; the resulting algorithm is named the Boosted-GMM algorithm. Our speech emotion recognition experiments show that recognition rates are effectively and significantly boosted by the Boosted-GMM algorithm compared to the EM-GMM algorithm.

During human-machine interaction, people have feelings they wish to convey to their communication partner, whether that partner is a human or a machine. This work addresses the recognition of human emotion from the speech signal.

Emotion recognition from a speaker's speech is difficult for several reasons. Different sentences, speakers, speaking styles, and speaking rates introduce acoustic variability. The same utterance may express different emotions in different portions, making these portions difficult to distinguish. Another problem is that emotional expression depends on the speaker and his or her culture and environment; as culture and environment change, speaking style changes as well, which is a further challenge for a speech emotion recognition system.
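The baseline pipeline described in the abstract (one class-conditional GMM per emotion fitted by EM, classification by the minimum error rate rule) can be illustrated with a short sketch. The sketch below is only a minimal illustration: it assumes scikit-learn's `GaussianMixture` as the EM-based estimator and uses synthetic stand-in feature vectors rather than real acoustic features (e.g. MFCC frames); the paper's Boosted-GMM re-weighting scheme is not reproduced here.

```python
# Minimal sketch of an EM-GMM emotion classifier: one GMM per emotion class
# fitted by EM, classification by maximum class-conditional log-likelihood
# plus log prior (the Bayes / minimum error rate rule).
# The emotion labels and feature data below are hypothetical placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic "acoustic feature" frames (13-dimensional, MFCC-like),
# grouped into two hypothetical emotion classes.
X_train = {
    "neutral": rng.normal(loc=0.0, scale=1.0, size=(500, 13)),
    "angry":   rng.normal(loc=1.5, scale=1.2, size=(500, 13)),
}

# Fit one class-conditional GMM per emotion with the EM algorithm.
gmms = {}
log_priors = {}
n_total = sum(len(X) for X in X_train.values())
for emotion, X in X_train.items():
    gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
    gmm.fit(X)  # parameters estimated by EM
    gmms[emotion] = gmm
    log_priors[emotion] = np.log(len(X) / n_total)

def classify(utterance_frames: np.ndarray) -> str:
    """Return the emotion whose GMM gives the highest total log-likelihood
    (plus log prior) summed over all frames of the utterance."""
    scores = {
        emotion: gmm.score_samples(utterance_frames).sum() + log_priors[emotion]
        for emotion, gmm in gmms.items()
    }
    return max(scores, key=scores.get)

# Example: classify a synthetic test utterance of 80 frames.
test_utterance = rng.normal(loc=1.4, scale=1.2, size=(80, 13))
print(classify(test_utterance))  # expected to print "angry" for this data
```

In this setup, boosting approaches such as the paper's Boosted-GMM would replace the single EM fit per class with an iteratively re-weighted ensemble of GMMs; the classification rule itself stays the same.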
Cited by
9 articles.
1. Speech emotion recognition using the novel SwinEmoNet (Shifted Window Transformer Emotion Network);International Journal of Speech Technology;2024-07-10
2. Semantic speech analysis using machine learning and deep learning techniques: a comprehensive review;Multimedia Tools and Applications;2023-12-19
3. Assessing Audio-Based Transformer Models for Speech Emotion Recognition;2023 7th International Symposium on Innovative Approaches in Smart Technologies (ISAS);2023-11-23
4. RNN-Based Method for Classifying Natural Human Emotional States from Speech;2023 25th International Conference on Digital Signal Processing and its Applications (DSPA);2023-03-29
5. Emotion Extraction from Speech using Deep Learning;2022 IEEE 20th Jubilee World Symposium on Applied Machine Intelligence and Informatics (SAMI);2022-03-02