Affiliation:
1. University of Genoa, Italy
Abstract
Human-machine interaction relies on input devices such as the keyboard, the touch-screen, or speech-to-text applications. A speech-to-text application, for example, is software that translates spoken words into text. These tools convey the explicit message but ignore implicit ones, such as the speaker's emotional state, thereby filtering out part of the information available in the interaction. This chapter focuses on emotion detection. An emotion-aware device can interact more personally with its owner and react appropriately to the user's mood, making user-machine interaction less stressful. The chapter gives guidelines for building emotion-aware smartphone applications that work in an opportunistic way (i.e., without the user's collaboration). Because smartphone applications may be employed in different contexts, the emotions to be detected may differ from one context to another.