Affiliation:
1. HCI Lab, Business Hall of Yonsei University, Seoul 03722, Republic of Korea
Abstract
Data are among the most important factors in artificial intelligence (AI). Moreover, for AI to understand the user and move beyond the role of a simple machine, it requires the data contained in the user’s self-disclosure. In this study, two types of robot self-disclosure (disclosing robot utterance, involving user utterance) are proposed to elicit higher self-disclosure from AI users. This study also examines the moderating effects of multi-robot conditions. To investigate these effects empirically and strengthen the implications of the research, a field experiment with prototypes was conducted in the context of children’s use of smart speakers. The results indicate that both types of robot self-disclosure were effective in eliciting children’s self-disclosure. The direction of the interaction effect between the disclosing-robot and involving-user types differed depending on the sub-dimension of the user’s self-disclosure. Multi-robot conditions partially moderated the effects of the two types of robot self-disclosure.
Funder
Basic Science Research Program through the National Research Foundation of Korea
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry