Affiliations:
1. School of Economics and Management, Beijing Institute of Petrochemical Technology, Beijing, China
2. International Business School Suzhou, Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu Province, China
3. Carson College of Business, Washington State University, Pullman, Washington, USA
Abstract
This paper explores human trust in artificial intelligence (AI), focusing on the effects of social categorization (ingroup vs. outgroup) and AI human-likeness in two pre-registered studies of 160 participants each. Study 1, a laboratory experiment conducted in China, and Study 2, an online experiment with a sample representative of the United States, both used a trust game to assess trust across four conditions: ingroup humanoid AI, ingroup non-humanoid AI, outgroup humanoid AI, and outgroup non-humanoid AI. Mixed-design ANOVA revealed significant main effects and interactions: participants placed significantly higher trust in ingroup and humanoid AIs. Study 2 further identified emotional connection as a mediator of trust, suggesting design implications for AI in trust-critical sectors such as healthcare and autonomous transportation.
Funder
Beijing Municipal Social Science Foundation