Affiliation:
1. School of Public Administration, University of Nebraska at Omaha, Omaha, NE, USA
2. Department of Management Information Systems, National Chengchi University, Taipei, Taiwan
Abstract
Artificial intelligence (AI) applications in public services are an emerging and crucial issue in the modern world. Many countries use AI-enabled systems to serve citizens and deliver public services. Although AI can improve efficiency and responsiveness, the technology raises concerns about privacy and social inequality. From the perspective of behavioral public administration (BPA), citizens’ use of AI-enabled systems depends on their perceptions of the technology. This study proposes a conceptual framework connecting citizens’ perceptions, trust, and intention to follow recommendations from a government-supported AI-enabled recommendation system during the pandemic. We conducted an online experimental survey and analyzed the data with partial least squares structural equation modeling (PLS-SEM). The findings suggest that algorithmic transparency increases trust in the recommendations, whereas privacy concerns decrease trust when the system requests sensitive information. Additionally, citizens familiar with technology are more likely to trust the recommendations under a feature-based communication strategy. Finally, trust in the recommendations mediates the effects of citizens’ perceptions of the AI system. This study clarifies the effects of perceptions, identifies the role of trust, and explores communication strategies shaping citizens’ intention to follow the recommendations of AI-enabled systems. The results deepen AI research in public administration and offer policy suggestions to help the public sector develop strategies that increase compliance with system recommendations.
Subject
Public Administration, Sociology and Political Science
Cited by
1 article.