Abstract
This paper addresses a thus-far neglected dimension of human-artificial intelligence (AI) augmentation: machine-induced reflections. By establishing a grounded, theoretically informed model of machine-induced reflection, we contribute to the ongoing discussion of AI in information systems (IS) research and to research on reflection theories. In our multistage study, physicians used a machine learning (ML)-based clinical decision support system (CDSS) in an X-ray diagnosis task, allowing us to examine whether and how this interaction can stimulate reflective practice. By analyzing verbal protocols, performance metrics, and survey data, we developed an integrative theoretical foundation that explains how ML-based systems can help stimulate reflective practice. Individuals engage in more critical or shallower modes of reflection depending on whether they perceive a conflict or an agreement with the CDSS, which in turn leads to different depths of reflection. By uncovering the process of machine-induced reflection, we offer IS research a different perspective on how such AI-based systems can help individuals become more reflective, and consequently more effective, professionals. This perspective stands in stark contrast to the traditional, efficiency-focused view of ML-based decision support systems and also enriches theories of human-AI augmentation.