Abstract
The high predictive accuracy of contemporary machine learning-based AI systems has led some scholars to argue that, in certain cases, we should grant them epistemic expertise and authority over humans. On this view, humans would have an epistemic obligation to rely on the predictions of a highly accurate AI system. Contrary to this view, in this work we claim that AI systems cannot be endowed with genuine epistemic expertise. Drawing on accounts of expertise and authority from virtue epistemology, we show that epistemic expertise requires a relation to understanding that AI systems do not satisfy, and intellectual abilities that these systems do not manifest. Further, following Distributed Cognition theory and adapting Croce's account of the virtues of collective epistemic agents to the case of human-AI interactions, we show that, if an AI system is successfully appropriated by a human agent, a hybrid epistemic agent emerges, which can become both an epistemic expert and an authority. Consequently, we claim that this hybrid agent is the appropriate object of a discourse around trust in AI and of the epistemic obligations that stem from its epistemic superiority.
Funder
Swiss Federal Institute of Technology Zurich
Publisher
Springer Science and Business Media LLC
References (61 articles)
1. Alvarado, R. (2022). What kind of trust does AI deserve, if any? AI and Ethics, 3, 1–15.
2. Alvarado, R. (2023). AI as an epistemic technology. Science and Engineering Ethics, 29(5), 1–30.
3. Amann, J., Blasimme, A., Vayena, E., Frey, D., Madai, V. I., & the Precise4Q Consortium. (2020). Explainability for artificial intelligence in healthcare: A multidisciplinary perspective. BMC Medical Informatics and Decision Making, 20, 1–9.
4. Bansal, G., Wu, T., Zhou, J., Fok, R., Nushi, B., Kamar, E., Ribeiro, M. T., & Weld, D. (2021). Does the whole exceed its parts? The effect of AI explanations on complementary team performance. In Proceedings of the 2021 CHI conference on human factors in computing systems (pp. 1–16).
5. Benk, M., Tolmeijer, S., von Wangenheim, F., & Ferrario, A. (2022). The value of measuring trust in AI: A socio-technical system perspective. arXiv:2204.13480