Abstract
This paper aims, first, to argue against using opaque AI technologies in decision-making processes, and second, to suggest that we need a qualitative form of understanding of them. It first argues that opaque artificially intelligent technologies are suitable only for users who remain indifferent to understanding the decisions made by means of these technologies. According to virtue ethics, this implies that such technologies are not well-suited for those who care about realizing their moral capacity. The paper then draws on discussions of scientific understanding to suggest that an AI technology becomes understandable to its users when they are provided with a qualitative account of the consequences of using it. Explainable AI methods can therefore render an AI technology understandable to its users by presenting the qualitative implications of employing the technology for their lives.
Publisher
Springer Science and Business Media LLC
Subject
General Earth and Planetary Sciences
Cited by
2 articles.