Coarse ethics: how to ethically assess explainable artificial intelligence
Published: 2021-09-12
ISSN: 2730-5953
Container-title: AI and Ethics
Short-container-title: AI Ethics
Language: en
Author:
Izumo, Takashi (ORCID); Weng, Yueh-Hsuan
Abstract
The integration of artificial intelligence (AI) into human society mandates that its decision-making processes be explicable to users, as exemplified in Asimov’s Three Laws of Robotics. Such human interpretability calls for explainable AI (XAI), of which this paper cites various models. However, the relationship between computable accuracy and human interpretability can be a trade-off, requiring answers to questions about the negotiable conditions and the degree of AI prediction accuracy that may be sacrificed to enable user interpretability. Extant research has focussed on technical issues, but it is also desirable to apply a branch of ethics to deal with this trade-off problem. This scholarly domain is labelled coarse ethics in this study, which discusses two issues vis-à-vis AI prediction as a type of evaluation. First, which formal conditions would allow trade-offs? The study posits two minimal requisites: adequately high coverage and order-preservation. The second issue concerns the conditions that could justify the trade-off between computable accuracy and human interpretability, to which the study suggests two justification methods: impracticability and adjustment of perspective from machine-computable to human-interpretable. This study contributes by connecting ethics to autonomous systems for future regulation by formally assessing the adequacy of AI rationales.
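As an illustrative sketch (not taken from the paper itself), the two formal requisites can be read as properties of a coarsening map from fine-grained scores to human-readable grades: the map should be defined on essentially the whole score range (coverage), and a higher score should never receive a lower grade (order-preservation). The Python snippet below is a minimal, hypothetical check of these two properties; the grade labels and thresholds are assumptions chosen for illustration only.

# Illustrative sketch only: a hypothetical coarsening of fine-grained scores
# (e.g., model confidence in [0, 1]) into coarse grades, with checks for the
# two formal requisites named in the abstract:
#   1. adequately high coverage  -- the map is defined on (almost) all inputs
#   2. order-preservation        -- higher scores never receive lower grades

# Hypothetical grade bands (thresholds are assumptions, not from the paper).
GRADE_BANDS = [(0.9, "A"), (0.7, "B"), (0.5, "C"), (0.0, "D")]
GRADE_RANK = {"A": 3, "B": 2, "C": 1, "D": 0}

def coarsen(score: float) -> str:
    """Map a fine-grained score in [0, 1] to a coarse grade."""
    for threshold, grade in GRADE_BANDS:
        if score >= threshold:
            return grade
    raise ValueError(f"score {score} outside the covered range")

def coverage(scores) -> float:
    """Fraction of observed scores on which the coarsening is defined."""
    covered = sum(1 for s in scores if 0.0 <= s <= 1.0)
    return covered / len(scores)

def order_preserving(scores) -> bool:
    """Check that a higher score never receives a strictly lower grade."""
    pairs = sorted((s, GRADE_RANK[coarsen(s)]) for s in scores if 0.0 <= s <= 1.0)
    ranks = [rank for _, rank in pairs]
    return all(a <= b for a, b in zip(ranks, ranks[1:]))

if __name__ == "__main__":
    sample = [0.95, 0.82, 0.71, 0.40, 0.66]
    print("coverage:", coverage(sample))                  # 1.0 -> adequately high
    print("order-preserving:", order_preserving(sample))  # True

In this reading, a coarse evaluation that satisfies both checks loses numeric precision but never reverses the ranking implied by the underlying scores, which is the sense in which the trade-off discussed in the abstract can remain formally acceptable.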
Publisher
Springer Science and Business Media LLC
Cited by
4 articles.