Abstract
Who is responsible for the events and consequences caused by the use of artificially intelligent tools, and is there a gap between what human agents can be responsible for and what is done using artificial intelligence? Both questions presuppose that the term ‘responsibility’ is a good tool for analysing the moral issues surrounding artificial intelligence. This article draws this presupposition into doubt and shows how reference to responsibility obscures the complexity of moral situations and moral agency, which can be analysed with a more differentiated toolset of moral terminology. It suggests that the impression of responsibility gaps arises only if we gloss over the complexity of the moral situation in which artificially intelligent tools are employed and if, counterfactually, we ascribe to them some kind of pseudo-agential status.
Funder
Forschungszentrum Jülich GmbH
Publisher
Springer Science and Business Media LLC
Subject
General Earth and Planetary Sciences
Cited by
3 articles.