Abstract
Many recent AI policies have been structured under labels that follow a particular trend: national or international guidelines, policies or regulations, such as the EU's and USA's 'Trustworthy AI' and China's and India's adoption of 'Responsible AI', use a label that follows the recipe of [agentially loaded notion + 'AI']. A result of this branding, even if implicit, is to encourage laypeople to apply these agentially loaded notions to the AI technologies themselves. Yet these notions are appropriate only when applied to agents, which current AI technologies are not; the concern is that this misapplication creates an incentive to inappropriately attribute trustworthiness or responsibility to AI technologies. We endeavour to show that we have good reason to avoid any general AI policy that uses agentially loaded labelling. We suggest labelling these policies not in terms of some qualification of AI, but rather in terms of our approach to the technology and its wider development and use context – focusing on being trustworthy and responsible about AI, rather than on trustworthy or responsible AI.