Abstract
This paper aims to show that the experience of ‘primary recognition’ (O’Hara in Moral certainty and the foundations of morality, Palgrave Macmillan, London, 2018) can be extended to human-AI interactions. That is, I argue that human beings can (and do) experience non-rational, reflex moral responses to AI and social robots that fit O’Hara’s description of primary recognition. I give two plausible examples: one involving a military mine-sweeping robot, the other a robotic toy dinosaur called a ‘Pleo’. These experiences of primary recognition do not, however, settle the question of whether any particular AI can be considered a true moral patient or a ‘person’.
Publisher
Springer Science and Business Media LLC
References (28 articles)
1. Branham, S., Weaver, M.: Re/framing virtual conversational partners: a feminist critique and tentative move towards a new design paradigm. In: Design, User Experience, and Usability: Users and Interactions: Proceedings (Part II) of the 4th International Conference, DUXU, pp. 172–183. Springer (2015)
2. Cushman, F., Gray, K., Gaffey, A., Mendes, W.B.: Simulating murder: the aversion to harmful action. Emotion 12(1), 2–7 (2012)
3. Darling, K.: Extending legal protection to social robots: the effects of anthropomorphism, empathy, and violent behavior toward robotic objects. In: Calo, R., Froomkin, A.M., Kerr, I. (eds.) Robot Law, pp. 213–231. Edward Elgar Publishing (2016)
4. Faggella, D.: Calling Siri names? You’re not alone—a closer look at misuse of AI agents. https://emerj.com/ai-podcast-interviews/calling-siri-names-youre-not-alone-a-closer-look-at-misuse-of-ai-agents/ (2019). Accessed March 2023
5. Garreau, J.: Bots on the ground: in the field of battle (or even above it), robots are a soldier’s best friend. Washington Post (2007)
Cited by
1 article.