Abstract
It might become possible to build artificial minds with the capacity for experience. This raises a plethora of ethical issues, which have been explored, among other contexts, in discussions of whole brain emulations (WBE). In this paper, I take up the problem of vulnerability, which conscious emulations will likely exhibit and which has, for various reasons, received less attention in the literature. Specifically, I examine the role that vulnerability plays in generating the ethical issues that may arise when dealing with WBEs. I argue that concerns about vulnerability are more a matter of institutional design than of individual ethics, both for humanlike brain emulations and for animal-like ones. Consequently, the article reflects on some institutional measures that can be taken to protect the sims' interests. It concludes that an institutional framework more likely to succeed in this task is competitive and polycentric, rather than monopolistic and centralized.
Publisher
Springer Science and Business Media LLC
Subject
Management of Technology and Innovation; Health Policy; Issues, Ethics and Legal Aspects; Health (Social Science)
Cited by 1 article.