Abstract
The suggestion has been made that future advanced artificial intelligence (AI) that passes some consciousness-related criteria should be treated as having moral status, and that humans would therefore have an ethical obligation to consider its well-being. In this paper, the author discusses the extent to which software and robots already pass proposed criteria for consciousness, and argues against granting moral status to AI on the grounds that malware authors may design malware to fake consciousness. Indeed, the article warns that malware authors have stronger incentives than authors of legitimate software to create code that passes some of the criteria. Thus, code that appears to be benign, but is in fact malware, might become the most common form of software to be treated as having moral status.
Publisher
Cambridge University Press (CUP)
Subject
Health Policy; Issues, Ethics and Legal Aspects; Health (Social Science)
Cited by
4 articles.