Abstract
In this contribution, I start from Levy’s insightful neuroethical suggestion of distinguishing between the “slow-conscious responsibility” of us as persons and the “fast-unconscious responsiveness” of the sub-personal brain mechanisms studied in cognitive neuroscience. Both, however, are accountable for how they respond to environmental (physical, social, and ethical) constraints. I propose to extend Levy’s suggestion to the fundamental distinction between the “moral responsibility of conscious communication agents” and the “ethical responsiveness of unconscious communication agents”, such as our brains, but also AI decision supports. Both, indeed, can be included in the category of the “sub-personal modules” of our moral agency as persons. I show the relevance of this distinction, also from the logical and computational standpoints, in both neuroscience and computer science, for the current debate about an ethically accountable AI. Machine learning algorithms, indeed, when applied to automated supports for decision-making processes in several social, political, and economic spheres, are not at all “value-free” or “amoral”. They must satisfy an ethical responsiveness to avoid what has been called the unintended, but real, “algorithmic injustice”.
Cited by
1 article.