Abstract
Algorithms are used across a wide range of societal sectors such as banking, administration, and healthcare to make predictions that impact our lives. While the predictions can be incredibly accurate about our present and future behavior, there is an important question about how these algorithms in fact represent human identity. In this paper, we explore this question and argue that machine learning algorithms represent human identity in terms of what we shall call the statistical individual. This statisticalized representation of individuals, we shall argue, differs significantly from our ordinary conception of human identity, which is tightly intertwined with considerations about biological, psychological, and narrative continuity, as witnessed by our most well-established philosophical views on personal identity. Indeed, algorithmic representations of individuals give no special attention to biological, psychological, and narrative continuity and instead rely on predictive properties that significantly exceed and diverge from those that we would ordinarily take to be relevant for questions about who we are.
Publisher
Springer Science and Business Media LLC