Abstract
Many modern digital products use Machine Learning (ML) to emulate human abilities, knowledge, and intellect. In order to achieve this goal, ML systems need the greatest possible quantity of training data to allow the Artificial Intelligence (AI) model to develop an understanding of “what it means to be human”. We propose that the processes by which companies collect this data are problematic, because they entail extractive practices that resemble labour exploitation. The article presents four case studies in which unwitting individuals contribute their humanness to develop AI training sets. By employing a post-Marxian framework, we then analyse the characteristics of these individuals and describe the elements of the capture-machine. Finally, by describing and characterising the types of applications that are problematic, we set a foundation for defining and justifying interventions to address this form of labour exploitation.
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Human-Computer Interaction, Philosophy
Cited by
2 articles.