Affiliation:
1. Federation of American Scientists
2. Carnegie Mellon University
Abstract
In recent years, machine learning (ML) has relied heavily on crowdworkers both for building datasets and for addressing research questions requiring human interaction or judgment. The diversity of both the tasks performed and the uses of the resulting data render it difficult to determine when crowdworkers are best thought of as workers versus human subjects. These difficulties are compounded by conflicting policies, with some institutions and researchers regarding all ML crowdworkers as human subjects and others holding that they rarely constitute human subjects. Notably, few ML papers involving crowdwork mention IRB oversight, raising the prospect of non-compliance with ethical and regulatory requirements. We investigate the appropriate designation of ML crowdsourcing studies, focusing our inquiry on natural language processing to expose unique challenges for research oversight. Crucially, under the U.S. Common Rule, these judgments hinge on determinations of aboutness, concerning both whom (or what) the collected data is about and whom (or what) the analysis is about. We highlight two challenges posed by ML: the same set of workers can serve multiple roles and provide many sorts of information; and ML research tends to embrace a dynamic workflow, where research questions are seldom stated ex ante and data sharing opens the door for future studies to aim questions at different targets. Our analysis exposes a potential loophole in the Common Rule, under which researchers can elude research ethics oversight by splitting data collection and analysis into distinct studies. Finally, we offer several policy recommendations to address these concerns.
Publisher
Association for Computing Machinery (ACM)