The Role of Workers in AI Ethics and Governance

Authors:

Nedzhvetskaya, Nataliya (1); Tan, J. S. (2)

Affiliations:

1. Sociology, University of California, Berkeley

2. Independent scholar

Abstract

While the role of states, corporations, and international organizations in AI governance has been extensively theorized, the role of workers has received comparatively little attention. This chapter examines the role that workers play in identifying and mitigating harms from AI technologies. Harms are the causally assessed “impacts” of technologies. They arise despite technical reliability and result not from technical negligence but from normative uncertainty around questions of safety and fairness in complex social systems. There is broad consensus in the AI ethics community on the benefits of reducing harms but less consensus on the mechanisms for determining or addressing them. This lack of consensus has led to numerous collective actions by workers protesting how harms are identified and addressed in their workplaces. This chapter theorizes the role of workers within AI governance and constructs a model of harm reporting processes in AI workplaces. The harm reporting process involves three steps: identification, the governance decision, and the response. Workers draw upon three types of claims to argue for jurisdiction over questions of AI governance: subjection, control over the product of one’s labor, and proximate knowledge of systems. Examining the past decade of AI-related worker activism allows us to understand how different types of workers are positioned within workplaces that produce AI systems, how their position informs their claims, and the place of collective action in staking those claims. This chapter argues that workers occupy a unique role in identifying and mitigating harms caused by AI systems.

Publisher

Oxford University Press


Cited by 3 articles.

1. It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them? 2023 ACM Conference on Fairness, Accountability, and Transparency, 12 June 2023.

2. At The Tensions of South and North. The Oxford Handbook of AI Governance, 20 October 2022.

3. Tech Worker Organizing for Power and Accountability. 2022 ACM Conference on Fairness, Accountability, and Transparency, 20 June 2022.
