The case for a broader approach to AI assurance: addressing “hidden” harms in the development of artificial intelligence

Authors:

Christopher Thomas, Huw Roberts, Jakob Mökander, Andreas Tsamados, Mariarosaria Taddeo, Luciano Floridi

Abstract

Artificial intelligence (AI) assurance is an umbrella term describing many approaches, such as impact assessment, audit, and certification procedures, used to provide evidence that an AI system is legal, ethical, and technically robust. AI assurance approaches largely focus on two overlapping categories of harms: deployment harms that emerge at, or after, the point of use, and individual harms that directly impact a person as an individual. Current approaches generally overlook upstream collective and societal harms associated with the development of systems, such as resource extraction and processing, exploitative labour practices, and energy-intensive model training. Thus, the scope of current AI assurance practice is insufficient for ensuring that AI is ethical in a holistic sense, i.e. in ways that are legally permissible, socially acceptable, economically viable, and environmentally sustainable. This article addresses this shortcoming by arguing for a broader approach to AI assurance that is sensitive to the full scope of AI development and deployment harms. To do so, the article maps harms related to AI and highlights three examples of harmful practices that occur upstream in the AI supply chain, relating to environmental, labour, and data exploitation. It then reviews assurance mechanisms used in adjacent industries to mitigate similar harms, evaluating their strengths, weaknesses, and how effectively they are being applied to AI. Finally, it provides recommendations as to how a broader approach to AI assurance can be implemented to mitigate harms more effectively across the whole AI supply chain.

Publisher

Springer Science and Business Media LLC
