A Misdirected Principle with a Catch: Explicability for AI

Author

Scott Robbins

Abstract

There is widespread agreement that there should be a principle requiring that artificial intelligence (AI) be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” (Floridi et al. in Minds Mach 28(4):689–707, 2018). There is a strong intuition that if an algorithm decides, for example, whether to give someone a loan, then that algorithm should be explicable. I argue here, however, that such a principle is misdirected. The property of requiring explicability should attach to a particular action or decision rather than the entity making that decision. It is the context and the potential harm resulting from decisions that drive the moral need for explicability—not the process by which decisions are reached. Related to this is the fact that AI is used for many low-risk purposes for which it would be unnecessary to require that it be explicable. A principle requiring explicability would prevent us from reaping the benefits of AI used in these situations. Finally, the explanations given by explicable AI are only fruitful if we already know which considerations are acceptable for the decision at hand. If we already have these considerations, then there is no need to use contemporary AI algorithms because standard automation would be available. In other words, a principle of explicability for AI makes the use of AI redundant.

Funder

H2020 European Research Council

Publisher

Springer Science and Business Media LLC

Subject

Artificial Intelligence, Philosophy


Cited by 111 articles.
