Deep Neural Networks, Explanations, and Rationality

Author:

Edward A. Lee (ORCID)

Abstract

“Rationality” is the principle that humans make decisions on the basis of step-by-step (algorithmic) reasoning using systematic rules of logic. An ideal “explanation” for a decision is a chronicle of the steps used to arrive at the decision. Herb Simon’s “bounded rationality” is the observation that the ability of a human brain to handle algorithmic complexity and data is limited. As a consequence, human decision-making in complex cases mixes some rationality with a great deal of intuition, relying more on Daniel Kahneman’s “System 1” than “System 2.” An AI based on a deep neural network (DNN), similarly, does not arrive at a decision through a rational process in this sense. An understanding of the mechanisms of the DNN yields little or no insight into any rational explanation for its decisions. The DNN, too, operates in a manner more like System 1 than System 2. Humans, however, are quite good at constructing post hoc rationalizations of their intuitive decisions. If we demand rational explanations for AI decisions, engineers will inevitably develop AIs that are very effective at constructing such post hoc rationalizations. With their ability to handle vast amounts of data, the AIs will learn to build rationalizations using many more precedents than any human could, thereby constructing rationalizations for any decision that will become very hard to refute. The demand for explanations, therefore, could backfire, resulting in our effectively ceding much more power to the AIs.

Publisher

Springer Nature Switzerland

References (29 articles)

1. Barrat, J.: Our Final Invention: Artificial Intelligence and the End of the Human Era. St. Martin’s Press (2013)

2. Bostrom, N.: Superintelligence: Paths, Dangers, Strategies. Oxford University Press, Oxford, UK (2014)

3. Bubeck, S., Chandrasekaran, V., et al.: Sparks of artificial general intelligence: Early experiments with GPT-4 (22 March 2023). https://doi.org/10.48550/arXiv.2303.12712. arXiv: 2303.12712

4. Chomsky, N., Roberts, I., Watumull, J.: The false promise of ChatGPT. The New York Times (8 March 2023). https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html

5. Danziger, S., Levav, J., Avnaim-Pesso, L.: Extraneous factors in judicial decisions. Proc. Natl. Acad. Sci. U.S.A. 108(17), 6889–6892 (2011). https://doi.org/10.1073/pnas.1018033108
