Assessing the risk of takeover catastrophe from large language models

Author:

Baum, Seth D.¹ (ORCID)

Affiliation:

1. Global Catastrophic Risk Institute, Washington, District of Columbia, USA

Abstract

This article presents a risk analysis of large language models (LLMs), a type of “generative” artificial intelligence (AI) system that produces text, commonly in response to textual inputs from human users. The article is specifically focused on the risk of LLMs causing an extreme catastrophe in which they do something akin to taking over the world and killing everyone. The possibility of LLM takeover catastrophe has been a major point of public discussion since the recent release of remarkably capable LLMs such as ChatGPT and GPT‐4. This arguably marks the first time that actual AI systems (and not hypothetical future systems) have sparked concern about takeover catastrophe. The article's analysis compares (A) characteristics of AI systems that may be needed for takeover, as identified in prior theoretical literature on AI takeover risk, with (B) characteristics observed in current LLMs. This comparison reveals that the capabilities of current LLMs appear to fall well short of what may be needed for takeover catastrophe. Future LLMs may be similarly incapable due to fundamental limitations of deep learning algorithms. However, divided expert opinion on deep learning and surprising capabilities found in current LLMs suggest some risk of takeover catastrophe from future LLMs. LLM governance should monitor for changes in takeover characteristics and be prepared to proceed more aggressively if warning signs emerge. Unless and until such signs emerge, more aggressive governance measures may be unwarranted.

Publisher

Wiley

