Trust in Artificial Intelligence: Modeling the Decision Making of Human Operators in Highly Dangerous Situations

Authors:

Venger Alexander L. (1), Dozortsev Victor M. (2)

Affiliations:

1. Department of Social Sciences and Humanities, Dubna State University, 141982 Dubna, Russia

2. Moscow Institute of Physics and Technology (MIPT), 117303 Moscow, Russia

Abstract

A prescriptive simulation model of a process operator's decision making, assisted by an artificial intelligence (AI) algorithm in a technical system control loop, is proposed. Situations fraught with a catastrophic threat, i.e., those that may cause unacceptable damage, were analyzed. The operator's decision making was interpreted in terms of the subjectively admissible probability of disaster and the subjectively necessary reliability of its assessment, which reflect the individual psychological aspect of the operator's trust in AI. Four extreme decision-making strategies, corresponding to different ratios between these two variables, were distinguished. An experiment simulating a process facility, an AI algorithm, and the operator's decision-making strategy was conducted. It showed that, depending on the properties of the controlled process (its dynamics and the speed of hazard onset) and the characteristics of the AI algorithm (its Type I and Type II error rates), any of these strategies, or some intermediate strategy, may prove more beneficial than the others. The same approach is applicable to identifying and analyzing the sustainability of strategies applied in real-life operating conditions, as well as to developing a computer simulator for training operators to control hazardous technological processes using AI-generated advice.
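For illustration only, the kind of decision model described in the abstract can be sketched as a small Monte Carlo simulation. The sketch below is not the authors' implementation: the parameter names (alpha and beta for the AI advisor's Type I and Type II error rates, p_admissible for the subjectively admissible probability of disaster) and all cost figures are assumptions introduced for the example, and the operator is reduced to a single probability threshold, so the second subjective variable (the necessary reliability of the assessment) and the process dynamics are not modeled.

```python
"""Minimal Monte Carlo sketch (not the paper's implementation) of an operator
deciding whether to shut down a process based on AI-generated hazard alarms.

Assumed, illustrative parameters: alpha (Type I / false-alarm rate),
beta (Type II / miss rate), prior (probability that a hazard is actually
developing in an episode), and the costs of a shutdown vs. a catastrophe."""

import random

# --- assumed cost model (illustrative numbers, not from the paper) ---
COST_SHUTDOWN = 1.0       # loss from an unnecessary process shutdown
COST_CATASTROPHE = 100.0  # loss from an unmitigated catastrophe


def ai_alarm(hazard: bool, alpha: float, beta: float) -> bool:
    """AI advisor: raises an alarm, with false-alarm rate alpha and miss rate beta."""
    if hazard:
        return random.random() > beta   # misses the hazard with probability beta
    return random.random() < alpha      # false alarm with probability alpha


def posterior_hazard(alarm: bool, prior: float, alpha: float, beta: float) -> float:
    """Bayesian update of the hazard probability given the AI advice."""
    if alarm:
        p_alarm = (1 - beta) * prior + alpha * (1 - prior)
        return (1 - beta) * prior / p_alarm
    p_no_alarm = beta * prior + (1 - alpha) * (1 - prior)
    return beta * prior / p_no_alarm


def simulate(p_admissible: float, prior: float, alpha: float, beta: float,
             n_trials: int = 100_000) -> float:
    """Average loss per episode for an operator who shuts the process down
    whenever the assessed hazard probability exceeds p_admissible."""
    total_loss = 0.0
    for _ in range(n_trials):
        hazard = random.random() < prior
        alarm = ai_alarm(hazard, alpha, beta)
        p_hazard = posterior_hazard(alarm, prior, alpha, beta)
        if p_hazard > p_admissible:
            total_loss += COST_SHUTDOWN
        elif hazard:
            total_loss += COST_CATASTROPHE
    return total_loss / n_trials


if __name__ == "__main__":
    random.seed(0)
    # Compare an over-cautious strategy (tiny admissible probability -> always
    # shut down), an intermediate one (act on AI alarms), and a sceptical one
    # (high admissible probability -> ignore the AI).
    for p_adm in (0.001, 0.05, 0.5):
        loss = simulate(p_adm, prior=0.02, alpha=0.05, beta=0.10)
        print(f"admissible hazard probability {p_adm:5.3f} -> mean loss {loss:.3f}")
```

With these assumed numbers, the intermediate threshold (shut down only when an AI alarm pushes the assessed hazard probability above the admissible level) yields the lowest mean loss, while the extreme "always shut down" and "never shut down" strategies are costlier; changing the error rates or cost ratio shifts which strategy wins, in line with the abstract's conclusion that the most beneficial strategy depends on the process and AI characteristics.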

Publisher

MDPI AG

Subject

General Mathematics, Engineering (miscellaneous), Computer Science (miscellaneous)

