Trust in an Autonomous Agent for Predictive Maintenance: How Agent Transparency Could Impact Compliance

Authors:

Simon Loïck, Rauffet Philippe, Guérin Clément, Seguin Cédric

Abstract

In the context of Industry 4.0, human operators will increasingly cooperate with intelligent systems considered as teammates in a joint activity. This human-autonomy teaming is particularly prevalent in predictive maintenance, where the system advises the operator to advance or postpone operations on machines according to projections of their future state. As in human-human cooperation, the effectiveness of cooperation with such autonomous agents depends in particular on trust. The challenge is to calibrate an appropriate level of trust and to avoid misuse, disuse, or abuse of the recommending system. Compliance (i.e., a positive response of the operator to advice from an autonomous agent) can be interpreted as an objective measure of trust, since the operator relies on the agent's advice. Compliance is also based on the operator's risk perception of the situation, as they weigh the risks and benefits of advancing or postponing an operation. One way to calibrate trust and enhance risk perception is the concept of transparency. Transparency has been defined as information, presented during a human-machine interaction, that is easy to use and intended to promote comprehension, shared awareness, intent, role, interaction, performance, future plans, and the reasoning process. This research focuses on two aspects of transparency: the reliability of the autonomous agent, and the outcomes linked to its advice. The objective of this research is to understand the effect of agent transparency on human trust following advice from an autonomous agent (here, an AI for predictive maintenance) in situations of varying risk.
Our hypothesis is that transparency will impact compliance (H1: risk transparency will decrease compliance; H2: reliability transparency will increase compliance; H3: full transparency will decrease compliance). For this experiment, we recruited participants to complete decision situations (i.e., accept or reject a proposition, from a predictive maintenance algorithm, to advance or postpone a CMMS maintenance intervention). A software tool for predictive maintenance in a maritime context was used to present these situations. During the experiment, the agent transparency level is manipulated by displaying information related to agent reliability and to situation outcomes, separately or in combination. Agent transparency is crossed with situation complexity (high or low) and the type of advice (advancing or postponing the maintenance intervention). Age, gender, profession, and affinity for technology use are assessed as control variables. As the situations involve risk taking, a risk-taking propensity scale is also used. Trust (subjective and objective), risk perception, and mental workload are measured after each situation. As a final question, participants indicate the main information they used to make their choice in each experimental setting.

Publisher

AHFE International

Cited by 2 articles.
