Human-robot interaction through adjustable social autonomy

Authors:

Cantucci Filippo1, Falcone Rino1, Castelfranchi Cristiano1

Affiliation:

1. Institute of Cognitive Sciences and Technologies, National Research Council of Italy (ISTC-CNR), Rome, Italy

Abstract

Autonomy is crucial in cooperation. The complexity of HRI scenarios requires autonomous robots able to exploit their superhuman computational capabilities (based on deep neural networks, machine learning techniques and big data) in a trustworthy way. Trustworthiness is not only a matter of accuracy, privacy or security; it is becoming more and more a matter of adaptation to human agency. As claimed by Falcone and Castelfranchi, autonomy means the possibility of displaying or providing an unexpected behavior (including refusal) that departs from a requested (agreed upon or not) behavior. In this sense, the autonomy to decide how to adopt a task delegated by the user, with respect to her/his own real needs and goals, distinguishes intelligent and trustworthy robots from merely high-performing robots. This kind of smart help can be provided only by cognitive robots able to represent and ascribe mental states (beliefs, goals, intentions, desires, etc.) to their interlocutors. The attribution of mental states can be the result of complex reasoning mechanisms, or it can be fast and automatic, based on the scripts, roles, categories or stereotypes that humans typically exploit every time they interact in everyday life. In all these cases, robots that build and use cognitive models of humans (i.e., that have a Theory of Mind of their interlocutors) also have to perform a meta-evaluation of their own predictive skills for building those models. Robots have to be endowed with the capability to assess their self-trust in their skills for interpreting the interlocutor and the context, in order to produce smart and effective decisions towards humans. After exploring the main concepts that make collaboration between humans and robots trustworthy and effective, we present the first of a series of experiments designed to test different aspects of a cognitive architecture for trustworthy HRI. This architecture, based on consolidated theoretical principles (the theory of adjustable social autonomy, theory of mind and theory of trust), has the main goal of building cognitive robots that provide smart, trustworthy collaboration every time a human requires their help. In particular, the experiment has been designed to demonstrate how the robot’s capability to learn its own level of self-trust in its predictive abilities, used to perceive the user and build a model of her/him, allows it to establish a trustworthy collaboration and to maintain a high level of user satisfaction with the robot’s performance, even when these abilities progressively degrade.
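The self-trust mechanism summarised in the abstract can be pictured as a simple update-and-threshold loop. The sketch below is only an illustration under our own assumptions (the class name SelfTrustModel, the exponential-moving-average update and the thresholds are hypothetical and are not the architecture evaluated in the paper): the robot tracks how well its user-model predictions turn out and maps the resulting self-trust level to a degree of autonomy in adopting the delegated task.

```python
# Minimal illustrative sketch (not the authors' architecture): the robot keeps a
# self-trust estimate of its own user-modeling accuracy and uses it to adjust
# how autonomously it adopts a delegated task. All names, thresholds and the
# update rule below are assumptions introduced for illustration only.

class SelfTrustModel:
    def __init__(self, initial_trust: float = 0.8, learning_rate: float = 0.1):
        self.trust = initial_trust          # self-trust in own predictive skills, in [0, 1]
        self.learning_rate = learning_rate  # weight given to the most recent outcome

    def update(self, prediction_correct: bool) -> None:
        """Exponential moving average over the outcomes of past user-model predictions."""
        outcome = 1.0 if prediction_correct else 0.0
        self.trust += self.learning_rate * (outcome - self.trust)

    def adoption_mode(self, high: float = 0.75, low: float = 0.4) -> str:
        """Map the current self-trust level to a degree of social autonomy."""
        if self.trust >= high:
            return "adapt_task"        # reinterpret the request w.r.t. the inferred user goals
        if self.trust >= low:
            return "ask_user"          # uncertain user model: ask for confirmation first
        return "literal_adoption"      # do exactly what was requested, no autonomous adjustment


if __name__ == "__main__":
    st = SelfTrustModel()
    # Simulate progressively degrading perception: predictions start failing over time.
    for correct in [True, True, False, False, False, False]:
        st.update(correct)
        print(f"self-trust={st.trust:.2f} -> mode={st.adoption_mode()}")
```

In this toy loop, degrading perceptual abilities lower the self-trust estimate, so the robot gradually withdraws from autonomous task adaptation and falls back to asking the user or executing the request literally, which is one simple way to keep the collaboration trustworthy as performance drops.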

Publisher

IOS Press

Subject

Artificial Intelligence
