Abstract
In this paper, the Buechner–Tavani model of digital trust is revised: new conditions for self-trust are incorporated into the model. These new conditions raise several philosophical problems for social robotics concerning the idea of a substantial self, which are closely examined. I conclude that reductionism about the self is incompatible with trust relations between human agents, between human agents and artificial agents, and between artificial agents, whereas the idea of a substantial self is compatible with all of them.