Affiliation:
1. University of Urbino “Carlo Bo”, Urbino, Italy
Abstract
Collective adaptive systems (CAS) often adopt cooperative operating strategies to run distributed decision-making mechanisms. In many cases, their effectiveness relies heavily on the collaborative nature of the individuals' behavior. Stimulating cooperation while preventing selfish and malicious behaviors is the main objective of trust and reputation models. These models are widely used in distributed, peer-to-peer environments and therefore represent an ideal framework for improving the robustness, as well as the security, of CAS. In this article, we propose a formal framework for modeling and verifying trusted CAS. From the modeling perspective, mobility, adaptiveness, and trust-based interaction are the main ingredients used to define a flexible and easy-to-use paradigm. Concerning analysis, formal automated techniques based on equivalence checking and model checking support the prediction of CAS behavior and the verification of the underlying trust and reputation models, with the specific aim of estimating robustness against the typical attacks conducted on webs of trust.
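To make the notion of a trust and reputation model more concrete, the following minimal sketch shows one common scheme (a beta-reputation update) in which an agent derives a trust score from counts of positive and negative interaction outcomes. This is an illustrative assumption for exposition only, not the framework proposed in the article; the class name, prior counts, and cooperation threshold are hypothetical.

```python
# Minimal sketch of a beta-reputation update (illustrative, not the
# authors' framework). A trust score in [0, 1] is the expected value of
# a Beta distribution over the counts of positive/negative outcomes.

from dataclasses import dataclass

@dataclass
class Reputation:
    positive: int = 1   # prior pseudo-count (uninformative prior)
    negative: int = 1

    def update(self, outcome_ok: bool) -> None:
        # Record one interaction outcome reported about a peer.
        if outcome_ok:
            self.positive += 1
        else:
            self.negative += 1

    def trust(self) -> float:
        # Expected value of Beta(positive, negative).
        return self.positive / (self.positive + self.negative)

# Usage: an agent cooperates only with peers whose trust exceeds a
# (hypothetical) threshold.
rep = Reputation()
for ok in [True, True, False, True]:
    rep.update(ok)
print(rep.trust() >= 0.6)
```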
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Science Applications, Modeling and Simulation
Cited by
24 articles.