Affiliation:
1. Sorbonne Université, CNRS, LIP6, 75005, Paris, France
Abstract
We consider mobile robotic entities that must cooperate to solve assigned tasks. In the literature, two models have been used to characterize their visibility sensors: the full visibility model, where every robot can see all other robots, and the limited visibility model, where there exists a limit V such that every robot closer than V is seen and every robot farther than V is not seen. We introduce the uncertain visibility model, which generalizes both by considering that a subset of the robots farther than V cannot be seen. An empty subset corresponds to the full visibility model, and a subset containing every such robot corresponds to the limited visibility model. We then explore the impact of this new visibility model on the feasibility of benchmark tasks in mobile robot computing: gathering, uniform circle formation, luminous rendezvous, and leader election. For each task, we determine the weakest visibility adversary that prevents task solvability and the strongest adversary that allows it. Our work sheds new light on the impact of visibility sensors in the context of mobile robot computing, and paves the way for more realistic algorithms that can cope with uncertain visibility sensors.
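As a rough illustration of how the three models relate (a minimal sketch, not taken from the paper), the Python fragment below computes one robot's visible set given a hypothetical visibility radius V and an adversary-chosen set of far robots that are hidden. The names visible_set, V, and hidden_far are illustrative assumptions.

```python
from math import dist
from typing import Iterable, Set, Tuple, List

Point = Tuple[float, float]

def visible_set(observer: Point,
                robots: Iterable[Point],
                V: float,
                hidden_far: Set[Point]) -> List[Point]:
    """Return the robots the observer sees under the uncertain visibility model:
    robots strictly closer than V are always seen; robots farther than V are
    seen unless the adversary placed them in `hidden_far`."""
    seen = []
    for r in robots:
        if r == observer:
            continue
        if dist(observer, r) < V:
            seen.append(r)          # within the limit: guaranteed visible
        elif r not in hidden_far:
            seen.append(r)          # beyond the limit but not hidden
    return seen

# hidden_far = set()                 -> full visibility model
# hidden_far = {all robots beyond V} -> limited visibility model
```

Choosing hidden_far as the empty set or as the whole set of far robots recovers the two classical models, which is the sense in which the uncertain visibility model generalizes both.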
Publisher
World Scientific Publishing Co Pte Ltd
Subject
Hardware and Architecture, Theoretical Computer Science, Software