Authors:
Benjamin Aminof, Giuseppe De Giacomo, Sasha Rubin
Abstract
We address two central notions of fairness in the literature on nondeterministic fully observable planning domains. The first, which we call stochastic fairness, is classical and assumes an environment that operates probabilistically, using possibly unknown probabilities. The second, which is language-theoretic, assumes that if an action is taken from a given state infinitely often, then all of its possible outcomes appear infinitely often; we call this state-action fairness. While the two notions coincide for standard reachability goals, they differ for temporally extended goals. This important difference has been overlooked in the planning literature and has led to the use of a product-based reduction in a number of published algorithms that were stated for state-action fairness, for which they are incorrect, while being correct for stochastic fairness. We remedy this and provide a correct optimal algorithm for solving state-action fair planning for LTL/LTLf goals, as well as a correct proof of the lower bound on the goal complexity. Our proof is general enough that it also provides, for the no-fairness and stochastic-fairness cases, multiple missing lower bounds and new proofs of known lower bounds. Overall, we show that stochastic fairness is better behaved than state-action fairness.
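State-action fairness, as described informally above, is commonly written as an LTL constraint over infinite traces. The following is a minimal sketch consistent with that description, not the paper's exact formulation; the propositions occ(s,a) and occ(s,a,s') and the function eff(s,a) are assumed names for "action a is taken in state s", "taking a in s results in successor s'", and the set of possible outcomes of a in s, respectively.

$$\varphi_{\text{fair}} \;=\; \bigwedge_{(s,a)} \;\; \bigwedge_{s' \in \mathit{eff}(s,a)} \Big( \mathbf{G}\,\mathbf{F}\, \mathit{occ}(s,a) \;\rightarrow\; \mathbf{G}\,\mathbf{F}\, \mathit{occ}(s,a,s') \Big)$$

Here G and F are the LTL "always" and "eventually" operators, so the formula reads: for every state-action pair that occurs infinitely often, each of its possible outcomes occurs infinitely often.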
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
5 articles.
1. Fair ω-Regular Games; Lecture Notes in Computer Science; 2024
2. Solving Two-Player Games Under Progress Assumptions; Lecture Notes in Computer Science; 2023-12-30
3. Temporally extended goal recognition in fully observable non-deterministic domain models; Applied Intelligence; 2023-12-14
4. Stochastic Best-Effort Strategies for Borel Goals; 2023 38th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS); 2023-06-26
5. Behavioral QLTL; Multi-Agent Systems; 2023