Affiliation:
1. University of Colorado at Boulder, USA
2. University of North Carolina at Chapel Hill, USA
Abstract
Humans working with autonomous, artificially intelligent systems may not be experts in the inner workings of their machine teammates, but they need to understand when to employ, trust, and rely on the system. A critical challenge is to develop machine agents with the capacity to understand their own capabilities and limitations, and the ability to communicate this information to human partners. Self-assessment is an emerging field that tackles this challenge through the development of algorithms that enable autonomous agents to understand and communicate their competency. These methods can engender appropriate trust and align human expectations with autonomous assistant abilities. However, current research in self-assessment is dispersed across many fields, including artificial intelligence, robotics, and human factors. This survey connects work from these disparate areas and reviews state-of-the-art methods for algorithmic self-assessments that enable autonomous agents to estimate, understand, and communicate valuable information pertaining to their competency, with a focus on methods that can improve interactions within human-machine teams. To better understand the landscape of self-assessment approaches, we present a framework for categorizing work in self-assessment based on underlying algorithm type: test-based, learning-based, or knowledge-based. We synthesize common features across these approaches and discuss relevant future directions for research in this emerging space.
Publisher
Association for Computing Machinery (ACM)
Subject
General Computer Science, Theoretical Computer Science