Affiliation:
1. Michigan State University, East Lansing, USA
Abstract
Trustworthy artificial intelligence (Trusted AI) is of utmost importance when learning-enabled components (LECs) are used in autonomous, safety-critical systems. When such systems rely on deep learning, they must address the reliability, robustness, and interpretability of their learning models. In addition to strategies that address these concerns, appropriate software architectures are needed to coordinate LECs and ensure they deliver acceptable behavior even under uncertain conditions. This work describes Anunnaki, a model-driven framework comprising loosely coupled, modular services designed to monitor and manage LECs with respect to Trusted AI assurance concerns when faced with different sources of uncertainty. More specifically, the Anunnaki framework supports the composition of independent, modular services to assess and improve the resilience and robustness of AI systems. The design of Anunnaki was guided by several key software engineering principles (e.g., modularity, composability, and reusability) in order to facilitate its use and maintenance and to support different aggregate monitoring and assurance analysis tools for learning-enabled systems (LESs) and their respective data sets. We demonstrate Anunnaki on two autonomous platforms: a terrestrial rover and an unmanned aerial vehicle. Our studies show how Anunnaki can be used to manage the operations of different autonomous LESs with vision-based LECs while exposed to uncertain environmental conditions.
Funder
National Science Foundation
Air Force Research Laboratory
Publisher
Association for Computing Machinery (ACM)