Abstract
There is growing consensus on the importance of trust in the development of Artificial Intelligence (AI) technologies; however, current approaches rely heavily on principles-based frameworks. Recent research has highlighted the principles/practice gap: principles alone are not actionable and may not be wholly effective in developing more trustworthy AI. We argue for complementary, evidence-based tools to close the principles/practice gap, and present ELATE (Evidence-Based List of Exploratory Questions for AI Trust Engineering) as one such resource. We discuss several tools and approaches for making ELATE actionable within the context of systems development.
Funder
Data & Human Centered Solutions Innovation Center