Abstract
European law now requires AI to be explainable in the context of adverse decisions affecting European Union (EU) citizens. At the same time, instances of AI failure are expected to increase as AI systems operate on imperfect data. This paper puts forward a neurally inspired theoretical framework called "decision stacks" that can provide a way forward in research to develop Explainable Artificial Intelligence (X-AI). By leveraging findings on memory systems in biological brains, the decision-stack framework operationalizes the definition of explainability and proposes a test that can potentially reveal how a given AI decision was made.
Subject
Molecular Medicine, Biomedical Engineering, Biochemistry, Biomaterials, Bioengineering, Biotechnology
References: 113 articles.
Cited by: 4 articles.