Author:
Arya Vijay, Bellamy Rachel K. E., Chen Pin-Yu, Dhurandhar Amit, Hind Michael, Hoffman Samuel C., Houde Stephanie, Liao Q. Vera, Luss Ronny, Mojsilović Aleksandra, Mourad Sami, Pedemonte Pablo, Raghavendra Ramya, Richards John, Sattigeri Prasanna, Shanmugam Karthikeyan, Singh Moninder, Varshney Kush R., Wei Dennis, Zhang Yunfeng
Abstract
As artificial intelligence and machine learning algorithms become increasingly prevalent in society, multiple stakeholders are calling for these algorithms to provide explanations. At the same time, these stakeholders — whether affected citizens, government regulators, domain experts, or system developers — have different explanation needs. To address these needs, in 2019 we created AI Explainability 360, an open-source software toolkit featuring ten diverse, state-of-the-art explainability methods and two evaluation metrics. This paper examines the impact of the toolkit through several case studies, usage statistics, and community feedback. The different ways in which users have engaged with AI Explainability 360 have produced multiple types of impact and improvements across multiple metrics, highlighted by the adoption of the toolkit by the independent LF AI & Data Foundation. The paper also describes the flexible design of the toolkit, examples of its use, and the significant educational material and documentation available to its users.
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
5 articles.