Explanation Ontology: A general-purpose, semantic representation for supporting user-centered explanations
Authors:
Shruthi Chari¹, Oshani Seneviratne¹, Mohamed Ghalwash², Sola Shirai¹, Daniel M. Gruen¹, Pablo Meyer², Prithwish Chakraborty², Deborah L. McGuinness¹
Affiliations:
1. Computer Science, Rensselaer Polytechnic Institute, NY, US
2. Center for Computational Health, IBM Research, NY, US
Abstract
In the past decade, trustworthy Artificial Intelligence (AI) has emerged as a focus for the AI community to ensure better adoption of AI models, and explainable AI is a cornerstone in this area. Over the years, the focus has shifted from building transparent AI methods to making recommendations on how to render black-box or opaque machine learning models and their results more understandable to expert and non-expert users. In our previous work, to address the goal of supporting user-centered explanations that make model recommendations more explainable, we developed an Explanation Ontology (EO). The EO is a general-purpose representation designed to help system designers connect explanations to their underlying data and knowledge. This paper addresses the apparent need for improved interoperability to support a wider range of use cases. We expand the EO, mainly in the system attributes contributing to explanations, by introducing new classes and properties to support a broader range of state-of-the-art explainer models. We present the expanded ontology model, highlighting the classes and properties that are important for modeling the larger set of fifteen literature-backed explanation types supported within the expanded EO. We build on these explanation type descriptions to show how the EO model can be used to represent explanations in five use cases spanning the domains of finance, food, and healthcare. We include competency questions that evaluate the EO’s capabilities and provide guidance for system designers on how to apply our ontology to their own use cases. This guidance includes allowing system designers to query the EO directly and providing them with exemplar queries to explore the content of the use cases represented in the EO. We have released this significantly expanded version of the Explanation Ontology at https://purl.org/heals/eo and updated our resource website, https://tetherless-world.github.io/explanation-ontology, with supporting documentation. Overall, through the EO model, we aim to help system designers be better informed about explanations and to support the composition of explanations from their systems’ outputs, which may come from a mix of machine learning, logical, and explainer models, together with the different types of data and knowledge available to their systems.
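The abstract notes that system designers can query the EO directly and that the ontology is published at the PURL above. As a minimal sketch of what such exploration could look like (not one of the paper's exemplar queries), the Python/rdflib snippet below loads the ontology from its PURL and lists subclasses of a root Explanation class; the namespace IRI and class name used here are assumptions for illustration and should be checked against the documentation on the resource website.

# Minimal sketch: explore the Explanation Ontology with rdflib.
# Assumptions (not confirmed by the paper): the PURL resolves to an RDF
# serialization rdflib can parse, and the ontology defines a root class
# named "Explanation" in the namespace https://purl.org/heals/eo#.
from rdflib import Graph, RDFS, URIRef

g = Graph()
g.parse("https://purl.org/heals/eo")  # rdflib infers the serialization from the response

explanation_root = URIRef("https://purl.org/heals/eo#Explanation")  # hypothetical IRI

# List direct subclasses and their labels, which should surface the explanation types.
query = """
    SELECT ?type ?label WHERE {
        ?type rdfs:subClassOf ?root .
        OPTIONAL { ?type rdfs:label ?label }
    }
"""
for row in g.query(query, initNs={"rdfs": RDFS}, initBindings={"root": explanation_root}):
    print(row["type"], row["label"])

A similar pattern (with the property IRIs actually defined in the EO) would let a designer traverse from an explanation type to the system attributes, data, and knowledge that contribute to it, along the lines of the competency questions described above.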
Subject
Computer Networks and Communications, Computer Science Applications, Information Systems
Cited by
3 articles.