The Machine as an Autonomous Explanatory Agent
Abstract
The holy grail of Artificial Intelligence (AI) is to transform the machine into an agent that can decide, make inferences, cluster content, predict, recommend, and exhibit similar higher cognitive faculties. The prowess of Large Language Models (LLMs) attests to this ambition: by swiftly processing unstructured data and handling diverse datasets with agility, they enable seamless natural language communication and have found widespread use across various fields. However, to be competent in science and industry, an agent with such capabilities must be reliable, i.e., accountable for its decisions and actions; such accountability is an intrinsic attribute of an autonomous agent. In this respect, this paper aims to determine whether state-of-the-art technologies have already created an autonomous explanatory agent or are paving the way for the machine to become one. To achieve this, the paper is structured as follows: The first part investigates the types and levels of explanations in explanation models, providing a foundation for understanding the nature of explanations in everyday life. The second part explores explanations in the context of artificial intelligence, focusing on types of explanatory systems in the research field of eXplainable AI (XAI). The third part examines whether, and to what extent, state-of-the-art machine learning models function as autonomous explanatory agents, building on the second part and drawing on the field of Human-Computer Interaction.
Publisher
Turk Felsefe Dernegi