AFTEA Framework for Supporting Dynamic Autonomous Driving Situation
Published: 2024-09-06
Journal (container title): Electronics
Volume: 13, Issue: 17, Page: 3535
ISSN: 2079-9292
Language: en
Authors:
Kim Subi 1, Kang Jieun 1, Yoon Yongik 2
Affiliations:
1. Department of IT Engineering, Sookmyung Women’s University, 100, Cheongpa-ro 47-gil, Yongsan-gu, Seoul 04310, Republic of Korea
2. Department of Artificial Intelligence Engineering, Sookmyung Women’s University, 100, Cheongpa-ro 47-gil, Yongsan-gu, Seoul 04310, Republic of Korea
Abstract
The accelerated development of AI technology has brought revolutionary changes to many areas of society. Fairness, accountability, transparency, and explainability (FATE) have recently been emphasized as prerequisites for reliable and valid AI-based decision-making. Autonomous driving, however, is directly tied to human life and must adapt and respond in real time to changes and risks in the real world, so environmental adaptability must be considered in a more comprehensive and converged manner. To derive definitive evidence for each object in a converged autonomous driving environment, the various kinds of road-environment information about driving objects and driving assistance must be collected and provided transparently, and the driving technology must be built to adapt to diverse situations by accounting for all uncertainties in a driving environment that changes in real time. Considering the converged interactions and dynamic situations of the many objects present on a real-time road enables unbiased, fair results based on flexible contextual understanding, even in situations that do not conform to rules or patterns. These transparent, environmentally adaptive, and fairness-based outcomes form the basis of the decision-making process and support clear interpretation and explanation of decisions. Together, these processes allow autonomous vehicles to reach reliable conclusions and take responsibility for their decisions in autonomous driving situations. This paper therefore proposes the adaptability, fairness, transparency, explainability, and accountability (AFTEA) framework for building a stable and reliable autonomous driving environment in dynamic situations. It defines AFTEA, describes its role and necessity in AI technology, and highlights its value when applied to and integrated into autonomous driving. By incorporating environmental adaptability, the AFTEA framework supports a sustainable autonomous driving environment in dynamic settings and aims to provide a direction for building stable, reliable AI systems that adapt to diverse real-world scenarios.
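
To make the flow described in the abstract concrete, the following minimal Python sketch (not from the paper; all class, function, and field names are hypothetical) shows one way the five AFTEA dimensions could be composed into a single decision-support loop: transparently recorded observations feed an adaptability rule, a fairness check guards the chosen action, the supporting evidence is retained for explainability, and a responsible party is attached for accountability.

# Hypothetical sketch of an AFTEA-style decision loop; illustrative only.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DrivingSituation:
    """Transparently collected road-environment observations (transparency)."""
    objects: List[str]                    # e.g., vehicles, pedestrians, signals
    sensor_readings: Dict[str, float]     # raw measurements retained for audit
    uncertainty: float                    # 0.0 (fully certain) .. 1.0 (unknown)

@dataclass
class Decision:
    action: str
    evidence: Dict[str, str] = field(default_factory=dict)  # basis for explainability
    responsible_party: str = "autonomous_driving_system"    # accountability

def adapt(situation: DrivingSituation) -> str:
    """Adaptability: fall back to a conservative action under high uncertainty."""
    return "slow_down" if situation.uncertainty > 0.5 else "proceed"

def fairness_check(situation: DrivingSituation) -> bool:
    """Fairness: apply the same treatment to every detected object type."""
    return len(situation.objects) > 0

def decide(situation: DrivingSituation) -> Decision:
    action = adapt(situation)
    if not fairness_check(situation):
        action = "stop"  # no usable object evidence: choose the safest action
    # Record the evidence that led to the action so the decision can be explained.
    evidence = {
        "observed_objects": ", ".join(situation.objects) or "none",
        "uncertainty": f"{situation.uncertainty:.2f}",
        "rule": "conservative fallback when uncertainty > 0.5",
    }
    return Decision(action=action, evidence=evidence)

if __name__ == "__main__":
    scene = DrivingSituation(
        objects=["pedestrian", "cyclist"],
        sensor_readings={"lidar_range_m": 12.4},
        uncertainty=0.7,
    )
    print(decide(scene))  # Decision(action='slow_down', evidence={...}, ...)

This is only a sketch under the stated assumptions; the paper itself presents AFTEA as a conceptual framework rather than a specific implementation.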
Funders:
IITP (Institute of Information & Communications Technology Planning & Evaluation), ICAN program: Development of Hashgraph-based Blockchain Enhancement Scheme and Implementation of Testbed for Autonomous Driving
National Research Foundation of Korea