What Should We Reasonably Expect from Artificial Intelligence?
Published: 2024-03-19
Volume: 18
Issue: 1
Pages: 217–245
ISSN: 2782-2923
Container-title: Russian Journal of Economics and Law
Short-container-title: RusJEL
Affiliation:
1. Federal University of Minas Gerais – UFMG
Abstract
Objective: the objective of this article is to address the misalignment between the expectations placed on Artificial Intelligence (AI) systems and what they can currently deliver. Although AI is a pervasive, cutting-edge technology present in sectors as varied as agriculture, industry, commerce, education, professional services, smart cities, and cyber defense, there is a discrepancy between the results some people anticipate from AI and its current capabilities. This misalignment leads to two undesirable outcomes. Firstly, some individuals expect AI to achieve results beyond its current developmental stage, producing unrealistic demands. Secondly, there is dissatisfaction with AI's existing capabilities, even though they may be sufficient in many contexts.

Methods: the article employs an analytical approach to tackle the misalignment issue, analyzing various market applications of AI and unveiling their diversity, thereby demonstrating that AI is not a homogeneous, singular concept. Instead, it encompasses a wide range of sector-specific applications, each serving distinct purposes, carrying inherent risks, and aiming at specific accuracy levels.

Results: the primary finding presented in this article is that the misalignment between expectations and actual AI capabilities arises from the mistaken premise that AI systems should consistently achieve accuracy rates far surpassing human standards, regardless of the context. By delving into different market applications, the author advocates evaluating AI's potential and accepted levels of accuracy and transparency in a context-dependent manner. The results highlight that each AI application should have its own accuracy and transparency targets, tailored on a case-by-case basis. Consequently, AI systems can still be valuable and welcomed in various contexts, even if they offer accuracy or transparency rates lower, or even much lower, than human standards.

Scientific novelty: the scientific novelty of this article lies in challenging the widely held misconception that AI should always operate with superhuman accuracy and transparency in all scenarios. By unraveling the diversity of AI applications and their purposes, the author introduces a fresh perspective, emphasizing that expectations and evaluations should be contextualized and adapted to the specific use case of AI.

Practical significance: the practical significance of this article lies in providing valuable guidance to stakeholders within the AI field, including regulators, developers, and customers. The article's realignment of expectations based on context fosters informed decision-making and promotes responsible AI development and implementation. It seeks to enhance the overall utilization and acceptance of AI technologies by promoting a realistic understanding of AI's capabilities and limitations in different contexts. By offering this more comprehensive guidance, the article aims to support the establishment of robust regulatory frameworks and the responsible deployment of AI systems, contributing to the improvement of AI applications in diverse sectors. The author's call for fine-tuned expectations seeks to prevent dissatisfaction arising from unrealistic demands and to provide solid guidance for AI development and regulation.
Publisher
Kazan Innovative University named after V. G. Timiryasov