Affiliation:
1. School of Business and Trade, International MBA Institute, Dhaka International University, Bangladesh
Abstract
Hallucinations occur when generative artificial intelligence (AI) systems, such as large language models (LLMs) like ChatGPT, produce outputs that are illogical, factually incorrect, or otherwise unreal. In generative AI, hallucinations can unlock creative potential, but they also pose challenges for producing accurate and trustworthy outputs; this abstract addresses both concerns. AI hallucinations can arise from a variety of factors. If the training data is insufficient, incomplete, or biased, the model may respond inaccurately to novel situations or edge cases. Moreover, generative AI commonly produces content in response to prompts regardless of the model's “understanding” or the quality of its output.