Affiliation:
1. Big Data and Blockchain Technologies Science and Innovation Center, Astana IT University, 020000 Astana, Kazakhstan
Abstract
Large language models (LLMs) can store factual knowledge within their parameters and have achieved superior results on question-answering tasks. However, challenges persist in providing provenance for their decisions and in keeping their knowledge up to date. Some approaches address these challenges by combining external knowledge with parametric memory. In contrast, our proposed QA-RAG solution relies solely on the data stored in an external knowledge base, specifically a dense vector index database. In this paper, we compare RAG configurations using two LLMs, Llama 2 7B and 13B, systematically examining their performance on three key RAG capabilities: noise robustness, knowledge gap detection, and external truth integration. The evaluation reveals that while our approach outperforms all baselines with an accuracy of 83.3%, the model still struggles significantly with external truth integration. These findings suggest that considerable work is still required to fully leverage RAG in question-answering tasks.
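The retrieve-then-read pipeline the abstract describes (embed a query, look up the nearest passages in a dense vector index, and prepend them to the LLM prompt) can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the hashed bag-of-words `embed` function stands in for a trained dense encoder, and the corpus, query, and prompt template are invented for the example.

```python
import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy stand-in for a dense encoder: hashed bag-of-words,
    L2-normalized so dot product equals cosine similarity."""
    vec = [0.0] * dim
    for tok in text.lower().split():
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by cosine similarity to the query embedding
    and return the top k (the 'dense vector index' lookup)."""
    q = embed(query)
    scored = sorted(
        corpus,
        key=lambda doc: -sum(a * b for a, b in zip(q, embed(doc))),
    )
    return scored[:k]

# Hypothetical knowledge base and question, for illustration only.
corpus = [
    "Astana is the capital of Kazakhstan.",
    "Llama 2 is a family of open-weight language models.",
    "RAG augments a language model with retrieved passages.",
]
question = "What does RAG add to a language model?"
context = retrieve(question, corpus)

# The retrieved passages are prepended to the prompt so the LLM
# answers from external knowledge rather than parametric memory.
prompt = (
    "Answer using only the context below.\n"
    + "\n".join(context)
    + f"\nQuestion: {question}"
)
```

In a production system the toy encoder would be replaced by a learned embedding model and the linear scan by an approximate-nearest-neighbor index; the prompt construction step is where the "external truth integration" capability evaluated in the paper comes into play.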
Funder
Ministry of Science and Higher Education of the Republic of Kazakhstan