QA-RAG: Exploring LLM Reliance on External Knowledge

Authors:

Mansurova Aigerim 1, Mansurova Aiganym 1, Nugumanova Aliya 1

Affiliation:

1. Big Data and Blockchain Technologies Science and Innovation Center, Astana IT University, 020000 Astana, Kazakhstan

Abstract

Large language models (LLMs) can store factual knowledge within their parameters and have achieved superior results on question-answering tasks. However, challenges persist in providing provenance for their decisions and keeping their knowledge up to date. Some approaches address these challenges by combining external knowledge with parametric memory. In contrast, our proposed QA-RAG solution relies solely on the data stored in an external knowledge base, specifically a dense vector index database. In this paper, we compare RAG configurations using two Llama 2 models (7B and 13B), systematically examining their performance on three key RAG capabilities: noise robustness, knowledge gap detection, and external truth integration. The evaluation reveals that while our approach achieves an accuracy of 83.3%, demonstrating its effectiveness relative to all baselines, the model still struggles significantly with external truth integration. These findings suggest that considerable work is still required to fully leverage RAG in question-answering tasks.
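
To make the setup concrete, the sketch below shows one way a QA-RAG-style step over a dense vector index could look: retrieve the top-k passages by cosine similarity and prompt the model to answer only from them, abstaining when they do not contain the answer (the knowledge-gap case). This is a minimal illustration under stated assumptions, not the authors' implementation; the embed() placeholder, the DenseIndex class, the build_prompt() wording, and the example passages are all hypothetical.

```python
# Illustrative sketch of a retrieval-augmented QA step over a dense vector index.
# embed() stands in for any sentence encoder; the LLM call is left as a stub.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: a pseudo-random unit vector derived from the text
    (stand-in for a real sentence encoder)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

class DenseIndex:
    """Toy dense vector index: stores passage embeddings, retrieves by cosine similarity."""
    def __init__(self, passages: list[str]) -> None:
        self.passages = passages
        self.matrix = np.stack([embed(p) for p in passages])

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        scores = self.matrix @ embed(query)       # dot product of unit vectors = cosine
        top = np.argsort(scores)[::-1][:k]
        return [self.passages[i] for i in top]

def build_prompt(question: str, contexts: list[str]) -> str:
    # Force reliance on external knowledge only: answer from the retrieved
    # passages, or explicitly report a knowledge gap.
    joined = "\n".join(f"- {c}" for c in contexts)
    return (
        "Answer the question using ONLY the passages below. If they do not "
        "contain the answer, reply 'insufficient information'.\n"
        f"Passages:\n{joined}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    index = DenseIndex([
        "Astana is the capital of Kazakhstan.",
        "Llama 2 was released in 7B, 13B and 70B parameter sizes.",
    ])
    question = "What is the capital of Kazakhstan?"
    prompt = build_prompt(question, index.retrieve(question, k=2))
    print(prompt)
    # answer = llm.generate(prompt)  # hypothetical call to Llama 2 or another LLM
```

Constraining the prompt to the retrieved passages is what the three evaluated capabilities can be read as probing: noise robustness when irrelevant passages enter the context, knowledge gap detection when the index lacks the answer, and external truth integration when a retrieved passage conflicts with the model's parametric memory.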

Funder

Ministry of Science and Higher Education of the Republic of Kazakhstan

Publisher

MDPI AG
