BACKGROUND
The rapid evolution of large language models (LLMs), such as BERT and GPT, has driven significant advances in natural language processing. These models are increasingly integrated into a wide range of applications, including mental health support. However, the credibility of LLMs in providing reliable and explainable mental health information and support remains underexplored.
OBJECTIVE
This scoping review aims to systematically explore and map the factors influencing the credibility of LLMs in mental health support. Specifically, the review will assess the reliability and explainability of LLMs in this context, as well as the ethical implications of their use.
METHODS
The review will follow the PRISMA Extension for Scoping Reviews (PRISMA-ScR) and the Joanna Briggs Institute (JBI) methodology. A comprehensive search will be conducted in databases including PsycINFO, MEDLINE (via PubMed), Web of Science, IEEE Xplore, and the ACM Digital Library. Peer-reviewed studies published in English from 2019 onward will be included. The Population-Concept-Context (PCC) framework will guide the inclusion criteria. Two independent reviewers will screen studies and extract data, resolving discrepancies through discussion. Data will be synthesized and presented descriptively.
RESULTS
The review will map the current evidence on the credibility of LLMs in mental health support. It will identify factors influencing the reliability and explainability of these models and discuss ethical considerations for their use. The findings will provide insights for practitioners, researchers, policymakers, and users.
CONCLUSIONS
This scoping review will fill a critical gap in the literature by systematically examining the credibility of LLMs in mental health support. The results will inform future research, practice, and policy development, supporting the responsible integration of LLMs into mental health services.