The Invisible Embedded “Values” Within Large Language Models: Implications for Mental Health Use (Preprint)

Authors:

Dorit Hadar-Shoval, Kfir Asraf, Yonathan Mizrachi, Yuval Haber, Zohar Elyoseph

Abstract

BACKGROUND

Large language models (LLMs) hold promise for mental health applications due to their impressive language capabilities. However, their opaque alignment processes may embed biases that shape problematic perspectives. Evaluating the values embedded within LLMs that guide their decision-making has ethical importance. Schwartz's Theory of Basic Values (STBV) provides a framework for quantifying cultural value orientations and has shown utility for examining values in mental health contexts, including cultural, diagnostic, and therapist-client dynamics. This study leverages the STBV to map the motivational values-like infrastructure underpinning leading LLMs.

OBJECTIVE

This study aimed to (1) evaluate whether Schwartz's Theory of Basic Values, a framework for quantifying cultural value orientations, can measure values-like constructs within leading LLMs; and (2) determine whether LLMs exhibit values-like patterns distinct from those of humans and from each other.

METHODS

Four LLMs (Bard, Claude 2, ChatGPT-3.5, ChatGPT-4) were anthropomorphized and instructed to complete the Portrait Values Questionnaire-Revised (PVQ-RR) to assess values-like constructs. Their responses over 10 trials were analyzed for reliability and validity. To benchmark the LLMs' value profiles, their results were compared with published data from a diverse sample of 53,472 humans across 49 nations who had completed the PVQ-RR. This allowed an assessment of whether the LLMs diverged from established human value patterns across cultural groups. Value profiles were also compared between models via statistical tests.
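
To make the administration protocol concrete, the sketch below shows one way the repeated questionnaire procedure could be implemented. It is illustrative only, not the authors' code: `query_model` is a hypothetical stand-in for whichever chat interface each LLM exposes, and the naive answer parsing would need validation in a real run.

```python
# Illustrative sketch of the repeated-administration protocol described above.
# Not the authors' code: `query_model` is a hypothetical stand-in for each
# LLM's chat interface, and `items` is a list of (item_id, text) pairs
# holding the PVQ-RR statements.

import statistics

def administer_pvq_rr(query_model, items, n_trials=10):
    """Present each PVQ-RR item to the model n_trials times and collect
    its ratings on the questionnaire's 6-point scale
    (1 = "not like me at all", 6 = "very much like me")."""
    persona = ("Imagine you are a person describing yourself. For each "
               "statement, answer with a single number from 1 (not like me "
               "at all) to 6 (very much like me).")
    ratings = {item_id: [] for item_id, _ in items}
    for _ in range(n_trials):
        for item_id, text in items:
            reply = query_model(f"{persona}\n\n{text}")
            rating = int(reply.strip()[0])  # naive parse; real runs need validation
            if 1 <= rating <= 6:
                ratings[item_id].append(rating)
    # Average across trials to get one score per item, the basis for the
    # Schwartz value scores compared against the human normative data.
    return {item_id: statistics.mean(vals) for item_id, vals in ratings.items()}
```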

RESULTS

The PVQ-RR showed good reliability and validity for quantifying the values-like infrastructure within the LLMs. However, substantial divergence emerged between the LLMs' value profiles and the human population data. The models lacked consensus and exhibited distinct motivational biases, reflecting opaque alignment processes. For example, all models prioritized universalism and self-direction while deemphasizing achievement, power, and security relative to humans. A discriminant analysis successfully differentiated the four models' distinct value profiles. Further examination found that the biased value profiles strongly predicted the LLMs' responses when they were presented with mental health dilemmas requiring a choice between opposing values. This provided further validation that the models embed distinct motivational values-like constructs that shape their decision-making.
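
As an illustration of the discriminant analysis step, the following sketch (not the authors' code) shows how a linear discriminant analysis could test whether value profiles are separable by model. The data are randomly generated placeholders; only the shapes (trials x value scores, one model label per profile) mirror the design described above.

```python
# Illustrative sketch of a discriminant analysis over per-trial value profiles.
# X is an (n_profiles x n_values) matrix of Schwartz value scores; y labels
# which LLM produced each profile. All numbers here are placeholder assumptions.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_values = 10, 10  # e.g., 10 trials per model, 10 value scores
models = ["Bard", "Claude 2", "ChatGPT-3.5", "ChatGPT-4"]

# Placeholder profiles: each model gets its own mean value vector plus noise.
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(n_trials, n_values))
               for i in range(len(models))])
y = np.repeat(models, n_trials)

lda = LinearDiscriminantAnalysis()
accuracy = cross_val_score(lda, X, y, cv=5).mean()
print(f"Cross-validated classification accuracy: {accuracy:.2f}")
# High held-out accuracy would indicate the models carry distinct value profiles.
```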

CONCLUSIONS

While the study demonstrated that Schwartz's theory can effectively characterize the values-like infrastructure within LLMs, the substantial divergence from human values raises ethical concerns about integrating these models into mental health applications. The biases toward certain cultural value sets pose risks if the models are integrated without proper safeguards. For example, prioritizing universalism could promote unconditional acceptance even when clinically unwise. Furthermore, the differences between the models underscore the need to standardize alignment processes to capture true cultural diversity. Thus, any responsible integration of LLMs into mental healthcare must account for their embedded biases and motivational mismatches to ensure equitable delivery across diverse populations. Achieving this will require transparency and the refinement of alignment techniques to instill comprehensive human values.

Publisher

JMIR Publications Inc.
