Abstract
The safety of large language models (LLMs) as mental health chatbots is not fully established. This study evaluated how publicly available ChatGPT conversational agents escalated risk when presented with prompts of increasing depression severity and suicidality. On average, agents first referred the user to a human at the midpoint of the escalating prompt sequence. However, most agents only definitively recommended professional help at the highest level of risk, and few included crisis resources such as suicide hotlines. The results suggest current LLMs may fail to escalate mental health risk scenarios appropriately. More rigorous testing and oversight are needed before deployment in mental healthcare settings.
Publisher
Cold Spring Harbor Laboratory
Cited by
2 articles.