Affiliation:
1. School of Health Sciences, University of Southampton, Southampton, Hampshire, UK
Accessible Summary
What is Known on the Subject?
Artificial intelligence (AI) is freely available, responds to very basic text input (such as a question) and can now create a wide range of outputs, communicating in many languages or art forms. AI platforms such as OpenAI's ChatGPT can now generate passages of text that could be used to create plans of care for people with mental health needs. As such, AI output can be difficult to distinguish from human output, and there is a risk that its use could go unnoticed.
What this Paper Adds to Existing Knowledge?
Whilst it is known that AI can produce text or pass pre‐registration health‐profession exams, it is not known if AI can produce meaningful results for care delivery.
We asked ChatGPT basic questions about a fictitious person who presents with self-harm and then evaluated the quality of the output. We found that the output could look reasonable to a layperson, but it contained significant errors and ethical issues. There is potential for harm to people in care if AI is used without an expert correcting or removing these errors.
What are the Implications for Practice?
We suggest there is a risk that AI could cause harm if used in direct care delivery. There is a lack of policy and research to safeguard people receiving care, and these need to be in place before AI is used in this way. Key aspects of the mental health nurse's role are relational, and in its current form AI use may diminish mental health nurses' ability to provide safe care.
Many aspects of mental health recovery are linked to relationships and social engagement; however, AI cannot provide these and may push the people most in need of help further away from services that assist recovery.
Abstract
Background
Artificial intelligence (AI) is being increasingly used and discussed in care contexts. ChatGPT has gained significant attention in popular and scientific literature, although how ChatGPT can be used in care delivery is not yet known.
Aims
To use artificial intelligence (ChatGPT) to create a mental health nursing care plan and evaluate the quality of the output against the authors' clinical experience and existing guidance.
Materials & Methods
Basic text commands were input into ChatGPT about a fictitious person called 'Emily' who presents with self-injurious behaviour. The output from ChatGPT was then evaluated against the authors' clinical experience and current (national) care guidance.
Results
ChatGPT was able to provide a care plan that incorporated some principles of dialectical behaviour therapy, but the output had significant errors and limitations; there is therefore a reasonable likelihood of harm if it were used in this way.
Discussion
AI use is increasing in direct-care contexts through the use of chatbots or other means. However, AI can inhibit clinician-to-care-recipient engagement, 'recycle' existing stigma and introduce error, which may diminish the ability of care to uphold personhood and therefore lead to significant avoidable harms.
Conclusion
Use of AI in this context should be avoided until policy and guidance can safeguard the wellbeing of care recipients and the sophistication of AI output has increased. Given ChatGPT's ability to provide superficially reasonable outputs, there is a risk that errors may go unnoticed, increasing the likelihood of patient harm. Further research evaluating AI output is needed to consider how AI may be used safely in care delivery.
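For readers interested in how prompts of this kind might be reproduced or evaluated at scale, the sketch below shows one way a vignette-style prompt about a fictitious person could be sent to a ChatGPT model programmatically via OpenAI's Python client. This is a minimal, hypothetical illustration only: the prompt wording, model name and structure are assumptions made here for clarity, and the study itself used basic text commands typed into the ChatGPT interface rather than this code.

```python
# Minimal sketch: sending a vignette-style prompt to a ChatGPT model.
# The prompt text and model name are illustrative assumptions, not the
# authors' exact inputs; any output would still require expert clinical review.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical vignette about a fictitious person, 'Emily'.
prompt = (
    "Emily is a fictitious person who presents with self-injurious behaviour. "
    "Write a mental health nursing care plan for Emily."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model for illustration
    messages=[{"role": "user", "content": prompt}],
)

# The returned text is the kind of output the authors evaluated against
# their clinical experience and national care guidance.
print(response.choices[0].message.content)
```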
Subject
Psychiatric Mental Health
Cited by
19 articles.