BACKGROUND
Artificial intelligence (AI) tools hold much promise for mental healthcare by increasing the scalability and accessibility of care. However, current development and evaluation practices for AI tools in mental healthcare limit their meaningful evaluation in healthcare contexts and, thereby, the practical usefulness of such tools for professionals and clients alike.
OBJECTIVE
To move toward meaningful evaluation of AI tools in eMental health, this article demonstrates the evaluation of an AI monitoring tool that detects users' need for more intensive care in an online grief intervention for older mourners.
METHODS
We took a threefold evaluation approach: (1) we used the F1-score to evaluate the tool's capacity to classify user monitoring parameters, including affect, as indicating that a user (a) needs more intensive support or (b) can be recommended to continue using the online grief intervention as is; (2) we used linear regression to assess the predictive value of users' monitoring parameters for clinical changes in grief, depression, and loneliness over the course of the 10-week intervention; and (3) we collected qualitative experience data from eCoaches (N=4) who incorporated the monitoring into their weekly e-mail guidance during the intervention.
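For reference, the F1-score used in step (1) is the standard harmonic mean of precision and recall, assuming here that the "needs more intensive support" class is treated as the positive class, with true positives (TP), false positives (FP), and false negatives (FN):

```latex
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
```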
RESULTS
(1) Based on N=174 binary recommendation decisions, the F1-score of the monitoring tool was 0.91. (2) Because depression and loneliness scores showed minimal variation from before to after the 10-week intervention, only one linear regression was conducted, with the pre-post difference score in grief as the dependent variable and, as predictors, participants' mean score on the monitoring assessment tool and the estimate and slope of a growth curve fitted to each participant's response pattern on that tool. Only the mean score had predictive value for the observed change in grief (b=1.19, SE 0.33, t16=3.58, P=.002). (3) The eCoaches appreciated the monitoring tool as (a) an opportunity to confirm the impression of a participant they had formed during a clinical interview before the intervention, (b) a source for personalizing their e-mail guidance, and (c) an opportunity to detect deterioration in participants' mental health during the intervention.
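As a minimal sketch of how evaluation steps (1) and (2) could be reproduced, assuming a table of labeled recommendation decisions and a per-participant table of monitoring summaries (all file and column names below are hypothetical, not taken from the study):

```python
# Minimal sketch of evaluation steps (1) and (2).
# File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import f1_score

# (1) F1-score over binary recommendation decisions
# (1 = "needs more intensive support", 0 = "continue as is").
decisions = pd.read_csv("decisions.csv")  # hypothetical file
f1 = f1_score(decisions["clinician_label"], decisions["tool_prediction"])
print(f"F1 = {f1:.2f}")

# (2) Linear regression: pre-post grief change regressed on the mean
# monitoring score and per-participant growth-curve estimate and slope.
participants = pd.read_csv("participants.csv")  # hypothetical file
X = sm.add_constant(participants[["mean_monitoring", "gc_estimate", "gc_slope"]])
model = sm.OLS(participants["grief_change"], X).fit()
print(model.summary())
```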
CONCLUSIONS
Each evaluation approach used in this article comes with its own set of limitations and challenges, including (a) skewed class distributions in prediction tasks based on real-life mental health data (illustrated below) and (b) the difficulty of choosing meaningful statistical analyses when clinical trial designs were not targeted at evaluating AI tools. However, combining multiple evaluation methods provides a good basis for drawing clinically meaningful conclusions and for making recommendations that improve the clinical value of a specific AI monitoring tool for its intended clinical context.
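To illustrate limitation (a) with hypothetical numbers: if only 10 of 100 decisions truly require more intensive support, a degenerate tool that always recommends continuing as is achieves 90% accuracy while missing every at-risk user; class-sensitive metrics such as the F1-score expose this failure:

```latex
\mathrm{Accuracy} = \frac{90}{100} = 0.9, \qquad
\mathrm{Recall} = \frac{0}{10} = 0 \;\Rightarrow\; F_1 = 0
```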