Abstract
This proof-of-concept study aimed to assess the ability of a mobile application and cloud analytics software solution to extract facial expression information from participant selfie videos. This is one component of a solution aimed at extracting possible health outcome measures based on expression, voice acoustics and speech sentiment from video diary data provided by patients. Forty healthy volunteers viewed 21 validated images from the International Affective Picture System (IAPS) database through a mobile app, which simultaneously captured video footage of their face using the selfie camera. Images were intended to be associated with the following emotional responses: anger, disgust, sadness, contempt, fear, surprise and happiness. Both valence and arousal scores estimated from the video footage associated with each image were adequate predictors of the IAPS image scores (p < 0.001 and p = 0.04 respectively). 12.2% of images were categorised as containing a positive expression response in line with the target expression, with happiness and sadness responses providing the greatest frequency of responders (41.0% and 21.4% respectively). 71.2% of images were associated with no change in expression. This proof-of-concept study provides early encouraging findings that changes in facial expression can be detected when they exist. Combined with voice acoustical measures and speech sentiment analysis, this may lead to novel measures of health status in patients using a video diary in indications including depression, schizophrenia, autism spectrum disorder and PTSD, amongst other conditions.
Funder
Nottingham Trent University
Publisher
Springer Science and Business Media LLC
Subject
Health Information Management,Health Informatics,Information Systems,Medicine (miscellaneous)
Cited by
1 article.