Abstract
Introduction
Social isolation has been found to be a significant risk factor for health outcomes, on par with traditional risk factors. Isolation is characterised by reduced social interaction, which can be detected acoustically. To detect it, we created a machine learning algorithm called SocialBit. SocialBit runs on a smartwatch and detects minutes of social interaction from the vocal features of ambient audio samples, without natural language processing.

Methods and analysis
In this study, we aim to validate the accuracy of SocialBit in stroke survivors with varying speech, cognitive and physical deficits. Training and testing on persons with diverse neurological abilities allows SocialBit to serve as a universally accessible social sensor. We are recruiting 200 patients and following them for up to 8 days during hospitalisation and rehabilitation while they wear a SocialBit-equipped smartwatch and engage in naturalistic daily interactions. Human observers tally the interactions via a video livestream to establish the ground truth against which SocialBit's performance is analysed. We also examine the association of social interaction time with stroke characteristics and outcomes. If successful, SocialBit would be the first social sensor available on commercial devices for persons with diverse abilities.

Ethics and dissemination
This study has received ethical approval from the Institutional Review Board of Mass General Brigham (Protocol #2020P003739). The results of this study will be published in a peer-reviewed journal.
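To illustrate the kind of pipeline the abstract describes, the following is a minimal sketch of per-minute social-interaction detection from vocal features of ambient audio, with no transcription or natural language processing. The feature set (MFCCs, loudness, zero-crossing rate) and the random-forest classifier are illustrative assumptions, not the published SocialBit model.

```python
# Hypothetical sketch: classify windows of ambient audio as social
# interaction or not, using only acoustic features (no NLP).
# Feature and model choices are assumptions for illustration.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

SR = 16000  # assumed microphone sample rate

def vocal_features(audio: np.ndarray, sr: int = SR) -> np.ndarray:
    """Summarise one audio window as a fixed-length feature vector."""
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)    # spectral shape
    rms = librosa.feature.rms(y=audio)                        # loudness
    zcr = librosa.feature.zero_crossing_rate(y=audio)         # voicing proxy
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           [rms.mean(), zcr.mean()]])

# Toy training data: white noise stands in for real labelled windows;
# 5 s windows keep the example fast (the real system scores minutes).
rng = np.random.default_rng(0)
X = np.stack([vocal_features(rng.standard_normal(SR * 5).astype(np.float32))
              for _ in range(20)])
y = rng.integers(0, 2, size=20)  # 1 = social interaction, 0 = none

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
window = rng.standard_normal(SR * 5).astype(np.float32)
print("interaction detected?", bool(clf.predict([vocal_features(window)])[0]))
```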
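The validation step compares the detector's per-minute output with observer tallies. A minimal sketch, assuming both streams are aligned binary minute-level vectors; the synthetic data and the metrics shown (sensitivity, specificity, Cohen's kappa) are illustrative, not the study's actual statistical plan.

```python
# Hypothetical sketch: agreement between SocialBit predictions and
# observer-coded ground truth, one label per minute.
import numpy as np
from sklearn.metrics import cohen_kappa_score, recall_score

rng = np.random.default_rng(1)
truth = rng.integers(0, 2, size=480)                       # 8 h of coded minutes
pred = np.where(rng.random(480) < 0.9, truth, 1 - truth)   # 90%-accurate detector

sensitivity = recall_score(truth, pred)               # interaction minutes caught
specificity = recall_score(truth, pred, pos_label=0)  # quiet minutes kept quiet
kappa = cohen_kappa_score(truth, pred)                # chance-corrected agreement
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} kappa={kappa:.2f}")
```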
Funder
National Institutes of Health