Abstract
Hand function is a central determinant of independence after stroke. Measuring hand use in the home environment is necessary to evaluate the impact of new interventions, and calls for novel wearable technologies. Egocentric video can capture hand-object interactions in context, as well as show how more-affected hands are used during bilateral tasks (for stabilization or manipulation). Automated methods are required to extract this information. The objective of this study was to use artificial intelligence-based computer vision to classify hand use and hand role from egocentric videos recorded at home after stroke. Twenty-one stroke survivors participated in the study. A random forest classifier, a SlowFast neural network, and the Hand Object Detector neural network were applied to identify hand use and hand role at home. Leave-One-Subject-Out Cross-Validation (LOSOCV) was used to evaluate the performance of the three models. Between-group differences of the models were calculated based on the Matthews correlation coefficient (MCC). For hand use detection, the Hand Object Detector had significantly higher performance than the other models. The macro average MCCs using this model in the LOSOCV were 0.50 ± 0.23 for the more-affected hands and 0.58 ± 0.18 for the less-affected hands. Hand role classification had macro average MCCs in the LOSOCV that were close to zero for all models. Using egocentric video to capture the hand use of stroke survivors at home is technically feasible. Pose estimation to track finger movements may be beneficial to classifying hand roles in the future.
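As a point of reference for the metric reported above, the following sketch shows how the MCC would be computed for one held-out subject in the LOSOCV. The labels here are hypothetical, illustrative values (not data from the study), assuming a binary per-frame coding of whether the more-affected hand is in use:

```python
import math

# Hypothetical per-frame labels for one held-out LOSOCV fold:
# 1 = hand-object interaction detected, 0 = no interaction.
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Confusion-matrix counts.
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

# Matthews correlation coefficient: ranges from -1 to +1, where 0 is
# chance-level agreement. This is why near-zero MCCs for hand role
# classification indicate the models did not learn that task.
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
)
print(round(mcc, 2))  # → 0.5
```

The per-subject MCCs from each fold would then be averaged to obtain the macro average MCC values reported in the abstract.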
Funder
Heart and Stroke Foundation of Canada
Natural Sciences and Engineering Research Council of Canada
Publisher
Public Library of Science (PLoS)
Cited by
1 article.