3D Video Tracking Technology in the Assessment of Orofacial Impairments in Neurological Disease: Clinical Validation

Author:

Deniz Jafari (1,2), Leif Simmatis (1,2), Diego Guarin (3), Liziane Bouvier (1,4), Babak Taati (2), Yana Yunusova (1,2,4)

Affiliation:

1. Department of Speech-Language Pathology, Rehabilitation Sciences Institute, University of Toronto, Ontario, Canada

2. KITE, Toronto Rehabilitation Institute, University Health Network, Ontario, Canada

3. University of Florida, Gainesville

4. Hurvitz Brain Sciences Program, Sunnybrook Research Institute, Toronto, Ontario, Canada

Abstract

Purpose: This study sought to determine whether clinically interpretable kinematic features extracted automatically from three-dimensional (3D) videos were correlated with corresponding perceptual clinical orofacial ratings in individuals with orofacial impairments due to neurological disorders.

Method: Forty-five participants (19 diagnosed with motor neuron diseases [MNDs] and 26 poststroke) performed two nonspeech tasks (mouth opening and lip spreading) and one speech task (repetition of the sentence "Buy Bobby a Puppy") while being video-recorded in a standardized laboratory setting. An expert clinician (a speech-language pathologist) rated the color video recordings of participants on the severity of three orofacial measures: symmetry, range of motion (ROM), and speed. Clinically interpretable 3D kinematic features linked to symmetry, ROM, and speed were extracted automatically from the video recordings of each of the three tasks using a deep facial landmark detection and tracking algorithm. Spearman correlations were used to identify features that were significantly correlated (p < .05) with their corresponding clinical scores. These clinically significant kinematic features were then entered into multivariate regression models to predict the overall orofacial impairment severity score.

Results: Several kinematic features extracted from 3D video recordings were associated with their corresponding perceptual clinical scores, indicating the clinical validity of these automatically derived measures. Different patterns of significant features were observed between the MND and poststroke groups; in both cases, these differences aligned with clinical expectations.

Conclusions: The results show that kinematic features extracted automatically from simple clinical tasks can capture characteristics used by clinicians during assessments. These findings support the clinical validity of video-based automatic extraction of kinematic features.
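The analysis pipeline described in the abstract (correlate each kinematic feature with its clinical rating, retain features significant at p < .05, then regress overall severity on the retained features) can be sketched as follows. This is a minimal illustration, not the authors' code: the feature names, simulated data, and use of ordinary least squares are assumptions made for the example.

```python
# Hypothetical sketch of the abstract's analysis steps:
# (1) Spearman-correlate each kinematic feature with its clinical rating,
# (2) keep features significant at p < .05,
# (3) fit a multivariate regression predicting overall severity.
# Feature names and data below are simulated, not from the study.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 45  # participants, matching the study's sample size

# Simulated kinematic features (one column per feature)
features = {
    "ROM_mouth_open": rng.normal(size=n),
    "speed_lip_spread": rng.normal(size=n),
    "asymmetry_sentence": rng.normal(size=n),
}
# Simulated clinical ratings; one is made to track its feature so the
# selection step has something to find
ratings = {
    "ROM_mouth_open": features["ROM_mouth_open"] + 0.3 * rng.normal(size=n),
    "speed_lip_spread": rng.normal(size=n),
    "asymmetry_sentence": rng.normal(size=n),
}

# Steps 1-2: retain features whose Spearman correlation with the
# corresponding clinical score reaches p < .05
selected = []
for name in features:
    rho, p = spearmanr(features[name], ratings[name])
    if p < 0.05:
        selected.append(name)

# Step 3: multivariate regression from the retained features to a
# (simulated) overall orofacial impairment severity score
X = np.column_stack([features[f] for f in selected])
severity = X.sum(axis=1) + 0.2 * rng.normal(size=n)
model = LinearRegression().fit(X, severity)
print("selected features:", selected)
```

In the study the predictors were the automatically extracted 3D kinematic features and the outcome was the clinician's overall severity rating; the sketch only mirrors the shape of that two-stage procedure.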

Publisher

American Speech-Language-Hearing Association

Subject

Speech and Hearing, Linguistics and Language, Language and Linguistics


Cited by 2 articles.

1. Validation of Camera Networks Used for the Assessment of Speech Movements. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2024-06-11.

2. A multimodal approach to automated hierarchical assessment of bulbar involvement in amyotrophic lateral sclerosis. Frontiers in Neurology, 2024-05-21.
