Affiliation:
1. University Health Network
2. Winterlight Labs, Inc.
3. University of Toronto
Abstract
Home-based speech assessments have the potential to dramatically improve ALS clinical practice and to facilitate patient stratification for ALS clinical trials. Acoustic speech analysis has demonstrated the ability to capture a variety of relevant speech motor impairments, but implementation has been hindered both by the nature of lab-based assessments (requiring travel and time for patients) and by the opacity of some acoustic feature analysis methods. These and other challenges have also made it difficult to distinguish between different ALS disease stages/severities. Validation of remote-capable acoustic analysis tools could enable detection of early signs of ALS, and these tools could be deployed to screen and monitor patients without requiring clinic visits. Here, we sought to determine whether acoustic features gathered using a remote-capable assessment app could detect ALS as well as different levels of speech impairment severity resulting from ALS. Speech samples (readings of a standardized, 99-word passage) from 119 ALS patients with varying degrees of disease severity and from 22 neurologically healthy participants were analyzed, and 53 acoustic features were extracted. Patients were stratified into early and late stages of disease (ALS-early/ALS-E and ALS-late/ALS-L) based on the ALS Functional Rating Scale - Revised bulbar score (FRS-bulb). Data were analyzed using a sparse Bayesian logistic regression classifier. The current relatively small set of acoustic features distinguished ALS patients from controls well (area under the receiver operating characteristic curve/AUROC = 0.85), separated ALS-E patients from control participants well (AUROC = 0.78), and separated ALS-E from ALS-L patients reasonably well (AUROC = 0.70). These results highlight the potential of remote acoustic analyses to detect and stratify ALS.
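For illustration only, the sketch below (not the authors' code) shows one way such an analysis could be set up in Python: patients are split into ALS-E and ALS-L groups by an ALSFRS-R bulbar subscore, and each pairwise contrast is scored with cross-validated AUROC using an L1-penalized logistic regression as a rough stand-in for the sparse Bayesian classifier. The file name, column names, group labels, and the bulbar-score cutoff are assumptions, not details taken from the study.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# One row per participant: 53 acoustic features plus group metadata.
# "acoustic_features.csv", the "feat_" column prefix, the "group" and
# "frs_bulbar" columns, and the cutoff of 11 are all hypothetical.
df = pd.read_csv("acoustic_features.csv")
feature_cols = [c for c in df.columns if c.startswith("feat_")]

# Stratify ALS patients into early (ALS-E) vs. late (ALS-L) stages by the
# ALSFRS-R bulbar subscore; all other participants are treated as controls.
df["stage"] = np.select(
    [(df["group"] == "ALS") & (df["frs_bulbar"] >= 11), df["group"] == "ALS"],
    ["ALS-E", "ALS-L"],
    default="Control",
)

def cv_auroc(X, y):
    """Cross-validated AUROC for a binary contrast."""
    clf = make_pipeline(
        StandardScaler(),
        # An L1 penalty yields sparse weights, loosely mirroring a sparse
        # Bayesian logistic regression; it is not the same model.
        LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
    )
    proba = cross_val_predict(
        clf, X, y,
        cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
        method="predict_proba",
    )[:, 1]
    return roc_auc_score(y, proba)

def contrast(label_a, label_b):
    """AUROC for separating two participant groups on the acoustic features."""
    mask = df["stage"].isin([label_a, label_b])
    X = df.loc[mask, feature_cols].to_numpy()
    y = (df.loc[mask, "stage"] == label_a).astype(int).to_numpy()
    return cv_auroc(X, y)

# Overall ALS vs. control, then the staged contrasts reported in the abstract.
mask_all = df["group"].isin(["ALS", "Control"])
y_all = (df.loc[mask_all, "group"] == "ALS").astype(int).to_numpy()
print("ALS vs. control:  ", cv_auroc(df.loc[mask_all, feature_cols].to_numpy(), y_all))
print("ALS-E vs. control:", contrast("ALS-E", "Control"))
print("ALS-E vs. ALS-L:  ", contrast("ALS-E", "ALS-L"))
```

In practice, feature extraction, the choice of classifier prior, and the cross-validation scheme would all need to follow the study's own protocol; the above only illustrates the overall stratify-then-classify-then-score workflow.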
Publisher
Research Square Platform LLC