BACKGROUND
With the aging global population and the rising burden of Alzheimer’s disease and related dementias (ADRD), there is a growing focus on identifying mild cognitive impairment (MCI) to enable timely interventions that could potentially slow the onset of clinical dementia. Speech production is a cognitively complex task that engages multiple cognitive domains. The ease of audio data collection makes human speech a potentially cost-effective and noninvasive tool for cognitive assessment.
OBJECTIVE
This study aimed to construct a machine learning pipeline incorporating speaker diarization, feature extraction, feature selection, and classification to identify a set of acoustic features derived from voice recordings with strong MCI detection capability.
METHODS
The study included 100 MCI cases and 100 cognitively normal (CN) controls from the Framingham Heart Study, matched for age, sex, and education. Participants' spoken responses to neuropsychological test questions were recorded, and the audio was processed to isolate segments of each participant's voice from recordings that contained the voices of both testers and participants. A comprehensive set of 6385 acoustic features was then extracted from these voice segments using the OpenSMILE and Praat software packages. A random forest model was subsequently constructed to classify cognitive status using the features that differed significantly between the MCI and CN groups. The MCI detection performance achieved with different lengths of audio was also examined.
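To make the pipeline concrete, the sketch below illustrates the feature extraction, significance-based feature selection, and classification steps using the opensmile and scikit-learn Python packages. The chosen feature set, the Mann-Whitney U test for group comparison, and all hyperparameters are illustrative assumptions, not the study's exact configuration.

```python
# Minimal sketch of the acoustic-feature pipeline (assumed configuration, for illustration only).
import numpy as np
import opensmile
from scipy.stats import mannwhitneyu
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

# Extract a large set of acoustic functionals from each participant's voice segments.
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.ComParE_2016,      # ~6373 functionals; Praat features would be added separately
    feature_level=opensmile.FeatureLevel.Functionals,
)

def extract_features(wav_paths):
    """Return an (n_participants, n_features) array of OpenSMILE functionals."""
    return np.vstack([smile.process_file(p).to_numpy().ravel() for p in wav_paths])

def select_significant(X, y, alpha=0.05):
    """Keep features that differ between MCI (y=1) and CN (y=0) groups by a Mann-Whitney U test."""
    return np.array([j for j in range(X.shape[1])
                     if mannwhitneyu(X[y == 1, j], X[y == 0, j]).pvalue < alpha])

def evaluate(X, y):
    """Cross-validated AUC of a random forest trained on the selected features."""
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
    return roc_auc_score(y, proba)
```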
RESULTS
An optimal subset of 29 features was identified, which resulted in an area under the receiver operating characteristic curve (AUC) of 0.87, with a 90% confidence interval from 0.82 to 0.93. The most important acoustic feature for MCI classification was the number of filled pauses (importance score=0.09). There was no substantial difference in the performance of models trained on acoustic features derived from different lengths of voice recordings.
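For illustration, a percentile bootstrap is one common way to obtain a 90% confidence interval for an AUC, and feature importances (such as the score reported for the number of filled pauses) can be read directly from the fitted random forest; the resampling scheme sketched below is an assumption, not necessarily the study's documented procedure.

```python
# Assumed percentile-bootstrap CI for the ROC AUC (illustrative, not the study's exact method).
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.10, seed=0):
    """Return (lower, upper) percentile bootstrap bounds for the AUC; y_true and y_score are numpy arrays."""
    rng = np.random.default_rng(seed)
    aucs = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if len(set(y_true[idx])) < 2:     # skip resamples with only one class
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    return tuple(np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)]))

# Feature importances come directly from the fitted model:
#   clf.fit(X_selected, y)
#   importances = clf.feature_importances_   # e.g., importance of the filled-pause count
```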
CONCLUSIONS
This study showcases the potential of monitoring changes in nonsemantic, acoustic features of speech as a means of early ADRD detection and motivates future work on using human speech as a measure of brain health.