Affiliation:
1. Department of Mechanical Engineering, San Diego State University, San Diego, CA 92182, USA
Abstract
This study demonstrates the feasibility of a new wireless electroencephalography (EEG)–electromyography (EMG) wearable approach that captures characteristic mixed EEG-EMG patterns during mouth movements, with the goal of detecting distinct movement patterns in people with severe speech impairments. The paper describes a mouth-movement detection method based on a new signal-processing technique suited to sensor integration and machine learning applications, and it examines the relationship between mouth motion and brainwave activity in an effort to develop nonverbal interfaces for people who have lost the ability to communicate, such as those with paralysis. A set of experiments was conducted to assess the efficacy of the proposed feature-selection method, and the resulting classification of mouth movements was found to be meaningful. EEG-EMG signals were also collected while participants silently mouthed phonemes, and a few-shot neural network trained on these signals classified the phonemes with 95% accuracy. This approach to collecting and processing bioelectrical signals for phoneme recognition offers a promising avenue for future communication aids.
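The abstract does not specify the few-shot architecture used, so the following is only an illustrative sketch of the general few-shot idea: a prototypical (nearest-centroid) classifier over per-trial feature vectors. The phoneme labels, feature dimension, and noise model below are all hypothetical stand-ins, not the paper's actual data or network.

```python
import numpy as np

# Hypothetical sketch: nearest-prototype ("prototypical") few-shot
# classification on synthetic stand-ins for EEG-EMG feature vectors.
rng = np.random.default_rng(0)

def make_features(center, n, dim=8, noise=0.1):
    """Generate n synthetic per-trial feature vectors around a class center."""
    return center + noise * rng.standard_normal((n, dim))

def prototypes(support_sets):
    """Compute one prototype (mean feature vector) per phoneme class."""
    return {label: feats.mean(axis=0) for label, feats in support_sets.items()}

def classify(query, protos):
    """Assign the query trial to the nearest class prototype (Euclidean)."""
    return min(protos, key=lambda lab: np.linalg.norm(query - protos[lab]))

# Three hypothetical phoneme classes with 5 support trials each ("5-shot").
centers = {"pa": np.zeros(8), "ta": np.ones(8), "ka": -np.ones(8)}
support = {lab: make_features(c, 5) for lab, c in centers.items()}
protos = prototypes(support)

# Classify a new silently-mouthed trial drawn near the "ta" center.
query = make_features(centers["ta"], 1)[0]
print(classify(query, protos))
```

In a real pipeline the mean-of-features prototype would be replaced by an embedding network trained across episodes, but the classification step at inference time has this same nearest-prototype form.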