Abstract
Brain-Computer Interfaces (BCI) will revolutionize the way people with impaired speech production can communicate. While recent studies confirm the possibility of decoding imagined speech from pre-recorded intracranial neurophysiological signals, current efforts focus on collecting vast amounts of data to train classifiers rather than on exploring how the individual’s brain adapts to improve BCI control. This aspect is important given the known problem of “BCI illiteracy”: the inability of some individuals to operate a BCI. The issue can be investigated by providing real-time feedback that allows users to identify the best control strategy. In this study, we trained 15 healthy participants to operate a simple binary BCI system based on electroencephalography (EEG) signals through syllable imagery for five consecutive days. We explored whether BCI control improves with training and characterized the underlying neural dynamics, both in terms of EEG power changes and of the neural features contributing to real-time classification. Despite considerable interindividual variability in performance and learning, BCI control improved significantly from day 1 to day 5. Performance improvement was associated with a global EEG power increase in frontal theta and a focal increase in temporal low-gamma, showing that learning to operate an imagined-speech BCI involves global and local dynamical changes driven by low- and high-frequency neural features, respectively. These findings indicate that both machine and human learning must be considered to reach optimal controllability of imagined-speech BCI, and that non-invasive BCI learning can help predict the individual benefit from an invasive speech BCI and guide both electrode implantation and decoding strategies.
Publisher
Cold Spring Harbor Laboratory