Abstract
Functional near-infrared spectroscopy (fNIRS) offers a promising avenue for assessing brain function across diverse participant groups. Despite its numerous advantages, the technique often suffers from noise contamination and motion artifacts during data collection. Methods for improving fNIRS signal quality are urgently needed, especially as wearable fNIRS equipment extends data acquisition into naturalistic environments. To address these issues, we propose a generative deep learning approach to recover damaged fNIRS signals in one or more measurement channels. The model captures spatial and temporal variations in fNIRS time series by integrating multiscale convolutional layers, gated recurrent units (GRUs), and linear regression analyses. Extensive experiments on a dataset of healthy elderly individuals were conducted to assess the model's performance. Collectively, the results demonstrate that the proposed model accurately reconstructs damaged time series for individual channels while preserving intervariable relationships. Under two simulated scenarios of multichannel damage, the model maintains robust reconstruction accuracy and consistency in terms of functional connectivity. Our findings underscore the potential of generative deep learning techniques for reconstructing damaged fNIRS signals, offering a novel means of providing accurate data for clinical diagnosis and brain research.
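To make the named building blocks concrete, the sketch below illustrates (in plain numpy) the two components the abstract mentions: multiscale 1-D convolutions that extract smoothed views of a signal at several temporal scales, and a GRU cell that carries temporal context across time steps. All names, shapes, scales, and the simulated signal are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernel):
    """'Same'-padded 1-D convolution along the time axis (odd kernels only)."""
    pad = len(kernel) // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([xp[t:t + len(kernel)] @ kernel for t in range(len(x))])

def multiscale_features(x, scales=(3, 7, 15)):
    """Stack smoothed views of the signal at several kernel widths."""
    return np.stack([conv1d(x, np.ones(k) / k) for k in scales])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class GRUCell:
    """Plain-numpy GRU cell; the hidden state carries temporal context."""
    def __init__(self, n_in, n_hid):
        s = 1.0 / np.sqrt(n_hid)
        self.Wz = rng.uniform(-s, s, (n_hid, n_in + n_hid))  # update gate
        self.Wr = rng.uniform(-s, s, (n_hid, n_in + n_hid))  # reset gate
        self.Wh = rng.uniform(-s, s, (n_hid, n_in + n_hid))  # candidate
        self.n_hid = n_hid

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                 # how much to update
        r = sigmoid(self.Wr @ xh)                 # how much history to keep
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde

# Simulated single-channel fNIRS trace: slow hemodynamic wave plus noise.
T = 200
t = np.arange(T)
signal = np.sin(2 * np.pi * t / 50) + 0.3 * rng.standard_normal(T)

feats = multiscale_features(signal)               # shape: (3 scales, T)
cell = GRUCell(n_in=feats.shape[0], n_hid=8)
h = np.zeros(cell.n_hid)
for step in range(T):                             # run the GRU over time
    h = cell.step(feats[:, step], h)

print(feats.shape, h.shape)
```

In the full model described by the abstract, a regression head would map such hidden states back to the signal space of the damaged channel; here only the feature-extraction and recurrence stages are shown.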