Author:
Mody Sandeep K., Rangarajan Govindan
Abstract
Conventional Vector Autoregressive (VAR) modelling methods applied to high-dimensional neural time series data result in noisy solutions that are dense or have a large number of spurious coefficients. This reduces the speed and accuracy of auxiliary computations downstream and inflates the time required to compute functional connectivity networks by a factor that is at least inversely proportional to the true network density. As these noisy solutions have distorted coefficients, thresholding them by some criterion, statistical or otherwise, does not alleviate the problem. Obtaining a sparse representation of such data is therefore important, since it provides an efficient representation of the data and facilitates its further analysis. We propose a fast Sparse Vector Autoregressive Greedy Search (SVARGS) method that works well for high-dimensional data, even when the number of time points is relatively low, by incorporating only statistically significant coefficients. In numerical experiments, our method shows high accuracy in recovering the true sparse model. The relative absence of spurious coefficients permits accurate, stable and fast evaluation of derived quantities such as power spectrum, coherence and Granger causality. Consequently, sparse functional connectivity networks can be computed, in a reasonable time, from data comprising tens of thousands of channels/voxels. This enables a much higher-resolution analysis of functional connectivity patterns and community structures in such large networks than is possible using existing time series methods. We apply our method to EEG data, where computed network measures and community structures are used to distinguish emotional states, as well as to ADHD fMRI data, where it is used to distinguish children with ADHD from typically developing children.
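The abstract describes selecting only statistically significant VAR coefficients via a greedy search. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual SVARGS algorithm: for each target channel it greedily adds the lagged regressor that most reduces residual variance, stopping when an F-like statistic falls below a threshold. The simulated model, the threshold `f_crit`, and all variable names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a sparse VAR(1): x[t] = A_true @ x[t-1] + noise, with A_true
# mostly zero (a stand-in for sparse functional connectivity).
n, T = 5, 500
A_true = np.zeros((n, n))
A_true[0, 1] = 0.6
A_true[2, 0] = -0.5
A_true[3, 3] = 0.7
X = np.zeros((n, T))
for t in range(1, T):
    X[:, t] = A_true @ X[:, t - 1] + 0.1 * rng.standard_normal(n)

Y, Z = X[:, 1:], X[:, :-1]  # targets and lagged regressors


def greedy_sparse_var(Y, Z, f_crit=10.0):
    """Greedy forward selection per channel: repeatedly add the lagged
    regressor giving the largest drop in residual sum of squares, but only
    while an F-like significance statistic exceeds f_crit."""
    n, T1 = Y.shape
    A = np.zeros((n, n))
    for i in range(n):
        resid = Y[i].copy()
        active = []           # indices of regressors kept for channel i
        while True:
            rss_old = resid @ resid
            best_j, best_rss = None, rss_old
            for j in range(n):
                if j in active:
                    continue
                cols = active + [j]
                coef, *_ = np.linalg.lstsq(Z[cols].T, Y[i], rcond=None)
                r = Y[i] - coef @ Z[cols]
                rss = r @ r
                if rss < best_rss:
                    best_j, best_rss = j, rss
                    best_coef, best_r = coef, r
            if best_j is None:
                break
            # F-like statistic for the candidate's incremental fit.
            dof = T1 - len(active) - 1
            f_stat = (rss_old - best_rss) / (best_rss / dof)
            if f_stat < f_crit:
                break  # candidate not significant: stop adding
            active.append(best_j)
            resid = best_r
            A[i, active] = best_coef  # refit coefficients on active set
    return A


A_hat = greedy_sparse_var(Y, Z)
```

With strong true coefficients and a stringent threshold, the recovered `A_hat` is sparse and supported near the true nonzero entries; the real method's significance criterion and lag structure may differ.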
Funder
Tata Trusts
University Grants Commission
Publisher
Springer Science and Business Media LLC
Cited by: 1 article.