Abstract
Neural spiking activity is generally variable, non-stationary, and exhibits complex dependencies on covariates, such as sensory input or behavior. These dependencies have been proposed to be signatures of specific computations, and so characterizing them with quantitative rigor is critical for understanding neural computations. Approaches based on point processes provide a principled statistical framework for modeling neural spiking activity. However, currently, they only allow the instantaneous mean, but not the instantaneous variability, of responses to depend on covariates. To resolve this limitation, we propose a scalable Bayesian approach generalizing modulated renewal processes using sparse variational Gaussian processes. We leverage pathwise conditioning for computing nonparametric priors over conditional interspike interval distributions and rely on automatic relevance determination to detect lagging interspike interval dependencies beyond renewal order. After systematically validating our method on synthetic data, we apply it to two foundational datasets of animal navigation: head direction cells in freely moving mice and hippocampal place cells in rats running along a linear track. Our model exhibits competitive or better predictive power compared to state-of-the-art baselines, and outperforms them in terms of capturing interspike interval statistics. These results confirm the importance of modeling covariate-dependent spiking variability, and further analyses of our fitted models reveal rich patterns of variability modulation beyond the temporal resolution of flexible count-based approaches.
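To make the automatic relevance determination (ARD) idea concrete, below is a minimal illustrative sketch of an ARD squared-exponential kernel over covariates that include lagged interspike intervals. This is not the authors' implementation; the function name, covariate layout, and hyperparameters are assumptions chosen for illustration. With per-dimension lengthscales, lags that do not influence the conditional ISI distribution are effectively switched off by large fitted lengthscales.

```python
# Hedged sketch (assumed names/shapes): ARD squared-exponential kernel in JAX.
import jax.numpy as jnp

def ard_se_kernel(x1, x2, lengthscales, variance):
    """ARD squared-exponential kernel.

    x1: (n, d) covariates, e.g. [behavioral covariates, lag-1 ISI, lag-2 ISI, ...]
    x2: (m, d) covariates
    lengthscales: (d,) positive per-dimension lengthscales; a large value means
                  that dimension (e.g. a distant ISI lag) is effectively irrelevant
    variance: scalar kernel variance
    Returns the (n, m) kernel matrix.
    """
    z1 = x1 / lengthscales
    z2 = x2 / lengthscales
    sq_dists = (jnp.sum(z1**2, axis=-1)[:, None]
                + jnp.sum(z2**2, axis=-1)[None, :]
                - 2.0 * z1 @ z2.T)
    return variance * jnp.exp(-0.5 * sq_dists)
```

In such a setup, the relevance of each ISI lag can be read off the fitted inverse lengthscales, which is one common way ARD is used to detect dependencies beyond a fixed renewal order.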
Publisher
Cold Spring Harbor Laboratory