Author:
Aidan Schneider, Mehdi Azabou, Louis McDougall-Vigier, David Parks, Sahara Ensley, Kiran Bhaskaran-Nair, Tom Nowakowski, Eva L. Dyer, Keith B. Hengen
Abstract
Cell type is hypothesized to be a key determinant of the role of a neuron within a circuit. However, it is unknown whether a neuron’s transcriptomic type influences the timing of its activity in the intact brain. In other words, can transcriptomic cell type be extracted from the time series of a neuron’s activity? To address this question, we developed a new deep learning architecture that learns features of interevent intervals across multiple timescales (milliseconds to >30 min). We show that transcriptomic cell class information is robustly embedded in the timing of single neuron activity recorded in the intact brain of behaving animals (calcium imaging and extracellular electrophysiology), as well as in a bio-realistic model of visual cortex. In contrast, we were unable to reliably extract cell identity from summary measures of rate, variance, and interevent interval statistics. We applied our analyses to the question of whether transcriptomic subtypes of excitatory neurons represent functionally distinct classes. In the calcium imaging dataset, which contains a diverse set of excitatory Cre lines, we found that a subset of excitatory cell types are computationally distinguishable based upon their Cre lines, and that excitatory types can be classified with higher accuracy when considering their cortical layer and projection class. Here we address the fundamental question of whether a neuron, within a complex cortical network, embeds a fingerprint of its transcriptomic identity into its activity. Our results reveal robust computational fingerprints for transcriptomic types and classes across diverse contexts, defined over multiple timescales.
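The authors' multi-timescale deep architecture is not reproduced here. As a rough, hypothetical illustration of the problem setup only, the sketch below computes simple inter-spike-interval summaries at several timescales from a spike train and feeds them to an off-the-shelf classifier on toy data. This corresponds to the kind of summary-statistic approach the abstract reports as insufficient, not to the network the authors developed; all function names, timescales, and parameters are illustrative assumptions.

```python
# Hypothetical sketch: interevent-interval summary features at several
# timescales, used to classify synthetic "cell classes". Not the paper's method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score


def multiscale_isi_features(spike_times, window_sizes=(1.0, 10.0, 60.0, 1800.0)):
    """Summarize inter-spike intervals (in seconds) at several timescales."""
    isis = np.diff(np.sort(spike_times))
    features = []
    for w in window_sizes:
        short = isis[isis < w]          # intervals shorter than this timescale
        if short.size == 0:
            features.extend([0.0, 0.0, 0.0])
            continue
        features.extend([
            np.log1p(short.mean()),     # typical interval at this scale
            np.log1p(short.std()),      # spread of intervals at this scale
            short.size / isis.size,     # fraction of intervals below this scale
        ])
    return np.array(features)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, y = [], []
    # Toy data: two synthetic classes with different firing statistics,
    # standing in for neurons with known transcriptomic labels.
    for label, rate_hz in [(0, 2.0), (1, 8.0)]:
        for _ in range(50):
            n_spikes = rng.poisson(rate_hz * 1800)            # ~30 min recording
            spike_times = np.sort(rng.uniform(0, 1800, n_spikes))
            X.append(multiscale_isi_features(spike_times))
            y.append(label)
    X, y = np.vstack(X), np.array(y)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```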
Publisher
Cold Spring Harbor Laboratory
Cited by
1 article.