Abstract
Models of neural architecture and organization are critical for the study of disease, aging, and development. Unfortunately, automating the process of building maps of microarchitectural differences both within and across brains remains a challenge. In this paper, we present a way to build data-driven representations of brain structure using deep learning. With this model we can build meaningful representations of brain structure within an area, learn how different areas are related to one another anatomically, and discover new regions of interest within a sample that share similar anatomical composition. We start by training a deep convolutional neural network to predict which brain area a small image patch comes from, using only that patch's view of its immediate surroundings. By requiring that the network learn to discriminate brain areas from these local views, it learns a rich representation of the underlying anatomical features that distinguish different brain areas. Once we have the trained network, we open up the black box, extract features from its last hidden layer, and then factorize them. After forming a low-dimensional factorization of the network's representations, we find that the learned factors and their embeddings can be used to further resolve biologically meaningful subdivisions within brain regions (e.g., laminar divisions and barrels in somatosensory cortex). These findings speak to the potential use of neural networks to learn meaningful features for modeling neural architecture, and to discover new patterns in brain anatomy directly from images.
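To make the pipeline summarized above concrete, the sketch below trains a small convolutional classifier to predict the brain area of a local image patch, reads out its last hidden layer as a per-patch feature vector, and factorizes those features with non-negative matrix factorization to obtain low-dimensional embeddings. This is a minimal illustration only: the architecture, patch size, layer widths, number of areas, and the choice of NMF are assumptions for the sketch, not the authors' actual configuration.

```python
# Minimal sketch (not the authors' code) of the described pipeline:
# (1) train a CNN to classify which brain area a local patch came from,
# (2) extract the last hidden layer as a feature vector per patch,
# (3) factorize those features (here with NMF) into low-dimensional embeddings.
import torch
import torch.nn as nn
from sklearn.decomposition import NMF

N_AREAS = 6   # hypothetical number of annotated brain areas
PATCH = 64    # hypothetical patch size in pixels

class PatchClassifier(nn.Module):
    def __init__(self, n_areas: int = N_AREAS):
        super().__init__()
        # small convolutional trunk over a single-channel image patch
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128), nn.ReLU(),   # last hidden layer
        )
        self.classifier = nn.Linear(128, n_areas)    # predicts the brain area

    def forward(self, x):
        return self.classifier(self.features(x))

    def hidden(self, x):
        # last-hidden-layer features used downstream instead of the logits
        return self.features(x)

model = PatchClassifier()
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# stand-in data: in practice, patches and area labels come from annotated sections
patches = torch.rand(256, 1, PATCH, PATCH)
labels = torch.randint(0, N_AREAS, (256,))

for _ in range(3):  # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(patches), labels)
    loss.backward()
    opt.step()

# extract last-hidden-layer features and factorize them into a few components
with torch.no_grad():
    feats = model.hidden(patches).numpy()  # ReLU output, so non-negative for NMF
embedding = NMF(n_components=5, init="nndsvda", max_iter=500).fit_transform(feats)
print(embedding.shape)  # (n_patches, 5): per-patch loadings on the learned factors
```

In this sketch, the per-patch loadings could then be mapped back onto the tissue coordinates of each patch to look for spatially coherent subdivisions, which is the role the learned factors play in the abstract.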
Publisher
Cold Spring Harbor Laboratory