Abstract
Bow-tie, or hourglass, architecture is commonly found in biological neural networks. Recently, artificial neural networks with bow-tie architecture have been widely used in various machine-learning applications. However, it remains unclear how bow-tie architecture can form in neural circuits. We address this question by training multi-layer neural network models to perform classification tasks. We demonstrate that, during learning and the accompanying structural changes, non-negative connections amplify error signals and quench neural activity, particularly in the hidden layer, resulting in the emergence of the network's bow-tie architecture. We further show that such an architecture has low wiring cost, is robust to network size, and generalizes to different discrimination tasks. Overall, our work suggests a possible mechanism for the emergence of bow-tie neural architecture and highlights its functional advantages.
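The abstract's central mechanism, training a multi-layer network whose connections are constrained to be non-negative and observing quenched hidden-layer activity, can be illustrated with a minimal sketch. The code below is not the authors' model; the network sizes, toy classification task, optimizer, and the weight-clamping projection used to enforce non-negativity are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the paper's code):
# train a small MLP on a toy 2-class task while projecting all
# connection weights onto the non-negative orthant after each update,
# then count how many hidden units remain active.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy task: label each point by its nearest of two random prototypes.
X = torch.randn(512, 20)
protos = torch.randn(2, 20)
y = torch.cdist(X, protos).argmin(dim=1)

model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),   # hidden layer whose activity may shrink
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    # Non-negativity constraint (assumed projection): clamp weights at zero.
    with torch.no_grad():
        for layer in model:
            if isinstance(layer, nn.Linear):
                layer.weight.clamp_(min=0.0)

# A narrowing set of active hidden units would correspond to the
# bow-tie "waist" discussed in the abstract.
with torch.no_grad():
    h = torch.relu(model[0](X))
    n_active = int((h.mean(dim=0) > 1e-3).sum())
    print(f"active hidden units: {n_active} / {h.shape[1]}")
```

In this sketch the non-negativity constraint is imposed by clamping after each gradient step; the paper may implement the constraint and the structural changes differently.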
Publisher
Cold Spring Harbor Laboratory