Affiliation:
1. Department of Electrical Engineering, University of Edinburgh, EH9 3JL, UK
Abstract
This article introduces the concept of optimally distributed computation in feedforward neural networks via regularization of weight saliency. By constraining the relative importance of the parameters, computation can be distributed thinly and evenly throughout the network. We propose that this will benefit fault-tolerance performance and generalization ability in large network architectures. These theoretical predictions are verified by simulation experiments on two problems, one artificial and one real-world. In summary, the article presents regularization terms for distributing neural computation optimally.
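To make the idea concrete, below is a minimal sketch of how a saliency-equalizing regularizer of this kind might be implemented. It is not the paper's exact formulation: it assumes a first-order saliency proxy per weight, s_i = (w_i * dE/dw_i)^2, and penalizes the variance of the saliencies so that no small subset of weights dominates the computation. The network, the function name train_step, and the hyperparameter lam are all illustrative placeholders.

```python
# Hedged sketch of saliency-variance regularization (PyTorch).
# Assumption: per-weight saliency is approximated first-order as
# s_i = (w_i * dE/dw_i)^2, and the penalty is the variance of all s_i,
# which drives saliencies toward a uniform distribution.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
lam = 1e-3  # regularization strength (assumed hyperparameter)

def train_step(x, y):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    # create_graph=True keeps the graph so the penalty (a function of
    # the gradients) can itself be differentiated during backward().
    params = list(model.parameters())
    grads = torch.autograd.grad(loss, params, create_graph=True)
    # First-order saliency proxy for every weight in the network.
    sal = torch.cat([(g * p).pow(2).flatten()
                     for g, p in zip(grads, params)])
    # Penalize the spread of saliencies: computation is "distributed
    # thinly and evenly" when all saliencies are roughly equal.
    penalty = sal.var()
    total = loss + lam * penalty
    total.backward()
    opt.step()
    return total.item()

# Example usage on toy data:
x, y = torch.randn(32, 8), torch.randn(32, 1)
print(train_step(x, y))
```

Penalizing the variance rather than the magnitude of saliencies is one plausible way to encode "even distribution"; a sum-of-saliencies penalty would instead shrink all saliencies and behave more like conventional weight decay.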
Subject
Cognitive Neuroscience, Arts and Humanities (miscellaneous)
Cited by
16 articles.