Affiliation:
1. National Laboratory for Novel Software Technology, Nanjing University, Nanjing 210093, China
Abstract
Neural network ensemble is a learning paradigm where a collection of neural networks is trained for the same task. In this paper, the relationship between the generalization ability of a neural network ensemble and the correlation of the individual networks constituting it is analyzed in the context of combining neural regression estimators. The analysis reveals that ensembling a selective subset of the trained networks can be superior to ensembling all of them. Based on this finding, an approach named GASEN is proposed. GASEN first trains a number of individual neural networks. It then assigns random weights to the individual networks and employs a genetic algorithm to evolve those weights so that they characterize, to some extent, the importance of each network in constituting an ensemble. Finally, it selects an optimum subset of individual networks based on the evolved weights to make up the ensemble. Experimental results show that, compared with a popular ensemble approach, i.e., averaging all the networks, and with a theoretically optimum selective ensemble approach, i.e., enumerating all subsets, GASEN generates ensembles with strong generalization ability at relatively small computational cost. This paper also analyzes the working mechanism of GASEN from the view of the error-ambiguity decomposition, which reveals that GASEN improves generalization ability mainly by reducing the average generalization error of the individual networks constituting the ensemble.
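The selection procedure described in the abstract (evolve per-network weights with a genetic algorithm, then keep the networks whose evolved weight exceeds a threshold) can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the GA operators (truncation selection, uniform crossover, Gaussian mutation), the population size, and the function name `gasen_select` are all assumptions. The networks are represented only by their validation-set predictions, and fitness is the negative mean squared error of the weighted ensemble.

```python
import numpy as np

def gasen_select(preds, y, pop_size=30, generations=40, threshold=None, rng=None):
    """GASEN-style selective ensemble sketch (hypothetical helper, not
    the paper's code). Evolves a weight per trained network with a simple
    genetic algorithm, then keeps networks whose normalized weight exceeds
    a threshold (the average weight 1/N by default, as a common choice).

    preds: (n_networks, n_samples) validation predictions of each network.
    y:     (n_samples,) validation targets.
    Returns the indices of the selected networks.
    """
    rng = np.random.default_rng(rng)
    n = preds.shape[0]
    threshold = 1.0 / n if threshold is None else threshold

    def fitness(w):
        w = w / w.sum()                      # normalize to a convex combination
        ens = w @ preds                      # weighted ensemble prediction
        return -np.mean((ens - y) ** 2)      # negative MSE: higher is better

    pop = rng.random((pop_size, n))          # random initial weight vectors
    for _ in range(generations):
        scores = np.array([fitness(w) for w in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]          # truncation selection
        kids = []
        while len(kids) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(n) < 0.5                 # uniform crossover
            child = np.where(mask, a, b) + rng.normal(0, 0.05, n)
            kids.append(np.clip(child, 1e-6, None))    # keep weights positive
        pop = np.vstack([parents, kids])

    best = pop[np.argmax([fitness(w) for w in pop])]
    best = best / best.sum()
    return np.where(best > threshold)[0]     # members of the selective ensemble
```

After selection, the final ensemble output would be the simple average of the selected networks' predictions, matching the averaging scheme the abstract compares against.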
Publisher
World Scientific Pub Co Pte Lt
Subject
Computer Science Applications,Theoretical Computer Science,Software
Cited by
16 articles.