Affiliation:
1. School of Computer Science and Engineering, The University of Aizu, Aizu-Wakamatsu, Fukushima 965-8580, Japan
Abstract
Ensemble learning systems can lower the risk of overfitting that often arises in a single learning model. Unlike ensemble learning approaches based on re-sampling, negative correlation learning trains all learners in an ensemble simultaneously and cooperatively. However, overfitting has sometimes been observed in negative correlation learning. Two error bounds are therefore introduced into negative correlation learning to prevent overfitting. One is the upper bound of error output (UBEO), which divides the training data into two groups based on the distances between the data points and the formed decision boundary. The other is the lower bound of error rate (LBER), which acts as a learning switch. While the performance measured by the error rate is above LBER, negative correlation learning is applied to the whole training set. As soon as the performance falls below LBER, negative correlation learning is applied only to the group of data whose distances to the current decision boundary are within the range of UBEO. The other group of data, outside this range, is no longer learned: further learning on the data points in the latter group would make the learned decision boundary too complex to classify unseen data well. Experimental results explore how LBER and UBEO lead negative correlation learning towards a robust decision boundary.
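The training procedure described in the abstract can be sketched as follows. This is a minimal illustrative reading of the LBER/UBEO switch, not the authors' implementation: the toy data, linear learners, penalty strength `lam`, and the concrete values of `LBER` and `UBEO` are all assumptions, and the negative-correlation penalty uses the standard NCL gradient `(F_i - y) - lambda * (F_i - F_bar)`.

```python
# Hypothetical sketch of negative correlation learning (NCL) with the
# LBER/UBEO switch from the abstract. Model, data, and bound values are
# illustrative assumptions, not the paper's actual setup.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: targets in {-1, +1}, linearly separable.
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)

M, lam, lr = 3, 0.5, 0.05          # ensemble size, NCL penalty, step size
LBER, UBEO = 0.05, 1.5             # assumed values for the two bounds
W = rng.normal(scale=0.1, size=(M, 2))  # one linear learner per row

for epoch in range(100):
    F = X @ W.T                     # (n, M) individual learner outputs
    Fbar = F.mean(axis=1)           # ensemble output
    err_rate = np.mean(np.sign(Fbar) != y)
    # LBER switch: once the training error rate falls below LBER, keep
    # only samples whose error output |Fbar - y| stays within UBEO.
    if err_rate < LBER:
        mask = np.abs(Fbar - y) <= UBEO
    else:
        mask = np.ones(len(y), dtype=bool)
    if not mask.any():
        break
    Xs, ys, Fs, Fbs = X[mask], y[mask], F[mask], Fbar[mask]
    for i in range(M):
        # Standard NCL gradient for learner i: (F_i - y) - lam * (F_i - Fbar)
        g = (Fs[:, i] - ys) - lam * (Fs[:, i] - Fbs)
        W[i] -= lr * (g[:, None] * Xs).mean(axis=0)

final_err = np.mean(np.sign((X @ W.T).mean(axis=1)) != y)
```

In this reading, UBEO acts as a filter on the error output of the ensemble (a proxy for distance to the decision boundary), and LBER decides when that filter is switched on.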
Funder
Kakenhi Grant by the Japan Society for the Promotion of Science
Publisher
World Scientific Pub Co Pte Ltd
Subject
Artificial Intelligence, Computer Vision and Pattern Recognition, Software
Cited by
5 articles.