Affiliation:
1. College of Sciences, Shanghai University, Shanghai, China
2. School of Communication and Information Engineering, Shanghai University, Shanghai, China
Abstract
In pattern recognition, most classification models are solved iteratively; exceptions include linear discriminant analysis (LDA), kernel LDA (KLDA), and extreme learning machines (ELM). In this paper, a nonlinear classification network model based on predefined evenly-distributed class centroids (PEDCC) is proposed. Its analytical solution can be obtained, and it has good interpretability. By exploiting the PEDCC property of maximizing the inter-class distance, together with a derivative weighted minimum mean square error loss function that minimizes the intra-class distance, we not only realize effective nonlinearity in the network but also obtain the analytical solution of the network weights. The samples are then classified based on GDA. To further improve classification performance, PCA is used to reduce the dimensionality of the original samples, and the CReLU activation function is adopted to enhance the expressive ability of the features. The network transforms the samples into a higher-dimensional feature space through the weighted minimum mean square error, so as to find a better separating hyperplane. In experiments, the feasibility of the network structure is verified with pure linear 𝑾, 𝑾+Tanh, and PCA+𝑾+Tanh on many small and large data sets, and the model is compared with SVM and ELM in terms of training speed and recognition rate. The results show that, in general, the model has advantages on small data sets in both recognition accuracy and training speed, and an advantage in training speed on large data sets. Finally, by introducing a multi-stage network structure based on the latent feature norm, the classifier network further improves classification performance significantly: the recognition rate on small data sets is effectively improved and is much higher than that of existing methods, while the recognition rate on large data sets is similar to that of SVM.
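The core idea summarized above (map features onto predefined evenly-distributed class centroids with a closed-form weight, then classify by proximity to the centroids) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the simplex construction, the plain ridge-regression solve (the paper uses a *weighted* minimum mean square error), the regularization constant `lam`, and the nearest-centroid decision rule standing in for GDA are all assumptions introduced here for illustration.

```python
# Hypothetical sketch of a PEDCC-style analytic classifier.
# (1) Build evenly-distributed class centroids as vertices of a regular
#     simplex on the unit hypersphere (one simple way to space k classes
#     with equal pairwise distances).
# (2) Solve the closed-form (ridge-regression) weight W minimizing
#     ||XW - T||^2 + lam*||W||^2, where row i of T is the centroid of
#     sample i's class.
# (3) Classify a sample by the nearest centroid in the latent space.
import numpy as np

def simplex_centroids(k):
    """k unit vectors in R^k with equal pairwise distances (regular simplex)."""
    c = np.eye(k) - np.full((k, k), 1.0 / k)   # center the one-hot vertices
    return c / np.linalg.norm(c, axis=1, keepdims=True)

def fit_analytic(X, y, k, lam=1e-3):
    """Closed-form weights; lam is an illustrative regularization constant."""
    T = simplex_centroids(k)[y]                 # target centroid per sample
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ T)

def predict(X, W, k):
    C = simplex_centroids(k)
    Z = X @ W                                   # latent features
    return np.argmin(((Z[:, None, :] - C[None]) ** 2).sum(-1), axis=1)

# Toy usage: two well-separated Gaussian blobs, k = 2 classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 5)), rng.normal(2, 1, (50, 5))])
y = np.repeat([0, 1], 50)
W = fit_analytic(X, y, k=2)
acc = (predict(X, W, 2) == y).mean()
```

Because the weights come from a single linear solve rather than iterative optimization, training cost is one `d×d` system per model, which is consistent with the training-speed advantage the abstract reports.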