Affiliation:
1. School of Information Science and Engineering, Dalian Polytechnic University, Dalian 116034, China
2. Department of Basic Courses Teaching, Dalian Polytechnic University, Dalian 116034, China
3. School of International Education, Dalian Polytechnic University, Dalian 116034, China
Abstract
As a universal approximator, the first-order Takagi–Sugeno fuzzy system can approximate a wide range of nonlinear systems through a group of IF–THEN fuzzy rules. Although group lasso regularization has the advantage of inducing group sparsity and handling variable selection, applying it directly during training can cause numerical oscillations and poses a theoretical difficulty in computing the gradient at the origin. This paper addresses these obstacles by invoking a smoothing function to approximate the group lasso regularizer. On this basis, a gradient-based neuro-fuzzy learning algorithm with smoothing group lasso regularization for the first-order Takagi–Sugeno fuzzy system is proposed. The convergence of the proposed algorithm is rigorously proved under mild conditions. In addition, experimental results on two approximation and two classification simulations demonstrate that the proposed algorithm outperforms the algorithms with the original group lasso regularization and with L2 regularization in terms of error, pruned neurons, and accuracy; the gain in pruned neurons is particularly pronounced owing to group sparsity. Compared with the algorithm with L2 regularization, the proposed algorithm exhibits improvements of 6.3, 5.3, and 142.6 in pruned neurons in the sin function, Gabor function, and Sonar benchmark dataset simulations, respectively.
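The abstract does not specify the smoothing function used. As a purely illustrative sketch, one common way to smooth the group lasso penalty $\lambda \sum_{g} \|w_g\|_2$ (which is non-differentiable at $w_g = 0$) is to replace each group norm with a smooth surrogate controlled by a small constant $\mu > 0$:
\[
  R_{\mu}(W) \;=\; \lambda \sum_{g=1}^{G} \sqrt{\|w_g\|_2^{2} + \mu^{2}},
  \qquad
  \nabla_{w_g} R_{\mu}(W) \;=\; \lambda\,\frac{w_g}{\sqrt{\|w_g\|_2^{2} + \mu^{2}}},
\]
so the gradient is well defined everywhere and the penalty recovers the original group lasso as $\mu \to 0$. Whether the paper adopts this particular surrogate or another smoothing construction (e.g., a piecewise-polynomial approximation near the origin) is not stated in the abstract.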
Funder
National Natural Science Foundation of China
National Key Research and Development Program of China
Subject
Multidisciplinary, Modeling and Simulation, Numerical Analysis, Statistics and Probability