Authors:
Pandya K., Dabhi D., Mochi P., Rajput V.
Abstract
An Artificial Neural Network (ANN) is one of the most powerful tools for predicting the behavior of a system on unseen data. The feedforward neural network is the simplest, yet most efficient, topology and is widely used in industry. Training feedforward ANNs is an integral part of any ANN-based system. Typically, an ANN has inherent non-linearity and multiple parameters, such as weights and biases, that must be optimized simultaneously. To solve this complex optimization problem, this paper proposes the Levy Enhanced Cross Entropy (LE-CE) method, a population-based meta-heuristic. Unlike traditional meta-heuristic methods, which update individual candidate solutions, in each iteration LE-CE maintains a "distribution" of prospective solutions and obtains the optimal solution by updating the parameters of that distribution. As a result, it reduces the chance of becoming trapped in local minima, a typical drawback of AI methods. To further improve the global exploration capability of the CE method, it is subjected to a Lévy flight, which introduces large step lengths during intermediate iterations. The performance of the LE-CE method is compared with state-of-the-art optimization methods, and the results show the superiority of LE-CE. A statistical ANOVA test confirms that the proposed LE-CE is statistically superior to the other algorithms.
Publisher
Engineering, Technology & Applied Science Research