Abstract
The architecture of an artificial neural network affects its training performance. For a more consistent performance evaluation of training algorithms, hard-to-train benchmark architectures should be used. This study introduces a benchmark neural network architecture, called the pipe-like architecture, and presents training performance analyses for popular Neural Network Backpropagation Algorithms (NNBA) and well-known Metaheuristic Search Algorithms (MSA). A pipe-like architecture essentially resembles an elongated section of a deep neural network and forms a long, narrow bottleneck for the learning process. It can therefore significantly complicate training by causing vanishing gradient problems and large delays in the backward propagation of parameter updates through the elongated network. The training difficulties of pipe-like architectures are demonstrated theoretically in this study by considering the upper bound of weight updates under an aggregated one-neuron learning channels conjecture. These analyses also contribute a practical perspective to Baldi et al.'s learning channel theorem for neural networks. Training experiments with popular NNBA and MSA methods were conducted on the pipe-like benchmark architecture using a biological dataset. Moreover, Normalized Overall Performance Scoring (NOPS) was performed for a criterion-based assessment of the overall performance of the training algorithms.
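The abstract does not specify the depth, width, or activation functions of the pipe-like benchmark, so the snippet below is only a minimal PyTorch sketch of the general idea: a long stack of very narrow hidden layers forming the bottleneck described above. The pipe_depth and pipe_width values and the sigmoid activations are illustrative assumptions, not the configuration used in the study.

# Minimal, illustrative sketch of a "pipe-like" network: many narrow hidden
# layers in series form a long, thin bottleneck. Depth, width, and activation
# choices here are assumptions for illustration, not values from the paper.
import torch
import torch.nn as nn

def make_pipe_like_net(n_inputs: int, n_outputs: int,
                       pipe_depth: int = 20, pipe_width: int = 2) -> nn.Sequential:
    """Stack pipe_depth hidden layers of only pipe_width neurons each."""
    layers = [nn.Linear(n_inputs, pipe_width), nn.Sigmoid()]
    for _ in range(pipe_depth - 1):
        layers += [nn.Linear(pipe_width, pipe_width), nn.Sigmoid()]
    layers += [nn.Linear(pipe_width, n_outputs)]
    return nn.Sequential(*layers)

if __name__ == "__main__":
    net = make_pipe_like_net(n_inputs=8, n_outputs=1)
    x = torch.randn(4, 8)      # small dummy batch
    print(net(x).shape)        # -> torch.Size([4, 1])

Sigmoid activations are used in this sketch because their small derivatives make gradient attenuation along the long pipe easy to observe; any gradient-based trainer, or a metaheuristic search over the same parameter vector, could then be benchmarked against such a network.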
Publisher
Mühendislik Bilimleri ve Tasarım Dergisi