Affiliation:
1. The University of Tokyo
Abstract
This research significantly advances Component-Wise Natural Gradient Descent (CW-NGD), a network training method that enables efficient parameter updates by approximating the curvature of the loss through the Fisher Information Matrix. By integrating an exponential moving average and selecting appropriate hyperparameters based on a comprehensive analysis, we achieve significant improvements in CW-NGD's performance. In particular, we extend CW-NGD to operate across multiple GPUs, bypassing the memory constraints that arise with large-scale models. These improvements enable CW-NGD to attain state-of-the-art accuracy on deep networks, which prior work could not achieve. In an extensive comparison across four diverse datasets and models, CW-NGD attains comparable or superior accuracy while outperforming all other established network training methods, including Adam, Stochastic Gradient Descent, and Kronecker-factored Approximate Curvature, in terms of convergence speed and stability. This study establishes CW-NGD as a robust and versatile network training technique, showcasing its adaptability and potential applications across diverse domains.
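To make the idea concrete, the sketch below shows one block-wise natural-gradient step in which each parameter block's Fisher estimate is smoothed with an exponential moving average before preconditioning the gradient. This is a minimal illustration of the general technique named in the abstract, not the paper's implementation: the function name, the rank-1 empirical-Fisher surrogate, and all hyperparameter values (learning rate, EMA decay, damping) are assumptions made for illustration.

```python
# Illustrative sketch only: a block-wise natural-gradient update with an
# EMA-smoothed Fisher estimate. The paper's exact CW-NGD estimator,
# block partitioning, and hyperparameters may differ.
import numpy as np

def blockwise_ngd_step(params, grads, fisher_ema,
                       lr=1e-2, ema_decay=0.95, damping=1e-3):
    """One natural-gradient step over per-block parameter vectors.

    params, grads : lists of 1-D per-block parameter/gradient arrays
    fisher_ema    : list of per-block running Fisher estimates (square matrices),
                    updated in place
    """
    new_params = []
    for w, g, F in zip(params, grads, fisher_ema):
        # Rank-1 empirical-Fisher surrogate for this block from the current gradient.
        F_batch = np.outer(g, g)
        # Exponential moving average of the curvature estimate.
        F[...] = ema_decay * F + (1.0 - ema_decay) * F_batch
        # Damped solve keeps the block well-conditioned; this preconditions the gradient.
        precond_grad = np.linalg.solve(F + damping * np.eye(F.shape[0]), g)
        new_params.append(w - lr * precond_grad)
    return new_params
```

Keeping the Fisher approximation block-wise is what makes the per-block solve tractable: each block's curvature matrix stays small relative to a full-network Fisher matrix.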
Publisher
Research Square Platform LLC