Affiliation:
1. Brigham Young University, Computer Science Department, 3361 TMCB, P.O. Box 26576, Provo, Utah 84602-6576, USA
Abstract
Backpropagation, which is frequently used in neural network training, often takes a great deal of time to converge on an acceptable solution. Momentum is a standard technique used to speed up convergence and maintain generalization performance. In this paper we present the Windowed Momentum algorithm, which provides greater speedup than Standard Momentum. Windowed Momentum is designed to use a fixed-width history of recent weight updates for each connection in a neural network. By using this additional information, Windowed Momentum gives significant speedup over a set of applications with the same or improved accuracy. Windowed Momentum achieved an average speedup of 32% in convergence time on 15 data sets, including a large OCR data set with over 500,000 samples. We also examine the effects of sample presentation order, and show that Windowed Momentum overcomes the adverse effects of poor presentation order while maintaining its speedup advantages.
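The abstract describes the core mechanism only at a high level: each connection keeps a fixed-width history of its recent weight updates, which is folded into the current update. Below is a minimal sketch of that idea, assuming the momentum term is a scaled mean over the window; the paper's exact formulation may differ, and the class name, window_size, lr, and alpha are illustrative, not taken from the paper.

```python
import numpy as np
from collections import deque


class WindowedMomentumUpdater:
    """Hypothetical sketch of a windowed-momentum update rule.

    Where standard momentum reuses only the single previous update,
    this keeps a fixed-width window of recent updates per weight and
    adds their scaled mean to the current gradient step.
    """

    def __init__(self, shape, window_size=5, lr=0.1, alpha=0.9):
        self.window = deque(maxlen=window_size)  # fixed-width history of updates
        self.shape = shape
        self.lr = lr        # learning rate (illustrative value)
        self.alpha = alpha  # momentum strength (illustrative value)

    def step(self, weights, grad):
        # Momentum term: scaled mean of the stored recent updates
        # (zero until the first update has been recorded).
        if self.window:
            momentum = self.alpha * np.mean(self.window, axis=0)
        else:
            momentum = np.zeros(self.shape)
        update = -self.lr * grad + momentum
        self.window.append(update)  # record this update in the history
        return weights + update


# Example usage on a single weight vector:
opt = WindowedMomentumUpdater(shape=(3,))
w = np.zeros(3)
w = opt.step(w, np.array([1.0, -2.0, 0.5]))
```

Using a deque with maxlen makes the history maintenance O(1) per step: the oldest update is discarded automatically once the window is full.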
Publisher
World Scientific Pub Co Pte Ltd
Subject
Computer Networks and Communications, General Medicine
Cited by
32 articles.