Abstract
In this paper, a stochastic quasi-Newton algorithm for nonconvex stochastic optimization is presented. The method is derived from a classical modified BFGS formula, and its update can be extended to a limited-memory scheme. Numerical experiments on machine learning problems show that the proposed algorithm is promising.
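A minimal sketch in Python/NumPy may help fix ideas. The damping rule below is a Li-Fukushima-style modification of y, a common safeguard that keeps BFGS curvature positive on nonconvex problems; the paper's specific modified BFGS formula may differ, and all names here (stochastic_damped_bfgs, grad_fn, batches) are illustrative assumptions rather than the authors' code.

import numpy as np

def stochastic_damped_bfgs(grad_fn, theta0, batches, lr=0.1):
    """One pass of a stochastic BFGS iteration (illustrative sketch).

    grad_fn(theta, batch) -> stochastic gradient (assumed interface).
    """
    theta = theta0.copy()
    n = theta.size
    H = np.eye(n)                        # inverse-Hessian approximation
    g = grad_fn(theta, batches[0])
    for batch in batches[1:]:
        step = -lr * (H @ g)             # quasi-Newton direction, fixed step size
        theta_new = theta + step
        g_new = grad_fn(theta_new, batch)

        s = theta_new - theta            # parameter displacement
        y = g_new - g                    # gradient displacement
        # Modified-BFGS safeguard (assumed form): shift y so the curvature
        # s^T y stays positive, keeping H positive definite on nonconvex losses.
        t = max(0.0, -(s @ y) / (s @ s)) + 1e-4
        y = y + t * s

        rho = 1.0 / (s @ y)              # standard inverse-BFGS update
        V = np.eye(n) - rho * np.outer(s, y)
        H = V @ H @ V.T + rho * np.outer(s, s)
        theta, g = theta_new, g_new
    return theta

# Toy usage: noisy linear least squares with mini-batches.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
b = A @ np.ones(5) + 0.1 * rng.standard_normal(100)
grad = lambda theta, idx: A[idx].T @ (A[idx] @ theta - b[idx]) / len(idx)
batches = [rng.choice(100, size=20, replace=False) for _ in range(200)]
theta = stochastic_damped_bfgs(grad, np.zeros(5), batches)

In a limited-memory variant, the dense matrix H would be replaced by a two-loop recursion over the most recent m pairs (s, y), which is what makes the scheme practical in high dimensions.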
Subject
Physics and Astronomy (miscellaneous), General Mathematics, Chemistry (miscellaneous), Computer Science (miscellaneous)