Abstract
A new algorithm for unconstrained optimization, called CUBIC, based on cubic regularization in a two-dimensional subspace is developed. Different strategies for choosing the search direction are also discussed. The stepsize is computed by means of the weak Wolfe line search. Under classical assumptions the algorithm is proved to be convergent. Intensive numerical experiments on 800 unconstrained optimization test functions, with the number of variables in the range [1,000, 10,000], show that the suggested algorithm is more efficient and more robust than the well-established conjugate gradient algorithms CG-DESCENT, CONMIN and L-BFGS (m = 5). Comparisons of the suggested algorithm versus CG-DESCENT for solving five applications from the MINPACK-2 collection, each with 40,000 variables, show that CUBIC is 3.35 times faster than CG-DESCENT.
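The weak Wolfe line search mentioned in the abstract accepts a stepsize a along a descent direction d when both the sufficient-decrease condition f(x + a d) <= f(x) + c1 a g(x)'d and the weak curvature condition g(x + a d)'d >= c2 g(x)'d hold. A minimal bisection-based sketch of such a search is shown below; this is an illustrative helper under standard parameter choices (c1 = 1e-4, c2 = 0.9), not the paper's implementation, and the function and variable names are hypothetical.

```python
# Sketch of a weak Wolfe line search via bisection (illustrative, not the
# paper's code). Conditions checked at stepsize a:
#   f(x + a*d) <= f(x) + c1*a*g(x).d    (sufficient decrease)
#   g(x + a*d).d >= c2*g(x).d           (weak curvature)

def weak_wolfe(f, grad, x, d, c1=1e-4, c2=0.9, max_iter=50):
    """Return a stepsize satisfying the weak Wolfe conditions."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    f0, g0d = f(x), dot(grad(x), d)     # values at the current point
    lo, hi, a = 0.0, float('inf'), 1.0  # bracketing interval and trial step
    for _ in range(max_iter):
        xa = [xi + a * di for xi, di in zip(x, d)]
        if f(xa) > f0 + c1 * a * g0d:    # decrease fails: step too long
            hi = a
        elif dot(grad(xa), d) < c2 * g0d:  # curvature fails: step too short
            lo = a
        else:
            return a                     # both conditions hold
        a = (lo + hi) / 2 if hi < float('inf') else 2 * lo
    return a

# Example: steepest descent direction on f(x) = x1^2 + 2*x2^2.
f = lambda x: x[0] ** 2 + 2 * x[1] ** 2
grad = lambda x: [2 * x[0], 4 * x[1]]
x0 = [1.0, 1.0]
d = [-g for g in grad(x0)]
a = weak_wolfe(f, grad, x0, d)
```

In CUBIC this stepsize would multiply the search direction produced by the cubic-regularized subspace model; only the acceptance test above is sketched here.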
Publisher
Academia Oamenilor de Stiinta din Romania