Authors:
Tansri Kanjanaporn, Chansangiam Pattrawut
Abstract
<abstract><p>Consider a linear system $ Ax = b $ in which the coefficient matrix $ A $ is rectangular and of full column rank. We propose an iterative algorithm for solving this linear system, based on a gradient-descent optimization technique, that produces a sequence of increasingly accurate approximate least-squares solutions. We treat least-squares solutions in full generality: every related error is measured in an arbitrary vector norm induced by a positive definite weight matrix $ W $. When the system has a unique solution, the algorithm produces approximate solutions converging to that solution; when the system is inconsistent, the sequence of residual norms converges to the weighted least-squares error. The usual least-squares solution is recovered when $ W = I $. Numerical experiments validate the capability of the algorithm and show that it outperforms recent gradient-based iterative algorithms in both iteration count and computational time.</p></abstract>
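The weighted least-squares setting described in the abstract can be illustrated with a minimal sketch of steepest descent with exact line search on $ f(x) = (Ax-b)^{T} W (Ax-b) $. This is a generic gradient-descent iteration under the stated assumptions (full column rank $ A $, symmetric positive definite $ W $), not the authors' exact algorithm; the function name and stopping rule are hypothetical.

```python
import numpy as np

def weighted_lsq_gd(A, b, W, tol=1e-10, max_iter=10_000):
    """Steepest descent with exact line search for min_x (Ax-b)^T W (Ax-b).

    A: (m, n) with full column rank; W: (m, m) symmetric positive definite.
    Illustrative sketch only, not the paper's proposed iteration.
    """
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = A @ x - b                  # residual
        g = 2.0 * A.T @ (W @ r)        # gradient of the weighted squared error
        if np.linalg.norm(g) < tol:    # stationary point reached
            break
        Ag = A @ g
        t = (g @ g) / (2.0 * Ag @ (W @ Ag))  # exact line-search step size
        x -= t * g
    return x
```

With $ W = I $ this reduces to ordinary least squares; for a consistent full-column-rank system the iterates converge to the unique solution, and for an inconsistent system the limit matches the closed-form $ (A^{T} W A)^{-1} A^{T} W b $.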
Publisher
American Institute of Mathematical Sciences (AIMS)