Affiliation:
1. Carnegie Mellon University, Pittsburgh, PA, USA
2. Georgia Institute of Technology, Atlanta, GA, USA
Abstract
Can linear systems be solved faster than matrix multiplication? While there has been remarkable progress for the special cases of graph-structured linear systems, in the general setting, the bit complexity of solving an $n \times n$ linear system $Ax = b$ is $\tilde{O}(n^{\omega})$, where $\omega$ is the matrix multiplication exponent. Improving on this has been an open problem even for sparse linear systems with $\mathrm{poly}(n)$ condition number.
In this paper, we present an algorithm that solves linear systems with sparse coefficient matrices asymptotically faster than matrix multiplication for any $\omega > 2$. This speedup holds for any input matrix $A$ with $o(n^{\omega-1}/\log \kappa(A))$ non-zeros, where $\kappa(A)$ is the condition number of $A$.
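As a concrete reading of this threshold (assuming the commonly cited bound $\omega \approx 2.372$ and a $\mathrm{poly}(n)$ condition number, neither of which is fixed by the abstract itself):

```latex
% Sparsity regime under the assumptions above:
\mathrm{nnz}(A) \;=\; o\!\left(\frac{n^{\omega-1}}{\log \kappa(A)}\right)
             \;=\; o\!\left(\frac{n^{1.372}}{\log n}\right)
```

That is, any matrix with noticeably fewer than $n^{1.372}$ non-zeros, e.g. $O(n^{1.3})$, falls in the regime where the algorithm is faster than matrix multiplication.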
Our algorithm can be viewed as an efficient, randomized implementation of the block Krylov method via recursive low displacement rank factorization. It is inspired by an algorithm of Eberly et al. for inverting matrices over finite fields. In our analysis of numerical stability, we develop matrix anti-concentration techniques to bound the smallest eigenvalue and the smallest gap in the eigenvalues of semi-random matrices.
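For orientation, the following is a minimal, illustrative sketch of a block Krylov linear solve in the spirit the abstract describes. It is not the paper's algorithm: the recursive low displacement rank factorization and the stability machinery are omitted, and the helper name `block_krylov_solve` is our own. It assumes NumPy/SciPy.

```python
# Illustrative block Krylov solve: build K = [B, AB, ..., A^{m-1}B] from a
# random n x s block B, then solve the projected least-squares problem.
import numpy as np
from scipy import sparse

def block_krylov_solve(A, b, s=8, rng=None):
    """Solve Ax = b via an orthonormalized block Krylov basis."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    m = -(-n // s)  # enough blocks for the basis to span R^n generically
    V, _ = np.linalg.qr(rng.standard_normal((n, s)))
    blocks = [V]
    for _ in range(m - 1):
        W = A @ blocks[-1]              # next Krylov block A^k B
        for U in blocks:                # re-orthogonalize against earlier blocks
            W = W - U @ (U.T @ W)
        V, _ = np.linalg.qr(W)
        blocks.append(V)
    K = np.hstack(blocks)               # n x (m*s) Krylov basis
    # Project: solve min_y ||A K y - b||, then recover x = K y.
    y, *_ = np.linalg.lstsq(A @ K, b, rcond=None)
    return K @ y

# Usage on a random, well-conditioned sparse system.
n = 200
A = (sparse.random(n, n, density=0.05, random_state=0) + 10 * sparse.eye(n)).tocsr()
x_true = np.ones(n)
b = A @ x_true
x = block_krylov_solve(A, b)
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # small relative error
```

The explicit re-orthogonalization here stands in for the much more delicate stability analysis the abstract alludes to: in the naive power basis $[B, AB, A^2B, \dots]$ the columns become numerically dependent almost immediately, which is exactly the kind of degeneracy the paper's anti-concentration bounds rule out for semi-random matrices.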
Publisher
Association for Computing Machinery (ACM)