In this thesis we consider error estimates for a family of iterative algorithms for solving the least-squares problem \min_x \|Ax-b\|_2 based on the Golub-Kahan-Lanczos bidiagonalization process. Given a lower bound on the smallest singular value of A, we show how to compute upper bounds on the Euclidean error \|x_k-x_*\|_2, where x_k is the kth iterate and x_* the exact solution, as well as on the backward error. In the case of the Euclidean error, we demonstrate that our bound is sharp with respect to the information we use about the matrix A.
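All of the algorithms considered here are driven by this bidiagonalization. As a point of reference, the following is a minimal NumPy sketch of the standard Golub-Kahan-Lanczos recurrences, without the reorthogonalization or breakdown handling a practical implementation would need; the function name and interface are illustrative, not code from the thesis.

import numpy as np

def golub_kahan(A, b, k):
    # k steps of Golub-Kahan-Lanczos bidiagonalization.
    # Returns U (m x (k+1)), V (n x k), and the lower-bidiagonal
    # B ((k+1) x k) satisfying A @ V = U @ B, the factorization
    # underlying LSQR, LSMR, and Craig's method.
    # Illustrative sketch: assumes no alpha or beta vanishes (no breakdown).
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    alphas, betas = np.zeros(k), np.zeros(k)

    U[:, 0] = b / np.linalg.norm(b)          # beta_1 u_1 = b
    v = A.T @ U[:, 0]                        # alpha_1 v_1 = A^T u_1
    for j in range(k):
        alphas[j] = np.linalg.norm(v)
        V[:, j] = v / alphas[j]
        u = A @ V[:, j] - alphas[j] * U[:, j]    # beta_{j+1} u_{j+1}
        betas[j] = np.linalg.norm(u)
        U[:, j + 1] = u / betas[j]
        v = A.T @ U[:, j + 1] - betas[j] * V[:, j]  # alpha_{j+1} v_{j+1}

    B = np.zeros((k + 1, k))
    for j in range(k):
        B[j, j] = alphas[j]        # diagonal: alpha_1, ..., alpha_k
        B[j + 1, j] = betas[j]     # subdiagonal: beta_2, ..., beta_{k+1}
    return U, V, B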
We also present two new algorithms designed to minimize our error estimates. The first, LS-Craig, is constructed to minimize our estimate of \|x_k-x_*\|_2 at every iteration. The second, LSMB, minimizes an objective function closely related to our backward error estimate. We find that at each step the iterate x_k produced by LS-Craig is a convex combination of the iterates produced by LSQR (which minimizes \|r_k\|_2 = \|b-Ax_k\|_2 over a Krylov subspace) and Craig's method. Each iterate produced by LSMB, in turn, is a convex combination of those produced by LSQR and LSMR (which minimizes \|A^T r_k\|_2 over the same subspace). Experiments on test cases from the University of Florida Sparse Matrix Collection show that in practice LS-Craig and LSMB perform at least as well as the existing algorithms, though they never outperform them by more than a small margin. This suggests that LS-Craig could replace LSQR when stopping rules are based on the Euclidean error, and that LSMB could replace both LSQR and LSMR when stopping rules are based on the backward error.
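In symbols, the structural result is (with combination weights \theta_k and \phi_k, notation introduced here for exposition; explicit formulas for the weights are derived in the thesis)

x_k^{LS-Craig} = (1 - \theta_k)\, x_k^{LSQR} + \theta_k\, x_k^{Craig}, \qquad \theta_k \in [0,1],
x_k^{LSMB} = (1 - \phi_k)\, x_k^{LSQR} + \phi_k\, x_k^{LSMR}, \qquad \phi_k \in [0,1].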
Finally, we extend the definition of the backward error to the case \min_X \|AX-B\|_F, where B may have any number of columns. We derive a formula for this backward error and propose two estimates that experiments suggest are highly reliable. We prove that the estimates are reliable in a few special cases, although the general problem remains open.
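In the spirit of the single-column case, the backward error of an approximate solution X can be defined as

\mu(X) = \min \{ \|E\|_F : X \in \arg\min_Y \|(A+E)Y - B\|_F \},

where the perturbation is restricted to A; the symbol \mu and the choice of perturbing A alone are chosen here for exposition.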