Lenka Čížková, Pavel Čížek
Many methods of computational statistics lead to matrix-algebra or numerical-mathematics problems. For example, the least squares method in linear regression reduces to solving a system of linear equations, see Chap. III.8. The principal components method is based on finding eigenvalues and eigenvectors of a matrix, see Chap. III.6. Nonlinear optimization methods such as Newton's method often employ the inversion of a Hessian matrix. In all these cases, we need numerical linear algebra.
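As a small illustration of the first example above (a sketch in NumPy, not code from this chapter): the least squares estimate in linear regression can be obtained by solving the normal equations, a system of linear equations, and a decomposition-based solver gives the same answer more stably.

```python
import numpy as np

# Least squares in linear regression reduces to solving the system of
# linear equations (X^T X) beta = X^T y (the normal equations).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])  # intercept + one regressor
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + 0.1 * rng.normal(size=50)

# Solve the normal equations directly ...
beta_ne = np.linalg.solve(X.T @ X, X.T @ y)

# ... or use a numerically more stable, decomposition-based solver.
beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(beta_ne, beta_ls))
```

Forming X^T X explicitly squares the condition number of the problem, which is why the decompositions of Sect. 4.1 are preferred in practice.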
Usually, one has a data matrix $X$ of (explanatory) variables and, in the case of regression, a data vector $y$ for the dependent variable. Then the matrix defining a system of equations, being inverted or decomposed, typically corresponds to $X$ or $X^{\top}X$. We refer to the matrix being analyzed as $A$ and to its columns as $A_k$, $k = 1, \ldots, n$. In the case of linear equations, $b$ represents the right-hand side throughout this chapter. Further, the eigenvalues and singular values of $A$ are denoted by $\lambda_i$ and $\sigma_i$, respectively, and the corresponding eigenvectors by $v_i$, $i = 1, \ldots, n$. Finally, we denote the $n \times n$ identity and zero matrices by $I_n$ and $0_n$, respectively.
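The notation above connects the two spectra that reappear throughout the chapter: for the symmetric matrix $A = X^{\top}X$, each eigenvalue $\lambda_i$ equals the squared singular value $\sigma_i^2$ of $X$. A quick numerical check of this identity (an illustrative sketch, not code from this chapter):

```python
import numpy as np

# For A = X^T X, the eigenvalues lambda_i of A equal the squared
# singular values sigma_i^2 of X.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 4))
A = X.T @ X

lam = np.sort(np.linalg.eigvalsh(A))[::-1]  # eigenvalues of A, descending
sigma = np.linalg.svd(X, compute_uv=False)  # singular values of X, descending

print(np.allclose(lam, sigma**2))
```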
In this chapter, we first study various matrix decompositions (Sect. 4.1), which facilitate numerically stable algorithms for solving systems of linear equations and matrix inversions. Next, we discuss specific direct and iterative methods for solving linear systems (Sects. 4.2 and 4.3). Further, we concentrate on finding eigenvalues and eigenvectors of a matrix in Sect. 4.4. Finally, we provide an overview of numerical methods for large problems with sparse matrices (Sect. 4.5).
Let us note that most of the methods mentioned here work only under specific conditions, given in existence theorems, which we state without proofs. Unless stated otherwise, the proofs can be found in [31], for instance. Moreover, implementations of the algorithms described here can be found, for example, in [2] and [52].