

4. Numerical Linear Algebra

Lenka Čížková, Pavel Čížek

Many methods of computational statistics lead to problems of matrix algebra or numerical mathematics. For example, the least squares method in linear regression reduces to solving a system of linear equations (see Chap. III.8). The principal components method is based on finding the eigenvalues and eigenvectors of a matrix (see Chap. III.6). Nonlinear optimization methods such as Newton's method often require the inversion of a Hessian matrix. In all these cases, we need numerical linear algebra.
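As a minimal sketch of the first example, the least squares problem of minimizing $\Vert \boldsymbol{y} - \boldsymbol{X}\boldsymbol{b} \Vert^2$ leads to the normal equations $(\boldsymbol{X}^\top \boldsymbol{X})\boldsymbol{b} = \boldsymbol{X}^\top \boldsymbol{y}$, a system of linear equations in $\boldsymbol{b}$. The illustration below uses NumPy with simulated data (the variable names and dimensions are our own choices, not taken from the text):

```python
import numpy as np

# Simulated regression data: X is the matrix of explanatory variables,
# y the vector of the dependent variable (dimensions chosen arbitrarily).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.01 * rng.standard_normal(100)

# Minimizing ||y - X b||^2 reduces to the normal equations
# (X^T X) b = X^T y; we solve the system rather than inverting X^T X.
beta = np.linalg.solve(X.T @ X, X.T @ y)
```

In practice one would solve this system through a numerically stable decomposition of $\boldsymbol{X}$ (e.g. QR), which is exactly the topic of the following sections.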

Usually, one has a data matrix $ \boldsymbol{X}$ of (explanatory) variables and, in the case of regression, a data vector $ \boldsymbol{y}$ for the dependent variable. The matrix that defines a system of equations, or that is inverted or decomposed, then typically corresponds to $ \boldsymbol{X}$ or $ \boldsymbol{X}^\top \boldsymbol{X}$. We refer to the matrix being analyzed as $ \boldsymbol{A} = \{A_{ij}\}_{i=1,j=1}^{m,n} \in {\mathbb{R}}^{m \times n}$ and to its columns as $ \boldsymbol{A}_k = \{A_{ik}\}_{i=1}^{m}$, $ k = 1,\ldots,n$. In the case of linear equations, $ \boldsymbol{b} = \{b_{i}\}_{i=1}^{n} \in {\mathbb{R}}^n$ represents the right-hand side throughout this chapter. Further, the eigenvalues and singular values of $ \boldsymbol{A}$ are denoted by $ \lambda_{i}$ and $ \sigma_i$, respectively, and the corresponding eigenvectors by $ \boldsymbol{g}_i$, $ i=1,\ldots,n$. Finally, we denote the $ n \times n$ identity and zero matrices by $ \boldsymbol{I}_{n}$ and $ \boldsymbol{0}_{n}$, respectively.
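To make this notation concrete, the short sketch below computes the quantities just defined for a small symmetric matrix (the matrix itself is an arbitrary example of ours). For a symmetric positive definite $ \boldsymbol{A}$, the singular values $ \sigma_i$ coincide with the eigenvalues $ \lambda_i$:

```python
import numpy as np

# An arbitrary symmetric positive definite example matrix A.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Eigenvalues lambda_i (ascending) and eigenvectors g_i (columns of G).
lam, G = np.linalg.eigh(A)

# Singular values sigma_i (descending); for symmetric positive
# definite A these equal the eigenvalues.
sigma = np.linalg.svd(A, compute_uv=False)
```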

In this chapter, we first study various matrix decompositions (Sect. 4.1), which facilitate numerically stable algorithms for solving systems of linear equations and for matrix inversion. Next, we discuss specific direct and iterative methods for solving linear systems (Sects. 4.2 and 4.3). Further, we concentrate on finding the eigenvalues and eigenvectors of a matrix in Sect. 4.4. Finally, we provide an overview of numerical methods for large problems with sparse matrices (Sect. 4.5).

Let us note that most of the methods mentioned work under specific conditions, given in existence theorems that we state without proofs. Unless stated otherwise, the proofs can be found in [31], for instance. Moreover, implementations of the algorithms described here can be found, for example, in [2] and [52].


