# 4.4 Eigenvalues and Eigenvectors

In this section, we deal with methods for computing eigenvalues and eigenvectors of a matrix $A$. First, we discuss a simple power method for computing one or a few eigenvalues (Sect. 4.4.1). Next, we concentrate on methods performing the complete eigenanalysis, that is, finding all eigenvalues (the Jacobi, QR, and LR methods in Sects. 4.4.2-4.4.5). Finally, we briefly describe a way to improve already computed eigenvalues and to find the corresponding eigenvectors. Additionally, note that eigenanalysis can also be done by means of SVD, see Sect. 4.1.4. For more details on the described methods, as well as some others, one can consult the monographs [15,24,52] and [60].

Before discussing specific methods, let us describe the principle common to most of them. We assume that $A$ has eigenvalues $\lambda_1, \ldots, \lambda_n$. To find all eigenvalues, we transform the original matrix $A$ to a simpler matrix $B$ that is similar to $A$ (recall that matrices $A$ and $B$ are similar if there is a nonsingular matrix $T$ such that $B = T^{-1}AT$). The similarity of $A$ and $B$ is crucial since it guarantees that both matrices have the same eigenvalues, and their eigenvectors follow a simple relation: if $g$ is an eigenvector of $B$ corresponding to its eigenvalue $\lambda$, then $Tg$ is an eigenvector of $A$ corresponding to the same eigenvalue $\lambda$.

There are two basic strategies to construct a similarity transformation of the original matrix $A$. First, one can use a series of simple transformations, such as GRs, and eliminate elements of $A$ one by one (see the Jacobi method, Sect. 4.4.2). This approach is often used to transform $A$ to its tridiagonal or upper Hessenberg form. (A matrix $C = \{c_{ij}\}_{i,j=1}^{n}$ has the upper Hessenberg form if it is upper triangular except for the first subdiagonal; that is, $c_{ij} = 0$ for $i > j + 1$.) Second, one can factorize $A$ into a product $A = FG$ and switch the order of the factors, $B = GF$ (for nonsingular $F$, the similarity of $A$ and $B$ follows from $B = GF = F^{-1}FGF = F^{-1}AF$). This is used, for example, by the LR method (Sect. 4.4.5). Finally, there are methods combining both approaches.
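To make the factor-swapping idea concrete, here is a small pure-Python sketch; the $2\times 2$ factors are a made-up example, not from the text. Swapping the factors of $A = FG$ yields $B = GF$, and the eigenvalue invariants trace and determinant are preserved, as similarity implies.

```python
# Hypothetical 2x2 example: factorize A = F G, swap the factors to
# obtain B = G F, and verify that trace and determinant (and hence
# the eigenvalues) are preserved.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(M):
    return M[0][0] + M[1][1]

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

F = [[1.0, 0.0],    # F is nonsingular (lower triangular, unit diagonal)
     [2.0, 1.0]]
G = [[4.0, 1.0],
     [0.0, 3.0]]

A = matmul(F, G)    # original matrix A = F G
B = matmul(G, F)    # similar matrix  B = G F = F^{-1} A F

print(trace(A), trace(B))   # equal: similar matrices share eigenvalues
print(det(A), det(B))       # equal as well
```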

## 4.4.1 Power Method

In its basic form, the power method aims at finding only the largest eigenvalue $\lambda_1$ of a matrix $A$ and the corresponding eigenvector. Let us assume that the matrix $A$ has a dominant eigenvalue $\lambda_1$ ($|\lambda_1| > |\lambda_2| \geq \ldots \geq |\lambda_n|$) and $n$ linearly independent eigenvectors $g_1, \ldots, g_n$.

The power method constructs two series $\lambda^{(i)}$ and $x^{(i)}$, $i \in \mathbb{N}$, that converge to $\lambda_1$ and to the corresponding eigenvector $g_1$, respectively. Starting from a vector $x^{(0)}$ that is not orthogonal to $g_1$, one only has to iteratively compute $Ax^{(i)}$ and split it into its norm $\lambda^{(i+1)}$ and the normalized vector $x^{(i+1)}$, see Algorithm 9. Usually, the Euclidean ($L_2$) and maximum ($L_\infty$) norms are used.

Algorithm 9

$i = 0$
do
  $y^{(i+1)} = A x^{(i)}$
  $\lambda^{(i+1)} = \|y^{(i+1)}\|$
  $x^{(i+1)} = y^{(i+1)} / \lambda^{(i+1)}$
  $i = i + 1$
until a stop criterion holds
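The iteration can be sketched in a few lines of pure Python; the maximum norm is used here, and the $2\times 2$ matrix, the starting vector, and the function name are our own illustration, not part of the text.

```python
# A minimal sketch of the power method using the maximum (L_inf) norm.

def power_method(A, x, tol=1e-10, max_iter=1000):
    """Return an estimate of the dominant eigenvalue and its eigenvector."""
    lam = 0.0
    for _ in range(max_iter):
        y = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]
        lam_new = max(abs(v) for v in y)   # lambda^{(i+1)} = ||y^{(i+1)}||
        x = [v / lam_new for v in y]       # x^{(i+1)} = y^{(i+1)} / lambda^{(i+1)}
        if abs(lam_new - lam) < tol:       # a simple stop criterion
            return lam_new, x
        lam = lam_new
    return lam, x

A = [[2.0, 1.0],
     [1.0, 2.0]]                           # eigenvalues 3 and 1
lam, g = power_method(A, [1.0, 0.0])       # x^{(0)} not orthogonal to g_1
print(lam)                                 # converges to 3
```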

Although assessing the validity of these assumptions is far from trivial, one can usually recognize easily from the behaviour of the two constructed series whether the method converges.

Furthermore, the power method can be extended to search also for other eigenvalues; for example, the smallest one and the second largest one. First, if $A$ is nonsingular, we can apply the power method to $A^{-1}$ to find the smallest eigenvalue $\lambda_n$, because $1/\lambda_n$ is the largest eigenvalue of $A^{-1}$. Second, if we need more eigenvalues and $\lambda_1$ is already known, we can use a reduction method to construct a matrix $B$ that has the same eigenvalues and eigenvectors as $A$ except for $\lambda_1$, which is replaced by a zero eigenvalue. To do so, we need to find an eigenvector $h_1$ of $A^\top$ corresponding to $\lambda_1$ ($A$ and $A^\top$ have the same eigenvalues), normalized so that $h_1^\top g_1 = 1$, and to set $B = A - \lambda_1 g_1 h_1^\top$. Naturally, this process can be repeated to find the third and further eigenvalues.
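A minimal sketch of this reduction step for a symmetric matrix, in which case $h_1 = g_1$ and it suffices to normalize $g_1$ to unit length; the $2\times 2$ matrix and the function name are a made-up illustration.

```python
# Deflation for a symmetric matrix: B = A - lam1 * g1 g1^T has the
# same spectrum as A except that lam1 is replaced by zero, so the
# power method applied to B finds the second largest eigenvalue.
import math

def power_method(A, x, iters=200):
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]
        lam = max(abs(v) for v in y)
        x = [v / lam for v in y]
    return lam, x

A = [[2.0, 1.0],
     [1.0, 2.0]]                     # eigenvalues 3 and 1

lam1, g1 = power_method(A, [1.0, 0.0])
nrm = math.sqrt(sum(v * v for v in g1))
g1 = [v / nrm for v in g1]           # now g1^T g1 = 1

# reduced matrix B = A - lam1 * g1 g1^T
B = [[A[i][j] - lam1 * g1[i] * g1[j] for j in range(2)] for i in range(2)]

lam2, _ = power_method(B, [1.0, 0.0])
print(lam1, lam2)                    # the two largest eigenvalues: 3 and 1
```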

Finally, let us mention that the power method can be used also for some matrices without a dominant eigenvalue (e.g., matrices with $\lambda_1 = \ldots = \lambda_p$ for some $1 < p \leq n$). For further extensions of the power method see [56], for instance.

## 4.4.2 Jacobi Method

For a symmetric matrix $A$, the Jacobi method constructs a series of orthogonal matrices $G_1, G_2, \ldots$ such that the matrix $B_i = G_i^\top \cdots G_1^\top A G_1 \cdots G_i$ converges to a diagonal matrix $D$. Each matrix $G_i$ is a GR matrix defined in (4.5), whereby the angle $\theta$ is chosen so that one nonzero off-diagonal element of $B_{i-1}$ becomes zero in $B_i$. Formulas for computing $\theta$ given the element to be zeroed are described in [15], for instance. Once the matrix $A$ is diagonalized this way, the diagonal of $D$ contains the eigenvalues of $A$ and the columns of the matrix $G = G_1 \cdots G_i$ represent the associated eigenvectors.

There are various strategies to choose the element to be zeroed in the next step. The classical Jacobi method chooses the largest off-diagonal element in absolute value, and it is known to converge. (Since searching for the maximal element is time consuming, various systematic schemes were developed, but their convergence often cannot be guaranteed.) Because the Jacobi method is relatively slow, other methods are usually preferred (e.g., the QR method). On the other hand, it has recently become interesting again because of its accuracy and easy parallelization ([35,67]).
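For illustration, here is a pure-Python sketch of the Jacobi method. For simplicity it uses one of the systematic schemes (cyclic sweeps over all off-diagonal elements) rather than the classical largest-element strategy, and it returns only the eigenvalues; the example matrix is made up.

```python
# Cyclic Jacobi sketch for a symmetric matrix: each GR similarity
# transformation zeroes one off-diagonal element.
import math

def jacobi_eigenvalues(A, sweeps=20):
    n = len(A)
    A = [row[:] for row in A]                    # work on a copy
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < 1e-14:
                    continue
                # rotation angle chosen so that the (p, q) element vanishes
                theta = 0.5 * math.atan2(2.0 * A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):               # A <- A G
                    akp, akq = A[k][p], A[k][q]
                    A[k][p] = c * akp - s * akq
                    A[k][q] = s * akp + c * akq
                for k in range(n):               # A <- G^T A
                    apk, aqk = A[p][k], A[q][k]
                    A[p][k] = c * apk - s * aqk
                    A[q][k] = s * apk + c * aqk
    return sorted(A[i][i] for i in range(n))

A = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]        # eigenvalues 2 - sqrt(2), 2, 2 + sqrt(2)
vals = jacobi_eigenvalues(A)
print(vals)
```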

## 4.4.3 Givens and Householder Reductions

The Givens and Householder methods use a principle similar to that of the Jacobi method. A series of GRs or HRs, designed such that they form similarity transformations, is applied to a symmetric matrix $A$ in order to transform it to a tridiagonal matrix. (A tridiagonal matrix is the Hessenberg form for symmetric matrices.) This tridiagonal matrix is then subject to one of the iterative methods, such as the QR or LR methods discussed in the following paragraphs. Formulas for Givens and Householder similarity transformations are given in [52], for instance.
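A pure-Python sketch of the Householder reduction to tridiagonal form follows; it is a plain textbook version that forms each HR matrix explicitly (real implementations update $A$ in place), and the symmetric test matrix is made up.

```python
# Householder tridiagonalization: each similarity transformation
# A <- H A H zeroes one column below the first subdiagonal.
import math

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def householder_tridiagonalize(A):
    n = len(A)
    A = [row[:] for row in A]
    for k in range(n - 2):
        # Householder vector zeroing the entries below A[k+1][k]
        x = [A[i][k] for i in range(k + 1, n)]
        alpha = -math.copysign(math.sqrt(sum(v * v for v in x)),
                               x[0] if x[0] else 1.0)
        v = x[:]
        v[0] -= alpha
        vnorm2 = sum(u * u for u in v)
        if vnorm2 < 1e-30:
            continue                       # column already reduced
        # embed H = I - 2 v v^T / (v^T v) into an n x n identity
        H = [[float(i == j) for j in range(n)] for i in range(n)]
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                H[i][j] -= 2.0 * v[i - k - 1] * v[j - k - 1] / vnorm2
        A = matmul(H, matmul(A, H))        # H is orthogonal and symmetric
    return A

A = [[4.0, 1.0, 2.0, 2.0],
     [1.0, 2.0, 0.0, 1.0],
     [2.0, 0.0, 3.0, -2.0],
     [2.0, 1.0, -2.0, -1.0]]
T = householder_tridiagonalize(A)
# entries outside the three central diagonals are now (numerically) zero
print(all(abs(T[i][j]) < 1e-10 for i in range(4) for j in range(4)
          if abs(i - j) > 1))
```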

## 4.4.4 QR Method

The QR method is one of the most frequently used methods for the complete eigenanalysis of a nonsymmetric matrix, despite the fact that its convergence is not ensured. A typical algorithm proceeds as follows. In the first step, the matrix $A$ is transformed into an upper Hessenberg matrix $B$ using Givens or Householder similarity transformations (see Sects. 4.1.3 and 4.4.3). In the second step, this Hessenberg matrix is subject to the iterative process called chasing. In each iteration, similarity transformations, such as GRs, are first used to create nonzero entries in positions $(3,1)$, $(4,1)$ and $(4,2)$. Next, similarity transformations are repeatedly used to zero elements $(j+1,j-1)$ and $(j+2,j-1)$ and to move these ''nonzeros'' towards the lower right corner of the matrix (i.e., to elements $(j+2,j)$, $(j+3,j)$ and $(j+3,j+1)$ for $j = 2, 3, \ldots$). As a result of chasing, one or two eigenvalues can be extracted. If $b_{n,n-1}$ becomes zero (or negligible) after chasing, the element $b_{nn}$ is an eigenvalue. Consequently, we can delete the $n$th row and column of the matrix and apply chasing to this smaller matrix to find another eigenvalue. Similarly, if $b_{n-1,n-2}$ becomes zero (or negligible), the two eigenvalues of the $2 \times 2$ submatrix in the lower right corner are eigenvalues of $B$. Subsequently, we can delete the last two rows and columns and continue with the next iteration.
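The full chasing procedure is lengthy, so the following pure-Python sketch shows instead the basic (unshifted) QR iteration that underlies the method: factorize $A_i = Q_i R_i$ and set $A_{i+1} = R_i Q_i$, which is similar to $A_i$. The Gram-Schmidt QR step and the nonsymmetric example matrix are our own illustration, not the production algorithm.

```python
# Unshifted QR iteration: A converges (here) to upper triangular form
# with the eigenvalues on the diagonal.
import math

def qr_decompose(A):
    """Classical Gram-Schmidt QR; adequate for a small well-conditioned demo."""
    n = len(A)
    Q = [[0.0] * n for _ in range(n)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [A[i][j] for i in range(n)]
        for k in range(j):
            R[k][j] = sum(Q[i][k] * A[i][j] for i in range(n))
            v = [v[i] - R[k][j] * Q[i][k] for i in range(n)]
        R[j][j] = math.sqrt(sum(u * u for u in v))
        for i in range(n):
            Q[i][j] = v[i] / R[j][j]
    return Q, R

def qr_eigenvalues(A, iters=500):
    n = len(A)
    for _ in range(iters):
        Q, R = qr_decompose(A)
        A = [[sum(R[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]              # A_{i+1} = R_i Q_i
    return sorted(A[i][i] for i in range(n))

A = [[1.0, 2.0],
     [3.0, 2.0]]             # nonsymmetric, eigenvalues 4 and -1
vals = qr_eigenvalues(A)
print(vals)
```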

Since a more detailed description of the whole iterative process goes beyond the extent of this contribution, we refer the reader to [15] for a shorter discussion and to [24] and [52] for a more detailed discussion of the QR method.

## 4.4.5 LR Method

The LR method is based on a simple observation that decomposing a matrix $A$ into a product of two triangular factors and multiplying the factors in the inverse order results in a matrix similar to $A$. Using the LU decomposition (Sect. 4.1.2), the LR method constructs a matrix series $A_i$ for $i \in \mathbb{N}$, where $A_1 = A$ and

$$A_i = L_i R_i\,, \qquad A_{i+1} = R_i L_i\,,$$

where $L_i$ is a lower triangular matrix and $R_i$ is an upper triangular matrix with ones on its diagonal. For a wide class of matrices, including symmetric positive definite matrices, $A_i$ and $L_i$ are proved to converge to the same lower triangular matrix $L$, whereby the eigenvalues of $A$ form the diagonal of $L$ and are ordered by decreasing absolute value.
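The iteration can be sketched in pure Python with a Crout-style decomposition ($L_i$ lower triangular, $R_i$ with unit diagonal); the symmetric positive definite $2\times 2$ matrix is a made-up example.

```python
# One LR step: decompose A = L R (R with unit diagonal), then return
# the swapped product R L, which is similar to A.

def lr_step(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    R = [[float(i == j) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):        # column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * R[k][j] for k in range(j))
        for i in range(j + 1, n):    # row j of R (unit diagonal)
            R[j][i] = (A[j][i] - sum(L[j][k] * R[k][i] for k in range(j))) / L[j][j]
    return [[sum(R[i][k] * L[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]       # A_{i+1} = R_i L_i

A = [[2.0, 1.0],
     [1.0, 2.0]]                     # eigenvalues 3 and 1
for _ in range(60):
    A = lr_step(A)
print(A[0][0], A[1][1])              # diagonal approaches 3, 1 (decreasing)
```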

## 4.4.6 Inverse Iterations

The method of inverse iterations can be used to improve an approximation $\lambda$ of an eigenvalue $\lambda_l$ of a matrix $A$. The method is based on the fact that the eigenvector $g_l$ associated with $\lambda_l$ is also an eigenvector of $(A - \lambda I)^{-1}$ associated with its eigenvalue $1/(\lambda_l - \lambda)$. For an initial approximation $\lambda$ close to $\lambda_l$, $1/(\lambda_l - \lambda)$ is the dominant eigenvalue of $(A - \lambda I)^{-1}$. Thus, it can be computed by the power method described in Sect. 4.4.1, whereby $\lambda$ could be modified in each iteration in order to improve the approximation of $\lambda_l$.

This method is not very efficient without a good starting approximation, and therefore it is not suitable for the complete eigenanalysis. On the other hand, the use of the power method makes it suitable for finding the eigenvector $g_l$ associated with $\lambda_l$. Thus, the method of inverse iterations often complements methods for complete eigenanalysis and serves then as a tool for eigenvector analysis. For this purpose, one does not have to perform the iterative improvement of the initial $\lambda$: applying the power method to $(A - \lambda I)^{-1}$ suffices. See [40,52] and [60] for more details.
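A pure-Python sketch of inverse iterations: rather than forming $(A - \lambda I)^{-1}$ explicitly, each power-method step solves the linear system $(A - \lambda I)y = x^{(i)}$. The $2\times 2$ matrix, the starting approximation $\lambda = 0.9$ (close to the true eigenvalue $1$), and the helper names are made-up illustration.

```python
# Inverse iteration = power method applied to (A - lam0 I)^{-1},
# implemented by solving a linear system in each step.

def solve(M, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]   # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def inverse_iteration(A, lam0, x, iters=50):
    n = len(A)
    B = [[A[i][j] - (lam0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    for _ in range(iters):
        y = solve(B, x)                    # y = (A - lam0 I)^{-1} x
        nrm = max(abs(v) for v in y)
        x = [v / nrm for v in y]
    # Rayleigh quotient estimate of the eigenvalue itself
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    lam = sum(Ax[i] * x[i] for i in range(n)) / sum(v * v for v in x)
    return lam, x

A = [[2.0, 1.0],
     [1.0, 2.0]]                           # eigenvalues 3 and 1
lam, g = inverse_iteration(A, 0.9, [1.0, 0.0])
print(lam)                                 # refines 0.9 towards 1.0
```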
