16.6 Decompositions

Matrix decompositions are useful in numerical problems, in particular for solving systems of linear equations. All XploRe examples for this section can be found in XLGmatrix08.xpl.


16.6.1 Spectral Decomposition


{values,vectors} = eigsm (x)
computes the eigenvalues and eigenvectors of a symmetric matrix x
{values,vectors} = eiggn (x)
computes the eigenvalues and eigenvectors of a square matrix x

Let $ A$ be a square matrix of dimension $ n \times n$. A scalar $ \lambda$ is an eigenvalue and a nonzero vector $ v$ is an eigenvector of $ A$ if $ Av = \lambda v$.

The eigenvalues are the roots of the characteristic polynomial of order $ n$, defined by $ \vert A-\lambda I_n\vert = 0$, where $ I_n$ denotes the $ n$-dimensional identity matrix. The determinant of the matrix $ A$ is equal to the product of its $ n$ eigenvalues: $ \vert A\vert = \prod_{i=1}^n \lambda_i$.

The function eigsm calculates the eigenvectors and eigenvalues of a given symmetric matrix. We evaluate the eigenvalues and eigenvectors of nonsymmetric matrices with the function eiggn .

The function eigsm takes the matrix as its single argument and returns the eigenvalues and eigenvectors. The eigenvalues, and the eigenvectors with them, are returned in no particular order. Consider the following example:

  x = #(1, 2)~#(2, 3)
  y = eigsm(x)
  y
in which we define a matrix x and calculate its eigenvalues and eigenvectors. XploRe stores them in a variable y of list type: y.values contains the eigenvalues, while y.vectors contains the corresponding eigenvectors:
  Contents of y.values
  [1,] -0.23607 
  [2,]   4.2361 
  Contents of y.vectors
  [1,]  0.85065  0.52573 
  [2,] -0.52573  0.85065
We verify that the determinant of the matrix x is equal to the product of the eigenvalues of x:
  det(x) - y.values[1] * y.values[2]
gives
  Contents of _tmp
  [1,]  4.4409e-16
i.e. something numerically close to zero.
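
For readers without XploRe, the same computation can be sketched in Python with NumPy (an illustrative equivalent, not part of the original XploRe code; note that NumPy's eigh sorts the eigenvalues in ascending order):

```python
import numpy as np

# The same 2x2 symmetric matrix as in the XploRe example.
x = np.array([[1.0, 2.0],
              [2.0, 3.0]])

# eigh is NumPy's eigensolver for symmetric matrices; it returns the
# eigenvalues in ascending order and the eigenvectors as columns.
values, vectors = np.linalg.eigh(x)
print(values)     # approximately [-0.23607, 4.23607]

# The determinant equals the product of the eigenvalues.
print(np.linalg.det(x) - values.prod())   # numerically close to zero
```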

A symmetric matrix $ A$ can be decomposed as $ A = P\Lambda P^T$, where $ \Lambda$ is the diagonal matrix whose diagonal elements are the eigenvalues of $ A$, and $ P$ is the matrix obtained by concatenating the corresponding eigenvectors. The transformation matrix $ P$ is orthonormal, i.e. $ P^T P = P P^T = I$. This factorization of the matrix $ A$ is called the spectral decomposition.

We check that the matrix of concatenated eigenvectors is orthonormal:

  y.vectors'*y.vectors
yields a matrix numerically close to the identity matrix:
  Contents of _tmp
  [1,]        1 -1.0219e-17 
  [2,] -1.0219e-17        1

We verify the spectral decomposition of our example:

  z = y.vectors *diag(y.values) *y.vectors'
  z
which gives the original matrix x:
  Contents of z
  [1,]        1        2 
  [2,]        2        3
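
Both checks — the orthonormality of the eigenvector matrix and the spectral reconstruction — can be reproduced with NumPy (an illustrative sketch, not part of the original XploRe code):

```python
import numpy as np

x = np.array([[1.0, 2.0],
              [2.0, 3.0]])
values, P = np.linalg.eigh(x)   # eigenvectors are the columns of P

# P is orthonormal: P^T P is (numerically) the identity matrix.
print(np.allclose(P.T @ P, np.eye(2)))   # True

# Reconstruct x from its spectral decomposition P Lambda P^T.
z = P @ np.diag(values) @ P.T
print(np.allclose(z, x))                 # True
```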

If the matrix $ A$ can be decomposed as $ A = P\Lambda P^T$, then $ A^n = P\Lambda^n P^T$. In particular, $ A^{-1} = P\Lambda^{-1} P^T$. Therefore, the inverse of x can be calculated as

  xinv = y.vectors*inv(diag(y.values))*y.vectors' 
  xinv
which gives
  Contents of xinv
  [1,]       -3        2 
  [2,]        2       -1
which is equal to the inverse of x
  inv(x)
  Contents of _tmp
  [1,]       -3        2 
  [2,]        2       -1
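
The inversion via the spectral decomposition carries over directly to NumPy (an illustrative sketch; only the diagonal of eigenvalues needs to be inverted):

```python
import numpy as np

x = np.array([[1.0, 2.0],
              [2.0, 3.0]])
values, P = np.linalg.eigh(x)

# A^{-1} = P Lambda^{-1} P^T: invert the eigenvalues, keep the eigenvectors.
xinv = P @ np.diag(1.0 / values) @ P.T
print(xinv)                                  # approximately [[-3, 2], [2, -1]]
print(np.allclose(xinv, np.linalg.inv(x)))   # True
```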


16.6.2 Singular Value Decomposition


{u, l, v} = svd (x)
computes the singular value decomposition of a matrix x

Let $ B$ be an $ n\times p$ matrix with $ n\ge p$ and rank $ r$, with $ r \le p$. The singular value decomposition decomposes this matrix as $ B = U L V^T$, where $ U$ is the $ n\times r$ orthonormal matrix of eigenvectors of $ B\,B^T$ associated with the nonzero eigenvalues, $ V$ is the $ p\times r$ orthonormal matrix of eigenvectors of $ B^TB$ associated with the nonzero eigenvalues, and $ L$ is an $ r\times r$ diagonal matrix containing the singular values.

The function svd computes the singular value decomposition of an $ n\times p$ matrix x. This function returns the matrices $ U$ and $ V$ and the diagonal elements of $ L$ in the form of a list.

  x = #(1, 2, 3)~#(2, 3, 4)
  y = svd(x)
  y
XploRe returns the matrix $ U$ in the variable y.u, the diagonal elements of $ L$ in the variable y.l, and the matrix $ V$ in the variable y.v:
  Contents of y.u
  [1,]  0.84795   0.3381 
  [2,]  0.17355  0.55065 
  [3,] -0.50086   0.7632 
  Contents of y.l
  [1,]  0.37415 
  [2,]   6.5468 
  Contents of y.v
  [1,] -0.82193  0.56959 
  [2,]  0.56959  0.82193

We test that y.u *diag(y.l) *y.v' equals x with the commands

  xx = y.u *diag(y.l) *y.v'
  xx
This displays the matrix x:
  Contents of xx
  [1,]        1        2 
  [2,]        2        3 
  [3,]        3        4
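
The same decomposition and reconstruction can be sketched with NumPy (illustrative only; NumPy returns the singular values in descending order, whereas the XploRe output above lists them in the opposite order):

```python
import numpy as np

x = np.array([[1.0, 2.0],
              [2.0, 3.0],
              [3.0, 4.0]])

# full_matrices=False gives the reduced ("economy") SVD, matching the
# n x r and p x r shapes used above; s holds the singular values.
u, s, vt = np.linalg.svd(x, full_matrices=False)
print(s)                       # approximately [6.5468, 0.37415]

# Reconstruct x as U L V^T.
xx = u @ np.diag(s) @ vt
print(np.allclose(xx, x))      # True
```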


16.6.3 LU Decomposition


{l, u, index} = ludecomp (x)
computes the LU decomposition of a matrix x

The LU decomposition of an $ n$-dimensional square matrix $ A$ is defined as

$\displaystyle A = LU,$

where $ U$ is an $ n$-dimensional upper triangular matrix, with typical element $ u_{ij}$ such that $ u_{ij} = 0\; \forall i>j$, and $ L$ is an $ n$-dimensional lower triangular matrix, with typical element $ l_{ij}$ such that $ l_{ij} = 0\; \forall j > i$. The LU decomposition is used for solving linear equations and inverting matrices. The function ludecomp performs the LU decomposition of a square matrix. It takes the matrix as its argument and returns, in the form of a list, the lower and upper triangular matrices and an index vector which records the row permutations made during the decomposition:
  x = #(1, 2)~#(2, 3)
  lu = ludecomp(x)
  lu
gives
  Contents of lu.l
  [1,]        1        0 
  [2,]      0.5        1 
  Contents of lu.u
  [1,]        2        3 
  [2,]        0      0.5 
  Contents of lu.index
  [1,]        2 
  [2,]        1

We re-obtain the original matrix x by using the function index , which takes as its arguments a matrix and the vector of row permutations recorded by the LU decomposition. The instruction

  index(lu.l*lu.u,lu.index)
returns the matrix x
  Contents of index
  [1,]        1        2 
  [2,]        2        3
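
XploRe's ludecomp is not available outside the package; the following pure-NumPy sketch (lu_decomp is a hypothetical helper written for illustration) performs the same LU factorization with partial pivoting and reproduces the factors and permutation shown above:

```python
import numpy as np

def lu_decomp(a):
    """LU decomposition with partial pivoting: returns L, U and a row
    permutation perm such that a[perm] = L @ U. Illustrative helper,
    not the XploRe ludecomp itself."""
    a = np.array(a, dtype=float)
    n = a.shape[0]
    perm = np.arange(n)
    L = np.eye(n)
    U = a.copy()
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))   # pivot: largest entry in column k
        if p != k:                            # swap rows of U, L and the permutation
            U[[k, p], :] = U[[p, k], :]
            perm[[k, p]] = perm[[p, k]]
            L[[k, p], :k] = L[[p, k], :k]
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]       # elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]
            U[i, k] = 0.0
    return L, U, perm

x = np.array([[1.0, 2.0],
              [2.0, 3.0]])
L, U, perm = lu_decomp(x)
print(L)      # [[1, 0], [0.5, 1]]
print(U)      # [[2, 3], [0, 0.5]]
print(perm)   # [1 0] -- row 2 pivoted to the top, as lu.index records (1-based)
```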


16.6.4 Cholesky Decomposition


bd = chold (x, l)
calculates the triangularization of a matrix x, such that b'*d*b=x; l is the number of lower nonzero subdiagonals including the diagonal

Let $ A$ be an $ n$-dimensional square matrix that is symmetric, i.e. $ A_{ij} = A_{ji}$, and positive definite, i.e. $ v^TA v > 0$ for all nonzero $ n$-dimensional vectors $ v$. The Cholesky decomposition factors the matrix $ A$ as $ A = L\,L^T$, where $ L$ is a lower triangular matrix.

The function chold computes a triangularization of the input matrix. The following steps are necessary to find the lower triangular $ L$:

  library("xplore")          ; unit is in library xplore!
  x = #(2,2)~#(2,3)
  tmp = chold(x, rows(x))
  d = diag(xdiag(tmp))       ; diagonal matrix
  b = tmp-d+diag(matrix(rows(tmp)))  ; lower triangular
  L = b'*sqrt(d)             ; Cholesky triangular
We can check that the decomposition works by comparing x and L*L'. The instruction
  x - L*L'
displays
  Contents of _tmp
  [1,] -4.4409e-16 -4.4409e-16 
  [2,] -4.4409e-16 -4.4409e-16
which means that the difference between both matrices is practically zero.
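
NumPy returns the lower triangular Cholesky factor directly, so the rearrangement of chold's output above is not needed there; a short illustrative sketch:

```python
import numpy as np

# The same symmetric positive definite matrix as in the chold example.
x = np.array([[2.0, 2.0],
              [2.0, 3.0]])

# cholesky returns the lower triangular factor L with x = L @ L.T.
L = np.linalg.cholesky(x)
print(L)                          # approximately [[1.4142, 0], [1.4142, 1]]
print(np.allclose(L @ L.T, x))    # True
```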

The Cholesky decomposition is used in computational statistics for inverting Hessian matrices and the matrices $ X^TX$ in regression analysis.