2.5 Partitioned Matrices
Very often we will have to consider certain groups of rows and columns of a matrix $\data{A}(n\times p)$. In the case of two groups, we have
\begin{displaymath}
\data{A}=\left(\begin{array}{ll}
\data{A}_{11} & \data{A}_{12}\\
\data{A}_{21} & \data{A}_{22}
\end{array}\right)
\end{displaymath}
where $\data{A}_{ij}(n_i\times p_j)$, $i,j=1,2$, $n_1+n_2=n$ and $p_1+p_2=p$.
If $\data{B}(n\times p)$ is partitioned accordingly, we have:
\begin{displaymath}
\data{A}+\data{B}=\left(\begin{array}{ll}
\data{A}_{11}+\data{B}_{11} & \data{A}_{12}+\data{B}_{12}\\
\data{A}_{21}+\data{B}_{21} & \data{A}_{22}+\data{B}_{22}
\end{array}\right)
\end{displaymath}
\begin{displaymath}
\data{B}^{\top}=\left(\begin{array}{ll}
\data{B}_{11}^{\top} & \data{B}_{21}^{\top}\\
\data{B}_{12}^{\top} & \data{B}_{22}^{\top}
\end{array}\right)
\end{displaymath}
\begin{displaymath}
\data{A}\data{B}^{\top}=\left(\begin{array}{ll}
\data{A}_{11}\data{B}_{11}^{\top}+\data{A}_{12}\data{B}_{12}^{\top} & \data{A}_{11}\data{B}_{21}^{\top}+\data{A}_{12}\data{B}_{22}^{\top}\\
\data{A}_{21}\data{B}_{11}^{\top}+\data{A}_{22}\data{B}_{12}^{\top} & \data{A}_{21}\data{B}_{21}^{\top}+\data{A}_{22}\data{B}_{22}^{\top}
\end{array}\right).
\end{displaymath}
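As a numerical sanity check, the blockwise rule for the product $\data{A}\data{B}^{\top}$ can be verified on random matrices. The sketch below uses numpy; the block sizes and the random seed are arbitrary choices for illustration, not part of the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, p1, p2 = 2, 3, 2, 4          # arbitrary block sizes
A = rng.normal(size=(n1 + n2, p1 + p2))
B = rng.normal(size=(n1 + n2, p1 + p2))

# Partition both matrices conformably into four blocks.
A11, A12 = A[:n1, :p1], A[:n1, p1:]
A21, A22 = A[n1:, :p1], A[n1:, p1:]
B11, B12 = B[:n1, :p1], B[:n1, p1:]
B21, B22 = B[n1:, :p1], B[n1:, p1:]

# Assemble A B^T from the blockwise formula.
blockwise = np.block([
    [A11 @ B11.T + A12 @ B12.T, A11 @ B21.T + A12 @ B22.T],
    [A21 @ B11.T + A22 @ B12.T, A21 @ B21.T + A22 @ B22.T],
])

print(np.allclose(blockwise, A @ B.T))  # True
```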
An important particular case is the square matrix $\data{A}(p\times p)$, partitioned such that $\data{A}_{11}$ and $\data{A}_{22}$ are both square matrices (i.e., $n_j=p_j$, $j=1,2$). It can be verified that when $\data{A}$ is non-singular ($\data{A}\data{A}^{-1}=\data{I}_p$):
\begin{displaymath}
\data{A}^{-1}=\left(\begin{array}{ll}
\data{A}^{11} & \data{A}^{12}\\
\data{A}^{21} & \data{A}^{22}
\end{array}\right)
\end{displaymath}
(2.26)
where
\begin{displaymath}
\begin{array}{lll}
\data{A}^{11} &=& (\data{A}_{11}-\data{A}_{12}\data{A}_{22}^{-1}\data{A}_{21})^{-1}
\;=\;(\data{A}_{11\cdot 2})^{-1}\\[1mm]
\data{A}^{12} &=& -(\data{A}_{11\cdot 2})^{-1}\,\data{A}_{12}\data{A}_{22}^{-1}\\[1mm]
\data{A}^{21} &=& -\data{A}_{22}^{-1}\data{A}_{21}\,(\data{A}_{11\cdot 2})^{-1}\\[1mm]
\data{A}^{22} &=& \data{A}_{22}^{-1}+\data{A}_{22}^{-1}\data{A}_{21}\,(\data{A}_{11\cdot 2})^{-1}\data{A}_{12}\data{A}_{22}^{-1}.
\end{array}
\end{displaymath}
An alternative expression can be obtained by reversing the positions of $\data{A}_{11}$ and $\data{A}_{22}$ in the original matrix.
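The four blocks of the partitioned inverse can be checked against a direct inverse. The following numpy sketch uses arbitrary block sizes; the diagonal shift is only there to keep the random matrix (and its $\data{A}_{22}$ block) safely non-singular.

```python
import numpy as np

rng = np.random.default_rng(1)
p1, p2 = 2, 3
p = p1 + p2
A = rng.normal(size=(p, p)) + p * np.eye(p)  # shift keeps A, A22 non-singular

A11, A12 = A[:p1, :p1], A[:p1, p1:]
A21, A22 = A[p1:, :p1], A[p1:, p1:]

A22i = np.linalg.inv(A22)
S = np.linalg.inv(A11 - A12 @ A22i @ A21)     # A^{11} = (A_{11.2})^{-1}
I12 = -S @ A12 @ A22i                         # A^{12}
I21 = -A22i @ A21 @ S                         # A^{21}
I22 = A22i + A22i @ A21 @ S @ A12 @ A22i      # A^{22}

Ainv = np.block([[S, I12], [I21, I22]])
print(np.allclose(Ainv, np.linalg.inv(A)))    # True
```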
The following results will be useful if $\data{A}_{11}$ is non-singular:
\begin{displaymath}
\vert\data{A}\vert=\vert\data{A}_{11}\vert\,\vert\data{A}_{22}-\data{A}_{21}\data{A}_{11}^{-1}\data{A}_{12}\vert
=\vert\data{A}_{11}\vert\,\vert\data{A}_{22\cdot 1}\vert.
\end{displaymath}
(2.27)
If $\data{A}_{22}$ is non-singular, we have that:
\begin{displaymath}
\vert\data{A}\vert=\vert\data{A}_{22}\vert\,\vert\data{A}_{11}-\data{A}_{12}\data{A}_{22}^{-1}\data{A}_{21}\vert
=\vert\data{A}_{22}\vert\,\vert\data{A}_{11\cdot 2}\vert.
\end{displaymath}
(2.28)
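Both determinant factorizations are easy to verify numerically. A short numpy sketch (arbitrary block sizes; the diagonal shift only ensures the diagonal blocks are non-singular):

```python
import numpy as np

rng = np.random.default_rng(2)
p1, p2 = 2, 3
p = p1 + p2
A = rng.normal(size=(p, p)) + p * np.eye(p)
A11, A12 = A[:p1, :p1], A[:p1, p1:]
A21, A22 = A[p1:, :p1], A[p1:, p1:]

# (2.27): factor through A11 and the Schur complement A_{22.1}
d1 = np.linalg.det(A11) * np.linalg.det(A22 - A21 @ np.linalg.inv(A11) @ A12)
# (2.28): factor through A22 and the Schur complement A_{11.2}
d2 = np.linalg.det(A22) * np.linalg.det(A11 - A12 @ np.linalg.inv(A22) @ A21)

print(np.allclose([d1, d2], np.linalg.det(A)))  # True
```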
A useful formula is derived from the alternative expressions for the inverse and the determinant. For instance let
\begin{displaymath}
\data{B}=\left(\begin{array}{ll}
1 & b^{\top}\\
a & \data{A}
\end{array}\right)
\end{displaymath}
where $a$ and $b$ are $(p\times 1)$ vectors and $\data{A}$ is $(p\times p)$ non-singular. We then have:
\begin{displaymath}
\vert\data{B}\vert=\vert \data{A}-ab^{\top} \vert= \vert \data{A}\vert\,\vert 1-b^{\top}\data{A}^{-1}a \vert
\end{displaymath}
(2.29)
and equating the two expressions for $\data{B}^{22}$, we obtain the following:
\begin{displaymath}
(\data{A}-ab^{\top})^{-1}=\data{A}^{-1}+\frac{\data{A}^{-1}ab^{\top}\data{A}^{-1}}{1-b^{\top}\data{A}^{-1}a}.
\end{displaymath}
(2.30)
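Formulas (2.29) and (2.30) — the latter often called the Sherman-Morrison formula — can be checked on random data. A minimal numpy sketch (dimension, seed, and the diagonal shift keeping $\data{A}$ non-singular are all arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
p = 4
A = rng.normal(size=(p, p)) + p * np.eye(p)   # shift keeps A non-singular
a = rng.normal(size=(p, 1))
b = rng.normal(size=(p, 1))

Ainv = np.linalg.inv(A)
denom = (1 - b.T @ Ainv @ a).item()           # the 1x1 "determinant" in (2.29)

# (2.29): determinant of the rank-one update
lhs_det = np.linalg.det(A - a @ b.T)
rhs_det = np.linalg.det(A) * denom

# (2.30): inverse of the rank-one update
lhs_inv = np.linalg.inv(A - a @ b.T)
rhs_inv = Ainv + (Ainv @ a @ b.T @ Ainv) / denom

print(np.allclose(lhs_det, rhs_det), np.allclose(lhs_inv, rhs_inv))
```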
EXAMPLE 2.9
Let's consider the matrix
\begin{displaymath}
\data{A}=\left(\begin{array}{ll}
1 & 2\\
2 & 2
\end{array}\right).
\end{displaymath}
We can use formula (2.26) to calculate the inverse of a partitioned matrix, i.e., ${\data A}^{11}=-1$, ${\data A}^{12}={\data A}^{21}=1$, ${\data A}^{22}=-1/2$. The inverse of ${\data A}$ is
\begin{displaymath}
\data{A}^{-1}=\left(\begin{array}{rr}
-1 & 1\\
1 & -0.5
\end{array}\right).
\end{displaymath}
It is also easy to calculate the determinant of ${\data A}$ using (2.27):
\begin{displaymath}
\vert\data{A}\vert=\vert 1\vert\,\vert 2-4\vert=-2.
\end{displaymath}
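The example can be reproduced by treating each entry of the $2\times 2$ matrix as a $1\times 1$ block and applying the scalar form of the formulas in (2.26). A numpy sketch:

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 2.0]])           # four 1x1 blocks
a11, a12, a21, a22 = A[0, 0], A[0, 1], A[1, 0], A[1, 1]

s = 1 / (a11 - a12 * a21 / a22)                  # A^{11} = (A_{11.2})^{-1}
i12 = -s * a12 / a22                             # A^{12}
i21 = -(1 / a22) * a21 * s                       # A^{21}
i22 = 1 / a22 + (1 / a22) * a21 * s * a12 / a22  # A^{22}

print(s, i12, i21, i22)                          # -1.0 1.0 1.0 -0.5
print(np.linalg.det(A))                          # close to -2.0
```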
Let $\data{A}(n\times p)$ and $\data{B}(p\times n)$ be any two matrices and suppose that $n\geq p$. From (2.27) and (2.28) we can conclude that
\begin{displaymath}
\left\vert
\begin{array}{cc} -\lambda\data{I}_n&-\data{A}\\
\data{B}&\data{I}_p
\end{array}\right\vert
=(-\lambda)^{n-p}\,\vert\data{B}\data{A}-\lambda\data{I}_p\vert
=\vert\data{A}\data{B}-\lambda \data{I}_n\vert.
\end{displaymath}
(2.31)
Since both determinants on the right-hand side of (2.31) are polynomials in $\lambda$, we find that the $n$ eigenvalues of $\data{A}\data{B}$ yield the $p$ eigenvalues of $\data{B}\data{A}$ plus the eigenvalue $0$, $n-p$ times.
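This eigenvalue relationship is easy to observe numerically: the spectrum of $\data{A}\data{B}$ consists of the spectrum of $\data{B}\data{A}$ padded with $n-p$ zeros. A numpy sketch with arbitrary dimensions and seed:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 5, 3                          # n >= p
A = rng.normal(size=(n, p))
B = rng.normal(size=(p, n))

ev_AB = np.linalg.eigvals(A @ B)     # n eigenvalues
ev_BA = np.linalg.eigvals(B @ A)     # p eigenvalues

# Split the eigenvalues of AB into the n - p (numerically) zero ones
# and the p largest in magnitude, which should match those of BA.
order = np.argsort(np.abs(ev_AB))
zeros, rest = ev_AB[order[: n - p]], ev_AB[order[n - p:]]

print(np.allclose(zeros, 0, atol=1e-8))                          # True
print(np.allclose(np.sort(np.abs(rest)), np.sort(np.abs(ev_BA))))  # True
```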
The relationship between the eigenvectors is described in the next theorem.
THEOREM 2.6
For $\data{A}(n\times p)$ and $\data{B}(p \times n)$, the non-zero eigenvalues of $\data{A}\data{B}$ and $\data{B}\data{A}$ are the same and have the same multiplicity. If $x$ is an eigenvector of $\data{A}\data{B}$ for an eigenvalue $\lambda\neq 0$, then $y=\data{B}x$ is an eigenvector of $\data{B}\data{A}$.
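The eigenvector transfer in Theorem 2.6 can also be checked directly: if $\data{A}\data{B}x=\lambda x$ then $\data{B}\data{A}(\data{B}x)=\lambda(\data{B}x)$. A numpy sketch (dimensions and seed arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 4, 2
A = rng.normal(size=(n, p))
B = rng.normal(size=(p, n))

lam, vecs = np.linalg.eig(A @ B)
i = int(np.argmax(np.abs(lam)))   # pick a non-zero eigenvalue of AB
x = vecs[:, i]
y = B @ x                         # Theorem 2.6: y is an eigenvector of BA

print(np.allclose((B @ A) @ y, lam[i] * y))  # True
```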
COROLLARY 2.2
For $\data{A}(n\times p)$, $\data{B} (q\times n)$, $a (p\times 1)$, and $b (q\times 1)$ we have
\begin{displaymath}
\mathop{\rm rank}(\data{A}ab^{\top}\data{B})\leq 1.
\end{displaymath}
The non-zero eigenvalue, if it exists, equals $b^{\top}\data{B}\data{A}a$ (with eigenvector $\data{A}a$).
PROOF:
Theorem 2.6 asserts that the eigenvalues of $\data{A}ab^{\top}\data{B}$ are the same as those of $b^{\top}\data{B}\data{A}a$. Note that the matrix $b^{\top}\data{B}\data{A}a$ is a scalar and hence it is its own eigenvalue $\lambda_1$. Applying $\data{A}ab^{\top}\data{B}$ to $\data{A}a$ yields
\begin{displaymath}
(\data{A}ab^{\top}\data{B})(\data{A}a)=(\data{A}a)(b^{\top}\data{B}\data{A}a)=\lambda_1 \data{A}a.
\end{displaymath}
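The corollary and its proof can be mirrored numerically: the matrix $\data{A}ab^{\top}\data{B}$ has rank one (generically), its non-zero eigenvalue is the scalar $b^{\top}\data{B}\data{A}a$, and $\data{A}a$ is the corresponding eigenvector. A numpy sketch with arbitrary dimensions and seed:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, q = 4, 3, 2
A = rng.normal(size=(n, p))
B = rng.normal(size=(q, n))
a = rng.normal(size=(p, 1))
b = rng.normal(size=(q, 1))

M = A @ a @ b.T @ B                # (n x n), rank at most 1
lam1 = (b.T @ B @ A @ a).item()    # the claimed non-zero eigenvalue

print(np.linalg.matrix_rank(M))                    # 1
print(np.allclose(M @ (A @ a), lam1 * (A @ a)))    # Aa is an eigenvector
```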