We will now show that the estimator defined by (14.12) is asymptotically normally distributed. For this we will assume several technical conditions on the model, which ensure, among other things, that the process $\{Y_t\}$ is ergodic.
It holds that:

- (A1) $\mathrm{E}\,\xi_t = 0$ and $\mathrm{E}\,\xi_t^2 = 1$.
- (A2) $\xi_t$ has a probability density $p_\xi$ such that $\inf_{x \in K} p_\xi(x) > 0$ for every compact subset $K \subset \mathbb{R}$.
- (A3) There exist constants $C_1, C_2 > 0$ such that $|m(x)| + \sigma(x) \le C_1 + C_2\,|x|$ for all $x$.
- (A4) For the function $\sigma$ it holds that $\inf_{x \in K} \sigma(x) > 0$ for every compact subset $K \subset \mathbb{R}$.
- (A5) $C_2 < 1$ for the constant $C_2$ from (A3).
Conditions (A2) and (A4) ensure that the process does not die out, whereas conditions (A3) and (A5) ensure that it does not explode. These simply formulated conditions can be relaxed at considerable technical cost, as in Franke, Kreiss, Mammen and Neumann (2003); in particular, the linear growth condition (A3) then needs to hold only asymptotically as $|x| \to \infty$.
The model (14.1) implies that $\{Y_t\}$ is a Markov chain. From the following lemma of Ango Nze (1992) it follows that the chain is ergodic. The lemma is based on applications of the results given in Nummelin and Tuominen (1982) and Tweedie (1975).
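As a quick numerical illustration of the stability these conditions enforce, the following Python sketch simulates a chain of the form (14.1) with hypothetical choices of $m$ and $\sigma$ (invented here purely for illustration) that satisfy the growth bound (A3) and the lower bound (A4). Even from a deliberately extreme starting value, the path neither explodes nor dies out:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model components: |m(x)| and sigma(x) grow at most linearly
# with slope well below 1 (A3), and sigma is bounded away from 0 (A4).
def m(x):
    return 0.5 * x / (1.0 + 0.1 * np.abs(x))

def sigma(x):
    return 0.5 + 0.3 * np.tanh(np.abs(x))

n = 10_000
y = np.empty(n + 1)
y[0] = 100.0                          # deliberately extreme initial value, cf. (A10)
xi = rng.standard_normal(n)           # i.i.d. innovations with E xi = 0, E xi^2 = 1 (A1)
for t in range(n):
    y[t + 1] = m(y[t]) + sigma(y[t]) * xi[t]   # the model (14.1)

print("max |Y_t|:", np.abs(y).max())
print("sample mean/std of the second half:", y[n // 2:].mean(), y[n // 2:].std())
```

Choosing a drift whose slope exceeds one makes the same simulation diverge, which is exactly the behaviour that (A3) and (A5) rule out.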
Lemma 14.1
Under conditions (A1)-(A5) the Markov chain $\{Y_t\}$ is geometrically ergodic, i.e., $\{Y_t\}$ is ergodic with a stationary probability density $p(y)$, and there exists a $\rho \in (0,1)$ such that for almost all initial values $y$ it holds that
$$\big\Vert P^{t}(\,\cdot \mid y) - P_{s} \big\Vert_{TV} = \mathcal{O}(\rho^{t}) \qquad (t \to \infty).$$
Here $P^{t}(\,\cdot \mid y)$ represents the conditional distribution of $Y_t$ given $Y_0 = y$, $P_s$ denotes the stationary distribution with density $p$, and $\Vert \nu \Vert_{TV}$ is the total variation of a signed measure $\nu$ on the Borel $\sigma$-algebra $\mathcal{B}$ on $\mathbb{R}$.
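Geometric ergodicity implies in particular that the chain forgets its initial value at a geometric rate. The following sketch illustrates this empirically for the hypothetical $m$ and $\sigma$ from above, comparing the marginal laws of two ensembles started at very different points; the Kolmogorov distance between empirical distribution functions serves as a computable stand-in for the total variation norm of the lemma:

```python
import numpy as np

rng = np.random.default_rng(1)

def m(x):
    return 0.5 * x / (1.0 + 0.1 * np.abs(x))

def sigma(x):
    return 0.5 + 0.3 * np.tanh(np.abs(x))

def marginals(y0, t_max, n_paths, times):
    """Simulate n_paths independent chains from Y_0 = y0; record Y_t at given times."""
    y = np.full(n_paths, float(y0))
    out = {}
    for t in range(1, t_max + 1):
        y = m(y) + sigma(y) * rng.standard_normal(n_paths)
        if t in times:
            out[t] = y.copy()
    return out

times = (1, 5, 20, 80)
a = marginals(-10.0, 80, 20_000, times)
b = marginals(+10.0, 80, 20_000, times)

grid = np.linspace(-4.0, 4.0, 201)
for t in times:
    Fa = (a[t][:, None] <= grid).mean(axis=0)   # empirical cdf of ensemble a at time t
    Fb = (b[t][:, None] <= grid).mean(axis=0)
    print(f"t = {t:3d}:  sup|F_a - F_b| = {np.abs(Fa - Fb).max():.3f}")
```

The distance drops towards the Monte Carlo noise level within a few steps, as the geometric bound of Lemma 14.1 suggests.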
To derive the asymptotic normality of the estimator at a fixed point $x$, we require additional conditions. To simplify notation we set $s(x) = m^2(x) + \sigma^2(x)$.
- (A6) The functions $m$ and $s$ are, at the point $x$, $(l-1)$-times continuously differentiable, and the one-sided derivative of $l$-th order exists.
- (A7) The stationary distribution of the process has a bounded, continuous probability density $p$, which is strictly positive in a small region around $x$.
- (A8) The kernel $K$ is bounded with compact support, and $K(u) > 0$ holds on a set of positive Lebesgue measure.
- (A9) The bandwidth $h = h_n$ is of the form $h_n = \beta\, n^{-1/(2l+1)}$ with a constant $\beta > 0$.
- (A10) The initial value $Y_0$ is a fixed real number.
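For concreteness, a kernel and bandwidth fulfilling (A8) and (A9) can be set up as follows. This is only a sketch: the Epanechnikov kernel and the constant $\beta = 1$ are illustrative choices, and the form $h_n = \beta\,n^{-1/(2l+1)}$ is the reading of (A9) consistent with the normalization $n^{-2l/(2l+1)}$ used later in the chapter:

```python
import numpy as np

def K(u):
    """Epanechnikov kernel: bounded, compact support [-1, 1], positive on (-1, 1) -- cf. (A8)."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def bandwidth(n, l, beta=1.0):
    """Assumed form of (A9): h_n = beta * n^(-1/(2l+1)) with beta > 0."""
    return beta * n ** (-1.0 / (2 * l + 1))

for n in (100, 1_000, 10_000, 100_000):
    print(n, round(bandwidth(n, l=2), 4))   # l = 2: the twice-differentiable case used later
```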
According to Lemma 1 in Tsybakov (1986), it follows from (A8) that the moment matrices of the kernel are positive definite. In terms of these matrices we define the asymptotic error terms and the limiting covariance quantities that appear in the following theorem.
The assertion of the following theorem is the central result of this chapter.
Theorem 14.1
Under assumptions (A1)-(A10), the suitably normalized estimation errors of the two least squares problems converge in probability as stated in (14.15), and they are jointly asymptotically normal as stated in (14.16), for $n \to \infty$, where the asymptotic bias and the limiting covariance matrix are given in terms of the quantities defined above. Here $A \otimes B$ denotes the Kronecker product of the matrices $A$ and $B$.
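Before turning to the proof, a small Monte Carlo sketch may help fix ideas. It fits local polynomials by weighted least squares to $Y_t$ and to $Y_t^2$, in the spirit of the two least squares problems behind (14.10), and recovers $m(x)$ and $\sigma^2(x) = s(x) - m^2(x)$ at a fixed point. The model functions are hypothetical, and the normal equations solved here are generic weighted least squares equations rather than a verbatim transcription of (14.17):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical instance of (14.1): Y_t = m(Y_{t-1}) + sigma(Y_{t-1}) * xi_t
def m(x):
    return 0.3 * np.tanh(x)

def sigma(x):
    return 0.5 + 0.2 * x ** 2 / (1.0 + x ** 2)

def simulate(n, burn=500):
    y = np.empty(n + burn + 1)
    y[0] = 0.0
    xi = rng.standard_normal(n + burn)
    for t in range(n + burn):
        y[t + 1] = m(y[t]) + sigma(y[t]) * xi[t]
    return y[burn:]

def local_poly(x0, X, Z, h, deg=1):
    """Local polynomial LS fit of E[Z | X = x0] with Epanechnikov weights K((X - x0)/h)."""
    u = (X - x0) / h
    w = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
    keep = w > 0
    U = np.vander(u[keep], deg + 1, increasing=True)   # columns [1, u, u^2, ...]
    W = w[keep]
    # Generic weighted normal equations: (U' W U) c = U' W Z
    c = np.linalg.solve(U.T @ (W[:, None] * U), U.T @ (W * Z[keep]))
    return c[0]                                        # intercept = fitted value at x0

n, x0 = 5_000, 0.5
y = simulate(n)
X, Y1 = y[:-1], y[1:]
h = n ** (-1.0 / 5.0)                                  # (A9) with l = 2, beta = 1
m_hat = local_poly(x0, X, Y1, h)
g_hat = local_poly(x0, X, Y1 ** 2, h)                  # estimates s = m^2 + sigma^2
sig2_hat = g_hat - m_hat ** 2
print(f"m(x0) = {m(x0):.3f},  m_hat = {m_hat:.3f}")
print(f"sigma^2(x0) = {sigma(x0) ** 2:.3f},  sig2_hat = {sig2_hat:.3f}")
```

Repeating the experiment over many seeds, the histogram of $\sqrt{nh}\,(\hat\sigma^2(x_0) - \sigma^2(x_0))$ takes on a Gaussian shape, which is the content of the theorem.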
Proof:
The normal equations for the first least squares problem in (14.10) are given by relation (14.17), with the corresponding weighted design matrix. On the other hand, the definition of the estimator yields the identity (14.18), from which together with (14.17) we get the representation (14.19).
From the model assumption (14.1) it follows that the observations admit the decomposition (14.20) into a signal part and innovation terms; by the definitions of the innovations and of the filtration, the latter have conditional expectation zero.
Through a Taylor expansion of $m$, using the integral representation of the remainder, we obtain the expansion (14.21).
From (14.19), (14.20) and (14.21) we obtain the representation (14.22), with the corresponding bias and noise terms. In an analogous fashion one obtains the representation (14.23) for the second least squares problem, where the analogous terms have been substituted in.
Referring back to the representations (14.22) and (14.23), the remaining steps of the proof of Theorem 14.1 are as follows:
- a) First we show that the convergence (14.24) holds elementwise for $n \to \infty$, where the limiting matrix is positive definite.
- b) Next we prove the relations (14.25) and (14.26), which state that the remainder terms vanish in probability for $n \to \infty$.
- c) The joint random vector of both estimation problems is asymptotically normally distributed, as stated in (14.27), with the covariance matrix $\Sigma_0$.
- d) Finally, the convergence (14.28) holds for $n \to \infty$.
Once statements a) through d) are proven, the statement of the theorem follows in this way: from b) and d) we obtain the convergence of the normalized error terms for $n \to \infty$. Because of a) and the positive definiteness of the limiting matrix, this implies the first assertion (14.15); the analogous statement is shown similarly. The asymptotic normality (14.16) can be seen in a similar way: because of b) and c) the leading term converges in distribution, from which, according to a), the validity of (14.16) follows.
It remains to prove a) through d). To do this we need a couple of auxiliary results.
Let $\{\mathcal{F}_k\}$ denote the canonical filtration of the process $\{Y_t\}$, i.e., $\mathcal{F}_k$ is the $\sigma$-algebra generated by $Y_0, \ldots, Y_k$.
Lemma 14.3
(Liptser and Shirjaev (1980), Corollary 6)
Suppose that for every $n$ the array $\eta_{n1}, \ldots, \eta_{nn}$ is a square-integrable martingale difference sequence, i.e.,
$$\mathrm{E}[\eta_{nk} \mid \mathcal{F}_{k-1}] = 0, \quad \mathrm{E}[\eta_{nk}^2] < \infty, \quad 1 \le k \le n, \tag{14.30}$$
and that
$$\sum_{k=1}^n \mathrm{E}[\eta_{nk}^2] = 1 \quad \text{for all } n \ge n_{0} > 0. \tag{14.31}$$
Then the conditions
$$\sum_{k=1}^n \mathrm{E}[\eta_{nk}^2 \mid \mathcal{F}_{k-1}] \stackrel{\mathrm{P}}{\longrightarrow} 1 \quad \text{for } n \to \infty, \tag{14.32}$$
$$\sum_{k=1}^n \mathrm{E}\big[\eta_{nk}^2\, \mathbf{1}(|\eta_{nk}| > \varepsilon) \mid \mathcal{F}_{k-1}\big] \stackrel{\mathrm{P}}{\longrightarrow} 0 \quad \text{for } n \to \infty \text{ and every } \varepsilon > 0, \tag{14.33}$$
are sufficient for the convergence in distribution
$$\sum_{k=1}^n \eta_{nk} \stackrel{\mathcal{L}}{\longrightarrow} \mathrm{N}(0,1).$$
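A minimal numerical check of this central limit theorem uses the simplest admissible array, $\eta_{nk} = \xi_k/\sqrt{n}$ with i.i.d. standardized $\xi_k$, for which (14.30)-(14.33) hold trivially:

```python
import numpy as np

rng = np.random.default_rng(3)

# eta_{nk} = xi_k / sqrt(n) with i.i.d. xi_k, E xi = 0, E xi^2 = 1
# (here: uniform on [-sqrt(3), sqrt(3)]).  Conditions (14.30)-(14.33)
# hold trivially, so the row sums must approach N(0, 1).
n, reps = 500, 5_000
xi = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(reps, n))
S = xi.sum(axis=1) / np.sqrt(n)

print("mean:", round(S.mean(), 3), "  variance:", round(S.var(), 3))
print("P(S <= 1.96):", round((S <= 1.96).mean(), 3), "  (normal value 0.975)")
```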
Lemma 14.4
Let $\phi_1$ be a continuous, bounded function and let $\phi_2$ be a bounded function. Under conditions (A1) through (A10) it holds for every process $\{Y_t\}$ which fulfils (14.1) that
$$n^{-\frac{2l}{2l+1}} \sum_{i=1}^n \phi_{1}(Y_{i-1})\,\phi_{2}(u_{in})\,K(u_{in}) \stackrel{\mathrm{P}}{\longrightarrow} \beta\, p(x)\,\phi_{1}(x) \int \phi_{2}(u)\,K(u)\,du \tag{14.34}$$
for $n \to \infty$, where $u_{in} = (Y_{i-1} - x)/h_n$.
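Since the chain in (14.1) generally has no closed-form stationary density, a numerical check of the lemma is easiest in a special case where it does: a Gaussian AR(1) process, i.e. $m(x) = a\,x$ and $\sigma(x) \equiv \sigma_0$, whose stationary law is $\mathrm{N}(0, \sigma_0^2/(1-a^2))$. The comparison value $\beta\,p(x)\,\phi_1(x)\int \phi_2(u)K(u)\,du$ is the limit as reconstructed in (14.34) above, so this sketch checks that assumed form rather than the lost original display:

```python
import numpy as np

rng = np.random.default_rng(4)

# Gaussian AR(1) as a special case of (14.1): m(x) = a*x, sigma(x) = s0.
a, s0, n, x = 0.6, 1.0, 200_000, 0.3
y = np.empty(n + 1)
y[0] = 0.0
xi = rng.standard_normal(n)
for t in range(n):
    y[t + 1] = a * y[t] + s0 * xi[t]

phi1 = np.cos                                   # continuous and bounded
phi2 = lambda u: np.exp(-np.abs(u))             # bounded
K = lambda u: np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

l, beta = 2, 1.0
h = beta * n ** (-1.0 / (2 * l + 1))            # (A9); then n^(-2l/(2l+1)) = beta/(n h)
u = (y[:-1] - x) / h
lhs = n ** (-2.0 * l / (2 * l + 1)) * (phi1(y[:-1]) * phi2(u) * K(u)).sum()

# Assumed limit: beta * p(x) * phi1(x) * int phi2(u) K(u) du, with p the
# stationary N(0, s0^2/(1-a^2)) density.
v = s0 ** 2 / (1.0 - a ** 2)
p_x = np.exp(-x ** 2 / (2.0 * v)) / np.sqrt(2.0 * np.pi * v)
g = np.linspace(-1.0, 1.0, 200_001)
integral = (phi2(g) * K(g)).sum() * (g[1] - g[0])
print("normalized sum:", round(lhs, 4),
      "  assumed limit:", round(beta * p_x * np.cos(x) * integral, 4))
```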
Proof:
We first prove the assertion for the case where the Markov chain starts in equilibrium, and then reduce the general case to this one. To this end let $\{Y_t^\ast\}$ be a Markov chain which fulfils (14.1) and which at time $t = 0$ already has the stationary distribution of $\{Y_t\}$ introduced in Lemma 14.1. This chain is stationary by construction, and by applying Lemma 14.2 we get that $\{Y_t^\ast\}$ is a geometrically strong mixing process. From this it follows that
$$n^{-\frac{2l}{2l+1}} \sum_{i=1}^n \phi_{1}(Y_{i-1}^\ast)\,\phi_{2}(u_{in}^\ast)\,K(u_{in}^\ast) \;-\; n^{\frac{1}{2l+1}}\,\mathrm{E}\big[\phi_{1}(Y_{0}^\ast)\,\phi_{2}(u_{1n}^\ast)\,K(u_{1n}^\ast)\big] \stackrel{\mathrm{P}}{\longrightarrow} 0 \tag{14.35}$$
for $n \to \infty$, where $u_{in}^\ast = (Y_{i-1}^\ast - x)/h_n$ has been substituted in. The second term in (14.35) converges to the right-hand side of (14.34) for $n \to \infty$. Together with (14.35) it follows that (14.34) is fulfilled for $\{Y_t^\ast\}$.
Now consider the difference between the corresponding sums for the original chain $\{Y_t\}$ and the stationary chain $\{Y_t^\ast\}$, and choose a sequence $\{a_n\}$ with $a_n \to \infty$ and $a_n/n \to 0$. The contribution of the first $a_n$ summands is asymptotically negligible for $n \to \infty$. From the geometric ergodicity of $\{Y_t\}$, by Lemma 14.1, we obtain for the remaining terms a bound that vanishes for $n \to \infty$, where $p_t$ represents the density of $Y_t$. Thus the expected difference of the two sums tends to zero, and from this it follows, with the help of the Markov inequality, that (14.34) also applies to $\{Y_t\}$.
Proof: (of Theorem 14.1, continued)
It remains to prove statements a) through d).
- a) The elements of the matrix in (14.24) are, by its definition, averages of the form treated in Lemma 14.4, and it follows that they converge in probability; by the definition of the limiting matrix, this is exactly the limit asserted in (14.24). The positive definiteness established via Tsybakov's lemma carries over to this limit.
- b) By condition (A6), the remainder of the Taylor expansion of $m$ admits the required bound, so that the terms in question can be rewritten in a form whose elements fulfil the requirements of Lemma 14.4. Once again we choose the sequence as in the proof of Lemma 14.4 and proceed in the same manner. From (14.37) and (14.38) we obtain the bound (14.39) for $n \to \infty$. Since $\{Y_t^\ast\}$ is $\alpha$-mixing, as in (14.35) we get the corresponding convergence for $n \to \infty$; the right-hand term of this expression can be rewritten accordingly. Furthermore, the bound (14.40) holds. Together with (14.40) and (A7) it follows that the remainder vanishes in probability. With this, (14.25) has been shown; the proof of (14.26) follows analogously.
- c) We define the covariance matrices of the two estimation problems and build from them the block matrix $\Sigma_n$. The elements of these matrices fulfil the requirements of Lemma 14.4; in particular, the composite functions $\phi_1, \phi_2$ appearing there are in this case built from $m$, $s$ and the moments of the innovations, for which (A1) has been used. One observes that the corresponding functions are, due to (A6), continuous and bounded in a small region around $x$. Since $K$ vanishes outside a compact set, this is sufficient for Lemma 14.4. With this we obtain the elementwise convergence and
$$\mathrm{E}[\Sigma_{n}] \longrightarrow \Sigma_{0} \tag{14.41}$$
for $n \to \infty$.
To prove (14.27) it is sufficient, by the Cramér-Wold device, to show the one-dimensional convergence (14.42) for every vector $\lambda$ with Euclidean norm $1$. In addition, we choose, in accordance with (14.41), an $n_0$ such that $\Sigma_n$ is positive definite for all $n \ge n_0$, and substitute in the correspondingly normalized summands $\eta_{nk}$. Then (14.42) is equivalent to the statement (14.43). We now show that $\eta_{nk}$ fulfils the requirements (14.30) to (14.33) of Lemma 14.3, from which (14.43) follows.
First notice that $\mathrm{E}[\eta_{nk} \mid \mathcal{F}_{k-1}] = 0$ a.s. and $\mathrm{E}[\eta_{nk}^2] < \infty$ a.s. hold, from which (14.30) follows. Furthermore, one can easily show that the normalization yields (14.31), and from (14.41) we obtain (14.32).
We still have to show (14.33). Since $K$ is bounded with compact support, and since $m$ and $\sigma$ are locally bounded, there exists an appropriate constant $C$ such that the summands satisfy a uniform bound; from this it follows that $|\eta_{nk}|$ can be estimated by a deterministic bound, where $C$ is independent of $n$. With this we have the estimate (14.44). According to Lemma 14.4, the last term in this estimate converges to zero, which gives (14.45). From (14.44) and (14.45), (14.33) follows, i.e., the requirements of Lemma 14.3 are indeed fulfilled, and thus (14.42) is also shown.
- d) Consider the defining sum of (14.28). According to (A8) the kernel $K$ is bounded and has compact support. Thus there exists a constant $C$ such that the summands are uniformly bounded, and if $n$ is sufficiently large, the last term in the last sum vanishes. Thus (14.28) is shown. Similarly it can be shown that the analogous statement for the second estimation problem holds.
As a direct consequence of Theorem 14.1 we have:
Theorem 14.2
Under conditions (A1) through (A10) the volatility estimator is asymptotically normally distributed for $n \to \infty$, with asymptotic bias and variance determined by the quantities appearing in Theorem 14.1.
Proof:
From the definitions of the two coefficient estimators and the construction of the volatility estimator we obtain a decomposition of its estimation error. This decomposition can be written, with a suitable transformation matrix, in terms of the joint estimation error of both problems. According to (14.15), the second term of this decomposition vanishes in probability for $n \to \infty$, from which, together with (14.16), its asymptotic negligibility follows. The limiting distribution is thus given by the first term on the right-hand side of (14.46); using (14.16), this term is asymptotically normally distributed for $n \to \infty$. A simple calculation of the resulting asymptotic bias and variance completes the proof.
Going beyond the asymptotic normality shown in Theorem 14.2, Franke et al. (2002) have shown that bootstrap methods can also be applied to nonparametric volatility estimators. They consider ordinary kernel estimators, i.e., the special case of the LP estimator of degree $0$ in (14.4), but the results can be carried over directly to general LP estimators; see also Kreiss (2000).
To illustrate this, consider the case $l = 2$. We assume that $m$ and $s$ are twice continuously differentiable and that the kernel $K$ satisfies
$$\int K(u)\,du = 1 \qquad\text{and}\qquad \int u\,K(u)\,du = 0.$$
Then the asymptotic bias of the estimators is driven by the second derivatives together with $\int u^2 K(u)\,du$, and the asymptotic variance by $\int K^2(u)\,du$. In particular, explicit expressions are obtained for the normalized quadratic errors of the estimators of $m$ and $\sigma^2$, calculated from the asymptotic distribution. Minimizing these expressions with respect to $K$ and the bandwidth constant results in the Epanechnikov kernel
$$K(u) = \frac{3}{4}\,(1 - u^2)\,\mathbf{1}(|u| \le 1)$$
and corresponding optimal values of the bandwidth constant $\beta$, with which the asymptotically optimal local estimators are obtained.
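The kernel constants entering these expressions are easy to verify numerically. The following sketch checks the standard moments of the Epanechnikov kernel ($\int K = 1$, $\int uK = 0$, $\int u^2K = 1/5$, $\int K^2 = 3/5$) and evaluates bandwidths of the form (A9) for $l = 2$; the constant $\beta = 1$ is a placeholder, not the optimal constant from the minimization above:

```python
import numpy as np

# Moments of the Epanechnikov kernel K(u) = 3/4 (1 - u^2) on [-1, 1],
# evaluated on a fine grid (the endpoints vanish, so a plain Riemann
# sum coincides with the trapezoidal rule here).
u = np.linspace(-1.0, 1.0, 2_000_001)
du = u[1] - u[0]
K = 0.75 * (1.0 - u ** 2)

print("int K      =", round((K * du).sum(), 6))            # 1
print("int u K    =", round((u * K * du).sum(), 6))        # 0 by symmetry
print("int u^2 K  =", round((u ** 2 * K * du).sum(), 6))   # 1/5
print("int K^2    =", round((K ** 2 * du).sum(), 6))       # 3/5

# Bandwidths of the form (A9) for l = 2; beta = 1 is an illustrative constant.
beta = 1.0
for n in (250, 1_000, 4_000):
    print(n, round(beta * n ** (-1.0 / 5.0), 4))
```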