14.3 Asymptotic Normality

We will now show that the estimator $ \hat{v}_n(x)$ defined by (14.12) is asymptotically normally distributed. For this we assume several technical conditions on the model, which ensure, among other things, that the process $ (Y_i)$ is ergodic:

(A1)
$ {\mathop{\text{\rm\sf E}}}[\xi_{1}^2] =1, \; {\mathop{\text{\rm\sf E}}}[\xi_{1}] =
{\mathop{\text{\rm\sf E}}}[\xi_{1}^3] = 0$, and

$\displaystyle m_{4} = {\mathop{\text{\rm\sf E}}}\big[ (\xi_{1}^2 - 1)^2 \big] < \infty. $

(A2)
$ \xi_1$ has a probability density $ p$ such that

$\displaystyle \inf_{x \in {\cal K}} p(x) > 0 $

for every compact subset $ {\cal K} \subset \mathbb{R}$.
(A3)
There exist constants $ C_{1},\; C_{2}>0$, such that
$\displaystyle \vert f(y)\vert$ $\displaystyle \le$ $\displaystyle C_{1}(1+\vert y\vert),$ (14.13)
$\displaystyle \vert s(y)\vert$ $\displaystyle \le$ $\displaystyle C_{2}(1+\vert y\vert), \quad y \in \mathbb{R}.$ (14.14)

(A4)
For the function $ s$ it holds that

$\displaystyle \inf_{y \in {\cal K}} s(y) > 0, $

for every compact subset $ {\cal K} \subset \mathbb{R}$.
(A5)
$ C_{1} + C_{2} E\vert\xi_{1}\vert < 1$.

Conditions (A2) and (A4) guarantee that the process $ (Y_i)$ does not die out, whereas conditions (A3) and (A5) ensure that $ (Y_{i})$ does not explode. These simply stated conditions can be relaxed at considerable technical cost, as in Franke, Kreiss, Mammen and Neumann (2003). In particular, the linear growth condition (A3) need only hold asymptotically for $ \vert y\vert \rightarrow \infty$.
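The interplay of the growth condition (A3) and the contraction condition (A5) can be illustrated by a small simulation sketch. The specific functions $ f$ and $ s$ below are hypothetical choices, picked only so that (A1)-(A5) hold with $ C_1 = 0.3$, $ C_2 = 0.4$ and standard normal $ \xi_i$, for which $ C_1 + C_2 {\mathop{\text{\rm\sf E}}}\vert\xi_1\vert \approx 0.62 < 1$:

```python
import numpy as np

# Hypothetical model choices satisfying (A1)-(A5):
#   |f(y)| <= 0.3 (1 + |y|)   (C1 = 0.3),
#   0 < s(y) <= 0.4 (1 + |y|) (C2 = 0.4),
#   xi_i ~ N(0,1), so C1 + C2 E|xi_1| = 0.3 + 0.4 sqrt(2/pi) < 1.
def f(y):
    return 0.3 * np.tanh(y) * (1.0 + np.abs(y))

def s(y):
    return 0.1 + 0.3 * np.abs(y)

rng = np.random.default_rng(0)
n = 10_000
xi = rng.standard_normal(n)
Y = np.empty(n + 1)
Y[0] = 0.0                     # (A10): deterministic initial value
for i in range(n):
    Y[i + 1] = f(Y[i]) + s(Y[i]) * xi[i]

# Geometric ergodicity (Lemma 14.1) manifests itself in a stochastically
# bounded, non-exploding sample path:
print(np.abs(Y).max())
```

Replacing the contraction by, e.g., $ f(y) = 1.5\,y$ makes the path explode, which is exactly what (A5) rules out.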

The model (14.1) implies that $ (Y_{i})$ is a Markov chain. The following lemma of Ango Nze (1992) shows that the chain is ergodic. It is based on applications of results given in Nummelin and Tuominen (1982) and Tweedie (1975).

Lemma 14.1   Under conditions (A1) - (A5) the Markov chain $ (Y_{i})$ is geometrically ergodic, i.e., $ (Y_i)$ is ergodic with stationary distribution $ \pi$, and there exists a $ \rho \in [0,1)$ such that for almost all $ y$ it holds that

$\displaystyle \Vert P^n(\,\cdot\,\vert y) - \pi \Vert _{TV} = {\mathcal{O}}(\rho^n). $

Here

$\displaystyle P^n (B\,\vert\,y) = P(Y_{n} \in B \,\vert\, Y_{0} = y ), \quad B \in {\cal B},
$

represents the conditional distribution of $ Y_n$ given $ Y_0 = y$, and

$\displaystyle \Vert \,\nu\, \Vert _{TV} =
\sup\Big\{ \sum_{i=1}^k \vert\nu(B_i)\vert \,:\; k\in\mathbb{N},\,
B_1,\dots,B_k \in {\cal B} \text{ pairwise disjoint} \Big\} $

is the total variation of a signed measure $ \nu$ on the Borel $ \sigma$-algebra $ {\cal B}$ on $ \mathbb{R}$.
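As a toy illustration of the total variation norm (a sketch under the simplifying assumption of a finite state space, where the Borel sets reduce to subsets and the measures to weight vectors), the supremum in the definition is attained by the finest partition, into singletons:

```python
import numpy as np

# Signed measure nu = P - Q on the finite set {0, 1, 2}; weights hypothetical.
P = np.array([0.5, 0.3, 0.2])
Q = np.array([0.4, 0.4, 0.2])

# Finest partition (singletons) attains the supremum in the definition:
tv = np.abs(P - Q).sum()

# Any coarser partition gives a value no larger, e.g. B1 = {0,1}, B2 = {2}:
coarse = abs((P - Q)[:2].sum()) + abs((P - Q)[2])
print(tv, coarse)
```

Lemma 14.1 states that this distance between the $ n$-step transition law $ P^n(\cdot\,\vert\,y)$ and $ \pi$ shrinks at a geometric rate $ \rho^n$.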

To derive the asymptotic normality of $ \hat{v}_{n}(x)$ at a fixed point $ x \in \mathbb{R}$ we require additional conditions. To simplify notation let $ l = p+1$.

(A6)
The functions $ f$ and $ s$ are $ (l-1)$-times continuously differentiable at the point $ x$, and the one-sided derivatives $ f^{(l)}_{\pm}(x), \; s^{(l)}_{\pm}(x)$ of $ l$-th order exist.
(A7)
The stationary distribution $ \pi$ has a bounded, continuous probability density $ \gamma$, which is strictly positive in a small region around $ x$.
(A8)
The kernel $ K: \mathbb{R}\longrightarrow \mathbb{R}^+$ is bounded with compact support, and $ K>0$ holds on a set of positive Lebesgue measure.
(A9)
The bandwidth $ h_n$ is of the form $ h_{n} = \beta
n^{-1/(2l+1)}$, where $ \beta>0$.
(A10)
The initial value $ Y_{0}$ is a real number and is constant.
According to lemma 1 in Tsybakov (1986) it follows from (A8) that the matrices
$\displaystyle A$ $\displaystyle =$ $\displaystyle \int F(u) \: F(u)^\top \: K(u) \, du$   and  
$\displaystyle Q$ $\displaystyle =$ $\displaystyle \int F(u) \: F(u)^\top \: K^2(u) \, du$  

are positive definite. Let
$\displaystyle {\cal D} = A^{-1} Q A^{-1}$   and      
$\displaystyle f^{(l)}(x;u) = \left\{ \begin{array}{ll}
f_{+}^{(l)}(x), & u \ge 0, \\
f_{-}^{(l)}(x), & u < 0,
\end{array} \right.$      

With this we define the asymptotic bias terms
$\displaystyle b_{f} (x)$ $\displaystyle =$ $\displaystyle A^{-1} \frac{\beta^l}{l!} \int F(u) \, u^l \: K(u) \,
f^{(l)}(x;u)\,du$   and  
$\displaystyle b_{g} (x)$ $\displaystyle =$ $\displaystyle A^{-1} \frac{\beta^l}{l!} \int F(u) \, u^l \: K(u) \,
g^{(l)}(x;u)\,du.$  

Furthermore, let

$\displaystyle c(x) = \left( \begin{array}{c}
f(x) \\
f'(x) h_{n} \\
\vdots \\
f^{(l-1)}(x) \frac{h_{n}^{l-1}}{(l-1)!}
\end{array} \right)$   and$\displaystyle \quad
\bar{c}(x) = \left( \begin{array}{c}
g(x) \\
g'(x) h_{n} \\
\vdots \\
g^{(l-1)}(x) \frac{h_{n}^{l-1}}{(l-1)!}
\end{array} \right) .$
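The matrices $ A$, $ Q$ and $ {\cal D}$ can be checked numerically. The sketch below makes the hypothetical choices $ l = 2$, $ F(u) = (1, u)^\top$ and the Epanechnikov kernel (any kernel satisfying (A8) would do), approximates the integrals on a grid, and confirms the positive definiteness asserted above:

```python
import numpy as np

# Grid approximation of A = int F F^T K du and Q = int F F^T K^2 du for
# l = 2 with F(u) = (1, u)^T and the Epanechnikov kernel on [-1, 1].
u = np.linspace(-1.0, 1.0, 200_001)
du = u[1] - u[0]
K = 0.75 * (1.0 - u**2)
F = np.vstack([np.ones_like(u), u])          # 2 x gridsize

A = (F[:, None, :] * F[None, :, :] * K).sum(axis=2) * du
Q = (F[:, None, :] * F[None, :, :] * K**2).sum(axis=2) * du
D = np.linalg.inv(A) @ Q @ np.linalg.inv(A)  # D = A^{-1} Q A^{-1}

# Lemma 1 of Tsybakov (1986): A and Q are positive definite.
print(np.linalg.eigvalsh(A).min(), np.linalg.eigvalsh(Q).min())
```

For this kernel $ A = \mathop{\mathrm{diag}}(1, 1/5)$, which reappears in the explicit $ l=2$ computation at the end of the section.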

The assertion of the following theorem is the central result of this chapter.

Theorem 14.1   Under assumptions (A1) - (A10) it holds that

$\displaystyle \big \{ \bar{c}_{n}(x) - \bar{c}(x) \big \}^\top \, F(0) \stackrel{\P}{\longrightarrow} 0 \quad\text{and}\quad \big \{ c_{n}(x) - c(x) \big \}^\top \, F(0) \stackrel{\P}{\longrightarrow} 0$ (14.15)

and

$\displaystyle n^{l/(2l+1)} \left( \begin{array}{c} \bar{c}_{n}(x)-\bar{c}(x) \\ c_{n}(x)-c(x) \end{array} \right) \stackrel{{\cal L}}{\longrightarrow}$   N$\displaystyle \big( b(x), \Sigma(x) \big)$ (14.16)

for $ n \rightarrow
\infty$, where

$\displaystyle b(x) = \left( \begin{array}{c} b_{g}(x) \\ b_{f}(x) \end{array} \right)$    

and

$\displaystyle \Sigma(x) = \frac{s^2(x)}{\beta \gamma(x)}\; \left( \begin{array}{cc} 4f^2(x) + s^2(x) m_{4} & 2f(x) \\ 2f(x) & 1 \end{array} \right) \otimes {\cal D}.$    

Here $ {\cal D}' \otimes {\cal D}$ represents the Kronecker product of matrices $ {\cal D}'$ and $ {\cal D}$.

Proof:
The normal equation for the first least squares problem in (14.10) is given by

$\displaystyle n^{\frac{l}{2l+1}} B_{n} \bar{c}_{n}(x) = n^{-\frac{l}{2l+1}} \sum_{i=1}^n Y_{i}^2 \, U_{in}\, K(u_{in})$ (14.17)

with the matrix

$\displaystyle B_{n} = n^{-\frac{2l}{2l+1}} \sum_{i=1}^n U_{in}\,U_{in}^\top \,K(u_{in}).$

On the other hand, by the definition of $ B_n$ it holds that

$\displaystyle n^{\frac{l}{2l+1}} B_{n} \bar{c}(x) = n^{-\frac{l}{2l+1}} \sum_{i=1}^n U_{in}\,U_{in}^\top \,\bar{c}(x)\,K(u_{in}),$ (14.18)

from which together with (14.17) we get

$\displaystyle n^{\frac{l}{2l+1}} B_{n}\,\big\{\bar{c}_{n}(x) - \bar{c}(x)\big\} = n^{-\frac{l}{2l+1}} \sum_{i=1}^n \big\{ Y_{i}^2 - U_{in}^\top \,\bar{c}(x)\big\} \,U_{in}\,K(u_{in}).$ (14.19)

From the model assumptions (14.1) it follows that

\begin{displaymath}\begin{split}Y_{i}^2 &= \Big( f\big(Y_{i-1}\big) + s\big(Y_{i-1}\big)\,\xi_{i} \Big)^2 \\ &= f^2\big(Y_{i-1}\big) + 2f\big(Y_{i-1}\big)\,s\big(Y_{i-1}\big)\,\xi_{i} + s^2\big(Y_{i-1}\big)\big(\xi_{i}^2 - 1\big) + s^2\big(Y_{i-1}\big) \\ &= g(Y_{i-1}) + \alpha_i \end{split}\end{displaymath} (14.20)

with

$\displaystyle \alpha_{i} = 2f(Y_{i-1})\,s(Y_{i-1})\,\xi_{i} +
s^2(Y_{i-1})(\xi^2_{i}-1). $

According to the definitions of $ U_{in}$ and $ \bar{c}(x)$ it holds that
$ U_{in}^\top \, \bar{c}(x)
= \sum_{j=0}^{l-1} \frac{1}{j!}g^{(j)}(x) \big( Y_{i-1} - x \big)^j$. Through a Taylor expansion of $ g = f^2 + s^2$, using the integral representation of the remainder, we obtain

\begin{displaymath}\begin{split}g(Y_{i-1}) - U_{in}^\top \,\bar{c}(x) &= \frac{(Y_{i-1}-x)^l}{(l-1)!} \int_{0}^1 g^{(l)}\big(x+t(Y_{i-1}-x)\big)(1-t)^{l-1}\:dt \\ &= r_{g}(Y_{i-1},x). \end{split}\end{displaymath} (14.21)

From (14.19), (14.20) and (14.21) we obtain

\begin{displaymath}\begin{split}&n^{\frac{l}{2l+1}} B_{n}\,\big\{\bar{c}_{n}(x) - \bar{c}(x)\big\} \\ &\quad = n^{-\frac{l}{2l+1}} \sum_{i=1}^n \big\{ r_{g}(Y_{i-1},x) + \alpha_{i} \big\}\,U_{in}\,K(u_{in}) \\ &\quad = \bar{b}_{n}(x) + \bar{q}_{n}(x) \end{split}\end{displaymath} (14.22)

with

$\displaystyle \bar{b}_{n}(x) = n^{-\frac{l}{2l+1}} \sum_{i=1}^n r_{g} (Y_{i-1},x)\,U_{in}\,K(u_{in})$    

and

$\displaystyle \bar{q}_{n}(x) = n^{-\frac{l}{2l+1}} \sum_{i=1}^n \alpha_{i}\,U_{in}\,K(u_{in}).$    

In an analogous fashion one obtains

$\displaystyle n^{\frac{l}{2l+1}} B_{n}\,\big\{c_{n}(x) - c(x)\big\} = b_{n}(x) + q_{n}(x)$ (14.23)

with

$\displaystyle b_{n}(x) = n^{-\frac{l}{2l+1}} \sum_{i=1}^n r_{f} (Y_{i-1},x)\,U_{in}\,K(u_{in})$    

and

$\displaystyle q_{n}(x) = n^{-\frac{l}{2l+1}} \sum_{i=1}^n \beta_{i}\,U_{in}\,K(u_{in}),$    

where we have set $ \beta_{i} = s(Y_{i-1})\, \xi_{i}$.

Referring back to the representations (14.22) and (14.23), the remaining steps of the proof of Theorem 14.1 are as follows:

a)
First we show that

$\displaystyle B_{n} \stackrel{\P}{\longrightarrow} B$   for $ n \rightarrow
\infty$ (14.24)

holds elementwise. Here the matrix $ B = \beta\,\gamma(x)\,A$ is positive definite.
b)
Next we prove the relationships

$\displaystyle \bar{b}_{n}(x) \stackrel{\P}{\longrightarrow} B\,b_{g}(x)$   for $ n \rightarrow \infty$ (14.25)

and

$\displaystyle b_{n}(x) \stackrel{\P}{\longrightarrow} B\,b_{f}(x)$   for $ n \rightarrow \infty$. (14.26)

c)
The joint random vector $ \big( \bar{q}_{n}(x), q_{n}(x) \big)^\top $ is asymptotically normally distributed:

$\displaystyle \left( \begin{array}{c} \bar{q}_{n}(x) \\ q_{n}(x) \end{array} \right) \stackrel{\cal L}{\longrightarrow}$   N$\displaystyle (0, \Sigma_{0})$   for $ n \rightarrow
\infty$ (14.27)

with the covariance matrix

$\displaystyle \Sigma_{0} = s^2(x) \beta \gamma(x)
\left( \begin{array}{cc}
4f^2(x) + s^2(x) m_{4} & 2f(x) \\
2f(x) & 1
\end{array} \right)
\otimes Q. $

d)
It holds that

\begin{displaymath}\begin{array}{l} n^{-l/(2l+1)} q_{n}^\top (x) \, F(0) \stackrel{\P}{\longrightarrow} 0, \\ n^{-l/(2l+1)} \bar{q}_{n}^\top (x) \, F(0) \stackrel{\P}{\longrightarrow} 0 \end{array}\end{displaymath} (14.28)

for $ n \rightarrow
\infty$.
With statements a) to d) proven, the statement of the theorem can be shown in the following way:
from b) and d) it follows that
    $\displaystyle \big( B_{n}\,\big\{\bar{c}_{n}(x) - \bar{c}(x)\big\} \big)^\top \, F(0)$ (14.29)
  $\displaystyle =$ $\displaystyle n^{-l/(2l+1)} \bar{b}_{n}(x)^\top \, F(0) + n^{-l/(2l+1)} \bar{q}_{n}(x)^\top \, F(0) \stackrel{\P}{\longrightarrow} 0$  

for $ n \rightarrow \infty$. Because of a) and the positive definiteness of the limit matrix $ B$, this implies that $ \big\{\bar{c}_{n}(x) - \bar{c}(x)\big\}^\top \, F(0) \stackrel{\P}{\longrightarrow} 0.$ Similarly one can show that $ \big\{c_n (x) - c(x) \big\}^\top \, F(0) \stackrel{\P}{\longrightarrow} 0$.

The asymptotic normality (14.16) can be seen in a similar way:
because of b) and c) it holds that

$\displaystyle n^{\frac{l}{2l+1}}\,B_{n}
\left( \begin{array}{c}
\bar{c}_{n}(x) - \bar{c}(x) \\
c_{n}(x) - c(x)
\end{array} \right)$ $\displaystyle =$ $\displaystyle \left( \begin{array}{c}
\bar{b}_{n}(x) \\
b_{n}(x)
\end{array} \right)
+ \left( \begin{array}{c}
\bar{q}_{n}(x) \\
q_{n}(x)
\end{array} \right)$  
  $\displaystyle \stackrel{\cal L}{\longrightarrow}$ N$\displaystyle \left( \left( \begin{array}{c}
B\, b_g(x) \\
B\, b_f(x)
\end{array} \right), \Sigma_{0} \right),$  

from which, according to a), the validity of (14.16) follows.
$ {\Box}$

It remains to prove a) to d). To do this we need a couple of helpful results.

Lemma 14.2   (Davydov (1973))

Let $ (Y_{i})$ be a geometrically ergodic Markov chain, and let $ Y_{0}$ be distributed according to the stationary measure $ \pi$ of the chain. Then the chain is geometrically strongly mixing, i.e., it is strongly mixing ($ \alpha$-mixing) with mixing coefficients $ \alpha(n)$ satisfying $ \alpha(n) \le c_{0}\,\rho_{0}^n$ for some $ 0<\rho_{0}<1$ and $ c_{0}>0$.

Let $ ( {\cal F}_k )$ be the canonical filtration of the process $ (Y_k)$, i.e., $ {\cal F}_{k} = \sigma(Y_{k}, Y_{k-1}, \ldots, Y_{0})$ is the $ \sigma$-algebra generated by $ Y_{0}, \ldots, Y_{k}$.

Lemma 14.3   (Liptser and Shirjaev (1980), Corollary 6)

Suppose that for every $ n>0$ the sequence $ (\eta_{nk}, {\cal F}_{k})$ is a square integrable martingale difference array, i.e.,

$\displaystyle {\mathop{\text{\rm\sf E}}}[\eta_{nk} \,\vert\, {\cal F}_{k-1}] = 0, \quad {\mathop{\text{\rm\sf E}}}[\eta_{nk}^2] < \infty, \quad 1 \le k \le n,$ (14.30)

and it holds that

$\displaystyle \sum_{k=1}^n {\mathop{\text{\rm\sf E}}}[\eta_{nk}^2] = 1, \quad \forall \; n \ge n_{0} > 0.$ (14.31)

Then the conditions

$\displaystyle \sum_{k=1}^n {\mathop{\text{\rm\sf E}}}[\eta_{nk}^2 \,\vert\, {\cal F}_{k-1}] \stackrel{\P}{\longrightarrow} 1 \quad\text{for}\ n \rightarrow \infty,$     (14.32)
$\displaystyle \sum_{k=1}^n {\mathop{\text{\rm\sf E}}}\Big[\eta_{nk}^2\, \boldsymbol{1}\big(\vert\eta_{nk}\vert > \varepsilon\big) \,\Big\vert\, {\cal F}_{k-1}\Big] \stackrel{\P}{\longrightarrow} 0 \quad\text{for}\ n \rightarrow \infty,\; \forall \varepsilon > 0$     (14.33)

are sufficient for the convergence in distribution

$\displaystyle \sum_{k=1}^n \eta_{nk} \stackrel{\cal{L}}{\longrightarrow} {\text{\rm N}}(0,1)
\quad \text{\rm for} \; n \rightarrow \infty. $
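A quick Monte Carlo sanity check of the central limit theorem in Lemma 14.3 (a sketch with a hypothetical, particularly simple martingale difference array $ \eta_{nk} = \xi_k/\sqrt{n}$ for i.i.d. Rademacher $ \xi_k$, for which (14.30)-(14.33) hold trivially):

```python
import numpy as np

# eta_{nk} = xi_k / sqrt(n) with iid Rademacher xi_k: E[eta|F_{k-1}] = 0,
# sum_k E[eta_{nk}^2] = 1, the conditional variances sum to exactly 1, and
# the Lindeberg terms vanish once n > 1/eps^2; so sum_k eta_{nk} ~ N(0,1).
rng = np.random.default_rng(1)
n, reps = 500, 10_000
xi = rng.choice([-1.0, 1.0], size=(reps, n))
S = xi.sum(axis=1) / np.sqrt(n)     # sum_k eta_{nk}, one value per replication

# First two moments and the upper tail P(S > 1.96), which should be ~0.025:
print(S.mean(), S.var(), (S > 1.96).mean())
```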

Lemma 14.4   Let $ \phi_{1}$ be a continuous, bounded function and let $ \phi_{2}$ be a bounded function. Under conditions (A1) through (A10) it holds for every process $ Y_i,\; i \ge 0,$ which fulfils (14.1), that


$\displaystyle n^{-\frac{2l}{2l+1}} \sum_{i=1}^n \phi_{1}(Y_{i-1})\,\phi_{2}(u_{in})\,K(u_{in}) \stackrel{\P}{\longrightarrow} \beta \gamma(x)\,\phi_{1}(x) \int \phi_{2}(u)\,K(u)\,du,$     (14.34)
$\displaystyle n^{-\frac{2l}{2l+1}} \sum_{i=1}^n {\mathop{\text{\rm\sf E}}}\big[ \phi_{1}(Y_{i-1})\,\phi_{2}(u_{in})\,K(u_{in})\big] {\longrightarrow} \beta \gamma(x)\,\phi_{1}(x) \int \phi_{2}(u)\,K(u)\,du$      

for $ n \rightarrow
\infty$.

Proof:
We first prove this for the case where the Markov chain begins in equilibrium and then work our way back to the general case.

For this let $ (Y_{i}^{\ast})$ be a Markov chain which fulfils (14.1) and whose initial value $ Y_{0}^\ast$ has the stationary distribution $ \pi$ of $ (Y_i)$ introduced in Lemma 14.1. This chain is stationary by construction, and by Lemma 14.2, $ (Y_{i}^{\ast})$ is a geometrically strongly mixing process. From this it follows that

$\displaystyle n^{-\frac{2l}{2l+1}} \sum_{i=1}^n \phi_{1}(Y_{i-1}^\ast)\,\phi_{2}(u_{in}^\ast)\,K(u_{in}^\ast) - n^{\frac{1}{2l+1}}\, {\mathop{\text{\rm\sf E}}}\big[ \phi_{1}(Y_{0}^\ast)\,\phi_{2}(u_{1n}^\ast)\,K(u_{1n}^\ast)\big] \stackrel{\P}{\longrightarrow} 0$ (14.35)

for $ n \rightarrow \infty$, where $ u_{in}^\ast = (Y_{i-1}^\ast-x)/h_{n}$. For the second term in (14.35) it holds that
$\displaystyle n^{\frac{1}{2l+1}} {\mathop{\text{\rm\sf E}}}\big[ \phi_{1}(Y_{0}^\ast)\,\phi_{2}(u_{1n}^\ast)\,K(u_{1n}^\ast)\big]$      
$\displaystyle \qquad = \beta \frac{1}{h_{n}} \int \phi_{1}(y)\,\phi_{2} \left( \frac{y-x}{h_{n}} \right)\,K \left( \frac{y-x}{h_{n}}\right) \gamma(y)\,dy$      
$\displaystyle \qquad = \beta \gamma(x)\,\phi_{1}(x) \int \phi_{2}(u)\,K(u)\,du \;\big(1+{\scriptstyle \mathcal{O}}(1)\big)$     (14.36)

for $ n \rightarrow \infty$. Together with (14.35) it follows that (14.34) is fulfilled for $ (Y_{i}^{\ast})$.
Define

$\displaystyle \zeta_{i} = \phi_{1}(Y_{i-1})\,\phi_{2}(u_{in})\,K(u_{in}), \quad \zeta_{i}^\ast = \phi_{1}(Y_{i-1}^\ast)\,\phi_{2}(u_{in}^\ast)\, K(u_{in}^\ast),$    

and choose a sequence $ \{\delta_n\}$ with $ \delta_{n} = {\scriptstyle \mathcal{O}}(n^{\frac{2l}{2l+1}})$ and $ \lim_{n\rightarrow\infty} \delta_{n} = \infty$. It follows that
$\displaystyle n^{-\frac{2l}{2l+1}} \sum_{i=1}^n \big\vert {\mathop{\text{\rm\sf E}}}[\zeta_{i}-\zeta_{i}^\ast]\big\vert$ $\displaystyle \le$ $\displaystyle n^{-\frac{2l}{2l+1}} \bigg[ \sum_{i=1}^{\delta_{n}-1} \big\vert {\mathop{\text{\rm\sf E}}}[\zeta_{i}-\zeta_{i}^\ast]\big\vert + \sum_{i=\delta_{n}}^n \big\vert {\mathop{\text{\rm\sf E}}}[\zeta_{i}-\zeta_{i}^\ast]\big\vert \bigg]$  
  $\displaystyle \le$ $\displaystyle 2n^{-\frac{2l}{2l+1}} \delta_{n}\;\Vert\phi_{1}\phi_{2}\,K\Vert _{\infty} + n^{-\frac{2l}{2l+1}} \sum_{i=\delta_{n}}^n \big\vert {\mathop{\text{\rm\sf E}}}[\zeta_{i}-\zeta_{i}^\ast]\big\vert$  
  $\displaystyle =$ $\displaystyle n^{-\frac{2l}{2l+1}} \sum_{i=\delta_{n}}^n \big\vert {\mathop{\text{\rm\sf E}}}[\zeta_{i}-\zeta_{i}^\ast]\big\vert + {\scriptstyle \mathcal{O}}(1)$ (14.37)

for $ n \rightarrow \infty$. From the geometric ergodicity of $ (Y_{i})$ according to Lemma 14.1, we obtain for the remaining sum on the right-hand side
$\displaystyle n^{-\frac{2l}{2l+1}} \sum_{i=\delta_{n}}^n \big\vert {\mathop{\text{\rm\sf E}}}[\zeta_{i}-\zeta_{i}^\ast]\big\vert$ $\displaystyle =$ $\displaystyle n^{-\frac{2l}{2l+1}} \sum_{i=\delta_{n}}^n \big\vert {\mathop{\text{\rm\sf E}}}\big[ \phi_{1}(Y_{i-1})\,\phi_{2}(u_{in})\,K(u_{in}) - \phi_{1}(Y_{i-1}^\ast)\,\phi_{2}(u_{in}^\ast)\,K(u_{in}^\ast)\big] \big\vert$  
  $\displaystyle \le$ $\displaystyle n^{-\frac{2l}{2l+1}} \sum_{i=\delta_{n}}^n \Vert\phi_{1}\phi_{2}\,K\Vert _{\infty} \int \big\vert \gamma_{i}(y) - \gamma(y) \big\vert\,dy$  
  $\displaystyle =$ $\displaystyle {\mathcal{O}}\left( n^{-\frac{2l}{2l+1}} \sum_{i=\delta_{n}}^n \rho^i \right) = {\scriptstyle \mathcal{O}}(1)$ (14.38)

for $ n \rightarrow \infty$, where $ \gamma_{i}$ denotes the density of $ Y_{i-1}$. Thus it holds that

$\displaystyle \lim_{n\rightarrow\infty}
n^{-\frac{2l}{2l+1}} \sum_{i=1}^n \big\vert {\mathop{\text{\rm\sf E}}}[\zeta_{i}-\zeta_{i}^\ast]\big \vert
= 0. $

From this it follows with the help of the Markov inequality that (14.34) also applies to $ (Y_i)$. $ {\Box}$
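Lemma 14.4 can be illustrated by simulation in a case where the stationary density $ \gamma$ is known in closed form. The sketch below uses the hypothetical Gaussian AR(1) special case $ f(y) = ay$, $ s(y) \equiv \sigma$ of (14.1), whose stationary law is N$ (0, \sigma^2/(1-a^2))$, together with hypothetical choices of $ \phi_1$, $ \phi_2$ and $ x$:

```python
import numpy as np

# Gaussian AR(1): Y_i = a Y_{i-1} + sigma xi_i; gamma is the density of
# N(0, sigma^2/(1-a^2)). We take phi_1(y) = cos(y), phi_2 = 1; all parameter
# values are hypothetical choices for illustration.
rng = np.random.default_rng(2)
a, sigma, x, l, beta = 0.5, 1.0, 0.3, 2, 1.0
n = 200_000
h = beta * n ** (-1.0 / (2 * l + 1))             # bandwidth as in (A9)

Y = np.empty(n + 1)
Y[0] = 0.0
xi = rng.standard_normal(n)
for i in range(n):
    Y[i + 1] = a * Y[i] + sigma * xi[i]

u = (Y[:-1] - x) / h
K = 0.75 * np.clip(1.0 - u**2, 0.0, None)        # Epanechnikov, int K = 1
lhs = n ** (-2.0 * l / (2 * l + 1)) * np.sum(np.cos(Y[:-1]) * K)

var_st = sigma**2 / (1.0 - a**2)
gamma_x = np.exp(-x**2 / (2.0 * var_st)) / np.sqrt(2.0 * np.pi * var_st)
rhs = beta * gamma_x * np.cos(x)                 # beta gamma(x) phi_1(x) int K
print(lhs, rhs)
```

The two printed values agree up to the remaining kernel-smoothing bias and Monte Carlo noise, as (14.34) predicts.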

Proof:
(Theorem 14.1, continued)
It remains to prove statements a) to d).

a)
Using the definition of $ B_n$ it holds for the elements of this matrix

$\displaystyle (B_n)_{j,\,k} =
n^{-\frac{2l}{2l+1}} \sum_{i=1}^n
\frac{u_{in}^{k+j-2}}{(k-1)!(j-1)!} \,
K ( u_{in} ) . $

These take on the form treated in Lemma 14.4, and it follows that

$\displaystyle (B_n)_{j,k} \stackrel{\P}{\longrightarrow}
\frac{\beta\, \gamma(x)}{(k-1)!(j-1)!} \int u^{k+j-2}\, K(u) \, du ,$

according to the definition of the matrix $ A$ this is the same as $ B_n \stackrel{\P}{\longrightarrow} \beta \gamma(x) A = B$. The positive definiteness of $ A$ carries over to $ B$.
b)
Since $ f$ and $ s$ fulfil condition (A6), so does $ g = f^2 + s^2$. For the remainder of the Taylor expansion of $ g$ it holds that
$\displaystyle r_{g} (Y_{i-1},x)$ $\displaystyle =$ $\displaystyle u_{in}^l h_{n}^l \frac{1}{(l-1)!}
\int_{0}^1 g^{(l)} \big(x+t(Y_{i-1}-x)\big) (1-t)^{l-1} \,dt$  
  $\displaystyle =$ $\displaystyle u_{in}^l\,n^{-\frac{l}{2l+1}}\;\phi_{3}(Y_{i-1})$  

with

$\displaystyle \phi_{3}(Y_{i-1}) = \frac{\beta^l}{(l-1)!} \int_{0}^1 g^{(l)}
\big(x+t(Y_{i-1}-x)\big)(1-t)^{l-1}\,dt. $

With this $ \bar{b}_{n}(x)$ can be rewritten as

$\displaystyle \bar{b}_{n}(x) = n^{-\frac{2l}{2l+1}} \sum_{i=1}^n
\phi_{3}(Y_{i-1})\,u_{in}^l\,U_{in}\,K(u_{in}), $

i.e., the elements of $ \bar{b}_{n}(x)$ fulfil the requirements of Lemma 14.4.

Once again we choose $ (Y_{i}^{\ast})$ as in the proof of Lemma 14.4 and set $ U_{in}^\ast = F(u_{in}^\ast)$. From (14.37) and (14.38) we obtain

$\displaystyle \bar{b}_{n}(x) - n^{-\frac{2l}{2l+1}} \sum_{i=1}^n \phi_{3}(Y_{i-1}^\ast)\,(u_{in}^\ast)^l\,U_{in}^\ast\,K(u_{in}^\ast) \stackrel{\P}{\longrightarrow} 0$ (14.39)

for $ n \rightarrow \infty$. Since $ (Y_{i}^\ast)$ is $ \alpha$-mixing, as in (14.35) we get
$\displaystyle n^{-\frac{2l}{2l+1}} \sum_{i=1}^n \phi_{3}(Y_{i-1}^\ast)\,(u_{in}^\ast)^l\,U_{in}^\ast\,K(u_{in}^\ast) - n^{\frac{1}{2l+1}}\, {\mathop{\text{\rm\sf E}}}\Big[ \phi_{3}(Y_{0}^\ast)\,(u_{1n}^\ast)^l\,U_{1n}^\ast\,K(u_{1n}^\ast) \Big] \stackrel{\P}{\longrightarrow} 0$      

for $ n \rightarrow \infty$. The second term of this expression can be rewritten as
$\displaystyle n^{\frac{1}{2l+1}} {\mathop{\text{\rm\sf E}}}\Big[ \phi_{3}(Y_{0}^\ast)\,(u_{1n}^\ast)^l\,U_{1n}^\ast\,K(u_{1n}^\ast) \Big] = \beta \int \phi_{3}(x+uh_{n})\,u^l F(u)\,K(u)\,\gamma(x+uh_{n})\,du.$      

Furthermore, it holds that

$\displaystyle \lim_{n \rightarrow \infty} \phi_{3}(x+uh_{n}) = \beta^l\,g^{(l)} (x;u)/l!$ (14.40)

for every $ u \in \mathbb{R}$. Together with (14.40) and (A7), it follows that
    $\displaystyle \lim_{n \rightarrow \infty} \beta \int \phi_{3}(x+uh_{n})\,u^l F(u)\,K(u)\,\gamma(x+uh_{n})\,du$  
    $\displaystyle \hspace*{1cm} = \frac{\beta^{l+1}}{l!} \Big( \int F(u)\,u^l\,K(u)\,g^{(l)}(x;u)\,du \Big)\,\gamma(x)$  
    $\displaystyle \hspace*{1cm} = \beta\,\gamma(x)\,A\,b_{g}(x) = B\,b_{g}(x).$  

With this (14.25) has been shown. The proof of (14.26) follows analogously.
c)
We define the matrices
$\displaystyle \Sigma_{n}^{11}$ $\displaystyle =$ $\displaystyle n^{-\frac{2l}{2l+1}} \sum_{i=1}^n
{\mathop{\text{\rm\sf E}}}[ \alpha_{i}^2 \,\vert\, {\cal F}_{i-1} ] \, U_{in}\,U_{in}^\top \, K^2(u_{in}),$  
$\displaystyle \Sigma_{n}^{12}$ $\displaystyle =$ $\displaystyle n^{-\frac{2l}{2l+1}} \sum_{i=1}^n
{\mathop{\text{\rm\sf E}}}[ \alpha_{i} \beta_{i} \,\vert\, {\cal F}_{i-1} ] \,U_{in}\,U_{in}^\top \, K^2(u_{in}),$  
$\displaystyle \Sigma_{n}^{22}$ $\displaystyle =$ $\displaystyle n^{-\frac{2l}{2l+1}} \sum_{i=1}^n
{\mathop{\text{\rm\sf E}}}[ \beta_{i}^2\,\vert\,{\cal F}_{i-1} ] \, U_{in}\,U_{in}^\top \, K^2(u_{in})$  

and construct the block matrix

$\displaystyle \Sigma_{n} = \left( \begin{array}{cc}
\Sigma_{n}^{11} & \Sigma_{n}^{12} \\
\Sigma_{n}^{12} & \Sigma_{n}^{22}
\end{array} \right). $

The elements of $ \Sigma_{n}^{11}$, $ \Sigma_{n}^{12}$ and $ \Sigma_{n}^{22}$ fulfil the requirements of Lemma 14.4. In particular, the functions $ \phi_{1}(Y_{i-1})$ appearing there are in this case given by
$\displaystyle {\mathop{\text{\rm\sf E}}}[ \alpha_{i}^2\,\vert\,{\cal F}_{i-1} ]$ $\displaystyle =$ $\displaystyle 4 f^2(Y_{i-1})\,s^2(Y_{i-1}) + s^4(Y_{i-1})m_{4},$  
$\displaystyle {\mathop{\text{\rm\sf E}}}[\alpha_{i}\beta_{i}\,\vert\,{\cal F}_{i-1}]$ $\displaystyle =$ $\displaystyle 2 f(Y_{i-1})\,s^2(Y_{i-1})$   respectively  
$\displaystyle {\mathop{\text{\rm\sf E}}}[\beta_{i}^2\,\vert\,{\cal F}_{i-1}]$ $\displaystyle =$ $\displaystyle s^2(Y_{i-1}),$  

for which (A1) has been used. Observe that the corresponding functions $ \phi_1$ are, due to (A6), continuous and bounded in a small region around $ x$. Since $ K$ vanishes outside of a compact set, this is sufficient for Lemma 14.4. With this we obtain

$\displaystyle \Sigma_{n} \stackrel{\P}{\longrightarrow} \Sigma_{0}$   and$\displaystyle \quad {\mathop{\text{\rm\sf E}}}[\Sigma_{n}] \longrightarrow \Sigma_{0}$ (14.41)

for $ n \rightarrow
\infty$.
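The three conditional moment formulas above can be confirmed by a quick Monte Carlo sketch (hypothetical fixed values $ f_0 = f(Y_{i-1})$ and $ s_0 = s(Y_{i-1})$, standard normal $ \xi$, for which $ m_4 = 2$):

```python
import numpy as np

# With f0 = f(Y_{i-1}) and s0 = s(Y_{i-1}) fixed by the conditioning,
# alpha = 2 f0 s0 xi + s0^2 (xi^2 - 1) and beta = s0 xi, so (A1) gives
#   E[alpha^2]     = 4 f0^2 s0^2 + s0^4 m4,
#   E[alpha beta]  = 2 f0 s0^2,
#   E[beta^2]      = s0^2.
rng = np.random.default_rng(3)
f0, s0, m4 = 0.7, 1.3, 2.0          # m4 = E[(xi^2-1)^2] = 2 for N(0,1)
xi = rng.standard_normal(2_000_000)
alpha = 2 * f0 * s0 * xi + s0**2 * (xi**2 - 1)
beta = s0 * xi
print(np.mean(alpha**2), 4 * f0**2 * s0**2 + s0**4 * m4)
print(np.mean(alpha * beta), 2 * f0 * s0**2)
print(np.mean(beta**2), s0**2)
```

Each printed pair matches up to Monte Carlo error, exactly as used in the identification of $ \Sigma_0$.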

To prove (14.27) it is sufficient, by the Cramér-Wold device, to show that

$\displaystyle a^\top \left( \begin{array}{c} \bar{q}_{n}(x) \\ q_{n}(x) \end{array} \right) \stackrel{\cal L}{\longrightarrow}$   N$\displaystyle (0, a^\top \Sigma_{0}a)$   for $ n \rightarrow
\infty$ (14.42)

holds for every vector $ a \in \mathbb{R}^{2l}$ with Euclidean norm $ \Vert a\Vert = 1$. In view of (14.41) we choose an $ n_{0} \in \mathbb{N}$ such that $ {\mathop{\text{\rm\sf E}}}[\Sigma_{n}] > \frac{1}{2} \Sigma_{0}$ holds for all $ n \ge n_{0}$, and define for $ n \ge n_{0}$,

$\displaystyle \eta_{ni} = \frac{n^{-\frac{l}{2l+1}}}{\sqrt{a^\top {\mathop{\text{\rm\sf E}}}[\Sigma_{n}] a}}\; a^\top \left( \begin{array}{c} \alpha_{i}\,U_{in} \\ \beta_{i}\,U_{in} \end{array} \right) \; K(u_{in}). $

Then

$\displaystyle \sum_{i=1}^n \eta_{ni} = \frac{1}{\sqrt{a^\top {\mathop{\text{\rm\sf E}}}[\Sigma_{n}] a}}\; a^\top \left( \begin{array}{c} \bar{q}_{n}(x) \\ q_{n}(x) \end{array} \right), $

and (14.42) is equivalent to

$\displaystyle \sum_{k=1}^n \eta_{nk} \stackrel{\cal L}{\longrightarrow}$   N$\displaystyle (0,1)$   for $ n \rightarrow
\infty$. (14.43)

We will now show that $ (\eta_{nk})$ fulfils the requirements (14.30) to (14.33) of Lemma 14.3, from which (14.43) follows.

First notice that $ {\mathop{\text{\rm\sf E}}}[\alpha_{i}\,\vert\,{\cal F}_{i-1}] = 0$ a.s. and $ {\mathop{\text{\rm\sf E}}}[\beta_{i}\,\vert\,{\cal F}_{i-1}] = 0$ a.s. hold, from which (14.30) follows. Furthermore, one can easily show that

$\displaystyle \sum_{k=1}^n {\mathop{\text{\rm\sf E}}}[\eta_{nk}^2\,\vert\,{\cal F}_{k-1}] = \frac{a^\top \Sigma_{n}a}{a^\top {\mathop{\text{\rm\sf E}}}[\Sigma_{n}]a}.$

Therefore (14.31) is fulfilled, and from (14.41) we obtain (14.32).

We still have to show (14.33). For $ n \ge n_{0}$,

$\displaystyle \eta_{nk}^2 \le \frac{n^{-\frac{2l}{2l+1}}}{a^\top {\mathop{\text{\rm\sf E}}}[\Sigma_{n}]a}\, (a^\top Z_{nk})^2 \le \kappa_{1} n^{-\frac{2l}{2l+1}} \vert Z_{nk}\vert^2, $

with an appropriate constant $ \kappa_{1}>0$ and

$\displaystyle Z_{nk} = \left( \begin{array}{c}
\alpha_{k}\,U_{kn} \\
\beta_{k}\,U_{kn}
\end{array} \right)
\; K(u_{kn}). $

Since $ K$ is bounded and has compact support, and since $ f$ and $ s$ are locally bounded, there exists a constant $ \kappa_{2}>0$ such that
$\displaystyle \eta_{nk}^2$ $\displaystyle \le$ $\displaystyle \kappa_{1} n^{-\frac{2l}{2l+1}}
(\alpha_{k}^2 + \beta_{k}^2)\,\vert U_{kn}\vert^2\,K^2(u_{kn})$  
  $\displaystyle \le$ $\displaystyle \kappa_{2} n^{-\frac{2l}{2l+1}} (1 + \vert\xi_{k}\vert^4)\,K(u_{kn}).$  

From this it follows that
    $\displaystyle {\mathop{\text{\rm\sf E}}}[\eta_{nk}^2\;\boldsymbol{1}(\vert\eta_{nk}\vert\ge\varepsilon)\,\vert\,{\cal F}_{k-1}]$  
    $\displaystyle \hspace*{1cm} \le \kappa_{2} n^{-\frac{2l}{2l+1}} \,K(u_{kn})\; {\mathop{\text{\rm\sf E}}}\Big[(1+\vert\xi_{1}\vert^4)\;\boldsymbol{1}\big( 1+\vert\xi_{1}\vert^4 \ge \varepsilon^2\, n^{\frac{2l}{2l+1}}\,\kappa_{2}^{-1}\,\Vert K\Vert _{\infty}^{-1}\big) \Big]$  
    $\displaystyle \hspace*{1cm} = \kappa_{2} n^{-\frac{2l}{2l+1}}\,K(u_{kn}) \cdot {\scriptstyle \mathcal{O}}(1)$  

for $ n \rightarrow \infty$, where the $ {\scriptstyle \mathcal{O}}(1)$ term is independent of $ k$. With this we have

$\displaystyle \sum_{k=1}^n {\mathop{\text{\rm\sf E}}}[\eta_{nk}^2\;\boldsymbol{1}(\vert\eta_{nk}\vert\ge\varepsilon)\,\vert\,{\cal F}_{k-1}] \le {\scriptstyle \mathcal{O}}(1) \sum_{k=1}^n n^{-\frac{2l}{2l+1}}\,K(u_{kn}) \quad\text{for $n \rightarrow \infty$}.$ (14.44)

According to Lemma 14.4 it holds for the last term that

$\displaystyle n^{-\frac{2l}{2l+1}} \sum_{k=1}^n K(u_{kn}) \stackrel{\P}{\longrightarrow} \beta \gamma(x) \int K(u)\,du$   for $ n \rightarrow \infty$. (14.45)

From (14.44) and (14.45), (14.33) follows, i.e., the requirements of Lemma 14.3 are fulfilled, and thus (14.42) is also shown.
d)
We have
$\displaystyle n^{-l/(2l+1)} q_{n}^\top (x) \, F(0)$ $\displaystyle =$ $\displaystyle n^{-2l/(2l+1)} \sum_{i=1}^n \beta_{i} \, U_{in}^\top \, F(0) \, K(u_{in})$  
  $\displaystyle =$ $\displaystyle n^{-2l/(2l+1)} \sum_{i=1}^n \beta_{i} \, u_{in} \, K(u_{in})$  
  $\displaystyle =$ $\displaystyle n^{-2l/(2l+1)} \sum_{i=1}^n \big( \beta_{i} - {\mathop{\text{\rm\sf E}}}[\beta_{i}\,\vert\,{\cal
F}_{i-1}] \big) \, u_{in} \, K(u_{in}).$  

According to (A8) the kernel $ K$ is bounded, and it holds that $ d^\ast = \max\{\vert u\vert:\, u \in \mathrm{supp}\,K\} < \infty$. Thus there exists a constant $ \kappa_{0}>0$, such that
    $\displaystyle {\mathop{\text{\rm\sf E}}}\big[ (n^{-l/(2l+1)} q_{n}^\top (x) \, F(0) )^2 \big]$  
  $\displaystyle =$ $\displaystyle n^{-\frac{4l}{2l+1}} \; {\mathop{\text{\rm\sf E}}}\Big[ \Big( \sum_{i=1}^n \big(\beta_{i} - {\mathop{\text{\rm\sf E}}}[\beta_{i}\,\vert\,{\cal F}_{i-1}]\big) \; u_{in} K(u_{in}) \Big)^2 \Big]$  
  $\displaystyle \le$ $\displaystyle \kappa_{0} n^{-\frac{4l}{2l+1}} \, \sum_{i=1}^n {\mathop{\text{\rm\sf E}}}\Big[ \big(\beta_{i} - {\mathop{\text{\rm\sf E}}}[\beta_{i}\,\vert\,{\cal F}_{i-1}]\big)^2 \; \boldsymbol{1}(\vert u_{in}\vert \le d^\ast ) \Big].$  

If $ n$ is sufficiently large, then for each term of the last sum it holds that
$\displaystyle {\mathop{\text{\rm\sf E}}}\Big[ \big( \beta_{i} - {\mathop{\text{\rm\sf E}}}[\beta_{i}\,\vert\,{\cal F}_{i-1}]\big)^2 \; \boldsymbol{1}(\vert u_{in}\vert \le d^\ast ) \Big]$      
$\displaystyle \qquad = {\mathop{\text{\rm\sf E}}} \left[ s^2 (Y_{i-1})\, \xi_{i}^2 \, \boldsymbol{1}\left( \frac{\vert Y_{i-1}-x\vert}{h_{n}} \le d^\ast \right) \right]$      
$\displaystyle \qquad = {\mathop{\text{\rm\sf E}}} \left[ s^2 (Y_{i-1}) \, \boldsymbol{1}\left( \frac{\vert Y_{i-1}-x\vert}{h_{n}} \le d^\ast \right) \right]$      
$\displaystyle \qquad \le \sup_{\vert y-x\vert \le h_{n}d^\ast} s^2(y) \quad < \infty.$      

Thus $ n^{-l/(2l+1)} q_{n}^\top (x) \, F(0) \stackrel{\P}{\longrightarrow} 0$ is shown. Similarly it can be shown that

$\displaystyle n^{-l/(2l+1)} \bar{q}_{n}^\top (x) \, F(0) \stackrel{\P}{\longrightarrow} 0 .$

$ {\Box}$

As a direct consequence of Theorem 14.1 we have:

Theorem 14.2   Under conditions (A1) through (A10) it holds that

$\displaystyle n^{l/(2l+1)} \big\{ \hat{v}_{n}(x) - v(x) \big\} \stackrel{{\cal L}}{\longrightarrow}$   N$\displaystyle \big( b_{v}(x), \sigma^2_{v}(x) \big)$   for $ n \rightarrow \infty$,$\displaystyle $

where
$\displaystyle b_{v}(x)$ $\displaystyle =$ $\displaystyle F^\top (0)\, (b_{g}(x) - 2f(x)\,b_{f}(x))$   and  
$\displaystyle \sigma_{v}^2 (x)$ $\displaystyle =$ $\displaystyle \frac{s^4(x)m_{4}}{\beta \gamma(x)} \;
F^\top (0)\,{\cal D}\,F(0).$  

Proof:
From $ g(x) = \bar{c}(x)^\top \, F(0)$, $ f(x) = c(x)^\top \, F(0)$, $ v(x) = g(x) - f^2(x)$ and the construction of $ \hat{v}_n$ we obtain

$\displaystyle \hat{v}_{n}(x) - v(x)$ $\displaystyle =$ $\displaystyle \left\{\bar{c}_{n}(x) - \bar{c}(x)\right\}^\top \, F(0)$  
  $\displaystyle -$ $\displaystyle \left[ 2c(x)^\top \, F(0) + \left\{ c_{n}(x)-c(x) \right\}^\top \, F(0) \right]$  
    $\displaystyle \left[ \left\{ c_{n}(x) - c(x) \right\} ^\top \, F(0) \right].$  

It also holds that
$\displaystyle n^{l/(2l+1)} \big ( \hat{v}_{n}(x) - v(x) \big ) =$   $\displaystyle n^{l/(2l+1)} \, \Psi(x) \left( \begin{array}{c} \bar{c}_{n}(x) - \bar{c}(x) \\ c_{n}(x) - c(x) \end{array} \right)$ (14.46)
    $\displaystyle -\; n^{l/(2l+1)} \big( \{c_{n}(x)-c(x)\}^\top \, F(0) \big)^2$  

with the transformation matrix

$\displaystyle \Psi(x) = \left( \begin{array}{c}
F(0) \\
-2 f(x) \, F(0) \\
\end{array} \right)^\top .$

According to (14.15) it holds that $ \{c_{n}(x)-c(x)\}^\top \, F(0) \stackrel{\P}{\longrightarrow} 0$ for $ n \rightarrow \infty$, from which together with (14.16) it follows that $ n^{l/(2l+1)} \big\{ [c_{n}(x)-c(x)]^\top \, F(0) \big\}^2 \stackrel{\P}{\longrightarrow} 0$. The limiting distribution of $ n^{l/(2l+1)} \big \{ \hat{v}_{n}(x) - v(x) \big \} $ is thus given by the first term on the right side of (14.46). For this we get, using (14.16), that

$\displaystyle n^{l/(2l+1)} \big\{ \hat{v}_{n}(x) - v(x) \big\} \stackrel{{\cal L}}
{\longrightarrow}$   N$\displaystyle \big\{ \Psi(x) b(x), \Psi(x) \Sigma(x) \Psi(x)^\top \big\} $

for $ n \rightarrow
\infty$. A simple calculation gives $ \Psi(x) b(x) = b_{v}(x)$ as well as
$ \Psi(x) \Sigma(x) \Psi(x)^\top = \sigma_{v}^2 (x)$, with which the claim is shown. $ {\Box}$
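To make the construction concrete, here is a sketch of the full local polynomial volatility estimator behind Theorem 14.2 for $ p = 1$ ($ l = 2$): local linear least squares of $ Y_i^2$ on $ Y_{i-1}$ gives $ \bar{c}_n(x)$, of $ Y_i$ on $ Y_{i-1}$ gives $ c_n(x)$, and $ \hat{v}_n(x) = \bar{c}_n(x)^\top F(0) - (c_n(x)^\top F(0))^2$. The model, the point $ x$ and all parameter values are hypothetical choices:

```python
import numpy as np

# Simulate a hypothetical AR(1)-ARCH(1) model fulfilling (A1)-(A5), then
# estimate v(x) = s^2(x) by local linear weighted least squares.
rng = np.random.default_rng(4)
f = lambda y: 0.4 * y
s = lambda y: np.sqrt(0.5 + 0.2 * y**2)
n = 50_000
xi = rng.standard_normal(n)
Y = np.empty(n + 1)
Y[0] = 0.0
for i in range(n):
    Y[i + 1] = f(Y[i]) + s(Y[i]) * xi[i]

x, l = 0.5, 2
h = n ** (-1.0 / (2 * l + 1))                   # (A9) with beta = 1
u = (Y[:-1] - x) / h
w = 0.75 * np.clip(1.0 - u**2, 0.0, None)       # Epanechnikov weights
U = np.column_stack([np.ones_like(u), u])       # U_in = F(u_in), F(u) = (1,u)^T

def local_ls(target):
    # weighted least squares: argmin_c sum_i w_i (target_i - U_i^T c)^2
    W = U * w[:, None]
    return np.linalg.solve(U.T @ W, W.T @ target)

c_bar = local_ls(Y[1:] ** 2)    # estimates (g(x), g'(x) h)
c = local_ls(Y[1:])             # estimates (f(x), f'(x) h)
v_hat = c_bar[0] - c[0] ** 2    # F(0) = (1, 0)^T picks the first entry
print(v_hat, s(x) ** 2)
```

The estimate is close to the true $ v(x) = s^2(x) = 0.55$, with fluctuations of the order $ n^{-l/(2l+1)}$ as Theorem 14.2 describes.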

Going beyond the asymptotic normality shown in Theorem 14.2, Franke et al. (2002) have shown that bootstrap methods can also be used for nonparametric volatility estimators. They consider ordinary kernel estimators, i.e., the special case of the LP estimator with $ p=0$ in (14.4), but the results can be directly applied to the general LP estimators; see also Kreiss (2000).

To illustrate, consider the case $ l=2$. We assume that $ f$ and $ s$ are twice differentiable and that the kernel $ K$ satisfies the condition

$\displaystyle \int K(u)\,du = 1$   and$\displaystyle \qquad K(u) = K(-u). $

Then it holds that
$\displaystyle A$ $\displaystyle =$ $\displaystyle \left( \begin{array}{cc} 1 & 0 \\ 0 & \sigma_{K}^2 \end{array} \right)$   with $\displaystyle \sigma_{K}^2 = \int u^2\,K(u)\,du,$  
$\displaystyle Q$ $\displaystyle =$ $\displaystyle \left( \begin{array}{cc} \int K^2(u)\,du & 0 \\ 0 & \int u^2\,K^2(u)\,du \end{array} \right) ,$  
$\displaystyle b_{f} (x)$ $\displaystyle =$ $\displaystyle A^{-1} \frac{\beta^2\,f''(x)}{2} \left( \begin{array}{c} \sigma_{K}^2 \\ 0 \end{array} \right) = \left( \begin{array}{c} \sigma_{K}^2 \beta^2 f''(x)/2 \\ 0 \end{array} \right),$  
$\displaystyle b_{g} (x)$ $\displaystyle =$ $\displaystyle A^{-1} \frac{\beta^2\,g''(x)}{2} \left( \begin{array}{c} \sigma_{K}^2 \\ 0 \end{array} \right) = \left( \begin{array}{c} \sigma_{K}^2 \beta^2 g''(x)/2 \\ 0 \end{array} \right),$  
$\displaystyle {\cal D}$ $\displaystyle =$ $\displaystyle \left( \begin{array}{cc} \int K^2(u)\,du & 0 \\ 0 & \frac{1}{\sigma_{K}^4} \int u^2\,K^2(u)\,du \end{array} \right),$  

and thus

$\displaystyle b_{v}(x) = \frac{\sigma_{K}^2 \beta^2}{2} \Big\{ \big( f^2 + s^2 \big)''(x) - 2 f(x)\, f''(x) \Big\} = \frac{\sigma_{K}^2 \beta^2}{2} \Big[ v''(x) + 2\big\{ f'(x)\big\}^2 \Big] $

and

$\displaystyle \sigma_{v}^2(x) = \frac{s^4(x) m_{4}}{\beta \gamma(x)} \; \int K^2(u)\,du = \frac{v^2(x) m_{4}}{\beta \gamma(x)} \; \int K^2(u)\,du.$    


In particular, for the normalized quadratic error of $ \hat{v}_n$, calculated from the asymptotic distribution, we have

$\displaystyle {\mathop{\text{\rm\sf E}}}\big[n^{2l/(2l+1)}\big(\hat{v}_{n}(x) - v(x)\big)^2\big]$ $\displaystyle \approx$ $\displaystyle b_{v}^2(x) + \sigma_{v}^2(x)$  
  $\displaystyle =$ $\displaystyle \frac{v^2(x) m_{4}}{\beta \gamma(x)} \; \int K^2(u)\,du$  
    $\displaystyle + \frac{\sigma_{K}^4 \beta^4}{4} \Big\{ v''(x) + 2\big(f'(x)\big)^2
\Big\}^2.$  

Minimizing this expression with respect to $ K$ and $ \beta$ results in the Epanechnikov kernel

$\displaystyle K(u) = K^\ast (u) = \frac{3}{4}\, \big( 1 - u^2 \big)\, \boldsymbol{1}( 1 - u^2 >0)
$

and the following values for $ \beta$:

$\displaystyle \beta(K) = \left( \frac{v^2(x) \, m_{4} \, \int K^2(u)\,du}
{\gamma(x) \, \sigma_{K}^4 \, \big[ v''(x) + 2 \{f'(x)\}^2 \big]^2}
\right) ^{1/5}.
$

With this we obtain

$\displaystyle \beta^\ast = \beta(K^\ast) = \left( \frac{15 \, v^2(x) \, m_{4}}
{\gamma(x) \, \big[ v''(x) + 2 \{f'(x)\}^2 \big]^2} \right) ^{1/5},
$

since $ \int (K^\ast)^2(u)\,du = 3/5$ and $ \sigma_{K^\ast}^4 = 1/25$.
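The kernel constants entering $ \beta(K)$ can be verified numerically; in the sketch below only $ \int K^2(u)\,du = 3/5$ and $ \sigma_K^2 = 1/5$ are properties of $ K^\ast$ itself, while the model-dependent quantities $ v(x)$, $ m_4$, $ \gamma(x)$ and the curvature term are hypothetical plug-in values:

```python
import numpy as np

# Kernel constants of the Epanechnikov kernel K*(u) = (3/4)(1-u^2) on [-1,1].
u = np.linspace(-1.0, 1.0, 200_001)
du = u[1] - u[0]
K = 0.75 * (1.0 - u**2)
int_K = np.sum(K) * du            # = 1 (K* integrates to one)
int_K2 = np.sum(K**2) * du        # = 3/5
sigma2_K = np.sum(u**2 * K) * du  # = 1/5
print(int_K, int_K2, sigma2_K)

# Plug-in evaluation of beta(K*) for hypothetical values of v(x), m4,
# gamma(x) and the curvature term v''(x) + 2 {f'(x)}^2:
v_x, m4, gamma_x, curv = 0.8, 2.0, 0.3, 1.5
beta_star = (v_x**2 * m4 * int_K2 / (gamma_x * sigma2_K**2 * curv**2)) ** 0.2
print(beta_star)
```

In practice the unknown quantities $ v$, $ \gamma$, $ v''$ and $ f'$ in $ \beta(K)$ must themselves be estimated, e.g. by pilot estimators, before such a plug-in bandwidth can be used.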