12.5 Estimation Moments

In the following we assume a stationary stochastic process $X_t$, i.e., $\mathop{\text{\rm\sf E}}[X_t]=\mu$ and $\mathop{\text{\rm Cov}}(X_t,X_{t+\tau})=\gamma_{\tau}$. In the previous sections we assumed that the process was known, so that its moment functions were known as well. In practice one observes only a realization of the process, $X_1,\ldots,X_n$, and thus faces the problem of estimating the moment functions.

12.5.1 Estimation of the Mean Function

The parameter $ \mu = \mathop{\text{\rm\sf E}}[X_t]$ can be estimated with the simple arithmetic sample mean:

$\displaystyle \bar{X}_n = 1/n \sum_{i=1}^n X_i.$ (12.20)

The estimator $ \bar{X}_n$ is unbiased since it holds that $ \mathop{\text{\rm\sf E}}[\bar{X}_n]=\mu$, and its variance is
$\displaystyle \mathop{\text{\rm Var}}(\bar{X}_n)
= \mathop{\text{\rm Var}}\Big(\frac{1}{n} \sum_{i=1}^n X_i\Big)
= \frac{1}{n^2} \sum_{t=1}^n \sum_{s=1}^n \mathop{\text{\rm Cov}}(X_t,X_s)
= \frac{1}{n^2} \sum_{t=1}^n \sum_{s=1}^n \gamma_{t-s}
= \frac{1}{n} \sum_{\tau=-(n-1)}^{n-1} \frac{n-\vert\tau\vert}{n}\, \gamma_\tau.$

When the autocovariance function $\gamma_\tau$ is absolutely summable, it holds that $\mathop{\text{\rm Var}}(\bar{X}_n) < \infty$ and $\lim_{n\rightarrow \infty} \mathop{\text{\rm Var}}(\bar{X}_n)=0$. The estimator $\bar{X}_n$ is then also a consistent estimator for $\mu$. In many cases there are more efficient estimators which exploit the correlation structure of the process.

The asymptotic variance

$\displaystyle \lim_{n\rightarrow \infty} n\mathop{\text{\rm Var}}(\bar{X}_n) = \gamma_0 + 2\sum_{\tau=1}^{\infty} \gamma_\tau$

is denoted as $ f(0)$, since this is exactly the spectral density at frequency zero. Under the absolute summability of $ \gamma_\tau$ the following asymptotic distribution for the estimator holds:

$\displaystyle \sqrt{n}(\bar{X}_n - \mu) \stackrel{{\cal L}}{\longrightarrow} \text{\rm N}(0,f(0)).$ (12.21)
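To make the limit concrete, here is a small numerical sketch. The AR(1) model and all parameter values are our own illustrative choices, not from the text: for an AR(1) process with coefficient $\varphi$, the autocovariances $\gamma_\tau = \sigma^2\varphi^{|\tau|}/(1-\varphi^2)$ are absolutely summable and $f(0)=\gamma_0(1+\varphi)/(1-\varphi)$.

```python
import numpy as np

# Illustrative sketch (AR(1) model and parameters are our own choice, not
# from the text): gamma_tau = sigma^2 * phi^|tau| / (1 - phi^2).
phi, sigma = 0.5, 1.0

def gamma(tau):
    return sigma**2 * phi**abs(tau) / (1 - phi**2)

def nvar_mean(n):
    # exact finite-sample value:
    # n * Var(X_bar_n) = sum_{|tau| < n} (1 - |tau|/n) * gamma_tau
    taus = np.abs(np.arange(-(n - 1), n))
    return float(np.sum((1 - taus / n) * sigma**2 * phi**taus / (1 - phi**2)))

# limit: f(0) = gamma_0 + 2 * sum_{tau>=1} gamma_tau = gamma_0 * (1+phi)/(1-phi)
f0 = gamma(0) * (1 + phi) / (1 - phi)
print(nvar_mean(10), nvar_mean(1000), f0)  # finite-sample values approach f(0)
```

For these parameters $f(0)=4$, and $n\mathop{\text{\rm Var}}(\bar{X}_n)$ approaches it from below, since every term in the finite sum carries the damping factor $1-|\tau|/n$.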

12.5.2 Estimation of the Covariance Function

A possible estimator of the covariance function $ \gamma_\tau$ is

$\displaystyle \hat{\gamma}_{\tau,n} = \frac{1}{n} \sum_{t=1}^{n-\tau} (X_t-\bar{X}_n) (X_{t+\tau}-\bar{X}_n)$ (12.22)

with the mean estimator $\bar{X}_n$ from (12.20). Instead of dividing by $n$ in (12.22) one could also divide by $n-\tau$, although the estimator would then have less favorable properties. The estimator $\hat{\gamma}_{\tau,n}$ is no longer unbiased, since the following can be shown:

$\displaystyle \mathop{\text{\rm\sf E}}[\hat{\gamma}_{\tau,n}] =
\left(1-\frac{\tau}{n}\right)\gamma_\tau -
\left(1-\frac{\tau}{n}\right)\mathop{\text{\rm Var}}(\bar{X}_n) + {\mathcal{O}}(n^{-2}).
$

Positive autocovariances are in general underestimated by $\hat{\gamma}_{\tau,n}$. Asymptotically, however, $\hat{\gamma}_{\tau,n}$ is unbiased: $\lim_{n\rightarrow \infty} \mathop{\text{\rm\sf E}}[\hat{\gamma}_{\tau,n}] = \gamma_\tau$. For the variance, ignoring terms of higher order, it holds that

$\displaystyle \mathop{\text{\rm Var}}(\hat{\gamma}_{\tau,n}) = \frac{1}{n}
\sum_{j=-\infty}^{\infty} \left(\gamma_j^2 + \gamma_{j-\tau}\gamma_{j+\tau}\right)
+ {\scriptstyle \mathcal{O}}(n^{-1})
= \frac{1}{n}\,\sigma^2_{\tau,\infty} + {\scriptstyle \mathcal{O}}(n^{-1})
$

and since $\lim_{n\rightarrow \infty}\mathop{\text{\rm Var}}(\hat{\gamma}_{\tau,n})=0$, $\hat{\gamma}_{\tau,n}$ is a consistent estimator for $\gamma_\tau$. Furthermore, it can be shown that the covariance estimator behaves asymptotically like a normally distributed random variable:

$\displaystyle \sqrt{n}(\hat{\gamma}_{\tau,n} - \gamma_\tau) \stackrel{{\cal L}}{\longrightarrow} \text{\rm N}(0, \sigma^2_{\tau,\infty}).
$
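The estimator (12.22) can be sketched directly in code (the simulated white-noise data are our own illustrative choice); note that at lag zero it reduces to the biased sample variance:

```python
import numpy as np

# Direct implementation of the covariance estimator (12.22); the simulated
# white-noise data below are our own illustrative choice.
def acovf(x, tau):
    # gamma_hat_{tau,n} = (1/n) * sum_{t=1}^{n-tau} (x_t - xbar)(x_{t+tau} - xbar)
    n = len(x)
    xc = x - x.mean()
    return np.sum(xc[: n - tau] * xc[tau:]) / n  # divide by n, not n - tau

rng = np.random.default_rng(1)
x = rng.normal(size=500)
print(acovf(x, 0), x.var())  # at lag 0 this equals the (biased) sample variance
```

Dividing by $n$ rather than $n-\tau$ keeps the estimated autocovariance sequence positive semidefinite, which is one reason the text prefers this divisor despite the extra bias.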

12.5.3 Estimation of the ACF

An obvious estimator for the ACF $ \rho_\tau$ is

$\displaystyle \hat{\rho}_{\tau,n} = \frac{\hat{\gamma}_{\tau,n}}{\hat{\gamma}_{0,n}}.$ (12.23)
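A minimal implementation of (12.23), built on the covariance estimator (12.22). The MA(1) data are our own illustrative choice; for that model $\rho_1 = \theta/(1+\theta^2)$ and $\rho_\tau = 0$ for $\tau > 1$:

```python
import numpy as np

# Sketch of the ACF estimator (12.23): rho_hat_{tau,n} = gamma_hat_tau / gamma_hat_0.
def acf(x, max_lag):
    n = len(x)
    xc = x - x.mean()
    gamma = np.array([np.sum(xc[: n - k] * xc[k:]) / n for k in range(max_lag + 1)])
    return gamma / gamma[0]

rng = np.random.default_rng(2)
theta = 0.6                       # illustrative MA(1): X_t = eps_t + theta * eps_{t-1}
eps = rng.normal(size=5001)
x = eps[1:] + theta * eps[:-1]
print(acf(x, 3))                  # rho_1 should be near theta/(1+theta^2) ~ 0.441
```
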

Once again we have a bias of order $ 1/n$, i.e.,

$\displaystyle \mathop{\text{\rm\sf E}}(\hat{\rho}_{\tau,n}) = \rho_\tau + {\mathcal{O}}(n^{-1})
$

and $ \hat{\rho}_{\tau,n}$ is asymptotically unbiased. For the variance it holds that

$\displaystyle \mathop{\text{\rm Var}}(\hat{\rho}_{\tau,n}) = \frac{1}{n} \Sigma_{\rho,\tau\tau} + {\mathcal{O}}(n^{-2}).
$

The estimator $\hat{\rho}_{\tau,n}$ is consistent, since $\lim_{n\rightarrow \infty} \mathop{\text{\rm Var}}(\hat{\rho}_{\tau,n}) = 0$. For the asymptotic distribution of the vector $\hat{\rho}_{(k),n}= (\hat{\rho}_{1,n},\ldots,\hat{\rho}_{k,n})^\top$ it can be shown that

$\displaystyle \sqrt{n}(\hat{\rho}_{(k),n}-\rho_{(k)}) \stackrel{{\cal L}}{\longrightarrow} \text{\rm N}(0,\Sigma_\rho)
$

with covariance matrix $\Sigma_\rho$, whose typical element is

$\displaystyle \Sigma_{\rho,kl} = \sum_{j=-\infty}^{\infty}\rho_j\rho_{j+k+l} +
\sum_{j=-\infty}^{\infty}\rho_j\rho_{j+k-l} +
2\rho_k\rho_l \sum_{j=-\infty}^{\infty}\rho_j^2 -
2\rho_l\sum_{j=-\infty}^{\infty}\rho_j \rho_{j+k} -
2\rho_k\sum_{j=-\infty}^{\infty}\rho_j\rho_{j+l}.$

In particular, for the asymptotic variance of $\sqrt{n}(\hat{\rho}_{\tau,n}-\rho_{\tau})$ it holds that

$\displaystyle \Sigma_{\rho,\tau\tau} = \sum_{j=-\infty}^{\infty}\rho_j\rho_{j+2\tau}
+ \sum_{j=-\infty}^{\infty}\rho_j^2
+ 2\rho_\tau^2 \sum_{j=-\infty}^{\infty}\rho_j^2 -
4\rho_\tau\sum_{j=-\infty}^{\infty}\rho_j \rho_{j+\tau}.$

Example 12.5 (MA(q))  
For the MA(q) process in (12.1) we know that $\rho_\tau=0$ for all $\tau>q$. Thus for $\tau>q$ the asymptotic variance of $\sqrt{n}(\hat{\rho}_{\tau,n}-\rho_{\tau})$ simplifies to

$\displaystyle \Sigma_{\rho,\tau\tau} = 1 + 2 \sum_{i=1}^q \rho_i^2.
$
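A quick Monte Carlo check of this simplification. The MA(1) model, parameters, and simulation sizes are our own illustrative choices; with $q=1$ and $\tau=2>q$, the variance of $\sqrt{n}\,\hat{\rho}_{2,n}$ should be close to $1+2\rho_1^2$:

```python
import numpy as np

# Monte Carlo sketch (model and parameters are our own illustrative choices):
# for MA(1) with tau = 2 > q = 1, n * Var(rho_hat_2) should approach 1 + 2*rho_1^2.
rng = np.random.default_rng(3)
theta, n, reps = 0.6, 1000, 2000
rho1 = theta / (1 + theta**2)
target = 1 + 2 * rho1**2

rhohat2 = np.empty(reps)
for r in range(reps):
    eps = rng.normal(size=n + 1)
    x = eps[1:] + theta * eps[:-1]      # simulate one MA(1) path
    xc = x - x.mean()
    g0 = np.sum(xc * xc) / n            # gamma_hat_0
    g2 = np.sum(xc[:-2] * xc[2:]) / n   # gamma_hat_2
    rhohat2[r] = g2 / g0

print(n * rhohat2.var(), target)        # the two values should be close
```
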

Example 12.6 (white noise)  
If $ X_t$ is white noise, it holds that

$\displaystyle \mathop{\text{\rm\sf E}}(\hat{\rho}_{\tau,n})=-\frac{1}{n}+{\mathcal{O}}(n^{-2})$

and

$\displaystyle \mathop{\text{\rm Var}}(\hat{\rho}_{\tau,n})=\frac{1}{n}+{\mathcal{O}}(n^{-2})$

for $\tau \ne 0$. The asymptotic covariance matrix of $\sqrt{n}(\hat{\rho}_{(k),n}-\rho_{(k)})$ is the identity matrix. Using this we can build approximate 95% confidence intervals for the ACF: $[-\frac{1}{n}\pm \frac{2}{\sqrt{n}}]$.
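These bands can be checked by simulation (sample size, number of lags, seed, and replication count are our own illustrative choices):

```python
import numpy as np

# For white noise, rho_hat_tau is approximately N(-1/n, 1/n), so roughly 95%
# of sample autocorrelations should fall inside -1/n +/- 2/sqrt(n).
# (All simulation parameters are our own illustrative choices.)
rng = np.random.default_rng(4)
n, max_lag, reps = 500, 20, 200
lo, hi = -1 / n - 2 / np.sqrt(n), -1 / n + 2 / np.sqrt(n)

inside, total = 0, 0
for r in range(reps):
    x = rng.normal(size=n)
    xc = x - x.mean()
    g0 = np.sum(xc * xc) / n
    for tau in range(1, max_lag + 1):
        rho = np.sum(xc[: n - tau] * xc[tau:]) / n / g0
        inside += lo <= rho <= hi
        total += 1

print(inside / total)  # empirical coverage, should be close to 0.95
```

This is the same diagnostic behind the dashed bands in standard correlogram plots: sample autocorrelations of a white-noise series that stay inside the bands are consistent with no serial correlation.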