12.3 ARMA Models

The ARMA($p,q$) model is defined as

$\displaystyle X_t = \nu + \alpha_1 X_{t-1}+\ldots+\alpha_p X_{t-p} + \beta_1 \varepsilon_{t-1} + \ldots + \beta_q \varepsilon_{t-q} + \varepsilon_t,$ (12.16)

or as

$\displaystyle \alpha(L) X_t = \nu + \beta(L)\varepsilon_t$

with the moving average lag-polynomial $\beta(L) = 1 + \beta_1 L + \ldots + \beta_q L^q$ and the autoregressive lag-polynomial $\alpha(L) = 1 - \alpha_1 L - \ldots - \alpha_p L^p$. In order for the process (12.16) to have a unique parameterization, it is required that the characteristic polynomials $\alpha(z)$ and $\beta(z)$ have no common roots. The process (12.16) is stationary when all roots of the characteristic equation $\alpha(z) = 0$ lie outside of the unit circle. In this case (12.16) has, setting $\nu = 0$ for notational simplicity, the MA($\infty$) representation

$\displaystyle X_t = \alpha^{-1}(L)\beta(L)\varepsilon_t.$
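As a small numerical illustration (not part of the text's derivation), the following NumPy sketch checks the stationarity condition via the roots of $\alpha(z)$ and recovers the first MA($\infty$) weights $\psi_j$ from the relation $\alpha(L)\psi(L) = \beta(L)$; the ARMA(2,1) coefficients used here are arbitrary example values.

import numpy as np

# Illustrative ARMA(2,1): alpha_1 = 0.5, alpha_2 = 0.2, beta_1 = 0.4 (assumed values)
alpha = np.array([0.5, 0.2])   # autoregressive coefficients alpha_1, ..., alpha_p
beta = np.array([0.4])         # moving average coefficients beta_1, ..., beta_q

# Stationarity: all roots of alpha(z) = 1 - alpha_1 z - ... - alpha_p z^p must lie
# outside the unit circle; np.roots expects coefficients from the highest power down.
roots = np.roots(np.concatenate((-alpha[::-1], [1.0])))
print("stationary:", bool(np.all(np.abs(roots) > 1)))

# MA(infinity) weights from alpha(L) psi(L) = beta(L):
# psi_0 = 1,  psi_j = beta_j + sum_{i=1}^{min(j,p)} alpha_i psi_{j-i}
n = 20
psi = np.zeros(n + 1)
psi[0] = 1.0
for j in range(1, n + 1):
    b_j = beta[j - 1] if j <= len(beta) else 0.0
    psi[j] = b_j + sum(alpha[i - 1] * psi[j - i] for i in range(1, min(j, len(alpha)) + 1))
print(psi[:6])   # first few weights of X_t = sum_j psi_j eps_{t-j}

The same weights can also be obtained, for instance, with statsmodels' arma2ma function, which uses the sign convention ar = [1, -alpha_1, ..., -alpha_p], ma = [1, beta_1, ..., beta_q].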

The process $X_t$ in (12.16) is invertible when all roots of the characteristic equation $\beta(z) = 0$ lie outside of the unit circle. In this case (12.16) can be written as

$\displaystyle \beta^{-1}(L)\alpha(L)X_t = \varepsilon_t,$

that is, an AR($\infty$) process. Thus every stationary, invertible ARMA($p,q$) process can be approximated by a pure AR or MA process of sufficiently high order. Conversely, an ARMA($p,q$) process can often capture the same dynamics with far fewer parameters, i.e., it offers a parsimonious parameterization.
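To illustrate both the AR($\infty$) inversion and the parsimony argument, the sketch below (again NumPy, with arbitrary example coefficients) computes the weights $\pi_j$ in $X_t = \sum_{j \ge 1} \pi_j X_{t-j} + \varepsilon_t$ by matching powers of $L$ in $\beta(L)\,\pi(L) = \alpha(L)$, where $\pi(L) = 1 - \sum_{j \ge 1} \pi_j L^j$. For an ARMA(1,1) the two parameters $(\alpha_1, \beta_1)$ generate infinitely many, geometrically decaying AR coefficients $\pi_j = (\alpha_1 + \beta_1)(-\beta_1)^{j-1}$.

import numpy as np

def ar_inf_weights(alpha, beta, n=20):
    # Coefficients c_j of pi(L) = beta(L)^{-1} alpha(L), obtained by matching
    # powers of L in beta(L) * pi(L) = alpha(L); then pi_j = -c_j for j >= 1.
    alpha, beta = np.asarray(alpha, float), np.asarray(beta, float)
    c = np.zeros(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        a_j = -alpha[j - 1] if j <= len(alpha) else 0.0
        c[j] = a_j - sum(beta[i - 1] * c[j - i] for i in range(1, min(j, len(beta)) + 1))
    return -c[1:]

# Illustrative ARMA(1,1) with alpha_1 = 0.5, beta_1 = 0.4 (assumed values)
print(ar_inf_weights([0.5], [0.4], n=8))   # 0.9, -0.36, 0.144, ...

Truncating this expansion at a finite lag gives the pure AR approximation mentioned above; the slower the decay of the $\pi_j$ (i.e., the closer $|\beta_1|$ is to one), the higher the AR order required, which is precisely the situation in which the two-parameter ARMA form is most economical.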