The ARCH process defined in the previous sections is used as a tool to capture the behaviour of volatility when it is time-varying in high-frequency data. In a wide variety of contexts, however, the information set can also be decisive in specifying a time-varying mean. In this section, we define the information set in terms of the distribution of the errors of a dynamic linear regression model.
The ARCH regression model is obtained by assuming that the mean of the endogenous variable $y_t$ is given by $x_t^\top b$, a linear combination of lagged endogenous and exogenous variables included in the information set $I_{t-1}$, with $b$ a $k \times 1$ vector of unknown parameters. That is to say:

$$y_t \mid I_{t-1} \sim N(x_t^\top b,\; \sigma_t^2), \qquad
\sigma_t^2 = \alpha_0 + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2, \qquad
\varepsilon_t = y_t - x_t^\top b. \tag{6.17}$$
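As a concrete illustration (not part of the original derivation), the following minimal Python sketch simulates the model (6.17) for an ARCH(1) variance and a single exogenous regressor; all parameter values and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
b = np.array([1.0, 0.5])           # regression coefficients (assumed values)
alpha0, alpha1 = 0.2, 0.5          # ARCH(1) parameters, 0 < alpha1 < 1

x = np.column_stack([np.ones(T), rng.normal(size=T)])   # regressors x_t
eps = np.zeros(T)
sigma2 = np.zeros(T)
sigma2[0] = alpha0 / (1.0 - alpha1)                     # start at the unconditional variance

for t in range(T):
    if t > 0:
        sigma2[t] = alpha0 + alpha1 * eps[t - 1] ** 2   # conditional variance sigma_t^2
    eps[t] = np.sqrt(sigma2[t]) * rng.normal()          # eps_t | I_{t-1} ~ N(0, sigma_t^2)

y = x @ b + eps                                         # y_t = x_t' b + eps_t
```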
Under these assumptions, and considering that the regressors include no lagged endogenous variables, the unconditional mean and variance can be derived as:

$$E(y_t) = x_t^\top b, \tag{6.18}$$

$$\operatorname{Var}(y_t) = \sigma^2 = \frac{\alpha_0}{1 - \sum_{i=1}^{q} \alpha_i}. \tag{6.19}$$
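For instance, under an ARCH(1) specification with the (assumed) values $\alpha_0 = 0.2$ and $\alpha_1 = 0.5$, (6.19) gives

$$\operatorname{Var}(y_t) = \frac{0.2}{1 - 0.5} = 0.4,$$

which can be checked against the sample variance of a long simulated series such as the one above.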
Thus, the Gauss-Markov assumptions are satisfied: ordinary least squares is the best linear unbiased estimator for the model, and the variance estimates are unbiased and consistent. However, the OLS estimator does not attain the Cramér-Rao bound.
By using maximum likelihood techniques, it is possible to find a nonlinear estimator that is asymptotically more efficient than the OLS estimator. The log-likelihood function for $b$ and $\omega = (\alpha_0, \alpha_1, \ldots, \alpha_q)^\top$ can be written, ignoring a constant factor, as

$$l = \frac{1}{T} \sum_{t=1}^{T} l_t, \qquad
l_t = -\frac{1}{2} \log \sigma_t^2 - \frac{\varepsilon_t^2}{2 \sigma_t^2}.$$
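A minimal Python sketch of this log-likelihood, assuming the linear ARCH(q) variance of (6.17); the function name and parameter layout are illustrative, and the first $q$ observations are simply dropped rather than conditioned on presample values.

```python
import numpy as np

def arch_loglik(params, y, x, q=1):
    """Average log-likelihood l = (1/T) sum_t l_t, ignoring the constant."""
    k = x.shape[1]
    b, omega = params[:k], params[k:]            # omega = (alpha_0, ..., alpha_q)
    eps = y - x @ b                              # eps_t = y_t - x_t' b
    sigma2 = np.full_like(eps, omega[0])
    for i in range(1, q + 1):
        sigma2[i:] += omega[i] * eps[:-i] ** 2   # sigma_t^2 = alpha_0 + sum_i alpha_i eps_{t-i}^2
    s2, e = sigma2[q:], eps[q:]                  # drop the first q observations
    return np.mean(-0.5 * np.log(s2) - 0.5 * e ** 2 / s2)
```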
The maximum likelihood estimator is found by solving the first order conditions. The derivatives with respect to the variance parameters $\omega$ and the regression parameters $b$ are

$$\frac{\partial l_t}{\partial \omega} = \frac{1}{2 \sigma_t^2}\,\frac{\partial \sigma_t^2}{\partial \omega}\left(\frac{\varepsilon_t^2}{\sigma_t^2} - 1\right), \tag{6.20}$$

$$\frac{\partial l_t}{\partial b} = \frac{\varepsilon_t x_t}{\sigma_t^2} + \frac{1}{2 \sigma_t^2}\,\frac{\partial \sigma_t^2}{\partial b}\left(\frac{\varepsilon_t^2}{\sigma_t^2} - 1\right). \tag{6.21}$$
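The analytic score for $\omega$ in (6.20) is easy to code and to check against a finite-difference derivative of the log-likelihood above. A sketch for the ARCH(1) case, where $\partial \sigma_t^2 / \partial \omega = z_t = (1, \varepsilon_{t-1}^2)^\top$; names are illustrative.

```python
import numpy as np

def omega_score(b, omega, y, x):
    """Average of (6.20) over t, for ARCH(1): (1/2 sigma_t^2) z_t (eps_t^2/sigma_t^2 - 1)."""
    eps = y - x @ b
    z = np.column_stack([np.ones(len(eps) - 1), eps[:-1] ** 2])   # z_t = (1, eps_{t-1}^2)'
    sigma2 = z @ omega                                            # sigma_t^2 = z_t' omega
    e = eps[1:]
    return np.mean(z / (2 * sigma2[:, None]) * (e ** 2 / sigma2 - 1)[:, None], axis=0)
```

Agreement with a numerical gradient of `arch_loglik` at the same point is a useful check on both implementations.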
The Hessian matrix for $\omega$ is given by

$$\frac{\partial^2 l_t}{\partial \omega\, \partial \omega^\top} = -\frac{1}{2 \sigma_t^4}\,\frac{\partial \sigma_t^2}{\partial \omega}\,\frac{\partial \sigma_t^2}{\partial \omega^\top}\,\frac{\varepsilon_t^2}{\sigma_t^2} + \left(\frac{\varepsilon_t^2}{\sigma_t^2} - 1\right)\frac{\partial}{\partial \omega^\top}\left(\frac{1}{2 \sigma_t^2}\,\frac{\partial \sigma_t^2}{\partial \omega}\right). \tag{6.22}$$
Taking into account (6.3), and since the conditional perturbations are uncorrelated, the conditional expectation of the second term in (6.22) vanishes (as $E[\varepsilon_t^2 / \sigma_t^2 \mid I_{t-1}] = 1$), and the information matrix is given by

$$\mathcal{I}_{\omega\omega} = \frac{1}{T} \sum_{t=1}^{T} E\left[\frac{1}{2 \sigma_t^4}\,\frac{\partial \sigma_t^2}{\partial \omega}\,\frac{\partial \sigma_t^2}{\partial \omega^\top}\right]. \tag{6.23}$$
Simple calculus then reveals, for the linear variance function $\sigma_t^2 = z_t^\top \omega$ with $z_t = (1, \varepsilon_{t-1}^2, \ldots, \varepsilon_{t-q}^2)^\top$,

$$\mathcal{I}_{\omega\omega} = \frac{1}{2T} \sum_{t=1}^{T} E\left[\frac{z_t z_t^\top}{\sigma_t^4}\right], \tag{6.24}$$

$$\mathcal{I}_{bb} = \frac{1}{T} \sum_{t=1}^{T} E\left[\frac{x_t x_t^\top}{\sigma_t^2} + \frac{1}{2 \sigma_t^4}\,\frac{\partial \sigma_t^2}{\partial b}\,\frac{\partial \sigma_t^2}{\partial b^\top}\right]. \tag{6.25}$$
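A sample analogue of the block (6.24), replacing the expectation by the empirical average, is a one-liner; a hedged sketch for the ARCH(1) case, with illustrative names:

```python
import numpy as np

def omega_information(b, omega, y, x):
    """Sample analogue of (6.24) for ARCH(1): (1/2T) sum_t z_t z_t' / sigma_t^4."""
    eps = y - x @ b
    z = np.column_stack([np.ones(len(eps) - 1), eps[:-1] ** 2])   # z_t = (1, eps_{t-1}^2)'
    sigma2 = z @ omega
    w = z / sigma2[:, None]                                       # rows z_t / sigma_t^2
    return (w.T @ w) / (2 * len(sigma2))
```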
The off-diagonal block of the information matrix is zero (see Engle, 1982, for the conditions and the proof of this result). As a consequence, we can estimate the vectors $b$ and $\omega$ separately.
The usual estimation method is a two-stage procedure. Initially, we find the OLS estimate

$$\hat b = (X^\top X)^{-1} X^\top y, \tag{6.26}$$

where $X$ denotes the matrix of regressors.
Secondly, given the corresponding residuals $e_t = y_t - x_t^\top \hat b$, we find an initial estimate of $\omega$, replacing $\varepsilon_t$ by $e_t$ in the maximum likelihood variance equations (6.6). In this way, we obtain an approximation of the parameters $b$ and $\omega$.
The previous two steps are repeated until convergence of $\hat b$ and $\hat \omega$ is obtained.
Additionally, the Hessian matrix must be calculated, and conditional expectations must be taken of its elements.
If an ARCH regression model is symmetric and regular, the off-diagonal blocks of the information matrix are zero (see Theorem 4 in Engle, 1982).
Because of the block diagonality of the information matrix, the estimation of $b$ and $\omega$ can be considered separately without loss of asymptotic efficiency.
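The whole two-stage scheme can be sketched compactly. The following Python sketch is an illustrative implementation for the ARCH(1) case, not the exact algorithm of the text: the variance step maximizes the likelihood over $\omega$ numerically (via `scipy.optimize.minimize`), and the mean step is a simple weighted least squares update, which ignores the second term of the score (6.21).

```python
import numpy as np
from scipy.optimize import minimize

def two_stage_arch(y, x, n_iter=5):
    b = np.linalg.lstsq(x, y, rcond=None)[0]      # (6.26): OLS estimate of b
    omega = np.array([np.var(y - x @ b), 0.1])    # crude starting values for (alpha_0, alpha_1)
    for _ in range(n_iter):
        eps = y - x @ b

        def negloglik(om):                        # minus the average log-likelihood in omega
            z = np.column_stack([np.ones(len(eps) - 1), eps[:-1] ** 2])
            s2 = np.clip(z @ om, 1e-12, None)     # keep the variances positive
            return np.mean(0.5 * np.log(s2) + 0.5 * eps[1:] ** 2 / s2)

        omega = minimize(negloglik, omega, method="Nelder-Mead").x

        # weighted least squares update of b given the estimated variances
        z = np.column_stack([np.ones(len(eps) - 1), eps[:-1] ** 2])
        s2 = np.clip(z @ omega, 1e-12, None)
        xw = x[1:] / s2[:, None]                  # rows x_t / sigma_t^2
        b = np.linalg.solve(x[1:].T @ xw, xw.T @ y[1:])
    return b, omega
```

On data simulated as above, `b_hat, omega_hat = two_stage_arch(y, x)` should recover the assumed parameter values approximately.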
Alternatively, we can use an asymptotically equivalent estimator that is based on the scoring algorithm and can be computed with most least squares computer programs.
A test for homoscedasticity in this model follows from the general LM test, where under the restricted model the conditional variance does not depend on the lagged squared errors $\varepsilon_{t-i}^2$, that is, $\alpha_1 = \cdots = \alpha_q = 0$. For a more detailed derivation of this test, see Section 4.4 in Gouriéroux (1997).
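As an illustration, here is a minimal sketch of the most common form of this LM test (the $T \cdot R^2$ regression form of Engle, 1982): regress the squared OLS residuals on a constant and $q$ of their own lags, and compare $T \cdot R^2$ with a $\chi^2_q$ critical value. Names are illustrative.

```python
import numpy as np
from scipy import stats

def arch_lm_test(y, x, q=1):
    e = y - x @ np.linalg.lstsq(x, y, rcond=None)[0]   # OLS residuals
    u = e ** 2
    T = len(u) - q
    Z = np.column_stack([np.ones(T)] +
                        [u[q - i:len(u) - i] for i in range(1, q + 1)])
    v = u[q:]
    g = np.linalg.lstsq(Z, v, rcond=None)[0]           # auxiliary regression
    resid = v - Z @ g
    r2 = 1.0 - resid @ resid / ((v - v.mean()) @ (v - v.mean()))
    lm = T * r2                                        # asymptotically chi2(q) under H0
    return lm, stats.chi2.sf(lm, df=q)
```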