The locally time homogeneous approach also appears to be appropriate for estimating the volatility of financial time series.
In order to provide some motivation we first describe the stylized facts of financial time series.
Let $S_t$ denote the price process of a financial asset, such as a stock or an exchange rate; the returns are then defined as $R_t = \log(S_t / S_{t-1})$.
The returns of financial time series are usually modeled by an equation of the form $R_t = \sigma_t \varepsilon_t$, where $\sigma_t$ denotes the volatility and $\varepsilon_t$ the innovations.
A common feature of all the models cited in the previous section is that
they describe the volatility process completely by a finite set of parameters.
The availability of very large samples of financial data has made it possible
to construct models with quite complicated parameterizations in order to
explain all the observed stylized facts. Obviously, those models rely on the assumption
that the parametric structure of the process remains constant through the whole
sample.
This is a nontrivial and possibly dangerous assumption, in particular
as far as forecasting is concerned, as pointed out in Clements and Hendry (1998).
Furthermore, checking for parameter
instability becomes quite difficult if the model is nonlinear and/or the number
of parameters is large. Indeed, those characteristics of the returns which are often
explained by the long memory and (fractionally) integrated nature of the volatility
process could also be due to the parameters being time varying.
We suggest an alternative approach which relies on a
locally time homogeneous parameterization, i.e.
we assume that the volatility follows a jump process and is constant
over some unknown interval of time homogeneity.
The adaptive algorithm presented in the previous sections also
applies in this case; its aim is the data-driven estimation of the
interval of time homogeneity, after which the estimate of the volatility can
be obtained simply by local averaging.
Let $S_t$, $t = 1, 2, \ldots$, be an observed asset process in discrete time,
and let $R_t$ denote the corresponding returns: $R_t = \log(S_t / S_{t-1})$.
We model this process via the conditional heteroscedasticity assumption
$$
R_t = \sigma_t \varepsilon_t , \qquad (15.10)
$$
where $\varepsilon_t$ is a sequence of independent standard Gaussian random variables and $\sigma_t$ is the volatility process, which is measurable with respect to $\mathcal{F}_{t-1}$, the $\sigma$-field generated by the past returns $R_1, \ldots, R_{t-1}$.
The model equation (15.10) links the volatility $\sigma_t$
with the observations $R_t$
via the multiplicative errors $\varepsilon_t$.
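To make the model concrete, the following minimal sketch simulates returns from (15.10) with a piecewise-constant, locally time homogeneous volatility path. The regime lengths and volatility levels are arbitrary illustrative choices, not values taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative piecewise-constant volatility path: sigma_t stays constant
# inside each regime and jumps at the change points, mimicking the local
# time homogeneity assumption.  Levels and change points are made up.
sigma = np.concatenate([
    np.full(300, 0.010),
    np.full(250, 0.025),
    np.full(200, 0.015),
])

# Model (15.10): R_t = sigma_t * eps_t with i.i.d. standard Gaussian eps_t.
eps = rng.standard_normal(sigma.size)
R = sigma * eps
```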
In order to apply the theory presented in Section 15.1 we need a
regression-like model with additive errors.
For this reason we consider the power transformation $|R_t|^\gamma$ with some $\gamma > 0$,
which leads to a regression with additive noise that is close to a Gaussian one,
see Carroll and Ruppert (1988).
Due to (15.10) the random variable $R_t$ is, conditionally on $\mathcal{F}_{t-1}$, Gaussian, and it holds
$$
\mathrm{E}\bigl(|R_t|^\gamma \,\big|\, \mathcal{F}_{t-1}\bigr) = C_\gamma \sigma_t^\gamma ,
\qquad
\mathrm{Var}\bigl(|R_t|^\gamma \,\big|\, \mathcal{F}_{t-1}\bigr) = D_\gamma^2 \sigma_t^{2\gamma} ,
$$
so that
$$
|R_t|^\gamma = C_\gamma \sigma_t^\gamma + D_\gamma \sigma_t^\gamma \zeta_t , \qquad (15.11)
$$
where $C_\gamma = \mathrm{E}|\xi|^\gamma$ and $D_\gamma^2 = \mathrm{Var}|\xi|^\gamma$ for a standard Gaussian $\xi$, and $\zeta_t = (|\varepsilon_t|^\gamma - C_\gamma)/D_\gamma$ is a standardized error with conditional mean zero and variance one.
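As a worked illustration ($\gamma = 1$ is just one possible choice of the power), the constants are $C_1 = \mathrm{E}|\xi| = \sqrt{2/\pi} \approx 0.798$ and $D_1^2 = \mathrm{Var}|\xi| = 1 - 2/\pi \approx 0.363$, so that (15.11) reduces to $|R_t| = \sqrt{2/\pi}\,\sigma_t + \sqrt{1 - 2/\pi}\,\sigma_t \zeta_t$.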
The assumption of local time homogeneity means that
the function $\sigma_t$ is constant within an interval $I = [\tau - m, \tau]$, and
the process $|R_t|^\gamma$ follows the regression-like equation
(15.11) with the constant trend $\theta_I = C_\gamma \sigma_I^\gamma$,
which can be estimated by averaging over this interval $I$:
$$
\widehat{\theta}_I = \frac{1}{|I|} \sum_{t \in I} |R_t|^\gamma .
$$
Under time homogeneity the conditional standard deviation of this estimate is $s_I = D_\gamma \sigma_I^\gamma\, |I|^{-1/2}$.
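A minimal sketch of this local averaging step is given below. It assumes the last $m$ returns form the interval $I$; the choice $\gamma = 0.5$, the function names, and the plug-in formula for the standard deviation (introduced formally further below) are illustrative only.

```python
import numpy as np
from math import gamma as gamma_fn, sqrt, pi

def abs_gauss_moment(p):
    """E|xi|^p for a standard Gaussian xi: 2^(p/2) * Gamma((p+1)/2) / sqrt(pi)."""
    return 2 ** (p / 2) * gamma_fn((p + 1) / 2) / sqrt(pi)

def local_average_estimate(R, m, gam=0.5):
    """Average |R_t|^gam over the last m returns (the assumed interval of
    time homogeneity) to estimate theta_I = C_gamma * sigma_I^gamma, and
    recover the implied volatility level.  gam = 0.5 is illustrative."""
    C = abs_gauss_moment(gam)
    D = sqrt(abs_gauss_moment(2 * gam) - C ** 2)
    window = np.abs(np.asarray(R)[-m:]) ** gam
    theta_hat = window.mean()                 # local average of |R_t|^gamma
    s_hat = (D / C) * theta_hat / sqrt(m)     # plug-in estimate of s_I
    sigma_hat = (theta_hat / C) ** (1 / gam)  # implied volatility level
    return theta_hat, s_hat, sigma_hat
```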
A probability bound analogous to the one in Section 15.1 holds also in this case.
Let the volatility coefficient $\sigma_t$ satisfy the condition $b \le \sigma_t^2 \le b B$
with some positive constants $b$ and $B$.
Then there exists a constant $a_\gamma > 0$ such that
it holds for every $\lambda > 0$
$$
\mathrm{P}\Bigl( |\widehat{\theta}_I - \theta_\tau| > \Delta_I + \lambda s_I \Bigr)
\le 4 \sqrt{e}\, \lambda\, (1 + \log B) \exp\Bigl( -\frac{\lambda^2}{2 a_\gamma} \Bigr) ,
\qquad (15.16)
$$
where $\Delta_I = \sup_{t \in I} |\theta_t - \theta_\tau|$ measures the departure from time homogeneity within the interval $I$.
For practical application one has to substitute the unknown conditional standard deviation $s_I$
with its estimate
$$
\widehat{s}_I = \frac{D_\gamma}{C_\gamma}\, \frac{\widehat{\theta}_I}{\sqrt{|I|}} .
$$
Under the assumption of time homogeneity within an interval $I$,
equation (15.16)
allows us to bound $|\widehat{\theta}_I - \theta_\tau|$
by $\lambda \widehat{s}_I$
for any $\lambda$,
provided that $\lambda$
and $|I|$
are sufficiently large.
Therefore we can apply the same algorithm described in Section 15.1
in order to estimate the largest interval of time homogeneity and the related
value of $\theta_\tau$.
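The sketch below illustrates one way such a data-driven interval search could be organized. The homogeneity test used here, which compares the estimate on a candidate interval with the estimates on its right-end subintervals through the thresholds $\lambda$ and $\mu$, is only a schematic stand-in for the exact rule of Section 15.1, which is not reproduced in this section; `local_average_estimate` refers to the sketch given above, and all default parameter values are illustrative.

```python
def adaptive_interval(R, m0=20, lam=2.5, mu=2.5, gam=0.5):
    """Schematic data-driven search for the largest interval of time
    homogeneity ending at the most recent observation.

    Candidate intervals of geometrically growing length are tested against
    their right-end subintervals; the first rejection stops the search.
    The test below is an illustrative stand-in for the rule of Section 15.1.
    Uses local_average_estimate from the sketch above."""
    n = len(R)
    m = m0
    while 2 * m <= n:
        candidate = 2 * m
        theta_I, s_I, _ = local_average_estimate(R, candidate, gam)
        rejected = False
        for k in range(m0, candidate, m0):   # right-end subintervals of I
            theta_J, s_J, _ = local_average_estimate(R, k, gam)
            if abs(theta_J - theta_I) > lam * s_J + mu * s_I:
                rejected = True
                break
        if rejected:
            break
        m = candidate                        # accept the larger interval
    # estimates over the selected interval of time homogeneity
    return m, local_average_estimate(R, m, gam)
```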
Here, as in the previous section, we are faced with the choice of three tuning
parameters: $m_0$, $\lambda$, and $\mu$.
Simulation studies and repeated trials on real data by Mercurio and Spokoiny (2000) have shown
that the choice of $m_0$ is not particularly critical: it can be selected between
10 and 50 without affecting the overall results of the procedure.
As described in Section 15.2.2, the choice of $\lambda$ and $\mu$
is more delicate.
The influence of $\lambda$ and $\mu$ is similar to that of the smoothing
parameters in nonparametric regression. The likelihood of rejecting a time homogeneous interval
decreases with increasing $\lambda$ and/or $\mu$; this is clear from equation (15.6).
Therefore, if $\lambda$ and $\mu$ are too large the algorithm becomes too conservative,
which increases the bias of the estimator, while too small values of $\lambda$ and $\mu$
lead to frequent rejections and to a high variability of the estimate.
Once again, the optimal values of $\lambda$ and $\mu$ can be chosen
by minimizing the squared forecast error. One has to define a finite
set $\Lambda$ of admissible pairs $(\lambda, \mu)$. Then for each
pair belonging to $\Lambda$ one can compute the corresponding estimate $\widehat{\theta}_\tau(\lambda, \mu)$, and then select the
optimal pair and the corresponding estimate by the following criterion:
$$
(\widehat{\lambda}, \widehat{\mu})
= \arg\min_{(\lambda, \mu) \in \Lambda}
\sum_{\tau} \bigl( |R_{\tau+1}|^\gamma - \widehat{\theta}_\tau(\lambda, \mu) \bigr)^2 .
$$
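As a rough illustration of this selection step, the sketch below evaluates the one-step squared forecast error of $|R_{\tau+1}|^\gamma$ on a grid of $(\lambda, \mu)$ pairs. The grid values, the burn-in length, and the functions `adaptive_interval` and `local_average_estimate` are the illustrative assumptions and sketches introduced earlier in this section, not the exact procedure of the text.

```python
import itertools
import numpy as np

def select_lambda_mu(R, pairs, m0=20, gam=0.5, burn_in=100):
    """Pick (lambda, mu) by minimizing the one-step squared forecast error
    of |R_{t+1}|^gamma, using the illustrative adaptive_interval sketch."""
    R = np.asarray(R)
    best_pair, best_err = None, np.inf
    for lam, mu in pairs:
        err = 0.0
        for t in range(burn_in, len(R) - 1):
            _, (theta_hat, _, _) = adaptive_interval(R[: t + 1], m0, lam, mu, gam)
            err += (abs(R[t + 1]) ** gam - theta_hat) ** 2
        if err < best_err:
            best_pair, best_err = (lam, mu), err
    return best_pair

# a small grid of admissible (lambda, mu) pairs; the values are arbitrary
pairs = list(itertools.product([2.0, 2.5, 3.0], [2.0, 2.5, 3.0]))
```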