2.3 Stochastic Volatility and Duration Models

Stochastic volatility (SV) models may be used as an alternative to generalized autoregressive conditional heteroskedasticity (GARCH) models as a way to model the time-varying volatility of asset returns. Time series of asset returns feature several stylized facts, the most important being volatility clustering, which produces a slowly decreasing positive autocorrelation function of the squared returns, starting at a low value (about $ 0.15$). Another stylized fact is the excess kurtosis of the return distribution (with respect to the Gaussian distribution). See [7] for a detailed list of the stylized facts and a survey of GARCH models, [47] for a comparative survey of GARCH and SV models, and [26] for a survey of SV models focused on their theoretical foundations and their applications in finance. The first four parts of this section deal with SV models, while in Sect. 2.3.5 we survey similar models for dynamic duration analysis.


2.3.1 Canonical SV Model

The simplest version of a SV model is given by

\begin{displaymath}\begin{split}
 &y_t = \exp(h_t/2) \, u_t\;, \quad u_t \sim N(0,1)\;, \\
 &h_t = \omega + \beta h_{t-1} + \sigma v_t\;, \quad v_t \sim N(0,1)\;,
 \end{split}\end{displaymath} (2.19)

where $ y_t$ is a return measured at time $ t$, $ h_t$ is the unobserved log-volatility of $ y_t$, $ \{u_t\}$ and $ \{v_t\}$ are mutually independent sequences, and $ (\omega,\, \beta,\, \sigma)$ are parameters to be estimated, jointly denoted $ \boldsymbol{\theta}$. The parameter space is $ \mathbb{R} \times (-1,1) \times \mathbb{R}_+$. The restriction on $ \beta$ ensures the strict stationarity of $ y_t$. Estimates of $ \beta$ are typically quite close to $ 1$ (in agreement with the first stylized fact), thus $ \beta$ is a `persistence' parameter of the volatility. The unconditional mean of $ h_t$ is $ \mu=\omega/(1-\beta)$, and the second equation may be parametrized using $ \mu$ by writing $ h_t = \mu + \beta (h_{t-1}-\mu) + \sigma v_t$. Another parametrization removes $ \omega$ from the second equation while writing the first as $ y_t = \tau \exp(h_t/2) \, u_t$, where $ \tau = \exp(\mu/2)$. These different parametrizations are in one-to-one correspondence. Which one to choose is mainly a matter of convenience and of numerical efficiency of the estimation algorithms.

For further use, let $ y$ and $ h$ denote the $ n\times 1$ vectors of observed returns and unobserved log-volatilities, respectively.
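
For concreteness, the following minimal sketch (in Python, not the authors' code) simulates model (2.19); the parameter values and the function name simulate_sv are illustrative assumptions only.

import numpy as np

def simulate_sv(n, omega=-0.02, beta=0.98, sigma=0.15, seed=0):
    """Simulate returns y and log-volatilities h from the canonical SV model (2.19)."""
    rng = np.random.default_rng(seed)
    h = np.empty(n)
    y = np.empty(n)
    h_prev = omega / (1.0 - beta)          # start at the unconditional mean mu
    for t in range(n):
        h[t] = omega + beta * h_prev + sigma * rng.standard_normal()
        y[t] = np.exp(h[t] / 2.0) * rng.standard_normal()
        h_prev = h[t]
    return y, h

y, h = simulate_sv(6107)
print(np.corrcoef(y[1:] ** 2, y[:-1] ** 2)[0, 1])   # positive: volatility clustering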


2.3.2 Estimation

Estimation of the parameters of the canonical SV model may be done by the maximum likelihood (ML) method or by Bayesian inference. Other methods have been used but they will not be considered here; we refer to [26], Sect. 5, for a review. ML and, in principle, Bayesian estimation require computing the likelihood function of an observed sample, which is a difficult task. Indeed, the density of $ \boldsymbol{y}$ given $ \boldsymbol {\theta }$ and an initial condition $ h_0$ (not explicitly written in the following expressions) is a multiple integral whose dimension is equal to the sample size:

$\displaystyle f(\boldsymbol{y}\vert\boldsymbol{\theta}) = \int f(\boldsymbol{y},\boldsymbol{h}\vert\boldsymbol{\theta}) \, {\mathrm{d}}\boldsymbol{h}$ (2.20)
  $\displaystyle = \int f(\boldsymbol{y}\vert\boldsymbol{h},\boldsymbol{\theta})\, f(\boldsymbol{h}\vert\boldsymbol{\theta}) \, {\mathrm{d}}\boldsymbol{h}$ (2.21)
  $\displaystyle = \int \prod_{t=1}^n f(y_t,h_t\vert\boldsymbol{Y}_{t-1},\boldsymbol{H}_{t-1},\boldsymbol{\theta}) \, {\mathrm{d}}\boldsymbol{h}\;,$ (2.22)

where $ \boldsymbol{Y}_t=\{y_i\}_{i=1}^t$ and $ \boldsymbol{H}_t=\{h_i\}_{i=0}^t$. For model (2.19), this is

$\displaystyle \int \prod_{t=1}^n f_N\left(y_t\vert\,0,\mathrm{e}^{h_t}\right) f_N\left(h_t\vert\,\omega + \beta h_{t-1},\sigma^2\right) \, {\mathrm{d}}\boldsymbol{h}\;,$ (2.23)

where $ f_N(x\vert\mu,\sigma^2)$ denotes the normal density function of $ x$, with parameters $ \mu$ and $ \sigma ^2$. An analytical solution to the integration problem is not available. Even a term-by-term numerical approximation by a quadrature rule is precluded: the integral of $ N(0,\exp(h_n)) \times N(\omega + \beta h_{n-1},\sigma^2)$ with respect to $ h_n$ depends on $ h_{n-1}$, and has to be carried over into the previous product, and so on until $ h_1$. This would result in an explosion of the number of function evaluations. Simulation methods are therefore used.
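
As an illustration of how crude simulation behaves, the following sketch evaluates (2.23) by drawing $ R$ trajectories of $ h$ from its AR(1) law and averaging the products of Gaussian densities of $ y_t$; this is the inefficient `natural sampler' estimator discussed at the beginning of Sect. 2.3.2.1 below. Function and argument names are illustrative assumptions.

import numpy as np
from scipy.stats import norm

def naive_mc_loglik(y, omega, beta, sigma, h0=0.0, R=50, seed=0):
    rng = np.random.default_rng(seed)
    logw = np.zeros(R)                      # log of prod_t f_N(y_t | 0, exp(h_t^r))
    h_prev = np.full(R, h0)
    for t in range(len(y)):
        h_t = omega + beta * h_prev + sigma * rng.standard_normal(R)
        logw += norm.logpdf(y[t], loc=0.0, scale=np.exp(h_t / 2.0))
        h_prev = h_t
    m = logw.max()                          # log-sum-exp for numerical stability
    return m + np.log(np.mean(np.exp(logw - m)))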

Two methods directly approximate the likelihood integral (2.20): efficient importance sampling (EIS) and Monte Carlo maximum likelihood (MCML). Another approach, which can only be used for Bayesian inference, works with $ f(\boldsymbol{y},\boldsymbol{h}\vert\boldsymbol{\theta})$ as data density, and produces a posterior joint density for $ \boldsymbol{\theta},\boldsymbol{h}$ given $ \boldsymbol {y}$. The posterior density is simulated by a Markov chain Monte Carlo (MCMC) algorithm, which produces simulated values of $ \boldsymbol {\theta }$ and $ \boldsymbol{h}$. Posterior moments and marginal densities of $ \boldsymbol {\theta }$ are then estimated by their simulated sample counterparts. We now describe each method in turn.

2.3.2.1 EIS ([36])

A look at (2.23) suggests sampling $ R$ sequences $ \{h_t^r \sim N(\omega + \beta h_{t-1}^r,\sigma^2)\}_{t=1}^n$, $ r=1,\ldots,R$, and approximating the integral by $ (1/R)\sum_{r=1}^R \prod_{t=1}^n f_N\left(y_t\vert\,0,\exp(h_t^r)\right)$. This direct method proves to be inefficient: intuitively, the sampled sequences of $ h_t$ are not linked to the observations $ y_t$. To improve upon this, the integral (2.22), which is the convenient expression for presenting EIS, is rewritten as

$\displaystyle \int \prod_{t=1}^n \frac{f(y_t,h_t\vert\boldsymbol{Y}_{t-1},\boldsymbol{H}_{t-1},\boldsymbol{\theta})}{m(h_t\vert\boldsymbol{H}_{t-1},\boldsymbol{\phi}_t)} \prod_{t=1}^n m(h_t\vert\boldsymbol{H}_{t-1},\boldsymbol{\phi}_t) \, {\mathrm{d}}\boldsymbol{h}\;,$ (2.24)

where $ \{m(h_t\vert\boldsymbol{H}_{t-1},\boldsymbol{\phi}_t)\}_{t=1}^n$ is a sequence of importance density functions, indexed by parameters $ \{\boldsymbol{\phi}_t\}$. These importance functions serve to generate $ R$ random draws $ \{h_t^1,h_t^2,\ldots,h_t^R\}_{t=1}^n$, such that the integral is approximated by the sample mean

$\displaystyle \frac{1}{R}\sum_{r=1}^R \prod_{t=1}^n \frac{f(y_t,h_t^r\vert\boldsymbol{Y}_{t-1},\boldsymbol{H}_{t-1}^r,\boldsymbol{\theta})}{m(h_t^r\vert\boldsymbol{H}_{t-1}^r,\boldsymbol{\phi}_t)}\;.$ (2.25)

The essential point is to choose the form of $ m()$ and its auxiliary parameters $ \boldsymbol{\phi}_t$ so as to secure a good match between the product of the $ m(h_t\vert\boldsymbol{H}_{t-1},\boldsymbol{\phi}_t)$ and the product of the $ f(y_t,h_t\vert\boldsymbol{Y}_{t-1},\boldsymbol{H}_{t-1},\boldsymbol{\theta})$, viewed as functions of $ \boldsymbol{h}$. A relevant matching criterion is to choose $ \{\boldsymbol{\phi}_t\}$, for a given family of densities $ m()$, by minimizing the Monte Carlo variance of the sample mean (2.25). The choice of $ \{\boldsymbol{\phi}_t\}$ is explained below, after the choice of $ m()$.

A convenient choice for $ m()$ is the Gaussian family of distributions. A Gaussian approximation to $ f()$, as a function of $ h_t$, given $ y_t$ and $ h_{t-1}$, turns out to be efficient. It can be expressed as proportional to $ \exp(\phi_{1,t}h_t+\phi_{2,t}h_t^2 )$, where $ (\phi_{1,t},\phi_{2,t})=\boldsymbol{\phi}_t$ are the auxiliary parameters. It is convenient to multiply it by $ \exp[-0.5\,\sigma^{-2}(-2m_th_t+h_t^2+m_t^2)]$, where $ m_t=\omega+\beta h_{t-1}$, which comes from the $ N(m_t,\sigma^2)$ term included in $ f(y_t,h_t\vert\boldsymbol{Y}_{t-1},\boldsymbol{H}_{t-1},\boldsymbol{\theta})$. The product of these two exponential functions can be expressed as a Gaussian density $ N(\mu_t,\sigma_t^2)$, where

$\displaystyle \mu_t = \sigma_t^2 \left(m_t/\sigma^2 + \phi_{1,t}\right)\;, \quad \sigma_t^2 = \sigma^2/\left(1-2\sigma^2\phi_{2,t}\right)\;.$ (2.26)

The choice of the auxiliary parameters can be split into $ n$ separate problems, one for each $ t$. It amounts to minimizing the sum of squared deviations between $ \ln f(y_t\vert\boldsymbol{Y}_{t-1},\boldsymbol{H}_t^r,\boldsymbol{\theta})$ plus a correction term, see (2.27), and $ \phi_{0,t}+\phi_{1,t}h_t^r+\phi_{2,t}(h_t^r)^2$, where $ \phi_{0,t}$ is an auxiliary intercept term. This problem is easily solved by ordinary least squares. See [36] for a detailed explanation.

Let us summarize the core of the EIS algorithm in three steps (for given $ \boldsymbol {\theta }$ and $ y$):

Step 1: Generate $ R$ trajectories $ \{h_t^r\}$ using the `natural' samplers $ \{N(m_t,\sigma^2)\}$.

Step 2: For each $ t$ (starting from $ t=n$ and ending at $ t=1$), using the $ R$ observations generated in the previous step, estimate by OLS the regression

$\displaystyle -\frac{1}{2}\left[h_t^r+y_t^2 \mathrm{e}^{-h_t^r}+\left(\frac{m_{t+1}^r}{\sigma}\right)^2-\left(\frac{\mu_{t+1}^r}{\sigma_{t+1}^r}\right)^2\right] = \phi_{0,t}+\phi_{1,t}h_t^r+\phi_{2,t}\left(h_t^r\right)^2+\epsilon_t^r\;,$ (2.27)

where $ \epsilon_t^r$ is an error term. For $ t=n$, the dependent variable does not include the last two terms in the square brackets. The superscript $ r$ on $ \mu_{t+1}$, $ \sigma_{t+1}$ and $ m_{t+1}$ indicates that these quantities are evaluated using the $ r$-th trajectory.

Step 3: Generate $ R$ trajectories $ \{h_t^r\}$ using the efficient samplers $ \{N(\mu_t,\sigma_t^2)\}$ and finally compute (2.25).

Steps 1 to 3 should be iterated about five times to improve the efficiency of the approximations. This is done by replacing the natural sampler in Step 1 by the importance functions built in the previous iteration. It is also possible to start Step 1 of the first iteration with a more efficient sampler than the natural one. This is achieved by multiplying the natural sampler by a normal approximation to $ f(y_t\vert h_t,h_{t-1},\boldsymbol{\theta})\propto\exp\{-0.5\,[y_t^2\exp(-h_t)+h_t]\}$. The normal approximation is based on a second-order Taylor series expansion of the argument of the exponential in the previous expression around $ h_t=0$. In this way, the initial importance sampler links $ y_t$ and $ h_t$. This enables one to reduce to three (instead of five) the number of iterations over the three steps. In practical implementations, $ R$ can be set to $ 50$. When computing (2.25) for different values of $ \boldsymbol{\theta}$, such as in a numerical optimizer, it is important to use common random numbers to generate the set of $ R$ trajectories $ \{h_t^r\}$ that serve in the computations.
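
The following compressed sketch puts Steps 1 to 3 together for the canonical SV model. It is a simplified illustration, not the code used in the application below: in particular, the correction term of regression (2.27) is written here from the integrating constant of the period-$ t+1$ Gaussian kernel, and common random numbers are reused across calls through the seed.

import numpy as np

def eis_loglik(y, omega, beta, sigma, h0=0.0, R=50, n_iter=5, seed=0):
    rng = np.random.default_rng(seed)
    n, s2 = len(y), sigma ** 2
    eps = rng.standard_normal((n, R))            # common random numbers
    phi1, phi2 = np.zeros(n), np.zeros(n)        # auxiliary parameters, start at zero

    def draw(return_weights=False):
        # simulate R trajectories from the current samplers N(mu_t, sigma_t^2);
        # with phi = 0 these are the natural samplers of Step 1
        logw = np.zeros(R)
        h = np.empty((n, R))
        h_prev = np.full(R, h0)
        for t in range(n):
            m_t = omega + beta * h_prev
            s2_t = s2 / (1.0 - 2.0 * s2 * phi2[t])          # (2.26)
            mu_t = s2_t * (m_t / s2 + phi1[t])              # (2.26)
            h[t] = mu_t + np.sqrt(s2_t) * eps[t]
            if return_weights:                   # log of f(y_t,h_t|.) / m(h_t|.)
                logw += (-0.5 * (h[t] + y[t] ** 2 * np.exp(-h[t]) + np.log(2 * np.pi))
                         - 0.5 * ((h[t] - m_t) ** 2 / s2 + np.log(2 * np.pi * s2))
                         + 0.5 * ((h[t] - mu_t) ** 2 / s2_t + np.log(2 * np.pi * s2_t)))
            h_prev = h[t]
        return h, logw

    for _ in range(n_iter):                      # Steps 1 and 2, iterated
        h, _ = draw()
        for t in range(n - 1, -1, -1):           # backward OLS regressions (2.27)
            dep = -0.5 * (h[t] + y[t] ** 2 * np.exp(-h[t]))
            if t < n - 1:                        # correction term from period t+1
                m_next = omega + beta * h[t]
                s2_next = s2 / (1.0 - 2.0 * s2 * phi2[t + 1])
                mu_next = s2_next * (m_next / s2 + phi1[t + 1])
                dep -= 0.5 * (m_next ** 2 / s2 - mu_next ** 2 / s2_next)
            X = np.column_stack([np.ones(R), h[t], h[t] ** 2])
            phi1[t], phi2[t] = np.linalg.lstsq(X, dep, rcond=None)[0][1:]

    # Step 3: final draw with the efficient samplers and evaluation of (2.25) on the log scale
    _, logw = draw(return_weights=True)
    m = logw.max()
    return m + np.log(np.mean(np.exp(logw - m)))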

It is also easy to compute, by EIS, filtered estimates of functions of $ h_t$, such as the conditional standard deviation $ \exp(h_t/2)$, conditional on the past returns (but not on the lagged unobserved $ h_t$), given a value of $ \boldsymbol {\theta }$ (such as the ML estimate). Diagnostics on the model specification are then obtained as a byproduct: if the model is correctly specified, $ y_t$ divided by the filtered estimate of $ \exp(h_t/2)$ is a residual that has zero mean, unit variance, and is serially uncorrelated (this also holds for the squared residual).

[43] contains a general presentation of EIS and its properties.

2.3.2.2 MCML ([20])

The likelihood to be computed at $ \boldsymbol {y}$ (the data) and any given $ \boldsymbol {\theta }$ is equal to $ f(\boldsymbol{y}\vert\boldsymbol{\theta})$ and is conveniently expressed as (2.21) for this method. This quantity is approximated by importance sampling with an importance function defined from an approximating model. The latter is obtained by using the state space representation of the canonical SV model (parametrized with $ \tau$):

$\displaystyle \ln y_t^2 = \ln \tau^2 + h_t + \epsilon_t\;,$ (2.28)
$\displaystyle h_t = \beta h_{t-1} + \sigma v_t\;.$ (2.29)

In the canonical SV model, $ \epsilon_t=\ln u_t^2$ is distributed as the logarithm of a $ \chi^2(1)$ random variable. However the approximating model replaces this with a Gaussian distribution (defined below), keeping the state equation unchanged. Therefore, the whole machinery of the Kalman filter is applicable to the approximating model, which is a Gaussian linear state space model. If we denote by $ g(\boldsymbol{h}\vert y,\boldsymbol{\theta})$ the importance function that serves to simulate $ \boldsymbol{h}$ (see below), we have

$\displaystyle f(\boldsymbol{y}\vert\boldsymbol{\theta}) = \int \frac{f(\boldsymbol{y}\vert\boldsymbol{h},\boldsymbol{\theta})\, f(\boldsymbol{h}\vert\boldsymbol{\theta})}{g(\boldsymbol{h}\vert\boldsymbol{y},\boldsymbol{\theta})}\; g(\boldsymbol{h}\vert\boldsymbol{y},\boldsymbol{\theta}) \, {\mathrm{d}}\boldsymbol{h}$ (2.30)
  $\displaystyle = g(\boldsymbol{y}\vert\boldsymbol{\theta}) \int \frac{f(\boldsymbol{y}\vert\boldsymbol{h},\boldsymbol{\theta})}{g(\boldsymbol{y}\vert\boldsymbol{h},\boldsymbol{\theta})}\; g(\boldsymbol{h}\vert\boldsymbol{y},\boldsymbol{\theta}) \, {\mathrm{d}}\boldsymbol{h}\;,$ (2.31)

where the second equality results from $ g(\boldsymbol{h}\vert\boldsymbol{y},\boldsymbol{\theta})\, g(\boldsymbol{y}\vert\boldsymbol{\theta}) = g(\boldsymbol{y}\vert\boldsymbol{h},\boldsymbol{\theta})\, g(\boldsymbol{h}\vert\boldsymbol{\theta})$ and $ g(\boldsymbol{h}\vert\boldsymbol{\theta}) = f(\boldsymbol{h}\vert\boldsymbol{\theta})$. All the densities $ g(\cdot)$ and $ g(\cdot\vert\cdot)$ are defined from the approximating Gaussian model. In particular, $ g(\boldsymbol{y}\vert\boldsymbol{\theta})$ is the likelihood function of the Gaussian linear state space model and is easy to compute by the Kalman filter (see the appendix to [46] for all details). Likewise, $ g(\boldsymbol{y}\vert\boldsymbol{h},\boldsymbol{\theta})$ obtains from the Gaussian densities $ g(\ln y_t^2\vert h_t,\boldsymbol{\theta})$ resulting from (2.28) with $ \epsilon_t \sim N(a_t,s_t^2)$, where $ a_t$ and $ s_t^2$ are chosen so that $ g(\boldsymbol{y}\vert\boldsymbol{h},\boldsymbol{\theta})$ is as close as possible to $ f(\boldsymbol{y}\vert\boldsymbol{h},\boldsymbol{\theta})$. More precisely, $ a_t$ and $ s_t^2$ are chosen so that $ \ln g(\ln y_t^2\vert\widehat h_t,\boldsymbol{\theta})$ and $ \ln f(\ln y_t^2\vert\widehat h_t,\boldsymbol{\theta})$ have equal first and second derivatives, where $ \widehat h_t$ is the smoothed value of $ h_t$ provided by the Kalman filter applied to the approximating model. Remark that this is a different criterion from that used in EIS. Finally, $ g(\boldsymbol{h}\vert\boldsymbol{y},\boldsymbol{\theta})$ can be simulated with the Gaussian simulation smoother of [17].

In brief, the likelihood function is approximated by

$\displaystyle g(\boldsymbol{y}\vert\boldsymbol{\theta}) \frac{1}{R}\sum_{r=1}^R \frac{f(\boldsymbol{y}\vert\boldsymbol{h}^r,\boldsymbol{\theta})}{g(\boldsymbol{y}\vert\boldsymbol{h}^r,\boldsymbol{\theta})}\;,$ (2.32)

where $ \boldsymbol{h}^r=\{h_t^r\}_{t=1}^n$ is simulated independently $ R$ times with the importance sampler and $ g(\boldsymbol{y}\vert\boldsymbol{\theta})$ is computed by the Kalman filter. Equations (2.31) and (2.32) show that importance sampling serves to evaluate the departure of the actual likelihood from the likelihood of the approximating model. In practice, $ R$ is set to $ 250$.

For SML estimation, the approximation (2.32) is transformed into a log-likelihood by taking its logarithm. This induces a bias since the expectation of the log of the sample mean is not the log of the corresponding integral in (2.31). The bias is corrected by adding $ s_w^2/(2R\bar{w}^2)$ to the log of (2.32), where $ s_w^2$ is the sample variance of the ratios $ w^r=f(\boldsymbol{y}\vert\boldsymbol{h}^r,\boldsymbol{\theta})/g(\boldsymbol{y}\vert\boldsymbol{h}^r,\boldsymbol{\theta})$ and $ \bar{w}$ is the sample mean of the same ratios, i.e. $ \bar{w}$ is the sample mean appearing in (2.32). Moreover, [20] use antithetic and control variates to improve the efficiency of the estimator of the log-likelihood function.
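
A minimal sketch of this log-likelihood evaluation, assuming that $ \ln g(\boldsymbol{y}\vert\boldsymbol{\theta})$ (from the Kalman filter) and the $ R$ importance ratios $ w^r$ have already been computed; the function name is a hypothetical label.

import numpy as np

def mcml_loglik(log_g_y, w):
    # log_g_y: scalar ln g(y|theta); w: array of R ratios f(y|h^r,theta)/g(y|h^r,theta)
    R = len(w)
    w_bar = w.mean()
    s2_w = w.var(ddof=1)
    # log of (2.32) plus the bias correction for taking the log of a sample mean
    return log_g_y + np.log(w_bar) + s2_w / (2.0 * R * w_bar ** 2)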

[21] present several generalizations of MCML (e.g. the case where the state variable is non-Gaussian) and develop analogous methods for Bayesian inference.

2.3.2.3 MCMC ([35])

We present briefly the `Mixture Sampler', one of the three algorithms added by [35] to the six algorithms already in the literature at that time (see their paper for references). They approximate the density of $ \epsilon_t=\ln u_t^2$ by a finite mixture of seven Gaussian densities, such that in particular the first four moments of both densities are equal. The approximating density can be written as

$\displaystyle f_a(\epsilon_t) = \sum_{i=1}^7 {\mathrm{Pr}}[s_t=i]\, f(\epsilon_t\vert s_t=i) = \sum_{i=1}^7 {\mathrm{Pr}}[s_t=i]\, f_N\left(\epsilon_t\vert b_i-1.2704,c_i^2\right)\;,$ (2.33)

where $ s_t$ is a discrete random variable, while $ {\mathrm{Pr}}[s_t=i]$, $ b_i$ and $ c_i$ are known constants (independent of $ t$). The constant $ -1.2704$ is the expected value of a  $ \ln \chi^2(1)$ variable.
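
The constant can be checked directly: if $ u\sim N(0,1)$, then $ E[\ln u^2]=\psi(1/2)+\ln 2$, where $ \psi$ is the digamma function. A small numerical check (not from the source):

import numpy as np
from scipy.special import digamma

print(digamma(0.5) + np.log(2.0))                        # -1.2704...
u = np.random.default_rng(0).standard_normal(1_000_000)
print(np.log(u ** 2).mean())                             # simulated counterpart, close to -1.2704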


Table 2.7: Summary of `Mixture Sampler' algorithm
Parameter Conditional posterior or sampling method
$ \boldsymbol{h}$ Gaussian simulation smoother
$ \boldsymbol{s}$ Univariate discrete distribution for each $ s_t$
$ \sigma ^2$ Inverted gamma distribution
$ \beta$ Rejection or Metropolis-Hastings sampler
$ \mu$ Normal distribution

The crux of the algorithm is to add $ \boldsymbol{s}=\{s_t\}_{t=1}^n$ to $ \boldsymbol {\theta }$ and $ \boldsymbol{h}$ in the MCMC sampling space. This makes it possible to sample $ \boldsymbol{h}\vert\boldsymbol{s},\boldsymbol{\theta},\boldsymbol{y}$, $ \boldsymbol{s}\vert\boldsymbol{h},\boldsymbol{y}$ and $ \boldsymbol{\theta}\vert\boldsymbol{h},\boldsymbol{y}$ within a Gibbs sampling algorithm. Remark that $ \boldsymbol{s}$ and $ \boldsymbol {\theta }$ are independent given $ \boldsymbol{h}$ and $ \boldsymbol {y}$. Moreover, $ \boldsymbol{h}$ can be sampled entirely as a vector. The intuition behind this property is that, once $ \boldsymbol{s}$ is known, the relevant term of the mixture (2.33) is known for each observation, and since this is a Gaussian density, the whole apparatus of the Kalman filter can be used. Actually, this is a bit more involved since the relevant Gaussian density depends on $ t$, but an augmented Kalman filter is available for this case.

Sampling $ \boldsymbol{h}$ as one block is a major improvement over previous algorithms, such as in [33], where each element $ h_t$ is sampled individually given the other elements of $ \boldsymbol{h}$ (plus $ \boldsymbol {\theta }$ and $ \boldsymbol {y}$). The slow convergence of such algorithms is due to the high correlations between the elements of $ \boldsymbol{h}$.

[35] write the model in state space form, using $ \mu$ rather than $ \omega$ or $ \tau$ as a parameter, i.e.

$\displaystyle \ln y_t^2 = h_t + \epsilon_t\;,$ (2.34)
$\displaystyle h_t - \mu = \beta (h_{t-1}-\mu) + \sigma v_t\;.$ (2.35)

The `Mixture Sampler' algorithm is summarized in Table 2.7. Notice that once $ \boldsymbol {\theta }$ has been sampled, it is easy to transform the draws of $ \mu$ into equivalent draws of $ \omega$ or $ \tau$ by using the relationships between these parameters. Since inference is Bayesian, prior densities must be specified. For $ \sigma ^2$, an inverted gamma prior density is convenient since the conditional posterior is in the same class and easy to simulate. For $ \beta$, any prior can be used since the conditional posterior is approximated and rejection sampling is used. A beta prior density is advocated by [35]. For $ \mu$, a Gaussian or uninformative prior results in a Gaussian conditional posterior.
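
As an illustration of one row of Table 2.7, the following sketch performs the conditional draw of $ \sigma ^2$ given $ \boldsymbol{h}$, $ \mu$ and $ \beta$: with an inverted gamma IG$ (a,b)$ prior and the state equation (2.35), the conditional posterior is IG$ (a+n/2,\, b+0.5\sum_t e_t^2)$, where $ e_t$ is the state-equation residual. The hyperparameters and the treatment of the initial condition are illustrative assumptions.

import numpy as np

def draw_sigma2(h, mu, beta, a=2.5, b=0.025, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    h_lag = np.concatenate(([mu], h[:-1]))       # crude initialization at the mean
    resid = (h - mu) - beta * (h_lag - mu)       # state-equation residuals sigma*v_t
    shape = a + len(h) / 2.0
    rate = b + 0.5 * np.sum(resid ** 2)
    return 1.0 / rng.gamma(shape, 1.0 / rate)    # draw from the inverted gamma posterior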

[35] also propose an algorithm to compute filtered estimates of $ h_t$, from which model diagnostics can be obtained as described above for EIS.


2.3.3 Application

For illustration, estimates of the canonical SV model parameters are reported in Table 2.8 for a series of $ 6107$ centred daily returns of the Standard and Poor's 500 (SP500) composite price index (period: 02/01/80-30/05/03, source: Datastream). Returns are computed as 100 times the log of the price ratios. The sample mean and standard deviation are equal to $ 0.03618$ and $ 1.0603$, respectively.
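
The return construction described above amounts to the following small sketch, where prices is a hypothetical array of daily closing values of the index:

import numpy as np

def centred_returns(prices):
    r = 100.0 * np.diff(np.log(prices))   # 100 times the log of the price ratios
    return r - r.mean()                   # centring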


Table 2.8: ML and Bayesian estimates of SV model (2.19)
  EIS ($ \omega$) MCML ($ \tau$) MCMC ($ \tau$)
$ \omega/\tau$ $ -0.00524$ ($ 0.00227$) $ 0.863$ ($ 0.0469$) $ 0.864$ ($ 0.0494$)
$ \beta$ $ 0.982$ ($ 0.00385$) $ 0.982$ ($ 0.00389$) $ 0.983$ ($ 0.00382$)
$ \sigma $ $ 0.149$ ($ 0.0138$) $ 0.147$ ($ 0.0135$) $ 0.143$ ($ 0.0139$)
llf $ -8023.98$ $ -8023.80$    
Time 2.36 min 7.56 min 6.23 min
Code Gauss Ox Ox

llf: value of log-likelihood function at the reported estimate;
EIS, MCML, and MCMC are defined in Sect. 2.3.2

We used computer codes provided by the authors cited above. For EIS, we received the code from R. Liesenfeld, for MCML and MCMC we downloaded them from the web site staff.feweb.vu.nl/koopman/sv.

For SML estimation by EIS or MCML, identical initial values ( $ \beta=0.96$, $ \sigma=0.15$, $ \omega=0.02$ or $ \tau=0.01$) and optimization algorithms (BFGS) are used, but in different programming environments. Therefore, the computing times are not fully comparable, although a cautious rule of thumb is that Ox is two to three times faster than Gauss (see [15]). Reported execution times imply that EIS appears to be at least six times faster than MCML. This is a reversal of a result reported by Sandmann and Koopman (1998, p. 289), but they compared MCML with a precursor of EIS implemented by [16]. More importantly, the two methods deliver quasi-identical results.

MCMC results are based on $ 18{,}000$ draws after dropping $ 2000$ initial draws. The posterior means and standard deviations are also quite close to the ML results. The posterior density of $ \sigma $ (computed by kernel estimation) is shown in Fig. 2.1 together with the large sample normal approximation to the density of the ML estimator using the EIS results. The execution time for MCMC is difficult to compare with the other methods since it depends on the number of Monte Carlo draws. It is however quite competitive since reliable results are obtained in no more time than MCML in this example.

Figure 2.1: Posterior density of $ \sigma $ and normal density of the MLE


2.3.4 Extensions of the Canonical SV Model

The canonical model presented in (2.19) is too restrictive to fit the excess kurtosis of many return series. Typically, the residuals of the model reveal that the distribution of $ u_t$ has fatter tails than the Gaussian distribution. The assumption of normality is most often replaced by the assumption that $ u_t \sim t(0,1,\nu)$, which denotes a Student-$ t$ distribution with zero mean, unit variance, and degrees of freedom parameter $ \nu>2$. SML estimates of $ \nu $ are usually between $ 5$ and $ 15$ for stock and foreign currency returns using daily data. Posterior means are larger because the posterior density of $ \nu $ has a long tail to the right.
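
A small sketch of how the standardized Student-$ t$ innovations $ u_t\sim t(0,1,\nu)$ can be drawn: a standard Student-$ t$ variate is rescaled by $ \sqrt{(\nu-2)/\nu}$ so that its variance equals one; the value of $ \nu $ used here is illustrative.

import numpy as np

def standardized_t(nu, size, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    return rng.standard_t(nu, size) * np.sqrt((nu - 2.0) / nu)

u = standardized_t(8.0, 100_000)
print(u.var())   # close to 1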

Several other extensions of the simple SV model presented in (2.19) exist in the literature. The mean of $ y_t$ need not be zero and may be a function of explanatory variables $ x_t$ (often a lag of $ y_t$ and an intercept term). Similarly $ h_t$ may be a function of observable variables ($ z_t$) in addition to its own lags. An extended model along these lines is

\begin{displaymath}\begin{split}
 &y_t = x_t^T\gamma + \exp(h_t/2) \, u_t\;, \\
 &h_t = \omega + z_t^T\alpha + \beta h_{t-1} + \sigma v_t\;.
 \end{split}\end{displaymath} (2.36)

It should be obvious that all these extensions are very easy to incorporate into EIS (see [36]) and MCML (see [46]). Bayesian estimation by MCMC remains quite usable but becomes more demanding in research time, since the algorithm must be tailored to achieve a good level of efficiency of the Markov chain (see [12], in particular pp. 301-302, for such comments).

[12] also include a jump component term $ k_t q_t$ in the conditional mean part to allow for irregular, transient movements in returns. The random variable $ q_t$ is equal to $ 1$ with unknown probability $ \kappa$ and zero with probability $ 1-\kappa$, whereas $ k_t$ is the size of the jump when it occurs. These time-varying jump sizes are assumed to be independent draws of $ \ln (1+k_t) \sim N(-0.5\,\delta^2,\delta^2)$, $ \delta$ being an unknown parameter representing the standard deviation of the jump size. For daily SP500 returns (period: 03/07/1962-26/08/1997) and a Student-$ t$ density for $ u_t$, [12] report posterior means of $ 0.002$ for $ \kappa$, and $ 0.034$ for $ \delta$ (for prior means of $ 0.02$ and $ 0.05$, respectively). This implies that a jump occurs on average every $ 500$ days, and that the variability of the jump size is on average $ {3.4}\,{\%}$. They also find that the removal of the jump component from the model reduces the posterior mean of $ \nu $ from $ 15$ to $ 12$, which corresponds to the fact that the jumps capture some outliers.
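
A minimal sketch of the jump component (with the posterior means above as illustrative values): $ q_t$ is Bernoulli$ (\kappa)$ and $ \ln(1+k_t)\sim N(-0.5\,\delta^2,\delta^2)$, so that $ E(1+k_t)=1$.

import numpy as np

def simulate_jumps(n, kappa=0.002, delta=0.034, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    q = rng.random(n) < kappa                              # jump indicators
    k = np.expm1(rng.normal(-0.5 * delta ** 2, delta, n))  # jump sizes, E(1+k)=1
    return k * q                                           # contribution k_t q_t to the return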

Another extension consists of relaxing the restriction of zero correlation $ (\rho)$ between $ u_t$ and $ v_t$. This may be useful for stock returns for which a negative correlation corresponds to the leverage effect of the financial literature. If the correlation is negative, a drop of $ u_t$, interpreted as a negative shock on the return, tends to increase $ v_t$ and therefore $ h_t$. Hence volatility increases more after a negative shock than after a positive shock of the same absolute value, which is a well-known stylized fact. [46] estimate such a model by MCML, and report $ \widehat \rho = -0.38$ for daily returns of the SP500 index (period: 02/01/80-30/12/87), while [34] do it by Bayesian inference using MCMC and report a posterior mean of $ \rho$ equal to $ -0.20$ on the same data. They use the same reparametrization as in (2.13) to impose that the first diagonal element of the covariance matrix of $ u_t$ and $ \sigma v_t$ must be equal to $ 1$. This covariance matrix is given by

$\displaystyle \Sigma = \begin{pmatrix} 1 & \rho \sigma \\ \rho \sigma & \sigma^2 \end{pmatrix} = \begin{pmatrix} 1 & \psi \\ \psi & \phi^2 +\psi^2 \end{pmatrix}\;,$ (2.37)

where the last matrix is a reparametrization. This makes it possible to use a normal prior on the covariance $ \psi $ and an inverted gamma prior on $ \phi^2$, the conditional variance of $ \sigma v_t$ given $ u_t$. The corresponding conditional posteriors are of the same type, so that simulating these parameters in the MCMC algorithm is easy. This approach can also be used if $ u_t$ has a Student-$ t$ distribution.
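
The reparametrization in (2.37) can be checked in a few lines: $ \psi=\rho\sigma$ and $ \phi^2=\sigma^2(1-\rho^2)$, so the two matrices coincide (the numerical values below are illustrative).

import numpy as np

rho, sigma = -0.38, 0.15
psi, phi2 = rho * sigma, sigma ** 2 * (1.0 - rho ** 2)
Sigma1 = np.array([[1.0, rho * sigma], [rho * sigma, sigma ** 2]])
Sigma2 = np.array([[1.0, psi], [psi, phi2 + psi ** 2]])
print(np.allclose(Sigma1, Sigma2))   # True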

Multivariate SV models are also on the agenda of researchers. [36] estimate by EIS a one-factor model introduced by [47], using return series of four currencies. [35], Sect. 6.6, explain how to deal with the multi-factor model case by extending the MCMC algorithm reviewed in Sect. 2.3.2.


2.3.5 Stochastic Duration and Intensity Models

Models akin to the SV model have been used for dynamic duration analysis by [5] and [3]. The context of application is the analysis of a sequence of time spells between events (also called durations) occurring on stock trading systems like the New York Stock Exchange (NYSE). Time stamps of trades are recorded for each stock on the market during trading hours every day, resulting in an ordered series of durations. Marks, such as the price, the exchanged quantity, the prevailing bid and ask prices, and other observed features may also be available, making it possible to relate the durations to the marks in a statistical model. See [2] for a presentation of the issues.

Let $ 0 = t_0 < t_1 < t_2 < \ldots < t_n$ denote the arrival times and $ d_1, d_2, \ldots, d_n$ the corresponding durations, i.e. $ d_i = t_i-t_{i-1}$. The stochastic conditional duration (SCD) model of [5] is defined as

\begin{displaymath}\begin{split}
 &d_i = \exp(\psi_i) \, u_i\;, \quad u_i \sim D(\gamma)\;, \\
 &\psi_i = \omega + \beta \psi_{i-1} + \sigma v_i\;, \quad v_i \sim N(0,1)\;,
 \end{split}\end{displaymath} (2.38)

where $ D(\gamma)$ denotes some distribution on the positive real line, possibly depending on a parameter $ \gamma$. For example, Bauwens and Veredas use the Weibull distribution and the gamma distribution (both with shape parameter denoted by $ \gamma$). Assuming that the distribution of $ u_i$ is parameterized so that $ E(u_i)=1$, $ \psi_i$ is the logarithm of the unobserved mean of $ d_i$, and is modelled by a Gaussian autoregressive process of order one. It is also assumed that $ \{u_i\}$ and $ \{v_i\}$ are mutually independent sequences. The parameters to be estimated are $ (\omega,\, \beta,\, \sigma,\, \gamma)$, jointly denoted $ \boldsymbol {\theta }$. The parameter space is $ \mathbb{R} \times (-1,1) \times \mathbb{R}_+ \times \mathbb{R}_+$.
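
A minimal sketch simulating the SCD model (2.38) with Weibull innovations rescaled to have unit mean; the parameter values are illustrative (of the order of the estimates reported below).

import numpy as np
from scipy.special import gamma as gamma_fn

def simulate_scd(n, omega=-0.03, beta=0.94, sigma=0.13, gam=1.7, seed=0):
    rng = np.random.default_rng(seed)
    d = np.empty(n)
    psi_prev = omega / (1.0 - beta)                          # unconditional mean of psi_i
    for i in range(n):
        psi_i = omega + beta * psi_prev + sigma * rng.standard_normal()
        u_i = rng.weibull(gam) / gamma_fn(1.0 + 1.0 / gam)   # Weibull scaled so E(u_i) = 1
        d[i] = np.exp(psi_i) * u_i
        psi_prev = psi_i
    return d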

The similarity with the canonical SV model (2.19) is striking. A major difference is the non-normality of $ u_i$ since this is by definition a positive random variable. This feature makes it possible to identify $ \gamma$. Therefore, the estimation methods available for the SV model can be adapted to the estimation of SCD models. [5] have estimated the SCD model by the quasi-maximum likelihood (QML) method, since the first equation of the model may be expressed as $ \ln d_i = \psi_i + \ln u_i$. If $ \ln u_i$ were Gaussian, the model would be a Gaussian linear state space model and the Kalman filter could be directly applied. QML relies on maximizing the likelihood function as if $ \ln u_i$ were Gaussian. The QML estimator is known to be consistent but inefficient relative to the ML estimator which would obtain if the correct distribution of $ \ln u_i$ were used to define the likelihood function. [24], Chap. 3, has studied by simulation the loss of efficiency of QML relative to ML. ML estimation assuming a Weibull distribution is done by applying the EIS algorithm. For a sample size of $ 500$ observations, the efficiency loss ranges from $ 20$ to $ 50\,{\%}$, except for the parameter $ \omega$, for which it is very small. He also applied the EIS method using the same data as in [5]. For example, for a dataset of $ 1576$ volume durations of the Boeing stock (period: September-November 1996; source: TAQ database of NYSE), the ML estimates are $ \widehat \omega=-0.028$, $ \widehat \beta = 0.94$, $ \widehat \sigma^2 = 0.0159$, and $ \widehat \gamma = 1.73$. They imply a high persistence in the conditional mean process (corresponding to duration clustering), a Weibull distribution with an increasing concave hazard function, and substantial heterogeneity. Notice that an interesting feature of the SCD model is that the distribution of $ d_i$ conditional on the past information, but marginalized with respect to the latent process, is a Weibull distribution mixed by a lognormal.

[48] have designed an MCMC algorithm for the SCD model (2.38), assuming a standard exponential distribution for $ u_i$. The design of their MCMC algorithm borrows features from Koopman and Durbin's MCML approach and one of the MCMC algorithms used for the SV model.

As an alternative to modelling the sequence of durations, [3] model directly the arrival times through the intensity function of the point process. Their model specifies a dynamic intensity function as the product of two components: an observable component that depends on past arrival times, and a latent component. The logarithm of the latter is a Gaussian autoregressive process similar to the second equation in (2.19) and (2.38). The observable component may be a Hawkes process ([31]) or an autoregressive intensity model ([45]). When the model is multivariate, there is an observable intensity component specific to each particular point process, while the latent component is common to all of them. Interactions between the processes occur through the observable components and through the common latent component. The latter induces similar dynamics in the particular processes, reflecting the impact of a common factor influencing all of them. Bauwens and Hautsch use intensity-based likelihood inference, with the EIS algorithm to deal with the latent component.

