1.3 Hyperbolic Distributions

In response to remarkable regularities discovered by geomorphologists in the 1940s, [4] introduced the hyperbolic law for modeling the grain size distribution of windblown sand. Excellent fits were also obtained for the log-size distribution of diamonds from a large mining area in South West Africa. Almost twenty years later the hyperbolic law was found to provide a very good model for the distributions of daily stock returns of a number of leading German enterprises ([31,57]), paving the way for its present-day use in stock price modeling ([9]) and market risk measurement ([32]). The name of the distribution derives from the fact that its log-density forms a hyperbola, see Fig. 1.8. Recall that the log-density of the normal distribution is a parabola. Hence the hyperbolic distribution allows for the modeling of heavier tails.

Figure 1.8: Densities and log-densities of hyperbolic (dotted line), NIG (dashed line) and Gaussian (solid line) distributions having the same variance, see (1.32). The name of the hyperbolic distribution is derived from the fact that its log-density forms a hyperbola, which is clearly visible in the right panel (Q: CSAfin07)
\includegraphics[width=10.2cm]{text/4-1/CSAfin07.eps}

The hyperbolic distribution is defined as a normal variance-mean mixture where the mixing distribution is the generalized inverse Gaussian (GIG) law with parameter $ \lambda=1$, i.e. it is conditionally Gaussian, see [4] and [6]. More precisely, a random variable $ Z$ has the hyperbolic distribution if:

$\displaystyle (Z\vert Y) \sim$   N$\displaystyle \left(\mu+\beta Y, Y \right)\;,$ (1.19)

where $ Y$ is a generalized inverse Gaussian GIG$ (\lambda=1,\chi,\psi)$ random variable and N$ (m,s^2)$ denotes the Gaussian distribution with mean $ m$ and variance $ s^2$. The GIG law is a very versatile three-parameter distribution with positive support. It arises in the context of the first passage time of a diffusion process, when the drift and variance of displacement per unit time are dependent upon the current position of the particle. The probability density function of a GIG variable is given by:

$\displaystyle f_{\text{GIG}}(x)=\frac{(\psi/\chi)^{\lambda/2}}{2 \text{K}_{\lambda}\left(\sqrt{\psi\chi}\right)}\, x^{\lambda-1} \mathrm{e}^{ -\frac{1}{2}\left(\chi x^{-1}+ \psi x \right)}\;, \quad x > 0\;,$ (1.20)

with the parameters taking values in one of the ranges: (1)  $ \chi>0,\psi\ge 0$ if $ \lambda<0$, (2)  $ \chi>0,\psi>0$ if $ \lambda=0$ or (3)  $ \chi\ge 0,\psi>0$ if $ \lambda>0$. The generalized inverse Gaussian law has a number of interesting properties that we will use later in this section. The distribution of the inverse of a GIG variable is again GIG, with the sign of $ \lambda $ reversed and the roles of $ \chi$ and $ \psi$ interchanged, namely if:

$\displaystyle Y\sim$GIG$\displaystyle (\lambda,\chi,\psi)$   then$\displaystyle \quad Y^{-1}\sim$GIG$\displaystyle (-\lambda,\psi,\chi)\;.$ (1.21)

A GIG variable can also be reparameterized by setting $ a=\sqrt{\chi/\psi}$ and $ b=\sqrt{\chi\psi}$, and defining $ Y=a\widetilde{Y}$, where:

$\displaystyle \widetilde{Y} \sim$GIG$\displaystyle (\lambda,b,b)\;.$ (1.22)

The normalizing constant K$ _{\lambda}(t)$ in formula (1.20) is the modified Bessel function of the third kind with index $ \lambda $, also known as the MacDonald function. It is defined as:

K$\displaystyle _{\lambda}(t)=\frac{1}{2} \int_0^{\infty} x\,^{\lambda-1} \mathrm{e}^{-\frac{1}{2} t \left(x+x^{-1} \right)} {\text{d}} x\;, \qquad t>0\;.$ (1.23)

In the context of hyperbolic distributions, the Bessel functions are thoroughly discussed in [6]. Here we recall only two properties that will be used later. Namely, (1)  K$ _{\lambda}(t)$ is symmetric with respect to $ \lambda $, i.e. K$ _{\lambda}(t) =$   K$ _{-\lambda}(t)$, and (2) for $ \lambda=\pm\frac12$ it can be written in a simpler form:

K$\displaystyle _{\pm\frac12}(t) = \sqrt{\frac{\pi}{2}} t^{-\frac12} \mathrm{e}^{-t}\;.$ (1.24)

For other values of $ \lambda $ numerical approximations of the integral in (1.23) have to be used, see e.g. [19], [86] or [96].
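For illustration, both the MacDonald function and the GIG density (1.20) are readily evaluated in standard software. The following minimal Python sketch uses SciPy's kv, which implements K$ _{\lambda}$, and checks the two properties recalled above; the function name gig_pdf is ours and not part of any package mentioned in the text.

    import numpy as np
    from scipy.special import kv  # modified Bessel function of the third kind, K_lambda

    def gig_pdf(x, lam, chi, psi):
        """GIG(lambda, chi, psi) density (1.20), for x > 0."""
        norm = (psi / chi) ** (lam / 2) / (2 * kv(lam, np.sqrt(chi * psi)))
        return norm * x ** (lam - 1) * np.exp(-0.5 * (chi / x + psi * x))

    # property (1): K_lambda is symmetric in lambda
    t = 2.5
    assert np.isclose(kv(0.7, t), kv(-0.7, t))
    # property (2): closed form (1.24) for lambda = 1/2
    assert np.isclose(kv(0.5, t), np.sqrt(np.pi / 2) * t ** (-0.5) * np.exp(-t))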

Relation (1.19) implies that a hyperbolic random variable $ Z\sim$   H$ (\psi, \beta, \chi, \mu)$ can be represented in the form:

$\displaystyle Z \sim \mu+\beta Y + \sqrt{Y}$   N$\displaystyle (0,1)\;,$    

with the characteristic function:

$\displaystyle \phi_Z(u) = \mathrm{e}^{iu\mu} \int_0^{\infty} \mathrm{e}^{ i\beta z u - \frac{1}{2}z u^2} {\text{d}} F_Y(z)\;.$ (1.25)

Here $ F_Y(z)$ denotes the distribution function of a generalized inverse Gaussian random variable $ Y$ with parameter $ \lambda=1$, see (1.20). Hence, the hyperbolic PDF is given by:

$\displaystyle f_{\text{H}}(x) = \frac{\sqrt{\psi/\chi}}{2\sqrt{\psi+\beta^2}\,\text{K}_1\left(\sqrt{\psi\chi}\right)}\, \mathrm{e}^{ -\sqrt{\{\psi+\beta^2\}\{\chi+(x-\mu)^2\}} + \beta(x-\mu)}\;.$ (1.26)

Sometimes another parameterization of the hyperbolic distribution with $ \delta=\sqrt{\chi}$ and $ \alpha=\sqrt{\psi+\beta^2}$ is used. Then the probability density function of the hyperbolic H$ (\alpha,\beta,\delta,\mu)$ law can be written as:

$\displaystyle f_{\text{H}}(x) = \frac{\sqrt{\alpha^2 - \beta^2}}{2\alpha\delta\,\text{K}_1\left(\delta\sqrt{\alpha^2-\beta^2}\right)}\, \mathrm{e}^{ -\alpha \sqrt{\delta^2+(x-\mu)^2} + \beta(x-\mu)}\;,$ (1.27)

where $ \delta>0$ is the scale parameter, $ \mu\in\mathbb{R}$ is the location parameter and $ 0\le \vert\beta\vert<\alpha$. The latter two parameters - $ \alpha $ and $ \beta$ - determine the shape, with $ \alpha $ being responsible for the steepness and $ \beta$ for the skewness. In XploRe the hyperbolic density and distribution functions are implemented in the pdfhyp and cdfhyp quantlets, respectively. The calculation of the PDF is straightforward; the CDF, however, has to be obtained by numerically integrating (1.27).
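In the same spirit, the density (1.27) and its numerically integrated CDF can be sketched in a few lines of Python. The names hyp_pdf and hyp_cdf are ours, and the quadrature is plain scipy.integrate.quad rather than whatever scheme the quantlets employ.

    import numpy as np
    from scipy.special import kv
    from scipy.integrate import quad

    def hyp_pdf(x, alpha, beta, delta, mu):
        """Hyperbolic density (1.27) in the (alpha, beta, delta, mu) parameterization."""
        gamma = np.sqrt(alpha ** 2 - beta ** 2)
        norm = gamma / (2 * alpha * delta * kv(1, delta * gamma))
        return norm * np.exp(-alpha * np.sqrt(delta ** 2 + (x - mu) ** 2)
                             + beta * (x - mu))

    def hyp_cdf(x, alpha, beta, delta, mu):
        """CDF by numerical integration of (1.27)."""
        return quad(hyp_pdf, -np.inf, x, args=(alpha, beta, delta, mu))[0]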

The hyperbolic law is a member of a more general class of generalized hyperbolic distributions. The generalized hyperbolic law can be represented as a normal variance-mean mixture where the mixing distribution is the generalized inverse Gaussian (GIG) law with any $ \lambda\in\mathbb{R}$. Hence, the generalized hyperbolic distribution is described by five parameters $ \theta = (\lambda, \alpha, \beta, \delta, \mu)$. Its probability density function is given by:

$\displaystyle f_{\text{GH}}(x) = \kappa \left\{ \delta^2 + (x-\mu)^2 \right\}^{\frac{1}{2}\left(\lambda-\frac{1}{2}\right)} \text{K}_{\lambda-\frac12} \left(\alpha \sqrt{\delta^2 + (x-\mu)^2} \right)\mathrm{e}\,^{\beta(x-\mu)}\;,$ (1.28)

where:

$\displaystyle \kappa = \frac{(\alpha^2 - \beta^2)^{\frac{\lambda}{2}}} {\sqrt{2\pi}\, \alpha^{\lambda-\frac12}\, \delta\,^{\lambda}\, \text{K}_\lambda\left(\delta\sqrt{\alpha^2-\beta^2}\right)}\;.$ (1.29)

For $ \vert\beta+z\vert<\alpha$ its moment generating function takes the form:

$\displaystyle M(z) = \mathrm{e}^{\mu z} \left\{\frac{\alpha^2-\beta^2}{\alpha^2-(\beta+z)^2}\right\}^{\frac{\lambda}{2}} \frac{\text{K}_{\lambda} \left(\delta \sqrt{\alpha^2-(\beta+z)^2} \right)}{\text{K}_{\lambda} \left(\delta \sqrt{\alpha^2-\beta^2} \right)}\;.$ (1.30)

Note that $ M(z)$ is smooth, i.e. infinitely often differentiable, near 0; hence moments of all orders exist. If we set $ \zeta = \delta\sqrt{\alpha^2-\beta^2} = \sqrt{\psi\chi}$ then the first two moments lead to the following formulas for the mean and variance of a generalized hyperbolic random variable:

$\displaystyle \mathbb{E} X = \mu + \frac{\beta\delta^2}{\zeta} \frac{\text{K}_{\lambda+1}(\zeta)}{\text{K}_{\lambda}(\zeta)}\;,$ (1.31)
Var$\displaystyle \,X = \delta^2 \left[ \frac{\text{K}_{\lambda+1}(\zeta)}{\zeta\text{K}_{\lambda}(\zeta)} + \beta^2\delta^2 \left\{ \frac{\text{K}_{\lambda+2}(\zeta)}{\zeta^2\text{K}_{\lambda}(\zeta)} - \left( \frac{\text{K}_{\lambda+1}(\zeta)}{\zeta\text{K}_{\lambda}(\zeta)} \right)^2 \right\} \right]\;.$ (1.32)
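Formulas (1.31) and (1.32) translate directly into code; the sketch below (gh_mean_var is our name) again relies on SciPy's kv.

    import numpy as np
    from scipy.special import kv

    def gh_mean_var(lam, alpha, beta, delta, mu):
        """Mean (1.31) and variance (1.32) of the generalized hyperbolic law."""
        zeta = delta * np.sqrt(alpha ** 2 - beta ** 2)
        r1 = kv(lam + 1, zeta) / (zeta * kv(lam, zeta))       # K_{l+1}/(zeta K_l)
        r2 = kv(lam + 2, zeta) / (zeta ** 2 * kv(lam, zeta))  # K_{l+2}/(zeta^2 K_l)
        mean = mu + beta * delta ** 2 * r1
        var = delta ** 2 * (r1 + beta ** 2 * delta ** 2 * (r2 - r1 ** 2))
        return mean, var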

The normal-inverse Gaussian (NIG) distributions were introduced by [5] as a subclass of the generalized hyperbolic laws obtained for $ \lambda=-\frac12$. The density of the normal-inverse Gaussian distribution is given by:

$\displaystyle f_{\text{NIG}}(x) = \frac{\alpha\delta}{\pi}\, \mathrm{e}^{ \delta \sqrt{\alpha^2-\beta^2} + \beta(x-\mu)}\, \frac{\text{K}_1\left(\alpha \sqrt{\delta^2+(x-\mu)^2}\right)}{\sqrt{\delta^2+(x-\mu)^2}}\;.$ (1.33)

In XploRe the NIG density and distribution functions are implemented in the pdfnig and cdfnig quantlets, respectively. As for the hyperbolic distribution, the calculation of the PDF is straightforward, but the CDF has to be obtained by numerically integrating (1.33).

At the ''expense'' of four parameters, the NIG distribution is able to model symmetric and asymmetric distributions with possibly long tails in both directions. Its tail behavior is often classified as ''semi-heavy'', i.e. the tails are lighter than those of non-Gaussian stable laws, but much heavier than Gaussian. Interestingly, if we let $ \alpha $ tend to zero the NIG distribution converges to the Cauchy distribution (with location parameter $ \mu$ and scale parameter $ \delta$), which exhibits extremely heavy tails. The tail behavior of the NIG density is characterized by the following asymptotic relation:

$\displaystyle f_{\text{NIG}}(x) \approx \vert x\vert^{-3/2}\mathrm{e}^{(\mp\alpha+\beta)x} \quad \text{for} \quad x\rightarrow\pm\infty\;.$ (1.34)

In fact, this is a special case of a more general relation with the exponent of $ \vert x\vert$ being equal to $ \lambda-1$ (instead of $ -3/2$), which is valid for all generalized hyperbolic laws ([6]). Obviously, the NIG distribution may not be adequate to deal with cases of extremely heavy tails such as those of Pareto or non-Gaussian stable laws. However, empirical experience suggests an excellent fit of the NIG law to financial data ([52,62,89,97]). Moreover, the class of normal-inverse Gaussian distributions possesses an appealing feature that the class of hyperbolic laws does not have: it is closed under convolution, i.e. a sum of two independent NIG random variables is again NIG ([5]). In particular, if $ X_1$ and $ X_2$ are independent normal-inverse Gaussian random variables with common parameters $ \alpha $ and $ \beta$ but different scale and location parameters  $ \delta_{1,2}$ and $ \mu_{1,2}$, respectively, then $ X = X_1 + X_2$ is NIG$ (\alpha,\beta,\delta_1+\delta_2,\mu_1+\mu_2)$. This feature is especially useful in the time scaling of risks, e.g. in deriving $ 10$-day risks from daily risks. Only two subclasses of the generalized hyperbolic distributions are closed under convolution. The other class with this important property is the class of variance-gamma (VG) distributions, which is a limiting case obtained for $ \delta\rightarrow 0$. The variance-gamma distributions (with $ \beta = 0$) were introduced to the financial literature by [63].
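The convolution property makes the time scaling explicit: under the (strong) assumption of i.i.d. daily returns, summing $ h$ daily NIG$ (\alpha,\beta,\delta,\mu)$ variables yields NIG$ (\alpha,\beta,h\delta,h\mu)$, so an $ h$-day risk model follows from the daily fit by a one-line reparameterization, e.g.:

    def nig_h_day(alpha, beta, delta, mu, h=10):
        # sum of h i.i.d. NIG variables: alpha and beta are unchanged,
        # the scale and location parameters simply add up
        return alpha, beta, h * delta, h * mu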

1.3.1 Simulation of Generalized Hyperbolic Variables

The most natural way of simulating generalized hyperbolic variables stems from the fact that they can be represented as normal variance-mean mixtures. Since the mixing distribution is the generalized inverse Gaussian law, the resulting algorithm reads as follows (a code sketch is given after the list):

  1. simulate a random variable $ Y \sim$   GIG$ (\lambda,\chi,\psi) =$   GIG$ (\lambda,\delta^2,\alpha^2-\beta^2)$;

  2. simulate a standard normal random variable $ N$, e.g. using the Box-Muller algorithm, see Sect. 1.2.3;

  3. return $ X = \mu + \beta Y + \sqrt{Y}N.$
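The three steps translate almost verbatim into Python. The sketch below (rgh is our name) leans on scipy.stats.geninvgauss, whose two-argument parameterization coincides with the standardized form (1.22), i.e. geninvgauss(p, b) is GIG$ (\lambda=p,b,b)$, so the draw has to be rescaled by $ a=\sqrt{\chi/\psi}$.

    import numpy as np
    from scipy.stats import geninvgauss

    def rgh(n, lam, alpha, beta, delta, mu, rng=None):
        """Generalized hyperbolic variates via the normal variance-mean mixture."""
        rng = np.random.default_rng() if rng is None else rng
        a = delta / np.sqrt(alpha ** 2 - beta ** 2)  # a = sqrt(chi/psi), cf. (1.22)
        b = delta * np.sqrt(alpha ** 2 - beta ** 2)  # b = sqrt(chi*psi)
        y = a * geninvgauss.rvs(lam, b, size=n, random_state=rng)  # step 1
        z = rng.standard_normal(n)                                 # step 2
        return mu + beta * y + np.sqrt(y) * z                      # step 3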
The algorithm is fast and efficient if we have a handy way of simulating generalized inverse Gaussian variates. For $ \lambda=-\frac12$, i.e. when sampling from the so-called inverse Gaussian (IG) distribution, there exists an efficient procedure that utilizes a transformation yielding two roots. It starts with the observation that if we let $ \vartheta = \sqrt{\chi/\psi}$ then the GIG$ (-\frac12,\chi,\psi) =$   IG$ (\chi,\psi)$ density, see (1.20), of $ Y$ can be written as:

$\displaystyle f_{Y}(x) = \sqrt{\frac{\chi}{2\pi x^3}} \exp\left\{ \frac{-\chi (x-\vartheta)^2}{2x \vartheta^2} \right\}\;.$    

Now, following [92] we may write:

$\displaystyle V = \frac{\chi(Y-\vartheta)^2}{Y\vartheta^2} \sim \chi^2_{(1)}\;,$ (1.35)

i.e. $ V$ is distributed as a chi-square random variable with one degree of freedom. As such it can be generated simply by squaring a standard normal random number. Unfortunately, the value of $ Y$ is not uniquely determined by (1.35). Solving this equation for $ Y$ yields two roots:

$\displaystyle y_1 = \vartheta + \frac{\vartheta}{2\chi} \left( \vartheta V - \sqrt{4\vartheta \chi V + \vartheta^2 V^2} \right)$   and$\displaystyle \quad y_2 = \frac{\vartheta^2}{y_1}\;.$    

The difficulty in generating observations with the desired distribution now lies in choosing between the two roots. [74] showed that $ Y$ can be simulated by choosing $ y_1$ with probability $ \vartheta/(\vartheta+y_1)$. So for each random observation $ V$ from a  $ \chi^2_{(1)}$ distribution the smaller root $ y_1$ has to be calculated. Then an auxiliary Bernoulli trial is performed with probability $ p=\vartheta/(\vartheta+y_1)$. If the trial results in a ''success'', $ y_1$ is chosen; otherwise, the larger root $ y_2$ is selected. The rndnig quantlet of XploRe, as well as the rnig function of the Rmetrics collection of software packages for S-plus/R (see also Sect. 1.2.2 where Rmetrics was briefly described), utilize this routine.
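A Python sketch of this routine follows; rig and rnig are our names, mimicking (but not reproducing) the rndnig and rnig functions mentioned above.

    import numpy as np

    def rig(n, chi, psi, rng=None):
        """IG = GIG(-1/2, chi, psi) variates by the two-roots method of [74]."""
        rng = np.random.default_rng() if rng is None else rng
        theta = np.sqrt(chi / psi)                   # vartheta
        v = rng.standard_normal(n) ** 2              # V ~ chi-square, 1 d.o.f.
        y1 = theta + theta / (2 * chi) * (theta * v
             - np.sqrt(4 * theta * chi * v + theta ** 2 * v ** 2))  # smaller root
        p1 = theta / (theta + y1)                    # probability of choosing y1
        return np.where(rng.uniform(size=n) < p1, y1, theta ** 2 / y1)

    def rnig(n, alpha, beta, delta, mu, rng=None):
        """NIG variates: IG mixing law plugged into the mixture representation."""
        rng = np.random.default_rng() if rng is None else rng
        y = rig(n, delta ** 2, alpha ** 2 - beta ** 2, rng)
        return mu + beta * y + np.sqrt(y) * rng.standard_normal(n)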

In the general case, the GIG distribution - as well as the (generalized) hyperbolic law - can be simulated via the rejection algorithm. An adaptive version of this technique is used to obtain hyperbolic random numbers in the rhyp function of Rmetrics. Rejection is also implemented in the HyperbolicDist package for S-plus/R developed by David Scott, see the R-project home page http://cran.r-project.org/. The package utilizes a version of the algorithm proposed by [3], i.e. rejection coupled with either a two-part (''GIG algorithm'' for any admissible value of $ \lambda $) or a three-part envelope (''GIGLT1 algorithm'' for $ 0\le\lambda<1$). Envelopes, also called hat or majorizing functions, provide an upper bound for the PDF of the sampled distribution. The proper choice of such functions can substantially increase the speed of computations, see Chap. II.2. As [3] shows, once the parameter values for these envelopes have been determined, the algorithm efficiency is reasonable for most values of the parameter space. However, finding the appropriate parameters requires numerical optimization and makes the technique burdensome.

This difficulty led to a search for a short algorithm which would give comparable efficiencies but without the drawback of extensive numerical optimizations. A solution, based on the ''ratio-of-uniforms'' method, was provided a few years later by [25]. First, recalling properties (1.21) and (1.22), observe that we only need a method to simulate $ \widetilde{Y} \sim$   GIG$ (\lambda,b,b)$ variables, and only for $ \lambda\ge 0$. Next, define the relocated variable $ \widetilde{Y}_m=\widetilde{Y}-m$, where $ m=\frac{1}{b}\left(\lambda-1+\sqrt{(\lambda-1)^2 + b^2}\right)$ is the mode of the density of  $ \widetilde{Y}$. Then the relocated variable can be generated by taking $ \widetilde{Y}_m=\frac{V}{U}$, where the pair $ (U,V)$ is uniformly distributed over the region $ \{(u,v):0\le u \le \sqrt{h(\frac{v}{u})}\}$ with:

$\displaystyle h(t) = (t+m)^{\lambda-1} \exp\left( -\frac{b}{2} \left(t+m+\frac{1}{t+m}\right)\right)\;,$   for$\displaystyle \quad t\ge-m\;.$    

Since this region is irregularly shaped, it is more convenient to generate the pair $ (U,V)$ uniformly over a minimal enclosing rectangle $ \{(u,v):0\le u \le u_+,$ $ v_-\le v \le v_+ \}$. Finally, the variate $ \frac{V}{U}$ is accepted if $ U^2\le{h(\frac{V}{U})}$. The efficiency of the algorithm depends on the method of deriving and the actual choice of $ u_+$ and $ v_{\pm}$. Further, for $ \lambda\le 1$ and $ b\le 1$ there is no need for the shift at mode $ m$. Such a version of the algorithm is implemented in the *gigru* functions of UNU.RAN, a library of C functions for non-uniform random number generation developed at the Department for Statistics, Vienna University of Economics, see http://statistik.wu-wien.ac.at/unuran/. It is also implemented in the gigru function of the SSC library (a Stochastic Simulation library in C developed originally by Pierre L'Ecuyer, see http://www.iro.umontreal.ca/~lecuyer and Chap. II.2) and in the rndghd quantlet of XploRe.
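A bare-bones version of the ratio-of-uniforms step can be sketched as follows. The bounding rectangle is found here by crude numerical maximization with heuristic search brackets, which the production implementations cited above replace with careful analytic bounds; rgig_rou is our name.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def rgig_rou(n, lam, b, rng=None):
        """GIG(lambda, b, b) variates via ratio-of-uniforms with shift at the mode."""
        rng = np.random.default_rng() if rng is None else rng
        m = (lam - 1 + np.sqrt((lam - 1) ** 2 + b ** 2)) / b  # mode of the density

        def sqrt_h(t):                               # square root of h(t) above
            x = t + m
            if x <= 0:
                return 0.0
            return np.sqrt(x ** (lam - 1) * np.exp(-0.5 * b * (x + 1 / x)))

        u_plus = sqrt_h(0.0)                         # sup sqrt(h) sits at the mode
        t_hi = m + 20 * (1 + 1 / b)                  # heuristic right bracket
        v_plus = -minimize_scalar(lambda t: -t * sqrt_h(t),
                                  bounds=(0.0, t_hi), method='bounded').fun
        v_minus = minimize_scalar(lambda t: t * sqrt_h(t),
                                  bounds=(-m, 0.0), method='bounded').fun
        out, k = np.empty(n), 0
        while k < n:                                 # accept V/U if U^2 <= h(V/U)
            u = rng.uniform(0.0, u_plus)
            v = rng.uniform(v_minus, v_plus)
            if u > 0 and u <= sqrt_h(v / u):
                out[k] = v / u + m
                k += 1
        return out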

1.3.2 Estimation of Parameters

1.3.2.1 Maximum Likelihood Method

The parameter estimation of generalized hyperbolic distributions can be performed by the maximum likelihood method, since there exist closed-form formulas (although, involving special functions) for the densities of these laws. The computational burden is not as heavy as for $ \alpha $-stable laws, but it still is considerable.

In general, the maximum likelihood estimation algorithm is as follows. For a vector of observations $ x=(x_1,\ldots,x_n)$, the ML estimate of the parameter vector $ \theta = (\lambda, \alpha, \beta, \delta, \mu)$ is obtained by maximizing the log-likelihood function:

$\displaystyle L(x;\theta) = n \log \kappa + \frac{\lambda-\frac12}{2} \sum_{i=1}^n \log \left\{\delta^2 + (x_i-\mu)^2\right\} + \sum_{i=1}^n \log$   K$\displaystyle _{\lambda-\frac12}\left(\alpha \sqrt{\delta^2+(x_i-\mu)^2}\right) + \sum_{i=1}^n \beta(x_i-\mu)\;,$ (1.36)

where $ \kappa$ is defined by (1.29). Obviously, for hyperbolic ( $ \lambda=1$) distributions the algorithm uses simpler expressions of the log-likelihood function due to relation (1.24).

The routines proposed in the literature differ in the choice of the optimization scheme. The first software product that allowed statistical inference with hyperbolic distributions - the HYP program - used a gradient search technique, see [10]. In a large simulation study [84] utilized the bracketing method. The XploRe quantlets mlehyp and mlenig use yet another technique - the downhill simplex method of [77], with slight modifications due to parameter restrictions.
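A minimal maximum likelihood fit along these lines, here for the NIG special case and with SciPy's Nelder-Mead simplex standing in for the optimizers discussed above, could look as follows. The name nig_negloglik is ours, 'returns.txt' is a hypothetical data file, and the parameter restrictions are enforced by an infinite penalty.

    import numpy as np
    from scipy.special import kv
    from scipy.optimize import minimize

    def nig_negloglik(theta, x):
        """Negative log-likelihood of NIG(alpha, beta, delta, mu), cf. (1.33)."""
        alpha, beta, delta, mu = theta
        if delta <= 0 or alpha <= abs(beta):         # admissible range only
            return np.inf
        s = np.sqrt(delta ** 2 + (x - mu) ** 2)
        ll = (np.log(alpha * delta / np.pi) + delta * np.sqrt(alpha ** 2 - beta ** 2)
              + beta * (x - mu) + np.log(kv(1, alpha * s)) - np.log(s))
        return -np.sum(ll)

    x = np.loadtxt('returns.txt')                    # hypothetical return series
    theta0 = np.array([50.0, 0.0, x.std(), x.mean()])  # crude starting values
    fit = minimize(nig_negloglik, theta0, args=(x,), method='Nelder-Mead')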

The main factor determining the speed of the estimation is the number of modified Bessel function evaluations. Note that for $ \lambda=1$ (i.e. the hyperbolic distribution) this function appears only in the constant $ \kappa$. For a data set with $ n$ independent observations we need to evaluate $ n$ and $ n+1$ Bessel functions for NIG and generalized hyperbolic distributions, respectively, whereas only one is needed for the hyperbolic. This leads to a considerable reduction in the time necessary to calculate the likelihood function in the hyperbolic case. [84] reported a reduction of ca. $ 33\,{\%}$; the efficiency results are, however, highly sample and implementation dependent. For example, limited simulation studies performed in XploRe revealed a $ 25\,{\%}$, $ 55\,{\%}$ and $ 85\,{\%}$ reduction in CPU time for samples of size $ 500$, $ 1000$ and $ 2000$, respectively.

We also have to say that the optimization is challenging. Some of the parameters are hard to separate, since a flat-tailed generalized hyperbolic distribution with a large scale parameter is hard to distinguish from a fat-tailed distribution with a small scale parameter, see [6] who observed such behavior already for the hyperbolic law. The likelihood function with respect to these parameters then becomes very flat and may have local minima. In the case of NIG distributions [97] proposed simple estimates of $ \alpha $ and $ \beta$ that can be used as starting values for the ML scheme. Starting from relation (1.34) for the tails of the NIG density they derived the following approximation:

$\displaystyle \alpha-\beta \sim \frac12 \frac{x_{1-f} + \mathbb{E}(X\vert X>x_{1-f})}{\mathbb{E}(X^2\vert X>x_{1-f}) - x_{1-f}\mathbb{E}(X\vert X>x_{1-f})}\;,$    
$\displaystyle \alpha+\beta \sim -\frac12 \frac{x_{f} + \mathbb{E}(X\vert X<x_{f})}{\mathbb{E}(X^2\vert X<x_{f}) - x_{f}\mathbb{E}(X\vert X<x_{f})}\;,$    

where $ x_f$ is the $ f$-th population quantile, see Sect. 1.2.4. After choosing a suitable value for $ f$ ([97] used $ f=5\,{\%}$), the ''tail estimates'' of $ \alpha $ and $ \beta$ are obtained by replacing the quantiles and expectations with their sample counterparts in the above relations.
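In code, the ''tail estimates'' amount to a few sample means over the tail observations; the sketch below (nig_tail_estimates is our name) returns starting values for $ \alpha $ and $ \beta$.

    import numpy as np

    def nig_tail_estimates(x, f=0.05):
        """Starting values for alpha and beta from the tail relations above."""
        x_lo, x_hi = np.quantile(x, [f, 1 - f])      # sample f- and (1-f)-quantiles
        hi, lo = x[x > x_hi], x[x < x_lo]
        a_minus_b = 0.5 * (x_hi + hi.mean()) / ((hi ** 2).mean() - x_hi * hi.mean())
        a_plus_b = -0.5 * (x_lo + lo.mean()) / ((lo ** 2).mean() - x_lo * lo.mean())
        return 0.5 * (a_plus_b + a_minus_b), 0.5 * (a_plus_b - a_minus_b)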

Another method of providing the starting values for the ML scheme was suggested by [84]. He estimated the parameters of a symmetric ( $ \beta =\mu =0$) generalized hyperbolic law with a reasonable kurtosis (i.e. with $ \delta \alpha \approx 1.04$) that had the variance equal to that of the empirical distribution.

1.3.2.2 Other Methods

Besides the ML approach other estimation methods have been proposed in the literature. [84] tested different estimation techniques by replacing the log-likelihood function with other score functions, like the Anderson-Darling and Kolmogorov statistics or $ L^p$-norms, but the results were disappointing. [62] made use of the Markov chain Monte Carlo technique (see Chap. II.3); however, again the results obtained were not impressive. [52] described an EM type algorithm (see Chap. II.5) for maximum likelihood estimation of the normal-inverse Gaussian distribution. The algorithm can be programmed in any statistical package supporting Bessel functions and it has all the properties of the standard EM algorithm, like certain, but slow, convergence and parameter estimates that remain in the admissible range. The EM scheme can also be generalized to the family of generalized hyperbolic distributions.

1.3.3 Are Asset Returns NIG Distributed?

It is always necessary to find a reasonable tradeoff between the introduction of additional parameters and the possible improvement of the fit. [6] mentioned the flatness of the likelihood function for the hyperbolic distribution. The variation in the likelihood function of the generalized hyperbolic distribution is even smaller for a wide range of parameters. Consequently, the generalized hyperbolic distribution applied as a model for financial data leads to overfitting ([84]). In the empirical analysis that follows we will thus concentrate only on NIG distributions. They possess nice analytic properties and have been reported to fit financial data better than hyperbolic laws ([52,62,97]).


Table 1.4: NIG and Gaussian fits to 2000 returns of the Dow Jones Industrial Average (DJIA) index from the period January 2, 1985 - November 30, 1992. Empirical and model based (NIG and Gaussian) VaR numbers at the $ 95\,{\%}$ and $ 99\,{\%}$ confidence levels are also given. The values in parentheses are the relative differences between model and empirical VaR estimates. (Q: CSAfin08)
Parameters              α          δ or σ     β          μ
NIG fit (δ)             79.1786    0.0080     -0.3131    0.0007
Gaussian fit (σ)                   0.0115                0.0006

Test values             Anderson-Darling    Kolmogorov
NIG fit                 0.3928              0.5695
Gaussian fit            +INF                4.5121

VaR estimates (×10⁻²)   95%                 99%
Empirical               1.5242              2.8922
NIG fit                 1.5194 (0.31%)      2.7855 (3.69%)
Gaussian fit            1.8350 (20.39%)     2.6191 (9.44%)

Now, we can return to the empirical analysis. This time we want to check whether DJIA and/or DAX returns can be approximated by the NIG distribution. We estimate the parameters using the maximum likelihood approach. As can be seen in Fig. 1.9 the fitted NIG distribution ''misses'' the very extreme DJIA returns. However, it seems to give a better fit to the central part of the empirical distribution than the $ \alpha $-stable law. This is confirmed by a lower value of the Kolmogorov statistic, compare Tables 1.2 and 1.4. Surprisingly, the Anderson-Darling statistic also returns a lower value, implying a better fit in the tails of the distribution as well.

Figure 1.9: NIG (solid grey line) and Gaussian (dashed line) fits to the DJIA returns (black circles) empirical cumulative distribution function from the period January 2, 1985 - November 30, 1992. The right panel is a magnification of the left tail fit on a double logarithmic scale. Vertical lines represent the NIG (solid grey line), Gaussian (dashed line) and empirical (solid line) VaR estimates at the 95 % (filled circles, triangles and squares) and 99 % (hollow circles, triangles and squares) confidence levels. The NIG law slightly underfits the tails of the empirical distribution. Compare with Fig. 1.5 where the stable law is shown to fit the DJIA returns very well (Q: CSAfin08)
\includegraphics[width=10.2cm]{text/4-1/CSAfin08.eps}

In the right panel of Fig. 1.9 we also plotted vertical lines representing the NIG, Gaussian and empirical daily VaR estimates at the $ c = 95\,{\%}$ and $ 99\,{\%}$ confidence levels. These estimates correspond to a one-day VaR of a virtual portfolio consisting of one long position in the DJIA index. The NIG $ 95\,{\%}$ VaR estimate matches the empirical VaR almost perfectly and the NIG $ 99\,{\%}$ VaR estimate also yields a smaller difference than the stable estimate, compare Tables 1.2 and 1.4. However, if we were interested in very high confidence levels (i.e. very low quantiles) then the NIG fit would be less favorable than the stable one. As in the stable case, no simple algorithms for inverting the NIG CDF are known, but the right quantile can be found with a binary search routine. For some members of the generalized hyperbolic family specialized inversion techniques have been developed. For example, [59] showed that the approximate inverse of the hyperbolic CDF can be computed as the solution of a first-order differential equation.
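For completeness, here is one way such a bracketed quantile search can be set up in Python. The names nig_pdf and nig_var are ours, SciPy's brentq stands in for the binary search, and the bracket $ \mu\pm 50\delta$ is an ad hoc choice.

    import numpy as np
    from scipy.special import kv
    from scipy.integrate import quad
    from scipy.optimize import brentq

    def nig_pdf(x, alpha, beta, delta, mu):
        """NIG density (1.33)."""
        s = np.sqrt(delta ** 2 + (x - mu) ** 2)
        return (alpha * delta / np.pi
                * np.exp(delta * np.sqrt(alpha ** 2 - beta ** 2) + beta * (x - mu))
                * kv(1, alpha * s) / s)

    def nig_var(c, alpha, beta, delta, mu):
        """VaR at confidence level c: the (1-c)-quantile of the fitted NIG law."""
        cdf = lambda q: quad(nig_pdf, -np.inf, q, args=(alpha, beta, delta, mu))[0]
        q = brentq(lambda q: cdf(q) - (1 - c), mu - 50 * delta, mu + 50 * delta)
        return -q                                   # loss reported as a positive number

    # e.g. with the DJIA fit of Table 1.4:
    # nig_var(0.99, alpha=79.1786, beta=-0.3131, delta=0.0080, mu=0.0007)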

The second analyzed data set comprises $ 2000$ returns of the Deutsche Aktienindex (DAX). In this case the NIG distribution offers an indisputably better fit than the Gaussian or even the $ \alpha $-stable law, see Table 1.5 and compare with Table 1.3. This can also be seen in Fig. 1.10. The ''drop off'' in the left tail of the empirical distribution is nicely caught by the NIG distribution. The empirical VaR estimates are also ''caught'' almost perfectly.


Table 1.5: NIG and Gaussian fits to $ 2000$ returns of the Deutsche Aktienindex (DAX) from the period January 2, 1995 - December 5, 2002. Empirical and model based (NIG and Gaussian) VaR numbers at the $ 95\,{\%}$ and $ 99\,{\%}$ confidence levels are also given. The values in parentheses are the relative differences between model and empirical VaR estimates (Q: CSAfin09)
Parameters              α          δ or σ     β          μ
NIG fit (δ)             55.4413    0.0138     -4.8692    0.0016
Gaussian fit (σ)                   0.0157                0.0004

Test values             Anderson-Darling    Kolmogorov
NIG fit                 0.3604              0.8149
Gaussian fit            16.4119             2.8197

VaR estimates (×10⁻²)   95%                 99%
Empirical               2.5731              4.5963
NIG fit                 2.5601 (0.51%)      4.5944 (0.04%)
Gaussian fit            2.5533 (0.77%)      3.6260 (21.11%)

Figure 1.10: NIG (solid grey line) and Gaussian (dashed line) fits to the DAX returns (black circles) empirical cumulative distribution function from the period January 2, 1995 - December 5, 2002. The right panel is a magnification of the left tail fit on a double logarithmic scale clearly showing the superiority of the NIG distribution. Compare with Fig. 1.6 where the stable law is shown to overfit the DAX returns. Vertical lines represent the NIG (solid grey line), Gaussian (dashed line) and empirical (solid line) VaR estimates at the 95% (filled circles, triangles and squares) and 99% (hollow circles, triangles and squares) confidence levels (Q: CSAfin09)
\includegraphics[width=10.2cm]{text/4-1/CSAfin09.eps}

