1.2 General Properties of Delta-Gamma-Normal Models

The change in the portfolio value, $ \Delta V$, can be expressed as a sum of independent random variables that are quadratic functions of standard normal random variables $ Y_{i}$ by means of the solution of the generalized eigenvalue problem

$$CC^{\top } = \Sigma, \qquad C^{\top }\Gamma C = \Lambda.$$

This implies

$$\Delta V = \sum_{i=1}^{m} \left(\delta_{i}Y_{i} + \frac{1}{2}\lambda_{i} Y_{i}^{2}\right) = \sum_{i=1}^{m} \left\{\frac{1}{2}\lambda_{i}\left(\frac{\delta_{i}}{\lambda_{i}}+Y_{i}\right)^{2} - \frac{\delta_{i}^{2}}{2\lambda_{i}}\right\} \qquad (1.2)$$

with $ X=C Y$, $ \delta=C^{\top }\Delta$, and $ \Lambda=\mathop{\hbox{diag}}(\lambda_{1},\ldots,\lambda_{m})$. Packages like LAPACK (Anderson et al., 1999) contain routines for the generalized eigenvalue problem directly. Otherwise $ C$ and $ \Lambda$ can be computed in two steps (see the sketch after the list):
  1. Compute some matrix $ B$ with $ BB^{\top }=\Sigma$. If $ \Sigma$ is positive definite, the fastest method is the Cholesky decomposition. Otherwise an eigenvalue decomposition can be used.
  2. Solve the (standard) symmetric eigenvalue problem for the matrix $ B^{\top }\Gamma B$:

     $$Q^{\top } B^{\top }\Gamma B\, Q = \Lambda$$

     with $ Q^{-1}=Q^{\top }$, and set $ C \stackrel{\mathrm{def}}{=}BQ$.
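
A minimal numpy sketch of these two steps follows; the function name dg_decomp and the interface (separate Delta, Gamma, and Sigma arguments instead of a parameter list) are illustrative, not part of the quantlib.

    # Sketch of the two-step decomposition, assuming numpy is available.
    import numpy as np

    def dg_decomp(Delta, Gamma, Sigma):
        """Find C with C C^T = Sigma and C^T Gamma C = Lambda (diagonal)."""
        try:
            # Step 1: Cholesky factor B with B B^T = Sigma ...
            B = np.linalg.cholesky(Sigma)
        except np.linalg.LinAlgError:
            # ... falling back to an eigenvalue decomposition when Sigma
            # is only positive semi-definite.
            w, U = np.linalg.eigh(Sigma)
            B = U * np.sqrt(np.clip(w, 0.0, None))  # B = U diag(sqrt(w))
        # Step 2: symmetric eigenproblem Q^T (B^T Gamma B) Q = Lambda.
        lam, Q = np.linalg.eigh(B.T @ Gamma @ B)
        C = B @ Q                                   # C C^T = B B^T = Sigma
        delta = C.T @ Delta                         # delta = C^T Delta
        return B, C, delta, lam
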
The decomposition is implemented in the quantlet

npar = VaRDGdecomp(par)
    uses a generalized eigenvalue decomposition to perform a suitable coordinate change. par is a list containing Delta, Gamma, and Sigma on input; npar is the same list, additionally containing B, delta, and lambda on output.

The characteristic function of a non-central $ \chi^{2}_{1}$ variate ($ (Z+a)^{2}$, with standard normal $ Z$) is known analytically:

$$\mathrm{E}\, e^{it(Z+a)^{2}} = (1-2it)^{-1/2} \exp\left(\frac{a^{2}it}{1-2it}\right).$$

This implies the characteristic function of $ \Delta V$:

$$\mathrm{E}\, e^{it\Delta V} = \prod_{j} \frac{1}{\sqrt{1-i\lambda_{j}t}} \exp\left\{-\frac{1}{2}\,\frac{\delta_{j}^{2}t^{2}}{1-i\lambda_{j}t}\right\}, \qquad (1.3)$$

which can be re-expressed in terms of $ \Gamma$ and $ B$

$$\mathrm{E}\, e^{it\Delta V} = \det(I-itB^{\top }\Gamma B)^{-1/2} \exp\left\{-\frac{1}{2}t^{2}\Delta^{\top }B(I-itB^{\top }\Gamma B)^{-1}B^{\top }\Delta\right\}, \qquad (1.4)$$

or in terms of $ \Gamma$ and $ \Sigma$

$$\mathrm{E}\, e^{it\Delta V} = \det(I-it\Gamma\Sigma)^{-1/2} \exp\left\{-\frac{1}{2}t^{2}\Delta^{\top }\Sigma(I-it\Gamma\Sigma)^{-1}\Delta\right\}. \qquad (1.5)$$
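
The two forms can be evaluated and cross-checked with a few lines of numpy; the function names charf_diag and charf_full below are illustrative, not part of the quantlib. Note that the product form (1.3) selects the correct branch of the square root factor by factor, whereas the principal branch of $\det(\cdot)^{-1/2}$ in (1.5) may be the wrong one for large $t$ and many risk factors.

    # Characteristic function of Delta V in the forms (1.3) and (1.5).
    import numpy as np

    def charf_diag(t, lam, delta):
        """E exp(it dV) from the diagonalized form (1.3)."""
        d = 1.0 - 1j * lam * t
        return np.prod(d ** -0.5) * np.exp(-0.5 * t**2 * np.sum(delta**2 / d))

    def charf_full(t, Delta, Gamma, Sigma):
        """E exp(it dV) from (1.5), without any decomposition."""
        A = np.eye(len(Delta)) - 1j * t * (Gamma @ Sigma)
        q = Delta @ Sigma @ np.linalg.solve(A, Delta)  # Delta^T Sigma A^{-1} Delta
        return np.linalg.det(A) ** -0.5 * np.exp(-0.5 * t**2 * q)
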

Numerical Fourier inversion of (1.3) can be used to compute an approximation to the cumulative distribution function (cdf) $ F$ of $ \Delta V$. (The $ \alpha$-quantile is then computed by root-finding in $ F(x)=\alpha$.) The cost of the Fourier inversion is $ {\mathcal{O}}(N \log N)$, the cost of the function evaluations is $ {\mathcal{O}}(mN)$, and the cost of the eigenvalue decomposition is $ {\mathcal{O}}(m^{3})$. For accuracies of one or two decimal digits and the usual number of risk factors (more than a hundred), the cost of the eigenvalue decomposition dominates the other two terms.

Instead of a full spectral decomposition, one can also just reduce $ B^{\top }\Gamma B$ to tridiagonal form $ B^{\top }\Gamma B=QTQ^\top $ ($ T$ is tridiagonal and $ Q$ is orthogonal). The evaluation of the characteristic function in (1.4) then involves the solution of a linear system with the matrix $ I-itT$, which costs only $ {\mathcal{O}}(m)$ operations. An alternative route is to reduce $ \Gamma\Sigma$ to Hessenberg form $ \Gamma\Sigma=QHQ^\top $ or to do a Schur decomposition $ \Gamma\Sigma=QRQ^\top $ ($ H$ is Hessenberg and $ Q$ is orthogonal). Since $ \Gamma\Sigma$ has the same eigenvalues as $ B^{\top }\Gamma B$ and they are all real, $ R$ is actually triangular, instead of quasi-triangular as in the general case (Anderson et al., 1999). The evaluation of (1.5) becomes $ {\mathcal{O}}(m^{2})$, since it involves the solution of a linear system with the matrix $ I-itH$ or $ I-itR$, respectively. Reduction to tridiagonal, Hessenberg, or Schur form is also $ {\mathcal{O}}(m^{3})$, so the asymptotics in the number of risk factors $ m$ remain the same in all cases. The critical $ N$, above which the complete spectral decomposition plus fast evaluation via (1.3) is faster than the reduction to tridiagonal or Hessenberg form plus slower evaluation via (1.4) or (1.5), remains to be determined empirically for a given $ m$ on a specific machine.
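
For illustration, here is a simple quadrature-based inversion of Gil-Pelaez type rather than the $ {\mathcal{O}}(N \log N)$ FFT scheme discussed above; the truncation point T and the root bracket [lo, hi] are assumptions that must be adapted to the portfolio at hand.

    # Sketch: cdf of Delta V by numerical inversion of the characteristic
    # function, and the alpha-quantile by root-finding in F(x) = alpha.
    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq

    def cdf(x, charf, T=200.0):
        """Gil-Pelaez: F(x) = 1/2 - (1/pi) int_0^T Im(e^{-itx} phi(t))/t dt."""
        integrand = lambda t: (np.exp(-1j * t * x) * charf(t)).imag / t
        val, _ = quad(integrand, 1e-10, T, limit=500)
        return 0.5 - val / np.pi

    def quantile(alpha, charf, lo, hi):
        """Solve F(x) = alpha on a bracket [lo, hi] containing the quantile."""
        return brentq(lambda x: cdf(x, charf) - alpha, lo, hi)

For instance, quantile(0.01, lambda t: charf_diag(t, lam, delta), lo, hi) yields the 1%-quantile from the diagonalized characteristic function sketched above.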

The computation of the cumulant generating function and the characteristic function from the diagonalized form is implemented in the following quantlets:


z = VaRcgfDG(t, par)
    computes the cumulant generating function (cgf) for the class of quadratic forms of Gaussian vectors.

z = VaRcharfDG(t, par)
    computes the characteristic function for the class of quadratic forms of Gaussian vectors.

t is the complex argument and par the parameter list generated by VaRDGdecomp.

The advantage of the Cornish-Fisher approximation is that it is based on the cumulants, which can be computed without any matrix decomposition:

$$\kappa_{1} = \frac{1}{2}\sum_{i} \lambda_{i} = \frac{1}{2}\mathop{\hbox{tr}}(\Gamma\Sigma),$$

$$\kappa_{r} = \frac{1}{2}\sum_{i} \left\{ (r-1)!\,\lambda_{i}^{r} + r!\,\delta_{i}^{2}\lambda_{i}^{r-2}\right\} = \frac{1}{2}(r-1)!\,\mathop{\hbox{tr}}\left((\Gamma\Sigma)^{r}\right) + \frac{1}{2}r!\,\Delta^{\top } \Sigma (\Gamma\Sigma)^{r-2}\Delta$$

($ r\ge 2$). Although the cost of computing the cumulants needed for the Cornish-Fisher approximation is also $ {\mathcal{O}}(m^{3})$, this method can be faster than the eigenvalue decomposition for small orders of approximation and relatively small numbers of risk factors.
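
A minimal numpy sketch of this decomposition-free computation follows; the name cumulants_dg and the interface are illustrative.

    # Sketch: first n cumulants of Delta V directly from Gamma and Sigma,
    # implementing kappa_1 = tr(Gamma Sigma)/2 and the formula for r >= 2.
    import numpy as np
    from math import factorial

    def cumulants_dg(n, Delta, Gamma, Sigma):
        GS = Gamma @ Sigma
        P = GS                           # (Gamma Sigma)^r, currently r = 1
        kappa = [0.5 * np.trace(P)]      # kappa_1
        w = Sigma @ Delta                # Sigma (Gamma Sigma)^{r-2} Delta, r = 2
        for r in range(2, n + 1):
            P = P @ GS                   # now (Gamma Sigma)^r
            kappa.append(0.5 * factorial(r - 1) * np.trace(P)
                         + 0.5 * factorial(r) * (Delta @ w))
            w = Sigma @ (Gamma @ w)      # advance to r + 1
        return kappa
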

The computation of all cumulants up to a certain order directly from $ \Gamma\Sigma$ is implemented in the quantlet VaRcumulantsDG, while the computation of a single cumulant from the diagonal decomposition is provided by VaRcumulantDG:


vec = VaRcumulantsDG(n, par)
    computes the first n cumulants for the class of quadratic forms of Gaussian vectors. The list par contains at least Gamma and Sigma.

z = VaRcumulantDG(n, par)
    computes the n-th cumulant for the class of quadratic forms of Gaussian vectors. The parameter list par is to be generated with VaRDGdecomp.

Partial Monte-Carlo (or partial quasi-Monte-Carlo) costs $ {\mathcal{O}}(m^{2})$ operations per sample. (If $ \Gamma$ is sparse, it may cost even less.) The number of samples needed is a function of the desired accuracy. It is clear from the asymptotic costs of the three methods that partial Monte-Carlo will be preferable for sufficiently large $ m$.
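
A minimal sketch of partial Monte-Carlo under the delta-gamma-normal model; the sample size and seed are arbitrary illustrative choices.

    # Sketch: sample risk-factor moves X ~ N(0, Sigma), evaluate the
    # quadratic approximation dV = Delta^T X + X^T Gamma X / 2 per sample,
    # and read off the empirical alpha-quantile.
    import numpy as np

    def partial_mc_quantile(alpha, Delta, Gamma, Sigma,
                            n_samples=100_000, seed=0):
        rng = np.random.default_rng(seed)
        X = rng.multivariate_normal(np.zeros(len(Delta)), Sigma,
                                    size=n_samples)
        dV = X @ Delta + 0.5 * np.einsum('ij,jk,ik->i', X, Gamma, X)
        return np.quantile(dV, alpha)
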

While Fourier inversion and partial Monte-Carlo can in principle achieve any desired accuracy, the Cornish-Fisher approximations provide only limited accuracy, as shown in the next section.