
1.1 Introduction

Market risk is the prospect of financial losses - or gains - due to unexpected changes in market prices and rates. Evaluating the exposure to such risks is nowadays of primary concern to risk managers in financial and non-financial institutions alike. Until the late 1980s market risks were estimated through gap and duration analysis (interest rates), portfolio theory (securities), sensitivity analysis (derivatives) or "what-if" scenarios. However, all these methods either could be applied only to very specific assets or relied on subjective reasoning.

Since the early 1990s a commonly used market risk estimation methodology has been the Value at Risk (VaR). A VaR measure is the highest possible loss $ L$ incurred from holding the current portfolio over a certain period of time at a given confidence level ([27,40,51]):

$\displaystyle \mathbb{P}(L > \mathrm{VaR}) \le 1 - c\;,$

where $c$ is the confidence level, typically $95\,\%$, $97.5\,\%$ or $99\,\%$. By convention, $L = -\Delta X(\tau)$, where $\Delta X(\tau)$ is the relative change (return) in portfolio value over the time horizon $\tau$. Hence, large values of $L$ correspond to large losses (or large negative returns).

The VaR figure has two important characteristics: (1) it provides a common consistent measure of risk across different positions and risk factors and (2) it takes into account the correlations or dependencies between different risk factors. Because of its intuitive appeal and simplicity, it is no surprise that in a few years Value at Risk has become the standard risk measure used around the world. However, VaR has a number of deficiencies, among them the non-subadditivity - the sum of the VaRs of two portfolios can be smaller than the VaR of the combined portfolio. To cope with these shortcomings, [2] proposed an alternative measure that satisfies the axioms of a coherent, i.e. an adequate, risk measure. The Expected Shortfall (ES), also called Expected Tail Loss or Conditional VaR, is the expected value of the losses in excess of VaR:

$\displaystyle \mathrm{ES} = \mathbb{E}(L \mid L > \mathrm{VaR})\;.$

It is interesting to note that - although new to the finance industry - Expected Shortfall has been familiar to insurance practitioners for a long time. It is very similar to the mean excess function, which is used to characterize claim size distributions (Burnecki et al., 2004).

The essence of the VaR and ES computations is the estimation of low quantiles of the portfolio return distribution. Hence, the performance of market risk measurement methods depends on the quality of the distributional assumptions on the underlying risk factors. Many of the concepts in theoretical and empirical finance developed over the past decades - including classical portfolio theory, the Black-Scholes-Merton option pricing model and even the RiskMetrics variance-covariance approach to VaR - rest upon the assumption that asset returns follow a normal distribution. But is this assumption justified by empirical data?

Figure 1.1: DJIA daily closing values $X_t$ (left panel) and daily returns $\log(X_{t+1}/X_t)$ (right panel) from the period January 2, 1985 - November 30, 1992. Note that this period includes Black Monday, the worst stock market crash in Wall Street history. On October 19, 1987 the DJIA lost $508$ points or $22.6\,\%$ of its value (Q: CSAfin01)
\includegraphics[width=10cm]{text/4-1/CSAfin01.eps}

Figure 1.2: Gaussian (dashed line) fit to the empirical cumulative distribution function of the DJIA daily returns (circles) from the period January 2, 1985 - November 30, 1992. For better exposition of the fit in the central part of the distribution, the ten largest and ten smallest returns are not shown in the left panel. The right panel is a magnification of the left tail fit on a double logarithmic scale, clearly showing the discrepancy between the data and the normal distribution. Vertical lines represent the Gaussian (dashed line) and empirical (solid line) VaR estimates at the 95 % (filled triangles and squares) and 99 % (hollow triangles and squares) confidence levels (Q: CSAfin02)
\includegraphics[width=10.2cm]{text/4-1/CSAfin02.ps}

No, it is not! It has long been known that asset returns are not normally distributed. Rather, the empirical observations exhibit excess kurtosis (fat tails). The Dow Jones Industrial Average (DJIA) index is a prominent example, see Fig. 1.1, where the index itself and its returns (or log-returns) are depicted. In Fig. 1.2 we plotted the empirical distribution function of the DJIA returns. The contrast with the Gaussian law is striking. This heavy-tailed or leptokurtic character of the distribution of price changes has been repeatedly observed in various markets and may be quantitatively measured by a kurtosis in excess of $3$, the value obtained for the normal distribution ([13,20,42,67,87]). In Fig. 1.2 we also plotted vertical lines representing the Gaussian and empirical daily VaR estimates at the $c = 95\,\%$ and $99\,\%$ confidence levels. They depict a typical situation encountered in financial markets: the Gaussian model overestimates the VaR number at the $95\,\%$ confidence level and underestimates it at the $99\,\%$ confidence level.

These VaR estimates are used here only for illustrative purposes and correspond to a one-day VaR of a virtual portfolio consisting of one long position in the DJIA index. Note that they are equal to the absolute values of the $5\,\%$ and $1\,\%$ quantiles, respectively. Hence, calculating the VaR number reduces to finding the $(1-c)$ quantile. The empirical $(1-c)$ quantile is obtained by taking the $k$th smallest value of the sample, where $k$ is the smallest integer greater than or equal to the length of the sample times $(1-c)$. The Gaussian $(1-c)$ quantile is equal to $F^{-1}(1-c)$, where $F$ is the normal distribution function. Since algorithms for evaluating the inverse of the Gaussian distribution function are implemented in practically every computing environment, calculating the quantile is straightforward.
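The Gaussian route can be sketched in a few lines, here using Python's standard library (`statistics.NormalDist`) rather than XploRe; the function name and toy sample are illustrative:

```python
from statistics import NormalDist, mean, stdev

def gaussian_var(returns, c=0.99):
    """Gaussian VaR: fit mu and sigma to the sample and return
    -F^{-1}(1 - c), where F is the fitted normal distribution function.
    A sketch; in practice mu is often set to zero at daily horizons."""
    mu, sigma = mean(returns), stdev(returns)
    return -NormalDist(mu, sigma).inv_cdf(1 - c)

# For a centered toy sample, the 99 % VaR is about 2.33 sigma
toy = [-0.02, -0.01, 0.0, 0.01, 0.02]
var99 = gaussian_var(toy, 0.99)
```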

Interestingly, the problem of the underestimation of risk by the Gaussian distribution has been dealt with by the regulators in an ad hoc way. The [7] suggested that for the purpose of determining minimum capital reserves financial institutions use a ten-day VaR at the $c = 99\,\%$ confidence level multiplied by a safety factor $s \in [3,4]$, with the exact value of $s$ depending on the past performance of the model. It has been argued by [93] and [26] that the range of the safety factor comes from the heavy-tailed nature of the returns distribution. Indeed, if we assume that the asset returns distribution is symmetric and has finite variance $\sigma^2$, then Chebyshev's inequality ([58]) yields $\mathbb{P}(L \ge \epsilon) \le \sigma^2/(2\epsilon^2)$, where $L$ represents the random loss over the specified time horizon. So if we want to calculate an upper bound for the $99\,\%$ VaR, setting $\sigma^2/(2\epsilon^2) = 1\,\%$ yields $\epsilon = 7.07\sigma$, which in turn implies that VaR$_{99\,\%} \le 7.07\sigma$. However, if we assumed a Gaussian distribution of returns, then we would have VaR$_{99\,\%} \le 2.33\sigma$, which is roughly three times lower than the bound obtained for a heavy-tailed, finite-variance distribution.
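The arithmetic behind the safety factor is easy to verify numerically; a sketch using Python's standard library:

```python
import math
from statistics import NormalDist

# Chebyshev bound for a symmetric, finite-variance loss:
# P(L >= eps) <= sigma^2 / (2 * eps^2); set the bound to 1 %.
chebyshev_mult = math.sqrt(1 / (2 * 0.01))   # eps = 7.07... * sigma

# Gaussian case: VaR_99% = Phi^{-1}(0.99) * sigma = 2.33... * sigma
gaussian_mult = NormalDist().inv_cdf(0.99)

ratio = chebyshev_mult / gaussian_mult       # about 3.04, inside [3, 4]
```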

Having said this much about the inadequacy of the Gaussian distribution for financial modeling and risk management, we have no other choice but to offer some heavy-tailed alternatives. We have to mention, though, that all distributional classes described in this chapter present a computational challenge. Large parts of the text are thus devoted to numerical issues. In Sect. 1.2 we deal with the historically earliest alternative - the stable laws - and briefly characterize their recent generalizations, the so-called truncated stable distributions. In Sect. 1.3 we study the class of generalized hyperbolic laws. Finally, in Sect. 1.4 we introduce the notion of copulas and discuss the relation between VaR, asset portfolios and heavy tails.

All theoretical results are illustrated by empirical examples which utilize the quantlets (i.e. functions) of the XploRe computing environment ([43]). For reference, figure captions include the names of the corresponding quantlets (Q). The reader of this chapter may therefore repeat and modify at will all the presented examples via the local XploRe Quantlet Server (XQS) without having to buy additional software. Such XQ Servers may be downloaded freely from the XploRe website http://www.xplore-stat.de. Currently, no other statistical computing environment offers complete coverage of the issues discussed in this chapter. However, when available, links to third-party libraries and specialized software are also provided.

