1.2 Definitions and Basic Characteristics

Stable laws - also called $\alpha$-stable, stable Paretian or Lévy stable - were introduced by Lévy (1925) during his investigations of the behavior of sums of independent random variables. A sum of two independent random variables having an $\alpha$-stable distribution with index $\alpha$ is again $\alpha$-stable with the same index $\alpha$. This invariance property, however, does not hold for different $\alpha$'s.

Figure 1.1: Left panel: A semilog plot of symmetric ($\beta=\mu=0$) $\alpha$-stable probability density functions (pdfs) for $\alpha=2$ (black solid line), 1.8 (red dotted line), 1.5 (blue dashed line) and 1 (green long-dashed line). The Gaussian ($\alpha=2$) density forms a parabola and is the only $\alpha$-stable density with exponential tails. Right panel: Right tails of symmetric $\alpha$-stable cumulative distribution functions (cdfs) for $\alpha=2$ (black solid line), 1.95 (red dotted line), 1.8 (blue dashed line) and 1.5 (green long-dashed line) on a double logarithmic scale. For $\alpha<2$ the tails form straight lines with slope $-\alpha$.
\includegraphics[width=.7\defpicwidth]{STFstab01a.ps} \includegraphics[width=.7\defpicwidth]{STFstab01b.ps}

The $ \alpha $-stable distribution requires four parameters for complete description: an index of stability $ \alpha\in (0,2]$ also called the tail index, tail exponent or characteristic exponent, a skewness parameter $ \beta\in [-1,1]$, a scale parameter $ \sigma>0$ and a location parameter $ \mu\in \mathbb{R}$. The tail exponent $ \alpha $ determines the rate at which the tails of the distribution taper off, see the left panel in Figure 1.1. When $ \alpha = 2$, the Gaussian distribution results. When $ \alpha <2$, the variance is infinite and the tails are asymptotically equivalent to a Pareto law, i.e. they exhibit a power-law behavior. More precisely, using a central limit theorem type argument it can be shown that (Samorodnitsky and Taqqu; 1994; Janicki and Weron; 1994):

\begin{displaymath}
\begin{cases}
\lim_{x\rightarrow\infty} x^\alpha \textrm{P}(X>x) = C_{\alpha}(1+\beta) \sigma^\alpha, \\
\lim_{x\rightarrow\infty} x^\alpha \textrm{P}(X<-x) = C_{\alpha}(1-\beta) \sigma^\alpha,
\end{cases}
\end{displaymath} (1.1)

where:

$\displaystyle C_{\alpha}=\left(2\int_0^\infty x^{-\alpha} \sin(x)\, dx \right)^{-1}=\frac{1}{\pi}\,\Gamma(\alpha)\sin\frac{\pi\alpha}{2}.$
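
As an illustrative check of the tail behavior (1.1), the following Python sketch (assuming NumPy and SciPy; scipy.stats.levy_stable is used here on the assumption that its default parameterization agrees with (1.2) for $\alpha\ne 1$) compares $x^\alpha \textrm{P}(X>x)$ with the limiting constant $C_\alpha(1+\beta)\sigma^\alpha$ for a few large thresholds:

\begin{verbatim}
import numpy as np
from scipy.special import gamma
from scipy.stats import levy_stable

alpha, beta, sigma = 1.6, 0.5, 1.0
C_alpha = gamma(alpha) * np.sin(np.pi * alpha / 2) / np.pi

# compare x^alpha * P(X > x) with its limit C_alpha * (1 + beta) * sigma^alpha
for x in (20.0, 50.0, 100.0):
    lhs = x**alpha * levy_stable.sf(x, alpha, beta, loc=0, scale=sigma)
    print(x, lhs, C_alpha * (1 + beta) * sigma**alpha)
\end{verbatim}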

The convergence to a power-law tail varies for different $\alpha$'s and, as can be seen in the right panel of Figure 1.1, is slower for larger values of the tail index. Moreover, the tails of $\alpha$-stable distribution functions exhibit a crossover from an approximate power decay with an exponent greater than 2 to the true tail with exponent $\alpha$. This phenomenon is more visible for large $\alpha$'s (Weron; 2001).

When $\alpha>1$, the mean of the distribution exists and is equal to $\mu$. In general, the $p$th moment of a stable random variable is finite if and only if $p<\alpha$. When the skewness parameter $\beta$ is positive, the distribution is skewed to the right, i.e. the right tail is thicker, see the left panel of Figure 1.2. When $\beta$ is negative, the distribution is skewed to the left. When $\beta=0$, the distribution is symmetric about $\mu$. As $\alpha$ approaches 2, $\beta$ loses its effect and the distribution approaches the Gaussian distribution regardless of its value. The last two parameters, $\sigma$ and $\mu$, are the usual scale and location parameters, i.e. $\sigma$ determines the width and $\mu$ the shift of the mode (the peak) of the density. For $\sigma=1$ and $\mu=0$ the distribution is called standard stable.
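
These moment properties can be illustrated by simulation. The short sketch below (assuming SciPy's scipy.stats.levy_stable random number generator; the sample size and checkpoints are arbitrary) tracks running estimates of $\textrm{E}\vert X\vert^p$ for one exponent below and one above $\alpha$; only the former can be expected to stabilize as the sample grows:

\begin{verbatim}
import numpy as np
from scipy.stats import levy_stable

alpha = 1.5
x = np.abs(levy_stable.rvs(alpha, 0, size=100000, random_state=42))
n = np.arange(1, x.size + 1)

for p in (1.2, 1.8):                  # p < alpha versus p > alpha
    running = np.cumsum(x**p) / n     # running estimate of E|X|^p
    print(p, running[[999, 9999, 99999]])
# the p = 1.2 estimates should settle down, while the p = 1.8 ones are
# typically dominated by a few extreme observations and keep drifting
\end{verbatim}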

Figure 1.2: Left panel: Stable pdfs for $\alpha=1.2$ and $\beta=0$ (black solid line), 0.5 (red dotted line), 0.8 (blue dashed line) and 1 (green long-dashed line). Right panel: Closed form formulas for densities are known only for three distributions - Gaussian ($\alpha=2$; black solid line), Cauchy ($\alpha=1$; red dotted line) and Lévy ($\alpha=0.5$, $\beta=1$; blue dashed line). The latter is a totally skewed distribution, i.e. its support is $\mathbb{R}_+$. In general, for $\alpha<1$ and $\beta=1$ ($-1$) the distribution is totally skewed to the right (left).
\includegraphics[width=.7\defpicwidth]{STFstab02a.ps} \includegraphics[width=.7\defpicwidth]{STFstab02b.ps}

Figure 1.3: Comparison of $ S$ and $ S^0$ parameterizations: $ \alpha $-stable pdfs for $ \beta =0.5$ and $ \alpha =0.5$ (black solid line), 0.75 (red dotted line), 1 (blue short-dashed line), 1.25 (green dashed line) and 1.5 (cyan long-dashed line).
\includegraphics[width=.7\defpicwidth]{STFstab03a.ps} \includegraphics[width=.7\defpicwidth]{STFstab03b.ps}


1.2.1 Characteristic Function Representation

Due to the lack of closed form formulas for densities for all but three distributions (see the right panel in Figure 1.2), the $ \alpha $-stable law can be most conveniently described by its characteristic function $ \phi(t)$ - the inverse Fourier transform of the probability density function. However, there are multiple parameterizations for $ \alpha $-stable laws and much confusion has been caused by these different representations, see Figure 1.3. The variety of formulas is caused by a combination of historical evolution and the numerous problems that have been analyzed using specialized forms of the stable distributions. The most popular parameterization of the characteristic function of $ X \sim S_\alpha(\sigma,\beta,\mu)$, i.e. an $ \alpha $-stable random variable with parameters $ \alpha $, $ \sigma$, $ \beta$, and $ \mu$, is given by (Weron; 2004; Samorodnitsky and Taqqu; 1994):

$\displaystyle \ln\phi(t) = \begin{cases}
-\sigma^{\alpha}\vert t\vert^{\alpha}\{1-i\beta\,{\rm sign}(t)\tan\frac{\pi\alpha}{2}\}+ i \mu t, & \alpha\ne 1, \\
-\sigma\vert t\vert\{1+i\beta\,{\rm sign}(t)\frac{2}{\pi}\ln\vert t\vert\}+ i \mu t, & \alpha=1.
\end{cases}$ (1.2)

For numerical purposes, it is often advisable to use Nolan's (1997) parameterization:

$\displaystyle \ln\phi_0(t) = \begin{cases}
-\sigma^{\alpha}\vert t\vert^{\alpha}\{1+i\beta\,{\rm sign}(t)\tan\frac{\pi\alpha}{2}[(\sigma\vert t\vert)^{1-\alpha}-1]\}+ i \mu_0 t, & \alpha\ne 1, \\
-\sigma\vert t\vert\{1+i\beta\,{\rm sign}(t)\frac{2}{\pi}\ln(\sigma \vert t\vert)\}+ i \mu_0 t, & \alpha=1.
\end{cases}$ (1.3)

The $S^0_{\alpha}(\sigma,\beta,\mu_0)$ parameterization is a variant of Zolotarev's (M)-parameterization (Zolotarev; 1986), with the characteristic function and hence the density and the distribution function jointly continuous in all four parameters, see the right panel in Figure 1.3. In particular, percentiles and convergence to the power-law tail vary in a continuous way as $\alpha$ and $\beta$ vary. The location parameters of the two representations are related by $\mu = \mu_0 - \beta\sigma\tan\frac{\pi\alpha}{2}$ for $\alpha\ne 1$ and $\mu = \mu_0 - \beta\sigma\frac{2}{\pi}\ln\sigma$ for $\alpha=1$. Note also that the traditional scale parameter $\sigma_G$ of the Gaussian distribution defined by:

$\displaystyle f_G(x) = \frac{1}{\sqrt{2\pi}\sigma_G} \exp\left\{ -\frac{(x-\mu)^2}{2\sigma_G^2} \right\},$ (1.4)

is not the same as $ \sigma$ in formulas (1.2) or (1.3). Namely, $ \sigma_G = \sqrt{2} \sigma$.
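
The algebraic link between the two representations is easy to verify numerically. The sketch below (assuming NumPy; the helper names ln_phi_S and ln_phi_S0 are ours and only the $\alpha\ne 1$ branches of (1.2) and (1.3) are coded) checks that both formulas yield the same characteristic function once the location parameters are related by $\mu = \mu_0 - \beta\sigma\tan\frac{\pi\alpha}{2}$:

\begin{verbatim}
import numpy as np

def ln_phi_S(t, alpha, sigma, beta, mu):
    # log-characteristic function in the S parameterization, formula (1.2), alpha != 1
    return (-sigma**alpha * np.abs(t)**alpha
            * (1 - 1j * beta * np.sign(t) * np.tan(np.pi * alpha / 2))
            + 1j * mu * t)

def ln_phi_S0(t, alpha, sigma, beta, mu0):
    # log-characteristic function in Nolan's S^0 parameterization, formula (1.3), alpha != 1
    return (-sigma**alpha * np.abs(t)**alpha
            * (1 + 1j * beta * np.sign(t) * np.tan(np.pi * alpha / 2)
               * ((sigma * np.abs(t))**(1 - alpha) - 1))
            + 1j * mu0 * t)

alpha, sigma, beta, mu0 = 1.6, 2.0, 0.7, 0.3
mu = mu0 - beta * sigma * np.tan(np.pi * alpha / 2)  # location shift between S^0 and S

t = np.linspace(-10, 10, 2001)
t = t[t != 0]                # drop t = 0 to avoid 0**(1 - alpha) in the S^0 formula
diff = ln_phi_S(t, alpha, sigma, beta, mu) - ln_phi_S0(t, alpha, sigma, beta, mu0)
print(np.max(np.abs(diff)))  # should be at machine-precision level

# for alpha = 2 the skewness terms vanish and ln phi(t) = -sigma^2 t^2 + i mu t,
# i.e. a Gaussian law with variance 2 sigma^2, which is why sigma_G = sqrt(2) sigma
\end{verbatim}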


1.2.2 Stable Density and Distribution Functions

The lack of closed form formulas for most stable densities and distribution functions has negative consequences. For example, during maximum likelihood estimation computationally burdensome numerical approximations have to be used. Generally, there are two approaches to this problem: either the fast Fourier transform (FFT) can be applied to the characteristic function (Mittnik, Doganoglu, and Chenyao; 1999) or direct numerical integration can be utilized (Nolan; 1997, 1999).

For data points falling between the equally spaced FFT grid nodes an interpolation technique has to be used. Taking a larger number of grid points increases accuracy, however, at the expense of higher computational burden. The FFT based approach is faster for large samples, whereas the direct integration method favors small data sets since the density can be evaluated at any arbitrarily chosen point. Mittnik, Doganoglu, and Chenyao (1999) report that for $N=2^{13}$ the FFT based method is faster for samples exceeding 100 observations and slower for smaller data sets. Moreover, the FFT based approach is less universal - it is efficient only for large $\alpha$'s and only for pdf calculations. When computing the cdf the density must be numerically integrated. In contrast, in the direct integration method Zolotarev's (1986) formulas either for the density or the distribution function are numerically integrated.
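
The FFT route can be sketched as follows (assuming NumPy and SciPy; stable_chf and stable_pdf_fft are our own helper names, the grid size follows the $N=2^{13}$ example above, and scipy.stats.levy_stable serves only as a benchmark, on the assumption that its default parameterization agrees with (1.2) for $\alpha\ne 1$). The characteristic function is inverted on an equally spaced grid and values between the grid nodes are then obtained by interpolation:

\begin{verbatim}
import numpy as np
from scipy.stats import levy_stable       # used only as a benchmark below

def stable_chf(t, alpha, beta, sigma=1.0, mu=0.0):
    # characteristic function of S_alpha(sigma, beta, mu), formula (1.2), alpha != 1
    return np.exp(-sigma**alpha * np.abs(t)**alpha
                  * (1 - 1j * beta * np.sign(t) * np.tan(np.pi * alpha / 2))
                  + 1j * mu * t)

def stable_pdf_fft(alpha, beta, N=2**13, h=0.01):
    # stable pdf on an equally spaced grid of N points via FFT inversion of the chf;
    # N should be a power of two so that the sign bookkeeping below is exact
    dt = 2 * np.pi / (N * h)              # spacing of the frequency grid
    j = np.arange(N)
    t = (j - N / 2) * dt                  # frequency grid, symmetric around zero
    x = (j - N / 2) * h                   # x grid, symmetric around zero
    # discretization of f(x) = (2 pi)^(-1) int exp(-i t x) phi(t) dt on the two grids
    f = dt / (2 * np.pi) * (-1.0)**j * np.real(np.fft.fft(stable_chf(t, alpha, beta) * (-1.0)**j))
    return x, f

x, f = stable_pdf_fft(1.7, 0.5)
pts = np.array([-2.345, 0.123, 5.432])    # points falling between the grid nodes
print(np.interp(pts, x, f))               # linear interpolation on the FFT grid
print(levy_stable.pdf(pts, 1.7, 0.5))     # direct-integration benchmark
\end{verbatim}

Taking a larger $N$ or a finer spacing $h$ improves the accuracy, at the expense of the higher computational burden mentioned above.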

Set $\zeta = -\beta\tan\frac{\pi\alpha}{2}$. Then the density $f(x;\alpha,\beta)$ of a standard $\alpha$-stable random variable in representation $S^0$, i.e. $X\sim S^0_\alpha(1,\beta,0)$, can be expressed as (note that Zolotarev (1986, Section 2.2) used yet another parameterization):

when $\alpha\ne 1$ and $x>\zeta$:

$\displaystyle f(x;\alpha,\beta) = \frac{\alpha (x-\zeta)^{\frac{1}{\alpha-1}}}{\pi\vert\alpha-1\vert} \int_{-\xi}^{\frac{\pi}{2}} V(\theta;\alpha,\beta) \exp\left\{-(x-\zeta)^{\frac{\alpha}{\alpha-1}} V(\theta;\alpha,\beta)\right\} d\theta,$ (1.5)

when $\alpha\ne 1$ and $x=\zeta$:

$\displaystyle f(x;\alpha,\beta) = \frac{\Gamma(1+\frac{1}{\alpha})\cos(\xi)}{\pi(1+\zeta^2)^{\frac{1}{2\alpha}}},$

when $\alpha\ne 1$ and $x<\zeta$:

$\displaystyle f(x;\alpha,\beta) = f(-x;\alpha,-\beta),$

and when $\alpha=1$:

$\displaystyle f(x;1,\beta) = \begin{cases} \frac{1}{2\vert\beta\vert} e^{-\frac{\pi x}{2\beta}} \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} V(\theta;1,\beta) \exp\left\{-e^{-\frac{\pi x}{2\beta}} V(\theta;1,\beta)\right\} d\theta, & \beta\ne 0, \\ \frac{1}{\pi(1+x^2)}, & \beta=0, \end{cases}$

where

$\displaystyle \xi =\begin{cases}
\frac{1}{\alpha} \arctan(-\zeta), & \alpha \ne 1, \\
\frac{\pi}{2} , & \alpha=1,
\end{cases}$

and

$\displaystyle V(\theta;\alpha,\beta) = \begin{cases}
(\cos\alpha\xi)^{\frac{1}{\alpha-1}} \left(\frac{\cos\theta}{\sin\alpha(\xi+\theta)}\right)^{\frac{\alpha}{\alpha-1}} \frac{\cos\{\alpha\xi+(\alpha-1)\theta\}}{\cos\theta}, & \alpha \ne 1, \\
\frac{2}{\pi}\left(\frac{\frac{\pi}{2}+\beta\theta}{\cos\theta}\right) \exp\left\{\frac{1}{\beta}(\frac{\pi}{2}+\beta\theta)\tan\theta\right\}, & \alpha=1,\ \beta \ne 0.
\end{cases}$

The distribution $F(x;\alpha,\beta)$ of a standard $\alpha$-stable random variable in representation $S^0$ can be expressed as:

when $\alpha\ne 1$ and $x>\zeta$:

$\displaystyle F(x;\alpha,\beta) = c_1(\alpha,\beta) + \frac{{\rm sign}(1-\alpha)}{\pi} \int_{-\xi}^{\frac{\pi}{2}} \exp\left\{-(x-\zeta)^{\frac{\alpha}{\alpha-1}} V(\theta;\alpha,\beta)\right\} d\theta,$

where

$\displaystyle c_1(\alpha,\beta) = \begin{cases} \frac{1}{\pi}\left(\frac{\pi}{2}-\xi\right), & \alpha<1, \\ 1, & \alpha>1, \end{cases}$

when $\alpha\ne 1$ and $x=\zeta$:

$\displaystyle F(x;\alpha,\beta) = \frac{1}{\pi}\left(\frac{\pi}{2}-\xi\right),$

when $\alpha\ne 1$ and $x<\zeta$:

$\displaystyle F(x;\alpha,\beta) = 1 - F(-x;\alpha,-\beta),$

and when $\alpha=1$:

$\displaystyle F(x;1,\beta) = \begin{cases} \frac{1}{\pi} \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \exp\left\{-e^{-\frac{\pi x}{2\beta}} V(\theta;1,\beta)\right\} d\theta, & \beta>0, \\ \frac{1}{2}+\frac{1}{\pi}\arctan x, & \beta=0, \\ 1 - F(-x;1,-\beta), & \beta<0. \end{cases}$

Formula (1.5) requires numerical integration of the function $ g(\cdot)\exp\{-g(\cdot)\}$, where $ g(\theta;x,\alpha,\beta)=(x-\zeta)^\frac{\alpha}{\alpha-1}V(\theta;\alpha,\beta)$. The integrand is 0 at $ -\xi$, increases monotonically to a maximum of $ \frac{1}{e}$ at point $ \theta^*$ for which $ g(\theta^*;x,\alpha,\beta)=1$, and then decreases monotonically to 0 at $ \frac{\pi}{2}$ (Nolan; 1997). However, in some cases the integrand becomes very peaked and numerical algorithms can miss the spike and underestimate the integral. To avoid this problem we need to find the argument $ \theta^*$ of the peak numerically and compute the integral as a sum of two integrals: one from $ -\xi$ to $ \theta^*$ and the other from $ \theta^*$ to $ \frac{\pi}{2}$.
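
This splitting strategy can be implemented directly; the sketch below (assuming NumPy and SciPy; stable_pdf_direct is our own helper name and only the $\alpha\ne 1$ branch of (1.5) is coded) locates the peak $\theta^*$ by bisecting on the sign of $g-1$, which uses only the monotonicity of $g$, and then evaluates the two sub-integrals separately:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def stable_pdf_direct(x, alpha, beta):
    # standard stable pdf f(x; alpha, beta) in the S^0 parameterization via direct
    # numerical integration of (1.5); only the alpha != 1 case is implemented
    if alpha == 1:
        raise NotImplementedError("the alpha = 1 case uses a separate formula")
    zeta = -beta * np.tan(np.pi * alpha / 2)
    xi = np.arctan(-zeta) / alpha
    if x < zeta:                 # reflection: f(x; alpha, beta) = f(-x; alpha, -beta)
        return stable_pdf_direct(-x, alpha, -beta)
    if np.isclose(x, zeta):      # value of the density at x = zeta
        return gamma(1 + 1 / alpha) * np.cos(xi) / (np.pi * (1 + zeta**2)**(1 / (2 * alpha)))

    def V(theta):
        return (np.cos(alpha * xi)**(1 / (alpha - 1))
                * (np.cos(theta) / np.sin(alpha * (xi + theta)))**(alpha / (alpha - 1))
                * np.cos(alpha * xi + (alpha - 1) * theta) / np.cos(theta))

    def g(theta):
        return (x - zeta)**(alpha / (alpha - 1)) * V(theta)

    # g is monotone on (-xi, pi/2) and crosses 1 exactly once; bisect on the sign of
    # g - 1 to locate the peak theta* of the integrand g * exp(-g)
    lo, hi = -xi + 1e-9, np.pi / 2 - 1e-9
    increasing = g(lo) < g(hi)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if (g(mid) < 1.0) == increasing:
            lo = mid
        else:
            hi = mid
    theta_star = 0.5 * (lo + hi)

    integrand = lambda th: g(th) * np.exp(-g(th))
    I1, _ = quad(integrand, -xi, theta_star)        # from -xi up to the peak
    I2, _ = quad(integrand, theta_star, np.pi / 2)  # from the peak down to pi/2
    return alpha * (I1 + I2) / (np.pi * abs(alpha - 1) * (x - zeta))

print(stable_pdf_direct(1.0, 1.7, 0.5))
\end{verbatim}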