1.4 Fourier Inversion


1.4.1 Error Types in Approximating the Quantile through Fourier Inversion

Let $ f$ denote a continuous, absolutely integrable function and $ \phi(t)=\int_{-\infty}^{\infty} e^{itx}f(x)dx$ its Fourier transform. Then, the inversion formula

$\displaystyle f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx} \phi(t) dt$ (1.16)

holds.

The key to an error analysis of trapezoidal, equidistant approximations to the integral (1.16)

$\displaystyle \ensuremath{\tilde{f}}(x,\Delta_{t},t) \stackrel{\mathrm{def}}{=}\frac{\Delta_{t}}{2\pi} \sum_{k=-\infty}^{\infty} \phi(t + k\Delta_t) e^{-i(t +k\Delta_t) x}$ (1.17)

is the Poisson summation formula

$\displaystyle \ensuremath{\tilde{f}}(x,\Delta_{t},t) = \sum_{j=-\infty}^{\infty} f(x+\frac{2\pi}{\Delta_t}j) e^{2\pi i t j/\Delta_{t}},$ (1.18)

see Abate and Whitt (1992, p. 22). If $ f(x)$ is approximated by $ \ensuremath{\tilde{f}}(x,\Delta_{t},0)$, the residual

$\displaystyle e_{a}(x,\Delta_{t},0) = \sum_{j\ne 0} f(x+\frac{2\pi}{\Delta_t}j)$ (1.19)

is called the aliasing error, since different ``pieces'' of $ f$ are aliased into the window $ (-\pi/\Delta_{t},\pi/\Delta_{t})$. Another suitable choice is $ t=\Delta_{t}/2$:

$\displaystyle \ensuremath{\tilde{f}}(x,\Delta_{t},\Delta_{t}/2) = \sum_{j=-\infty}^{\infty} f(x+\frac{2\pi}{\Delta_t}j)(-1)^{j}.$ (1.20)

If $ f$ is nonnegative, $ \ensuremath{\tilde{f}}(x,\Delta_{t},0)\ge f(x)$. If $ f(x)$ is decreasing in $ \vert x\vert$ for $ \vert x\vert>\pi/\Delta_{t}$, then $ \ensuremath{\tilde{f}}(x,\Delta_{t},\Delta_{t}/2)\le
f(x)$ holds for $ \vert x\vert<\pi/\Delta_{t}$. The aliasing error can be controlled by letting $ \Delta_{t}$ tend to 0. It decreases only slowly when $ f$ has ``heavy tails'', or equivalently, when $ \phi$ has non-smooth features.
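
The bracketing property of the two trapezoidal sums can be checked numerically. The following sketch (in Python rather than XploRe; the standard normal example is our own, not from the text) evaluates (1.17) for $t=0$ and $t=\Delta_{t}/2$ on a deliberately coarse grid:

```python
import numpy as np

def f(x):                         # standard normal density
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def phi(t):                       # its Fourier transform exp(-t^2/2)
    return np.exp(-t**2 / 2)

def f_tilde(x, dt, t0, K=2000):
    # trapezoidal sum (1.17): (dt/2pi) sum_k phi(t0+k*dt) e^{-i(t0+k*dt)x}
    t = t0 + np.arange(-K, K + 1) * dt
    return (dt / (2 * np.pi)) * np.real(np.sum(phi(t) * np.exp(-1j * t * x)))

dt = 1.5                          # coarse grid, so the aliasing is visible
x = 0.3                           # inside the window (-pi/dt, pi/dt)
hi = f_tilde(x, dt, 0.0)          # (1.18) with t = 0: aliases add up
lo = f_tilde(x, dt, dt / 2)       # (1.20): aliases alternate in sign
print(hi >= f(x) >= lo)           # True: f(x) is bracketed
```

For this Gaussian example the aliasing error is of order $f(x \pm 2\pi/\Delta_{t})$, here about $10^{-4}$.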

It is practical to first decide on $ \Delta_{t}$ to control the aliasing error and then decide on the cut-off in the sum (1.17):

$\displaystyle \ensuremath{\tilde{\tilde{f}}}(x,T,\Delta_{t},t)\stackrel{\mathrm{def}}{=}\frac{\Delta_{t}}{2\pi} \sum_{\vert t+k\Delta_{t}\vert\le T} \phi(t+k\Delta_t) e^{-i(t+k\Delta_t) x}.$ (1.21)

Call $ e_{t}(x,T,\Delta_{t},t)\stackrel{\mathrm{def}}{=}\ensuremath{\tilde{\tilde{f}}}(x,T,\Delta_{t},t)-\ensuremath{\tilde{f}}(x,\Delta_{t},t)$ the truncation error.

For practical purposes, the truncation error $ e_{t}(x,T,\Delta_{t},t)$ essentially depends only on $ (x,T)$ and the decision on how to choose $ T$ and $ \Delta_{t}$ can be decoupled. $ e_{t}(x,T,\Delta_{t},t)$ converges to

$\displaystyle e_{t}(x,T)\stackrel{\mathrm{def}}{=}\frac{1}{2\pi} \int\limits_{-T}^{T} e^{-itx} \phi(t) dt - f(x)$ (1.22)

for $ \Delta_{t}\downarrow 0$. Using $ \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{-itx}dt = \frac{\sin(\pi x)}{\pi x} \stackrel{\mathrm{def}}{=}\operatorname{sinc}(x)$ and the convolution theorem, one gets

$\displaystyle \frac{1}{2\pi} \int\limits_{-\pi/\Delta_{x}}^{\pi/\Delta_{x}} e^{-itx} \phi(t)\,dt = \int_{-\infty}^{\infty} f(y\Delta_{x}) \operatorname{sinc}(x/\Delta_{x}-y)dy,$ (1.23)

which provides an explicit expression for the truncation error $ e_{t}(x,T)$ in terms of $ f$. It decreases only slowly with $ T\uparrow\infty$ ( $ \Delta_{x}\downarrow 0$) if $ f$ does not have infinitely many derivatives, or equivalently, $ \phi$ has ``power tails''. The following lemma leads to the asymptotics of the truncation error in this case.

LEMMA 1.1   If $ \lim_{t\to\infty}\alpha(t)=1$, $ \nu>0$, and $ \int_{T}^{\infty}
\alpha(t) t^{-\nu}e^{itx} dt$ exists and is finite for some $ T$, then

\begin{equation}
\int_{T}^{\infty} \alpha(t) t^{-\nu} e^{itx} dt \sim
\begin{cases}
\frac{1}{\nu-1}\, T^{-\nu+1} & \text{if } x = 0 \\
\frac{i}{x}\, T^{-\nu} e^{ixT} & \text{if } x\neq 0
\end{cases}
\end{equation} (1.24)

for $ T\to\infty$.

PROOF. Under the given conditions, both the left and the right hand side converge to 0, so l'Hospital's rule is applicable to the ratio of the left and right hand sides. $ \qedsymbol$

THEOREM 1.1   If the asymptotic behavior of a Fourier transform $ \phi$ of a function $ f$ can be described as

$\displaystyle \phi(t)=w\vert t\vert^{-\nu} e^{i b \operatorname{sign}(t) +i\ensuremath{x_{\ast}}t} \alpha(t)$ (1.25)

with $ \lim_{t\to\infty}\alpha(t)=1$, then the truncation error (1.22)

\begin{align}
e_{t}(x,T) &= - \frac{1}{\pi} \Re \left\{\int_{T}^{\infty}
\phi(t) e^{-itx} dt \right\} \notag \\
&\sim \begin{cases}
-\dfrac{w\cos(b)}{\pi(\nu-1)}\, T^{-\nu+1}
& \text{if } x=\ensuremath{x_{\ast}} \\
\dfrac{w}{\pi(\ensuremath{x_{\ast}}-x)}\, T^{-\nu} \sin\big(b+(\ensuremath{x_{\ast}}-x)T\big)
& \text{if } x\neq\ensuremath{x_{\ast}}
\end{cases} \notag
\end{align}

for $ T\to\infty$ at all points $ x$ where $ \frac{1}{2\pi}\int_{-T}^{T}
\phi(t) e^{-itx}\,dt$ converges to $ f(x)$. (If in the first case $ \cos(b)=0$, this shall mean that $ \lim_{T\to\infty}e_{t}(x,T)\,T^{\nu-1}=0$.)

PROOF. The previous lemma is applicable for all points $ x$ where the Fourier inversion integral converges. $ \qedsymbol$

The theorem completely characterizes the truncation error for those cases where $ f$ has a ``critical point of non-smoothness'' and a higher degree of smoothness everywhere else. Away from the critical point, the truncation error decreases one power of $ T$ faster than at the critical point, with an amplitude inversely proportional to the distance from the critical point.
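
As a concrete check of the theorem (the Laplace example is our own, not from the text): $f(x)=\frac{1}{2}e^{-\vert x\vert}$ has Fourier transform $\phi(t)=1/(1+t^{2})$, i.e. (1.25) holds with $\nu=2$, $w=1$, $b=0$, $x_{\ast}=0$, and $\alpha(t)=t^{2}/(1+t^{2})\to 1$. At the critical point $x=x_{\ast}=0$ the truncation error is available in closed form, $e_{t}(0,T)=\arctan(T)/\pi-1/2$, and the theorem predicts $e_{t}(0,T)\sim -T^{-1}/\pi$:

```python
import numpy as np

# Laplace example: f(x) = exp(-|x|)/2, phi(t) = 1/(1+t^2); this is (1.25)
# with nu = 2, w = 1, b = 0, x_* = 0, alpha(t) = t^2/(1+t^2) -> 1
ratios = []
for T in [10.0, 100.0, 1000.0]:
    e_t = np.arctan(T) / np.pi - 0.5   # exact e_t(0,T) at the critical point
    predicted = -1.0 / (np.pi * T)     # theorem: -w cos(b) T^{1-nu}/(pi(nu-1))
    ratios.append(e_t / predicted)
    print(T, ratios[-1])               # ratio tends to 1
```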

Let $ \ensuremath{\tilde{F}}$ be a (continuous) approximation to a (differentiable) cdf $ F$ with $ f=F'>0$. Denote by $ \epsilon\ge \vert\ensuremath{\tilde{F}}(x) - F(x)\vert$ a known error-bound for the cdf. Any solution $ \ensuremath{\tilde{q}}(x)$ to $ \ensuremath{\tilde{F}}(\ensuremath{\tilde{q}}(x))=F(x)$ may be considered an approximation to the true $ F(x)$-quantile $ x$. Call $ e_{q}(x) = \ensuremath{\tilde{q}}(x)-x$ the quantile error. Obviously, the quantile error can be bounded by

$\displaystyle \vert e_{q}(x)\vert \le \frac{\epsilon}{\inf_{y\in U} f(y)},$ (1.26)

where $ U$ is a suitable neighborhood of $ x$. Given a sequence of approximations $ \ensuremath{\tilde{F}}_{\epsilon}$ with $ \sup_{x}\vert\ensuremath{\tilde{F}}_{\epsilon}(x)-F(x)\vert=\epsilon\to 0$,

$\displaystyle e_{q}(x) \sim \frac{F(x)-\ensuremath{\tilde{F}}_{\epsilon}(x)}{f(x)} \qquad (\epsilon\to 0)$ (1.27)

holds.

FFT-based Fourier inversion yields approximations for the cdf $ F$ on equidistant $ \Delta_{x}$-spaced grids. Depending on the smoothness of $ F$, linear or higher-order interpolations may be used. Any monotone interpolation of $ \{F(x_{0}+\Delta_{x}j)\}_{j}$ yields a quantile approximation whose interpolation error can be bounded by $ \Delta_{x}$. This bound can be improved if an upper bound on the density $ f$ in a suitable neighborhood of the true quantile is known.
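
A minimal sketch of this grid-plus-interpolation step in Python (the standard normal stand-in for $F$ and the helper name are ours, not the XploRe routines):

```python
import numpy as np
from math import erf, sqrt

def Phi(x):                          # standard normal cdf, a stand-in for F
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def quantile_from_grid(a, x, y):
    """Linearly interpolate the a-quantile from a monotone cdf grid (x, y)."""
    j = int(np.searchsorted(y, a))   # first index with y[j] >= a
    w = (a - y[j - 1]) / (y[j] - y[j - 1])
    return x[j - 1] + w * (x[j] - x[j - 1])

dx = 0.01                            # grid spacing, as delivered by the FFT step
x = np.arange(-6.0, 2.0 + dx, dx)
y = np.array([Phi(v) for v in x])    # cdf values on the equidistant grid

q = quantile_from_grid(0.01, x, y)
print(round(q, 3))                   # close to the exact 1%-quantile -2.326
```

For a smooth cdf the linear-interpolation quantile error is far below the crude bound $\Delta_{x}$.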


1.4.2 Tail Behavior

If $ \lambda_{j}=0$ for some $ j$, then $ \vert\phi(t)\vert={\mathcal{O}}(e^{-\delta_{j}^{2}t^{2}/2})$. In the following, we assume that $ \vert\lambda_{i}\vert > 0$ for all $ i$. The modulus of $ \phi(t)$ has the form

$\displaystyle \vert\phi(t)\vert$ $\displaystyle = \prod_{i=1}^{m} (1+\lambda_{i}^{2}t^{2})^{-1/4}\exp\left\{ -\frac{\delta_{i}^{2}t^{2}/2}{1+\lambda_{i}^{2}t^{2}}\right\},$ (1.28)
$\displaystyle \vert\phi(t)\vert$ $\displaystyle \sim \ensuremath{w_{\ast}}\vert t\vert^{-m/2} \qquad (\vert t\vert\to\infty)$ (1.29)

with


$\displaystyle \ensuremath{w_{\ast}}$ $\displaystyle \stackrel{\mathrm{def}}{=}\prod_{i=1}^{m} \vert\lambda_{i}\vert^{-1/2} \exp\left\{-\ensuremath{\frac{1}{2}}(\delta_{i}/\lambda_{i})^{2} \right\}.$ (1.30)

The argument has the form

$\displaystyle \arg \phi(t)$ $\displaystyle = \theta t +\sum_{i=1}^{m}\big\{ \ensuremath{\frac{1}{2}}\arctan(\lambda_{i}t) - \ensuremath{\frac{1}{2}}\delta_{i}^{2}t^{2}\frac{\lambda_{i}t}{1+\lambda_{i}^{2}t^{2}} \big\},$ (1.31)
$\displaystyle \arg \phi(t)$ $\displaystyle \sim \theta t + \sum_{i=1}^{m} \left\{\frac{\pi}{4} \operatorname{sign}(\lambda_{i}t) -\frac{\delta_{i}^{2}t}{2\lambda_{i}}\right\}$ (1.32)

(for $ \vert t\vert\to\infty$). This motivates the following approximation for $ \phi$:

$\displaystyle \tilde\phi(t)$ $\displaystyle \stackrel{\mathrm{def}}{=}\ensuremath{w_{\ast}}\vert t\vert^{-m/2} \exp\big\{ i\frac{\pi}{4} \ensuremath{m_{\ast}}\operatorname{sign}(t) + i\ensuremath{x_{\ast}}t \big\}$ (1.33)

with


$\displaystyle \ensuremath{m_{\ast}}$ $\displaystyle \stackrel{\mathrm{def}}{=}\sum_{i=1}^{m} \operatorname{sign}(\lambda_{i}),$ (1.34)
$\displaystyle \ensuremath{x_{\ast}}$ $\displaystyle \stackrel{\mathrm{def}}{=}\theta - \ensuremath{\frac{1}{2}}\sum_{i=1}^{m}\frac{\delta_{i}^{2}}{\lambda_{i}}.$ (1.35)

$ x_{\ast}$ is the location and $ w_{\ast}$ the ``weight'' of the singularity. The multivariate delta-gamma-distribution is $ C^{\infty}$ except at $ \ensuremath{x_{\ast}}$, where the highest continuous derivative of the cdf is of order $ [(m-1)/2]$.

Note that

$\displaystyle \alpha(t)\stackrel{\mathrm{def}}{=}\phi(t)/\tilde\phi(t) =\prod_{j=1}^{m} \big(1-(i\lambda_{j}t)^{-1}\big)^{-1/2} \exp\big\{\ensuremath{\frac{1}{2}}\frac{\delta_{j}^{2}}{\lambda_{j}^{2}} (1-i\lambda_{j}t)^{-1}\big\}$ (1.36)

and $ \alpha$ meets the assumptions of Theorem 1.1.
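
The quality of the approximation (1.33) can be checked numerically by evaluating $\phi/\tilde\phi$ for large $t$. The sketch below assumes the one-factor characteristic function $\phi(t)=e^{i\theta t}(1-i\lambda t)^{-1/2}\exp\{-\frac{1}{2}\delta^{2}t^{2}/(1-i\lambda t)\}$, which is consistent with (1.28) and (1.31); the parameter values are hypothetical:

```python
import numpy as np

# hypothetical one-factor parameters (lambda != 0, as assumed in the text)
lam, delta, theta = -np.sqrt(2.0), 0.5, 0.3

def phi(t):   # delta-gamma cf, consistent with (1.28) and (1.31)
    return np.exp(1j * theta * t) * (1 - 1j * lam * t) ** -0.5 \
        * np.exp(-0.5 * delta**2 * t**2 / (1 - 1j * lam * t))

w_star = abs(lam) ** -0.5 * np.exp(-0.5 * (delta / lam) ** 2)   # (1.30)
m_star = np.sign(lam)                                           # (1.34)
x_star = theta - 0.5 * delta**2 / lam                           # (1.35)

def phi_tilde(t):                                               # (1.33)
    return w_star * abs(t) ** -0.5 \
        * np.exp(1j * (np.pi / 4) * m_star * np.sign(t) + 1j * x_star * t)

errs = {}
for t in [10.0, 100.0, 1000.0]:
    errs[t] = abs(phi(t) / phi_tilde(t) - 1)
    print(t, errs[t])                # alpha(t) -> 1, so the error tends to 0
```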


1.4.3 Inversion of the cdf minus the Gaussian Approximation

Assume that $ F$ is a cdf with mean $ \mu$ and standard deviation $ \sigma$. Then

$\displaystyle F(x) - \Phi(x;\mu,\sigma) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-ixt} \frac{i}{t}(\phi(t) - e^{i\mu t -\sigma^{2}t^{2}/2}) \,dt$ (1.37)

holds, where $ \Phi(.;\mu,\sigma)$ is the normal cdf with mean $ \mu$ and standard deviation $ \sigma$ and $ e^{i\mu t -\sigma^{2}t^{2}/2}$ its characteristic function. (Integrating the inversion formula (1.16) w.r.t. $ x$ and applying Fubini's theorem leads to (1.37).) Applying the Fourier inversion to $ F(x) - \Phi(x;\mu,\sigma)$ instead of $ F(x)$ solves the (numerical) problem that $ \frac{i}{t}\phi(t)$ has a pole at 0. Alternative distributions with known Fourier transform may be chosen if they better approximate the distribution $ F$ under consideration.

The moments of the delta-gamma-distribution can be derived from (1.3) and (1.5):

$\displaystyle \mu = \sum_{i} \big(\theta_{i} + \ensuremath{\frac{1}{2}}\lambda_{i}\big) = \theta^{\top}\mathop{\hbox{$1$\hskip-4pt$1$}} + \ensuremath{\frac{1}{2}}\mathop{\hbox{tr}}(\Gamma\Sigma)
$

and

$\displaystyle \sigma^{2} = \sum_{i} \big(\delta_{i}^2 + \ensuremath{\frac{1}{2}}\lambda_{i}^{2}\big) = \Delta^{\top}\Sigma \Delta + \ensuremath{\frac{1}{2}}\mathop{\hbox{tr}}((\Gamma\Sigma)^{2}).
$
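
These moment formulas can be cross-checked against the characteristic function via the expansion $\phi(h)=1+i\mu h-\frac{1}{2}\mathop{\textrm{E}}[X^{2}]h^{2}+{\mathcal{O}}(h^{3})$. A Python sketch with hypothetical two-factor parameters ($\theta$ taken as the aggregate $\sum_{i}\theta_{i}$):

```python
import numpy as np

# hypothetical two-factor parameters (lambda_i != 0, as assumed in the text);
# theta is the aggregate sum of the theta_i
lam   = np.array([1.2, -0.7])
delta = np.array([0.5, 0.8])
theta = 0.1

def phi(t):   # delta-gamma characteristic function in diagonalized form
    return np.exp(1j * theta * t) * np.prod(
        (1 - 1j * lam * t) ** -0.5
        * np.exp(-0.5 * delta**2 * t**2 / (1 - 1j * lam * t)))

mu  = theta + 0.5 * lam.sum()           # = theta + tr(Gamma Sigma)/2
var = (delta**2 + 0.5 * lam**2).sum()   # = Delta'Sigma Delta + tr((Gamma Sigma)^2)/2

# finite-difference check: phi(h) = 1 + i mu h - E[X^2] h^2/2 + O(h^3)
h = 1e-3
mu_num  = np.imag(phi(h)) / h
ex2_num = 2 * (1 - np.real(phi(h))) / h**2
print(mu, float(mu_num))                       # both ~ 0.35
print(var, float(ex2_num - mu_num**2))         # both ~ 1.855
```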

Let $ \psi(t)\stackrel{\mathrm{def}}{=}\frac{i}{t}(\phi(t) - e^{i\mu t -\sigma^{2}t^{2}/2})$. Since $ \psi(-t)=\overline{\psi(t)}$, the truncated sum (1.21) can for $ t=\Delta_{t}/2$ and $ T=(K-\ensuremath{\frac{1}{2}})\Delta_{t}$ be written as

$\displaystyle \ensuremath{\tilde{\tilde{F}}}(x_{j};T,\Delta_{t},\Delta_{t}/2) - \Phi(x_{j};\mu,\sigma) = \frac{\Delta_{t}}{\pi}\Re\left(\sum_{k=0}^{K-1} \psi((k+\ensuremath{\frac{1}{2}})\Delta_t) e^{-i((k+\ensuremath{\frac{1}{2}})\Delta_t) x_{j}}\right),$    

which can comfortably be computed by an FFT of length $ N\ge K$:

$\displaystyle = \frac{\Delta_{t}}{\pi}\Re \big(e^{-i \frac{\Delta_{t}}{2}x_{j}} \sum_{k=0}^{K-1} \psi((k+\ensuremath{\frac{1}{2}})\Delta_t) e^{-i k\Delta_t x_{0}} e^{-2\pi i k j/N}\big),$ (1.38)

with $ x_{j}=x_{0}+j\Delta_{x}$, $ \Delta_{x}\Delta_{t}=\frac{2\pi}{N}$, and the last $ N-K$ components of the input vector to the FFT padded with zeros.
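
The scheme (1.38) can be sketched in a few lines of Python (numpy's FFT in place of the XploRe routines; the grid constants are illustrative). As a test case we take the one-factor parameters $(\lambda,\delta,\theta)=(\sqrt{2},0,-\sqrt{2}/2)$, for which $X=\theta+\frac{\lambda}{2}Z^{2}$ has the closed-form cdf $F(x)=\operatorname{erf}\big(\sqrt{\sqrt{2}(x-\theta)}/\sqrt{2}\big)$ for $x\ge\theta$:

```python
import numpy as np
from math import erf, sqrt

# one-factor test case (lambda, delta, theta) = (sqrt(2), 0, -sqrt(2)/2)
lam, delta, theta = np.sqrt(2.0), 0.0, -np.sqrt(2.0) / 2
mu = theta + lam / 2                    # mean, = 0 here
sig = np.sqrt(delta**2 + lam**2 / 2)    # standard deviation, = 1 here

def psi(t):                             # (i/t)(phi(t) - Gaussian cf)
    phi = np.exp(1j * theta * t) * (1 - 1j * lam * t) ** -0.5 \
        * np.exp(-0.5 * delta**2 * t**2 / (1 - 1j * lam * t))
    return (1j / t) * (phi - np.exp(1j * mu * t - 0.5 * sig**2 * t**2))

K, N, dt = 1024, 4096, 0.1              # illustrative constants, N = 4K
dx = 2 * np.pi / (N * dt)               # grid spacing from dx*dt = 2pi/N
x0 = -10.0                              # left end of the x-grid
k = np.arange(K)
a = np.zeros(N, dtype=complex)          # input vector, zero-padded to length N
a[:K] = psi((k + 0.5) * dt) * np.exp(-1j * k * dt * x0)
x = x0 + np.arange(N) * dx
gauss = np.array([0.5 * (1 + erf((v - mu) / (sig * sqrt(2.0)))) for v in x])
F = (dt / np.pi) * np.real(np.exp(-0.5j * dt * x) * np.fft.fft(a)) + gauss

jj = int(round((1.0 - x0) / dx))        # grid index closest to x = 1
exact = erf(sqrt(np.sqrt(2.0) * (x[jj] - theta)) / sqrt(2.0))
print(abs(F[jj] - exact))               # small: aliasing + truncation error
```

The sign convention of `numpy.fft.fft` ($\sum_k a_k e^{-2\pi i kj/N}$) matches (1.38) directly, so no conjugation is needed.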

The aliasing error of the approximation (1.20) applied to $ F-\Phi$ is

$\displaystyle e_{a}(x,\Delta_{t},\Delta_{t}/2) = \sum_{j\ne 0} \left[F(x+\frac{2\pi}{\Delta_t}j) - \Phi(x+\frac{2\pi}{\Delta_t}j)\right](-1)^{j}.$ (1.39)

The cases $ (\lambda,\delta,\theta)=(\pm\sqrt{2},0,\mp\sqrt{2}/2)$ are the ones with the fattest tails and are thus candidates for the worst case for (1.39), asymptotically for $ \Delta_{t}\to 0$. In these cases, (1.39) is eventually an alternating series with terms of decreasing absolute value, and thus

$\displaystyle F(-\pi/\Delta_{t}) + 1 - F(\pi/\Delta_{t}) \le \sqrt{\frac{2}{\pi e}} e^{-\ensuremath{\frac{1}{2}}\sqrt{2}\pi/\Delta_{t}}$ (1.40)

is an asymptotic bound for the aliasing error.

The truncation error (1.22) applied to $ F-\Phi$ is

$\displaystyle e_{t}(x,T) = -\frac{1}{\pi}\Re\left\{\int_{T}^{\infty} e^{-itx}\,\frac{i}{t}\big(\phi(t) - e^{i\mu t-\sigma^{2}t^{2}/2}\big)dt\right\}.$ (1.41)

The Gaussian part plays no role asymptotically for $ T\to\infty$ and Theorem 1.1 applies with $ \nu=m/2+1$.

The quantile error for a given parameter $ \vartheta$ is

$\displaystyle \ensuremath{\tilde{q}}(\vartheta) - q(\vartheta) \sim -\frac{e_{a}^{\vartheta}(q(\vartheta);\Delta_{t}) + e_{t}^{\vartheta}(q(\vartheta);T)}{f^{\vartheta}(q(\vartheta))},$ (1.42)

asymptotically for $ T\to\infty$ and $ \Delta_{t}\to 0$. ( $ q(\vartheta)$ denotes the true 1%-quantile for the triplet $ \vartheta=(\theta,\Delta,\Gamma)$.) The problem is now to find the right trade-off between ``aliasing error'' and ``truncation error'', i.e., to choose $ \Delta_{t}$ optimally for a given $ K$.

Empirical observation of the one- and two-factor cases shows that $ (\lambda,\delta,\theta)=(-\sqrt{2},0,\sqrt{2}/2)$ has the smallest density ( $ \approx 0.008$) at the 1%-quantile. Since $ (\lambda,\delta,\theta)=(-\sqrt{2},0,\sqrt{2}/2)$ is the case with the maximal ``aliasing error'' as well, it is the only candidate for the worst case of the ratio of the ``aliasing error'' over the density (at the 1%-quantile).

The question which $ \vartheta$ is the worst case for the ratio of the ``truncation error'' over the density (at the 1%-quantile) is not as clear-cut. Empirical observation shows that the case $ (\lambda,\delta,\theta)=(-\sqrt{2},0,\sqrt{2}/2)$ is also the worst case for this ratio over a range of parameters in one- and two-factor problems. This leads to the following heuristic for choosing $ \Delta_{t}$ for a given $ K$ ( $ T=(K-0.5)\Delta_{t}$): choose $ \Delta_{t}$ to minimize the sum of the aliasing and truncation errors for the case $ (\lambda,\delta,\theta)=(-\sqrt{2},0,\sqrt{2}/2)$, as approximated by the bounds (1.40) and

$\displaystyle \limsup_{T\to \infty} \vert e_{t}(x,T)\vert T^{3/2} = \frac{w}{\pi\vert\ensuremath{x_{\ast}}-x\vert}$ (1.43)

with $ w=2^{-1/4}$, $ \ensuremath{x_{\ast}}=\sqrt{2}/2$, and the 1%-quantile $ x\approx
-3.98$. (Note that this is suitable only for intermediate $ K$, leading to accuracies of 1 to 4 digits in the quantile. For higher $ K$, other cases become the worst case for the ratio of the truncation error over the density at the quantile.)
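
This heuristic amounts to a one-dimensional minimization of the sum of the two bounds; a Python sketch (the grid search over $\Delta_{t}$ is illustrative):

```python
import numpy as np

# worst case (lambda, delta, theta) = (-sqrt(2), 0, sqrt(2)/2):
# w = 2^(-1/4), x_* = sqrt(2)/2, 1%-quantile x ~ -3.98
w, x_star, x = 2.0 ** -0.25, np.sqrt(2.0) / 2, -3.98

def err_bound(dt, K):
    T = (K - 0.5) * dt
    aliasing = np.sqrt(2 / (np.pi * np.e)) \
        * np.exp(-0.5 * np.sqrt(2.0) * np.pi / dt)          # bound (1.40)
    truncation = w / (np.pi * abs(x_star - x)) * T ** -1.5  # bound (1.43)
    return aliasing + truncation

results = {}
for K in [2**6, 2**9]:
    dts = np.linspace(0.01, 2.0, 2000)      # grid search over dt
    best = dts[np.argmin(err_bound(dts, K))]
    results[K] = (float(best), float(err_bound(best, K)))
    print(K, round(results[K][0], 3), results[K][1])
```

The aliasing bound grows and the truncation bound shrinks with $\Delta_{t}$, so the minimizer is interior; larger $K$ pushes the optimal $\Delta_{t}$ down and the total error bound down.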

Since $ F-\Phi$ has a kink in the case $ m=1$, $ \lambda\neq 0$, higher-order interpolations are futile in non-adaptive methods and $ \Delta_{x}=\frac{2\pi}{N\Delta_{t}}$ is a suitable upper bound for the interpolation error. By experimentation, $ N\approx 4K$ suffices to keep the interpolation error comparatively small.

$ K=2^{6}$ evaluations of $ \phi$ ($ N=2^{8}$) suffice to ensure an accuracy of 1 digit in the approximation of the 1%-quantile over a sample of one- and two-factor cases. $ K=2^{9}$ function evaluations are needed for two digits accuracy. The XploRe implementation of the Fourier inversion is split up as follows:


z = VaRcharfDGF2(t, par) implements the function $ \psi(t)\stackrel{\mathrm{def}}{=}\frac{i}{t}(\phi(t) - e^{i\mu t -\sigma^{2}t^{2}/2})$ for the complex argument t and the parameter list par.

z = VaRcorrfDGF2(x, par) implements the correction term $ \Phi(x;\mu,\sigma)$ for the argument x and the parameter list par.

vec = gFourierInversion(N, K, dt, t0, x0, charf, par) implements a generic Fourier inversion like in (1.38). charf is a string naming the function to be substituted for $ \psi $ in (1.38). par is the parameter list passed to charf.

gFourierInversion can be applied to VaRcharfDG , giving the density, or to VaRcharfDGF2 , giving the cdf minus the Gaussian approximation. The three auxiliary functions are used by


l = VaRcdfDG(par, N, K, dt) to approximate the cumulative distribution function (cdf) of the distribution from the class of quadratic forms of Gaussian vectors with parameter list par. The output is a list of two vectors x and y, containing the cdf approximation on a grid given by x.

q = cdf2quant(a, l) approximates the a-quantile from the list l, as returned by VaRcdfDG .

q = VaRqDG(a, par, N, K, dt) calls VaRcdfDG and cdf2quant to approximate an a-quantile for the distribution of the class of quadratic forms of Gaussian vectors that is defined by the parameter list par.

The following example plots the 1%-quantile for a one-parametric family of the class of quadratic forms of one- and two-dimensional Gaussian vectors:




XFGqDGtest.xpl