22.1 Integration Theory

Definition 22.1  
A decomposition $ {\cal Z}$ of the interval $ [a,b]$ is understood to be a set $ {\cal Z}\stackrel{\mathrm{def}}{=}\{ t_0,t_1,\ldots,t_n \}$ of points $ t_j$ with $ a = t_0 < t_1 < \ldots < t_n = b$. Through this the interval $ [a,b]$ is decomposed into $ n$ sub-intervals $ [t_k,t_{k+1}],$ where $ k=0,1,2,\ldots,n-1.$ The quantity $ \vert{\cal Z}\vert \stackrel{\mathrm{def}}{=}\max_k(t_{k+1}-t_k)$, the length of the largest resulting sub-interval, is referred to as the refinement of the decomposition $ {\cal Z}$.

Definition 22.2  
For a function $ w :[a,b]\longrightarrow\mathbb{R}$ and a decomposition $ {\cal Z}
\stackrel{\mathrm{def}}{=}\{ t_0,t_1,\ldots,t_n \}$ one defines the variation of $ w$ with respect to $ {\cal
Z}$ as:

$\displaystyle V({\cal Z})\stackrel{\mathrm{def}}{=}\sum_{k=0}^{n-1}\vert w(t_{k+1})-w(t_k)\vert $

$ V \stackrel{\mathrm{def}}{=}\sup\limits_{{\cal Z}}V({\cal Z})$ is called the total variation of $ w$ on $ [a,b]$. If $ V<\infty$ holds, then $ w$ is of finite variation on $ [a,b]$.
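
As a small numerical illustration of Definition 22.2 (a sketch added here, not part of the original text; the function and decompositions are arbitrary choices), the variation $ V({\cal Z})$ can be computed directly from its defining sum, and refining $ {\cal Z}$ pushes $ V({\cal Z})$ towards the total variation $ V$:

    import numpy as np

    def variation(w, partition):
        # V(Z) = sum_k |w(t_{k+1}) - w(t_k)| for the decomposition Z = {t_0 < ... < t_n}
        t = np.asarray(partition)
        return np.sum(np.abs(np.diff(w(t))))

    # Example: w(t) = sin(t) on [0, 2*pi]; its total variation is V = 4
    # (the monotone pieces contribute 1 + 2 + 1).
    for n in (10, 100, 1000):
        Z = np.linspace(0.0, 2.0 * np.pi, n + 1)
        print(n, variation(np.sin, Z))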

Theorem 22.1  
For a function $ w :[a,b]\longrightarrow\mathbb{R}$ it holds that:
  1. $ w$ is of finite variation if $ w$ is monotone (see the telescoping argument after this theorem),
  2. $ w$ is of finite variation if $ w$ is Lipschitz continuous,
  3. $ w$ is bounded if $ w$ is of finite variation.
Moreover, sums, differences and products of functions of finite variation are themselves of finite variation.
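
Part 1, for instance, follows from a telescoping argument: if $ w$ is monotonically increasing, then every summand in $ V({\cal Z})$ equals $ w(t_{k+1})-w(t_k)$, so that for every decomposition $ {\cal Z}$

$\displaystyle V({\cal Z}) = \sum_{k=0}^{n-1}\{w(t_{k+1})-w(t_k)\} = w(b)-w(a), $

and hence $ V = w(b)-w(a) < \infty$; the decreasing case is analogous.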

Definition 22.3  
Given the functions $ f,w : [a,b] \to \mathbb{R}$ and a decomposition $ {\cal Z}$, choose for $ k=0,1,\ldots,n-1$ intermediate points $ \tau_k\in[t_k,t_{k+1}]$ and form:

$\displaystyle I({\cal Z},\tau) \stackrel{\mathrm{def}}{=}\sum_{k=0}^{n-1} f(\tau_k)\cdot \{w(t_{k+1})-w(t_k)\} $

If $ I({\cal Z},\tau)$ converges for $ \vert{\cal Z}\vert \to 0$ to a limiting value $ I$, which depends neither on the chosen decomposition $ {\cal Z}$ nor on the choice of the intermediate points $ \tau_k$, then $ I$ is called the Riemann-Stieltjes integral of $ f$ with respect to $ w$. One writes:

$\displaystyle I=\int_a^b f(t) dw(t).$

For $ w(t)=t$ we obtain the Riemann integral as a special case of the Stieltjes integral.
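
The defining sum lends itself directly to numerical approximation. The following sketch (illustrative only; the function names and the choice of intermediate points are arbitrary) evaluates $ I({\cal Z},\tau)$ for $ f(t)=t$ and $ w(t)=t^2$ on $ [0,1]$, where the limit is $ \int_0^1 t\, d(t^2) = \int_0^1 2t^2\, dt = 2/3$:

    import numpy as np

    def riemann_stieltjes_sum(f, w, a, b, n):
        # I(Z, tau) for an equidistant decomposition Z with left endpoints tau_k = t_k
        t = np.linspace(a, b, n + 1)
        tau = t[:-1]
        return np.sum(f(tau) * np.diff(w(t)))

    for n in (10, 100, 1000):
        print(n, riemann_stieltjes_sum(lambda t: t, lambda t: t**2, 0.0, 1.0, n))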

Theorem 22.2 (Characteristics of the Riemann-Stieltjes Integral)  
  1. If the corresponding integrals on the right-hand side exist, then the following linearity properties hold:

    $\displaystyle \int_a^b (\alpha\cdot f+\beta\cdot g)~dw = \alpha\int_a^b f~dw + \beta\int_a^b g~dw \quad(\alpha,\beta\in\mathbb{R})$

    $\displaystyle \int_a^b f~d(\alpha\cdot w+\beta\cdot v) = \alpha\int_a^b f~dw + \beta\int_a^b f~dv \quad(\alpha,\beta\in\mathbb{R})$

  2. If the integrals $ \int_a^b fdw$ and $ \int_a^c fdw$ exist for $ a<c<b$, then $ \int_c^b f dw$ exists as well and it holds that:

    $\displaystyle \int_a^b fdw = \int_a^c fdw + \int_c^b fdw $

  3. If $ f$ is continuous on $ [a,b]$ and $ w$ is of finite variation, then $ \int_a^b fdw$ exists.
  4. If $ f$ is continuous on $ [a,b]$ and $ w$ is differentiable with a bounded derivative, then it holds that:

    $\displaystyle \int_a^b f(t)dw(t) = \int_a^b f(t)\cdot w'(t) dt$

  5. Partial integration: If one of the integrals $ \int_a^b fdg$ or $ \int_a^b gdf$ exists, then so does the other, and it holds that:

    $\displaystyle \int_a^b fdg + \int_a^b gdf = f(b)g(b)-f(a)g(a) $

  6. If $ w$ is continuous, it holds that $ \int_a^b dw(t) =
w(b) - w(a)$
  7. If $ f$ is continuous on $ [a,b]$ and $ w$ is piecewise constant with discontinuity points $ \{c_k,\;k=1,\ldots,m\}$, then (see the numerical sketch after this list):

    $\displaystyle \int_a^b fdw = \sum_{k=1}^m f(c_k)\cdot\Big\{w(c_k^+)-w(c_k^-)\Big\} $

    where $ w(c_k^+)$ ($ w(c_k^-)$) denotes the right-hand (left-hand) limit of $ w$ at $ c_k$, so that $ w(c_k^+)-w(c_k^-)$ is the jump height of $ w$ at $ c_k.$
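
The step-function case of part 7 can be checked numerically with the Riemann-Stieltjes sums from Definition 22.3. This sketch (illustrative only; the jump point $ c=0.5$ and the integrand are arbitrary choices) uses $ w(t)=\boldsymbol{1}(t\ge 0.5)$, so that the sums should converge to $ f(0.5)$:

    import numpy as np

    def rs_sum(f, w, a, b, n):
        # Riemann-Stieltjes sum with midpoints as intermediate points tau_k
        t = np.linspace(a, b, n + 1)
        tau = 0.5 * (t[:-1] + t[1:])
        return np.sum(f(tau) * np.diff(w(t)))

    f = np.cos                                  # continuous integrand
    w = lambda t: (t >= 0.5).astype(float)      # single jump of height 1 at c = 0.5
    for n in (11, 101, 1001):
        print(n, rs_sum(f, w, 0.0, 1.0, n))
    print("f(0.5) =", np.cos(0.5))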

Theorem 22.3 (Radon-Nikodym)  
Let $ \lambda$ and $ \mu$ be positive measures on $ (\Omega,\cal{F})$ with
  1. $ 0<\mu(\Omega) < \infty$ and $ 0 < \lambda(\Omega)<\infty$,
  2. $ \lambda$ absolutely continuous with respect to $ \mu$, that is, from $ \mu(A)=0$ it follows that $ \lambda(A)=0$ for all $ A\in\cal{F}$ (written: $ \lambda\ll\mu$).
Then there exists a non-negative $ \cal{F}$-measurable function $ h$ on $ \Omega$ such that:

$\displaystyle \forall A \in\cal{F}: \quad \lambda(A) = \int_{A} h\; d\mu.$
In particular, for all non-negative measurable (or $ \lambda$-integrable) functions $ f$ it holds that:

$\displaystyle \int f d\lambda = \int f\cdot h \; d\mu.$

Remark 22.1  
One often uses the abbreviation $ \lambda=h\cdot\mu$ in the Radon-Nikodym theorem and refers to $ h$ as the density of $ \lambda$ with respect to $ \mu$. Due to its construction $ h$ is also referred to as the Radon-Nikodym derivative. In this case one writes $ h=\frac{d\lambda}{d\mu}$.
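
A simple illustration (added here; it is not part of the original text): let $ \mu$ be the distribution of a standard normal random variable and $ \lambda$ the distribution of a normal random variable with mean $ m$ and variance 1. Both measures are equivalent, and the ratio of their Lebesgue densities yields the Radon-Nikodym derivative

$\displaystyle h(x) = \frac{d\lambda}{d\mu}(x) = \frac{\exp\{-\frac12(x-m)^2\}}{\exp\{-\frac12 x^2\}} = \exp\{mx-\tfrac12 m^2\}, $

which is exactly the factor appearing component-wise in the following example.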

An important tool in stochastic analysis is the transformation of measure, which is illustrated in the following example.

Example 22.1   Let $ Z_1, \ldots, Z_n$ be independent random variables with standard normal distributions on the probability space $ (\Omega,\cal{F},{\P})$ and $ \mu_1,\ldots,\mu_n \in \mathbb{R}$. Then by

$\displaystyle {\rm Q}(d\omega)\stackrel{\mathrm{def}}{=}\xi(\omega) \cdot {\P}(d\omega) \quad \textrm{ with } \quad \xi(\omega)\stackrel{\mathrm{def}}{=}\exp\Big\{\sum\limits_{i=1}^n \big(\mu_iZ_i(\omega)-\frac12\mu_i^2\big)\Big\}$

a probability measure $ {\rm Q}$ equivalent to $ {\P}$ is defined. For the distribution of $ Z_1, \ldots, Z_n$ under the new measure $ {\rm Q}$ it holds that:
    $\displaystyle {\rm Q}(Z_1\in dz_1,\ldots,Z_n\in dz_n)$
    $\displaystyle \qquad= \exp\Big\{\sum\limits_{i=1}^n \big(\mu_iz_i-\frac12\mu_i^2\big)\Big\} \cdot {\P}(Z_1\in dz_1,\ldots,Z_n\in dz_n)$
    $\displaystyle \qquad= \exp\Big\{\sum\limits_{i=1}^n \big(\mu_iz_i-\frac12\mu_i^2\big)\Big\} \cdot (2\pi)^{-\frac{n}2}\exp\Big\{-\frac12\sum\limits_{i=1}^n z_i^2\Big\}\; dz_1 \ldots dz_n$
    $\displaystyle \qquad= (2\pi)^{-\frac{n}2}\exp\Big\{-\frac12\sum\limits_{i=1}^n (z_i-\mu_i)^2\Big\}\; dz_1 \ldots dz_n,$

in other words $ Z_1, \ldots, Z_n$ are, with respect to $ {\rm Q}$, independent and normally distributed with expectations $ {\mathop{\text{\rm\sf E}}}_{{\rm Q}}(Z_i)=\mu_i$ and variances $ {\mathop{\text{\rm\sf E}}}_{{\rm Q}}[(Z_i-\mu_i)^2]=1.$ Thus the random variables $ \widetilde {Z_i}\stackrel{\mathrm{def}}{=}Z_i-\mu_i$ are independent random variables with standard normal distributions on the probability space $ (\Omega,\cal{F},{\rm Q}).$

Going from $ {\P}$ to $ {\rm Q}$ by multiplying by $ \xi$ thus changes the expectations of the normally distributed random variables, while, notably, the volatility structure remains unaffected.
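
A Monte Carlo sketch of this change of measure (illustrative only; the values of $ \mu_i$ are arbitrary). Expectations under $ {\rm Q}$ are estimated as $ \xi$-weighted averages under $ {\P}$, using $ {\mathop{\text{\rm\sf E}}}_{{\rm Q}}[g(Z)]={\mathop{\text{\rm\sf E}}}_{\P}[\xi\, g(Z)]$:

    import numpy as np

    rng = np.random.default_rng(0)
    mu = np.array([0.5, -1.0])                    # arbitrary choice of mu_1, mu_2
    Z = rng.standard_normal((1_000_000, 2))       # Z_i i.i.d. N(0,1) under P
    xi = np.exp(Z @ mu - 0.5 * np.sum(mu**2))     # density of Q with respect to P

    print("E_P[xi]    =", xi.mean())                                     # close to 1
    print("E_Q[Z_i]   =", np.average(Z, axis=0, weights=xi))             # close to mu_i
    print("Var_Q[Z_i] =", np.average((Z - mu)**2, axis=0, weights=xi))   # close to 1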

The following Girsanov theorem generalizes this method to the continuous case, that is, it constructs for a given $ {\P}$-Brownian motion $ W_t$ an equivalent measure $ {\rm Q}$ and an appropriately adjusted process $ W^*_t$, such that $ W^*_t$ is a $ {\rm Q}$-Brownian motion. In doing so, the ("arbitrarily" given) expectations $ \mu_i$ are replaced by an ("arbitrarily" given) drift, that is, by a stochastic process $ X_t$.

Theorem 22.4 (Girsanov)  
Let $ (\Omega,\cal{F},{\P})$ be a probability space, $ W_t$ a Brownian motion with respect to $ {\P}$, $ {\cal{F}}_t$ a filtration in $ \cal{F}$ and $ X_t$ an adapted stochastic process. Then

$\displaystyle \xi_t\stackrel{\mathrm{def}}{=}\exp(\int_0^t X_u dW_u-\frac{1}{2}\int_0^tX_u^2du) $

defines a martingale with respect to $ {\P}$ and $ {\cal{F}}_t$. The process $ W^*_t$ defined by

$\displaystyle W^*_t\stackrel{\mathrm{def}}{=}W_t - \int_0^t X_udu $

is a Wiener process with respect to $ {\rm Q}$ and the filtration $ {\cal{F}}_t$, where

$\displaystyle {\rm Q}\stackrel{\mathrm{def}}{=}\xi_T\cdot {\P}$ (22.1)

is a probability measure equivalent to $ {\P}$.

The Girsanov theorem thus shows that for a $ {\P}$-Brownian motion $ W_t$ an equivalent probability measure $ {\rm Q}$ can be found such that $ W_t$, considered under $ {\rm Q}$, contains the drift $ X_t$; more precisely, $ W_t = W^*_t + \int_0^t X_u du$ with a $ {\rm Q}$-Brownian motion $ W^*_t$. Here (22.1) means that $ \int_\Omega \boldsymbol{1}(\omega\in A)\, d{\rm Q}(\omega)= {\rm Q}(A)\stackrel{\mathrm{def}}{=}\int_\Omega \boldsymbol{1}(\omega\in A)\,\xi_T(\omega)\, d{\P}(\omega) = {\mathop{\text{\rm\sf E}}}_{\P}[\boldsymbol{1}(\omega\in A)\, \xi_T]$ for all $ A \in \cal{F}.$
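
A discretized sketch of this statement (illustrative only; the constant drift process $ X_t\equiv 0.8$ is a hypothetical simplification, for which the stochastic integral $ \int_0^T X_u dW_u$ reduces to $ X\, W_T$):

    import numpy as np

    rng = np.random.default_rng(1)
    n_paths, n_steps, T = 100_000, 50, 1.0
    dt = T / n_steps
    dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
    W = np.cumsum(dW, axis=1)                         # P-Brownian paths on the time grid

    X = 0.8                                           # constant drift process X_t
    xi_T = np.exp(X * W[:, -1] - 0.5 * X**2 * T)      # density xi_T of Q with respect to P
    W_star = W - X * np.linspace(dt, T, n_steps)      # W*_t = W_t - int_0^t X_u du

    k = n_steps // 2 - 1                              # grid index with t = (k + 1) * dt = 0.5
    print("E_Q[W*_t]   =", np.average(W_star[:, k], weights=xi_T))      # close to 0
    print("Var_Q[W*_t] =", np.average(W_star[:, k]**2, weights=xi_T))   # close to t = 0.5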

Remark 22.2  
With the relationships mentioned above, $ \xi_t$ is indeed a martingale with respect to $ {\P}$ and $ {\cal{F}}_t$ when the so-called Novikov condition

$\displaystyle {\mathop{\text{\rm\sf E}}}_{\P}\Big[\exp\Big(\frac12\int_0^t X_u^2du\Big)\Big] < \infty \quad \textrm{ for all } t\in [0,T]$

is met, that is, when $ X_t$ does not fluctuate too strongly.

Another important tool used to derive the Black-Scholes formula by means of martingale theory is the martingale representation theorem. It states that, under certain assumptions, every $ {\rm Q}$-martingale can be represented as a stochastic integral with respect to a predetermined $ {\rm Q}$-martingale by means of a square-integrable process.

Theorem 22.5 (Martingale Representation theorem)  
Let $ M_t$ be a martingale with respect to the probability measure $ {\rm Q}$ and the filtration $ {\cal{F}}_t$, whose volatility process $ \sigma_t$ satisfies, $ {\rm Q}$-almost surely, $ \sigma_t \neq 0$ for all $ t\in[0,T]$, where $ \sigma^2_t={\mathop{\text{\rm\sf E}}}_{{\rm Q}}[M^2_t\vert{\cal{F}}_t].$ If $ N_t$ is another martingale with respect to $ {\rm Q}$ and $ {\cal{F}}_t$, then there exists a (uniquely determined) $ {\cal{F}}_t$-adapted stochastic process $ H_t$ with $ \int_0^T H^2_t\sigma^2_tdt <\infty$ such that:

$\displaystyle N_t = N_0 + \int_0^t H_sdM_s .$    

Example 22.2   It is easy to show that the standard Wiener process $ W_t$ with respect to the probability measure $ {\P}$ is a martingale with respect to $ {\P}$ and its corresponding filtration $ {\cal{F}}_t$. If $ X_t$ is another martingale with respect to $ {\P}$ and $ {\cal{F}}_t$, then according to the previous theorem there exists an $ {\cal{F}}_t$-adapted stochastic process $ H_t$, so that

$\displaystyle X_t = X_0 + \int_0^t H_s dW_s .$

Remark 22.3  
Writing the last expression in differential form:

$\displaystyle dX_t = H_tdW_t. $

The example shows once again that a martingale cannot possess a drift.
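
A concrete instance of Example 22.2 (a standard consequence of Itô's formula, added here for illustration): the process $ X_t = W_t^2 - t$ is a martingale with respect to $ {\P}$ and $ {\cal{F}}_t$, and Itô's formula gives $ d(W_t^2-t) = 2W_t\, dW_t$, so the representing process is $ H_t = 2W_t$, that is, $ W_t^2 - t = \int_0^t 2W_s\, dW_s$ and the drift term indeed vanishes.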