2.1 Copulas

In this section we summarize, without proof, the basic results necessary to understand the concept of copulas. We then present the most important properties of copulas needed for applications in finance. Throughout, we follow the notation of Nelsen (1999).


2.1.1 Definition

DEFINITION 2.1   A 2-dimensional copula is a function $ C: \, [0,1]^2 \to [0,1]$ with the following properties:
  1. For every $ u \in [0,1]$

    $\displaystyle C(0,u) = C(u,0) = 0 \, .$ (2.1)

  2. For every $ u \in [0,1]$

    $\displaystyle C(u,1) = u \quad \textrm{and} \quad C(1,u) = u \, .$ (2.2)

  3. For every $ (u_1,u_2), (v_1,v_2) \in [0,1]\times [0,1]$ with $ u_1 \le v_1$ and $ u_2 \le v_2$:

    $\displaystyle C(v_1,v_2) - C(v_1,u_2) - C(u_1,v_2) + C(u_1,u_2) \ge 0 \, .$ (2.3)

A function that fulfills property 1 is also said to be grounded. Property 3 is the two-dimensional analogue of a nondecreasing one-dimensional function. A function with this feature is therefore called 2-increasing.
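Definition 2.1 lends itself to a direct numerical check. The sketch below (grid size and tolerance are arbitrary choices) tests the three properties on a finite grid; such a test can rule a candidate function out but of course cannot prove that it is a copula:

```python
import itertools

def is_copula_on_grid(C, n=40, tol=1e-12):
    """Check the three properties of Definition 2.1 on a finite grid.
    A numerical sanity check on sample points, not a proof."""
    grid = [i / (n - 1) for i in range(n)]
    # Property 1: groundedness, C(0,u) = C(u,0) = 0.
    grounded = all(abs(C(0.0, u)) < tol and abs(C(u, 0.0)) < tol for u in grid)
    # Property 2: uniform margins, C(u,1) = C(1,u) = u.
    margins = all(abs(C(u, 1.0) - u) < tol and abs(C(1.0, u) - u) < tol
                  for u in grid)
    # Property 3: 2-increasing, inequality (2.3) for every grid rectangle.
    intervals = list(zip(grid[:-1], grid[1:]))
    increasing = all(
        C(v1, v2) - C(v1, u2) - C(u1, v2) + C(u1, u2) >= -tol
        for (u1, v1), (u2, v2) in itertools.product(intervals, repeat=2))
    return grounded and margins and increasing

print(is_copula_on_grid(lambda u, v: u * v))  # True
print(is_copula_on_grid(lambda u, v: u))      # False: C(u,0) = u violates property 1
```

The function `C(u,v) = uv` passing all three checks anticipates the product copula discussed below.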

The usage of the name ``copula'' for the function $ C$ is explained by the following theorem.


2.1.2 Sklar's Theorem

The distribution function of a random variable $ R$ is a function $ F$ that assigns to every $ r \in \overline{\mathbb{R}}$ the probability $ F(r) = \textrm{P}(R \le r)$. Similarly, the joint distribution function of two random variables $ R_1,R_2$ is a function $ H$ that assigns to every pair $ r_1,r_2 \in \overline{\mathbb{R}}$ the probability $ H(r_1,r_2) = \textrm{P}(R_1 \le r_1, R_2 \le r_2)$.

THEOREM 2.1 (Sklar's theorem)   Let $ H$ be a joint distribution function with margins $ F_1$ and $ F_2$. Then there exists a copula $ C$ with

$\displaystyle H(x_1,x_2) = C(F_1(x_1),F_2(x_2))$ (2.4)

for every $ x_1,x_2 \in \overline{\mathbb{R}}$. If $ F_1$ and $ F_2$ are continuous, then $ C$ is unique. Otherwise, $ C$ is uniquely determined on Range $ F_1 \, \times$ Range $ F_2$. On the other hand, if $ C$ is a copula and $ F_1$ and $ F_2$ are distribution functions, then the function $ H$ defined by (2.4) is a joint distribution function with margins $ F_1$ and $ F_2$.

It is shown in Nelsen (1999) that the margins $ F_1$ and $ F_2$ of $ H$ are given by $ F_1(x_1) \stackrel{\mathrm{def}}{=}H(x_1,+ \infty)$ and $ F_2(x_2) \stackrel{\mathrm{def}}{=}H(+ \infty, x_2)$, respectively, and that $ F_1$ and $ F_2$ are themselves distribution functions. With Sklar's Theorem, the use of the name ``copula'' becomes obvious. It was chosen by Sklar (1996) to describe ``a function that links a multidimensional distribution to its one-dimensional margins'' and first appeared in the mathematical literature in Sklar (1959).
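The converse part of Sklar's theorem can be tried out directly: pick any copula and any margins, define $ H$ by (2.4), and check that $ H$ recovers those margins. A minimal sketch, using $ C(u,v) = u\,v$ (the product copula of the next subsection) and exponential margins purely as illustrative choices; the names `C`, `F1`, `F2`, `H` mirror the notation of the theorem:

```python
import math

def C(u, v):
    # Any copula works here; the product copula is the simplest choice.
    return u * v

def F1(x):
    # Exponential margin with rate 1 (an arbitrary illustrative choice).
    return 1.0 - math.exp(-x) if x > 0 else 0.0

def F2(x):
    # Exponential margin with rate 2 (an arbitrary illustrative choice).
    return 1.0 - math.exp(-2.0 * x) if x > 0 else 0.0

def H(x1, x2):
    # Joint distribution function defined via (2.4).
    return C(F1(x1), F2(x2))

# H recovers its margins: H(x, +infinity) = F1(x) and H(+infinity, x) = F2(x).
big = 1e9
print(all(abs(H(x, big) - F1(x)) < 1e-9 and abs(H(big, x) - F2(x)) < 1e-9
          for x in [0.1, 0.5, 1.0, 3.0]))  # True
```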


2.1.3 Examples of Copulas

2.1.3.0.1 Product Copula

The structure of independence is especially important for applications.

DEFINITION 2.2   Two random variables $ R_1$ and $ R_2$ are independent if and only if the product of their distribution functions $ F_1$ and $ F_2$ equals their joint distribution function $ H$,

$\displaystyle H(r_1,r_2) = F_1(r_1) \cdot F_2(r_2) \quad \textrm{for all} \quad r_1,r_2 \in \overline{\mathbb{R}} \, .$ (2.5)

Thus, we obtain the independence copula $ C = \Pi$ by

$\displaystyle \Pi(u_1,\dots,u_n)=\prod_{i=1}^n u_i \; ,$

which becomes obvious from the following theorem:

THEOREM 2.2   Let $ R_1$ and $ R_2$ be random variables with continuous distribution functions $ F_1$ and $ F_2$ and joint distribution function $ H$. Then $ R_1$ and $ R_2$ are independent if and only if $ C_{R_1 R_2} = \Pi$.

From Sklar's Theorem we know that there exists a unique copula $ C$ with

$\displaystyle \textrm{P}(R_1 \le r_1, R_2 \le r_2) = H(r_1,r_2) = C(F_1(r_1),F_2(r_2)) \, .$ (2.6)

Independence can be seen using Equation (2.4) for the joint distribution function $ H$ and the definition of $ \Pi$,

$\displaystyle H(r_1,r_2) = C(F_1(r_1),F_2(r_2)) = F_1(r_1) \cdot F_2(r_2) \; .$ (2.7)
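Theorem 2.2 can be illustrated by simulation: for independent uniform random variables the margins are already uniform, so the empirical joint distribution function estimates the copula itself and should be close to $ \Pi$. A minimal sketch (sample size and evaluation point are arbitrary choices):

```python
import random

random.seed(3)

# Independent uniforms: the empirical joint distribution function is then
# an estimate of the copula of the pair.
n = 100_000
r1 = [random.random() for _ in range(n)]
r2 = [random.random() for _ in range(n)]

def emp_copula(u, v):
    # Fraction of sample points in [0,u] x [0,v].
    return sum(1 for a, b in zip(r1, r2) if a <= u and b <= v) / n

u, v = 0.4, 0.6
print(abs(emp_copula(u, v) - u * v) < 0.01)  # True (up to Monte Carlo error)
```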


2.1.3.0.2 Gaussian Copula

The second important copula that we want to investigate is the Gaussian or normal copula,

$\displaystyle C^{\rm Gauss}_{\rho}(u, v) \stackrel{\mathrm{def}}{=} \int_{- \infty}^{\Phi_1^{-1}(u)} \int_{- \infty}^{\Phi_2^{-1}(v)} f_\rho(r_1,r_2) \, d r_2 \, d r_1 \; ,$ (2.8)

see Embrechts, McNeil and Straumann (1999). In (2.8), $ f_\rho$ denotes the bivariate normal density function with correlation $ \rho $. The functions $ \Phi_1$, $ \Phi_2$ in (2.8) are the cumulative distribution functions of the corresponding one-dimensional normal margins, whose densities we denote by $ f_1$ and $ f_2$.

In the case of vanishing correlation, $ \rho=0$, the Gaussian copula becomes

$\displaystyle C^{\rm Gauss}_0(u, v) = \int_{- \infty}^{\Phi_1^{-1}(u)} f_1(r_1) \, d r_1 \; \int_{- \infty}^{\Phi_2^{-1}(v)} f_2(r_2) \, d r_2 = u \, v = \Pi (u,v) \; .$ (2.9)

Result (2.9) is a direct consequence of Theorem 2.2.

As $ \Phi_1(r_1), \Phi_2(r_2) \in [0,1]$, one can replace $ u,v$ in (2.8) by $ \Phi_1(r_1), \Phi_2(r_2)$. If one considers $ r_1, r_2$ in a probabilistic sense, i.e. $ r_1$ and $ r_2$ being values of two random variables $ R_1$ and $ R_2$, one obtains from (2.8)

$\displaystyle C^{\rm Gauss}_\rho(\Phi_1(r_1),\Phi_2(r_2)) = \textrm{P}(R_1 \le r_1, R_2 \le r_2) \; .$ (2.10)

In other words: $ C^{\rm Gauss}_\rho(\Phi_1(r_1),\Phi_2(r_2))$ is the bivariate normal cumulative distribution function.
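The double integral in (2.8) can be evaluated by brute-force numerical integration. The sketch below builds $ \Phi$ from `math.erf`, inverts it by bisection, and applies a midpoint rule; the grid size `n` and truncation point `cut` are accuracy choices, not part of the definition. For $ \rho = 0$ the result reduces to the product copula, as in (2.9):

```python
import math

def Phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p):
    # Standard normal quantile by bisection (accurate enough here).
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def f_rho(r1, r2, rho):
    # Bivariate standard normal density with correlation rho.
    det = 1.0 - rho * rho
    q = (r1 * r1 - 2.0 * rho * r1 * r2 + r2 * r2) / det
    return math.exp(-0.5 * q) / (2.0 * math.pi * math.sqrt(det))

def gauss_copula(u, v, rho, n=300, cut=8.0):
    # Midpoint-rule approximation of the double integral in (2.8),
    # truncating both lower limits at -cut.
    a, b = Phi_inv(u), Phi_inv(v)
    h1, h2 = (a + cut) / n, (b + cut) / n
    total = 0.0
    for i in range(n):
        r1 = -cut + (i + 0.5) * h1
        for j in range(n):
            r2 = -cut + (j + 0.5) * h2
            total += f_rho(r1, r2, rho)
    return total * h1 * h2

# For rho = 0 the Gaussian copula reduces to the product copula:
print(abs(gauss_copula(0.3, 0.7, 0.0) - 0.3 * 0.7) < 1e-3)  # True
```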


2.1.3.0.3 Gumbel-Hougaard Copula

Next, we consider the Gumbel-Hougaard family of copulas, see Hutchinson (1990). A discussion in Nelsen (1999) shows that $ C_{\theta}$ is well suited to describing bivariate extreme value distributions. It is given by the function

$\displaystyle C_{\theta}(u, v) \stackrel{\mathrm{def}}{=}\exp \left\{ - \left[ (-\ln u)^{\theta} + (-\ln v)^{\theta} \right]^{1 / \theta} \right\} \; .$ (2.11)

The parameter $ \theta $ may take all values in the interval $ [1,\infty)$.

For $ \theta = 1$, expression (2.11) reduces to the product copula, i.e. $ C_1(u,v) = \Pi(u,v) = u \, v$. For $ \theta \to \infty$ one finds for the Gumbel-Hougaard copula

$\displaystyle C_{\theta}(u,v) \stackrel{\theta \to \infty}{\longrightarrow} \min(u,v) \stackrel{\mathrm{def}}{=} M(u,v) \, .$

It can be shown that $ M$ is also a copula. Furthermore, for any given copula $ C$ one has $ C(u,v) \le M(u,v)$, and $ M$ is called the Fréchet-Hoeffding upper bound. The two-dimensional function $ W(u,v) \stackrel{\mathrm{def}}{=}\max(u+v-1,0)$ defines a copula with $ W(u,v) \le C(u,v)$ for any other copula $ C$. $ W$ is called the Fréchet-Hoeffding lower bound.
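Both limiting cases and the Fréchet-Hoeffding bounds can be verified numerically. The sketch below checks $ W \le C_{\theta} \le M$ on a grid for one arbitrary $ \theta$, that $ \theta = 1$ recovers $ \Pi$, and that a large $ \theta$ is already close to $ M$ (the grid size, $ \theta$ values, and tolerances are illustrative choices):

```python
import math

def gumbel(u, v, theta):
    # Gumbel-Hougaard copula (2.11); log(0) needs a separate branch.
    if u == 0.0 or v == 0.0:
        return 0.0
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-s ** (1.0 / theta))

def W(u, v):  # Frechet-Hoeffding lower bound
    return max(u + v - 1.0, 0.0)

def M(u, v):  # Frechet-Hoeffding upper bound
    return min(u, v)

grid = [i / 20 for i in range(21)]
print(all(W(u, v) - 1e-12 <= gumbel(u, v, 2.5) <= M(u, v) + 1e-12
          for u in grid for v in grid))                     # True
print(abs(gumbel(0.3, 0.7, 1.0) - 0.3 * 0.7) < 1e-9)        # True: theta = 1 gives Pi
print(abs(gumbel(0.3, 0.7, 50.0) - min(0.3, 0.7)) < 1e-3)   # True: large theta near M
```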


2.1.4 Further Important Properties of Copulas

In this section we collect further properties of copulas. The next theorem establishes the continuity of copulas via a Lipschitz condition on $ [0,1] \times [0,1]$:

THEOREM 2.3   Let $ C$ be a copula. Then for every $ u_1, u_2, v_1, v_2 \in [0,1]$:

$\displaystyle \vert C(u_2, v_2) - C(u_1,v_1)\vert \le \vert u_2 - u_1\vert + \vert v_2 - v_1\vert \, .$ (2.12)
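As a quick numerical illustration of (2.12), one can probe the inequality at random point pairs for a concrete copula; here the Gumbel-Hougaard copula with an arbitrarily chosen $ \theta = 2$ serves as the test case:

```python
import math
import random

random.seed(1)

def gumbel(u, v, theta=2.0):
    # Gumbel-Hougaard copula (2.11), used as a test case.
    if u == 0.0 or v == 0.0:
        return 0.0
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-s ** (1.0 / theta))

# Check the Lipschitz inequality (2.12) at random point pairs.
ok = True
for _ in range(10_000):
    u1, u2, v1, v2 = (random.random() for _ in range(4))
    lhs = abs(gumbel(u2, v2) - gumbel(u1, v1))
    ok = ok and lhs <= abs(u2 - u1) + abs(v2 - v1) + 1e-12
print(ok)  # True
```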

From (2.12) it follows that every copula $ C$ is uniformly continuous on its domain. A further important property of copulas concerns the partial derivatives of a copula with respect to its variables:

THEOREM 2.4   Let $ C$ be a copula. For every $ u \in [0,1]$, the partial derivative $ \partial \, C/ \partial \, v$ exists for almost every $ v \in [0,1]$. For such $ u$ and $ v$ one has

$\displaystyle 0 \le \frac{\partial}{\partial \, v} C(u,v) \le 1 \, .$ (2.13)

The analogous statement is true for the partial derivative $ \partial \, C/ \partial \, u$.
In addition, the functions $ u \mapsto C_v(u) \stackrel{\mathrm{def}}{=}\partial \, C(u,v) / \partial \, v$ and $ v \mapsto C_u(v) \stackrel{\mathrm{def}}{=}\partial \, C(u,v) / \partial \, u$ are defined and nondecreasing almost everywhere on $ [0,1]$.

To give an example of this theorem, we consider the partial derivative of the Gumbel-Hougaard copula (2.11) with respect to $ u$,

$\displaystyle C_{\theta, u} (v) = \frac{\partial}{\partial \, u} C_{\theta}(u, v) = \exp \left\{ - \left[ (-\ln u)^{\theta} + (-\ln v)^{\theta} \right]^{1 / \theta} \right\} \left[ (-\ln u)^{\theta} + (-\ln v)^{\theta} \right]^{- \frac{\theta -1}{\theta}} \; \frac{(- \ln u)^{\theta -1}}{u} \, .$ (2.14)

Note that for $ u \in (0,1)$ and for all $ \theta > 1$, $ C_{\theta,u}$ is a strictly increasing function of $ v$. Therefore the inverse function $ C_{\theta,u}^{-1}$ is well defined. However, as one might guess from (2.14), $ C_{\theta,u}^{-1}$ cannot be calculated analytically, so the inversion has to be carried out with a numerical root-finding algorithm. As $ C_{\theta}$ is symmetric in $ u$ and $ v$, the partial derivative of $ C_{\theta}$ with respect to $ v$ shows identical behaviour for the same set of parameters.
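Such a numerical inversion is easy to sketch: since $ C_{\theta,u}$ is strictly increasing in $ v$, plain bisection suffices. The parameter values below are arbitrary; the round trip $ v \to C_{\theta,u}(v) \to v$ checks the implementation of (2.14):

```python
import math

def cond_cdf(v, u, theta):
    # C_{theta,u}(v), the partial derivative (2.14).
    a = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return (math.exp(-a ** (1.0 / theta))
            * a ** ((1.0 - theta) / theta)
            * (-math.log(u)) ** (theta - 1.0) / u)

def cond_inverse(p, u, theta):
    # Invert v -> C_{theta,u}(v) by bisection; valid because the map is
    # strictly increasing in v on (0,1).
    lo, hi = 1e-12, 1.0 - 1e-12
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if cond_cdf(mid, u, theta) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip v -> p -> v:
theta, u, v = 2.0, 0.4, 0.7
p = cond_cdf(v, u, theta)
print(abs(cond_inverse(p, u, theta) - v) < 1e-8)  # True
```

Inverting the conditional distribution in this way is also the standard route to simulating pairs $(u,v)$ from the copula: draw $ u$ and $ p$ independently uniform on $(0,1)$ and set $ v = C_{\theta,u}^{-1}(p)$.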

We will end this section with a statement on the behaviour of copulas under strictly monotone transformations of random variables.

THEOREM 2.5   Let $ R_1$ and $ R_2$ be random variables with continuous distribution functions and with copula $ C_{R_1 R_2}$. If $ \alpha_1$ and $ \alpha_2$ are strictly increasing functions on Range $ R_1$ and Range $ R_2$, then $ C_{\alpha_1(R_1) \, \alpha_2(R_2)} = C_{R_1 R_2}$. In other words: $ C_{R_1 R_2}$ is invariant under strictly increasing transformations of $ R_1$ and $ R_2$.
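Theorem 2.5 has a simple empirical counterpart: strictly increasing transformations leave the ranks of a sample, and with them the empirical copula, unchanged. A minimal sketch with a simulated dependent pair and the arbitrary transformations $ \alpha_1(x) = e^x$ and $ \alpha_2(x) = x^3$:

```python
import math
import random

random.seed(2)

n = 500
r1 = [random.gauss(0.0, 1.0) for _ in range(n)]
r2 = [x + random.gauss(0.0, 1.0) for x in r1]   # a dependent pair

def ranks(xs):
    # Rank of each observation (0 = smallest); ties have probability 0 here.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    rk = [0] * len(xs)
    for pos, i in enumerate(order):
        rk[i] = pos
    return rk

# Strictly increasing transformations leave all ranks -- and with them the
# empirical copula -- unchanged:
t1 = [math.exp(x) for x in r1]
t2 = [x ** 3 for x in r2]
print(ranks(t1) == ranks(r1) and ranks(t2) == ranks(r2))  # True
```

This invariance is also why rank-based dependence measures such as Kendall's tau depend on the joint distribution only through its copula.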