11.1 Certain Definitions

First we need to take a closer look at stochastic processes, the basic object of time series analysis.

Definition 11.1 (stochastic process)  
A stochastic process $ X_t$, $ t \in \mathbb{Z}$, is a family of random variables defined on a probability space $ (\Omega, {\cal F}, \P)$.

At a specific time point $ t$, $ X_t$ is a random variable with a specific density function. Given a specific $ \omega \in \Omega$, $ X(\omega) = \{X_t(\omega), t \in \mathbb{Z}\}$ is a realization or a path of the process.

Definition 11.2 (cdf of a stochastic process)  
The joint cumulative distribution function (cdf) of a stochastic process $ X_t$ is defined as

$\displaystyle F_{t_1,\ldots,t_n}(x_1,\ldots,x_n)= \P(X_{t_1}\le x_1,\ldots,X_{t_n}\le x_n).$

The stochastic process $ X_t$ is uniquely determined by the system of its finite-dimensional distribution functions: if for any $ t_1,\ldots,t_n \in \mathbb{Z}$ the joint distribution function $ F_{t_1,\ldots,t_n}(x_1,\ldots,x_n)$ is known, then the underlying stochastic process is uniquely determined.

Definition 11.3 (conditional cdf)  
The conditional cdf of a stochastic process $ X_t$ for any $ t_1,\ldots,t_n \in \mathbb{Z}$ with $ t_1 < t_2 < \ldots < t_n$ is defined as

$\displaystyle F_{t_n \mid t_{n-1},\ldots,t_1}(x_n \mid x_{n-1},\ldots,x_1) =
\P(X_{t_n}\le x_n \mid X_{t_{n-1}} = x_{n-1},\ldots,X_{t_1} =
x_1).$

Next we define the moment functions of a real-valued stochastic process. Here we assume that the moments exist; if this is not the case, the corresponding function is not defined.

Definition 11.4 (Mean function)  
The mean function $ \mu_t$ of a stochastic process $ X_t$ is defined as

$\displaystyle \mu_t = {\mathop{\text{\rm\sf E}}}[X_t] = \int_{\mathbb{R}} x dF_t(x).$ (11.1)

In general $ \mu_t$ depends on time $ t$, as is the case, for example, for processes with a seasonal or periodic structure or processes with a deterministic trend.

Definition 11.5 (Autocovariance function)  
The autocovariance function of a stochastic process $ X$ is defined as
$\displaystyle \gamma(t,\tau) = {\mathop{\text{\rm\sf E}}}[(X_t-\mu_t)(X_{t-\tau}-\mu_{t-\tau})] = \int_{\mathbb{R}^2} (x_1-\mu_t)(x_2-\mu_{t-\tau})\, dF_{t,t-\tau}(x_1,x_2)$ (11.2)

for $ \tau\in \mathbb{Z}$.

The autocovariance function is symmetric, i.e., $ \gamma(t,\tau) = \gamma(t-\tau,-\tau)$. For the special case $ \tau=0$ we obtain the variance function $ \gamma(t,0) = \mathop{\text{\rm Var}}(X_t)$. In general $ \gamma(t,\tau)$ depends on $ t$ as well as on $ \tau$. In the following we define the important concept of stationarity, which in many cases simplifies the moment functions.

Definition 11.6 (Stationary)  
A stochastic process $ X_t$ is covariance stationary if
  1. $ \mu_t = \mu$ for all $ t$, and
  2. $ \gamma(t, \tau) = \gamma_\tau$ for all $ t$.

A stochastic process $ X_t$ is strictly stationary if for any $ t_1,\ldots,t_n$, all $ n \in \mathbb{N}$ and all $ s \in \mathbb{Z}$ it holds that

$\displaystyle F_{t_1,\ldots,t_n}(x_1,\ldots,x_n) = F_{t_1+s,\ldots,t_n+s}(x_1,\ldots,x_n).$

For covariance stationarity the term weak stationarity is often used. Note, however, that a stochastic process can be strictly stationary without being covariance stationary, namely when the variance (or covariance) does not exist. Conversely, if the first two moment functions exist, then covariance stationarity follows from strict stationarity.

Definition 11.7 (Autocorrelation function (ACF))  
The autocorrelation function $ \rho$ of a covariance stationary stochastic process is defined as

$\displaystyle \rho_\tau = \frac{\gamma_\tau}{\gamma_0}.$

The ACF is normalized to the interval $ [-1,1]$ and thus simplifies the interpretation of the autocovariance structure of various stochastic processes. Since the process is required to be covariance stationary, the ACF depends on only one parameter, the lag $ \tau$. Often the ACF is plotted as a function of $ \tau$, the so-called correlogram. This is an important graphical instrument for illustrating the linear dependency structure of the process.
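As a minimal sketch (not part of the original text), the sample ACF underlying a correlogram can be computed as follows; numpy is assumed to be available and the function name `sample_acf` is our own:

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelation for lags 0..max_lag (the data behind a correlogram)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    gamma0 = np.dot(x, x) / len(x)      # sample autocovariance at lag 0
    return np.array([np.dot(x[tau:], x[:len(x) - tau]) / len(x) / gamma0
                     for tau in range(max_lag + 1)])

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)           # i.i.d. noise: population ACF is 0 for tau > 0
acf = sample_acf(x, 10)                 # acf[0] is exactly 1 by construction
```

For the i.i.d. sample above, all estimated autocorrelations at lags $\tau > 0$ should be close to zero, which is what a correlogram of such a series displays.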

Next we define two important stochastic processes which form the foundation for further modelling.

Definition 11.8 (White noise (WN))  
The stochastic process $ X_t$ is white noise if the following hold:
  1. $ \mu_t = 0$, and
  2. $ \gamma_\tau = \left\{ \begin{array}{ll} \sigma^2 & \text{\rm when} \:\: \tau = 0 \\ 0 & \text{\rm when} \:\: \tau \ne 0. \end{array} \right. $

SFEtimewr.xpl

If $ X_t$ is a process of i.i.d. random variables with expectation 0 and finite variance, then it is white noise. This special case is called independent white noise. In contrast, general white noise may exhibit dependence in its third or higher moments, in which case it is not independent.
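To illustrate the distinction numerically, consider the construction $X_t = Z_t Z_{t-1}$ with i.i.d. standard normal $Z_t$ (our own example, not from the text, and a sketch assuming numpy): it has mean zero and zero autocovariance, so it is white noise, yet $X_t$ and $X_{t-1}$ are dependent through the common factor $Z_{t-1}$, which shows up in the autocorrelation of $X_t^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
z = rng.standard_normal(n + 1)
x = z[1:] * z[:-1]           # white noise, but not independent white noise

def lag1_corr(y):
    """Sample lag-1 autocorrelation."""
    y = y - y.mean()
    return np.dot(y[1:], y[:-1]) / np.dot(y, y)

corr_x = lag1_corr(x)        # close to 0: X_t is (uncorrelated) white noise
corr_x2 = lag1_corr(x**2)    # clearly positive: dependence in higher moments
```

The theoretical lag-1 autocorrelation of $X_t^2$ in this construction is 0.25, so the second estimate is far from zero while the first is not.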

Definition 11.9 (Random Walk)  
The stochastic process $ X_t$ follows a random walk, if it can be represented as

$\displaystyle X_t = c + X_{t-1} + \varepsilon_t
$

with a constant $ c$ and white noise $ \varepsilon_t$.

If $ c$ is not zero, then the variables $ Z_t = X_t - X_{t-1} = c + \varepsilon_t$ have a non-zero mean. We then speak of a random walk with drift (see Section 4.1). In contrast to Section 4.3 we do not require here that the variables are independent. The random walk defined here is the limiting case of the AR(1) process introduced in Example 10.1 as $ \alpha \rightarrow 1$. If we require, as in Section 4.3, that $ \varepsilon_t$ is independent white noise, then we call $ X_t$ a random walk with independent increments. Historically the random walk plays a special role, since at the beginning of the last century it was the first stochastic model used to represent the development of stock prices. Even today the random walk is often assumed as an underlying hypothesis, although in its strongest form, with independent increments, it is usually rejected in empirical applications.

In order to determine the moment functions of a random walk, we simply assume that the constant $ c$ and the initial value $ X_0$ are set to zero. Recursive substitution then yields the representation

$\displaystyle X_t = \varepsilon_t + \varepsilon_{t-1} + \ldots + \varepsilon_1.
$

The mean function is simply

$\displaystyle \mu_t = {\mathop{\text{\rm\sf E}}}[X_t] = 0,$ (11.3)

and for the variance function, since the $ \varepsilon_t$ are uncorrelated, we obtain

$\displaystyle {\mathop{\text{\rm Var}}}(X_t) = {\mathop{\text{\rm Var}}}\Big(\sum_{i=1}^t \varepsilon_i\Big) = \sum_{i=1}^t {\mathop{\text{\rm Var}}}(\varepsilon_i) = t\sigma^2.$ (11.4)

The variance of the random walk increases linearly with time. For the autocovariance function the following holds for $ \tau < t$:

$\displaystyle \gamma(t,\tau) = \mathop{\text{\rm Cov}}(X_t,X_{t-\tau}) = \mathop{\text{\rm Cov}}\Big(\sum_{i=1}^t \varepsilon_i, \sum_{j=1}^{t-\tau}\varepsilon_j\Big) = \sum_{j=1}^{t-\tau}\sum_{i=1}^t \mathop{\text{\rm Cov}}(\varepsilon_i,\varepsilon_j) = \sum_{j=1}^{t-\tau} \sigma^2 = (t-\tau) \sigma^2.$

For $ \tau < t$ the autocovariance is thus strictly positive. Since the autocovariance function depends on time $ t$ (and not only on the lag $ \tau$), the random walk is not covariance stationary. For the autocorrelation function $ \rho$ we obtain

$\displaystyle \rho(t,\tau) = \frac{(t-\tau)\sigma^2}{\sqrt{t\sigma^2(t-\tau)\sigma^2}}
= \frac{(t-\tau)}{\sqrt{t(t-\tau)}} = \sqrt{1 - \frac{\tau}{t}} .
$

Again $ \rho$ depends on $ t$ as well as on $ \tau$, thus the random walk is not covariance stationary.
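These two results can be checked by Monte Carlo simulation over many independent paths; the following is a sketch (path count, $t$ and $\tau$ are illustrative choices, numpy assumed) verifying $\mathop{\text{Var}}(X_t) = t\sigma^2$ and $\rho(t,\tau) = \sqrt{1 - \tau/t}$ for a driftless walk with $\sigma = 1$:

```python
import numpy as np

rng = np.random.default_rng(2)
paths, t, tau = 20_000, 50, 20
eps = rng.standard_normal((paths, t))
x = np.cumsum(eps, axis=1)                    # each row is one random-walk path

var_t = x[:, t - 1].var()                     # estimate of Var(X_t) = t = 50
rho = np.corrcoef(x[:, t - 1], x[:, t - tau - 1])[0, 1]
theory = np.sqrt(1 - tau / t)                 # sqrt(1 - 20/50)
```

Note that the expectation here runs across paths at fixed times, not along a single path: that is exactly why the process is not covariance stationary.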

As a further illustration we consider a simple but important stochastic process.

Example 11.1 (AR(1) Process)  
The stochastic process $ X_t$ follows an autoregressive process of first order, written AR(1) process, if

$\displaystyle X_t = c + \alpha X_{t-1} + \varepsilon_t
$

with a constant $ c$, a parameter $ \alpha$ satisfying $ \vert\alpha\vert < 1$, and white noise $ \varepsilon_t$. Through iterative substitution the process $ X_t$ can also be written as


$\displaystyle X_t = c(1+\alpha+\alpha^2+\ldots+\alpha^{k-1}) + \alpha^k X_{t-k} + \varepsilon_t + \alpha\varepsilon_{t-1} +\ldots+\alpha^{k-1}\varepsilon_{t-k+1}$

$\displaystyle \phantom{X_t} = c \Big(\sum_{i=0}^{k-1} \alpha^i\Big) + \alpha^k X_{t-k} + \sum_{i=0}^{k-1} \alpha^i \varepsilon_{t-i} = c \ \frac{1 - \alpha^k}{1 - \alpha} + \alpha^k X_{t-k} + \sum_{i=0}^{k-1} \alpha^i \varepsilon_{t-i}.$

If $ X_{t-k}$ is given for a particular $ k$ (for example, as the initial value of the process), the characteristics of the process obviously depend on this value. This influence disappears, however, over time, since we have assumed $ \vert\alpha\vert < 1$ and thus $ \alpha^k \rightarrow 0$ for $ k\rightarrow \infty$. For $ k\rightarrow \infty$ the sum converges in mean square, so we can write the process $ X_t$ as

$\displaystyle X_t = c \frac{1}{1-\alpha} + \sum_{i=0}^{\infty}\alpha^i \varepsilon_{t-i}.
$

For the moment functions we then have

$\displaystyle \mu_t = c \frac{1}{1-\alpha},
$

and

$\displaystyle \gamma_\tau = \frac{\sigma^2}{1-\alpha^2} \, \alpha^{\vert\tau\vert}.
$

The ACF is thus simply $ \rho_{\tau} = \alpha^{\vert\tau\vert}$. For positive $ \alpha$ this function is strictly positive; for negative $ \alpha$ it alternates around zero. In either case it converges to zero, but for $ \alpha=0.5$, for example, convergence is very fast, while for $ \alpha=0.99$ it is quite slow. SFEacfar1.xpl
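The geometric decay of the ACF is easy to confirm by simulation; the following sketch (with $c = 0$ and illustrative parameter values, numpy assumed) compares the sample ACF of a simulated AR(1) path with the theoretical values $\alpha^{\vert\tau\vert}$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, alpha = 100_000, 0.5
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = alpha * x[t - 1] + eps[t]    # AR(1) recursion with c = 0

xc = x - x.mean()
gamma0 = np.dot(xc, xc) / n             # sample variance (lag 0)
acf = [np.dot(xc[tau:], xc[:n - tau]) / n / gamma0 for tau in range(5)]
# theoretical values: 1, 0.5, 0.25, 0.125, 0.0625
```

With $\alpha = 0.99$ instead, the same code produces an ACF that decays far more slowly, matching the remark above.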

Definition 11.10 (Markov Process)  
A stochastic process has the Markov property if for all $ t \in \mathbb{Z}$ and $ k \ge 1$

$\displaystyle F_{t\vert t-1,\ldots,t-k}(x_t\vert x_{t-1},\ldots,x_{t-k}) = F_{t\vert t-1}(x_t\vert x_{t-1}).
$

In other words, the conditional distribution of a Markov process at a specific point in time is completely determined by the state of the process at the previous date. One can also define Markov processes of higher order, for which the conditional distribution depends only on a finite number of past values. Two examples of Markov processes of first order are the above-mentioned random walk with independent increments and the AR(1) process with independent white noise.

Definition 11.11 (Martingale)  
The stochastic process $ X_t$ is a martingale if the following holds

$\displaystyle {\mathop{\text{\rm\sf E}}}[X_t \vert X_{t-1}=x_{t-1},\ldots,X_{t-k}=x_{t-k}] = x_{t-1}
$

for every $ k>0$.

The martingale is also a frequently used instrument for describing prices in financial markets. Note that for a martingale only a statement about the conditional expectation is made, while for a Markov process statements about the entire conditional distribution are made. An example of a martingale is the random walk without drift. The AR(1) process with $ 0 < \alpha < 1$ is not a martingale, since $ {\mathop{\text{\rm\sf E}}}[X_t \vert x_{t-1},\ldots,x_{t-k}] = c + \alpha x_{t-1}$.

Definition 11.12 (fair game)  
The stochastic process $ X_t$ is a fair game if the following holds

$\displaystyle {\mathop{\text{\rm\sf E}}}[X_t \vert X_{t-1}=x_{t-1},\ldots,X_{t-k}=x_{t-k}] = 0
$

for every $ k>0$.

Sometimes a fair game is also called a martingale difference: if $ X_t$ is a martingale, then $ Z_t = X_t - X_{t-1}$ is a fair game.

Definition 11.13 (Lag-Operator)  
The operator $ L$ moves the process $ X_t$ back by one unit of time, i.e., $ L X_t = X_{t-1}$ and $ L^k X_t = X_{t-k}$. In addition we define the difference operator $ \Delta$ as $ \Delta = 1 - L$, i.e., $ \Delta X_t = X_t - X_{t-1}$, and $ \Delta^k = (1 - L)^k$.
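On a finite sample the lag and difference operators can be sketched as follows (our own helper functions, not from the text; the first $k$ values are undefined and returned as NaN, numpy assumed):

```python
import numpy as np

def lag(x, k=1):
    """L^k X_t = X_{t-k}; undefined leading entries become NaN."""
    x = np.asarray(x, dtype=float)
    out = np.full_like(x, np.nan)
    out[k:] = x[:len(x) - k]
    return out

def diff(x, k=1):
    """Delta^k = (1 - L)^k, applied as k successive first differences."""
    x = np.asarray(x, dtype=float)
    for _ in range(k):
        x = x - lag(x)
    return x

p = np.array([100.0, 101.0, 103.0, 102.0])
d1 = diff(p)    # [nan, 1.0, 2.0, -1.0]
```

Applying `diff` to a price series yields the first differences $\Delta P_t = P_t - P_{t-1}$, which connects directly to the return definitions that follow.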

After these mathematical definitions we arrive at the more econometric definitions, and in particular, at the term return. We start with a time series of prices $ P_1,\ldots,P_n$ and are interested in calculating the return between two periods.

Definition 11.14 (simple return)  
The simple return $ R_t$ is defined as

$\displaystyle R_t = \frac{P_t - P_{t-1}}{P_{t-1}}.
$

If the average return $ R_t(k)$ is to be calculated over $ k$ periods, then the geometric mean of the simple gross returns is taken, i.e.,

$\displaystyle R_t(k) = \left(\prod_{j=0}^{k-1}(1+R_{t-j})\right)^{1/k}-1.
$

In general the geometric mean is not equal to the arithmetic mean
$ k^{-1}\sum_{j=0}^{k-1}R_{t-j}$.
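A short numerical sketch of these formulas (the price series is invented for illustration, numpy assumed):

```python
import numpy as np

p = np.array([100.0, 110.0, 99.0, 108.9])
r_simple = p[1:] / p[:-1] - 1                 # one-period simple returns
gross = 1 + r_simple                          # simple gross returns
k = len(gross)
r_geom = gross.prod() ** (1 / k) - 1          # geometric average return R_t(k)
r_arith = r_simple.mean()                     # arithmetic mean, differs in general
```

Here the simple returns are $+10\%$, $-10\%$, $+10\%$; the arithmetic mean is $0.1/3 \approx 3.33\%$, while the geometric average is smaller, as the AM-GM inequality dictates.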

Definition 11.15 (log return)  
The log return $ r_t$ is defined as

$\displaystyle r_t = \ln \frac{P_t}{P_{t-1}} = \ln ( 1+R_t).
$

SFEContDiscRet.xpl

The log return is defined for the case of continuous compounding. For the average return over several periods we have

$\displaystyle r_t(k) = \ln\{1+R_t(k)\} = \frac{1}{k} \ln \prod_{j=0}^{k-1}(1+R_{t-j}) = \frac{1}{k} \sum_{j=0}^{k-1}\ln (1+R_{t-j}) = \frac{1}{k} \sum_{j=0}^{k-1} r_{t-j},$

i.e., for log returns the arithmetic average return is applied.
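This additivity of log returns can be verified directly (same invented price series as above, numpy assumed): the $k$-period average log return equals the arithmetic mean of the one-period log returns, since the logarithms telescope.

```python
import numpy as np

p = np.array([100.0, 110.0, 99.0, 108.9])
r_log = np.log(p[1:] / p[:-1])                # one-period log returns
k = len(r_log)
r_k = np.log(p[-1] / p[0]) / k                # average log return over k periods
# r_k equals r_log.mean() because the sum of logs telescopes to log(p[-1]/p[0])
```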

For small price changes the difference between the simple return and the log return is negligible. By Taylor expansion it follows that

$\displaystyle \ln (1+x) = \ln 1 + \frac{d \ln(1+x)}{dx}\bigg\vert_{x=0} x + \frac{1}{2!}\,\frac{d^2 \ln(1+x)}{dx^2}\bigg\vert_{x=0}\, x^2 + \ldots = x - \frac{x^2}{2} + \frac{x^3}{3} - \ldots.$

For $ x$ close to zero a first order approximation is sufficient, i.e., $ \ln (1+x) \approx x$. As a rule of thumb, for returns under 10% it does not matter much whether simple or log returns are used. This is above all the case when studying financial time series of high frequency, as, for example, daily values.
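The size of the approximation error can be tabulated for typical return magnitudes (a sketch with illustrative values, numpy assumed); by the expansion above it grows roughly like $x^2/2$:

```python
import numpy as np

R = np.array([0.001, 0.01, 0.05, 0.10, 0.30])   # simple returns
r = np.log(1 + R)                               # corresponding log returns
abs_err = np.abs(R - r)                         # grows roughly like R**2 / 2
```

For a 10% return the error is below half a percentage point, while for a 30% return it is already several percentage points, which is why the rule of thumb is restricted to small returns.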