4.5 ARIMA Model Building

We have determined the population properties of the wide class of $ ARIMA$ models but, in practice, we have a time series and we want to infer which $ ARIMA$ model could have generated it. The selection of the appropriate $ ARIMA$ model for the data is achieved by an iterative procedure based on three steps (Box, Jenkins and Reinsel; 1994): identification of a candidate model, estimation of its parameters and diagnostic checking of its adequacy.

In this section we will explain this procedure in detail illustrating each step with an example.


4.5.1 Inference for the Moments of Stationary Processes

Since a stationary $ ARMA$ process is characterized in terms of the moments of its distribution, namely its mean, ACF and PACF, it is necessary to estimate them from the available data in order to make inference about the underlying process.


4.5.1.0.1 Mean.

A consistent estimator of the mean of the process, $ \mu$, is the sample mean of the time series $ \overline{y}$:

$\displaystyle \overline{y} = \frac{1}{T}\displaystyle \sum_{t=1}^{T} y_t$ (4.39)

whose asymptotic variance can be approximated by $ \frac{1}{T}$ times the sample variance $ S^2_y$.

Sometimes it is useful to check whether the mean of a process is zero or not, that is, to test $ H_0\!\!: \mu \,=\, 0$ against $ H_a\!\!:\mu\,
\neq\, 0$. Under $ H_0$ the test statistic $ \sqrt{T}\frac{\bar{y}}{S_y}$ follows approximately a normal distribution.
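For illustration, a minimal Python sketch of this zero-mean test (an illustration only, not one of the quantlets used later in this section; the series y is assumed to be a one-dimensional array):

import numpy as np
from scipy.stats import norm

def mean_zero_test(y):
    # Test H0: mu = 0 with the statistic sqrt(T)*ybar/S_y, asymptotically N(0,1)
    y = np.asarray(y, dtype=float)
    T = y.size
    stat = np.sqrt(T) * y.mean() / y.std(ddof=1)   # S_y is the sample standard deviation
    pval = 2 * norm.sf(abs(stat))                  # two-sided p-value
    return stat, pval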


4.5.1.0.2 Autocorrelation Function.

A consistent estimator of the ACF is the sample autocorrelation function. It is defined as:

$\displaystyle r_{j}= \frac{\displaystyle\sum_{t=j+1}^{T} (y_t -\bar{y})(y_{t-j} - \bar{y})}{\displaystyle\sum_{t=1}^{T} (y_t - \bar{y})^2} \qquad \forall j$

In order to identify the underlying process, it is useful to check whether these coefficients are statistically different from zero or, more specifically, whether $ y_t $ is white noise. Under the assumption that $ y_t \sim WN(0, \sigma^2)$, the distribution of these coefficients in large samples can be approximated by:

$\displaystyle \sqrt{T}r_j \stackrel{a}{\sim} N(0, 1)$ (4.40)

Then a usual test of individual significance can be applied, i.e., $ H_0\!\!: \rho_j = 0$ against $ H_a\!\!: \rho_j \neq 0$ for any $ j = 1, 2,
\dots$. The null hypothesis $ H_0$ would be rejected at the 5% level of significance if:

$\displaystyle \vert r_j\vert > 1.96/\sqrt{T}$ (4.41)

Usually, the correlogram plots the ACF jointly with these two standard error bands around zero, approximated by $ \pm\, 2/\sqrt{T} $, which allow us to carry out this significance test by means of a simple graphical method.

We are also interested in whether a set of $ M$ autocorrelations are jointly zero or not, that is, in testing $ H_0\!\!: \rho_1 = \rho_2 = \dots = \rho_M = 0$. The most usual test statistic is the Ljung-Box statistic:

$\displaystyle Q_{LB} = T (T+2) \sum_{j=1}^M \frac{r_j^2}{T-j}$ (4.42)

that follows asymptotically a $ \chi^2_{(M)}$ distribution under $ H_0$.
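The sample ACF, the significance bands (4.41) and the Ljung-Box statistic (4.42) are straightforward to compute directly. The following Python sketch is illustrative only (y is assumed to be a one-dimensional array):

import numpy as np

def sample_acf(y, M):
    # r_j = sum_{t=j+1}^T (y_t - ybar)(y_{t-j} - ybar) / sum_t (y_t - ybar)^2, j = 1,...,M
    y = np.asarray(y, dtype=float)
    yc = y - y.mean()
    denom = np.sum(yc ** 2)
    return np.array([np.sum(yc[j:] * yc[:-j]) / denom for j in range(1, M + 1)])

def ljung_box(y, M):
    # Q_LB = T (T+2) sum_{j=1}^M r_j^2 / (T-j)
    T = len(y)
    r = sample_acf(y, M)
    return T * (T + 2) * np.sum(r ** 2 / (T - np.arange(1, M + 1)))

An individual coefficient $ r_j$ is declared significant at the 5% level whenever $ \vert r_j\vert > 1.96/\sqrt{T}$, that is, whenever it falls outside the bands plotted in the correlogram.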


4.5.1.0.3 Partial Autocorrelation Function.

The partial autocorrelation function is estimated by the OLS coefficient $ \hat{\phi}_{ii}$ from expression (4.11), which is known as the sample PACF.

Under the assumption that $ y_t \sim WN(0, \sigma^2)$, the distribution of the sample coefficients $ \hat{\phi}_{ii}$ in large samples is identical to that of the sample ACF (4.40). In consequence, the rule for rejecting the null hypothesis of individual non-significance (4.41) also applies to the PACF. The bar plot of the sample PACF is called the sample partial correlogram and usually includes the two standard error bands $ \pm 2/\sqrt{T}$ to assess individual significance.
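Expression (4.11) is not reproduced here, but the idea of estimating $ \hat{\phi}_{ii}$ as the last OLS coefficient of an autoregression of order $ i$ can be sketched in Python as follows (an illustration under that interpretation, not the quantlet code):

import numpy as np

def sample_pacf(y, M):
    # phi_hat_ii: last OLS coefficient of a regression of y_t on y_{t-1},...,y_{t-i}
    y = np.asarray(y, dtype=float) - np.mean(y)
    T = y.size
    pacf = []
    for i in range(1, M + 1):
        Y = y[i:]
        X = np.column_stack([y[i - k:T - k] for k in range(1, i + 1)])
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        pacf.append(beta[-1])
    return np.array(pacf)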


4.5.2 Identification of ARIMA Models

The objective of the identification step is to select a subclass of the family of $ ARIMA$ models appropriate to represent a time series. We follow a two-step procedure: first, we obtain a stationary time series, i.e., we select the parameter $ \lambda$ of the Box-Cox transformation and the order of integration $ d $; secondly, we identify a set of stationary $ ARMA$ processes to represent the stationary process, i.e., we choose the orders $ (p,q)$.

4.5.2.0.1 Selection of Stationary Transformations.

Our task is to identify whether the time series could have been generated by a stationary process. First, we use the timeplot of the series to analyze whether it is stationary in variance. The series departs from this property when the dispersion of the data varies over time. In this case, stationarity in variance is achieved by applying the appropriate Box-Cox transformation (4.17) and, as a result, we get the series $ y^{(\lambda)}$.

The second part is the analysis of the stationarity in mean. The instruments are the timeplot, the sample correlograms and the tests for unit roots and stationarity. The path of a nonstationary series usually shows an upward or downward slope or jumps in the level, whereas a stationary series moves around a unique level along time. The sample autocorrelations of stationary processes are consistent estimates of the corresponding population coefficients, so the sample correlograms of stationary processes go to zero for moderate lags. This reasoning does not apply to nonstationary processes because their theoretical autocorrelations are not well defined, but we can argue that a 'non-decaying' behavior of the sample ACF may be due to a lack of stationarity. Typical profiles of sample correlograms of integrated series are shown in figure 4.14: the sample ACF tends to damp very slowly, while the sample PACF decays very quickly, with the first coefficient close to unity and the coefficients from lag $ j=2$ onwards near zero.

When the series shows nonstationary patterns, we should take first differences and analyze whether $ \Delta y^{(\lambda)}_t$ is stationary in a similar way. This process of taking successive differences continues until a stationary time series is achieved. These graphical methods can be supported by the unit-root and stationarity tests developed in subsection 4.3.3. As a result, we obtain a stationary time series $ z_t= \Delta^d y^{(\lambda)}_t$, where the order of integration $ d $ is the number of times that we have differenced the series $ y^{(\lambda)}$.
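As a rough illustration of this loop, the following Python sketch applies a log transformation (the $ \lambda=0$ case of the Box-Cox family) and then differences until the ADF test rejects the unit-root hypothesis; the function name, the fixed 5% level and the use of statsmodels' adfuller are assumptions of the example, not part of the procedure above:

import numpy as np
from statsmodels.tsa.stattools import adfuller

def select_d(y, max_d=2, alpha=0.05):
    # returns the stationary transformation z = diff^d(log y) and the order d
    z = np.log(np.asarray(y, dtype=float))        # Box-Cox transformation with lambda = 0
    for d in range(max_d + 1):
        pvalue = adfuller(z, regression="c")[1]   # H0: the series has a unit root
        if pvalue < alpha:
            return z, d                           # unit root rejected: stop differencing
        z = np.diff(z)                            # otherwise take one more difference
    return z, max_d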

Figure 4.14: Sample correlograms of random walk
\includegraphics[width=0.7\defpicwidth]{rwsacf.ps} \includegraphics[width=0.7\defpicwidth]{rwspacf.ps}

4.5.2.0.2 Selection of Stationary ARMA Models.

The choice of the appropriate $ (p,q)$ values of the $ ARMA$ model for the stationary series $ z_t$ is carried out on the grounds of its characteristics, that is, the mean, the ACF and the PACF.

Table 4.1: Autocorrelation patterns of ARMA processes
Process | ACF | PACF
$ AR(p)$ | Infinite: exponential and/or sine-cosine wave decay | Finite: cut off at lag $ p$
$ MA(q)$ | Finite: cut off at lag $ q$ | Infinite: exponential and/or sine-cosine wave decay
$ ARMA(p, q)$ | Infinite: exponential and/or sine-cosine wave decay | Infinite: exponential and/or sine-cosine wave decay


The mean of the process is closely connected with the parameter $ \delta$: when the constant term is zero, the process has zero mean (see equation (4.22)). Then a constant term will be added to the model if $ H_0\!\!: \textrm{E}(z) = 0$ is rejected.

The orders $ (p,q)$ are selected comparing the sample ACF and PACF of $ z_t$ with the theoretical patterns of $ ARMA$ processes that are summarized in table 4.1.

4.5.2.0.3 Example: Minks time series.

To illustrate the identification procedure, we analyze the annual number of mink furs traded by the Hudson's Bay Company in Canada from $ 1848$ to $ 1909$, denoted by $ Minks$.

XEGutsm12.xpl

Figure 4.15: Minks series and stationary transformation
\includegraphics[width=0.7\defpicwidth]{minkgra.ps} \includegraphics[width=0.7\defpicwidth]{lminks.ps}

Figure 4.16: Minks series. Sample ACF and PACF of $ \ln(Minks)$
\includegraphics[width=0.7\defpicwidth]{mink-s.ps} \includegraphics[width=0.7\defpicwidth]{mink-p.ps}


Table 4.2: Minks series. Sample ACF and PACF of $ \ln(Minks)$
lag | Two standard error | ACF | PACF
1 | 0.254 | 0.6274 | 0.6274
2 | 0.254 | 0.23622 | -0.2596
3 | 0.254 | 0.00247 | -0.03770
4 | 0.254 | -0.15074 | -0.13233
5 | 0.254 | -0.22492 | -0.07182


Figure 4.15 plots the time series $ Minks$. It suggests that the variance could be changing. The plot of $ \ln(Minks)$ shows a more stable pattern in variance, so we select the transformation parameter $ \lambda=0$. The series $ \ln(Minks)$ appears to be stationary since it evolves around a constant level and the correlograms decay quickly (see figure 4.16). Furthermore, the ADF test-value is $ -3.6$, which clearly rejects the unit-root hypothesis. Therefore, the stationary time series we are going to analyze is $ z_t = \ln(Minks)$.

As far as the selection of the orders $ (p,q)$ is concerned, we study the correlograms of figure 4.16 and the numerical values of the first five coefficients reported in table 4.2. The main feature of the ACF is its damping sine-cosine wave structure, which reflects the behavior of an $ AR(p)$ process with complex roots. The PACF, where $ H_0\!\!:\phi_{jj} =0$ is rejected only for $ j=1,2$, leads us to select $ p=2$ for the $ AR$ model.

The statistic for testing the null hypothesis $ H_0\!\!: \textrm{E}(z) = 0$ takes the value 221.02. Given a significance level of 5%, we reject the null hypothesis of zero mean and a constant should be included in the model. As a result, we propose an $ ARIMA(2,0,0)$ model for $ \ln(Minks)$:

$\displaystyle z_t = \delta + \phi_1 z_{t-1} + \phi_2 z_{t-2} + \varepsilon_t \qquad \hbox{with} \quad z=\ln(Minks)$


4.5.3 Parameter Estimation

The parameters of the selected $ ARIMA(p,d,q) $ model can be estimated consistently by least-squares or by maximum likelihood. Both estimation procedures are based on the computation of the innovations $ \varepsilon_t $ from the values of the stationary variable. The least-squares methods minimize the sum of squares,

$\displaystyle min \qquad \sum_t \varepsilon_t^2$ (4.43)

The log-likelihood can be derived from the joint probability density function of the innovations $ \varepsilon_1, \dots, \varepsilon_T$, that takes the following form under the normality assumption, $ \varepsilon_t
\sim \hbox{N.I.D.}(0, \sigma^2_{\varepsilon})$:

$\displaystyle f(\varepsilon_1, \dots, \varepsilon_T) \propto \sigma_{\varepsilon}^{-T} \exp\left\{-\sum_{t=1}^{T} \frac{\varepsilon_t^2}{2\sigma^2_{\varepsilon}}\right\}$ (4.44)

In order to solve the estimation problem, equations (4.43) and (4.44) should be written in terms of the observed data and the set of parameters $ (\Theta, \Phi, \delta)$. An $ ARMA(p, q)$ process for the stationary transformation $ z_t$ can be expressed as:

$\displaystyle \varepsilon_t = z_t - \delta - \sum_{i=1}^p \phi_i z_{t-i} - \sum_{i=1}^q \theta_i \varepsilon_{t-i}$ (4.45)

Then, to compute the innovations corresponding to a given set of observations $ (z_1, \dots, z_T)$ and parameters, it is necessary to have the starting values $ z_0, \dots, z_{p-1}$, $ \varepsilon_0, \dots, \varepsilon_{q-1}$. In practice, the innovations are approximated by setting appropriate conditions on these initial values, giving rise to conditional least squares or conditional maximum likelihood estimators. One procedure consists of setting the initial values equal to their unconditional expectations, that is,

$\displaystyle \varepsilon_0 = \dots =\varepsilon_{q-1} = 0 \quad \hbox{ and } \quad z_0= \dots = z_{p-1}= \delta \left(1-\sum_{i=1}^p \phi_i\right)^{-1}$

For example, for the $ MA(1)$ process with zero mean, equation (4.45) is $ \varepsilon_t = z_t - \theta
\varepsilon_{t-1} $. Assuming $ \varepsilon_0=0$, then we compute the innovations recursively as follows:
$\displaystyle \varepsilon_1 = z_1, \qquad \varepsilon_2 = z_2 - \theta z_1, \qquad \varepsilon_3 = z_3 - \theta z_2 + \theta^2 z_1$

and so on. That is,

$\displaystyle \varepsilon_t = \sum_{i=0}^{t-1} (-1)^i \theta^i z_{t-i}$ (4.46)

A second useful mechanism is to assume that the first $ p$ observations of $ y$ are the starting values and the previous innovations are again equal to zero. In this case we run equation (4.45) from $ t=p+1$ onwards. For example, for an $ AR(p)$ process, it is

$\displaystyle \varepsilon_t = z_t - \delta - \phi_1 z_{t-1} - \dots - \phi_p z_{t-p}$ (4.47)

thus, for given values of $ \delta, \Phi$ and conditional on the initial values $ z_1, \dots, z_p$, we can compute the innovations from $ t=p+1$ until $ T$. Both procedures coincide for pure $ MA(q)$ models, but the first one could be less suitable for models with $ AR$ components. For example, consider an $ AR(1)$ process with zero mean and an $ AR$ parameter close to the nonstationary boundary. In this case, the initial value $ z_1$ could deviate considerably from its unconditional expectation, and the condition $ z_0=0$ could distort the estimation results.
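To make recursion (4.45) concrete, here is a minimal Python sketch that computes the conditional innovations of an $ ARMA(p,q)$ for given parameter values, using the second initialization (conditioning on the first $ p$ observations and setting pre-sample innovations to zero). It is an illustration only, not the estimation code of the quantlets used below:

import numpy as np

def arma_innovations(z, delta, phi, theta):
    # eps_t = z_t - delta - sum_i phi_i z_{t-i} - sum_i theta_i eps_{t-i}, cf. (4.45)
    z = np.asarray(z, dtype=float)
    p, q = len(phi), len(theta)
    eps = np.zeros(len(z))
    for t in range(p, len(z)):
        ar_part = sum(phi[i] * z[t - 1 - i] for i in range(p))
        ma_part = sum(theta[i] * eps[t - 1 - i] for i in range(q) if t - 1 - i >= 0)
        eps[t] = z[t] - delta - ar_part - ma_part
    return eps[p:]   # innovations for t = p+1, ..., T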

4.5.3.0.1 Conditional Least Squares.

Least squares estimation conditioned on the first $ p$ observations becomes straightforward in the case of pure $ AR$ models, leading to linear least squares. For example, for the $ AR(1)$ process with zero mean, and conditioning on the first value $ y_1$, equation (4.43) becomes the linear problem,

$\displaystyle min \quad \sum_{t=2}^{T} (y_t - \phi y_{t-1})^2$

leading to the usual estimator

$\displaystyle \hat{\phi}_{LS} \, =\, \frac{\sum_t y_t y_{t-1}}{\sum_t y_{t-1}^2}$

which is consistent and asymptotically normal.
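In Python this estimator is a one-line computation (a sketch assuming y holds the zero-mean stationary series as an array):

import numpy as np

def ar1_cls(y):
    # conditional LS for a zero-mean AR(1): OLS of y_t on y_{t-1}
    y = np.asarray(y, dtype=float)
    return np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)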

In a general model with a $ MA$ component the optimization problem (4.43) is nonlinear. For example, to estimate the parameter $ \theta$ of the $ MA(1)$ process, we substitute equation (4.46) in (4.43),

$\displaystyle \min \sum_{t=2}^{T} \varepsilon_t^2 \;=\; \min \sum_{t=2}^{T} \left(\sum_{i=0}^{t-1} (-1)^i \theta^i z_{t-i}\right)^2$

which is a nonlinear function of $ \theta$. Then, common nonlinear optimization algorithms such as Gauss-Newton can be applied in order to get the estimates.
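As an illustration of this nonlinear step, the following Python sketch minimizes the conditional sum of squares of a zero-mean $ MA(1)$ numerically; a general-purpose bounded optimizer is used here in place of Gauss-Newton, and the restriction to the invertible region is an assumption of the example:

import numpy as np
from scipy.optimize import minimize_scalar

def ma1_css(theta, z):
    # conditional sum of squares with eps_0 = 0, using eps_t = z_t - theta*eps_{t-1}
    eps_prev, css = 0.0, 0.0
    for zt in z:
        eps = zt - theta * eps_prev
        css += eps ** 2
        eps_prev = eps
    return css

def ma1_cls(z):
    res = minimize_scalar(ma1_css, bounds=(-0.99, 0.99), args=(np.asarray(z, float),),
                          method="bounded")
    return res.x   # conditional LS estimate of theta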

4.5.3.0.2 Maximum Likelihood.

The ML estimator conditional on the first $ p$ values is equal to the conditional LS estimator. For example, returning to the $ AR(1)$ specification, we substitute the innovations $ \varepsilon_t = y_t - \phi y_{t-1}$ into the ML principle (4.44). Taking logarithms we get the corresponding log-likelihood conditional on the first value $ y_1$:

$\displaystyle \ell (\phi, \sigma^2_\varepsilon\vert y_1) = -\frac{T-1}{2}\, \ln(\sigma^2_\varepsilon) -\frac{1}{2\sigma^2_\varepsilon} \sum_{t=2}^{T}(y_t -\phi y_{t-1})^2$ (4.48)

The maximization of this function gives the LS estimator. Instead of setting the initial conditions, we can compute the unconditional likelihood. For an $ AR(1)$ model, the joint density function can be decomposed as:

$\displaystyle f(y_1, y_2, \dots, y_T) = f(y_1) f(y_2, \dots, y_T\vert y_1)$

where the marginal distribution of $ y_1$ is normal with zero mean, if $ \delta = 0$, and variance $ \sigma_{\varepsilon}^2 (1-\phi^2)^{-1}$. Then, the exact log-likelihood under the normality assumption is:

$\displaystyle \ell (\phi, \sigma^2_\varepsilon) = -\frac{1}{2} \ln\!\left(\frac{\sigma^2_\varepsilon}{1-\phi^2}\right) - \frac{y_1^2\,(1-\phi^2)}{2\sigma^2_\varepsilon} + \ell (\phi, \sigma^2_\varepsilon\vert y_1)$

where the second term $ \ell(\phi, \sigma^2_\varepsilon\vert y_1)$ is given by equation (4.48). Thus, the exact likelihood for a general $ ARMA$ model is the combination of the conditional likelihood and the unconditional probability density function of the initial values. As can be seen for the $ AR(1)$ model, the exact ML estimator is not linear; the estimates are the solution of a nonlinear optimization problem that can become quite complex. This unconditional likelihood can be computed via the prediction error decomposition by applying the Kalman filter (Harvey; 1993), which is also a useful tool for Bayesian estimation of $ ARIMA$ models (Bauwens, Lubrano and Richard; 1999; Box, Jenkins and Reinsel; 1994). As the sample size increases, the relative contribution of these initial values to the likelihood becomes negligible, and so do the differences between conditional and unconditional estimation.
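A minimal numerical illustration of the exact $ AR(1)$ likelihood above (Python; the reparametrization of $ \sigma^2_\varepsilon$ and the use of a generic optimizer are assumptions of the sketch):

import numpy as np
from scipy.optimize import minimize

def neg_exact_loglik(params, y):
    # negative exact log-likelihood of a zero-mean Gaussian AR(1), constants dropped
    phi, log_s2 = params
    s2 = np.exp(log_s2)                 # guarantees sigma^2 > 0
    if abs(phi) >= 1:
        return np.inf                   # outside the stationary region
    resid = y[1:] - phi * y[:-1]
    ll_cond = -0.5 * (len(y) - 1) * np.log(s2) - np.sum(resid ** 2) / (2 * s2)
    ll_y1 = -0.5 * np.log(s2 / (1 - phi ** 2)) - y[0] ** 2 * (1 - phi ** 2) / (2 * s2)
    return -(ll_cond + ll_y1)

def ar1_exact_ml(y):
    y = np.asarray(y, dtype=float)
    res = minimize(neg_exact_loglik, x0=np.array([0.0, np.log(np.var(y))]), args=(y,))
    return res.x[0], np.exp(res.x[1])   # (phi_hat, sigma2_hat)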

4.5.3.0.3 Example: Minks time series.

As an example let us estimate the following models for the series $ z = \ln(Minks)$:
$\displaystyle AR(2):\qquad z_t \,=\, \delta + \phi_1 z_{t-1} + \phi_2 z_{t-2} + \varepsilon_t$ (4.49)
$\displaystyle MA(1):\qquad z_t \,=\, \delta + \theta \varepsilon_{t-1} + \varepsilon_t$ (4.50)
$\displaystyle ARMA(1,1):\qquad z_t \,=\, \delta + \phi_1 z_{t-1} + \theta \varepsilon_{t-1} + \varepsilon_t$ (4.51)

The ariols quantlet may be applied to compute the linear LS estimates of pure $ AR$ processes such as the $ AR(2)$ model (4.49). In this case we use the code ar2=ariols(z,p, d, "constant") with $ p=2, d=0$ and the results are stored in the object ar2. The first three elements are the basic results: ar2.b is the vector of parameter estimates ( $ \hat{\delta}, \hat{\phi}_1, \hat{\phi}_2$), ar2.bst is the vector of their corresponding asymptotic standard errors and ar2.wnv is the innovation variance estimate $ \hat{\sigma}^2_{\varepsilon}$. The last three components, ar2.checkr, ar2.checkp and ar2.models, are lists that include the diagnostic checking statistics and model selection criteria when the optional strings rcheck, pcheck and ic are included, and take zero value otherwise.

The $ MA(1)$ model (4.50) is estimated by conditional nonlinear LS with the arimacls quantlet. The basic results of the code ma1=arimacls(z,p,d,q, "constant") with $ p=d=0, q=1$ consist of: the vector ma1.b of estimated parameters, the innovation variance estimate ma1.wnv and the vector ma1.conv with the number of iterations and a 0-1 scalar indicating convergence. The other output results, ma1.checkr and ma1.ic, are the same as the ariols components.

The exact maximum likelihood estimation of the $ ARIMA(1,0,1)$ process (4.51) can be done by applying the quantlet arima11 . The corresponding code is arima11=arima11(z,d, "constant") with $ d=0$ and the output includes the vector of parameter estimates ( $ \hat{\delta}, \hat{\phi}, \hat{\theta}$), the asymptotic standard errors of the $ ARMA$ components ( $ \hat{\phi}, \hat{\theta}$), the innovation variance estimate $ \hat{\sigma}^2_{\varepsilon}$ and the optional results arima11.checkr, arima11.checkp and arima11.ic.

The parameter estimates are summarized in table 4.3.


Table 4.3: Minks series. Estimated models
Model | $ AR(2)$ | $ MA(1)$ | $ ARMA(1,1)$
$ \hat{\delta}$ | 4.4337 | 10.7970 | 4.6889
$ \hat{\phi}_1$ | 0.8769 |  | 0.5657
$ \hat{\phi}_2$ | -0.2875 |  |
$ \hat{\theta}$ |  | 0.6690 | 0.3477
$ \hat{\sigma}_\varepsilon^2$ | 0.0800 | 0.0888 | 0.0763


XEGutsm13.xpl


4.5.4 Diagnostic Checking

Once we have identified and estimated the candidate $ ARMA$ models, we want to assess the adequacy of the selected models to the data. This model diagnostic checking step involves both parameter and residual analysis.

4.5.4.0.1 Diagnostic Testing for Residuals.


If the fitted model is adequate, the residuals should be approximately white noise. So, we should check whether the residuals have zero mean and whether they are uncorrelated. The key instruments are the timeplot, the ACF and the PACF of the residuals. The theoretical ACF and PACF of white noise processes take value zero for lags $ j \neq 0$, so if the model is appropriate most of the coefficients of the sample ACF and PACF should be close to zero. In practice, we require that about 95% of these coefficients fall within the non-significance bounds. Moreover, the Ljung-Box statistic (4.42) should take small values, as corresponds to uncorrelated variables. The degrees of freedom of this statistic take into account the number of estimated parameters, so that under $ H_0$ the test statistic follows approximately a $ \chi^2_{(M-k)}$ distribution with $ k= p+q$. If the model is not appropriate, we expect the correlograms (simple and partial) of the residuals to depart from white noise, suggesting a reformulation of the model.
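A short Python sketch of this residual check (illustrative only; resid is assumed to hold the model residuals and k = p + q):

import numpy as np
from scipy.stats import chi2

def residual_ljung_box(resid, M, k):
    # Ljung-Box statistic (4.42) on the residuals and its p-value with M - k degrees of freedom
    e = np.asarray(resid, dtype=float)
    T = e.size
    ec = e - e.mean()
    denom = np.sum(ec ** 2)
    r = np.array([np.sum(ec[j:] * ec[:-j]) / denom for j in range(1, M + 1)])
    q_lb = T * (T + 2) * np.sum(r ** 2 / (T - np.arange(1, M + 1)))
    return q_lb, chi2.sf(q_lb, df=M - k)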

4.5.4.0.2 Example: Minks time series.

We will check the adequacy of the three fitted models to the $ \ln(Minks)$ series. It can be done by using the optional string 'rcheck' of the estimation quantlets.

For example, for the $ MA(1)$ model we can use the code ma1= arimacls(z,0,0, 1, "constant","rcheck"). This option plots the residuals with the usual standard error bounds $ \pm 2
\hat{\sigma}_{\varepsilon}$ and the simple and partial correlograms of $ \hat{\varepsilon}_t$ (see figure 4.17). The output ma1.checkr also stores the residuals in the vector a, the statistic for testing $ H_0\!\!: \textrm{E}\varepsilon_t = 0$ in the scalar stat and the ACF, Ljung-Box statistic and the PACF in the matrix acfQ.

With regard to the zero mean condition, the timeplot shows that the residuals of the $ MA(1)$ model evolve around zero, and this behavior is supported by the corresponding hypothesis test $ H_0\!\!: \textrm{E}\varepsilon_t = 0$. The value of the statistic is $ 0.016$, so the hypothesis of zero mean errors is not rejected. The correlogram of these residuals in figure 4.17 shows that several coefficients are significant and, moreover, it displays a decaying sine-cosine wave. The Ljung-Box statistics for some values of $ M$ are: $ Q_{LB}(10)=26.65$ and $ Q_{LB}(15)=40.47$, rejecting the required hypothesis of uncorrelated errors. These results lead us to reformulate the model.

Figure 4.17: Minks series. Residual diagnostics of $ MA(1)$ model
\includegraphics[width=0.8\defpicwidth]{minkm1res.ps} \includegraphics[width=0.7\defpicwidth]{minkm1s.ps} \includegraphics[width=0.7\defpicwidth]{minkm1p.ps}

Next, we will check the adequacy of the AR(2) model (4.49) to $ \ln(Minks)$ data by means of the code ar2=ariols(z,2,0, "constant", "rcheck"). The output ar2.checkr provides us with the same results as the arimacls , namely, ar2.checkr.a, ar2.checkr.stat, and ar2.checkr.acfQ. Most of the coefficients of the ACF lie under the non-significance bounds and the Ljung-Box statistic takes values $ Q_{LB}(10)=12.9$ and $ Q_{LB}(15)=20.1$. Then the hypothesis of uncorrelated errors is not rejected in any case.

XEGutsm15.xpl

Finally, the residual diagnostics for the $ ARMA(1,1)$ model (4.51) are computed with the optional string "rcheck" of the code arma11=arima11(z,0, "constant", "rcheck"). These results show that, given a significance level of 5%, neither the hypothesis of uncorrelated errors nor that of zero mean errors is rejected.

XEGutsm16.xpl

4.5.4.0.3 Diagnostic Testing for Parameters.

The usual t-statistics to test the statistical significance of the $ AR$ and $ MA$ parameters should be computed to check whether the model is overspecified. But it is important, as well, to assess whether the stationarity and invertibility conditions are satisfied. If we factorize the $ AR$ and $ MA$ polynomials:

$\displaystyle \Phi(L) = (1 - R_1^{-1}L) \, (1 - R_2^{-1}L) \cdots (1 - R_p^{-1}L)$
$\displaystyle \Theta(L) = (1 - S_1^{-1}L) \, (1 - S_2^{-1}L) \cdots (1 - S_q^{-1}L)$

and one of these roots is close to unity, it may be an indication of lack of stationarity and/or invertibility.
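This check is easy to carry out numerically. A Python sketch using numpy (the coefficient ordering passed to the function is an assumption of the example):

import numpy as np

def root_moduli(coeffs):
    # moduli of the roots of 1 + c_1 L + ... + c_k L^k = 0;
    # for Phi(L) = 1 - phi_1 L - ... - phi_p L^p pass [-phi_1, ..., -phi_p],
    # for Theta(L) = 1 + theta_1 L + ... + theta_q L^q pass [theta_1, ..., theta_q]
    coeffs = np.asarray(coeffs, dtype=float)
    return np.abs(np.roots(np.r_[coeffs[::-1], 1.0]))   # highest degree first for np.roots

Moduli close to one warn of near nonstationarity or noninvertibility; for instance, the fitted AR(2) polynomial of the Minks example below, $ 1 - 0.877L + 0.288L^2$, has roots with modulus of about 1.87, comfortably outside the unit circle.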

An inspection of the covariance matrix of the estimated parameters allows us to detect the possible presence of high correlation between the estimates of some parameters which can be a manifestation of the presence of a 'common factor' in the model (Box and Jenkins; 1976).

4.5.4.0.4 Example: Minks time series.

We will analyse the parameter diagnostics of the estimated $ AR(2)$ and $ ARMA(1,1)$ models (see equations (4.49) and (4.51)). The quantlets ariols for OLS estimation of pure $ ARI(p,d)$ models and arima11 for exact ML estimation of $ ARIMA(1,d,1)$ models provide us with these diagnostics by means of the optional string "pcheck".

The ariols output stores the t-statistics in the vector ar2.checkp.bt, the estimate of the asymptotic covariance matrix in ar2.checkp.bvar and the result of checking the necessary condition for stationarity (4.16) in the string ar2.checkp.est. When the process is stationary, this string takes value 0; otherwise a warning message appears. The arima11 output also checks the stationarity and invertibility conditions of the estimated model and stores the t-statistics and the asymptotic covariance matrix of the $ ARMA$ parameters $ \phi, \theta$.

The following table shows the results for the $ AR(2)$ model:

[1,] "           ln(Minks), AR(2) model            "
[2,] "   Parameter    Estimate      t-ratio        "
[3,] "_____________________________________________"
[4,] "    delta        4.434         3.598"
[5,] "    phi1         0.877         6.754"
[6,] "    phi2        -0.288        -2.125"

XEGutsm17.xpl

It can be observed that the parameters of the $ AR(2)$ model are statistically significant and the roots of the polynomial $ (1 - 0.877 L +
0.288L^2)=0$ are $ 1.53 \pm 1.07 i$, indicating that the stationarity condition is clearly satisfied. Then, the $ AR(2)$ seems to be an appropriate model for the $ \ln(Minks)$ series. Similar results are obtained for the $ ARMA(1,1)$ model.


4.5.5 Model Selection Criteria

Once a set of models have been identified and estimated, it is possible that more than one of them is not rejected in the diagnostic checking step. Although we may want to use all models to check which performs best in forecasting, usually we want to select between them. In general, the model which minimizes a certain criterion function is selected.

The standard goodness of fit criterion in Econometrics is the coefficient of determination:

$\displaystyle R^2 = 1 - \frac{\hat{\sigma}^2_\varepsilon}{\hat{\sigma}^2_y}$

where $ \hat{\sigma}^2_\varepsilon = \sum \hat{\varepsilon}^2_t / T$. Therefore, maximizing $ R^2$ is equivalent to minimizing the sum of squared residuals. This measure presents some problems as a model selection device. First, the $ R^2$ cannot decrease when more variables are added to a model, so it typically keeps increasing as the model grows. Besides, economic time series usually present strong trends and/or seasonalities, and any model that captures these features to some extent will have a very large $ R^2$. Harvey (1989) proposes modifications to this coefficient to solve this problem.

Due to the limitations of the $ R^2$ coefficient, a number of criteria have been proposed in the literature to evaluate the fit of the model against the number of parameters (see Pötscher and Srinivasan (1994) for a survey). These criteria were developed for pure $ AR$ models but have been extended to $ ARMA$ models. It is assumed that the degree of differencing has been decided and that the object of the criterion is to determine the most appropriate values of $ p$ and $ q$. The most widely applied model selection criteria are the Akaike Information Criterion, $ AIC$, (Akaike; 1974) and the Schwarz Information Criterion, $ SIC$, (Schwarz; 1978) given by:

$\displaystyle AIC = \ln(\hat{\sigma}^2_\varepsilon) + \frac{2k}{T}$ (4.52)

$\displaystyle SIC = \ln(\hat{\sigma}^2_\varepsilon) + \frac{k}{T} \, \ln(T)$ (4.53)

where $ k$ is the number of estimated $ ARMA$ parameters $ (p+q)$ and $ T$ is the number of observations used for estimation. Both criteria are based on the estimated variance $ \hat{\sigma}^2_\varepsilon $ plus a penalty adjustment depending on the number of estimated parameters, and it is in the extent of this penalty that the criteria differ. The penalty proposed by $ SIC$ is larger than $ AIC$'s since $ \ln(T) > 2 $ for $ T \geq 8$. Therefore, the difference between both criteria can be very large if $ T$ is large; $ SIC$ tends to select simpler models than those chosen by $ AIC$. In practical work, both criteria are usually examined. If they do not select the same model, many authors recommend using the more parsimonious model selected by $ SIC$.
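A direct Python transcription of (4.52) and (4.53) (sigma2 denotes $ \hat{\sigma}^2_\varepsilon$, k the number of ARMA parameters and T the number of observations used in estimation):

import numpy as np

def aic(sigma2, k, T):
    # Akaike Information Criterion, equation (4.52)
    return np.log(sigma2) + 2 * k / T

def sic(sigma2, k, T):
    # Schwarz Information Criterion, equation (4.53)
    return np.log(sigma2) + k * np.log(T) / T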

4.5.5.0.1 Example: Minks time series.

We will apply these information criteria to select between the $ AR(2)$ and $ ARMA(1,1)$ models that were not rejected at the diagnostic checking step. The optional string 'msc' of the estimation quantlets provides us with the values of both criteria, $ AIC$ and $ SIC$. For example, the output of the code ar2=ariols(z,2,0, "constant","nor", "nop", "msc") includes the vector ar2.ic with these values. The following table summarizes the results for the fitted models:

Criterion | $ AR(2)$ | $ ARMA(1,1)$
$ AIC$ | -2.510 | -2.501
$ SIC$ | -2.440 | -2.432

XEGutsm18.xpl

Figure 4.18: Mink series. Forecasts for $ AR(2)$ model
\includegraphics[width=0.8\defpicwidth]{lminkf.ps}

Both criteria select the $ AR(2)$ model and we use this model to forecast. This model generates cyclical behavior with a period of 10.27 years. The forecast function of this model for the $ \ln(Minks)$ series can be seen in figure 4.18.


4.5.6 Example: European Union G.D.P.

To illustrate the time series modelling methodology we have presented so far, we analyze a quarterly, seasonally adjusted series of the European Union G.D.P. from the first quarter of 1962 until the first quarter of 2001 (157 observations).


Table 4.4: GDP (E.U. series). Unit-root and stationarity tests
Statistic | Test value | Critical value ( $ \alpha=5\%$)
$ \tau_{\mu}$ | 0.716 | -2.86
$ \tau_{\tau} $ | -1.301 | -3.41
$ KPSS_{\mu}$ | 7.779 | 0.46
$ KPSS_{\tau}$ | 0.535 | 0.15


This series is plotted in the first graphic of figure 4.19. It can be observed that the $ GDP$ series displays a nonstationary pattern with an upward trending behavior. Moreover, the shape of the correlograms (left column of figure 4.19) is typical of a nonstationary process, with a slow decay in the ACF and a coefficient $ \hat{\phi}_{11}$ close to unity in the PACF. The ADF test-values (see table 4.4) do not reject the unit-root null hypothesis, either against the alternative of a process that is stationary in deviations from a constant or from a linear trend. Furthermore, the KPSS statistics clearly reject the null hypothesis of stationarity around a constant or a linear trend. Thus, we should analyze the stationarity of the first differences of the series, $ z=\Delta GDP$.

Figure 4.19: European Union G.D.P. ($ 10^{11}$ US Dollars 1995)
\includegraphics[width=0.7\defpicwidth]{ejue.ps} \includegraphics[width=0.7\defpicwidth]{ejued.ps} \includegraphics[width=0.7\defpicwidth]{ejacf.ps} \includegraphics[width=0.7\defpicwidth]{ejacfd.ps} \includegraphics[width=0.7\defpicwidth]{ejpacf.ps} \includegraphics[width=0.7\defpicwidth]{ejpacfd.ps}

The right column of figure 4.19 displays the timeplot of the differenced series $ z_t$ and its estimated ACF and PACF. The graph of $ z_t$ shows a series that moves around a constant mean with approximately constant variance. The estimated ACF decreases quickly and the ADF test-value, $ \tau_{\mu}= -9.61$, clearly rejects the unit-root hypothesis against the alternative of a process that is stationary around a constant. Given these results we may conclude that $ z_t$ is a stationary series.

The first coefficients of the ACF of $ z_t$ are statistically significant and decay in the fashion of $ AR$ or $ ARMA$ models. With regard to the PACF, its first coefficient is clearly significant and large, indicating that an $ AR(1)$ model could be appropriate for the $ z_t$ series. But given that the first coefficients show some decreasing structure and that $ \hat{\phi}_{55}$ is statistically significant, an $ ARMA(1,1)$ model should perhaps be tried as well. With regard to the mean, the value of the statistic for the hypothesis $ H_0\!\!: \textrm{E}z = 0$ is $ 14.81$, so we reject the zero mean hypothesis.

Therefore we will analyze the following two models:

$\displaystyle (1 - \phi L) \Delta GDP_t \,=\, \delta + (1 + \theta L) \varepsilon_t$
$\displaystyle (1 - \phi L) \Delta GDP_t \,=\, \delta + \varepsilon_t$


Table 4.5: GDP (E.U. series). Estimation results
  | $ ARIMA(1,1,1)$ | $ ARIMA(1,1,0)$
$ \delta\; (t$-statistic$)$ | 0.22 | 0.28 (7.37)
$ \phi\; (t$-statistic$)$ | 0.40 (0.93) | 0.25 (3.15)
$ \theta\; (t$-statistic$)$ | -0.24 (-0.54) | -
$ \sigma^2_\varepsilon$ | 0.089 | 0.089
$ H_0:\!\textrm{E}\, \varepsilon=0$ | 0.020 | -4.46e-15
$ Q_{LB}(5)$ | 6.90 | 6.60
$ Q_{LB}(15)$ | 9.96 | 10.58
$ AIC$ | -2.387 | -2.397
$ SIC$ | -2.348 | -2.377


Estimation results of the $ ARIMA(1,1,1)$ and $ ARIMA(1,1,0)$ models are summarized in table 4.5 and figure 4.20. Table 4.5 reports parameter estimates, t-statistics, zero mean hypothesis test-value, Ljung-Box statistic values for $ M=5,15$ and the usual selection model criteria, $ AIC$ and $ SIC$, for the two models. Figure 4.20 shows the plot of the residuals and their correlograms.

Figure 4.20: GDP (E.U. series). Residuals and ACF
\includegraphics[width=0.7\defpicwidth]{eum1res.ps} \includegraphics[width=0.7\defpicwidth]{eum2res.ps} \includegraphics[width=0.7\defpicwidth]{eum1s.ps} \includegraphics[width=0.7\defpicwidth]{eum2s.ps}

Both models pass the residual diagnostics with very similar results: the zero mean hypothesis for the residuals is not rejected and the correlograms and the Ljung-Box statistics indicate that the residuals behave as white noise processes. However, the parameters of the $ ARIMA(1,1,1)$ model are not statistically significant. Given the fact that including an $ MA$ term does not seem to improve the results (see the $ AIC$ and $ SIC$ values), we select the more parsimonious $ ARIMA(1,1,0)$ model.

Figure 4.21: GDP (E.U. series). Actual and Forecasts
\includegraphics[width=0.8\defpicwidth]{eugdpf.ps}

Figure 4.21 plots the point and interval forecasts for the next 5 years generated by this model. As expected, since the model for the $ GDP$ series is integrated of order one with a nonzero constant, the eventual forecast function is a straight line with positive slope.