4.3 Nonstationary Models for Time Series

The models presented so far are based on the stationarity assumption, that is, the mean and the variance of the underlying process are constant and the autocovariances depend only on the time lag. But many economic and business time series are nonstationary. Nonstationarity can arise in many different ways. In particular, economic time series usually show time-changing levels, $ \mu_t$ (see graph (b) in figure 4.1), and/or variances (see graph (c) in figure 4.1).


4.3.1 Nonstationarity in the Variance

When a time series is not stationary in variance we need a proper variance stabilizing transformation. It is very common for the variance of a nonstationary process to change as its level changes. Thus, let us assume that the variance of the process is:

$\displaystyle V(y_t) = k f(\mu_t)
$

for some positive constant $ k$ and some known function $ f$. The objective is to find a function $ h$ such that the transformed series $ h(y_t)$ has a constant variance. Expanding $ h(y_t)$ in a first-order Taylor series around $ \mu_t$:

$\displaystyle h(y_t) \simeq h(\mu_t) + (y_t - \mu_t) h'(\mu_t)
$

where $ h'(\mu_t)$ is the first derivative of $ h(y_t)$ evaluated at $ \mu_t$. The variance of $ h(y_t)$ can be approximated as:
$\displaystyle V[h(y_t)] \simeq V[h(\mu_t) + (y_t - \mu_t) h'(\mu_t)] = [h'(\mu_t)]^2 \, V(y_t) = [h'(\mu_t)]^2 \, k \, f(\mu_t)
$

Thus, the transformation $ h(y_t)$ must be chosen so that:

$\displaystyle h'(\mu_t) = \frac{1}{\sqrt{f(\mu_t)}}
$

For example, if the standard deviation of a series $ y_t $ is proportional to its level, then $ f(\mu_t) = \mu_t^2$ and the transformation $ h(\mu_t)$ has to satisfy $ h' (\mu_t) =
\mu_t^{-1}$. This implies that $ h(\mu_t) = \ln(\mu_t)$. Hence, a logarithmic transformation of the series will give a constant variance. If the variance of a series is proportional to its level, so that $ f(\mu_t) = \mu_t $, then a square root transformation of the series, $ \sqrt{y_t} $, will give a constant variance.

More generally, to stabilize the variance, we can use the power transformation introduced by Box and Cox (1964):

$\displaystyle y_t^{(\lambda)} = \left\{ \begin{array}{ll} \frac{y_t^{\lambda} - 1}{\lambda} & \lambda \ne 0 \\ \ln(y_t) & \lambda = 0 \end{array} \right.$ (4.17)

where $ \lambda$ is called the transformation parameter. It should be noted that, frequently, the Box-Cox transformation not only stabilizes the variance but also improves the approximation to normality of process $ y_t $.
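To illustrate, the following minimal sketch applies the Box-Cox transformation (4.17) to a series whose standard deviation is proportional to its level, so that the estimated $ \lambda$ should be close to zero, i.e., the logarithmic transformation. It is written in Python with NumPy and SciPy rather than in the chapter's XploRe, and the simulated series and all parameter values are invented for the example.

# A minimal sketch, assuming Python with NumPy/SciPy is available;
# the series below is simulated purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t = np.arange(1, 301)
level = np.exp(0.01 * t)                                # slowly growing level mu_t
y = level * np.exp(0.1 * rng.standard_normal(t.size))   # sd proportional to level

# Box-Cox transformation (4.17); with lmbda=None SciPy estimates
# the transformation parameter lambda by maximum likelihood.
y_bc, lam = stats.boxcox(y)
print("estimated lambda:", round(lam, 3))   # expected near 0 (log transform)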


4.3.2 Nonstationarity in the Mean

One of the dominant features of many economic and business time series is the trend. Trend is slow, long-run evolution in the variables that we want to model. In business, economics, and finance time series, trend is usually produced by slowly evolving preferences, technologies and demographics. This trend behavior can be upward or downward, steep or not, and exponential or approximately linear. With such a trending pattern, a time series is nonstationary: it shows no tendency of mean reversion.

Nonstationarity in the mean, that is, a nonconstant level, can be modelled in different ways. The most common alternatives are deterministic trends and stochastic trends.

4.3.2.0.1 Deterministic Trends


Let us consider the extension of Wold's decomposition theorem for nonstationary series given by Cramer (1961):

$\displaystyle y_t = \mu_t + u_t
$

where $ u_t$ is a zero mean stationary process. The changing mean of a nonstationary process, or trend, $ \mu_t$, can be represented by a deterministic function of time. These models imply that the trend of the series evolves in a perfectly predictable way; therefore they are called deterministic trend models.

For example, if the mean function $ \mu_t$ follows a linear trend, one can use the deterministic linear trend model:

$\displaystyle y_{t} = \alpha + \beta \, t + u_t$ (4.18)

The parameter $ \alpha $ is the intercept, the value of the trend at time $ t=0$; the parameter $ \beta $ is the slope, positive if the trend is increasing and negative if it is decreasing. The larger the absolute value of $ \beta $, the steeper the trend.

Sometimes the trend appears nonlinear, or curved, for example when a variable increases at an increasing or decreasing rate. In fact, trends are not required to be linear, only smooth. Quadratic trend models can capture nonlinearities such as those observed in some series; such trends are quadratic rather than linear functions of time,

$\displaystyle y_{t} = \alpha + \beta_1 \, t + \beta_2 \, t^2 + u_t
$

Higher-order polynomial trends are sometimes considered, but it is important to use low-order polynomials to maintain smoothness. Another type of nonlinear trend that is sometimes appropriate is the exponential trend. If the trend is characterized by constant growth at rate $ \beta $, then we can write:

$\displaystyle y_{t} = \alpha \, e^{\beta t} \, U_t
$

Here the trend is modelled as a nonlinear (exponential) function of time in levels, but taking logarithms (with $ u_t = \ln U_t$) we have

$\displaystyle \ln(y_{t}) = \ln(\alpha) + \beta t + u_t
$

Thus, the trend is a linear function of time. This situation, in which a trend appears nonlinear in levels but linear in logarithms, is called an exponential trend or log-linear trend, and is very common in economics because economic variables often display roughly constant growth rates.
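A minimal sketch of fitting the log-linear trend model by least squares follows, in Python rather than the chapter's XploRe; the parameter values used for the simulation are arbitrary.

# A minimal sketch, assuming Python/NumPy; alpha and beta are invented.
import numpy as np

rng = np.random.default_rng(1)
T = 200
t = np.arange(T)
alpha, beta = 2.0, 0.03
y = alpha * np.exp(beta * t) * np.exp(0.05 * rng.standard_normal(T))

# LS regression of ln(y_t) on a constant and t: the slope estimates the
# growth rate beta, the intercept estimates ln(alpha).
b, log_a = np.polyfit(t, np.log(y), deg=1)
print("beta:", round(b, 4), " alpha:", round(np.exp(log_a), 3))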

4.3.2.0.2 Stochastic Trends


Nonstationarity in the mean can also be dealt with within the class of $ ARMA(p, q)$ models (4.7). An $ ARMA$ model is nonstationary if its $ AR$ polynomial does not satisfy the stationarity condition, that is, if some of its roots do not lie outside the unit circle. If the $ AR$ polynomial contains at least one root inside the unit circle, realizations of the process behave explosively. However, this is not the sort of evolution observed in economic and business time series: although many of them are nonstationary, these series behave very much alike except for differences in their local mean levels. If we want to model the evolution of the series independently of its level within the framework of $ ARMA$ models, we write the model in deviations from the level $ \mu$:

$\displaystyle \Phi(L) (y_t - \mu) = \Theta(L) \varepsilon_t
$

For this evolution to be independent of the level, $ \mu$ must drop out of the model. Since $ \mu$ is constant, $ \Phi(L) \mu = \Phi(1) \mu$, so the requirement is:

$\displaystyle \Phi(L) \mu = 0 \hskip 0.5cm \Rightarrow \hskip 0.5cm \Phi(1) = 0
$

so that the $ \Phi(L)$ polynomial can be factorised as:

$\displaystyle \Phi(L) = \Phi^*(L) (1 - L)
$

Applying this decomposition to the general $ ARMA$ model:

$\displaystyle \Phi^*(L) (1 - L) y_t = \Theta(L) \varepsilon_t
$

or

$\displaystyle \Phi^*(L) \Delta y_t = \Theta(L) \varepsilon_t
$

where $ \Phi^*(L)$ is a polynomial of order $ (p-1)$ and $ \Delta = (1-L)$. If $ \Phi^*(L)$ is a stationary $ AR$ polynomial, we say that $ y_t $ has one autoregressive unit root. When the nonstationary $ AR$ polynomial has more than one unit root, say $ d $, it can be decomposed as:

$\displaystyle \Phi(L) = \Phi^*(L) (1 - L)^d
$

Applying again this decomposition to the general $ ARMA$ model we get:

$\displaystyle \Phi^*(L) \Delta^d y_t = \Theta(L) \varepsilon_t
$

for some $ d > 0$ where $ \Phi^*(L)$ is a stationary $ AR$ polynomial of order $ (p-d)$.

In short, if we use $ ARMA$ processes for modelling nonstationary time series, the nonstationarity leads to the presence of unit roots in the autoregressive polynomial. In other words, the series $ y_t $ is nonstationary but its $ d $th differenced series, $ (1 - L)^d y_t$, for some integer $ d \ge 1 $, follows a stationary and invertible $ ARMA(p-d,q) $ model. A process $ y_t $ with these characteristics is called an integrated process of order $ d $ and is denoted by $ y_t \sim I(d) $. Note that the order of integration of a process is the number of differences needed to achieve stationarity, i.e., the number of unit roots present in the process. In practice, $ I(0)$ and $ I(1)$ processes are by far the most important cases for economic and business time series, with $ I(2)$ series arising much less frequently. Box and Jenkins (1976) refer to this kind of nonstationary behavior as homogeneous nonstationarity, indicating that the local behavior of this sort of series is independent of its level (for $ I(1)$ processes) and of its level and slope (for $ I(2)$ processes).

In general, if the series $ y_t $ is integrated of order $ d $, it can be represented by the following model:

$\displaystyle \Phi_p(L) (1-L)^d y_t = \delta + \Theta_q(L) \varepsilon_t$ (4.19)

where the stationary $ AR$ operator $ \Phi_p(L)$ and the invertible $ MA$ operator $ \Theta_q(L)$ share no common factors.

The resulting homogeneous nonstationary model (4.19) has been referred to as the Autoregressive Integrated Moving Average model of order $ (p,d,q)$ and is denoted as the $ ARIMA(p,d,q) $ model. When $ p= 0$, the $ ARIMA(0,d,q) $ is also called the Integrated Moving Average model of order $ (d,q)$ and it is denoted as the $ IMA(d,q)$ model. When $ q=0$, the resulting model is called the Autoregressive Integrated model $ ARI(p,d)$.
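As a quick illustration of the notation, the following sketch (Python with statsmodels, not part of the original text; the MA coefficient 0.6 is arbitrary) builds an $ IMA(1,1)$ series by integrating an invertible $ MA(1)$ once and then fits an $ ARIMA(0,1,1)$ model to it.

# A minimal sketch, assuming Python with statsmodels installed.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
eps = rng.standard_normal(500)
ma1 = eps[1:] + 0.6 * eps[:-1]   # stationary, invertible MA(1)
y = np.cumsum(ma1)               # integrate once: y_t is I(1)

res = ARIMA(y, order=(0, 1, 1)).fit()
print(res.params)                # MA(1) coefficient should be near 0.6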

In order to get more insight into the kind of nonstationary behavior implied by integrated processes, let us study in some detail two of the simplest $ ARIMA$ models: the random walk and the random walk with drift.

4.3.2.0.3 Random Walk Model.

The random walk model is simply an $ AR(1)$ with coefficient $ \phi=1$:

$\displaystyle \Delta y_t = \varepsilon_t, \hskip 1cm \varepsilon_t \sim WN(0, \sigma^{2}_{\varepsilon})
$

$\displaystyle y_t = y_{t-1} + \varepsilon_t$ (4.20)

That is, in the random walk model the value of $ y$ at time $ t$ is equal to its value at time $ t-1$ plus a random shock. The random walk model is not covariance stationary because the $ AR(1)$ coefficient is not less than one. But since the first difference of the series follows a white noise process, $ y_t $ is an integrated process of order 1, $ I(1)$. This model has been widely used to describe the behavior of finance time series such as stock prices, exchange rates, etc.

Graph (a) of figure 4.11 shows a simulated realization of size 150 of a random walk process with $ \sigma^2_{\varepsilon} = 1$. It can be observed that the series does not display mean reversion: it wanders up and down randomly, with no tendency to return to any particular point. If a shock increases the value of a random walk, there is no tendency for it to fall back: the series is expected to stay permanently higher.

Taking expectations in (4.20) given the past information $ y_{t-1},
y_{t-2}, \ldots $, we get:

$\displaystyle E[y_t\vert y_{t-1}, y_{t-2}, \ldots] = \mu_{t\vert t-1} = y_{t-1}
$

This implies that the level at time $ t$ of a series generated by a random walk model is subject to the stochastic disturbance at time $ (t-1)$. That is, the mean level of the process $ y_t $ changes through time stochastically, and the process is characterized as having a stochastic trend. This is different from the deterministic trend model (4.18) of the previous section, where the parameter $ \beta $ remains constant through time and the mean level of the process is a pure deterministic function of time.

Figure 4.11: Realizations from nonstationary processes
[(a) random walk; (b) random walk with drift]

Assuming that the random walk started at some time $ t_0$ with value $ y_0$, we get:

$\displaystyle y_t = y_0 + \sum_{i=t_0+1}^{t} \varepsilon_i
$

Therefore,

$\displaystyle \textrm{E}(y_t) = y_0 \hskip 2cm V(y_t) = (t-t_0)
\sigma^{2}_{\varepsilon}
$

so that the variance grows continuously rather than converging to some finite unconditional variance. The correlation between $ y_t $ and $ y_{t-k}$ is:

$\displaystyle \rho_{k,t} = \frac{t-t_0-k}{\sqrt{(t-t_0)(t-t_0-k)}}
= \sqrt{\frac{t-t_0-k}{t-t_0}}
$

If $ (t-t_0)$ is large compared to $ k$, the correlation coefficients will be close to one. Therefore, a random walk is characterized by sample ACF coefficients of the original series $ y_t $ that decay very slowly.
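This slow decay is easy to reproduce; here is a minimal sketch in Python (a simulation written for this text, not the chapter's code), using the same size 150 and $ \sigma^2_{\varepsilon} = 1$ as the figure.

# A minimal sketch, assuming Python/NumPy.
import numpy as np

rng = np.random.default_rng(3)
y = np.cumsum(rng.standard_normal(150))   # random walk with unit variance shocks

def sample_acf(x, k):
    # sample autocorrelation of x at lag k
    x = x - x.mean()
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)

for k in (1, 5, 10, 20):
    print(k, round(sample_acf(y, k), 3))   # coefficients stay close to one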

4.3.2.0.4 Random Walk with Drift Model.

The random walk with drift model results from adding a nonzero constant term to the random walk model:

$\displaystyle \Delta y_t = \delta + \varepsilon_t
$

or

$\displaystyle y_t = y_{t-1} + \delta + \varepsilon_t$ (4.21)

So the process $ y_t $ is integrated of order 1, $ I(1)$. Assuming that the process started at some time $ t_0$, by successive substitution, we have:

$\displaystyle y_t = y_{0} + (t-t_0)\, \delta + \sum^{t}_{i=t_0+1} \varepsilon_i
$

It can be observed that $ y_t $ contains a deterministic trend with slope or drift $ \delta$, as well as a stochastic trend. Given the past information $ y_{t-1},
y_{t-2}, \ldots $, the level of the series at time $ t$ is given by:

$\displaystyle E[y_t\vert y_{t-1}, y_{t-2}, \ldots] = \mu_{t\vert t-1} = y_{t-1} + \delta
$

which is influenced by the stochastic disturbance at time $ (t-1)$ through the term $ y_{t-1}$ as well as by the deterministic component through the parameter $ \delta$.

The random walk with drift is a model that on average grows each period by the drift, $ \delta$. This drift parameter $ \delta$ plays the same role as the slope parameter in the linear deterministic trend model (4.18). Just as the random walk has no particular level to which it returns, so the random walk with drift model has no particular trend to which it returns. If a shock moves the value of the process below the currently projected trend, there is no tendency for it to return; a new trend simply begins from the new position of the series (see graph (b) in figure 4.11).

In general, if a process is integrated, that is, $ y_t \sim ARIMA(p,d,q)$ for some $ d > 0$, shocks have completely permanent effects; a unit shock moves the expected future path of the series by one unit forever. Moreover, the parameter $ \delta$ plays very different roles for $ d=0$ and $ d > 0$. When $ d=0$, the process is stationary and the parameter $ \delta$ is related to the mean of the process, $ \mu$:

$\displaystyle \delta = \mu (1 - \phi_1 - \ldots - \phi_p)$ (4.22)

However, when $ d > 0$, the presence of the constant term $ \delta$ introduces a deterministic linear trend in the process (see graph (b) in figure 4.11). More generally, for models involving the $ d $th differenced series $ (1 -
L)^d y_t$, the nonzero parameter $ \delta$ can be shown to correspond to the coefficient $ \beta_d$ of $ t^d$ in the deterministic trend, $ \beta_0 + \beta_1 t + \ldots +
\beta_d t^d$. That is why, when $ d > 0$, the parameter $ \delta$ is referred to as the deterministic trend term. In this case, the models may be interpreted as including a deterministic trend buried in a nonstationary noise.
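A short numerical check of this interpretation of $ \delta$ for $ d=1$ (a Python sketch; the drift value 0.2 is invented): the sample mean of the differenced series estimates the slope of the implied linear trend.

# A minimal sketch, assuming Python/NumPy; delta = 0.2 is arbitrary.
import numpy as np

rng = np.random.default_rng(4)
delta = 0.2
y = np.cumsum(delta + rng.standard_normal(500))   # random walk with drift

print(np.diff(y).mean())   # close to delta, the deterministic trend term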


4.3.3 Testing for Unit Roots and Stationarity

As we have seen, the properties of a time series depend on its order of integration, $ d $, that is, on the presence of unit roots. It is important to have techniques available to determine the actual form of nonstationarity and, where possible, to distinguish between stochastic and deterministic trends. There are two kinds of statistical tests: one group is based on the unit-root null hypothesis, while the other is based on the null hypothesis of stationarity.

4.3.3.0.1 Unit Root Tests


There is a large literature on testing for unit roots. A good survey may be found in Dickey, Bell and Miller (1986), among others. Let us consider the simple $ AR(1)$ model:

$\displaystyle y_t = \phi y_{t-1} + \varepsilon_t,$ (4.23)

where $ y_0 = 0 $ and the innovations $ \varepsilon_t $ are a white noise sequence with constant variance. We can regress $ y_t $ on $ y_{t-1}$ and then use the standard t-statistic for testing the null hypothesis $ H_0\!\!: \phi = \phi_0$. The problem arises because we do not know a priori whether the process is stationary. If $ \vert\phi\vert < 1$, the $ AR(1)$ model is stationary and the least squares (LS) estimator of $ \phi$, $ \hat{\phi}_{LS}$, equals the maximum likelihood estimator under normality and is asymptotically normally distributed. Furthermore, the statistic given by:

$\displaystyle t_{\phi} = \frac{\hat{\phi}_{LS} - \phi_0}{s_{\hat{\phi}}}
$

where $ s_{\hat{\phi}}$ is the estimated standard deviation of $ \hat{\phi}_{LS}$, is asymptotically $ N(0, 1)$. In small samples, this statistic is distributed approximately as a Student's $ t$ with $ (T-1)$ degrees of freedom. Nevertheless, when $ \phi=1$ this result does not hold: it can be shown that the LS estimator of $ \phi$ is biased downwards and that the t-statistic under the unit-root null hypothesis does not have a Student's $ t$ distribution, even in the limit as the sample size becomes infinite.

The $ AR(1)$ model (4.23) can be rewritten by subtracting $ y_{t-1}$ from both sides of the equation:

$\displaystyle y_t - y_{t-1} = (\phi - 1) \, y_{t-1} + \varepsilon_t
$

$\displaystyle \Delta y_t = \rho \, y_{t-1} + \varepsilon_t$ (4.24)

where $ \rho = \phi - 1 $. The relevant unit-root null hypothesis is $ H_0\!\!: \rho = 0 $ and the alternative is one-sided, $ H_a\!\!: \rho < 0 $, since $ \rho > 0$ corresponds to explosive time series models. Dickey (1976) tabulated the percentiles of the t-statistic of $ \hat{\rho}$ under the unit root null hypothesis. The $ H_0$ of a unit root is rejected when the value of the statistic is lower than the critical value. This statistic, denoted by $ \tau $, is called the Dickey-Fuller statistic, and its critical values are published in Fuller (1976).

Up to now it has been shown how to test the null hypothesis of a random walk (one unit root) against the alternative of a zero-mean, stationary $ AR(1)$. For economic time series, it may be of interest to consider alternative hypotheses that include stationarity around a constant and/or a linear trend. This can be achieved by introducing these terms into model (4.24):

$\displaystyle \Delta y_t = \alpha + \rho \, y_{t-1} + \varepsilon_t$ (4.25)

$\displaystyle \Delta y_t = \alpha + \beta t + \rho \, y_{t-1} + \varepsilon_t$ (4.26)

The unit-root null hypothesis is simply $ H_0\!\!: \rho=0$ in both models (4.25)-(4.26). Dickey and Fuller tabulated the critical values of the corresponding statistics, denoted by $ \tau_{\mu}$ and $ \tau_{\tau} $ respectively. It should be noted that under the null hypothesis model (4.26) becomes a random walk with drift, a hypothesis that frequently arises in economic applications.

The tests presented so far have the disadvantage of assuming that the three models considered, (4.24), (4.25) and (4.26), cover all the possibilities under the null hypothesis. However, many $ I(1)$ series do not behave in that way. In particular, their Data Generating Process may include nuisance parameters, for example an autocorrelated error term. One way to allow for more flexible dynamic behavior in the series of interest is to assume that the series $ y_t $ follows an $ AR(p)$ model:

$\displaystyle y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + \ldots + \phi_p y_{t-p} +
\varepsilon_t
$

This assumption is not particularly restrictive, since every $ ARMA$ model has an $ AR$ representation provided its moving average polynomial is invertible. The $ AR(p)$ model can be rewritten as the following regression model:

$\displaystyle \Delta y_t = \rho y_{t-1} + \sum_{i=1}^{p-1} \gamma_i \Delta y_{t-i} + \varepsilon_t$ (4.27)

where $ \rho = \sum_{i=1}^{p} \phi_{i} - 1 $ and $ \gamma_i = -\sum_{j=i+1}^{p} \phi_{j}$. Since the autoregressive polynomial has a unit root if $ \sum_{i=1}^{p} \phi_{i} = 1 $, the presence of such a root is formally equivalent to the null hypothesis $ \rho = 0 $. In this case, the unit root test, known as the Augmented Dickey-Fuller test (Dickey and Fuller, 1979), is based on the LS estimation of the $ \rho $ coefficient and the corresponding t-statistic. The distribution of this statistic is the same as the distribution of $ \tau $, so we may use the same critical values. This model may include a constant and/or a linear trend:
$\displaystyle \Delta y_t = \alpha + \rho y_{t-1} + \sum_{i=1}^{p-1} \gamma_i \Delta y_{t-i} + \varepsilon_t$ (4.28)

$\displaystyle \Delta y_t = \alpha + \beta t + \rho y_{t-1} + \sum_{i=1}^{p-1} \gamma_i \Delta y_{t-i} + \varepsilon_t$ (4.29)

and the t-statistics for the unit root null hypothesis follow the same distribution as $ \tau_{\mu}$ and $ \tau_{\tau} $ respectively.

The most common values for $ d $ in economic and business time series are zero and one. That is why we have concentrated so far on testing the null hypothesis of one unit root against the alternative of stationarity (possibly in deviations from a mean or a linear trend). But a series may present more than one unit root. If we want to test, in general, the hypothesis that a series is $ I(d) $ against the alternative that it is $ I(d-1)$, Dickey and Pantula (1987) suggest a sequential procedure, sketched in code below. First, we test the null hypothesis of $ d $ unit roots against the alternative of $ (d-1)$ unit roots. If this $ H_0$ is rejected, the null hypothesis of $ (d-1)$ unit roots is tested against the alternative of $ (d-2)$ unit roots, and so on; finally, the null of one unit root is tested against the alternative of stationarity.
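A hypothetical sketch of this sequential procedure for at most two unit roots, in Python with statsmodels (the function name dickey_pantula and the 5% level are choices made for this example, not part of the original text):

# A minimal sketch, assuming Python with statsmodels installed;
# dickey_pantula is a hypothetical helper written for this example.
import numpy as np
from statsmodels.tsa.stattools import adfuller

def dickey_pantula(y, alpha=0.05):
    # Step 1: test H0 of two unit roots with the ADF test applied to
    # the first-differenced series; stop if it is not rejected.
    if adfuller(np.diff(y), regression='c')[1] >= alpha:
        return "I(2) not rejected"
    # Step 2: test H0 of one unit root on the series itself.
    if adfuller(y, regression='c')[1] >= alpha:
        return "I(1) not rejected"
    return "I(0): stationarity not rejected"

rng = np.random.default_rng(6)
print(dickey_pantula(np.cumsum(rng.standard_normal(500))))   # an I(1) series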

4.3.3.0.2 Example.

The XEGutsm07.xpl code computes the ADF statistic to test the unit root hypothesis for a simulated random walk series of size 1000. The value of $ \tau_{\mu}$ is -0.93178, which does not reject the null hypothesis at the 5% significance level. The output also provides the 1%, 5%, 10%, 90%, 95% and 99% critical values. It can be observed that the differences between the distributions of the conventional t-statistic and $ \tau_{\mu}$ are important: at the 0.05 significance level the critical $ \tau_{\mu}$ value is -2.86, while that of the normal approximation to Student's $ t$ is -1.96 for large samples.

 

XEGutsm07.xpl
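A similar experiment can be sketched in Python with statsmodels (this is not the XEGutsm07.xpl code itself; adfuller with regression='c' computes the $ \tau_{\mu}$ version of the test, and the sample size 1000 mimics the example):

# A minimal sketch, assuming Python with statsmodels installed.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(5)
y = np.cumsum(rng.standard_normal(1000))   # simulated random walk

stat, pvalue, usedlag, nobs, crit, icbest = adfuller(y, regression='c')
print("tau_mu:", round(stat, 3), " p-value:", round(pvalue, 3))
print(crit)   # 1%, 5%, 10% critical values; the unit root is not rejected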

4.3.3.0.3 Testing for Stationarity


If we want to check the stationarity of a time series or of a linear combination of time series, it is natural to test the null hypothesis of stationarity directly. Bearing in mind that the classical hypothesis testing methodology accepts the null hypothesis unless there is strong evidence against it, it is not surprising that a good number of empirical studies find that standard unit-root tests fail to reject the null hypothesis for many economic time series. Therefore, in trying to decide whether economic data are stationary or integrated, it is useful to perform tests of the null hypothesis of stationarity as well as tests of the unit-root null hypothesis.

Kwiatkowski, Phillips, Schmidt and Shin (1992) (KPSS) have developed a test for the null hypothesis of stationarity against the alternative of unit root. Let us consider the following Data Generating Process:

$\displaystyle y_t = \beta t + \alpha_t + u_t$ (4.30)

The term $ \alpha_t $ is a random walk:

$\displaystyle \alpha_t - \alpha_{t-1} = \zeta_t
$

where the initial value $ \alpha_0$ is treated as fixed and plays the role of an intercept, the error term $ \zeta_t \sim i.i.d.(0, \sigma^{2}_{\zeta})$, and $ u_t$ is assumed to be a stationary process independent of $ \zeta_t$. If $ \sigma^{2}_{\zeta} = 0 $, then $ \alpha_t $ is constant and the series $ y_t $ is trend stationary. Therefore, the stationarity hypothesis is simply $ H_0\!\!: \sigma^{2}_{\zeta} = 0 $.

Let $ \hat{u}_{t},\,\, t=1,2, \ldots, T $, be the LS residuals of the auxiliary regression:

$\displaystyle y_t = \alpha + \beta t + u_t
$

and let us define the partial sum of the residuals as:

$\displaystyle S_t = \sum_{i=1}^{t} \hat{u}_{i}
$

The test statistic developed by KPSS is based on the idea that for a trend stationary process the variance of the partial sum series $ S_t$ should be relatively small, while it should be large in the presence of a unit root. The test statistic for the null hypothesis of trend stationarity against a stochastic trend representation is:

$\displaystyle \eta = \frac{\sum_{t=1}^{T} S_t^2}{T^2 \hat{\sigma}^2}$ (4.31)

where $ \hat{\sigma}^2$ stands for a consistent estimate of the 'long-run' variance of the error term $ u_t$. KPSS derived the asymptotic distribution of this test statistic under the stronger assumptions that $ \zeta_t$ is normal and $ u_t \sim N.I.D. (0, \sigma^2_{u}) $, and tabulated the corresponding critical values. Since $ \eta$ only takes positive values, this is an upper-tail test: the null hypothesis of trend stationarity is rejected when $ \eta$ exceeds the critical value.

The distribution of this test statistic has been tabulated as well for the special case in which the slope parameter of model (4.30) is $ \beta=0$. In such a case, the process $ y_t $ is stationary around a level ($ \alpha_0$) rather than around a trend under the null hypothesis. Therefore, the residual $ \hat{u}_t$, is obtained from the auxiliary regression of $ y_t $ on an intercept only, that is $ \hat{u}_t = y_t -
\bar{y}$.
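The statistic (4.31) is simple enough to compute directly. The following Python sketch (not the chapter's code) computes a naive version of $ \eta$ for the level-stationary case, using the short-run instead of the 'long-run' variance, and compares it with the statsmodels implementation, which applies the long-run variance correction:

# A minimal sketch, assuming Python with statsmodels installed.
import numpy as np
from statsmodels.tsa.stattools import kpss

rng = np.random.default_rng(7)
y = rng.standard_normal(1000)         # a trivially stationary series

u = y - y.mean()                       # residuals from a constant-only regression
S = np.cumsum(u)                       # partial sums S_t
T = y.size
eta = np.sum(S**2) / (T**2 * u.var())  # naive eta: no long-run variance correction

stat, pvalue, lags, crit = kpss(y, regression='c')
print("by hand:", round(eta, 3), " statsmodels:", round(stat, 3))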

4.3.3.0.4 Example.

The XEGutsm08.xpl code tests the stationarity hypothesis for a simulated $ AR(1)$ series with $ \phi=0.4$ and $ T=1000$. The results do not reject the null hypothesis of stationarity.
[1,]    Order   Test   Statistic          Crit. Value
[2,]                                   0.1   0.05   0.01
[3,] _______________________________________________________
[4,]        2  const       0.105     0.347  0.463  0.739
[5,]        2  trend       0.103     0.119  0.146  0.216
XEGutsm08.xpl