12.6 Goodness-of-Fit Statistic
To extend the empirical likelihood ratio statistic to a global measure of Goodness-of-Fit, we choose $k_n$ equally spaced lattice points $\{t_1, \ldots, t_{k_n}\}$ in $S = [0, 1]$, where $t_1 = h$ and $t_{j+1} = t_j + 2h$ for $j = 1, \ldots, k_n - 1$. We let $k_n \to \infty$ and $h \to 0$ as $n \to \infty$. This essentially divides $S$ into $k_n$ small bins of size $2h$.
A simple choice is to let $k_n = [(2h)^{-1}]$, where $[a]$ denotes the largest integer less than $a$. This choice, as justified later, ensures asymptotic independence among the $\ell\{\tilde{m}_{\hat{\theta}}(t_j)\}$ at different $t_j$'s.
Bins of different sizes can be adopted to suit situations where there are areas of low design density. This corresponds to the use of different bandwidth values in adaptive kernel smoothing. The main results of this chapter are not affected by unequal bins. For ease of presentation, we treat bins of equal size.
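As an illustration, the lattice construction above can be sketched as follows. This is a minimal sketch: the bandwidth value is an arbitrary example, NumPy is assumed, and $[a]$ is implemented with `floor`.

```python
import numpy as np

def lattice_points(h):
    """Equally spaced lattice points t_j covering S = [0, 1] with
    bins of size 2h, using the simple choice k_n = [(2h)^{-1}]."""
    k_n = int(np.floor(1.0 / (2.0 * h)))   # number of bins
    j = np.arange(1, k_n + 1)
    t = (2 * j - 1) * h                    # bin centres: t_1 = h, t_{j+1} = t_j + 2h
    return k_n, t

k_n, t = lattice_points(h=0.04)            # h = 0.04 is an illustrative value
print(k_n)          # 12 bins
print(t[:3])        # first lattice points: h, 3h, 5h
```

Note that the lattice points are the bin centres, so neighbouring points are exactly $2h$ apart and every point lies in the interior of $S$.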
As $\ell\{\tilde{m}_{\hat{\theta}}(t)\}$ measures the Goodness-of-Fit at a fixed $t$, an empirical likelihood based statistic that measures the global Goodness-of-Fit is defined as
$$\ell_n(\tilde{m}_{\hat{\theta}}) \stackrel{\mathrm{def}}{=} \sum_{j=1}^{k_n} \ell\{\tilde{m}_{\hat{\theta}}(t_j)\}.$$
The following theorem was proven by Chen et al. (2001).

THEOREM 12.2
Under the assumptions (i) - (vi),
$$\ell_n(\tilde{m}_{\hat{\theta}}) = \sum_{j=1}^{k_n} (nh)\,\{\hat{m}(t_j) - \tilde{m}_{\hat{\theta}}(t_j)\}^2\, V^{-1}(t_j)\, \{1 + o_p(1)\}, \tag{12.20}$$
where $V(t) = \sigma^2(t) \int K^2(u)\,du \,/\, f(t)$.
Härdle and Mammen (1993) proposed the $L_2$ distance
$$T_n = \int \{\hat{m}(x) - \tilde{m}_{\hat{\theta}}(x)\}^2\, \pi(x)\, dx$$
as a measure of Goodness-of-Fit, where $\pi$ is a given weight function. Theorem 12.2 indicates that the leading term of $\ell_n(\tilde{m}_{\hat{\theta}})$ is a discretized version of $T_n$ with $\pi = V^{-1}$.
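Heuristically, the connection can be seen from a Riemann-sum argument (a sketch, using the bin width $2h$ of the lattice construction and the notation of Theorem 12.2):

$$\sum_{j=1}^{k_n} (nh)\,\{\hat{m}(t_j) - \tilde{m}_{\hat{\theta}}(t_j)\}^2\, V^{-1}(t_j) \;\approx\; \frac{nh}{2h} \int \{\hat{m}(x) - \tilde{m}_{\hat{\theta}}(x)\}^2\, V^{-1}(x)\, dx \;=\; \frac{n}{2}\, T_n \quad \text{with } \pi = V^{-1},$$

so the two statistics share the same leading squared-distance structure, up to scaling.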
The differences between the two test statistics are that (a) the empirical likelihood test statistic studentizes automatically through its internal optimization, so that there is no need to explicitly estimate $V(t)$; and (b) the empirical likelihood statistic is able to capture other features, such as skewness and kurtosis, exhibited in the data without using bootstrap resampling, which involves more technical details when the data are dependent.
If we choose $k_n$ as prescribed, then the remainder term in (12.20) becomes negligible relative to the leading term.
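To make the structure of the statistic concrete, the following sketch computes the leading term of $\ell_n(\tilde{m}_{\hat{\theta}})$ from Theorem 12.2, i.e. the studentized sum of squared differences over the bin centres. Everything here is an illustrative assumption: the quartic kernel, the Nadaraya-Watson smoother, the linear null model, the sample size and bandwidth, and the global plug-in estimate of $\sigma^2$. It is not the full empirical likelihood computation, only its leading-order approximation.

```python
import numpy as np

rng = np.random.default_rng(0)

def K(u):
    """Quartic (biweight) kernel, supported on [-1, 1]."""
    return np.where(np.abs(u) <= 1, 15 / 16 * (1 - u**2) ** 2, 0.0)

n, h = 400, 0.08
X = rng.uniform(0, 1, n)
m = lambda x, a, b: a + b * x                 # parametric null model (assumed)
Y = m(X, 1.0, 2.0) + 0.2 * rng.standard_normal(n)

# lattice of bin centres, k_n = [(2h)^{-1}]
k_n = int(np.floor(1 / (2 * h)))
t = (2 * np.arange(1, k_n + 1) - 1) * h

# least-squares fit of the null model
b, a = np.polyfit(X, Y, 1)

def nw(targets, Z):
    """Nadaraya-Watson smooth of responses Z at the target points."""
    W = K((targets[:, None] - X[None, :]) / h)
    return (W * Z).sum(axis=1) / W.sum(axis=1)

m_hat = nw(t, Y)                    # kernel estimator
m_tilde = nw(t, m(X, a, b))         # smoothed parametric fit (bias-matched)

# plug-in V(t) = sigma^2 * int K^2 / f(t); sigma^2 is estimated globally
# from the parametric residuals -- a simplification
sigma2 = np.mean((Y - m(X, a, b)) ** 2)
RK = 5.0 / 7.0                      # int K^2(u) du for the quartic kernel
f_hat = K((t[:, None] - X[None, :]) / h).mean(axis=1) / h
V = sigma2 * RK / f_hat

ell_n = np.sum(n * h * (m_hat - m_tilde) ** 2 / V)
print(round(ell_n, 2))              # roughly of size k_n under the null
```

Because each summand is an approximately squared standard normal and the bins are asymptotically independent, the statistic behaves like a sum of $k_n$ quasi-independent $\chi^2_1$ variables under the null.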
We will now discuss the asymptotic distribution of the test statistic $\ell_n(\tilde{m}_{\hat{\theta}})$. Theorem 12.3 was proven by Chen et al. (2001).

THEOREM 12.3
Suppose assumptions (i) - (vi) hold. Then
$$(2h)\,\ell_n(\tilde{m}_{\hat{\theta}}) = \int_0^1 W^2(t)\,dt + o_p(1),$$
where $W(\cdot)$ is a Gaussian process on $[0,1]$ with mean $\mu(t) = \mathrm{E}\{W(t)\}$ and covariance $\Gamma(t,s) = \mathrm{cov}\{W(t), W(s)\}$, where
$$\Gamma(t,s) = \frac{h^{-1} \int K\{(t-x)/h\}\, K\{(s-x)/h\}\, \sigma^2(x)\, f(x)\, dx}{f(t)\, f(s)\, \{V(t)\, V(s)\}^{1/2}}\; \{1 + o(1)\}. \tag{12.21}$$
As $K$ is a compact kernel supported on $[-1, 1]$, when both $t$ and $s$ are in $[2h, 1-2h]$ (the interior part of $[0,1]$), we get from (12.21)
$$\Gamma(t,s) = \frac{K^{(2)}\{(t-s)/h\}}{K^{(2)}(0)}\, \frac{\sigma(t)}{\sigma(s)} \left\{\frac{f(t)}{f(s)}\right\}^{1/2} \{1 + o(1)\}, \tag{12.22}$$
where $K^{(2)}(u) = \int K(v)\, K(u-v)\, dv$ is the convolution of $K$.
The compactness of $K$ also means that $K^{(2)}(u) = 0$ if $|u| \ge 2$, which implies $\Gamma(t,s) = 0$ if $|t-s| \ge 2h$.
Hence $W(t)$ and $W(s)$ are independent if $|t-s| \ge 2h$. As $\sigma(t)/\sigma(s) \to 1$ and $f(t)/f(s) \to 1$ when $|t-s| \le 2h$ and $h \to 0$, we get
$$\Gamma(t,s) = \frac{K^{(2)}\{(t-s)/h\}}{K^{(2)}(0)} + o(1). \tag{12.23}$$
So, the leading order of the covariance function is free of $\sigma^2$ and $f$, i.e. $\Gamma(t,s)$ is completely known.
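The support and normalization claims about $K^{(2)}$ can be checked numerically. The sketch below (quartic kernel and grid resolution are assumptions) computes the convolution $K^{(2)}(u) = \int K(v)\,K(u-v)\,dv$ on a grid and verifies that $K^{(2)}(0) = \int K^2(u)\,du$ and that $K^{(2)}$ vanishes outside $[-2, 2]$:

```python
import numpy as np

def K(u):
    """Quartic kernel, compactly supported on [-1, 1]."""
    return np.where(np.abs(u) <= 1, 15 / 16 * (1 - u**2) ** 2, 0.0)

GRID = np.linspace(-1.0, 1.0, 2001)          # support of K
DV = GRID[1] - GRID[0]

def K2(u):
    """Convolution K^{(2)}(u) = int K(v) K(u - v) dv, by Riemann sum."""
    return float(np.sum(K(GRID) * K(u - GRID)) * DV)

# K^{(2)}(0) equals int K^2(u) du, which is 5/7 for the quartic kernel
print(round(K2(0.0), 4))        # ~ 0.7143
# compact support: K^{(2)}(u) = 0 for |u| >= 2, hence Gamma(t,s) = 0 for |t-s| >= 2h
print(K2(2.0), K2(2.5))         # both 0.0
# the implied correlation Gamma(t,s) = K2((t-s)/h) / K2(0) decays with |t - s|
h = 0.05
for d in (0.0, 0.05, 0.1):
    print(round(K2(d / h) / K2(0.0), 3))
```

The last loop traces the covariance (12.23) as a function of the distance $|t-s|$: it equals 1 at $d = 0$ and hits exactly 0 once $d \ge 2h$.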
Let $\mu(t) = \mathrm{E}\{W(t)\}$ and
$$N(t) = W(t) - \mu(t). \tag{12.24}$$
Then $N(t)$ is a normal process with zero mean and covariance $\Gamma(t,s)$.
The boundedness of $\mu(t)$ implies $\int_0^1 \mu^2(t)\,dt$ being bounded, and hence
$$\int_0^1 W^2(t)\,dt = \int_0^1 N^2(t)\,dt + 2\int_0^1 N(t)\,\mu(t)\,dt + \int_0^1 \mu^2(t)\,dt.$$
We will now study the expectation and variance of $\int_0^1 N^2(t)\,dt$. From some basic results on stochastic integrals, Lemma 12.2 and (12.24), it follows that
$$\mathrm{E}\left\{\int_0^1 N^2(t)\,dt\right\} = \int_0^1 \Gamma(t,t)\,dt = 1.$$
From (12.23) and the fact that the size of the region $\{(t,s) \in [0,1]^2 : |t-s| \le 2h\}$ is $O(h)$, we have
$$\mathrm{Var}\left\{\int_0^1 N^2(t)\,dt\right\} = 2\int_0^1\!\!\int_0^1 \Gamma^2(t,s)\,dt\,ds = O(h).$$
Therefore,
$$\int_0^1 N^2(t)\,dt = 1 + O_p(h^{1/2}).$$
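The variance calculation above rests on a standard Gaussian identity (Isserlis' theorem); as a sketch, for a zero-mean normal process $N$ with covariance $\Gamma$,

$$\mathrm{cov}\{N^2(t), N^2(s)\} = 2\,\Gamma^2(t,s), \qquad\text{hence}\qquad \mathrm{Var}\left\{\int_0^1 N^2(t)\,dt\right\} = 2\int_0^1\!\!\int_0^1 \Gamma^2(t,s)\,dt\,ds,$$

and since $\Gamma(t,s)$ is bounded and vanishes for $|t-s| \ge 2h$, the double integral is of order $O(h)$.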
It is obvious that
$$\mathrm{E}\left\{\int_0^1 W^2(t)\,dt\right\} = 1 + \int_0^1 \mu^2(t)\,dt, \tag{12.28}$$
where $\mu(t) = \mathrm{E}\{W(t)\}$, and
$$\mathrm{Var}\left\{\int_0^1 W^2(t)\,dt\right\} = \mathrm{Var}\left\{\int_0^1 N^2(t)\,dt\right\} + 4\int_0^1\!\!\int_0^1 \Gamma(t,s)\,\mu(t)\,\mu(s)\,dt\,ds.$$
As $\sigma^2$ and $f$ are bounded in $S$, there exists a constant $C_1$ such that $\sup_{t \in [0,1]} |\mu(t)| \le C_1$. Furthermore, we know from the discussion above that $\Gamma(t,s) = 0$ if $|t-s| \ge 2h$ and that $|\Gamma(t,s)|$ is bounded, with other constants $C_2$ and $C_3$, and thus there exists a constant $C_4$ such that
$$\left|\int_0^1\!\!\int_0^1 \Gamma(t,s)\,\mu(t)\,\mu(s)\,dt\,ds\right| \le C_4\, h.$$
As $\mu(t)$ is non-random, we have
$$\mathrm{Var}\left\{\int_0^1 W^2(t)\,dt\right\} = O(h). \tag{12.29}$$
(12.28) and (12.29), together with Theorem 12.3, give the asymptotic expectation and variance of the test statistic $\ell_n(\tilde{m}_{\hat{\theta}})$.
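The moment calculations for $\int_0^1 N^2(t)\,dt$ can be illustrated by simulation. A zero-mean Gaussian process with covariance $K^{(2)}\{(t-s)/h\}/K^{(2)}(0)$ can be generated as a normalized kernel smooth of white noise; the kernel choice (quartic), bandwidth, grid sizes and replication count below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def K(u):
    """Quartic kernel, compactly supported on [-1, 1]."""
    return np.where(np.abs(u) <= 1, 15 / 16 * (1 - u**2) ** 2, 0.0)

h = 0.05
t = np.linspace(0.0, 1.0, 201)        # grid on [0, 1] for the t-integral
x = np.arange(-h, 1 + h, h / 20)      # extended grid for the white noise
dx = x[1] - x[0]
RK = 5.0 / 7.0                        # int K^2(u) du = K^{(2)}(0) for quartic K

# smoothing matrix: N(t) ~ h^{-1/2} int K((t - x)/h) dB(x), normalized so
# that Var{N(t)} = 1 and cov{N(t), N(s)} = K^{(2)}((t-s)/h) / K^{(2)}(0)
S = K((t[:, None] - x[None, :]) / h) * np.sqrt(dx / h) / np.sqrt(RK)

vals = []
for _ in range(400):
    Z = rng.standard_normal(x.size)   # white-noise increments
    N = S @ Z                         # one realization of the process
    vals.append(np.mean(N**2))        # approximates int_0^1 N^2(t) dt

print(round(np.mean(vals), 2))        # close to 1, matching E{int N^2} = 1
print(round(np.var(vals), 3))         # small, of order h
```

The simulated mean of $\int_0^1 N^2(t)\,dt$ is close to 1 and its variance shrinks with $h$, in line with the $1 + O_p(h^{1/2})$ expansion above.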