# 9.2 Location and Scale in $\mathbb{R}$

## 9.2.1 Location, Scale and Equivariance

Changes in measurement units and baseline correspond to affine transformations on $\mathbb{R}$. We write

$$A(x) = ax + b \quad \text{with } a \ne 0,\ b \in \mathbb{R}. \qquad (9.14)$$

For any probability measure $P$ and for any affine $A$ we define

$$P^A(B) = P(A^{-1}(B)), \quad B \in \mathcal{B}, \qquad (9.15)$$

$\mathcal{B}$ denoting all Borel sets on $\mathbb{R}$. Consider a subset $\mathcal{P}'$ of $\mathcal{P}$ which is closed under affine transformations, that is

$$P^A \in \mathcal{P}' \quad \text{for all } P \in \mathcal{P}' \text{ and all affine } A. \qquad (9.16)$$

A functional $T_l : \mathcal{P}' \to \mathbb{R}$ will be called a location functional on $\mathcal{P}'$ if

$$T_l(P^A) = A(T_l(P)) = a\,T_l(P) + b \quad \text{for all } P \in \mathcal{P}' \text{ and all affine } A. \qquad (9.17)$$

Similarly we define a functional $T_s : \mathcal{P}' \to \mathbb{R}_+$ to be a scale functional if

$$T_s(P^A) = |a|\,T_s(P) \quad \text{for all } P \in \mathcal{P}' \text{ and all affine } A. \qquad (9.18)$$

## 9.2.2 Existence and Uniqueness

The fact that the mean of (9.7) cannot be defined for all distributions is an indication of its lack of robustness. More precisely, the functional is not locally bounded (9.11) in the metric $d_{ko}$ at any distribution $P$. The median MED$(P)$ can be defined at any distribution $P$ as the mid-point of the interval of $m$-values for which

$$P((-\infty, m]) \ge \frac12 \quad \text{and} \quad P([m, \infty)) \ge \frac12. \qquad (9.19)$$

Similar considerations apply to scale functionals. The standard deviation requires the existence of the second moment of a distribution. The median absolute deviation MAD (see [2]) of a distribution can be well defined at all distributions as follows. Given $P$ we define $P'$ by

$$P'(B) = P(\{\, x : |x - \mathrm{MED}(P)| \in B \,\}),$$

and set

$$\mathrm{MAD}(P) = \mathrm{MED}(P'). \qquad (9.20)$$
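As an illustration, the empirical versions of MED and MAD of (9.19) and (9.20) for a finite sample can be computed in a few lines. The following is a minimal pure-Python sketch; the sample values are invented for illustration.

```python
def med(xs):
    """Mid-point of the interval of m-values carrying mass >= 1/2 on each side."""
    ys = sorted(xs)
    n = len(ys)
    if n % 2 == 1:
        return ys[n // 2]
    return 0.5 * (ys[n // 2 - 1] + ys[n // 2])

def mad(xs):
    """Median of the absolute deviations from the median, as in (9.20)."""
    m = med(xs)
    return med([abs(x - m) for x in xs])

sample = [1.0, 2.0, 2.5, 3.0, 100.0]   # one gross outlier
location = med(sample)   # 2.5: unaffected by the outlier
scale = mad(sample)      # 0.5: likewise unaffected
```

Note that both values are untouched by moving the outlier arbitrarily far out, in contrast to the mean and standard deviation.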

## 9.2.3 M-estimators

An important family of statistical functionals is the family of M-functionals introduced by [56]. Let $\psi$ and $\chi$ be functions defined on $\mathbb{R}$ with values in the interval $[-1, 1]$. For a given probability distribution $P$ we consider the following two equations for $m$ and $s$:

$$\int \psi\Big(\frac{x - m}{s}\Big)\, dP(x) = 0, \qquad (9.21)$$

$$\int \chi\Big(\frac{x - m}{s}\Big)\, dP(x) = 0. \qquad (9.22)$$

If the solution $(m, s)$ exists and is uniquely defined we denote it by $(T_l(P), T_s(P)) = (m(P), s(P))$.

In order to guarantee existence and uniqueness, conditions have to be placed on the functions $\psi$ and $\chi$ as well as on the probability measure $P$. The ones we use are due to [99] (see also [58]) and are as follows:

($\psi$1) $\psi(-x) = -\psi(x)$ for all $x$.
($\psi$2) $\psi$ is strictly increasing.
($\psi$3) $\lim_{x \to \infty} \psi(x) = 1$.
($\psi$4) $\psi$ is continuously differentiable with derivative $\psi'$.
($\chi$1) $\chi(-x) = \chi(x)$ for all $x$.
($\chi$2) $\chi : \mathbb{R}_+ \to [-1, 1]$ is strictly increasing.
($\chi$3) $\chi(0) = -1$.
($\chi$4) $\lim_{x \to \infty} \chi(x) = 1$.
($\chi$5) $\chi$ is continuously differentiable with derivative $\chi'$.
($\psi\chi$1) $\chi'/\psi' : \mathbb{R}_+ \to \mathbb{R}_+$ is strictly increasing.

If these conditions hold and $P$ satisfies

$$\Delta(P) := \max_x P(\{x\}) < \frac12, \qquad (9.23)$$

then (9.21) and (9.22) have precisely one solution. If we set

$$\mathcal{P}' = \{\, P : \Delta(P) < 1/2 \,\},$$

then $\mathcal{P}'$ satisfies (9.16) and $T_l$ and $T_s$ are a location and a scale functional respectively on $\mathcal{P}'$. Two functions which satisfy the above conditions are

$$\psi(x) = \psi(x; c) = \frac{\exp(x/c) - 1}{\exp(x/c) + 1}, \qquad (9.24)$$

$$\chi(x) = \frac{x^4 - 1}{x^4 + 1}, \qquad (9.25)$$

where $c$ is a tuning parameter. The restriction on $c$ is to guarantee the last of the conditions above. Algorithms for calculating the solution of (9.21) and (9.22) are given in the Fortran library ROBETH ([67]), which also contains many other algorithms related to robust statistics.
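To illustrate how (9.21) and (9.22) can be solved numerically, the following sketch uses a damped fixed-point iteration started at the median/MAD pair. The `psi` and `chi` used here (a tanh and a ratio of fourth powers) are stand-ins with the right qualitative shape, not necessarily the functions (9.24) and (9.25), and the iteration is an illustration, not one of the ROBETH algorithms.

```python
import math
from statistics import median

def psi(x):
    # Odd, strictly increasing, limit 1 at infinity: a stand-in psi.
    return math.tanh(x)

def chi(x):
    # Even, chi(0) = -1, limit 1 at infinity: a stand-in chi.
    return (x ** 4 - 1.0) / (x ** 4 + 1.0)

def m_functional(xs, tol=1e-9, max_iter=1000):
    # Fixed-point iteration for the empirical versions of (9.21)-(9.22),
    # started at the highly robust MED/MAD pair.
    n = len(xs)
    m = median(xs)
    s = median([abs(x - m) for x in xs]) or 1.0
    for _ in range(max_iter):
        r = [(x - m) / s for x in xs]
        dm = s * sum(psi(t) for t in r) / n          # additive location step
        ds = math.exp(sum(chi(t) for t in r) / n)    # multiplicative scale step
        m, s = m + dm, s * ds
        if abs(dm) < tol * s and abs(math.log(ds)) < tol:
            break
    return m, s

m_hat, s_hat = m_functional([-2.0, -1.0, 0.0, 1.0, 2.0])
```

At a fixed point both mean values vanish, which is exactly the sample version of (9.21) and (9.22).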

The main disadvantage of M-functionals defined by (9.21) and (9.22) is the last of the conditions above, which links the location and scale parts in a manner which may not be desirable. In particular there is a conflict between the breakdown behaviour and the efficiency of the M-functional (see below). There are several ways of overcoming this. One is to take the scale functional $T_s(P)$ as given and then to calculate a second location functional by solving

$$\int \tilde\psi\Big(\frac{x - m}{T_s(P)}\Big)\, dP(x) = 0. \qquad (9.26)$$

If now $\tilde\psi$ satisfies the conditions placed on $\psi$ above, then this new functional will exist only under the assumption that the scale functional exists and is non-zero. Furthermore the functional can be made as efficient as desired by a suitable choice of $\tilde\psi$, removing the conflict between breakdown and efficiency. One possible choice for $T_s(P)$ is the MAD of (9.20), which is simple, highly robust and which performed well in the Princeton robustness study ([2]).

In some situations there is an interest in downweighting outlying observations completely rather than in just bounding their effect. A downweighting to zero is not possible for a $\psi$-function which satisfies the conditions above, but it can be achieved by using so-called redescending $\psi$-functions such as Tukey's biweight

$$\psi(x) = x(1 - x^2)^2\, \mathbf{1}_{[-1, 1]}(x). \qquad (9.27)$$

In general there will be many solutions of (9.26) for such $\psi$-functions, and to obtain a well defined functional some choice must be made. One possibility is to take the solution closest to the median, another is to take

$$T_l(P) = \operatorname*{argmin}_m \int \rho\Big(\frac{x - m}{T_s(P)}\Big)\, dP(x), \qquad (9.28)$$

where $\rho' = \psi$. Both solutions pose algorithmic problems. The effect of downweighting outlying observations to zero can be attained by using a so-called one-step functional $T_l^1$ defined by

$$T_l^1(P) = T_l(P) + T_s(P)\, \frac{\int \tilde\psi\big((x - T_l(P))/T_s(P)\big)\, dP(x)}{\int \tilde\psi'\big((x - T_l(P))/T_s(P)\big)\, dP(x)}, \qquad (9.29)$$

where $T_l$ and $T_s$ are as above and $\tilde\psi$ is redescending. We refer to [54] and [92] for more details.
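A one-step functional in the spirit of (9.29) can be sketched as follows: start from the median and the (normal-consistent) MAD and take a single Newton-type step with Tukey's biweight (9.27). The tuning constant 4.685 is the conventional choice for the biweight, not a value prescribed by the text.

```python
from statistics import median

C = 4.685  # conventional biweight tuning constant (an assumption here)

def biweight_psi(x):
    # Tukey's biweight: redescends to zero outside [-C, C].
    if abs(x) >= C:
        return 0.0
    u = 1.0 - (x / C) ** 2
    return x * u * u

def biweight_psi_prime(x):
    # Derivative of the biweight: (1 - u)(1 - 5u) with u = (x/C)^2.
    if abs(x) >= C:
        return 0.0
    u = (x / C) ** 2
    return (1.0 - u) * (1.0 - 5.0 * u)

def one_step_location(xs):
    # One Newton step (9.29) from the MED/MAD starting point.
    m0 = median(xs)
    s0 = median([abs(x - m0) for x in xs]) / 0.6745  # MAD, normal-consistent
    r = [(x - m0) / s0 for x in xs]
    num = sum(biweight_psi(t) for t in r)
    den = sum(biweight_psi_prime(t) for t in r)
    return m0 + s0 * num / den

data = [9.8, 10.1, 10.0, 9.9, 10.2, 55.0]   # invented data, one gross outlier
est = one_step_location(data)
```

The observation 55.0 receives weight exactly zero, so the estimate is determined by the main body of the data alone.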

So far all scale functionals have been defined in terms of a deviation from a location functional. This link can be broken as follows. Consider the functional $T_s(P)$ defined to be the solution $s$ of

$$\int\!\!\int \chi\Big(\frac{x - y}{s}\Big)\, dP(x)\, dP(y) = 0, \qquad (9.30)$$

where $\chi$ satisfies the conditions above. It may be shown that the solution is unique with $s > 0$ if

$$\sum_i p_i^2 < \frac12, \qquad (9.31)$$

where the $p_i$ denote the countably many atoms of $P$. The main disadvantage of this method is the computational complexity of (9.30), requiring as it does $O(n^2)$ operations for a sample of size $n$. If $\chi$ is of the indicator form

$$\chi(x) = \mathbf{1}_{\{|x| > 1\}} - \mathbf{1}_{\{|x| \le 1\}},$$

then $T_s(P_n)$ reduces to a quantile of the pairwise distances $|x_i - x_j|$, and much more efficient algorithms exist which allow the functional to be calculated in $O(n \log n)$ operations (see [22], Rousseeuw and Croux (1992, 1993)).
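The reduction to a quantile of the pairwise distances can be made concrete with the naive $O(n^2)$ version. The choice of the first quartile below mirrors the $Q_n$ proposal of Rousseeuw and Croux; the consistency factor needed to calibrate the estimator at the normal distribution is omitted for simplicity.

```python
def pairwise_quantile_scale(xs, q=0.25):
    # Naive O(n^2) location-free scale: a quantile of all pairwise
    # distances |x_i - x_j|, i < j. The fast O(n log n) algorithms cited
    # in the text avoid forming all pairs explicitly.
    diffs = sorted(abs(xs[i] - xs[j])
                   for i in range(len(xs))
                   for j in range(i + 1, len(xs)))
    k = max(0, min(len(diffs) - 1, int(q * len(diffs))))
    return diffs[k]

data = [1.0, 3.0, 4.0, 6.0, 100.0]   # invented data, one gross outlier
s_robust = pairwise_quantile_scale(data)   # insensitive to the outlier
```

Note that no location estimate enters at any point, which is exactly the feature that distinguishes (9.30) from the MAD.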

Although we have defined M-functionals as a solution of (9.21) and (9.22), there are sometimes advantages in defining them as the solution of a minimization problem. Consider the Cauchy distribution with density

$$f(x; \mu, \sigma) = \frac{1}{\pi}\,\frac{\sigma}{\sigma^2 + (x - \mu)^2}. \qquad (9.32)$$

We now define $(T_l(P), T_s(P))$ by

$$(T_l(P), T_s(P)) = \operatorname*{argmin}_{(\mu, \sigma)} \Big( -\int \log f(x; \mu, \sigma)\, dP(x) \Big). \qquad (9.33)$$

This is simply the standard maximum likelihood estimate for a Cauchy distribution, but there is no suggestion here that the data are so distributed. If $\Delta(P) < 1/2$ it can be shown that the solution exists and is unique. Moreover there exists a simple convergent algorithm for calculating the solution for a data sample. We refer to [61] for this and for the multidimensional case to be studied below. By differentiating the right-hand side of (9.33) it is seen that $T_l(P)$ may be viewed as an M-functional with a redescending $\psi$-function.
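A crude version of such an algorithm is the following reweighting iteration for the Cauchy likelihood: each observation gets weight $1/(s^2 + (x_i - m)^2)$, after which both parameters have closed-form updates whose fixed point solves the Cauchy score equations. This is a plausible illustrative scheme, not the specific algorithm of the reference.

```python
def cauchy_location_scale(xs, tol=1e-10, max_iter=2000):
    # Alternating reweighting for the Cauchy likelihood (9.32)-(9.33).
    # Fixed point: sum w_i (x_i - m) = 0 and s^2 * sum w_i = n / 2,
    # which are exactly the Cauchy maximum-likelihood score equations.
    n = len(xs)
    m = sorted(xs)[n // 2]   # start near the median
    s = 1.0
    for _ in range(max_iter):
        w = [1.0 / (s * s + (x - m) ** 2) for x in xs]
        sw = sum(w)
        m_new = sum(wi * xi for wi, xi in zip(w, xs)) / sw
        s_new = (0.5 * n / sw) ** 0.5
        done = abs(m_new - m) < tol and abs(s_new - s) < tol
        m, s = m_new, s_new
        if done:
            break
    return m, s

m_hat, s_hat = cauchy_location_scale([-2.0, -1.0, 0.0, 1.0, 2.0, 1000.0])
```

The far outlier at 1000 receives weight of order $10^{-6}$ and has essentially no effect on the result, illustrating the redescending character of the Cauchy score.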

Another class of functionals defined by a minimization problem is the class of S-functionals. Given a function $\rho : \mathbb{R} \to [0, 1]$ which is symmetric, continuous on the right and non-increasing on $\mathbb{R}_+$ with $\rho(0) = 1$ and $\lim_{x \to \infty} \rho(x) = 0$, we define $(T_l(P), T_s(P))$ by

$$(T_l(P), T_s(P)) = \operatorname*{argmin}_{(m, s)} \Big\{\, s : \int \rho\Big(\frac{x - m}{s}\Big)\, dP(x) \ge \frac12 \,\Big\}. \qquad (9.34)$$

A special case is a minor variation of the shortest-half functional of (9.8), which is obtained by taking $\rho$ to be the indicator function of the interval $[-1, 1)$. Although the existence of solutions of (9.34) is guaranteed if $\Delta(P) < 1/2$, the problem of uniqueness is not trivial and requires the existence of a density subject to certain conditions. If $\rho$ is smooth then by differentiation it is seen that $T_l(P)$ may be regarded as an M-functional with a redescending $\psi$-function proportional to $\rho'$. The minimization problem (9.34) acts as a choice function. We refer to [23].

## 9.2.4 Bias and Breakdown

Given a location functional $T_l$ the bias is defined by

$$b(T_l, P, \varepsilon, d) = \sup\{\, |T_l(Q) - T_l(P)| : d(P, Q) < \varepsilon \,\}, \qquad (9.35)$$

where by convention $|T_l(Q)| = \infty$ if $T_l$ is not defined at $Q$. For a scale functional $T_s$ we set

$$b(T_s, P, \varepsilon, d) = \sup\{\, |\log(T_s(Q)/T_s(P))| : d(P, Q) < \varepsilon \,\}, \qquad (9.36)$$

where again by convention $|\log T_s(Q)| = \infty$ if $T_s$ is not defined at $Q$. A popular although weaker form of bias function based on the so-called gross error neighbourhood is given by

$$b(T_l, P, \varepsilon, GE) = \sup\{\, |T_l(Q) - T_l(P)| : Q = (1 - \varepsilon)P + \varepsilon H,\ H \in \mathcal{P} \,\}, \qquad (9.37)$$

with a corresponding definition for scale functionals. We have

$$b(T_l, P, \varepsilon, GE) \le b(T_l, P, \varepsilon, d_{ko}). \qquad (9.38)$$

We refer to [58] for more details.

The breakdown point of $T_l$ at $P$ with respect to $d$ is defined by

$$\varepsilon^*(T_l, P, d) = \inf\{\, \varepsilon > 0 : b(T_l, P, \varepsilon, d) = \infty \,\}, \qquad (9.39)$$

with the corresponding definitions for scale functionals and for the gross error neighbourhood. Corresponding to (9.38) we have

$$\varepsilon^*(T_l, P, GE) \ge \varepsilon^*(T_l, P, d_{ko}). \qquad (9.40)$$

If a functional $T_l$ has a positive breakdown point at a distribution $P$ then it exhibits a certain degree of stability in a neighbourhood of $P$, as may be seen as follows. Consider a sample $x_n = (x_1, \dots, x_n)$ and add to it $k$ further observations $y_1, \dots, y_k$. If $P_n$ and $P_{n+k}$ denote the empirical measures based on $x_n$ and on the augmented sample respectively, then $d_{ko}(P_n, P_{n+k}) \le k/(n + k)$. In particular, if $k/(n + k) < \varepsilon^*(T_l, P_n, d_{ko})$ then it follows that $T_l(P_{n+k})$ remains bounded whatever the added observations $y_1, \dots, y_k$. This finite sample concept of breakdown was introduced by [33]. Another version replaces observations by other values instead of adding observations and is as follows. Let $x_n^k$ denote a sample differing from $x_n$ in at most $k$ readings. We denote the corresponding empirical distributions by $P_n^k$ and define

$$\varepsilon^*(T_l, x_n, \mathrm{fsbp}) = \frac{1}{n} \min\Big\{\, k : \sup_{x_n^k} |T_l(P_n^k)| = \infty \,\Big\}, \qquad (9.41)$$

where $x_n^k$ ranges over all samples differing from $x_n$ in at most $k$ readings. This version of the finite sample breakdown point is called the replacement version, as $k$ of the original observations can be replaced by arbitrary values. The two breakdown points are related (see Zuo, 2001). There are corresponding versions for scale functionals.
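The replacement version (9.41) can be illustrated directly: replace $k$ of the $n$ observations by an arbitrarily large value and observe when the functional leaves every bound. The sample below is invented for illustration.

```python
from statistics import mean, median

def corrupt(xs, k, value=1e12):
    # Replace the first k observations by an arbitrarily large value.
    return [value] * k + xs[k:]

clean = [float(i) for i in range(1, 12)]   # n = 11, values 1..11

# The mean breaks down after a single replacement.
mean_broken = abs(mean(corrupt(clean, 1))) > 1e6

# The median stays bounded with 5 of 11 values replaced ...
med_ok = abs(median(corrupt(clean, 5))) < 100

# ... and breaks down only when 6 of 11 values are replaced.
med_broken = abs(median(corrupt(clean, 6))) > 1e6
```

This is the finite-sample counterpart of the statement that the mean has breakdown point zero while the median attains the highest possible value.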

For location and scale functionals there exist upper bounds for the breakdown points. For location functionals we have

Theorem 1

$$\varepsilon^*(T_l, P, d_{ko}) \le \frac12, \qquad (9.42)$$

$$\varepsilon^*(T_l, P, GE) \le \frac12, \qquad (9.43)$$

$$\varepsilon^*(T_l, x_n, \mathrm{fsbp}) \le \frac{\lfloor n/2 \rfloor}{n}. \qquad (9.44)$$

We refer to [58]. It may be shown that all breakdown points of the mean are zero, whereas the median attains the highest possible breakdown point in each case. The corresponding result for scale functionals is more complicated. Whereas we know of no reasonable metric in (9.42) of Theorem 1 which leads to a different upper bound, this is not the case for scale functionals: [58] shows that the upper bound for the Kolmogoroff metric differs from the one obtained for the gross error neighbourhood. If we replace the Kolmogoroff metric by the Kuiper metric defined by

$$d_{ku}(P, Q) = \sup\{\, |P(I) - Q(I)| : I \text{ an interval} \,\}, \qquad (9.45)$$

then we again obtain the upper bound attained for the gross error neighbourhood. For scale functionals we have

Theorem 2

 (9.46) (9.47) (9.48)

Similarly, all breakdown points of the standard deviation are zero but, in contrast to the median, the MAD does not attain the upper bounds of (9.48).

A simple modification of the MAD, namely

 (9.49)

can be shown to attain the highest possible finite sample breakdown point of (9.48).

The M-functional defined by (9.21) and (9.22) has a breakdown point $\varepsilon^*$ which satisfies

 (9.50)

(see [58]). For the functions defined by (9.24) and (9.25) the breakdown point is a decreasing function of the tuning parameter $c$. As $c$ tends to zero the breakdown point tends to $1/2$. Indeed, as $c$ tends to zero the location part of the functional tends to the median. For small values of $c$ numerical calculations show that the breakdown point is about $0.48$. The calculation of breakdown points is not always simple. We refer to [58] and [41].

The breakdown point is a simple but often effective measure of the robustness of a statistical functional. It does not however take into account the size of the bias. This can be done by trying to quantify the minimal bias over some neighbourhood of the distribution and, if possible, to identify a functional which attains it. We formulate this for the normal distribution and consider the Kolmogoroff ball of radius $\varepsilon$. We have ([58])

Theorem 3   For every $\varepsilon < 1/2$ we have

$$b(\mathrm{MED}, N(0, 1), \varepsilon, d_{ko}) \le b(T_l, N(0, 1), \varepsilon, d_{ko})$$

for any translation functional $T_l$.

In other words the median minimizes the bias over any Kolmogoroff neighbourhood of the normal distribution. This theorem can be extended to other symmetric distributions and to other situations (Riedel, 1989a, 1989b). It is more difficult to obtain such a theorem for scale functionals because of the lack of a property equivalent to symmetry for location. Nevertheless some results in this direction have been obtained and indicate that the length of the shortest half of (9.8) has very good bias properties ([74]).

## 9.2.5 Confidence Intervals and Differentiability

Given a sample $x_1, \dots, x_n$ with empirical measure $P_n$, we can calculate a location functional $T_l(P_n)$ which in some sense describes the location of the sample. Such a point value is rarely sufficient and in general should be supplemented by a confidence interval, that is a range of values consistent with the data. If $T_l$ is differentiable (9.12) and the data are i.i.d. random variables with distribution $P$, then it follows from (9.3) (see Sect. 9.1.3) that an asymptotic $\alpha$-confidence interval for $T_l(P)$ is given by

$$\Big[\, T_l(P_n) - \frac{z_{(1+\alpha)/2}\, \Sigma(P)^{1/2}}{\sqrt{n}},\ T_l(P_n) + \frac{z_{(1+\alpha)/2}\, \Sigma(P)^{1/2}}{\sqrt{n}} \,\Big]. \qquad (9.51)$$

Here $z_{(1+\alpha)/2}$ denotes the $(1+\alpha)/2$-quantile of the standard normal distribution and

$$\Sigma(P) = \int I(x, T_l, P)^2\, dP(x). \qquad (9.52)$$

At first glance this cannot lead to a confidence interval as $\Sigma(P)$ is unknown. If however $\Sigma$ is also Fréchet differentiable at $P$, then we can replace $\Sigma(P)$ by $\Sigma(P_n)$ with an error of order $O_P(n^{-1/2})$. This leads to the asymptotic $\alpha$-confidence interval

$$\Big[\, T_l(P_n) - \frac{z_{(1+\alpha)/2}\, \Sigma(P_n)^{1/2}}{\sqrt{n}},\ T_l(P_n) + \frac{z_{(1+\alpha)/2}\, \Sigma(P_n)^{1/2}}{\sqrt{n}} \,\Big]. \qquad (9.53)$$

A second problem is that (9.53) depends on asymptotic normality, and the accuracy of the interval will in turn depend on the rate of convergence to the normal distribution, which in turn may depend on $P$. Both problems can be overcome if $T_l$ is locally uniformly Fréchet differentiable at $P$. If we consider the M-functionals of Sect. 9.2.3, then they are locally uniformly Fréchet differentiable if the $\psi$- and $\chi$-functions are sufficiently smooth (see [11], [9], [10], and [27]). The influence function is given by

 (9.54)

where

 (9.55) (9.56) (9.57) (9.58)

Simulations suggest that the covering probabilities of the confidence interval (9.53) are good for moderate sample sizes onwards, as long as the distribution is almost symmetric. For the sample considered above this leads to the interval

 (9.59)

with $\Sigma(P_n)$ given by (9.52) and the influence function by (9.54). Similar intervals can be obtained for the variations on M-functionals discussed in Sect. 9.2.3.
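A sketch of an interval of the form (9.53) for an M-type location functional is given below. To keep it self-contained the scale is held fixed at the normal-consistent MAD and a Huber-type $\psi$ is used (the conventional constant 1.345 is an assumption, not a value from the text), so the influence values are approximated by $s\,\psi(r)$ divided by the mean of $\psi'(r)$; this ignores the location-scale coupling terms of (9.55)-(9.58) and is an illustration only.

```python
import math
from statistics import median

C = 1.345  # Huber tuning constant (conventional choice, an assumption)

def huber_psi(x):
    return max(-C, min(C, x))

def huber_psi_prime(x):
    return 1.0 if abs(x) < C else 0.0

def m_location(xs, iters=50):
    # Newton-type iteration for the location equation, scale fixed at MAD.
    m = median(xs)
    s = median([abs(x - m) for x in xs]) / 0.6745  # MAD, normal-consistent
    for _ in range(iters):
        r = [(x - m) / s for x in xs]
        denom = max(1.0, sum(huber_psi_prime(t) for t in r))
        m += s * sum(huber_psi(t) for t in r) / denom
    return m, s

def confidence_interval(xs, z=1.96):
    # Asymptotic interval of the form (9.53) with plug-in variance.
    n = len(xs)
    m, s = m_location(xs)
    r = [(x - m) / s for x in xs]
    d = sum(huber_psi_prime(t) for t in r) / n
    infl = [s * huber_psi(t) / d for t in r]   # approximate influence values
    sigma2 = sum(v * v for v in infl) / n      # plug-in version of (9.52)
    half = z * math.sqrt(sigma2 / n)
    return m - half, m + half

data = [2.1, 1.9, 2.0, 2.2, 1.8, 2.0, 2.1, 1.9, 50.0]  # invented, one outlier
lo, hi = confidence_interval(data)
```

The outlier at 50 enters only through its bounded influence value, so both the centre and the width of the interval remain stable.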

## 9.2.6 Efficiency and Bias

The precision of the functional $T_l$ at the distribution $P$ can be quantified by the length of the asymptotic confidence interval (9.51). As the only quantity which depends on $T_l$ is $\Sigma(P)$, we see that an increase in precision is equivalent to reducing the size of $\Sigma(P)$. The question which naturally arises is then that of determining how small $\Sigma(P)$ can be made. A statistical functional which attains this lower bound is asymptotically optimal, and if we denote the lower bound by $\Sigma_{\mathrm{opt}}(P)$, the efficiency of the functional $T_l$ can be defined as $\Sigma_{\mathrm{opt}}(P)/\Sigma(P)$. The efficiency depends on $P$ and we must now decide which $P$, or indeed which $P$s, to choose. The arguments given in Sect. 9.1.2 suggest choosing a $P$ which maximizes $\Sigma_{\mathrm{opt}}(P)$ over a class of models. This holds for the normal distribution, which maximizes $\Sigma_{\mathrm{opt}}(P)$ over the class of all distributions with a given variance. For this reason, and for simplicity and familiarity, we shall take the normal distribution as the reference distribution. If a reference distribution is required which also produces outliers, then the slash distribution is to be preferred to the Cauchy distribution. We refer to [19] and the discussion given there.

If we consider the M-functionals defined by (9.24) and (9.25), the efficiency at the normal distribution is an increasing function of the tuning parameter $c$. As the breakdown point is a decreasing function of $c$, this would seem to indicate a conflict between efficiency and breakdown point. This is indeed the case for the M-functional defined by (9.24) and (9.25) and is due to the linking of the location and scale parts of the functional. If this link is severed by, for example, recalculating a location functional as in (9.26), then there is no longer a conflict between efficiency and breakdown. However, the higher the efficiency of the location functional, the more it behaves like the mean, with a corresponding increase in the bias function of (9.35) and (9.37). The conflict between efficiency and bias is a real one and gives rise to an optimality criterion, namely that of minimizing the bias subject to a lower bound on the efficiency. We refer to [73].

## 9.2.7 Outliers in $\mathbb{R}$

One of the main uses of robust functionals is the labelling of so-called outliers (see [5], [55], [3], [40], [42], and Simonoff (1984, 1987)). In the data of Table 9.1 the laboratories 1 and 3 are clearly outliers which should be flagged. The discussion in Sect. 9.1.1 already indicates that the mean and standard deviation are not appropriate tools for the identification of outliers, as they themselves are so strongly influenced by the very outliers they are intended to identify. We now demonstrate this more precisely. One simple rule is to classify all observations more than three standard deviations from the mean as outliers. A simple calculation shows that this rule will fail to identify a proportion of $10\%$ or more of arbitrarily large outliers with the same sign. More generally, if all observations more than $c$ standard deviations from the mean are classified as outliers, then this rule will fail to identify a proportion of $1/(1 + c^2)$ or more of outliers with the same sign. This is known as the masking effect ([79]), where the outliers mask their presence by distorting the mean and, more importantly, the standard deviation to such an extent as to render them useless for the detection of the outliers. One possibility is to choose a small value of $c$, but clearly if $c$ is too small then some non-outliers will be declared outliers. In many cases the main body of the data can be well approximated by a normal distribution, so we now investigate the choice of $c$ for samples of i.i.d. normal random variables. One possibility is to choose $c$ dependent on the sample size $n$ so that with probability say 0.95 no observation will be flagged as an outlier. This leads to a value of $c$ of about $\sqrt{2 \log n}$ ([29]), and the largest proportion of one-sided outliers which can then be detected is approximately $1/(1 + 2 \log n)$, which tends to zero with $n$. It follows that there is no choice of $c$ which can detect a fixed proportion of outliers and at the same time not falsely flag non-outliers.
In order to achieve this, the mean and standard deviation must be replaced by functionals which are less affected by the outliers. In particular these functionals should be locally bounded (9.11). Considerations of asymptotic normality or efficiency are of little relevance here. Two obvious candidates are the median and the MAD, and if we use them instead of the mean and standard deviation we are led to an identification rule ([53]) of the form

$$|x_i - \mathrm{MED}(x_n)| \ge c\, \mathrm{MAD}(x_n). \qquad (9.60)$$

In [52] a general all-purpose value for the constant $c$ was proposed. The concept of an outlier cannot in practice be very precise, but in order to compare different identification rules we require a precise definition and a precise measure of performance. To do this we shall restrict attention to the normal model as one which is often reasonable for the main body of data. In other situations, such as waiting times, the exponential distribution may be more appropriate. The following is based on [29]. To define an outlier we introduce the concept of an $\alpha$-outlier. For the normal distribution $N(\mu, \sigma^2)$ and $\alpha \in (0, 1)$ we define the $\alpha$-outlier region by

$$\mathrm{out}(\alpha, N(\mu, \sigma^2)) = \{\, x : |x - \mu| > \sigma\, z_{1 - \alpha/2} \,\}, \qquad (9.61)$$

which is just the union of the lower and the upper $\alpha/2$-tail regions. Here $z_q$ denotes the $q$-quantile of the standard normal distribution. For the exponential distribution $\mathrm{Exp}(\lambda)$ with parameter $\lambda$ we set

$$\mathrm{out}(\alpha, \mathrm{Exp}(\lambda)) = \{\, x : x > -\log(\alpha)/\lambda \,\}, \qquad (9.62)$$

which is the upper $\alpha$-tail region ([43]). The extension to other distributions is clear. Each point located in the outlier region is called an $\alpha$-outlier, otherwise it is called an $\alpha$-inlier. This definition of an outlier refers only to its position in relation to the statistical model for the good data. No assumptions are made concerning the distribution of these outliers or the mechanism by which they are generated.

We can now formulate the task of outlier identification for the normal distribution as follows: for a given sample $x_n = (x_1, \dots, x_n)$ which contains at least $\lfloor n/2 \rfloor + 1$ i.i.d. observations distributed according to $N(\mu, \sigma^2)$, we have to find all those $x_i$ that are located in $\mathrm{out}(\alpha, N(\mu, \sigma^2))$. The level $\alpha$ can be chosen to be dependent on the sample size. If for some $\tilde\alpha \in (0, 1)$ we set

$$\alpha_n = 1 - (1 - \tilde\alpha)^{1/n}, \qquad (9.63)$$

then the probability of finding at least one observation of an $N(\mu, \sigma^2)$-sample of size $n$ within $\mathrm{out}(\alpha_n, N(\mu, \sigma^2))$ is not larger than $\tilde\alpha$. Consider now the general Hampel identifier which classifies all observations $x_i$ in

$$\mathrm{OR}(x_n, \alpha_n) = \{\, x : |x - \mathrm{MED}(x_n)| > c(n, \alpha_n)\, \mathrm{MAD}(x_n) \,\} \qquad (9.64)$$

as outliers. The region $\mathrm{OR}(x_n, \alpha_n)$ may be regarded as an empirical version of the outlier region $\mathrm{out}(\alpha_n, N(\mu, \sigma^2))$. The constant $c(n, \alpha_n)$ standardizes the behaviour of the procedure for i.i.d. normal samples, which may be done in several ways. One is to determine the constant so that with probability at least $1 - \tilde\alpha$ no observation is identified as an outlier, that is

$$P\big(x_i \notin \mathrm{OR}(x_n, \alpha_n),\ i = 1, \dots, n\big) \ge 1 - \tilde\alpha. \qquad (9.65)$$

A second possibility is to require that

 (9.66)

If we use (9.65) and set $\tilde\alpha = 0.05$, the resulting constants $c(n, \alpha_n)$ for small sample sizes are of the order of 5.5 (one reported value is 5.52). For large $n$ the normalizing constants can also be approximated according to the equations given in Sect. 5 of [40].

To describe the worst-case behaviour of an outlier identifier we can look at the largest non-identifiable outlier which it allows. From [29] we report some values of this quantity for the Hampel identifier (HAMP) and contrast them with the corresponding values of a sophisticated high breakdown point outwards-testing identifier (ROS), based on the non-robust mean and standard deviation ([87]; [107]). Both identifiers are standardized by (9.65). In three configurations of sample size and number of outliers reported there, the average sizes of the largest non-detected outlier are 6.68 (HAMP) against 8.77 (ROS), 4.64 (HAMP) against 5.91 (ROS), and 5.07 (HAMP) against 9.29 (ROS).
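The Hampel identifier (9.60)/(9.64) and the level adjustment (9.63) are straightforward to sketch; the constant `c` below is a generic placeholder rather than one of the standardized values $c(n, \alpha_n)$, and the data are invented for illustration.

```python
from statistics import median

def alpha_n(alpha_tilde, n):
    # Per-observation level (9.63): a clean normal sample of size n then
    # contains an alpha_n-outlier with probability at most alpha_tilde.
    return 1.0 - (1.0 - alpha_tilde) ** (1.0 / n)

def hampel_outliers(xs, c=5.0):
    # Flag x as an outlier when |x - MED| exceeds c times the MAD,
    # as in (9.60)/(9.64). c = 5.0 is a generic placeholder constant.
    m = median(xs)
    mad = median([abs(x - m) for x in xs])
    return [x for x in xs if abs(x - m) > c * mad]

data = [10.0, 10.3, 9.8, 10.1, 9.9, 10.2, 25.0, -40.0]
flagged = hampel_outliers(data)   # flags 25.0 and -40.0
```

Because both MED and MAD are computed from the full contaminated sample and remain stable, no masking occurs: both gross outliers are flagged.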
