# 1.2 Quantile Regression Estimation

This section introduces some key definitions related to quantile regression. Besides that, we demonstrate how to use XploRe for the estimation of quantile regression models.

## 1.2.1 Definitions

Given a random sample $y_1, \ldots, y_n$, it seems natural to find the approximation of a quantile (e.g., the median) in terms of the order statistics $y_{(1)} \leq \cdots \leq y_{(n)}$, i.e., by means of sorting. The crucial point for the concept of quantile regression estimation is that the sample analogue of the $\tau$-th quantile can also be found as the argument of the minimum of a specific objective function, because the optimization approach yields a natural generalization of the quantiles to the regression context. The $\tau$-th sample quantile can be found as

$$\hat{q}_\tau = \mathop{\mathrm{argmin}}_{b \in \mathbb{R}} \sum_{i=1}^{n} \rho_\tau(y_i - b), \qquad (1.5)$$

where

$$\rho_\tau(z) = z \, \{\tau - I(z < 0)\} \qquad (1.6)$$

(see Figure 1.1) and $I(\cdot)$ represents the indicator function.
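As a small numerical illustration of (1.5) (in Python rather than XploRe; the function names are ours), note that the objective is piecewise linear in $b$ with breakpoints at the observations, so its minimum is attained at one of the order statistics and can be found by direct search:

```python
import numpy as np

def rho(z, tau):
    # check function (1.6): rho_tau(z) = z * (tau - I(z < 0))
    return z * (tau - (z < 0))

def sample_quantile(y, tau):
    # (1.5): the tau-th sample quantile minimizes sum_i rho_tau(y_i - b);
    # the minimum is attained at a data point, so search over them
    return min(y, key=lambda b: np.sum(rho(y - b, tau)))

rng = np.random.default_rng(0)
y = rng.normal(size=101)
q = sample_quantile(y, 0.25)
# agrees with the order statistic obtained by sorting: y_(ceil(n * tau))
assert q == np.sort(y)[int(np.ceil(101 * 0.25)) - 1]
```

For non-integer $n\tau$ the minimizer is unique and equals the $\lceil n\tau \rceil$-th order statistic, which is exactly what sorting would give.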

Any one-dimensional $M$-statistic (including the least squares estimator and (1.5)) for estimating a parameter of location,

$$\hat{\mu} = \mathop{\mathrm{argmin}}_{b \in \mathbb{R}} \sum_{i=1}^{n} \rho(y_i - b),$$

can be readily extended to the regression context, i.e., to the estimation of the conditional expectation function, by solving

$$\hat{\beta} = \mathop{\mathrm{argmin}}_{b \in \mathbb{R}^p} \sum_{i=1}^{n} \rho(y_i - x_i^\top b),$$

where $y$ is an $(n \times 1)$ vector of responses and $X$ is an $(n \times p)$ matrix of explanatory variables. From now on, $n$ will always refer to the number of observations and $p$ to the number of unknown parameters. As sample quantile estimation is just a special case of an $M$-statistic for $\rho = \rho_\tau$, it can be adapted for the estimation of the conditional quantile function in the same way. Thus, the unknown parameters $\beta(\tau)$ of the conditional quantile function are to be estimated as

$$\hat{\beta}(\tau) = \mathop{\mathrm{argmin}}_{b \in \mathbb{R}^p} \sum_{i=1}^{n} \rho_\tau(y_i - x_i^\top b). \qquad (1.7)$$

The special case of $\tau = 1/2$ is equivalent to minimizing the sum of absolute values of residuals, i.e., the well-known $L_1$-estimator.
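To see the equivalence, note that for $\tau = 1/2$ the check function (1.6) is proportional to the absolute value:

$$\rho_{1/2}(z) = z \left\{ \tfrac{1}{2} - I(z < 0) \right\} = \tfrac{1}{2} |z|,$$

so that (1.7) reduces to $\mathop{\mathrm{argmin}}_{b} \tfrac{1}{2} \sum_{i=1}^{n} |y_i - x_i^\top b|$, the least absolute deviations estimator.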

Before proceeding to the description of how such an estimate can be computed in XploRe, two issues have to be discussed. First, given formula (1.7), it is clear that in most cases there exists no general closed-form solution like in the case of the least squares estimator. Therefore, it is natural to ask whether any solution of (1.7) exists at all and whether it is unique. The answer is positive under some rather general conditions. Let $\mathcal{H}$ represent the set of all $p$-element subsets of $\{1, \ldots, n\}$, and let $X(h)$ denote the submatrix of $X$ composed of the rows with indices $i \in h$ for any $h \in \mathcal{H}$. Similarly, for a vector $y$ let $y(h) = (y_i)_{i \in h}$. Notice that this convention applies also for $p = 1$, that is, for single numbers. The rows of $X$ taken as column vectors are referred to by $x_i$; therefore, $X(h)^\top = (x_i)_{i \in h}$. Now we can write Theorem 3.3 of Koenker and Bassett (1978) in the following way:

Let $(y_i, x_i)$, $i = 1, \ldots, n$, be regression observations and $\tau \in (0, 1)$. If the observations are in general position, i.e., the system of linear equations $y_i = x_i^\top b$, $i \in h \cup \{j\}$, has no solution $b$ for any $h \in \mathcal{H}$ and $j \notin h$, then there exists a solution to the quantile regression problem (1.7) of the form $b(h) = X(h)^{-1} y(h)$ if and only if for some $h \in \mathcal{H}$ it holds that

$$(\tau - 1) \cdot 1_p \;\leq\; \xi_h \;\leq\; \tau \cdot 1_p, \qquad (1.8)$$

where $\xi_h = \sum_{i \notin h} \psi_\tau\{y_i - x_i^\top b(h)\} \, (X(h)^\top)^{-1} x_i$, $\psi_\tau(z) = I(z < 0) - \tau$ corresponds to the derivative of $\rho_\tau$ defined by (1.6), and $1_p$ is the $(p \times 1)$ vector of ones. Moreover, $b(h)$ is the unique solution if and only if the inequalities are strict; otherwise, the solution set is the convex hull of several solutions of the form $b(h)$.

The presented result deserves one additional remark. Whereas situations in which the observations are not in general position are not very frequent unless the response variable is of a discrete nature, weak inequality in (1.8), and consequently multiple optimal solutions, can occur when all explanatory variables are discrete.
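The theorem can be checked numerically by brute force. The following Python sketch is purely illustrative (the function names are ours, and $\psi_\tau(z) = I(z < 0) - \tau$ is our reading of condition (1.8)): it enumerates all $p$-element subsets $h$, computes the candidates $b(h) = X(h)^{-1} y(h)$, and keeps those satisfying (1.8); each of them should attain the minimal value of the objective (1.7) among all such candidates:

```python
import itertools
import numpy as np

def objective(y, X, b, tau):
    # the quantile regression objective (1.7)
    r = y - X @ b
    return np.sum(r * (tau - (r < 0)))

def vertex_solutions(y, X, tau):
    # enumerate b(h) = X(h)^{-1} y(h) over all p-subsets h and
    # test condition (1.8) with psi_tau(z) = I(z < 0) - tau
    n, p = X.shape
    hits = []
    for h in itertools.combinations(range(n), p):
        Xh, yh = X[list(h)], y[list(h)]
        if abs(np.linalg.det(Xh)) < 1e-10:
            continue                        # X(h) singular, skip
        b = np.linalg.solve(Xh, yh)
        rest = [i for i in range(n) if i not in h]
        psi = (y[rest] - X[rest] @ b < 0).astype(float) - tau
        xi = np.linalg.solve(Xh.T, X[rest].T @ psi)
        if np.all(xi >= tau - 1 - 1e-9) and np.all(xi <= tau + 1e-9):
            hits.append(b)
    return hits

rng = np.random.default_rng(1)
n, p, tau = 15, 2, 0.3
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = 1.0 + 2.0 * X[:, 1] + rng.normal(size=n)
sols = vertex_solutions(y, X, tau)
# every b(h) passing (1.8) attains the minimum over all candidates b(h)
best = min(objective(y, X, np.linalg.solve(X[list(h)], y[list(h)]), tau)
           for h in itertools.combinations(range(n), p)
           if abs(np.linalg.det(X[list(h)])) > 1e-10)
assert sols and all(abs(objective(y, X, b, tau) - best) < 1e-8 for b in sols)
```

With continuous data the observations are in general position almost surely, so at least one subset passes the test and uniqueness is the typical case.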

The second issue we have to mention is related to the numerical computation of the estimates. The solution of (1.7) can be found by techniques of linear programming, because

$$\min_{b \in \mathbb{R}^p} \sum_{i=1}^{n} \rho_\tau(y_i - x_i^\top b)$$

may be rewritten, by splitting each residual into its positive part $u_i$ and negative part $v_i$, as the minimization of a linear function subject to linear constraints:

$$\min_{b \in \mathbb{R}^p, \; u, v \in \mathbb{R}_{+}^{n}} \left\{ \tau \cdot 1_n^\top u + (1 - \tau) \cdot 1_n^\top v \;\middle|\; Xb + u - v = y \right\}. \qquad (1.9)$$

The linearity of the objective function and of the constraints implies that a solution has to lie in one of the vertices of the polyhedron defined by the constraints in (1.9). It is possible to derive that these vertices correspond to elements $h$ of $\mathcal{H}$ and take the form $b(h) = X(h)^{-1} y(h)$.

Apparently, there are always at least $p$ indices $i$ from $\{1, \ldots, n\}$ such that the corresponding residuals $y_i - x_i^\top b(h)$ are equal to zero. Therefore, traversing between vertices of the polyhedron corresponds to switching between subsets $h \in \mathcal{H}$; hence, the method belongs to the group of so-called exterior-point methods. In order to find the optimal $h$ (or, equivalently, the optimal vertex $b(h)$), one usually employs a modified simplex method (Koenker and D'Orey, 1987). Although this minimization approach has some considerable advantages (for small problems, it is even faster than the least squares computation), it becomes rather slow with an increasing number of observations. Thus, it is not very suitable for large problems. Koenker and Portnoy (1997) developed an interior-point method that is rather fast when applied to large data sets.
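Since (1.9) is an ordinary linear program, it can also be handed to a generic LP solver. The following Python sketch is illustrative only (XploRe itself uses the simplex and interior-point routines described above; the function name rq_lp is ours) and solves (1.9) with scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

def rq_lp(X, y, tau=0.5):
    # LP formulation (1.9): decision variables are (b, u, v), with
    # u, v >= 0 the positive/negative parts of the residuals y - Xb
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.full(n, tau), np.full(n, 1.0 - tau)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])   # Xb + u - v = y
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

# data lying exactly on a line: median regression must recover it,
# since zero objective forces u = v = 0 and hence Xb = y
x1 = np.linspace(0.0, 1.0, 9)
X = np.column_stack([np.ones(9), x1])
y = 1.0 + 2.0 * x1
beta = rq_lp(X, y, tau=0.5)
```

For the exact-fit data above, beta equals (1, 2) up to solver tolerance; for very large $n$, a dedicated interior-point implementation such as the one cited above is preferable to a generic solver.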

## 1.2.2 Computation

 z = rqfit(x, y{, tau, ci, alpha, iid, interp, tcrit}) noninteractively estimates a quantile regression model

The quantlet of the metrics quantlib which serves for quantile regression estimation is rqfit. We explain just the basic usage of the rqfit quantlet in this section; other features will be discussed in the following sections. See Subsection 1.5.1 for a detailed description of the quantlet.

The quantlet expects at least two input parameters: an $(n \times p)$ matrix x that contains the observations of the explanatory variables and an $(n \times 1)$ vector y of observed responses. If an intercept is to be included in the regression model, a vector of ones can be concatenated to the matrix x in the following way:

  x = matrix(rows(x))~x

Neither the matrix x nor the vector y should contain missing (NaN) or infinite values (Inf, -Inf). Their presence can be detected by isNaN or isNumber, and the invalid observations should be processed before running rqfit, e.g., omitted using paf.

Quantlet rqfit provides a noninteractive way for quantile regression estimation. The basic invocation method is quite simple:

  z = rqfit(x,y,tau)

where the parameter tau indicates which conditional quantile function is to be estimated. It is even possible to omit it:
  z = rqfit(x,y)

In this case, the predefined value tau = 0.5 is used. The output of rqfit might be a little bit too complex, but for now it is sufficient to note that z.coefs refers to the vector of estimated coefficients and z.res is the vector of regression residuals. If you also want the corresponding confidence intervals, you have to specify extra parameters in the call of rqfit: the fourth one, ci, equal to one, which indicates that you want to get confidence intervals, and optionally the fifth one, alpha, which specifies the nominal coverage probability 1-alpha of the confidence intervals (the default value of alpha is 0.1):
  z = rqfit(x,y,tau,1,alpha)

Then z.intervals gives you access to the matrix of confidence intervals (the first column contains the lower bounds, the second one the upper bounds). Read Subsection 1.4.3 for more information.

To have a real example, let us use the data set nicfoo supplied with XploRe. The data set is two-dimensional, having only one explanatory variable x, a household's net income, in the first column and the response variable y, the food expenditure of the household, in the second column. In order to run, for example, the median regression (tau = 0.5) of y on a constant term, x, and x^2, you have to type at the command line or in the editor window

  data = read("nicfoo")
  x = matrix(rows(data)) ~ data[,1] ~ (data[,1]^2)
  y = data[,2]
  z = rqfit(x,y)
  z.coefs


Do not forget to load quantlib metrics before running rqfit :
  library("metrics")

The result of the above example should appear in the XploRe output window as follows:
  Contents of coefs
[1,]  0.12756
[2,]   1.1966
[3,] -0.24616