7.1 Partial Linear Models

Let us first consider the PLM in order to discuss the main ideas for estimating the components of the model. Our goal is to estimate the coefficient vector $ {\boldsymbol{\beta}}$ and the function $ m(\bullet)$ in the following structural equation

$\displaystyle Y={\boldsymbol{U}}^\top {\boldsymbol{\beta}}+m({\boldsymbol{T}})+\varepsilon,$ (7.2)

where $ \varepsilon$ denotes an error term with zero mean and finite variance. In the following, we outline how such a model can be estimated and state the properties of the resulting estimators.
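To make the model concrete, the following snippet simulates data from a PLM of the form (7.2). The particular coefficients, the choice of $m$, the error scale, and the sample size are illustrative assumptions, not values taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# U: covariates entering linearly; T: covariate entering nonparametrically
U = rng.normal(size=(n, 2))
T = rng.uniform(0.0, 1.0, size=n)

beta = np.array([1.5, -0.5])            # illustrative "true" coefficients
m = lambda t: np.sin(2 * np.pi * t)     # illustrative smooth function m(.)

# error term with zero mean and finite variance, as in (7.2)
eps = rng.normal(scale=0.3, size=n)

# structural equation (7.2): Y = U' beta + m(T) + eps
Y = U @ beta + m(T) + eps
```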

Now, let us take expectations conditioned on $ {\boldsymbol{T}}$, i.e.

$\displaystyle E(Y\vert{\boldsymbol{T}})=E({\boldsymbol{U}}^\top {\boldsymbol{\beta}}\vert{\boldsymbol{T}}) +E\{m({\boldsymbol{T}})\vert{\boldsymbol{T}}\} +E(\varepsilon\vert{\boldsymbol{T}}).$ (7.3)

We subtract equation (7.3) from equation (7.2) to obtain

$\displaystyle Y-E(Y\vert{\boldsymbol{T}})=\{{\boldsymbol{U}}-E({\boldsymbol{U}}\vert{\boldsymbol{T}})\}^\top {\boldsymbol{\beta}} +\varepsilon-E(\varepsilon\vert{\boldsymbol{T}}),$ (7.4)

since $ E\{m({\boldsymbol{T}})\vert{\boldsymbol{T}}\}=m({\boldsymbol{T}})$. Note that by definition $ E(\varepsilon\vert{\boldsymbol{U}},{\boldsymbol{T}})=0$. Hence, by the law of iterated expectations, $ E(\varepsilon\vert{\boldsymbol{T}})=0$, so that $ E\{\varepsilon-E(\varepsilon\vert{\boldsymbol{T}})\}=0$ holds as well.
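Equation (7.4) suggests a two-step estimator of $ {\boldsymbol{\beta}}$: replace the conditional expectations $ E(Y\vert{\boldsymbol{T}})$ and $ E({\boldsymbol{U}}\vert{\boldsymbol{T}})$ by nonparametric estimates and regress the centered response on the centered covariates by least squares; $ m$ is then recovered by smoothing the partial residuals $ Y-{\boldsymbol{U}}^\top\widehat{\boldsymbol{\beta}}$. The sketch below implements this idea with a Nadaraya-Watson smoother on simulated data; the Gaussian kernel, the bandwidth, and the data-generating process are illustrative assumptions, not prescriptions from the text.

```python
import numpy as np

def nw_smooth(t, z, h):
    """Nadaraya-Watson estimate of E(z | T) evaluated at the sample points t.
    z may be an (n,) vector or an (n, d) matrix (smoothed columnwise)."""
    k = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)  # Gaussian kernel
    w = k / k.sum(axis=1, keepdims=True)                     # weight matrix
    return w @ z

# simulated data from a PLM (illustrative choices)
rng = np.random.default_rng(1)
n = 400
U = rng.normal(size=(n, 2))
T = rng.uniform(size=n)
beta_true = np.array([1.5, -0.5])
Y = U @ beta_true + np.sin(2 * np.pi * T) + rng.normal(scale=0.3, size=n)

h = 0.08                                 # illustrative bandwidth
Y_tilde = Y - nw_smooth(T, Y, h)         # estimate of Y - E(Y|T)
U_tilde = U - nw_smooth(T, U, h)         # estimate of U - E(U|T)

# least squares regression of the centered response on the centered
# covariates, as suggested by equation (7.4)
beta_hat, *_ = np.linalg.lstsq(U_tilde, Y_tilde, rcond=None)

# second step: recover m by smoothing the partial residuals
m_hat = nw_smooth(T, Y - U @ beta_hat, h)
```

With independent $ {\boldsymbol{U}}$ and $ {\boldsymbol{T}}$ as above, `beta_hat` should lie close to `beta_true`; in practice the bandwidth `h` has to be chosen by some data-driven rule.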

Once we have calculated one component ( $ {\boldsymbol{\beta}}$ or $ m$) of a PLM, the computation of the remaining component is straightforward. There are two alternative approaches to this task: one may first estimate $ {\boldsymbol{\beta}}$, for example via equation (7.4), and then obtain $ m$ from a nonparametric regression of $ Y-{\boldsymbol{U}}^\top\widehat{\boldsymbol{\beta}}$ on $ {\boldsymbol{T}}$; or one may start with an estimate of $ m$ and recover $ {\boldsymbol{\beta}}$ by a least squares regression of $ Y-\widehat{m}({\boldsymbol{T}})$ on $ {\boldsymbol{U}}$.

In the following section, we review both estimation procedures for the PLM and their extension to the GPLM in more detail. We restrict ourselves to Nadaraya-Watson type regression for the nonparametric component; it will become clear, however, that other smoothing techniques can be used in the same way. In the later part of this chapter we also discuss tests of the correct specification of the GPLM (versus a parametric GLM).