Statistical inference infers from the i.i.d. random sample $\mathcal{X} = \{x_i\}_{i=1}^{n}$ the properties of the population: typically, some unknown characteristic $\theta$ of its distribution. In parametric statistics, $\theta$ is a $k$-variate vector $\theta \in \mathbb{R}^k$ characterizing the unknown properties of the population pdf $f(x;\theta)$: this could be the mean, the covariance matrix, kurtosis, etc. The aim will be to estimate $\theta$ from the sample $\mathcal{X}$ through estimators $\hat\theta$ which are functions of the sample: $\hat\theta = \hat\theta(\mathcal{X})$.
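As a minimal sketch of this idea, the snippet below treats the sample mean vector and the empirical covariance matrix as estimators, i.e. as functions of the sample alone. The population parameters and the sample size are hypothetical choices made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: a bivariate normal with this mean and
# covariance (unknown to the estimators, which see only the sample).
true_mean = np.array([1.0, -2.0])
true_cov = np.array([[2.0, 0.5], [0.5, 1.0]])

# An i.i.d. sample of size n = 500 drawn from that population.
X = rng.multivariate_normal(true_mean, true_cov, size=500)

# Estimators are functions of the sample only:
theta_hat_mean = X.mean(axis=0)          # estimates the mean vector
theta_hat_cov = np.cov(X, rowvar=False)  # estimates the covariance matrix
```

With a sample this large, both estimates land close to the population values they target, which is exactly the behavior the sampling-distribution analysis below makes precise.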
When an estimator $\hat\theta$ is proposed, we must derive its sampling distribution to analyze its properties (how is it related to the unknown quantity $\theta$ it is supposed to estimate?).
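The sampling distribution of an estimator can be approximated by Monte Carlo simulation: draw many independent samples from an assumed population and record the estimate computed from each. The population below, $N(3, 2^2)$ with $n = 50$, is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw n_rep independent samples of size n from an assumed N(3, 2^2)
# population and record the sample-mean estimate from each sample.
n, n_rep = 50, 2000
estimates = rng.normal(loc=3.0, scale=2.0, size=(n_rep, n)).mean(axis=1)

# The simulated sampling distribution is centered near the true mean 3,
# with standard deviation close to sigma / sqrt(n) = 2 / sqrt(50).
print(estimates.mean(), estimates.std())
```

Inspecting the center and spread of these replicated estimates is the simulation counterpart of deriving the sampling distribution analytically.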
In this chapter we develop the basic theoretical tools needed to derive estimators and to determine their properties in general situations. Our presentation relies mainly on maximum likelihood theory: in many situations, maximum likelihood estimators enjoy asymptotic optimality properties which make their use easy and appealing.
We will illustrate the main ideas with the multivariate normal population and with the linear regression model, where the applications are numerous and the derivations easy. In multivariate setups, the maximum likelihood estimator is at times too complicated to derive analytically. In such cases, the estimators are obtained by numerical methods (nonlinear optimization). The general theory and the asymptotic properties of these estimators remain simple and valid. The following chapter, Chapter 7, concentrates on hypothesis testing and confidence intervals.
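To sketch the numerical route, the snippet below maximizes a likelihood by nonlinear optimization (here `scipy.optimize.minimize` applied to the negative log-likelihood of a univariate normal model). The sample, the true parameters, and the log-scale parameterization of $\sigma$ are all illustrative assumptions; this simple model also has a closed-form MLE, which lets us check the numerical optimum:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.normal(loc=1.5, scale=0.8, size=400)  # assumed i.i.d. sample

# Negative log-likelihood of a univariate N(mu, sigma^2) model;
# sigma is parameterized on the log scale to keep it positive.
def neg_log_lik(theta):
    mu, log_sigma = theta
    sigma2 = np.exp(2.0 * log_sigma)
    return 0.5 * np.sum(np.log(2.0 * np.pi * sigma2) + (x - mu) ** 2 / sigma2)

res = minimize(neg_log_lik, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])

# Closed-form MLE for comparison: sample mean and the (biased,
# ddof = 0) sample standard deviation.
print(mu_hat, x.mean(), sigma_hat, x.std())
```

In genuinely multivariate models the closed form is often unavailable, but the same recipe applies: write down the negative log-likelihood and hand it to a general-purpose optimizer.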