18.2 Classical Method: Kalman Filter


{filtX, KG, PreviousPs} = kfilter2 (y, mu, Sig, H, F, Q, R)
calculates the classical Kalman filter $ x_{t\vert t}$ for a (multivariate) time series.

For the definition and notation of the Kalman filter we refer to Härdle, Klinke, and Müller (2000, Section 10.2). Since for our purposes the state itself is of primary interest, we have modified the quantlet kfilter by changing its output.
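For readers without access to XploRe, the recursion computed by kfilter2 can be sketched in Python. The argument names follow the quantlet call above; the exact return layout of the XploRe version is an assumption here:

```python
import numpy as np

def kfilter2(y, mu, Sig, H, F, Q, R):
    """Classical Kalman filter for the state-space model
       x_t = F x_{t-1} + v_t,  v_t ~ (0, Q)   (state equation)
       y_t = H x_t     + e_t,  e_t ~ (0, R)   (observation equation)
    with x_0 ~ (mu, Sig).  Returns the filtered states x_{t|t},
    the Kalman gains K_t, and the filtering covariances P_{t|t}."""
    T = y.shape[0]
    n = mu.shape[0]
    x, P = mu.copy(), Sig.copy()
    filtX, KG, Ps = [], [], []
    for t in range(T):
        # prediction step: x_{t|t-1}, P_{t|t-1}
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # innovation Delta y_t and its covariance
        dy = y[t] - H @ x_pred
        S = H @ P_pred @ H.T + R
        # Kalman gain and correction step
        K = P_pred @ H.T @ np.linalg.inv(S)
        x = x_pred + K @ dy
        P = (np.eye(n) - K @ H) @ P_pred
        filtX.append(x); KG.append(K); Ps.append(P)
    return np.array(filtX), np.array(KG), np.array(Ps)
```

For a scalar random walk with all variances equal to one, the first filtered value is $2y_1/3$, as a direct computation of the gain confirms.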


18.2.1 Features of the Classical Kalman Filter

At this point, we recall only those features that are, in our view, central to the Kalman filter:

These features, except for the last one, are to be preserved in our filtering procedures.


18.2.2 Optimality of the Kalman Filter

The classical Kalman filter is characterized by an optimality property which, in the case of a normal state-space model, coincides with several distinct notions of optimality.

best linear filter:

The Kalman filter is obtained as the linear filter minimizing the mean squared error (MSE). This is shown with Hilbert space theory, considering the closed linear spaces generated by the observations $ \bar y_t:=(\mu,y_1,\ldots,y_t)^T$ and orthogonal decompositions as follows:
$\displaystyle {\rm lin} (\bar y_t) = {\rm lin} (\bar y_{t-1}) \oplus {\rm lin} ( \Delta y_t),\quad \Delta y_t=y_t-Hx_{t\vert t-1},$ (18.2)

$\displaystyle x_{t\vert t} = {\rm oP}(x_t\vert\bar y_{t-1}) + {\rm oP}(x_t\vert\Delta y_t) = x_{t\vert t-1} +{\rm oP}(\Delta x_t\vert\Delta y_t), \quad \Delta x_t=x_t-x_{t\vert t-1},$ (18.3)

where $ {\rm oP}(\cdot\vert Z)$ denotes the orthogonal projection onto the closed linear space generated by $ Z$ and $ \Delta y_t$ is called the innovation induced by $ y_t$.
This decomposition will be the basis for the quantlet rlsfil .
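The orthogonality underlying (18.2) and (18.3) can be checked numerically: in a simulated Gaussian state-space model, the innovation $ \Delta y_t$ is uncorrelated with the earlier observations, so the correction term really is a projection onto $ {\rm lin}(\Delta y_t)$ orthogonal to $ {\rm lin}(\bar y_{t-1})$. A small Monte Carlo sketch (all model parameters chosen for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
# scalar model: x_t = 0.8 x_{t-1} + v_t,  y_t = x_t + e_t
F, H, Q, R, Sig = 0.8, 1.0, 1.0, 1.0, 1.0
x0 = rng.normal(0.0, np.sqrt(Sig), N)
x1 = F * x0 + rng.normal(0.0, np.sqrt(Q), N)
y1 = H * x1 + rng.normal(0.0, np.sqrt(R), N)
x2 = F * x1 + rng.normal(0.0, np.sqrt(Q), N)
y2 = H * x2 + rng.normal(0.0, np.sqrt(R), N)

# one exact Kalman step (covariances are deterministic; prior mean mu = 0)
P1_pred = F**2 * Sig + Q          # P_{1|0}
K1 = P1_pred * H / (H**2 * P1_pred + R)
x1_filt = K1 * y1                 # x_{1|1}
x2_pred = F * x1_filt             # x_{2|1}
dy2 = y2 - H * x2_pred            # innovation Delta y_2

# orthogonality: the innovation is uncorrelated with lin(y_1)
print(np.cov(dy2, y1)[0, 1])      # close to 0 up to Monte Carlo error
```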
conditional expectation (under normality assumptions):

A very nice property is that, under normality, the classical Kalman filter and the conditional expectation $ x_{t\vert t}=\mathop{\rm {{}E{}}}\nolimits [x_t\vert\bar y_t]$ coincide, so that $ x_{t\vert t}$ is optimal not only among all linear filters based on $ \bar y_t$, but among all $ \bar y_t$-measurable filters.
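This coincidence can be illustrated by simulation: averaging the states of a large Gaussian sample over a narrow window of observation values approximates the conditional expectation, and it reproduces the Kalman estimate. A sketch for a single step of the scalar model $ x\sim N(0,1)$, $ y=x+e$, $ e\sim N(0,1)$ (parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000
x = rng.normal(0.0, 1.0, N)           # state
y = x + rng.normal(0.0, 1.0, N)       # observation

# Kalman filter estimate at y = y0:
# x_{1|1} = K * y0 with K = P H / (H P H + R) = 1/2 here
y0 = 1.0
kf = 0.5 * y0

# Monte Carlo approximation of E[x | y ~ y0]
mask = np.abs(y - y0) < 0.05
mc = x[mask].mean()
print(kf, mc)                         # agree up to Monte Carlo error
```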
posterior mode (under normality):

Again under normality, $ x_{t\vert t}$ coincides with the posterior mode of $ {\cal L}(x_t\vert\bar y_t)$, which is the basis for the robustifications of Fahrmeir and Künstler (1999).
ML estimator in a regression model (under normality):

Finally, under normality the Kalman filter is also the maximum likelihood estimator (MLE) in a certain regression model with random parameter; this interpretation is the basis for the quantlet rICfil .
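The regression interpretation can be made concrete: the correction step minimizes a weighted least-squares criterion combining a prior term for $ x$ around $ x_{t\vert t-1}$ (weighted by $ P_{t\vert t-1}^{-1}$) and an observation term (weighted by $ R^{-1}$), and this information-form solution agrees exactly with the familiar gain form. A sketch with arbitrary test matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 3, 2
H = rng.normal(size=(k, n))
P = 1.5 * np.eye(n)              # prediction covariance P_{t|t-1}
R = 0.5 * np.eye(k)
x_pred = rng.normal(size=n)      # x_{t|t-1}
y = rng.normal(size=k)

# correction step via the Kalman gain
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x_kf = x_pred + K @ (y - H @ x_pred)

# MLE of the regression model: minimize
#   (x - x_pred)' P^{-1} (x - x_pred) + (y - H x)' R^{-1} (y - H x)
A = np.linalg.inv(P) + H.T @ np.linalg.inv(R) @ H
b = np.linalg.inv(P) @ x_pred + H.T @ np.linalg.inv(R) @ y
x_mle = np.linalg.solve(A, b)

print(np.allclose(x_kf, x_mle))  # the two forms coincide
```

The equivalence of the two forms is the matrix inversion lemma; numerically, the information form is preferable when $ k$ is small relative to $ n$, the gain form otherwise.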