Legacy Documentation

Time Series (2011)

This is documentation for an obsolete product.


1.9.2 State-Space Form and the Kalman Filter

In general, a state-space representation of a discrete time series model has the form:

Y_t = G_t X_t + d_t + W_t,   (9.6)

X_{t+1} = F_{t+1} X_t + c_{t+1} + V_{t+1},   (9.7)

where {Y_t}, which can be either a scalar or a vector, is the process we observe, and (9.6) is called the "observation equation". {Y_t} depends on another process {X_t} that cannot be observed directly. X_t, which is in general a vector, is referred to as the "state" of the system at time t, and it evolves according to the "state equation" (9.7). F, G, c, and d are known matrices or vectors that may depend on time. {W_t} and {V_t} are independent Gaussian white noise processes with zero mean and covariance matrices E(W_t W_t') = R_t and E(V_t V_t') = Q_t, respectively, where the prime denotes transpose.
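To make the setup concrete, here is a small Python sketch (our own illustration, not part of the Time Series package; the function name and parameter defaults are assumptions) that simulates a scalar model of the form (9.6)-(9.7) with time-independent F, G, Q, R, c, d:

```python
import random

# Simulate a scalar state-space model:
#   state:       X[t+1] = F*X[t] + c + V[t+1],  Var(V) = Q
#   observation: Y[t]   = G*X[t] + d + W[t],    Var(W) = R
def simulate_state_space(n, F=0.8, G=1.0, Q=0.25, R=1.0,
                         c=0.0, d=0.0, x0=0.0, seed=0):
    rng = random.Random(seed)
    xs, ys = [], []
    x = x0
    for _ in range(n):
        ys.append(G * x + d + rng.gauss(0.0, R ** 0.5))  # observation equation (9.6)
        xs.append(x)
        x = F * x + c + rng.gauss(0.0, Q ** 0.5)         # state equation (9.7)
    return xs, ys

xs, ys = simulate_state_space(100)
```

Only {Y_t} (the list ys) would be available in practice; the simulated states xs are what the Kalman filter tries to recover.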
Often, we would like to obtain an estimate of the unobservable state vector X based on the information available at time t, I_t, where I_t contains the observations of Y up to Y_t. The Kalman filter provides a recursive procedure for calculating this estimate. Let X̂_t^s denote the best linear estimator of X_t based on the information up to and including time s, and P_t^s its mean square error, P_t^s = E[(X_t - X̂_t^s)(X_t - X̂_t^s)']. The following equations constitute the Kalman filter:

X̂_t^{t-1} = F_t X̂_{t-1}^{t-1} + c_t,   (9.8)

P_t^{t-1} = F_t P_{t-1}^{t-1} F_t' + Q_t,   (9.9)

X̂_t^t = X̂_t^{t-1} + K_t (Y_t - G_t X̂_t^{t-1} - d_t),   (9.10)

P_t^t = P_t^{t-1} - K_t G_t P_t^{t-1},   (9.11)

where

K_t = P_t^{t-1} G_t' (G_t P_t^{t-1} G_t' + R_t)^{-1}

is called the Kalman gain. The above equations can be used to calculate the estimate X̂_t^t and its mean square error P_t^t recursively. Like all recursive procedures, this needs initial values X̂_{m+1}^m and P_{m+1}^m to start the recursion. We will first present the function in Mathematica that implements these equations given the initial values, and discuss ways of calculating the initial values in Section 1.9.3.
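In the scalar case the recursions (9.8) to (9.11) reduce to a few lines. The following Python sketch (our own helper, not the package's KalmanFilter; names and argument order are assumptions) implements them:

```python
# Scalar Kalman filter implementing recursions (9.8)-(9.11).
def kalman_filter(ys, F, G, Q, R, x_pred, p_pred, c=0.0, d=0.0):
    """x_pred, p_pred: initial values, i.e. the predicted state estimate
    and its mean square error at the time of the first observation."""
    filtered = []
    for y in ys:
        k = p_pred * G / (G * p_pred * G + R)  # Kalman gain K_t
        x = x_pred + k * (y - G * x_pred - d)  # filtered estimate (9.10)
        p = p_pred - k * G * p_pred            # its mean square error (9.11)
        filtered.append((x, p))
        x_pred = F * x + c                     # one-step prediction (9.8)
        p_pred = F * p * F + Q                 # prediction error (9.9)
    return filtered

# Estimating a constant state (F = G = 1, Q = 0) from noisy observations:
result = kalman_filter([1.0, 1.0, 1.0, 1.0], F=1.0, G=1.0, Q=0.0, R=1.0,
                       x_pred=0.0, p_pred=1.0)
```

With these inputs the filtered estimate after n unit observations is n/(n+1) with mean square error 1/(n+1), so the last pair is (0.8, 0.2): the estimate approaches the true level while its uncertainty shrinks.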
The Mathematica function that performs Kalman filtering ((9.8) to (9.11)), given the initial values X̂_{m+1}^m and P_{m+1}^m and the data {Y_{m+1}, Y_{m+2}, ..., Y_T}, is KalmanFilter. It yields the filtered estimates X̂_t^t and their mean square errors P_t^t for t = m+1, m+2, ..., T. If c_t = 0 and d_t = 0, the last two arguments can be omitted. When all the known matrices and vectors are independent of time, F, G, Q, R, c, and d are given as single matrices or vectors. However, if any one of F, G, Q, R, c, d is time dependent, the arguments to KalmanFilter, F, G, Q, R, c, d, should be replaced by {F_{m+2}, F_{m+3}, ..., F_{T+1}}, {G_{m+1}, G_{m+2}, ..., G_T}, {Q_{m+2}, Q_{m+3}, ..., Q_{T+1}}, {R_{m+1}, R_{m+2}, ..., R_T}, {c_{m+2}, c_{m+3}, ..., c_{T+1}}, and {d_{m+1}, d_{m+2}, ..., d_T}, respectively.
The Kalman filter gives the estimate of the state variable X at time t given the information up to t, X̂_t^t. As more and more information is accumulated, i.e., as {Y_{t+i}} (i = 1, 2, ..., s) become known, the estimate of X_t can be improved by making use of the extra available information. Kalman smoothing is a way of getting the estimate of X_t given the information I_T, where T > t:

X̂_t^T = X̂_t^t + J_t (X̂_{t+1}^T - X̂_{t+1}^t)   (9.12)

and

P_t^T = P_t^t + J_t (P_{t+1}^T - P_{t+1}^t) J_t',   (9.13)

where J_t = P_t^t F_{t+1}' (P_{t+1}^t)^{-1}. The two equations given above are often referred to as the Kalman fixed-point smoother.
To obtain X̂_t^T and P_t^T, we first use the Kalman filter to get X̂_t^t and P_t^t for t up to T. Then, using (9.12) and (9.13), X̂_{T-1}^T, X̂_{T-2}^T, ..., X̂_t^T and P_{T-1}^T, P_{T-2}^T, ..., P_t^T can be calculated recursively. The Mathematica function
KalmanSmoothing[filterresult, F]
gives {X̂_t^T, P_t^T}, where filterresult is the output of KalmanFilter, i.e., the filtered estimates {X̂_t^t, P_t^t} for t = m+1, ..., T, and F is the transition matrix in state equation (9.7). Note that if F is time dependent, the second argument of KalmanSmoothing should be {F_{m+2}, F_{m+3}, ..., F_{T+1}}.
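As a concrete sketch of the backward pass (9.12)-(9.13) in the scalar case (our own helper names, not the package's KalmanSmoothing API):

```python
# Scalar Kalman smoother implementing the backward recursions (9.12)-(9.13).
def smooth(filtered, F, Q, c=0.0):
    """filtered: list of (X̂_t^t, P_t^t) pairs from a filtering pass."""
    xs = [x for x, _ in filtered]
    ps = [p for _, p in filtered]
    x_s, p_s = xs[-1], ps[-1]                  # at t = T, smoothed = filtered
    out = [(x_s, p_s)]
    for t in range(len(filtered) - 2, -1, -1):
        x_pred = F * xs[t] + c                 # X̂_{t+1}^t
        p_pred = F * ps[t] * F + Q             # P_{t+1}^t
        j = ps[t] * F / p_pred                 # smoother gain J_t
        x_s = xs[t] + j * (x_s - x_pred)       # (9.12)
        p_s = ps[t] + j * (p_s - p_pred) * j   # (9.13)
        out.insert(0, (x_s, p_s))
    return out

# Filtered estimates of a constant state (F = G = 1, Q = 0, R = 1) from
# four observations equal to 1, starting from X̂ = 0, P = 1:
filtered = [(0.5, 0.5), (2/3, 1/3), (0.75, 0.25), (0.8, 0.2)]
smoothed = smooth(filtered, F=1.0, Q=0.0)
```

Because Q = 0 and F = 1 here, the smoother gain J_t is 1 at every step, so every time point inherits the full-sample estimate (0.8, 0.2): earlier, noisier filtered estimates are sharpened by the later data.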
The Kalman prediction is the calculation of the estimate of the future values X_{t+h} (h > 0) based on the current information I_t, i.e., the calculation of X̂_{t+h}^t. It is easy to see that

X̂_{t+h}^t = F_{t+h} X̂_{t+h-1}^t + c_{t+h}

and

P_{t+h}^t = F_{t+h} P_{t+h-1}^t F_{t+h}' + Q_{t+h}.

So starting from X̂_t^t and P_t^t obtained from the Kalman filtering, the above equations can be iterated to get X̂_{t+h}^t and P_{t+h}^t. It is straightforward to see that the prediction for Y is

Ŷ_{t+h}^t = G_{t+h} X̂_{t+h}^t + d_{t+h}

and the corresponding mean square error f_{t+h}^t is

f_{t+h}^t = G_{t+h} P_{t+h}^t G_{t+h}' + R_{t+h}.
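The prediction recursions above amount to iterating the state equation without updates. A scalar Python sketch (our own helper, not the package's KalmanPredictor API):

```python
# h-step-ahead prediction for a scalar state-space model: iterate the
# state recursion from the filtered estimate, and map each predicted
# state into a predicted observation and its mean square error.
def predict(x, p, F, Q, G, R, c=0.0, d=0.0, h=1):
    """x, p: filtered estimate X̂_t^t and its mean square error P_t^t."""
    preds = []
    for _ in range(h):
        x = F * x + c       # X̂_{t+j}^t
        p = F * p * F + Q   # P_{t+j}^t
        y = G * x + d       # predicted observation Ŷ_{t+j}^t
        f = G * p * G + R   # its mean square error f_{t+j}^t
        preds.append((x, p, y, f))
    return preds

# Two steps ahead from X̂_t^t = 0.8, P_t^t = 0.2 in a random-walk model:
ahead = predict(0.8, 0.2, F=1.0, Q=0.5, G=1.0, R=1.0, h=2)
```

For a random walk (F = 1, c = 0) the predicted state stays at 0.8 while its mean square error grows by Q = 0.5 per step, reflecting the increasing uncertainty of longer horizons.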
The function
KalmanPredictor[, F, Q, c, h],
when the known matrices and vectors are time independent, or with time-dependent F, Q, and c given as lists of matrices and vectors (as for KalmanFilter) when they are not, gives the next h predicted values X̂_{t+j}^t and their mean square errors P_{t+j}^t for j = 1, 2, ..., h. Again, the argument c can be omitted if it is always 0.