
Time Series (2011)


1.7.1 Best Linear Predictor

Suppose that the stationary time series model fitted to the data {x1, x2, ... , xn} is known and we would like to predict the future values Xn+1, Xn+2, ... , Xn+h of the series based on the realization of the time series up to time n. The time n is called the origin of the forecast and h the lead time. A linear predictor is a linear combination of {X1, X2, ... , Xn} used to predict future values; the best linear predictor is defined to be the linear predictor with the minimum mean square error. Let X̂n(h) = a0(h)Xn + a1(h)Xn-1 + ... + an-1(h)X1 denote the linear predictor for Xn+h at lead time h with origin n, and en(h) = Xn+h - X̂n(h) the corresponding forecast error. Finding the best linear predictor reduces to finding the coefficients ai(h), i=0, 1, ... , n-1, such that the mean square error E[en(h)^2] is a minimum.
Although the idea is straightforward, the derivation of the best linear predictor is too involved to be presented here. A detailed derivation of the best linear predictor using the projection theorem and the innovations algorithm is provided in Brockwell and Davis (1987), Chapter 5, pp. 159-177.
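Although the full derivation is beyond the scope of this section, the defining condition is easy to state: setting the derivative of the mean square error with respect to each coefficient to zero yields the prediction equations a0(h)γ(i) + a1(h)γ(i-1) + ... + an-1(h)γ(i-n+1) = γ(h+i) for i=0, 1, ... , n-1, where γ denotes the autocovariance function, and the mean square error of the resulting predictor is γ(0) - a0(h)γ(h) - a1(h)γ(h+1) - ... - an-1(h)γ(h+n-1). The following is a minimal sketch that solves these equations directly with LinearSolve for a zero-mean stationary series with known autocovariance function; it is not the innovations algorithm used by the package, and the names bestLinearPredictor and ma1Gamma are illustrative only, not part of the package.

(* a sketch of the prediction equations, solved directly with LinearSolve rather than
   by the innovations algorithm; gamma is the autocovariance function of a zero-mean
   stationary series, n the number of observed values, and h the lead time *)
bestLinearPredictor[gamma_, n_, h_] :=
  Module[{gmat, rhs, coeffs},
    gmat = Table[gamma[i - j], {i, 0, n - 1}, {j, 0, n - 1}];   (* autocovariance matrix *)
    rhs = Table[gamma[h + i], {i, 0, n - 1}];                   (* right-hand sides gamma(h+i) *)
    coeffs = LinearSolve[gmat, rhs];                            (* coefficients ai(h) *)
    {coeffs, gamma[0] - coeffs . rhs}                           (* {coefficients, mean square error} *)
  ]

(* autocovariance function of an MA(1) model: gamma(0) = (1 + theta^2) sigma2,
   gamma(1) = gamma(-1) = theta sigma2, and gamma(k) = 0 for |k| > 1 *)
ma1Gamma[theta_, sigma2_][k_] :=
  Which[k == 0, (1 + theta^2) sigma2, Abs[k] == 1, theta sigma2, True, 0]

(* coefficients and mean square error of the one-step predictor based on n = 4 observations *)
bestLinearPredictor[ma1Gamma[0.5, 1.0], 4, 1]

Passing gamma as a function keeps the sketch independent of any particular model; the autocovariance function of any stationary model can be supplied in its place.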
The function
BestLinearPredictor[data, model, h]
gives the best linear predictions and their mean square errors up to h time steps ahead, based on the finite sample data and the given model. It uses the innovations algorithm to calculate the forecasts and their errors. These errors are obtained under the assumption that the model is known exactly; using estimated model parameters introduces additional error, which is, however, generally negligible when n is large. See the discussion in Harvey (1981), p. 162.
Example 7.1 In Example 6.9 we have fitted an MA(1) model to the data generated in Example 6.6. Use this information to find the best linear prediction for the next five values in the series.
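A minimal sketch of how this computation might be set up is given below. It assumes that the Time Series package has been loaded and that data holds the series generated in Example 6.6; the symbols theta1 and sigma2 stand for the MA(1) parameter estimates obtained in Example 6.9, and the MAModel form follows the model objects used elsewhere in this documentation.

Needs["TimeSeries`TimeSeries`"]       (* load the package; the exact context name may vary *)

model = MAModel[{theta1}, sigma2];    (* fitted MA(1) model from Example 6.9 (placeholder values) *)

{forecasts, mses} = BestLinearPredictor[data, model, 5]
                                      (* predictions and mean square errors for lead times 1, ... , 5 *)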
The first entry of the list returned by BestLinearPredictor is the prediction of the next five values of the series; the second entry contains the mean square errors of these predictions. Note that for an MA(q) model the best linear predictor of Xn+h is zero for h>q, since after q time steps the observed data carry no information about the future value and the best we can predict is the mean value of the series, which is zero.
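The vanishing of the predictor for h>q can also be checked with the prediction-equation sketch given earlier in this section: for an MA(1) model and any lead time h>1 the right-hand side of the equations is the zero vector, so every coefficient is zero and the mean square error equals γ(0).

(* reuses bestLinearPredictor and ma1Gamma from the sketch above; with h = 2 > q = 1
   the coefficients are all zero and the mean square error is gamma(0) = (1 + 0.5^2) 1.0 = 1.25 *)
bestLinearPredictor[ma1Gamma[0.5, 1.0], 4, 2]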