
1.11 Examples of Analysis of Time Series

We illustrate the procedures and Mathematica functions outlined in earlier sections by considering a few examples of time series analysis. The reader should keep in mind that in each of the following examples the interpretation of the data and the model fitted are by no means unique; often there can be several equally valid interpretations consistent with the data. No package can be a substitute for experience and good judgment.
In[1]:=
In[2]:=
As we have emphasized earlier, the first thing to do in analyzing a time series is to look at its time plot.
In[3]:=
Out[3]=
We see that the series rises sharply with time, clearly signaling the nonstationary nature of the series, and indicating the need to transform the data to make it stationary. Here is the time plot of the series after the logarithmic transformation.
In[4]:=
Out[4]=
The variability in the series has been reduced. However, a rising trend still persists. We difference the series to obtain a constant mean series.
In[5]:=
In[6]:=
Out[6]=
Now that the mean appears steady, we assume the series has a constant mean (the reader is encouraged to devise tests to check this) and evaluate it.
In[7]:=
Out[7]=
We subtract this sample mean from the series to make it a zero-mean series.
In[8]:=
In[9]:=
Out[9]=
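The chain of transformations applied so far can be summarized in a short sketch (illustrative only; data denotes the raw series and all variable names are assumptions):

logdata = Log[data];                          (* stabilize the variance *)
diff = Drop[logdata, 1] - Drop[logdata, -1];  (* first difference: x[t] - x[t-1] *)
mean = Apply[Plus, diff]/Length[diff];        (* sample mean of the differenced series *)
data1 = diff - mean;                          (* zero-mean series used below *)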
Now we can fit the data to a stationary time series model. In order to select an appropriate model, we first look at the sample correlation and partial correlation functions.
In[10]:=
In[11]:=
In[12]:=
Out[12]=
In[13]:=
Out[13]=
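A sketch of this step, assuming the zero-mean series is stored in data1 and using an illustrative maximum lag of 40:

corr = CorrelationFunction[data1, 40];          (* sample correlation function *)
pcorr = PartialCorrelationFunction[data1, 40];  (* sample partial correlation function *)
ListPlot[corr]                                  (* and similarly for pcorr *)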
We see that both the correlation and partial correlation functions decay quickly to small values. This indicates that the series can be fitted to models of relatively low order. We use the Hannan-Rissanen procedure to select models and get preliminary estimates.
In[14]:=
Out[14]=
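A sketch of the selection step, with illustrative orders: a long AR order of 10, maximum AR and MA orders of 5, and (assuming the five-argument form of HannanRissanenEstimate) the five best candidate models retained:

(* candidate models with preliminary parameter estimates, ranked by AIC *)
hrmodels = HannanRissanenEstimate[data1, 10, 5, 5, 5]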
These selected models with their estimated parameters are used as input to the function that performs the conditional maximum likelihood estimate.
In[15]:=
Out[15]=
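A sketch of this refinement, in which each preliminary model serves as the starting point for ConditionalMLEstimate:

(* conditional maximum likelihood refit of every candidate model *)
cmlmodels = Map[ConditionalMLEstimate[data1, #] &, hrmodels]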
We obtain the AIC values for the five models, AR(1), MA(1), AR(2), MA(2), and ARMA(2, 1), specified above.
In[16]:=
Out[16]=
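A sketch of the comparison, using the pack's AIC function, which takes a fitted model and the length of the series:

(* smaller AIC values indicate preferred models *)
Map[AIC[#, Length[data1]] &, cmlmodels]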
We see that the lowest AIC values correspond to MA(1) and MA(2) models. We choose these two models for further exploration using the maximum likelihood method.
In[17]:=
Out[17]=
In[18]:=
Out[18]=
In[19]:=
Out[19]=
In[20]:=
Out[20]=
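A sketch of the exact maximum likelihood step for the MA(1) model. Following the pack's FindMinimum-style convention (an assumption here), the parameter is given as a symbol together with two starting values taken near the conditional ML estimate; the numbers below are illustrative stand-ins for those estimates:

ma1 = MLEstimate[data1, MAModel[{theta1}, 1], {theta1, {0.3, 0.32}}]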
Again MA(1) is superior according to the AIC criterion. Note that in this particular example, the estimated parameters using the maximum likelihood method are very close to those obtained using the conditional maximum likelihood method. We now tentatively choose MA(1) as our model and proceed to check its adequacy.
From (5.6), the variance of the sample correlation of an MA(1) model at lags k > 1 can be calculated.
In[21]:=
Out[21]=
In[22]:=
Out[22]=
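This is the Bartlett approximation: for an MA(1) process, Var(ρ̂(k)) ≈ (1 + 2ρ(1)²)/n for k > 1. A sketch of the resulting two-standard-error bound, with ma1 denoting the fitted MA(1) model:

rho1 = CorrelationFunction[ma1, 1][[2]];      (* theoretical rho(1) of the model *)
bound = 2 Sqrt[(1 + 2 rho1^2)/Length[data1]]  (* rough bound for lags k > 1 *)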
The sample correlation function shown above is within this bound with the exception of the value at lag 5. So the behavior of the sample correlation function obtained from the data is consistent with the MA(1) model. Next we calculate the residuals and see if they form a white noise process.
In[23]:=
In[24]:=
Out[24]=
We see that the correlation function of the residuals resembles that of a white noise. The portmanteau test also confirms the adequacy of the MA(1) model.
In[25]:=
Out[25]=
In[26]:=
Out[26]=
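A sketch of this test, assuming the residual series is stored in res and using an illustrative maximum lag of 20; the statistic is referred to a chi-square distribution whose degrees of freedom are 20 minus the number of fitted ARMA parameters:

(* portmanteau statistic based on the first 20 residual correlations *)
PortmanteauStatistic[res, 20]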
So we finally arrive at the model (1 - B) ln X_t = 0.029 + Z_t + 0.31 Z_{t-1}, where the noise {Z_t} has variance 0.024.
We can get the standard error of the estimated coefficient θ_1 by calculating the asymptotic variance of the estimator.
In[27]:=
Out[27]=
In[28]:=
Out[28]=
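For an MA(1) model the asymptotic variance of the estimator is (1 - θ_1²)/n, so with the fitted value θ_1 = 0.31 the standard error follows directly; a sketch:

(* asymptotic standard error of the MA(1) coefficient *)
Sqrt[(1 - 0.31^2)/Length[data1]]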
The disadvantage of choosing an MA(1) model is that it cannot account for the relatively large value of the sample correlation at lag 5. To account for this large correlation a higher-order model is needed. For example, the AR(5) model obtained from the maximum likelihood estimate (with φ_3 fixed to be 0)
ARModel[{0.223396, -0.209017, 0, -0.147311, -0.171337}, 0.0219622]
produces a correlation function that better resembles the sample correlation function. It also has a slightly lower AIC value than the MA(1) model.
In[29]:=
In[30]:=
Out[30]=
In[31]:=
Out[31]=
The clear periodic structure in the time plot of the data is reflected in the correlation plot. The pronounced peaks at lags that are multiples of 12 indicate that the series has a seasonal period s = 12. We also observe that the correlation function decreases rather slowly to zero, both at the seasonal lags 12k (k = 1, 2, ...) and at lags that are not multiples of 12. This suggests that the series may be nonstationary and that both seasonal and regular differencing may be required.
In[32]:=
In[33]:=
Out[33]=
In[34]:=
Out[34]=
The single prominent dip in the correlation function at lag 12 shows that the seasonal component probably persists only in the MA part, so Q = 1. This is also consistent with the behavior of the partial correlation function, which has dips at lags that are multiples of the seasonal period 12. The correlation plot also suggests that the orders of the regular AR and MA parts are small, since the correlation is prominent only at k = 1. Based on these observations we consider models with P = 0, Q = 1 and p ≤ 1, q ≤ 1; the reader is encouraged to try higher-order models. We will estimate the parameters of these models using the conditional maximum likelihood method. Specifically, we consider the following three models.
In[35]:=
The initial values -0.3 and -0.5 for the parameters are guessed from looking at the sample correlation and partial correlation functions. For an AR(1) or MA(1) model, ρ(1) is proportional to φ_1 or θ_1, so we pick our initial value for φ_1 or θ_1 to be -0.3. For the partial correlation function of an MA(1) model we have φ_{k+1,k+1}/φ_{k,k} ≈ -θ_1 (see Example 2.11). From our sample partial correlation function we see that this ratio is about 0.5, so we choose the initial value of Θ_1 to be -0.5. Before estimating the parameters we first make sure that the mean value of the series is zero.
In[36]:=
Out[36]=
In[37]:=
In[38]:=
Out[38]=
In[39]:=
Out[39]=
In[40]:=
Out[40]=
We drop the first model because it has a higher AIC value. We use the exact maximum likelihood method to get better estimates for the parameters of the second and third models.
In[41]:=
Out[41]=
In[42]:=
Out[42]=
In[43]:=
Out[43]=
In[44]:=
Out[44]=
So the second model, SARIMA(0, 1, 1)(0, 1, 1)_12, is marginally superior and we choose it to be our model. We check to see if the model is invertible.
In[45]:=
Out[45]=
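A sketch of this check, assuming the fitted model is stored in mamodel and using the pack's InvertibleQ predicate:

(* True when all zeros of the MA polynomial lie outside the unit circle *)
InvertibleQ[mamodel]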
To test the adequacy of the model we first calculate the residuals.
In[46]:=
In[47]:=
Out[47]=
We see that the residual correlation is within the rough bound 2/√n ≈ 0.1. Even though the asymptotic standard deviations of residual correlations can be smaller than 1/√n for small lags, our residual correlations for small lags are much smaller than the rough bound 0.1.
In[48]:=
Out[48]=
In[49]:=
Out[49]=
So we conclude that our model is (1 - B)(1 - B^12) X_t = 0.2835 + (1 - 0.326 B)(1 - 0.663 B^12) Z_t, where the white noise {Z_t} has variance 2306.8.
To estimate the standard errors of the estimated parameters we invert the estimated information matrix.
In[50]:=
Out[50]=
In[51]:=
Out[51]=
The standard errors are about 0.05 for θ_1 and 0.04 for Θ_1.
To forecast the future values we first get the predictors for the differenced, zero-mean series data.
In[52]:=
We have used the option Exact -> False since n is large, and hence using the approximate predictor makes little difference in the predicted values. To get the predicted values for the original series we use the function IntegratedPredictor.
In[53]:=
Out[53]=
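A sketch of the two steps, with illustrative names: diffdata is the differenced zero-mean series, mamodel the fitted model, and 12 future values are predicted. The argument pattern shown for IntegratedPredictor is an assumption based on the surrounding text (regular and seasonal differencing orders {1, 1} with season 12):

pred = BestLinearPredictor[diffdata, mamodel, 12, Exact -> False];
IntegratedPredictor[data, {1, 1}, 12, pred]   (* undo the differencing *)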
The mean square errors of the prediction are obtained by using the original series and the full model. In this case the full model is obtained by replacing the differencing orders {0, 0} in mamodel (see Out[44]) by {1, 1}.
In[54]:=
Out[54]=
In[55]:=
Out[55]=
We can also define a function, say, plotdata, to plot the data and the predicted values together.
In[56]:=
In[57]:=
In[58]:=
Out[58]=
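A sketch of such a function: it joins the observed values and the forecasts into a single joined plot, with a vertical grid line marking the forecast origin:

plotdata[data_, pred_] :=
  ListPlot[Join[data, pred], PlotJoined -> True,
    GridLines -> {{Length[data]}, None}]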
We can see that the series oscillates with an approximate period of 10. In order to make the variance more uniform, we first take the logarithm (base 10) of the data and subtract out the mean.
In[59]:=
Out[59]=
In[60]:=
Out[60]=
In[61]:=
In[62]:=
Out[62]=
Compared with the original plot the time plot now looks more symmetric. We proceed to calculate and plot the correlation and partial correlation functions.
In[63]:=
Out[63]=
In[64]:=
Out[64]=
We notice that, like the time series itself, the correlation function oscillates with an approximate period of 10. Also, the slowly decaying behavior of the correlation signals that the series may be nonstationary, or that the AR polynomial φ(x) has zeros very close to the unit circle. As a preliminary step we use the Hannan-Rissanen method to select possible models. Since the period here is close to 10, we choose the long AR order and the maximum AR and MA orders to be greater than 10.
In[65]:=
In[66]:=
Out[66]=
There are two models that have much lower AIC values than the rest.
In[67]:=
Out[67]=
In[68]:=
Out[68]=
In[69]:=
Out[69]=
We further estimate the model parameters using the conditional maximum likelihood method. Note that the above models are used as the input to the function ConditionalMLEstimate.
In[70]:=
Out[70]=
In[71]:=
Out[71]=
We can also try to estimate some nearby models to see if they give a lower AIC value. For example, we can try AR(12), ARMA(10, 1), etc. It turns out that AR(12) has a lower AIC value. The reader can show that AR(13) and ARMA(10, 1) yield higher AIC values than AR(12).
In[72]:=
Out[72]=
In[73]:=
Out[73]=
The correlation and the partial correlation function of the AR(12) model resemble those obtained from data1 (see earlier plots).
In[74]:=
Out[74]=
In[75]:=
Out[75]=
To test the adequacy of the model we first calculate the residuals and their correlation function.
In[76]:=
In[77]:=
Out[77]=
The correlation function of the residuals is well within the bound of ±2/√n.
In[78]:=
Out[78]=
In[79]:=
Out[79]=
To get the standard errors of the estimated parameters we first get their asymptotic covariance matrix. This can be done either by inverting the information matrix or using the function AsymptoticCovariance. Here we use the latter method.
In[80]:=
In[81]:=
Out[81]=
In[82]:=
Out[82]//TableForm=
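A sketch of both routes (the argument patterns and the normalization by n are assumptions; ar12 denotes the fitted AR(12) model):

(* route 1: invert the estimated information matrix *)
cov = Inverse[InformationMatrix[data1, ar12]]/Length[data1];
se = Sqrt[Table[cov[[i, i]], {i, Length[cov]}]]  (* standard errors *)
(* route 2: use AsymptoticCovariance on the fitted model instead *)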
So we can safely regard φ_5, φ_6, φ_7, φ_8, and φ_10 as zero. However, we will use the model ar12 to forecast the future values.
In[83]:=
Out[83]=
In[84]:=
Out[84]=
In[85]:=
In[86]:=
Out[86]=
In[87]:=
Out[87]=
In[88]:=
Out[88]=
Since we do not see a convincing systematic trend, we transform the series to zero-mean by simply subtracting the sample mean of the series.
In[89]:=
Out[89]=
In[90]:=
In[91]:=
In[92]:=
Out[92]=
In[93]:=
Out[93]=
Since the second (AR(3)) and third (ARMA(2, 1)) models have the smallest AIC values, we choose them to be our preliminary models. The conditional maximum likelihood method gives the following parameter estimates.
In[94]:=
Out[94]=
In fact, it is unnecessary to further estimate the parameters of the AR(3) model as we have done above, since the Hannan-Rissanen method and the conditional maximum likelihood method give identical parameter estimates for an AR(p) model. The reader should verify this.
In[95]:=
Out[95]=
Since the ARMA(2, 1) model has a lower AIC value, we choose it as our model and plot the residual correlation functions and the cross-correlation function.
In[96]:=
In[97]:=
In[98]:=
Out[98]=
In[99]:=
Out[99]=
In[100]:=
Out[100]=
The residual correlation functions resemble those of a white noise (since the correlations at nonzero lags stay within the bound 2/√n). The model also passes the portmanteau test.
In[101]:=
Out[101]=
In[102]:=
Out[102]=
In[103]:=
In[104]:=
Out[104]=
These are the standard errors of the estimated parameters {(Φ_1)_11, (Φ_1)_12, (Φ_1)_21, ..., (Θ_1)_22}. We can display the estimated parameters and their standard errors together using TableForm.
In[105]:=
Out[105]//TableForm=
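A sketch of the display, assuming the flattened parameter estimates and their standard errors are stored in the lists params and se:

TableForm[Transpose[{params, se}],
  TableHeadings -> {None, {"estimate", "standard error"}}]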
Again we can set several elements of the parameter matrices to zero. We find the forecast values for the next 15 time steps.
In[106]:=
In[107]:=
In[108]:=
Out[108]=
In[109]:=
Out[109]=
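A sketch of the forecasting step, with illustrative names (data2 is the zero-mean bivariate series and arma21 the fitted ARMA(2, 1) model); the sample mean subtracted earlier must be added back to these predicted values:

(* best linear predictor of the next 15 values *)
pred = BestLinearPredictor[data2, arma21, 15]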

