Legacy Documentation

Time Series (2011)


2.2 Analysis of ARMA Time Series

Given a time series, we can analyze its properties. Since many algorithms for estimating model parameters assume a zero-mean, stationary process, the data often need to be transformed to meet these assumptions. The function ListDifference performs the appropriate differencing for ARIMA or SARIMA data. Note that all time series data should be input as a list of the form {x1, x2, ...}, where xi is a number for a scalar time series and is itself a list, xi = {xi1, xi2, ..., xim}, for an m-variate time series.
ListDifference[data, d]
difference data d times
ListDifference[data, {d, D}, s]
difference data d times with period 1 and D times with period s

Sample mean and transformations of data.
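As a brief sketch of how this might look in a session (the application context name and the sample data below are assumptions made for illustration):

(* load the Time Series application; the context name is assumed here *)
Needs["TimeSeries`"]

(* a short scalar series, invented purely for illustration *)
data = {1.2, 1.9, 2.7, 3.1, 4.6, 5.2, 6.8, 7.3, 8.1, 9.0};

(* difference once to remove a linear trend *)
ListDifference[data, 1]

(* difference once with period 1 and once with period 4, as for quarterly SARIMA data *)
ListDifference[data, {1, 1}, 4]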

After appropriate transformations of the time series data, we can calculate the properties of the series. The sample covariance function of a zero-mean time series of n observations is defined to be γ̂(k) = (1/n) Σ_{t=1}^{n-k} x_{t+k} x_t. The sample correlation function is the sample covariance function of the corresponding standardized series, and the sample partial autocorrelation function is defined here as the last coefficient in a Levinson-Durbin estimate of the AR coefficients. The sample power spectrum is the Fourier transform of the sample covariance function, f̂(ω) = (1/(2π)) Σ_{k=-(n-1)}^{n-1} γ̂(k) e^{-ikω}. The smoothed spectrum using the spectral window {W(0), W(1), ..., W(M)} is defined to be f̂_W(ω_j) = Σ_{k=-M}^{M} W(k) f̂(ω_{j+k}), where W(k)=W(-k) and ω_j = 2πj/n, while the smoothed spectrum using the lag window {λ(0), λ(1), ..., λ(M)} is defined by f̂_λ(ω) = (1/(2π)) Σ_{k=-M}^{M} λ(k) γ̂(k) e^{-ikω}, where λ(k)=λ(-k).
CovarianceFunction[data, n] CovarianceFunction[data1, data2, n]
give the sample covariance function of data or the cross-covariance function of data1 and data2 up to lag n
CorrelationFunction[data, n] CorrelationFunction[data1, data2, n]
give the sample correlation function of data or the cross-correlation function of data1 and data2 up to lag n
PartialCorrelationFunction[data, n]
give the sample partial correlation function of data up to lag n
Spectrum[data] Spectrum[data1, data2]
give the sample power spectrum of data or the cross-spectrum of data1 and data2
SmoothedSpectrumS[spectrum, window]
give the smoothed spectrum using the spectral window window
SmoothedSpectrumL[cov, window, ω]
give the smoothed spectrum as a function of ω using the lag window window

Properties of observed data.
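For example, the functions above might be applied to a zero-mean version of the illustrative series defined earlier (this sketch assumes the application is already loaded; the lag and window weights are arbitrary choices):

(* subtract the sample mean so the series is approximately zero-mean *)
data1 = data - Mean[data];

(* sample covariance, correlation, and partial correlation functions up to lag 5 *)
CovarianceFunction[data1, 5]
CorrelationFunction[data1, 5]
PartialCorrelationFunction[data1, 5]

(* sample power spectrum, and a smoothed version using a spectral window with M = 1 *)
spec = Spectrum[data1];
SmoothedSpectrumS[spec, {0.5, 0.25}]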

When fitting a given set of data to a particular ARMA-type model, the orders have to be selected first. Usually the sample partial correlation or sample correlation function gives an indication of the order of an AR or an MA process. An AIC or BIC criterion can also be used to select a model. The AIC criterion chooses p and q to minimize AIC(p, q) = ln σ̂² + 2(p+q)/n when fitting the time series of length n to an ARMA(p, q) model (here σ̂² is the noise variance estimate, usually found via maximum likelihood estimation). The BIC criterion seeks to minimize BIC(p, q) = ln σ̂² + (p+q) ln(n)/n.
AIC[model, n]
give the AIC value of model fitted to data of length n
BIC[model, n]
give the BIC value of model fitted to data of length n

AIC and BIC values.
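For instance, two candidate models for a series of length 100 might be compared as follows (the ARModel and ARMAModel expressions and their coefficients are illustrative assumptions; the model with the smaller criterion value is preferred):

model1 = ARModel[{0.9}, 1.0];
model2 = ARMAModel[{0.5}, {0.4}, 1.0];

(* smaller AIC or BIC indicates the preferred model *)
{AIC[model1, 100], AIC[model2, 100]}
{BIC[model1, 100], BIC[model2, 100]}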

Given a model, various methods exist to fit the appropriately transformed data to it and estimate the parameters. HannanRissanenEstimate uses the Hannan-Rissanen procedure to both select orders and perform parameter estimation. As in the long AR method, the data are first fitted to an AR(k) process, where k (less than some given kmax) is chosen by the AIC criterion. The orders p and q are then selected among all p ≤ Min[pmax, k] and q ≤ qmax using BIC.
YuleWalkerEstimate[data, p]
give the Yule-Walker estimate of AR(p) model
LevinsonDurbinEstimate[data, p]
give the Levinson-Durbin estimate of AR(i) model for i = 1, 2, ..., p
BurgEstimate[data, p]
give the Burg estimate of AR(i) model for i = 1, 2, ..., p
InnovationEstimate[data, q]
give the innovation estimate of MA(i) model for i = 1, 2, ..., q
LongAREstimate[data, k, p, q]
give the estimate of ARMA(p, q) model by first finding the residuals from AR(k) process
HannanRissanenEstimate[data, kmax, pmax, qmax]
give the estimate of the model with the lowest BIC value
HannanRissanenEstimate[data, kmax, pmax, qmax, n]
give the estimate of the n models with the lowest BIC values

Estimation of ARMA models.
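A sketch of how these estimators might be used on simulated data follows; the generator call TimeSeries[model, n] and the model coefficients are assumptions made for illustration:

(* the Time Series application is assumed to be loaded, as above *)
SeedRandom[1];
data = TimeSeries[ARModel[{1.2, -0.5}, 1.0], 200];   (* generator syntax assumed *)

(* Yule-Walker and Burg estimates of AR models up to order 2 *)
YuleWalkerEstimate[data, 2]
BurgEstimate[data, 2]

(* order selection and preliminary estimation by the Hannan-Rissanen procedure *)
HannanRissanenEstimate[data, 10, 4, 4]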

MLEstimate gives the maximum likelihood estimate of an ARMA-type model by maximizing the exact likelihood function, which is calculated using the innovations algorithm. The built-in function FindMinimum is used, and the same options apply. Two initial values are needed for each parameter; they are usually taken from the results of the various estimation methods given above. Since finding the exact maximum likelihood estimate is generally slow, a conditional likelihood is often used instead. ConditionalMLEstimate gives an estimate of an ARMA model by maximizing the conditional likelihood using the Levenberg-Marquardt algorithm.
ConditionalMLEstimate[data, p]
fit an AR(p) model to data using the conditional maximum likelihood method
ConditionalMLEstimate[data, model]
fit model to data using the conditional maximum likelihood method with initial values of parameters as the arguments of model
MLEstimate[data, model, {θ1, {θ11, θ12}}, ...]
fit model to data using the maximum likelihood method, where each parameter θi of model is given two initial values {θi1, θi2}
LogLikelihood[data, model]
give the logarithm of Gaussian likelihood for the given data and model

Maximum likelihood estimations and the logarithm of Gaussian likelihood.

option name    default value
MaxIterations    30
maximum number of iterations in searching for minimum

Option for ConditionalMLEstimate.
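Continuing with the simulated AR(2) data, a sketch of conditional and exact maximum likelihood fitting might look as follows; the symbolic parameter names and the starting values are illustrative assumptions:

(* conditional maximum likelihood fit of an AR(2) model *)
fit = ConditionalMLEstimate[data, 2]

(* exact maximum likelihood fit of an AR(2) model; each symbolic parameter
   is given two initial values, as required by FindMinimum *)
MLEstimate[data, ARModel[{phi1, phi2}, 1.0], {phi1, {1.0, 1.1}}, {phi2, {-0.6, -0.4}}]

(* the logarithm of the Gaussian likelihood of the conditional fit *)
LogLikelihood[data, fit]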

Let β = (φ1, φ2, ..., φp, θ1, θ2, ..., θq) be the parameters of a stationary and invertible ARMA(p, q) model and β̂ the maximum likelihood estimator of β. Then, as n→∞, √n(β̂ - β) is asymptotically normally distributed with mean zero and covariance matrix V. For a univariate ARMA model, AsymptoticCovariance[model] calculates the asymptotic covariance V from model. The function InformationMatrix[data, model] gives the estimated asymptotic information matrix, whose inverse can be used as an estimate of the asymptotic covariance.
AsymptoticCovariance[model]
give the covariance matrix V of the asymptotic distribution of the maximum likelihood estimators
InformationMatrix[data, model]
give the estimated asymptotic information matrix

Asymptotic covariance and information matrix.
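For example, continuing with the fitted AR(2) model from the previous sketch, the two quantities might be compared as follows (same illustrative assumptions as above):

(* asymptotic covariance matrix V implied by the fitted model *)
AsymptoticCovariance[fit]

(* the inverse of the estimated information matrix estimates the same covariance *)
Inverse[InformationMatrix[data, fit]]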

There are various ways to check the adequacy of a chosen model. The residuals of a fitted ARMA(p, q) process are defined by ŵ_t = (x_t - x̂_t)/√r_{t-1}, t = 1, 2, ..., n, where x̂_t is the one-step-ahead best linear predictor of x_t and r_{t-1} its scaled mean square error. One can infer the adequacy of a model by looking at the behavior of the residuals. The portmanteau test uses the statistic Q_h = n(n+2) Σ_{k=1}^{h} ρ̂²(k)/(n-k), where ρ̂ is the sample correlation function of the residuals; Q_h is approximately chi-squared with h-p-q degrees of freedom. Q_h for an m-variate time series is defined similarly and is approximately chi-squared with m²(h-p-q) degrees of freedom.
Residual[data, model]
give the residuals of fitting model to data
PortmanteauStatistic[residual, h]
calculate the portmanteau statistic Qh from residual

Residuals and test statistic.
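A sketch of diagnostic checking for the fitted AR(2) model, with the lag h = 20 chosen arbitrarily for illustration:

(* residuals of the fitted model; their correlations should resemble white noise *)
res = Residual[data, fit];
CorrelationFunction[res, 20]

(* portmanteau statistic, compared with the 95% chi-squared quantile
   with h - p - q = 20 - 2 - 0 degrees of freedom *)
PortmanteauStatistic[res, 20]
Quantile[ChiSquareDistribution[18], 0.95]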

After establishing the adequacy of a model, we can proceed to forecast future values of the series. The best linear predictor is defined as the linear combination of observed data points that has the minimum mean square distance from the true value. BestLinearPredictor gives the exact best linear predictions and their mean square errors using the innovations algorithm. When the option Exact is set to False, the approximate best linear predictor is calculated. For an ARIMA or SARIMA series {Xt} with a constant term, the prediction of future values of {Xt} can be obtained from the predicted values {Ŷt} of {Yt}, where Yt = (1-B)^d (1-B^s)^D Xt, using IntegratedPredictor.
BestLinearPredictor[data, model, n]
give the prediction of model for the next n values and their mean square errors
IntegratedPredictor[xlist, {d, D}, s, yhatlist]
give the predicted values of {Xt} from the predicted values yhatlist

Predicting time series.

option name    default value
Exact    True
whether to calculate exactly

Option for BestLinearPredictor.
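The following sketch illustrates prediction for a seasonally differenced series; the names xdata and model, and the assumption that BestLinearPredictor returns the predicted values together with their mean square errors, are made for this example:

(* xdata is a raw quarterly series; model has been fitted to its differenced form *)
ydata = ListDifference[xdata, {1, 1}, 4];

(* exact best linear prediction of the next 8 values of the differenced series;
   the result is assumed to be {predicted values, mean square errors} *)
{yhat, mse} = BestLinearPredictor[ydata, model, 8];

(* the approximate predictor can be faster for long series *)
BestLinearPredictor[ydata, model, 8, Exact -> False];

(* recover predictions for the original, undifferenced series *)
IntegratedPredictor[xdata, {1, 1}, 4, yhat]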
