Legacy Documentation

Time Series (2011)


1.2.5 Multivariate ARMA Models

In some cases, at each time t, several related quantities are observed and, therefore, we want to study these quantities simultaneously by grouping them together to form a vector. By so doing we have a vector or multivariate process. It is straightforward to generalize the definition of a univariate ARMA model to the multivariate case. Let X_t = (X_t1, X_t2, ..., X_tm)′ and Z_t = (Z_t1, Z_t2, ..., Z_tm)′ be m-dimensional random vectors (here ′ denotes transpose). A zero-mean, m-variate ARMA(p, q) model is defined by

X_t − Φ_1 X_(t-1) − ... − Φ_p X_(t-p) = Z_t + Θ_1 Z_(t-1) + ... + Θ_q Z_(t-q),  (2.11)

where the AR and MA coefficients {Φ_i} (i=1, 2, ..., p) and {Θ_i} (i=1, 2, ..., q) are all real m×m matrices and the zero-mean white noise {Z_t} is characterized by a covariance matrix Σ. Again, it is useful to define the matrix AR polynomial Φ(x) and the matrix MA polynomial Θ(x) by

Φ(x) = I − Φ_1 x − Φ_2 x^2 − ... − Φ_p x^p

and

Θ(x) = I + Θ_1 x + Θ_2 x^2 + ... + Θ_q x^q,

where I denotes the m×m identity matrix. Now (2.11) can be written as Φ(B) X_t = Θ(B) Z_t, where B is the backward shift operator.
As in the univariate case, a multivariate ARMA(p, q) model is represented by the object
ARMAModel[{Φ_1, Φ_2, ..., Φ_p}, {Θ_1, Θ_2, ..., Θ_q}, Σ]
where each parameter matrix must be entered according to Mathematica convention. For example, the AR(1) model

X_t = Φ_1 X_(t-1) + Z_t, with Φ_1 = {{φ_11, φ_12}, {φ_21, φ_22}},

with noise covariance Σ (Σ_ij = σ_ij) is represented as
ARModel[{{{φ_11, φ_12}, {φ_21, φ_22}}}, {{σ_11, σ_12}, {σ_21, σ_22}}].
The various quantities defined in the univariate case can be extended to the multivariate case. We proceed to do this and illustrate how they are computed.

Stationarity and Invertibility

The definitions of mean, variance, and covariance, and the stationarity condition can be extended straightforwardly to a multivariate process. Now the mean is a vector and the variance and covariance are matrices. They are defined as follows:
mean: μ(t) = E X_t,
variance: Γ(t) = Var(X_t) = E (X_t − μ(t)) (X_t − μ(t))′, and
covariance: Γ(s, r) = Cov(X_s, X_r) = E (X_s − μ(s)) (X_r − μ(r))′.
The stationarity condition for a multivariate ARMA model can be translated into the algebraic condition that all the roots of the determinantal equation det Φ(x) = 0 lie outside the unit circle.
Solving the determinantal equation det Φ(x) = 0 for the bivariate AR(1) model of Example 2.12, we find that both roots lie outside the unit circle, so we conclude that the model is stationary.
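The root check behind this conclusion can be sketched outside Mathematica as well. As an illustration in Python with NumPy, using a made-up bivariate AR(1) coefficient matrix (not the model of Example 2.12):

```python
import numpy as np

# Hypothetical bivariate AR(1) coefficient matrix (an assumption,
# not the matrix from Example 2.12).
phi1 = np.array([[0.5, 0.2],
                 [0.1, 0.4]])

# For a 2 x 2 coefficient matrix,
#   det Phi(x) = det(I - phi1 x) = 1 - tr(phi1) x + det(phi1) x^2.
# The model is stationary when all roots lie outside the unit circle.
coeffs = [np.linalg.det(phi1), -np.trace(phi1), 1.0]  # highest power first
roots = np.roots(coeffs)

stationary = bool(np.all(np.abs(roots) > 1.0))
```

For this particular matrix the two roots are 5/3 and 10/3, both outside the unit circle, so the sketched model is stationary.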
In fact, all the Mathematica functions introduced previously for the univariate case have been designed so that they can be used with appropriate input for multivariate time series as well. Specifically, here we use the function StationaryQ directly to check the stationarity of the bivariate AR(1) model of Example 2.12.
Similarly, a multivariate ARMA model is invertible if all the roots of det Θ(x) = 0 are outside the unit circle. The invertibility of a model can be checked using InvertibleQ as in the univariate case.
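The same determinantal-root computation applies on the MA side. As a hedged sketch in Python with NumPy (the matrix below is invented for illustration):

```python
import numpy as np

# Hypothetical bivariate MA(1) coefficient matrix (invented for illustration).
theta1 = np.array([[-0.6, 0.3],
                   [ 0.2, 0.5]])

# For a 2 x 2 coefficient matrix,
#   det Theta(x) = det(I + theta1 x) = 1 + tr(theta1) x + det(theta1) x^2.
# The model is invertible when all roots lie outside the unit circle.
coeffs = [np.linalg.det(theta1), np.trace(theta1), 1.0]  # highest power first
ma_roots = np.roots(coeffs)

invertible = bool(np.all(np.abs(ma_roots) > 1.0))
```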
A stationary ARMA model can be expanded as an MA(∞) model X_t = Σ_{j=0}^∞ Ψ_j Z_(t-j) with Ψ(B) = Φ^(-1)(B) Θ(B); similarly, an invertible ARMA model can be expressed as an AR(∞) model Σ_{j=0}^∞ Π_j X_(t-j) = Z_t with Π(B) = Θ^(-1)(B) Φ(B). Again, the function ToARModel[model, p] can be used to obtain the order p truncation of an AR(∞) expansion. Similarly ToMAModel[model, q] yields the order q truncation of an MA(∞) expansion. {Ψ_j} and {Π_j} are matrices, and they are determined by equating the coefficients of corresponding powers of B in Φ(B) Ψ(B) = Θ(B) and Φ(B) = Θ(B) Π(B), respectively.
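The coefficient-matching recursion that determines the Ψ_j weights can be sketched as follows in Python with NumPy. The ARMA(1, 1) parameter matrices are invented for illustration, and psi_weights is a hypothetical helper, not a function from the time series package:

```python
import numpy as np

# Invented bivariate ARMA(1, 1) parameter matrices (illustrative only).
phi1   = np.array([[0.5, 0.2], [0.1, 0.4]])
theta1 = np.array([[0.3, 0.0], [0.0, 0.3]])

def psi_weights(phis, thetas, order):
    """Truncated MA(infinity) weights from Phi(B) Psi(B) = Theta(B):
    Psi_0 = I and Psi_j = Theta_j + sum_{k=1}^{min(j, p)} Phi_k Psi_{j-k},
    where Theta_j = 0 for j > q."""
    p, q = len(phis), len(thetas)
    m = phis[0].shape[0]
    psis = [np.eye(m)]
    for j in range(1, order + 1):
        acc = thetas[j - 1].copy() if j <= q else np.zeros((m, m))
        for k in range(1, min(j, p) + 1):
            acc = acc + phis[k - 1] @ psis[j - k]
        psis.append(acc)
    return psis

psis = psi_weights([phi1], [theta1], order=3)
```

For an ARMA(1, 1) the recursion collapses to Ψ_1 = Φ_1 + Θ_1 and Ψ_j = Φ_1 Ψ_(j-1) for j > 1, which the returned list reflects.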

Covariance and Correlation Functions

The matrix covariance function of a stationary multivariate process is defined by

Γ(k) = Cov(X_(t+k), X_t) = E X_(t+k) X_t′

for a zero-mean process. Note that now Γ(k) ≠ Γ(−k); instead we have Γ(k) = Γ(−k)′. It is easy to see that the ith diagonal element of Γ(k), (Γ(k))_ii = γ_ii(k) = E X_(t+k),i X_t,i, is simply the (auto)covariance of the univariate time series {X_ti}, and (Γ(k))_ij = γ_ij(k) = E X_(t+k),i X_t,j is the cross-covariance of the series {X_ti} and {X_tj}. The matrix correlation function R(k) is defined by

R_ij(k) = γ_ij(k) / (γ_ii(0) γ_jj(0))^(1/2).

Note that unlike the univariate case, we cannot simply divide Γ(k) by Γ(0) to get the correlation function.
We can get the covariance or correlation function of a multivariate ARMA model up to lag h simply by using CovarianceFunction[model, h] or CorrelationFunction[model, h]; the output consists of a list of matrices {Γ(0), Γ(1), ..., Γ(h)} or {R(0), R(1), ..., R(h)}.
These are the covariance matrices {Γ(0), Γ(1), Γ(2), Γ(3)}, and they can be put into a more readable form using TableForm.
Again, the covariance function vanishes for lags greater than q (=1 in this case).
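For a bivariate MA(1), these matrices can be checked by hand from the definition Γ(k) = E X_(t+k) X_t′: expanding X_t = Z_t + Θ_1 Z_(t-1) gives Γ(0) = Σ + Θ_1 Σ Θ_1′, Γ(1) = Θ_1 Σ, and Γ(k) = 0 for k > 1. A sketch in Python with NumPy, using made-up parameter values (not those of Example 2.15):

```python
import numpy as np

# Made-up bivariate MA(1) parameters (not those of Example 2.15).
theta1 = np.array([[0.4, 0.1], [0.2, 0.3]])
sigma  = np.array([[1.0, 0.5], [0.5, 2.0]])  # noise covariance (symmetric PD)

# With Gamma(k) = E X_(t+k) X_t', expanding X_t = Z_t + theta1 Z_(t-1) gives
#   Gamma(0) = sigma + theta1 sigma theta1',  Gamma(1) = theta1 sigma,
# and Gamma(k) = 0 for k > 1.
gamma0 = sigma + theta1 @ sigma @ theta1.T
gamma1 = theta1 @ sigma
gammas = [gamma0, gamma1, np.zeros((2, 2)), np.zeros((2, 2))]

# Analogue of Mapping an extraction over the list of matrices:
# the cross-covariance gamma_12(k) of processes 1 and 2 for k >= 0.
gamma12 = [g[0, 1] for g in gammas]
```

Note that Γ(0) comes out symmetric while Γ(1) in general does not, consistent with Γ(k) = Γ(−k)′.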
Often we want to get the cross-covariance or cross-correlation of two processes i and j by extracting the (i, j) element of each matrix in the output of CovarianceFunction or CorrelationFunction, respectively. Whenever we want to do the same operation on each entry of a list, we can use the Mathematica function Map. For example, to get the cross-covariance function of processes 1 and 2, γ_12(k), for k ≥ 0 in the above bivariate MA(1) model, we can do the following.
Similarly, we can get the autocovariance of process 2 of the bivariate MA model in Example 2.15 by extracting the (2, 2) element of each covariance matrix.
We use Map again to extract the covariance function of process 2.
In defining the bivariate ARMA(1, 1) model of Example 2.16, notice that the AR and MA parameter matrices are enclosed in separate lists and the noise variance is a symmetric positive-definite matrix. First we extract the cross-correlation ρ_12(k) and then plot it.
Here is the cross-correlation plot. The option PlotRange is set to All to prevent the truncation of any large values in the plot.
In contrast to autocorrelation, the cross-correlation at lag 0 is, in general, not equal to 1. Note that we have plotted the cross-correlation only for k ≥ 0. Here we demonstrate how to plot the cross-correlation function from lags −h to h. First we must obtain {ρ_12(−h), ρ_12(−h+1), ..., ρ_12(0), ρ_12(1), ..., ρ_12(h)}. This can be accomplished using the fact that ρ_12(−k) = ρ_21(k).
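The assembly of the two-sided sequence from the one-sided output can be sketched in Python; the numerical matrices below are illustrative stand-ins, and two_sided_cross is a hypothetical helper, not part of the package:

```python
import numpy as np

# Illustrative one-sided covariance output {Gamma(0), ..., Gamma(h)}, h = 2
# (stand-in values, not computed from a particular model).
gammas = [np.array([[1.22, 0.71], [0.71, 2.28]]),
          np.array([[0.45, 0.40], [0.35, 0.70]]),
          np.zeros((2, 2))]

def two_sided_cross(gammas, i, j):
    """gamma_ij(k) for k = -h, ..., h, using gamma_ij(-k) = gamma_ji(k)
    (equivalently Gamma(-k) = Gamma(k)')."""
    neg = [g[j, i] for g in gammas[1:]][::-1]  # k = -h, ..., -1
    pos = [g[i, j] for g in gammas]            # k = 0, ..., h
    return neg + pos

# Processes 1 and 2 correspond to indices 0 and 1.
gamma12_full = two_sided_cross(gammas, 0, 1)
```

The returned list has 2h+1 entries, with the lag-0 value in the middle; the same assembly works for correlations.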
Using gamma12 as an example, it is convenient to define a function, say, plotmulticorr, to plot multivariate correlations. Its arguments should include the multivariate correlation function and the indices i and j indicating which cross-correlation or autocorrelation is to be plotted.
Now we can plot the cross-correlation function of the above example using plotmulticorr.
We can also plot the correlation of series 1 of the bivariate ARMA(1, 1) process of Example 2.16 for lags from -20 to 20 using plotmulticorr.

Partial Correlation Function

The direct extension of the partial correlation function to the multivariate case leads to what are often called partial autoregressive matrices. The partial autoregressive matrix at lag k is the solution Φ_k,k to the Yule-Walker equations of order k. (See Section 1.6 for a description of the Yule-Walker equations and the Levinson-Durbin algorithm.) However, here we will refer to these matrices as the partial correlation function and use PartialCorrelationFunction[model, h] to obtain them up to lag h, but bear in mind that some authors define the partial correlation function for a multivariate process differently; see, for example, Granger and Newbold (1986), p. 246.
Note again that the partial correlation function vanishes for lags greater than p.
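For the order-1 case, under the convention Γ(k) = E X_(t+k) X_t′, the Yule-Walker equation reads Γ(1) = Φ_1,1 Γ(0), so the lag-1 partial autoregressive matrix is Φ_1,1 = Γ(1) Γ(0)^(−1). A sketch in Python with NumPy, using made-up covariance matrices:

```python
import numpy as np

# Made-up lag-0 and lag-1 covariance matrices of a stationary bivariate
# process (illustrative values only).
gamma0 = np.array([[2.0, 0.6], [0.6, 1.5]])
gamma1 = np.array([[0.9, 0.4], [0.3, 0.5]])

# Order-1 Yule-Walker equation (convention Gamma(k) = E X_(t+k) X_t'):
#   Gamma(1) = Phi_{1,1} Gamma(0)  =>  Phi_{1,1} = Gamma(1) Gamma(0)^(-1).
phi11 = gamma1 @ np.linalg.inv(gamma0)

# Consistency check: phi11 reproduces Gamma(1) from Gamma(0).
residual = float(np.max(np.abs(phi11 @ gamma0 - gamma1)))
```

Higher lags require solving the block Yule-Walker system of that order, which the Levinson-Durbin algorithm of Section 1.6 does recursively.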