1.2.5 Multivariate ARMA Models

In some cases, at each time t, several related quantities are observed and, therefore, we want to study these quantities simultaneously by grouping them together to form a vector. By so doing we have a vector or multivariate process. It is straightforward to generalize the definition of a univariate ARMA model to the multivariate case. Let X_t = (X_t1, X_t2, ..., X_tm)′ and Z_t = (Z_t1, Z_t2, ..., Z_tm)′ be m-dimensional random vectors (here ′ denotes transpose). A zero-mean, m-variate ARMA(p, q) model is defined by

  X_t - Φ_1 X_{t-1} - ... - Φ_p X_{t-p} = Z_t + Θ_1 Z_{t-1} + ... + Θ_q Z_{t-q},   (2.11)

where the AR and MA coefficients {Φ_i} (i = 1, 2, ..., p) and {Θ_i} (i = 1, 2, ..., q) are all real m×m matrices and the zero-mean white noise {Z_t} is characterized by a covariance matrix Σ. Again, it is useful to define the matrix AR polynomial Φ(x) and the matrix MA polynomial Θ(x) by

  Φ(x) = I - Φ_1 x - ... - Φ_p x^p,
  Θ(x) = I + Θ_1 x + ... + Θ_q x^q,

where I denotes the m×m identity matrix. Now (2.11) can be written as Φ(B)X_t = Θ(B)Z_t, where B is the backward shift operator.
As in the univariate case, a multivariate ARMA(p, q) model is represented by the object ARMAModel[{Φ_1, ..., Φ_p}, {Θ_1, ..., Θ_q}, Σ], where each parameter matrix must be entered according to Mathematica convention. For example, the AR(1) model with noise covariance Σ (Σ_ij = σ_ij) is represented as ARModel[{Φ_1}, Σ] with Φ_1 and Σ entered as m×m matrices. The various quantities defined in the univariate case can be extended to the multivariate case. We proceed to do this and illustrate how they are computed.

Stationarity and Invertibility

The definitions of mean, variance, and covariance, and the stationarity condition can be extended straightforwardly to a multivariate process. Now the mean is a vector and the variance and covariance are matrices. They are defined as follows:

  mean: μ(t) = E X_t,
  variance: Var(X_t) = E(X_t - μ(t))(X_t - μ(t))′,
  covariance: Cov(X_t, X_s) = E(X_t - μ(t))(X_s - μ(s))′.
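To make the defining recursion (2.11) concrete, here is a minimal NumPy sketch that simulates an m-variate ARMA(p, q) process by iterating X_t = Φ_1 X_{t-1} + ... + Φ_p X_{t-p} + Z_t + Θ_1 Z_{t-1} + ... + Θ_q Z_{t-q}. The function `simulate_varma` is a hypothetical helper written for illustration; it is not part of the Mathematica package, and it does not check that the parameters define a stationary model.

```python
import numpy as np

def simulate_varma(phis, thetas, sigma, n, rng=None, burn=200):
    """Simulate a zero-mean m-variate ARMA(p, q) process per equation (2.11):
    X_t = sum_i Phi_i X_{t-i} + Z_t + sum_j Theta_j Z_{t-j},
    where {Z_t} is Gaussian white noise with covariance matrix sigma."""
    rng = np.random.default_rng(0) if rng is None else rng
    phis = [np.asarray(a, float) for a in phis]
    thetas = [np.asarray(b, float) for b in thetas]
    sigma = np.asarray(sigma, float)
    m = sigma.shape[0]
    L = np.linalg.cholesky(sigma)            # so each row of z is N(0, sigma)
    z = rng.standard_normal((n + burn, m)) @ L.T
    x = np.zeros((n + burn, m))
    for t in range(n + burn):
        for i, a in enumerate(phis, 1):      # AR part
            if t - i >= 0:
                x[t] += a @ x[t - i]
        x[t] += z[t]                         # current noise
        for j, b in enumerate(thetas, 1):    # MA part
            if t - j >= 0:
                x[t] += b @ z[t - j]
    return x[burn:]                          # discard the start-up transient
```

For instance, `simulate_varma([phi1], [], sigma, 500)` draws 500 observations from a bivariate AR(1) model when `phi1` and `sigma` are 2×2 matrices.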
The stationarity condition for a multivariate ARMA model can be translated into the algebraic condition that all the roots of the determinantal equation det Φ(x) = 0 lie outside the unit circle. Example 2.12 As an exercise in using Mathematica, we illustrate how to explicitly check if the bivariate AR(1) model X_t - Φ_1 X_{t-1} = Z_t is stationary, where Φ_1 = {{0.6, -0.7}, {1.2, -0.5}}. We first define the determinant det(I - Φ_1 x). Here IdentityMatrix[2] generates the 2×2 identity matrix. This solves the equation.
We now find the absolute values of the roots.
Since both roots lie outside the unit circle, we conclude that the model is stationary.
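The same determinantal check can be reproduced numerically. The sketch below uses the identity det(I - A x) = 1 - tr(A) x + det(A) x² for a 2×2 matrix A, then finds the roots of that quadratic; the variable names are ours, not the book's.

```python
import numpy as np

# Phi_1 from Example 2.12
phi1 = np.array([[0.6, -0.7], [1.2, -0.5]])

# For a 2x2 matrix, det(I - phi1 x) = 1 - trace(phi1) x + det(phi1) x^2,
# which here works out to 1 - 0.1 x + 0.54 x^2.
coeffs = [np.linalg.det(phi1), -np.trace(phi1), 1.0]  # highest power first
roots = np.roots(coeffs)
print(np.abs(roots))  # both moduli are about 1.361 > 1: the model is stationary
```

Since both moduli exceed 1, this agrees with the book's conclusion that the model is stationary.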
In fact, all the Mathematica functions introduced previously for the univariate case have been designed so that they can be used with appropriate input for multivariate time series as well. Specifically, here we use the function StationaryQ directly to check the stationarity of the bivariate AR(1) model of Example 2.12.
Similarly, a multivariate ARMA model is invertible if all the roots of det Θ(x) = 0 are outside the unit circle. The invertibility of a model can be checked using InvertibleQ as in the univariate case. Example 2.13 Check if the bivariate MA(2) model X_t = Z_t + Θ_1 Z_{t-1} + Θ_2 Z_{t-2} is invertible, where Θ_1 = {{-1.2, 0.5}, {-0.3, 0.87}} and Θ_2 = {{0.2, 0.76}, {1.1, -0.8}}. The model is not invertible.
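The invertibility condition can likewise be checked numerically by expanding det Θ(x) = det(I + Θ_1 x + Θ_2 x²), a scalar polynomial in x, and inspecting its roots. This is an illustrative sketch using NumPy polynomial arithmetic (coefficient lists are ordered low power first), not code from the package.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Theta_1 and Theta_2 from Example 2.13
theta1 = np.array([[-1.2, 0.5], [-0.3, 0.87]])
theta2 = np.array([[0.2, 0.76], [1.1, -0.8]])

# Each entry of Theta(x) = I + theta1 x + theta2 x^2 is a quadratic in x;
# represent it as a coefficient list and expand det = a11*a22 - a12*a21.
I = np.eye(2)
entry = lambda i, j: [I[i, j], theta1[i, j], theta2[i, j]]
det = P.polysub(P.polymul(entry(0, 0), entry(1, 1)),
                P.polymul(entry(0, 1), entry(1, 0)))
roots = P.polyroots(det)
print(min(np.abs(roots)))  # smallest modulus is below 1: not invertible
```

A root of det Θ(x) = 0 lies inside the unit circle, confirming that the model is not invertible.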
A stationary ARMA model can be expanded as an MA(∞) model X_t = Σ_{j=0}^∞ Ψ_j Z_{t-j} with Ψ(B) = Φ^{-1}(B)Θ(B); similarly, an invertible ARMA model can be expressed as an AR(∞) model Σ_{j=0}^∞ Π_j X_{t-j} = Z_t with Π(B) = Θ^{-1}(B)Φ(B). Again, the function ToARModel[model, p] can be used to obtain the order p truncation of an AR(∞) expansion. Similarly, ToMAModel[model, q] yields the order q truncation of an MA(∞) expansion. {Ψ_j} and {Π_j} are matrices, and they are determined by equating the coefficients of corresponding powers of B in Φ(B)Ψ(B) = Θ(B) and Φ(B) = Θ(B)Π(B), respectively. Example 2.14 Expand the following AR(1) model as an approximate MA(3) model. This expands the given AR(1) model as an MA(3) model.
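Equating coefficients of powers of B in Φ(B)Ψ(B) = Θ(B) gives the recursion Ψ_0 = I and Ψ_j = Θ_j + Σ_{i=1}^{min(j,p)} Φ_i Ψ_{j-i} (with Θ_j = 0 for j > q). The helper below, `psi_weights`, is our own illustrative implementation of this recursion, not the package's ToMAModel.

```python
import numpy as np

def psi_weights(phis, thetas, order):
    """MA(infinity) coefficient matrices Psi_j from Phi(B) Psi(B) = Theta(B):
    Psi_0 = I and Psi_j = Theta_j + sum_{i<=min(j,p)} Phi_i Psi_{j-i}."""
    m = np.asarray(phis[0] if phis else thetas[0]).shape[0]
    psi = [np.eye(m)]
    for j in range(1, order + 1):
        term = (np.asarray(thetas[j - 1], float) if j <= len(thetas)
                else np.zeros((m, m)))       # Theta_j = 0 beyond lag q
        for i, a in enumerate(phis, 1):
            if j - i >= 0:
                term = term + np.asarray(a, float) @ psi[j - i]
        psi.append(term)
    return psi

# For a pure AR(1), the recursion collapses to Psi_j = Phi_1^j
# (phi1 below is an illustrative matrix, not the book's Example 2.14 model):
phi1 = [[0.5, 0.1], [0.0, 0.4]]
w = psi_weights([phi1], [], 3)
```

For a pure AR(1) the weights reduce to matrix powers, Ψ_j = Φ_1^j, which gives a quick sanity check on the recursion.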
Covariance and Correlation Functions

The matrix covariance function of a stationary multivariate process is defined by

  Γ(k) = E(X_{t+k} - μ)(X_t - μ)′.

Note that now Γ(k) ≠ Γ(-k); instead we have Γ(k) = Γ(-k)′. It is easy to see that the ith diagonal element of Γ(k), (Γ(k))_ii = γ_ii(k) = E X_{t+k,i} X_{t,i}, is simply the (auto)covariance of the univariate time series {X_ti}, and (Γ(k))_ij = γ_ij(k) = E X_{t+k,i} X_{t,j} is the cross-covariance of the series {X_ti} and {X_tj}. The matrix correlation function R(k) is defined by

  (R(k))_ij = ρ_ij(k) = γ_ij(k) / (γ_ii(0) γ_jj(0))^{1/2}.

Note that unlike the univariate case, we cannot simply divide Γ(k) by Γ(0) to get the correlation function.
We can get the covariance or correlation function of a multivariate ARMA model up to lag h simply by using CovarianceFunction[model, h] or CorrelationFunction[model, h]; the output consists of a list of matrices {Γ(0), Γ(1), ..., Γ(h)} or {R(0), R(1), ..., R(h)}. Example 2.15 Find the covariance function of a bivariate MA(1) model up to lag 3. This gives the covariance function of a bivariate MA(1) model up to lag 3.
These are the covariance matrices {Γ(0), Γ(1), Γ(2), Γ(3)}, and they can be put into a more readable form using TableForm. The covariance function is displayed in table form.
Again, the covariance function vanishes for lags greater than q (=1 in this case).
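For a multivariate MA(1), X_t = Z_t + Θ_1 Z_{t-1}, the covariance matrices can be written in closed form: Γ(0) = Σ + Θ_1 Σ Θ_1′, Γ(1) = Θ_1 Σ, and Γ(k) = 0 for k > 1. The sketch below evaluates these formulas for illustrative parameter values of our own choosing (they are not the matrices used in Example 2.15).

```python
import numpy as np

# Illustrative bivariate MA(1) parameters (not the book's Example 2.15 values):
theta1 = np.array([[0.5, -0.2], [0.3, 0.4]])
sigma = np.array([[1.0, 0.3], [0.3, 2.0]])

# Closed-form covariance matrices for X_t = Z_t + theta1 Z_{t-1}:
gamma0 = sigma + theta1 @ sigma @ theta1.T   # Gamma(0)
gamma1 = theta1 @ sigma                      # Gamma(1) = E X_{t+1} X_t'
# Gamma(k) vanishes for every lag k > q = 1, as noted in the text:
gammas = [gamma0, gamma1, np.zeros((2, 2)), np.zeros((2, 2))]
```

The Python analogue of extracting the (1, 2) element of each matrix with Map is a list comprehension such as `[g[0][1] for g in gammas]`, which yields the cross-covariance γ_12(k) for k = 0, 1, 2, 3.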
Often we want to get the cross-covariance or cross-correlation of two processes i and j by extracting the (i, j) element of each matrix in the output of CovarianceFunction or CorrelationFunction, respectively. Whenever we want to do the same operation on each entry of a list, we can use the Mathematica function Map. For example, to get the cross-covariance function of processes 1 and 2, γ_12(k), for k ≥ 0 in the above bivariate MA(1) model, we can do the following. We extract the cross-covariance function using Map.
Similarly, we can get the autocovariance of process 2 of the bivariate MA model in Example 2.15 by extracting the (2, 2) element of each covariance matrix. We use Map again to extract the covariance function of process 2.
Example 2.16 Compute the cross-correlation of a bivariate ARMA(1, 1) process and plot it. The correlation function of the given ARMA(1, 1) process is computed. Recall that the semicolon is used at the end of an expression to suppress the display of the output. Notice that the AR and MA parameter matrices are enclosed in separate lists and the noise covariance is a symmetric positive-definite matrix. First we extract the cross-correlation ρ_12(k) and then plot it. This extracts the cross-correlation function.
Here is the cross-correlation plot. The option PlotRange is set to All to prevent the truncation of any large values in the plot.
In contrast to the autocorrelation, the cross-correlation at lag 0 is, in general, not equal to 1. Note that we have plotted the cross-correlation only for k ≥ 0. Here we demonstrate how to plot the cross-correlation function from lag -h to lag h. First we must obtain {ρ_12(-h), ρ_12(-h+1), ..., ρ_12(0), ρ_12(1), ..., ρ_12(h)}. This can be accomplished using the fact that ρ_12(-k) = ρ_21(k). This gives the cross-correlations from lag -20 to lag 20. Drop[list, n] gives list with its first n elements dropped, and Drop[list, -n] gives list with its last n elements dropped. Using gamma12 as an example, it is convenient to define a function, say, plotmulticorr, to plot multivariate correlations. Its arguments should include the multivariate correlation function and the indices i and j indicating which cross-correlation or autocorrelation is to be plotted. This defines the function plotmulticorr. Now we can plot the cross-correlation function of the above example using plotmulticorr. This plots the same cross-correlation function, but now from lag -20 to 20. Note that it is not symmetric about the origin.
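The two-sided sequence can be assembled the same way in any language: reverse the transposed entries for the negative lags and append the positive lags. The helper `two_sided` below is an illustrative sketch of this bookkeeping, using the identity ρ_ij(-k) = ρ_ji(k); the sample matrices are made-up values, not those of Example 2.16.

```python
def two_sided(rhos, i, j):
    """Given matrix correlations [R(0), ..., R(h)] as nested lists, return the
    scalar sequence rho_ij(k) for k = -h, ..., h via rho_ij(-k) = rho_ji(k)."""
    left = [r[j][i] for r in rhos[1:]][::-1]   # rho_ij(-h), ..., rho_ij(-1)
    right = [r[i][j] for r in rhos]            # rho_ij(0), ..., rho_ij(h)
    return left + right

# Illustrative (made-up) lag-0 and lag-1 correlation matrices:
rhos = [[[1.0, 0.2], [0.2, 1.0]], [[0.5, -0.1], [0.3, 0.4]]]
print(two_sided(rhos, 0, 1))  # [0.3, 0.2, -0.1]
```

Note that the cross-correlation sequence (i ≠ j) is generally not symmetric about lag 0, whereas the autocorrelation sequence (i = j) always is, matching the plots discussed in the text.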
We can also plot the correlation of series 1 of the bivariate ARMA(1, 1) process of Example 2.16 for lags from -20 to 20 using plotmulticorr. This plots the correlation function of series 1. In contrast to the plot of the cross-correlation function, this graph is, of course, symmetric.
Partial Correlation Function

The direct extension of the partial correlation function to the multivariate case leads to what are often called partial autoregressive matrices. The partial autoregressive matrix at lag k is the solution Φ_{k,k} to the Yule-Walker equations of order k. (See Section 1.6 for a description of the Yule-Walker equations and the Levinson-Durbin algorithm.) However, here we will refer to these matrices as the partial correlation function and use PartialCorrelationFunction[model, h] to obtain them up to lag h, but bear in mind that some authors define the partial correlation function for a multivariate process differently, for example, Granger and Newbold (1986), p. 246. Example 2.17 Find the partial correlation function (partial autoregressive matrices) of an AR(2) model. This gives the partial correlation function of an AR(2) model.
Note again that the partial correlation function vanishes for lags greater than p.
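To see where the partial autoregressive matrices come from, the order-k Yule-Walker equations Γ(j) = Σ_{i=1}^{k} Φ_{k,i} Γ(j-i), j = 1, ..., k (with Γ(-l) = Γ(l)′), can be solved directly as one block linear system, the last block of the solution being Φ_{k,k}. This is a plain linear-algebra sketch of that definition, not the Levinson-Durbin recursion of Section 1.6 and not the package's PartialCorrelationFunction.

```python
import numpy as np

def partial_ar_matrix(gammas, k):
    """Solve the order-k Yule-Walker equations
    Gamma(j) = sum_{i=1}^k Phi_{k,i} Gamma(j-i),  j = 1..k,
    with Gamma(-l) = Gamma(l)', and return the lag-k partial
    autoregressive matrix Phi_{k,k}.  gammas = [Gamma(0), ..., Gamma(k)]."""
    m = gammas[0].shape[0]
    # Block Toeplitz matrix with (r, c) block Gamma(c - r), transposed below
    # the diagonal since Gamma(-l) = Gamma(l)'.
    G = np.block([[gammas[c - r] if r <= c else gammas[r - c].T
                   for c in range(k)] for r in range(k)])
    rhs = np.hstack([gammas[j] for j in range(1, k + 1)])  # [Gamma(1) ... Gamma(k)]
    coeffs = rhs @ np.linalg.inv(G)      # stacked [Phi_{k,1} ... Phi_{k,k}]
    return coeffs[:, (k - 1) * m:]       # last block: Phi_{k,k}
```

As a sanity check, for a bivariate AR(1) with coefficient Φ the true covariances satisfy Γ(j) = Φ^j Γ(0), so the lag-1 partial matrix recovers Φ and the lag-2 partial matrix vanishes, mirroring the remark above for p = 1.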