# Linear Regression Package

The built-in function Fit finds a least-squares fit to a list of data as a linear combination of the specified basis functions. The functions Regress and DesignedRegress provided in this package augment Fit by giving a list of commonly required diagnostics such as the coefficient of determination RSquared, the analysis of variance table ANOVATable, and the mean squared error EstimatedVariance. The output of regression functions can be controlled so that only needed information is produced. The Nonlinear Regression Package provides analogous functionality for nonlinear models.

The basis functions f_{j} specify the predictors as functions of the independent variables. The resulting model for the response variable is y_{i}=β_{1}f_{1i}+β_{2}f_{2i}+…+β_{p}f_{pi}+e_{i}, where y_{i} is the ith response, f_{ji} is the jth basis function evaluated at the ith observation, and e_{i} is the ith residual error.

Estimates of the coefficients β_{1},…,β_{p} are calculated to minimize the error or residual sum of squares ∑_{i=1}^{n}e_{i}^{2}. For example, simple linear regression is accomplished by defining the basis functions as f_{1}=1 and f_{2}=x, in which case β_{1} and β_{2} are found to minimize ∑_{i=1}^{n}[y_{i}-(β_{1}+β_{2}x_{i})]^{2}.

Regress[data,{1,x,x^{2}},x] | fit a list of data points data to a quadratic model |

Regress[data,{1,x_{1},x_{2},x_{1}x_{2}},{x_{1},x_{2}}] | fit data to a model that includes interaction between independent variables x_{1} and x_{2} |

Regress[data,{f_{1},f_{2},…},vars] | fit data to a model given by a linear combination of the functions f_{i} of variables vars |

Using Regress.

The arguments of Regress are of the same form as those of Fit. The data can be a list of vectors, each vector consisting of the observed values of the independent variables followed by the associated response. The basis functions f_{j} must be functions of the symbols given as variables. These symbols correspond to the independent variables represented in the data. By default, a constant basis function is added to the list if one is not explicitly included.

The data can also be a vector of data points. In this case, Regress assumes that this vector represents the values of a response variable with the independent variable having values 1, 2, ….

{y_{1},y_{2},…} | data points specified by a list of response values, where a single independent variable is assumed to take the values 1, 2, … |

{{x_{11},x_{12},…,y_{1}},{x_{21},x_{22},…,y_{2}},…} | data points specified by a matrix, where x_{ik} is the value of the kth independent variable in the ith case, and y_{i} is the ith response |

Ways of specifying data in Regress.

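A minimal session sketching the calling convention described above. The package context name is an assumption about your Mathematica version, and the data values are invented for illustration:

```mathematica
(* Load the package; this context name is assumed, not from the original examples *)
<< Statistics`LinearRegression`

(* Invented {x, y} pairs roughly following a quadratic trend *)
data = {{1, 2.3}, {2, 4.1}, {3, 9.0}, {4, 15.7}, {5, 24.9}};

(* Fit a quadratic model; the default SummaryReport diagnostics are returned *)
Regress[data, {1, x, x^2}, x]
```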

option name | default value | description |

IncludeConstant | True | constant automatically included in model |

RegressionReport | SummaryReport | fit diagnostics to include |

Weights | Automatic | list of weights for each point or pure function |

BasisNames | Automatic | names of basis elements for table headings |

Options for Regress.

Two of the options of Regress influence the method of calculation. IncludeConstant has a default setting True, which causes a constant term to be added to the model even if it is not specified in the basis functions. To fit a model without this constant term, specify IncludeConstant->False and do not include a constant in the basis functions.

The Weights option allows you to implement weighted least squares by specifying a list of weights, one for each data point; the default Weights->Automatic implies a weight of unity for each data point. When Weights->{w_{1},…,w_{n}}, the parameter estimates are chosen to minimize the weighted sum of squared residuals ∑_{i=1}^{n}w_{i}e_{i}^{2}.

Weights can also specify a pure function of the response. For example, to choose parameter estimates that minimize ∑_{i=1}^{n}√(y_{i}) e_{i}^{2}, set Weights->(Sqrt[#]&).
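Both forms of the option can be sketched as follows (package context and data are assumptions for illustration):

```mathematica
<< Statistics`LinearRegression`  (* context name assumed *)

data = {{1, 2.0}, {2, 3.9}, {3, 6.2}, {4, 7.8}};  (* invented data *)

(* Explicit weights, one per data point *)
Regress[data, {1, x}, x, Weights -> {1, 1, 2, 2}]

(* A pure function of the response: each squared residual is
   weighted by the square root of the observed response *)
Regress[data, {1, x}, x, Weights -> (Sqrt[#] &)]
```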

The options RegressionReport and BasisNames affect the form and content of the output. If RegressionReport is not specified, Regress automatically gives a list including values for ParameterTable, RSquared, AdjustedRSquared, EstimatedVariance and ANOVATable. This set of objects comprises the default SummaryReport. The option RegressionReport can be used to specify a single object or a list of objects so that more (or less) than the default set of results is included in the output. RegressionReportValues[Regress] gives the objects that may be included in the RegressionReport list for the Regress function.

With the option BasisNames, you can label the headings of predictors in tables such as ParameterTable and ParameterCITable.
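As a sketch of how these two output options combine (context name and data assumed for illustration):

```mathematica
<< Statistics`LinearRegression`  (* context name assumed *)

data = {{1, 1.9}, {2, 4.2}, {3, 5.8}, {4, 8.1}};  (* invented data *)

(* Only the requested objects are returned; table headings are relabeled *)
Regress[data, {1, x}, x,
  RegressionReport -> {BestFit, ParameterCITable},
  BasisNames -> {"Intercept", "Slope"}]
```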

The regression functions will also accept any option that can be specified for SingularValueList or StudentTCI. In particular, the numerical tolerance for the internal singular value decomposition is specified using Tolerance, and the confidence level for hypothesis testing and confidence intervals is specified using ConfidenceLevel.

BestFit | best fit function |

BestFitParameters | best fit parameter estimates |

ANOVATable | analysis of variance table |

EstimatedVariance | estimated error variance |

ParameterTable | table of parameter information including standard errors and test statistics |

ParameterCITable | table of confidence intervals for the parameters |

ParameterConfidenceRegion | ellipsoidal joint confidence region for the parameters |

ParameterConfidenceRegion[{f_{i1},f_{i2},…}] | ellipsoidal conditional joint confidence region for the parameters associated with the basis functions {f_{i1},f_{i2},…} |

FitResiduals | differences between the observed responses and the predicted responses |

PredictedResponse | fitted values obtained by evaluating the best fit function at the observed values of the independent variables |

SinglePredictionCITable | table of confidence intervals for predicting a single observation of the response variable |

MeanPredictionCITable | table of confidence intervals for predicting the expected value of the response variable |

RSquared | coefficient of determination |

AdjustedRSquared | adjusted coefficient of determination |

CoefficientOfVariation | coefficient of variation |

CovarianceMatrix | covariance matrix of the parameters |

CorrelationMatrix | correlation matrix of the parameters |

Some RegressionReport values.

ANOVATable, a table for analysis of variance, provides a comparison of the given model to a smaller one including only a constant term. If IncludeConstant->False is specified, the smaller model is the zero model, and the sums of squares are not corrected for the mean. The table includes the degrees of freedom, the sum of squares, and the mean squares due to the model (in the row labeled Model) and due to the residuals (in the row labeled Error). The residual mean square is also available as EstimatedVariance, and is calculated by dividing the residual sum of squares by its degrees of freedom. The F-test compares the two models using the ratio of their mean squares; if the value of F is large, the null hypothesis supporting the smaller model is rejected.

To evaluate the importance of each basis function, you can get information about the parameter estimates from the parameter table obtained by including ParameterTable in the list specified by RegressionReport. This table includes the estimates, their standard errors, and t-statistics for testing whether each parameter is zero. The p-values are calculated by comparing the obtained statistic to the t distribution with n-p degrees of freedom, where n is the sample size and p is the number of predictors. Confidence intervals for the parameter estimates, also based on the t distribution, can be found by specifying ParameterCITable. ParameterConfidenceRegion specifies the ellipsoidal joint confidence region of all fit parameters. ParameterConfidenceRegion[{f_{i1},f_{i2},…}] specifies the joint conditional confidence region of the fit parameters associated with basis functions {f_{i1},f_{i2},…}, a subset of the complete set of basis functions.

The square of the multiple correlation coefficient is called the coefficient of determination R^{2}, and is given by the ratio of the model sum of squares to the total sum of squares. It is a summary statistic that describes the relationship between the predictors and the response variable. AdjustedRSquared is defined as R̄^{2}=1-((n-1)/(n-p))(1-R^{2}), where n is the number of observations and p the number of parameters; it penalizes added parameters, so you can use it to compare models with different subsets of predictors. The coefficient of variation is given by the ratio of the residual root mean square to the mean of the response variable. If the response is strictly positive, this is sometimes used to measure the relative magnitude of error variation.
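To make the definition of R^{2} concrete, here is a hand computation for a straight-line fit using only the built-in Fit (the data are invented for illustration):

```mathematica
data = {{1, 2.0}, {2, 4.1}, {3, 5.9}, {4, 8.2}};  (* invented data *)

model = Fit[data, {1, x}, x];                   (* least-squares straight line *)
y = data[[All, 2]];                             (* observed responses *)
yhat = (model /. x -> #) & /@ data[[All, 1]];   (* predicted responses *)
ybar = (Plus @@ y)/Length[y];                   (* mean response *)

(* R^2 = 1 - (residual sum of squares)/(total sum of squares),
   equivalently the ratio of model sum of squares to total sum of squares *)
rsq = 1 - (Plus @@ ((y - yhat)^2))/(Plus @@ ((y - ybar)^2))
```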

Each row in MeanPredictionCITable gives the confidence interval for the mean response at each of the values of the independent variables. Each row in SinglePredictionCITable gives the confidence interval for a single observed response at each of the values of the independent variables. MeanPredictionCITable gives a region likely to contain the regression curve, while SinglePredictionCITable gives a region likely to contain all possible observations.

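A sketch of requesting both tables at once (context name and data are assumptions for illustration):

```mathematica
<< Statistics`LinearRegression`  (* context name assumed *)

data = {{1, 2.2}, {2, 3.8}, {3, 6.1}, {4, 7.9}, {5, 10.2}};  (* invented data *)

(* One confidence interval per observed value of the independent variable *)
Regress[data, {1, x}, x,
  RegressionReport -> {MeanPredictionCITable, SinglePredictionCITable}]
```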

This package provides numerous diagnostics for evaluating the data and the fit. The HatDiagonal gives the leverage of each point, measuring whether each observation of the independent variables is unusual. CookD and PredictedResponseDelta are influence diagnostics, simultaneously measuring whether the independent variables and the response variable are unusual. Unfortunately, these diagnostics are primarily useful in detecting single outliers. In particular, the diagnostics may indicate a single outlier, but deleting that observation and recomputing the diagnostics may indicate others. All these diagnostics are subject to this masking effect.

HatDiagonal | diagonal of the hat matrix X(X^{T}X)^{-1}X^{T}, where X is the n by p (weighted) design matrix |

JackknifedVariance | {v_{1},…,v_{n}}, where v_{i} is the estimated error variance computed using the data with the ith case deleted |

StandardizedResiduals | fit residuals scaled by their standard errors, computed using the estimated error variance |

StudentizedResiduals | fit residuals scaled by their standard errors, computed using the jackknifed estimated error variances |

CookD | {d_{1},…,d_{n}}, where d_{i} is Cook’s squared distance diagnostic for evaluating whether the ith case is an outlier |

PredictedResponseDelta | {d_{1},…,d_{n}}, where d_{i} is Kuh and Welsch’s DFFITS diagnostic giving the standardized signed difference in the ith predicted response between using all the data and using the data with the ith case deleted |

BestFitParametersDelta | {{d_{11},…,d_{1p}},…,{d_{n1},…,d_{np}}}, where d_{ij} is Kuh and Welsch’s DFBETAS diagnostic giving the standardized signed difference in the jth parameter estimate between using all the data and using the data with the ith case deleted |

CovarianceMatrixDetRatio | {r_{1},…,r_{n}}, where r_{i} is Kuh and Welsch’s COVRATIO diagnostic giving the ratio of the determinant of the parameter covariance matrix computed using the data with the ith case deleted to the determinant computed using the original data |

Diagnostics for detecting outliers.
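The outlier diagnostics above can be requested like any other report objects. A sketch with a deliberately unusual final case (context name and data assumed for illustration):

```mathematica
<< Statistics`LinearRegression`  (* context name assumed *)

(* Invented data in which the last case departs from the trend of the others *)
data = {{1, 1.1}, {2, 2.0}, {3, 3.1}, {4, 3.9}, {5, 9.0}};

(* The last case should stand out in the residual and distance diagnostics *)
Regress[data, {1, x}, x,
  RegressionReport -> {HatDiagonal, StudentizedResiduals, CookD}]
```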

Some diagnostics indicate the degree to which individual basis functions contribute to the fit, or whether the basis functions are involved in a collinear relationship. The sum of the elements in the SequentialSumOfSquares vector gives the model sum of squares listed in the ANOVATable. Each element corresponds to the increment in the model sum of squares obtained by sequentially adding each nonconstant basis function to the model. Each element in the PartialSumOfSquares vector gives the increase in the model sum of squares due to adding the corresponding nonconstant basis function to a model consisting of all other basis functions. SequentialSumOfSquares is useful in determining the degree of a univariate polynomial model, while PartialSumOfSquares is useful in trimming a large set of predictors. VarianceInflation or EigenstructureTable may also be used for predictor set trimming.

PartialSumOfSquares | a list giving the increase in the model sum of squares due to adding each nonconstant basis function to the model consisting of the remaining basis functions |

SequentialSumOfSquares | a list giving a partitioning of the model sum of squares, one element for each nonconstant basis function added sequentially to the model |

VarianceInflation | {v_{1},…,v_{p}}, where v_{j} is the variance inflation factor associated with the jth parameter |

EigenstructureTable | table giving the eigenstructure of the correlation matrix of the nonconstant basis functions |

Diagnostics for evaluating basis functions and detecting collinearity.
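A sketch of detecting a near-collinear pair of predictors with these diagnostics (context name and data assumed for illustration):

```mathematica
<< Statistics`LinearRegression`  (* context name assumed *)

(* Invented data in which x2 is nearly proportional to x1 *)
data = {{1, 2.05, 3.1}, {2, 3.98, 5.9}, {3, 6.02, 9.2},
        {4, 8.01, 12.1}, {5, 9.97, 15.0}};

(* Large variance inflation factors signal the collinear relationship *)
Regress[data, {1, x1, x2}, {x1, x2},
  RegressionReport -> {VarianceInflation, EigenstructureTable}]
```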

The Durbin–Watson d statistic is used for testing the existence of a first-order autoregressive process in the residuals. The statistic takes on values between 0 and 4: values near 2 indicate uncorrelated errors, an underlying assumption of the regression model, while values near 0 suggest positive and values near 4 negative first-order autocorrelation. Critical values for the statistic vary with sample size, the number of parameters in the model, and the desired significance level; they can be found in published tables.

DurbinWatsonD | Durbin–Watson d statistic |
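The statistic is requested like any other report object (context name and data assumed for illustration):

```mathematica
<< Statistics`LinearRegression`  (* context name assumed *)

data = {{1, 2.1}, {2, 3.9}, {3, 6.2}, {4, 7.8}, {5, 10.1}};  (* invented data *)

(* A value near 2 would indicate no first-order autocorrelation *)
Regress[data, {1, x}, x, RegressionReport -> {DurbinWatsonD}]
```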

Other statistics not mentioned here can be computed with the help of the catcher matrix. This matrix catches all the information the predictors have about the parameter vector. This matrix can be exported from Regress by specifying CatcherMatrix with the RegressionReport option.

CatcherMatrix | p×n matrix C, where C·y is the estimated parameter vector and y is the response vector |

Matrix describing the parameter information provided by the predictors.

Frequently, linear regression is applied to an existing design matrix rather than to the original data. A design matrix is a list containing the basis functions evaluated at the observed values of the independent variables. If your data is already in the form of a design matrix with a corresponding vector of response data, you can use DesignedRegress for the same analyses as provided by Regress. DesignMatrix puts your data in the form of a design matrix.

DesignedRegress[designmatrix,response] | fit the model represented by designmatrix given the vector response of response data |

DesignMatrix[data,{f_{1},f_{2},…},vars] | give the design matrix for modeling data as a linear combination of the functions f_{i} of variables vars |

Functions for linear regression using a design matrix.

DesignMatrix takes the same arguments as Regress. It can be used to get the necessary arguments for DesignedRegress, or to check whether you correctly specified your basis functions. When you use DesignMatrix, the constant term is always included in the model unless IncludeConstant->False is specified. Every option of Regress except IncludeConstant is accepted by DesignedRegress. RegressionReportValues[DesignedRegress] gives the values that may be included in the RegressionReport list for the DesignedRegress function.
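A sketch of the two-step workflow, with the context name and data assumed for illustration:

```mathematica
<< Statistics`LinearRegression`  (* context name assumed *)

data = {{1, 2.1}, {2, 3.9}, {3, 6.2}, {4, 8.0}};  (* invented data *)

(* A column of 1s (the constant term) and a column of x values *)
mat = DesignMatrix[data, {1, x}, x]

(* Should give the same analysis as Regress[data, {1, x}, x] *)
DesignedRegress[mat, data[[All, 2]]]
```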

DesignedRegress[svd,response] | fit the model represented by svd, the singular value decomposition of a design matrix, given the vector response of response data |

Linear regression using the singular value decomposition of a design matrix.

DesignedRegress will also accept the singular value decomposition of the design matrix. If the regression is not weighted, this approach will save recomputing the design matrix decomposition.
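A sketch of reusing one decomposition across several response vectors (context name, data, and the N[…] numericization are assumptions for illustration):

```mathematica
<< Statistics`LinearRegression`  (* context name assumed *)

data = {{1, 2.1}, {2, 3.9}, {3, 6.2}, {4, 8.0}};  (* invented data *)
mat = N[DesignMatrix[data, {1, x}, x]];

(* Decompose the design matrix once ... *)
svd = SingularValueDecomposition[mat];

(* ... then fit several response vectors without redecomposing *)
DesignedRegress[svd, data[[All, 2]]]
DesignedRegress[svd, {2.0, 4.1, 5.9, 8.2}]
```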
