Statistics`LinearRegression`

The built-in function Fit finds a least-squares fit to a list of data as a linear combination of the specified basis functions. The functions Regress and DesignedRegress provided in this package augment Fit by giving a list of commonly required diagnostics, such as the coefficient of determination RSquared, the analysis of variance table ANOVATable, and the mean squared error EstimatedVariance. The output of the regression functions can be controlled so that only needed information is produced.

The basis functions specify the predictors as functions of the independent variables. The resulting model for the response variable is y = b1 f1 + b2 f2 + ... + bp fp + e, where y is the response, fi is the ith basis function evaluated at the observation, and e is the statistical error. Estimates of the coefficients bi are calculated to minimize Sum[ei^2], the error or residual sum of squares. For example, simple linear regression is accomplished by defining the basis functions as 1 and x, in which case b1 and b2 are found to minimize Sum[(yi - b1 - b2 xi)^2].

Using Regress.

The arguments of Regress are of the same form as those of Fit. The data can be a list of vectors, each vector consisting of the observed values of the independent variables and the associated response. The basis functions must be functions of the symbols given as variables. These symbols correspond to the independent variables represented in the data. The data can also be a vector of data points. In this case, Regress assumes that this vector represents the values of a response variable with the independent variable having values 1, 2, ....

Ways of specifying data in Regress.

This loads the package. In this data, the first element in each pair gives the value of the independent variable, while the second gives the observed response. This is a plot of the data.
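The input cells for this example were lost in extraction; a minimal sketch of the setup, with hypothetical data values chosen only for illustration, might look like this:

```mathematica
(* load the package *)
<< Statistics`LinearRegression`

(* hypothetical data: each pair is {independent value, observed response} *)
data = {{0.1, 0.10}, {0.2, 0.45}, {0.3, 0.53}, {0.4, 0.93},
        {0.5, 1.26}, {0.6, 1.20}, {0.7, 1.56}, {0.8, 2.15},
        {0.9, 2.54}, {1.0, 2.82}};

(* plot the raw data *)
ListPlot[data]
```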
This is the regression output for fitting the model. Chop replaces the small p-values below with 0.
You can use Fit if you only want the fit function.
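The corresponding calls might be sketched as follows; the quadratic basis {1, x, x^2} is an assumption, since the original model formula was lost:

```mathematica
(* regression with the default summary report; Chop cleans up tiny p-values *)
Chop[Regress[data, {1, x, x^2}, x], 10^-6]

(* Fit returns only the fitted function *)
Fit[data, {1, x, x^2}, x]
```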
Options for Regress.

Two of the options of Regress influence the method of calculation. IncludeConstant has the default setting True, which causes a constant term to be added to the model even if it is not specified in the basis functions. To fit a model without this constant term, specify IncludeConstant -> False and do not include a constant in the basis functions.

The Weights option allows you to implement weighted least squares by specifying a list of weights, one for each data point; the default Weights -> Automatic implies a weight of unity for each data point. When Weights -> {w1, ..., wn}, the parameter estimates are chosen to minimize the weighted sum of squared residuals Sum[wi ei^2]. Weights can also specify a pure function of the response. For example, to choose parameter estimates to minimize Sum[Sqrt[yi] ei^2], set Weights -> (Sqrt[#] &).

The options RegressionReport and BasisNames affect the form and content of the output. If RegressionReport is not specified, Regress automatically gives a list including values for ParameterTable, RSquared, AdjustedRSquared, EstimatedVariance, and ANOVATable. This set of objects comprises the default SummaryReport. The option RegressionReport can be used to specify a single object or a list of objects so that more (or less) than the default set of results is included in the output. RegressionReportValues[Regress] gives the objects that may be included in the RegressionReport list for the Regress function. With the option BasisNames, you can label the headings of predictors in tables such as ParameterTable and ParameterCITable.

The regression functions also accept any option that can be specified for SingularValues or StudentTCI. In particular, the numerical tolerance for the internal singular value decomposition is specified using Tolerance, and the confidence level for hypothesis tests and confidence intervals is specified using ConfidenceLevel.
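A sketch of these options in use, assuming data is a list of {x, y} pairs as above; the weight values and basis functions are illustrative:

```mathematica
(* fit with no constant term *)
Regress[data, {x, x^2}, x, IncludeConstant -> False]

(* weighted least squares: one weight per data point *)
Regress[data, {1, x}, x, Weights -> Table[1/i, {i, Length[data]}]]

(* weights computed from each response by a pure function *)
Regress[data, {1, x}, x, Weights -> (Sqrt[#] &)]

(* a non-default report, with labeled predictors *)
Regress[data, {1, x}, x,
  RegressionReport -> {BestFit, ParameterCITable},
  BasisNames -> {"const", "x"}]
```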
Some option settings for RegressionReport or objects that may be included in a list specified by RegressionReport.

ANOVATable, a table for analysis of variance, provides a comparison of the given model to a smaller one including only a constant term. If IncludeConstant -> False is specified, then the smaller model is reduced to the data. The table includes the degrees of freedom, the sum of squares, and the mean squares due to the model (in the row labeled Model) and due to the residuals (in the row labeled Error). The residual mean square is also available in EstimatedVariance, and is calculated by dividing the residual sum of squares by its degrees of freedom. The F-test compares the two models using the ratio of their mean squares. If the value of the F-statistic is large, the null hypothesis supporting the smaller model is rejected.

To evaluate the importance of each basis function, you can get information about the parameter estimates from the parameter table, obtained by setting RegressionReport to ParameterTable or by including ParameterTable in the list specified by RegressionReport. This table includes the estimates, their standard errors, and t-statistics for testing whether each parameter is zero. The p-values are calculated by comparing each t-statistic to the t distribution with n - p degrees of freedom, where n is the sample size and p is the number of predictors. Confidence intervals for the parameter estimates, also based on the t distribution, can be found by specifying ParameterCITable. ParameterConfidenceRegion specifies the ellipsoidal joint confidence region of all fit parameters. ParameterConfidenceRegion[{f1, f2, ...}] specifies the joint conditional confidence region of the fit parameters associated with the basis functions {f1, f2, ...}, a subset of the complete set of basis functions.

The square of the multiple correlation coefficient is called the coefficient of determination R^2, and is given by the ratio of the model sum of squares to the total sum of squares.
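For example, the parameter table and confidence intervals described above can be requested directly (a sketch, with data and basis functions as assumed earlier):

```mathematica
(* estimates, standard errors, t-statistics, and p-values *)
Regress[data, {1, x}, x, RegressionReport -> ParameterTable]

(* 99% confidence intervals instead of the default 95% *)
Regress[data, {1, x}, x,
  RegressionReport -> ParameterCITable, ConfidenceLevel -> 0.99]
```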
It is a summary statistic that describes the relationship between the predictors and the response variable. AdjustedRSquared is defined as 1 - (1 - R^2)(n - 1)/(n - p), and gives an adjusted value that you can use to compare successive subsets of models. The coefficient of variation is given by the ratio of the residual root mean square to the mean of the response variable. If the response is strictly positive, this is sometimes used to measure the relative magnitude of error variation.

Each row in MeanPredictionCITable gives the confidence interval for the mean response at each of the observed values of the independent variables. Each row in SinglePredictionCITable gives the confidence interval for a single observed response at each of the observed values of the independent variables. MeanPredictionCITable gives a region likely to contain the regression curve, while SinglePredictionCITable gives a region likely to contain all possible observations.

In this example, only the residuals, the confidence interval table for the predicted response of single observations, and the parameter joint confidence region are produced.
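The input that produced this output was lost; it presumably resembled the following sketch (the basis functions are an assumption):

```mathematica
regress = Regress[data, {1, x, x^2}, x,
  RegressionReport -> {FitResiduals, SinglePredictionCITable,
    ParameterConfidenceRegion}]
```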
This is a list of the residuals extracted from the output.
The observed response, the predicted response, the standard errors of the predicted response, and the confidence intervals may also be extracted. You can then plot the predicted responses against the residuals for diagnostic purposes.
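Assuming the regression output was saved as regress and includes the FitResiduals and PredictedResponse objects, the extraction and diagnostic plot can be sketched as:

```mathematica
(* each report object is returned as a rule, so /. extracts its value *)
residuals = FitResiduals /. regress;
predicted = PredictedResponse /. regress;

(* residuals versus predicted responses *)
ListPlot[Transpose[{predicted, residuals}]]
```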
Here the predicted responses and the lower and upper confidence limits are paired with the corresponding values of the independent variable. This loads the function MultipleListPlot. This displays the raw data, the fitted curve, and the confidence intervals for the predicted responses of single observations.
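That display might be built as follows; here lower and upper are assumed to have been extracted already from the SinglePredictionCITable in the output:

```mathematica
<< Graphics`MultipleListPlot`

xvals = Map[First, data];

(* raw data plus confidence limits, with the limit curves joined *)
MultipleListPlot[data,
  Transpose[{xvals, lower}], Transpose[{xvals, upper}],
  PlotJoined -> {False, True, True}]
```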
The functions Show and Graphics may be used to display an Ellipsoid object. This is the joint confidence region of the regression parameters.
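As the text notes, the region is an Ellipsoid object that Show and Graphics can render; a sketch, assuming regress contains the ParameterConfidenceRegion object:

```mathematica
region = ParameterConfidenceRegion /. regress;
Show[Graphics[region], Axes -> True, AspectRatio -> 1]
```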
This package provides numerous diagnostics for evaluating the data and the fit. The HatDiagonal gives the leverage of each point, measuring whether each observation of the independent variables is unusual. CookD and PredictedResponseDelta are influence diagnostics, simultaneously measuring whether the independent variables and the response variable are unusual. Unfortunately, these diagnostics are primarily useful in detecting single outliers. In particular, the diagnostics may indicate a single outlier, but deleting that observation and recomputing the diagnostics may indicate others. All of these diagnostics are subject to this masking effect. They are described in greater detail in Regression Diagnostics: Identifying Influential Data and Sources of Collinearity, by D. A. Belsley, E. Kuh, and R. E. Welsch (John Wiley & Sons, 1980), and "Detection of Influential Observations in Linear Regression", by R. D. Cook, Technometrics, 19, 1977.

Diagnostics for detecting outliers.

Some diagnostics indicate the degree to which individual basis functions contribute to the fit, or whether the basis functions are involved in a collinear relationship. The sum of the elements in the SequentialSumOfSquares vector gives the model sum of squares listed in the ANOVATable. Each element corresponds to the increment in the model sum of squares obtained by sequentially adding each (nonconstant) basis function to the model. Each element in the PartialSumOfSquares vector gives the increase in the model sum of squares due to adding the corresponding (nonconstant) basis function to a model consisting of all the other basis functions. SequentialSumOfSquares is useful in determining the degree of a univariate polynomial model, while PartialSumOfSquares is useful in trimming a large set of predictors. VarianceInflation or EigenstructureTable may also be used for predictor set trimming.

Diagnostics for evaluating basis functions and detecting collinearity.
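These diagnostics are all requested through RegressionReport; a sketch using the data and quadratic basis assumed earlier:

```mathematica
(* outlier and influence diagnostics, one value per observation *)
Regress[data, {1, x, x^2}, x,
  RegressionReport -> {HatDiagonal, CookD, PredictedResponseDelta}]

(* contribution and collinearity diagnostics for the basis functions *)
Regress[data, {1, x, x^2}, x,
  RegressionReport -> {SequentialSumOfSquares, PartialSumOfSquares,
    VarianceInflation, EigenstructureTable}]
```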
The Durbin-Watson d statistic is used for testing the existence of a first-order autoregressive process. The statistic takes on values between 0 and 4, with values near the middle of that range indicating uncorrelated errors, an underlying assumption of the regression model. Critical values for the statistic vary with the sample size, the number of parameters in the model, and the desired significance level. These values can be found in published tables.

Correlated errors diagnostic.

Other statistics not mentioned here can be computed with the help of the catcher matrix. This matrix catches all the information the predictors have about the parameter vector. It can be exported from Regress by specifying CatcherMatrix with the RegressionReport option.

Matrix describing the parameter information provided by the predictors.

Frequently, linear regression is applied to an existing design matrix rather than to the original data. A design matrix is a list containing the basis functions evaluated at the observed values of the independent variables. If your data are already in the form of a design matrix with a corresponding vector of response data, you can use DesignedRegress for the same analyses as provided by Regress. DesignMatrix puts your data in the form of a design matrix.

Functions for linear regression using a design matrix.

DesignMatrix takes the same arguments as Regress. It can be used to get the necessary arguments for DesignedRegress, or to check whether you have correctly specified your basis functions. When you use DesignMatrix, the constant term is always included in the model unless IncludeConstant -> False is specified. Every option of Regress except IncludeConstant is accepted by DesignedRegress. RegressionReportValues[DesignedRegress] gives the values that may be included in the RegressionReport list for the DesignedRegress function.

This is the design matrix used in the previous regression analysis.
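The design matrix and response vector can be produced as follows (a sketch; data and basis functions assumed as before):

```mathematica
(* basis functions evaluated at each observed value of x *)
mat = DesignMatrix[data, {1, x, x^2}, x];

(* the corresponding vector of observed responses *)
response = Map[Last, data];

(* same analysis as Regress, starting from the design matrix *)
DesignedRegress[mat, response]
```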
Here is the vector of observed responses.
The result of DesignedRegress is identical to that of Regress. Note that the predictor names that were specified for the output appear in the ParameterTable.
Linear regression using the singular value decomposition of the design matrix.

DesignedRegress also accepts the singular value decomposition of the design matrix in place of the matrix itself. If the regression is not weighted, this approach saves recomputing the decomposition of the design matrix. This is the singular value decomposition of the design matrix. When several responses are of interest, reusing it avoids recomputing the decomposition for each regression.
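A sketch with SingularValues, assuming mat and response as in the design matrix example; response2 is a hypothetical second response vector:

```mathematica
(* compute the decomposition of the design matrix once *)
svd = SingularValues[N[mat]];

(* reuse it for several regressions *)
DesignedRegress[svd, response]
DesignedRegress[svd, response2]
```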
