This is documentation for Mathematica 8, which was
based on an earlier version of the Wolfram Language.

# LeastSquares

LeastSquares[m, b] finds an x that solves the linear least-squares problem for the matrix equation m.x==b.
• The vector x is uniquely determined by the minimization only if Length[x]==MatrixRank[m].
• The argument b can be a matrix, in which case the least-squares minimization is done independently for each column in b.
• A Method option can also be given. Settings for arbitrary-precision numerical matrices include "Direct" and "IterativeRefinement", and for sparse arrays "Direct" and "Krylov". The default setting of Automatic switches between these methods depending on the matrix given.
Solve a simple least-squares problem:
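A minimal sketch (the matrix and right-hand side here are illustrative, not from the original example):

```mathematica
(* Overdetermined system: three equations in two unknowns *)
m = {{1, 1}, {1, 2}, {1, 3}};
b = {7, 7, 8};
x = LeastSquares[m, b]
(* With exact input the result is exact: x == {19/3, 1/2} *)
```

The residual m.x-b is orthogonal to the columns of m, which is what characterizes the least-squares solution.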
## Scope (4)
Use symbolic input:
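For instance, with a hypothetical symbolic matrix (assumes a != 0, so the system is solvable exactly):

```mathematica
(* Symbolic entries are handled with exact algebra *)
LeastSquares[{{1, a}, {1, -a}}, {x, y}] // Simplify
(* gives {(x + y)/2, (x - y)/(2 a)} *)
```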
m is a 4×3 matrix and b is a length-4 vector:
Use exact arithmetic to find a vector x that minimizes Norm[m.x-b]:
Use machine arithmetic:
Use 20-digit precision arithmetic:
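The steps above can be sketched with a hypothetical 4×3 matrix of full column rank:

```mathematica
m = {{1, 2, 0}, {0, 1, 1}, {1, 0, 1}, {2, 1, 1}};
b = {1, 0, 2, 1};
xExact   = LeastSquares[m, b];                (* exact rational arithmetic *)
xMachine = LeastSquares[N[m], N[b]];          (* machine precision *)
xHigh    = LeastSquares[N[m, 20], N[b, 20]];  (* 20-digit arithmetic *)
Norm[N[xExact] - xMachine]  (* roundoff difference; close to zero *)
```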
Solve the least-squares problem for a random complex matrix:
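A sketch with random complex input (the dimensions are arbitrary):

```mathematica
m = RandomComplex[{-1 - I, 1 + I}, {5, 3}];
b = RandomComplex[{-1 - I, 1 + I}, 5];
x = LeastSquares[m, b];
(* The residual is orthogonal to the column space of m *)
Chop[Norm[ConjugateTranspose[m].(m.x - b)]]
```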
Use a sparse matrix:
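For example, a hypothetical rectangular banded SparseArray:

```mathematica
m = SparseArray[{Band[{1, 1}] -> 2., Band[{2, 1}] -> -1.}, {10, 5}];
b = ConstantArray[1., 10];
LeastSquares[m, b]
```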
b can be a matrix:
The first column of the b matrix is used to generate the first column of the result:
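A sketch with a two-column right-hand side (values are illustrative):

```mathematica
m = {{1, 2}, {3, 4}, {5, 6}};
b = {{1, 0}, {0, 1}, {1, 1}};
x = LeastSquares[m, b];
(* Column k of x solves the problem for column k of b *)
x[[All, 1]] == LeastSquares[m, b[[All, 1]]]
(* should give True *)
```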
## Options (1)
m is a 20×20 Hilbert matrix and b is a vector such that the solution of m.x==b is known:
With the default tolerance, numerical roundoff is limited so errors are distributed:
With Tolerance->0, numerical roundoff can introduce excessive error:
Specifying a higher tolerance will limit roundoff errors at the expense of a larger residual:
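The comparison above can be sketched as follows (the error norms will vary with the platform's machine arithmetic):

```mathematica
m = N[HilbertMatrix[20]];           (* severely ill-conditioned *)
xTrue = ConstantArray[1., 20];
b = m.xTrue;                        (* right-hand side with known solution *)
xDefault = LeastSquares[m, b];               (* default Tolerance *)
xZero = LeastSquares[m, b, Tolerance -> 0];  (* no singular values dropped *)
{Norm[xDefault - xTrue], Norm[xZero - xTrue]}
```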
## Applications (1)
Here is some data:
Define cubic basis functions centered at t with compact support:
Set up a sparse design matrix for basis functions centered at 0, 1, ..., 10:
Solve the least-squares problem:
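A sketch of the whole fit, assuming noisy samples on [0, 10] and an illustrative (not the original) compactly supported cubic basis:

```mathematica
data = Table[{t, Sin[t] + RandomReal[{-0.1, 0.1}]}, {t, 0., 10., 0.1}];
(* hypothetical cubic bump centered at c, supported on (c - 2, c + 2) *)
basis[t_, c_] := If[Abs[t - c] < 2, (1 - Abs[t - c]/2)^3, 0.]
(* sparse design matrix: one column per center 0, 1, ..., 10 *)
design = SparseArray[Table[basis[t, c], {t, 0., 10., 0.1}, {c, 0, 10}]];
coeffs = LeastSquares[design, data[[All, 2]]];
```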
## Properties & Relations

When m.x==b can be solved exactly, LeastSquares is equivalent to LinearSolve:
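For instance, with a right-hand side constructed to be consistent:

```mathematica
m = {{1, 2}, {3, 4}, {5, 6}};
b = m.{2, -1};   (* b is in the column space of m by construction *)
LeastSquares[m, b] == LinearSolve[m, b]
(* should give True *)
```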
m is a 5×2 matrix and b is a length-5 vector:
Solve the least-squares problem:
This is the minimizer of Norm[m.x-b]:
It also gives the coefficients for the line with least-squares distance to the points:
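The sub-example above can be sketched with hypothetical points:

```mathematica
pts = {{0, 1}, {1, 0}, {2, 1}, {3, 2}, {4, 5}};
m = {1, #} & /@ pts[[All, 1]];   (* 5x2 design matrix for y == c1 + c2 t *)
b = pts[[All, 2]];
{c1, c2} = LeastSquares[m, b];
(* the same minimizer follows from the normal equations *)
LinearSolve[Transpose[m].m, Transpose[m].b] == {c1, c2}
(* should give True *)
```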
LeastSquares gives the parameter estimates for a linear model with normal errors:
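For comparison, a sketch against LinearModelFit with hypothetical data points:

```mathematica
pts = {{0, 1}, {1, 0}, {2, 1}, {3, 2}, {4, 5}};
lm = LinearModelFit[pts, t, t];   (* model c1 + c2 t with normal errors *)
ls = LeastSquares[{1., #} & /@ pts[[All, 1]], N[pts[[All, 2]]]];
Chop[Norm[lm["BestFitParameters"] - ls]]  (* difference should be essentially zero *)
```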