# LeastSquares

LeastSquares[m,b]

finds an x that solves the linear least-squares problem for the matrix equation m.x==b.

# Details and Options

• LeastSquares[m,b] gives a vector x that minimizes Norm[m.x-b].
• The vector x is uniquely determined by the minimization only if Length[x]==MatrixRank[m].
• The argument b can be a matrix, in which case the least-squares minimization is done independently for each column of b; the result is the x that minimizes Norm[m.x-b,"Frobenius"].
• LeastSquares works on both numerical and symbolic matrices, as well as SparseArray objects.
• A Method option can also be given. Settings for arbitrary-precision numerical matrices include "Direct" and "IterativeRefinement", and for sparse arrays "Direct" and "Krylov". The default setting of Automatic switches between these methods, depending on the matrix given.

# Examples


## Basic Examples

Solve a simple least-squares problem:
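For instance, with a small overdetermined system (entries chosen here for illustration):

```wolfram
m = {{1, 1}, {1, 2}, {1, 3}};  (* illustrative 3×2 matrix *)
b = {1, 2, 2};                 (* illustrative right-hand side *)
LeastSquares[m, b]
(* {2/3, 1/2} *)
```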

## Scope

Use symbolic input:
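A sketch with symbolic entries (the particular matrix is assumed for illustration):

```wolfram
LeastSquares[{{a, 0}, {0, a}, {0, 0}}, {x, y, z}]
```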

m is a 4×3 matrix, and b is a length-4 vector:
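For example (these entries are illustrative):

```wolfram
m = {{1, 1, 0}, {1, 0, 1}, {0, 1, 1}, {1, 1, 1}};
b = {1, 2, 3, 4};
```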

Use exact arithmetic to find a vector x that minimizes Norm[m.x-b]:
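With the exact integer inputs above, the result is exact:

```wolfram
x = LeastSquares[m, b]
(* {1/7, 8/7, 15/7} *)
```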

Use machine arithmetic:
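Applying N to the same inputs gives a machine-precision result:

```wolfram
LeastSquares[N[m], N[b]]
(* {0.142857, 1.14286, 2.14286} *)
```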

Use 20-digit-precision arithmetic:
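Using N[expr, 20] on the same inputs:

```wolfram
LeastSquares[N[m, 20], N[b, 20]]
(* 1/7, 8/7 and 15/7 to 20 digits *)
```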

Solve the least-squares problem for a random complex matrix:
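For instance (RandomComplex produces different values on every run):

```wolfram
mc = RandomComplex[{-1 - I, 1 + I}, {4, 3}];
bc = RandomComplex[{-1 - I, 1 + I}, 4];
LeastSquares[mc, bc]
```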

Use a sparse matrix:
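An illustrative banded rectangular SparseArray:

```wolfram
ms = SparseArray[{Band[{1, 1}] -> 2., Band[{2, 1}] -> -1., Band[{1, 2}] -> -1.}, {10, 9}];
LeastSquares[ms, N[Range[10]]]
```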

## Generalizations & Extensions

b can be a matrix:
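For example (entries chosen for illustration):

```wolfram
m = {{1, 1}, {1, 2}, {1, 3}};
b = {{1, 4}, {2, 5}, {2, 7}};
LeastSquares[m, b]
(* {{2/3, 7/3}, {1/2, 3/2}} *)
```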

The first column of the b matrix is used to generate the first column of the result:
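Checking the correspondence for the first column:

```wolfram
LeastSquares[m, b[[All, 1]]] == LeastSquares[m, b][[All, 1]]
(* True *)
```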

## Options

### Tolerance

m is a 20×20 Hilbert matrix, and b is a vector such that the solution of m.x==b is known:
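One way to set this up (taking the all-ones vector as the assumed known solution):

```wolfram
m = N[HilbertMatrix[20]];
xknown = ConstantArray[1., 20];  (* assumed known solution *)
b = m . xknown;
```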

With the default tolerance, numerical roundoff is limited, so errors are distributed:
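The deviation from the known solution stays moderate:

```wolfram
x1 = LeastSquares[m, b];
Norm[x1 - xknown]
```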

With Tolerance->0, numerical roundoff can introduce excessive error:
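Turning the tolerance off uses every singular value, however tiny:

```wolfram
x0 = LeastSquares[m, b, Tolerance -> 0];
Norm[x0 - xknown]
```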

Specifying a higher tolerance will limit roundoff errors at the expense of a larger residual:
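For instance, with Tolerance -> 10^-8 (the value is chosen for illustration):

```wolfram
x2 = LeastSquares[m, b, Tolerance -> 10^-8];
{Norm[x2 - xknown], Norm[m . x2 - b]}
```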

## Applications

Here is some data:
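For instance, noisy samples of a smooth function (generated here for illustration):

```wolfram
data = Table[{t, Sin[t] + RandomReal[{-0.1, 0.1}]}, {t, 0., 10., 0.1}];
```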

Define cubic basis functions centered at t with support on the interval [t-2,t+2]:
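One such choice is a shifted cubic B-spline bump (a standard formula, assumed here):

```wolfram
phi[t_][u_] := With[{r = Abs[u - t]},
  Piecewise[{{2/3 - r^2 + r^3/2, r < 1}, {(2 - r)^3/6, 1 <= r < 2}}, 0]]
```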

Set up a sparse design matrix for basis functions centered at 0, 1, ..., 10:
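Each row evaluates all the basis functions at one data point; the local support makes most entries zero:

```wolfram
{ts, ys} = Transpose[data];
dm = SparseArray[Table[phi[c][t], {t, ts}, {c, 0, 10}]];
```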

Solve the least-squares problem:
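The coefficients define a smooth fit over the data:

```wolfram
coeffs = LeastSquares[dm, ys];
fit[u_] := Sum[coeffs[[c + 1]] phi[c][u], {c, 0, 10}];
Show[ListPlot[data], Plot[fit[u], {u, 0, 10}]]
```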

## Properties & Relations

If m.x==b can be solved, LeastSquares is equivalent to LinearSolve:
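For a square invertible system (values illustrative):

```wolfram
m = {{1, 2}, {3, 4}};
b = {5, 6};
LeastSquares[m, b] == LinearSolve[m, b]
(* True *)
```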

LeastSquares is related to PseudoInverse:
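With exact inputs the two agree exactly:

```wolfram
m = {{1, 1}, {1, 2}, {1, 3}};
b = {1, 2, 2};
LeastSquares[m, b] == PseudoInverse[m] . b
(* True *)
```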

For a vector b, LeastSquares is equivalent to ArgMin[Norm[m.x-b],x]:
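Continuing with the exact m and b above:

```wolfram
LeastSquares[m, b] == ArgMin[Norm[m . {u, v} - b], {u, v}]
(* True *)
```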

It is also equivalent to ArgMin[Norm[m.x-b,"Frobenius"],x]:
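Again with the same m and b:

```wolfram
LeastSquares[m, b] == ArgMin[Norm[m . {u, v} - b, "Frobenius"], {u, v}]
(* True *)
```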

For a matrix b, LeastSquares is equivalent to ArgMin[Norm[m.x-b,"Frobenius"],x]:
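A sketch with a 3×2 right-hand side; the unknowns are flattened for ArgMin and reshaped afterward:

```wolfram
bm = {{1, 4}, {2, 5}, {2, 7}};
vars = {{u11, u12}, {u21, u22}};
LeastSquares[m, bm] == Partition[ArgMin[Norm[m . vars - bm, "Frobenius"], Flatten[vars]], 2]
(* True *)
```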

m is a 5×2 matrix, and b is a length-5 vector:
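For instance (points chosen for illustration):

```wolfram
m = {{1, 0}, {1, 1}, {1, 2}, {1, 3}, {1, 4}};
b = {1, 0, 3, 4, 5};
```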

Solve the least-squares problem:
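Using the m and b just defined:

```wolfram
x = LeastSquares[m, b]
(* {1/5, 6/5} *)
```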

This is the minimizer of Norm[m.x-b]:
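The same minimizer comes from ArgMin:

```wolfram
ArgMin[Norm[m . {u, v} - b], {u, v}]
(* {1/5, 6/5} *)
```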

It also gives the coefficients for the line with least-squares distance to the points:
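Plotting the fitted line against the points (the t values are the second column of m):

```wolfram
line[t_] = x . {1, t};
Show[ListPlot[Transpose[{m[[All, 2]], b}]], Plot[line[t], {t, 0, 4}]]
```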

LeastSquares gives the parameter estimates for a linear model with normal errors:
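A comparison against LinearModelFit (== compares the machine-precision estimates with the exact result numerically):

```wolfram
lm = LinearModelFit[Transpose[{m[[All, 2]], b}], t, t];
lm["BestFitParameters"] == LeastSquares[m, b]
(* True *)
```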