## 3.7.10 Advanced Topic: Matrix Decompositions

Singular value decomposition and pseudoinverse.

Singular value decomposition is an important element of many numerical matrix algorithms. The basic idea is to write any matrix m in the form u† . w . v, where w is a diagonal matrix, u and v are row orthonormal matrices, and u† is the Hermitian transpose of u.
The function SingularValues[m] returns a list containing the matrix u, the list of diagonal elements of w, and the matrix v.
The diagonal elements of w are known as the singular values of the matrix m. One interpretation of the singular values is as follows. If you take a unit sphere in n-dimensional space, and multiply each vector in it by an m×n matrix, you will get an ellipsoid in m-dimensional space. The singular values give the lengths of the principal axes of the ellipsoid. If the matrix is singular in some way, this will be reflected in the shape of the ellipsoid. In fact, the ratio of the largest singular value of a matrix to the smallest one gives a condition number of the matrix, which determines, for example, the accuracy of numerical matrix inverses.
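As an illustration of these definitions outside Mathematica, the following pure-Python sketch computes the singular values of a real 2×2 matrix directly, using the fact that they are the square roots of the eigenvalues of AᵀA (the function names are hypothetical, not part of any library):

```python
import math

def singular_values_2x2(a):
    """Singular values of a real 2x2 matrix, via the eigenvalues of A^T A.

    For any real matrix A, the singular values are the square roots of
    the eigenvalues of the symmetric matrix A^T A.
    """
    (a11, a12), (a21, a22) = a
    # Entries of the symmetric 2x2 matrix A^T A.
    p = a11*a11 + a21*a21
    q = a11*a12 + a21*a22
    r = a12*a12 + a22*a22
    # Closed-form eigenvalues of [[p, q], [q, r]].
    mean = (p + r) / 2.0
    half_gap = math.sqrt(((p - r) / 2.0)**2 + q*q)
    return sorted([math.sqrt(max(mean + half_gap, 0.0)),
                   math.sqrt(max(mean - half_gap, 0.0))], reverse=True)

def condition_number(a):
    """Ratio of largest to smallest singular value."""
    smax, smin = singular_values_2x2(a)
    return smax / smin

# A diagonal matrix stretches the unit circle into an ellipse with
# semi-axes 3 and 1, so its singular values are 3 and 1.
print(singular_values_2x2([[3.0, 0.0], [0.0, 1.0]]))  # → [3.0, 1.0]
print(condition_number([[3.0, 0.0], [0.0, 1.0]]))     # → 3.0
```

The diagonal example makes the ellipsoid interpretation concrete: the matrix scales the two axes of the unit circle by 3 and 1, and those scale factors are exactly the singular values.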
Very small singular values are usually numerically meaningless. SingularValues removes any singular values that are smaller than a certain tolerance multiplied by the largest singular value. The option Tolerance specifies the tolerance to use.
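The effect of such a tolerance can be sketched in a few lines of Python (this mimics the behavior described above; it is not the built-in implementation of the Tolerance option):

```python
def drop_small_singular_values(svals, tol):
    """Discard singular values smaller than tol times the largest one.

    A sketch of the tolerance behavior described in the text: values
    below the cutoff are treated as numerically meaningless and removed.
    """
    cutoff = tol * max(svals)
    return [s for s in svals if s >= cutoff]

# The tiny value 1e-12 falls below 1e-8 * 5.0 and is dropped.
print(drop_small_singular_values([5.0, 1.0, 1e-12], 1e-8))  # → [5.0, 1.0]
```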

The standard definition for the inverse of a matrix fails if the matrix is not square. Using singular value decomposition, however, it is possible to define a pseudoinverse even for non-square matrices, or for singular square ones. The pseudoinverse p of m is defined in terms of the objects u, w and v as v† . w⁻¹ . u. The pseudoinverse has the property that the sum of the squares of all the entries in m . p − I, where I is an identity matrix, is minimized. The pseudoinverse found in this way is important, for example, in carrying out fits to numerical data. The pseudoinverse is sometimes known as the generalized inverse, or the Moore-Penrose inverse.
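The connection to data fitting can be sketched in pure Python. When a matrix has full column rank, the SVD-based pseudoinverse reduces to the normal-equations form (AᵀA)⁻¹Aᵀ, which is easy to compute by hand for a two-column design matrix (the helper name below is hypothetical, and the formula assumes full column rank):

```python
def pseudoinverse_2col(a):
    """Pseudoinverse of an n x 2 real matrix with full column rank.

    In this full-rank case the SVD-based pseudoinverse coincides with
    the normal-equations form (A^T A)^{-1} A^T, used here to avoid
    computing a full SVD.
    """
    # Entries of the 2x2 matrix A^T A.
    p = sum(row[0]*row[0] for row in a)
    q = sum(row[0]*row[1] for row in a)
    r = sum(row[1]*row[1] for row in a)
    det = p*r - q*q
    inv = [[r/det, -q/det], [-q/det, p/det]]
    # (A^T A)^{-1} A^T, returned as a 2 x n matrix.
    return [[inv[i][0]*row[0] + inv[i][1]*row[1] for row in a]
            for i in range(2)]

# Least-squares fit of y = c0 + c1 x to data lying exactly on y = 1 + 2x.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
design = [[1.0, x] for x in xs]           # columns: constant term, x
pinv = pseudoinverse_2col(design)
coeffs = [sum(pinv[i][j]*ys[j] for j in range(len(ys))) for i in range(2)]
print(coeffs)  # → approximately [1.0, 2.0]
```

Multiplying the data vector by the pseudoinverse of the design matrix yields the least-squares coefficients directly, which is exactly the role the pseudoinverse plays in numerical fitting.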

Other matrix decompositions.

Singular value decomposition writes any matrix as a product of a diagonal matrix with row and column orthonormal matrices. In some algorithms, it is also important to be able to decompose matrices as products involving triangular matrices.
QR decomposition writes any matrix m as a product q† . r, where q is an orthonormal matrix, † denotes Hermitian transpose, and r is an upper triangular matrix, in which all entries below the leading diagonal are zero. The function QRDecomposition[m] returns a list containing the matrices q and r. QR decomposition is often used in solving least-squares fitting problems, and is typically faster than singular value decomposition.
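A minimal QR decomposition can be sketched in pure Python using classical Gram-Schmidt (a sketch for matrices with independent columns; production implementations typically use Householder reflections for better numerical stability):

```python
import math

def qr_gram_schmidt(a):
    """QR decomposition of a real matrix with independent columns,
    by classical Gram-Schmidt orthogonalization.

    Returns (q, r) with a = q . r, where q has orthonormal columns and
    r is upper triangular.  Note that this convention differs from
    QRDecomposition[m], which returns the transposed q.
    """
    n, k = len(a), len(a[0])
    q = [[0.0]*k for _ in range(n)]
    r = [[0.0]*k for _ in range(k)]
    for j in range(k):
        # Start from column j of a, then subtract its projections onto
        # the previously computed orthonormal columns.
        v = [a[i][j] for i in range(n)]
        for p in range(j):
            r[p][j] = sum(q[i][p]*a[i][j] for i in range(n))
            v = [v[i] - r[p][j]*q[i][p] for i in range(n)]
        # Normalize what remains to get the next orthonormal column.
        r[j][j] = math.sqrt(sum(x*x for x in v))
        for i in range(n):
            q[i][j] = v[i] / r[j][j]
    return q, r

q, r = qr_gram_schmidt([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
# r is upper triangular: everything below the leading diagonal is zero.
print(r[1][0])  # → 0.0
```

Multiplying q and r back together reproduces the original matrix, and the triangular shape of r is what makes the decomposition useful for back-substitution in least-squares problems.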