This is documentation for Mathematica 6, which was
based on an earlier version of the Wolfram Language.

Advanced Matrix Operations

SingularValueList[m]      the list of nonzero singular values of m
SingularValueList[m,k]    the k largest singular values of m
SingularValueList[{m,a}]  the generalized singular values of m with respect to a
Norm[m,p]                 the p-norm of m
Norm[m,"Frobenius"]       the Frobenius norm of m

Finding singular values and norms of matrices.

The singular values of a matrix m are the square roots of the eigenvalues of m.m*, where * denotes Hermitian transpose. The number of such singular values is the smaller dimension of the matrix. SingularValueList sorts the singular values from largest to smallest.
Very small singular values are usually numerically meaningless. With the option setting Tolerance->t, SingularValueList drops singular values that are less than a fraction t of the largest singular value. For approximate numerical matrices, the tolerance is by default slightly greater than zero.
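For example, the singular values of a small numerical matrix can be computed directly, and checked against the eigenvalue definition (the matrix here is chosen arbitrarily for illustration):

```mathematica
(* singular values of an arbitrary 2x2 numerical matrix *)
m = N[{{1, 2}, {3, 4}}];
SingularValueList[m]
(* {5.46499, 0.365966} *)

(* they agree with the square roots of the eigenvalues of m.m* *)
Sqrt[Eigenvalues[m.ConjugateTranspose[m]]]
(* {5.46499, 0.365966} *)
```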
If you multiply the vector for each point in a unit sphere in n-dimensional space by an m×n matrix m, then you get an m-dimensional ellipsoid, whose principal axes have lengths given by the singular values of m.
The 2-norm of a matrix Norm[m, 2] is the largest principal axis of the ellipsoid, equal to the largest singular value of the matrix. This is also the maximum 2-norm length of m.v over all possible unit vectors v.
The p-norm of a matrix Norm[m, p] is in general the maximum p-norm length of m.v that can be attained over unit vectors v. The cases most often considered are p=1, p=2, and p=∞. Also sometimes considered is the Frobenius norm Norm[m, "Frobenius"], which is the square root of the trace of m.m*.
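The different matrix norms can be compared on the same illustrative matrix:

```mathematica
m = N[{{1, 2}, {3, 4}}];
Norm[m, 2]            (* largest singular value: 5.46499 *)
Norm[m, 1]            (* maximum absolute column sum: 6. *)
Norm[m, Infinity]     (* maximum absolute row sum: 7. *)
Norm[m, "Frobenius"]  (* Sqrt[1 + 4 + 9 + 16] = 5.47723 *)
```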
LUDecomposition[m]        the LU decomposition
CholeskyDecomposition[m]  the Cholesky decomposition

Decomposing square matrices into triangular forms.

When you create a LinearSolveFunction using LinearSolve[m], this often works by decomposing the matrix m into triangular forms, and sometimes it is useful to be able to get such forms explicitly.
LU decomposition effectively factors any square matrix into a product of lower- and upper-triangular matrices. Cholesky decomposition effectively factors any Hermitian positive-definite matrix into a product of a lower-triangular matrix and its Hermitian conjugate, which can be viewed as the analog of finding a square root of a matrix.
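For example, one can extract the triangular factors explicitly and check the products (the matrices here are arbitrary illustrations; note that LUDecomposition returns the two factors packed into a single matrix, together with a row permutation and a condition number estimate):

```mathematica
(* LU decomposition: lu packs both triangular factors; p is the row permutation *)
{lu, p, cond} = LUDecomposition[{{1, 2}, {3, 4}}];
l = LowerTriangularize[lu, -1] + IdentityMatrix[2];
u = UpperTriangularize[lu];
l.u == {{1, 2}, {3, 4}}[[p]]   (* True *)

(* Cholesky decomposition of a Hermitian positive-definite matrix;
   the upper-triangular factor u satisfies u*.u == m *)
u = CholeskyDecomposition[{{2, 1}, {1, 2}}];
ConjugateTranspose[u].u == {{2, 1}, {1, 2}}   (* True *)
```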
PseudoInverse[m]                   the pseudoinverse
QRDecomposition[m]                 the QR decomposition
SingularValueDecomposition[m]      the singular value decomposition
SingularValueDecomposition[{m,a}]  the generalized singular value decomposition

Orthogonal decompositions of matrices.

The standard definition for the inverse of a matrix fails if the matrix is not square or is singular. The pseudoinverse m^(-1) of a matrix m can however still be defined. It is set up to minimize the sum of the squares of all entries in m.m^(-1)-I, where I is the identity matrix. The pseudoinverse is sometimes known as the generalized inverse, or the Moore-Penrose inverse. It is particularly used for problems related to least-squares fitting.
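For example, the pseudoinverse gives the least-squares solution of an overdetermined system (the data points here are arbitrary):

```mathematica
(* least-squares fit of a line a + c x through the points (1,1), (2,2), (3,2) *)
m = {{1, 1}, {1, 2}, {1, 3}};
b = {1, 2, 2};
PseudoInverse[m].b
(* {2/3, 1/2}: minimizes the sum of squares of the entries of m.x - b *)
```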
QR decomposition writes any matrix m as a product q*.r, where q is an orthonormal matrix, * denotes Hermitian transpose, and r is a triangular matrix, in which all entries below the leading diagonal are zero.
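Note that QRDecomposition returns the factors in a form where the rows of q are orthonormal, so the original matrix is reconstructed as q*.r (the matrix here is an arbitrary illustration):

```mathematica
{q, r} = QRDecomposition[N[{{1, 2}, {3, 4}}]];
Chop[ConjugateTranspose[q].r - {{1, 2}, {3, 4}}]   (* zero matrix *)
```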
Singular value decomposition, or SVD, is an underlying element in many numerical matrix algorithms. The basic idea is to write any matrix m in the form u.s.v*, where s is a matrix with the singular values of m on its diagonal, u and v are orthonormal matrices, and v* is the Hermitian transpose of v.
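For example, the three factors reconstruct the original matrix, and the diagonal of s carries the singular values (again with an arbitrary matrix):

```mathematica
{u, s, v} = SingularValueDecomposition[N[{{1, 2}, {3, 4}}]];
Diagonal[s]                                          (* the singular values *)
Chop[u.s.ConjugateTranspose[v] - {{1, 2}, {3, 4}}]   (* zero matrix *)
```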
JordanDecomposition[m]      the Jordan decomposition
SchurDecomposition[m]       the Schur decomposition
SchurDecomposition[{m,a}]   the generalized Schur decomposition
HessenbergDecomposition[m]  the Hessenberg decomposition

Functions related to eigenvalue problems.

Most square matrices can be reduced to a diagonal matrix of eigenvalues by applying a matrix of their eigenvectors as a similarity transformation. But even when there are not enough eigenvectors to do this, one can still reduce a matrix to a Jordan form in which there are both eigenvalues and Jordan blocks on the diagonal. Jordan decomposition in general writes any square matrix m in the form s.j.s^(-1).
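For example, a defective matrix with a repeated eigenvalue but only one independent eigenvector still has a Jordan decomposition (the matrix here is an arbitrary illustration):

```mathematica
(* {{2, 1}, {-1, 4}} has the double eigenvalue 3 but only one eigenvector *)
{s, j} = JordanDecomposition[{{2, 1}, {-1, 4}}];
j                                     (* {{3, 1}, {0, 3}}: a single Jordan block *)
s.j.Inverse[s] == {{2, 1}, {-1, 4}}   (* True *)
```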
Numerically more stable is the Schur decomposition, which writes any square matrix m in the form qtq*, where q is an orthonormal matrix, and t is block upper-triangular. Also related is the Hessenberg decomposition, which writes a square matrix m in the form php*, where p is an orthonormal matrix, and h can have nonzero elements down to the diagonal below the leading diagonal.
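Both of these decompositions require approximate numerical input, and both can be checked by reconstruction (the matrix here is again an arbitrary illustration):

```mathematica
m = N[{{1, 2, 3}, {4, 5, 6}, {7, 8, 0}}];

{q, t} = SchurDecomposition[m];
Chop[q.t.ConjugateTranspose[q] - m]   (* zero matrix *)

{p, h} = HessenbergDecomposition[m];
Chop[p.h.ConjugateTranspose[p] - m]   (* zero matrix *)
```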