
3.7.10 Advanced Matrix Operations

SingularValueList[m]   the list of nonzero singular values of m
SingularValueList[m, k]   the k largest singular values of m
Norm[m]   the 2-norm of m
Norm[m, p]   the p-norm of m
Norm[m, "Frobenius"]   the Frobenius norm of m

Finding singular values and norms of matrices.

The singular values of a matrix m are the square roots of the eigenvalues of m.m†, where † denotes Hermitian transpose. The number of such singular values is the smaller dimension of the matrix. SingularValueList sorts the singular values from largest to smallest. Very small singular values are usually numerically meaningless. With the option setting Tolerance -> t, SingularValueList drops singular values that are less than a fraction t of the largest singular value. For approximate numerical matrices, the tolerance is by default slightly greater than zero.
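Here is an illustrative machine-precision computation; the outputs are shown as they would typically appear, to six digits.

In[1]:= SingularValueList[N[{{1, 2}, {3, 4}}]]

Out[1]= {5.46499, 0.365966}

For a matrix that is numerically singular, the default tolerance drops the singular value that is effectively zero, leaving only one.

In[2]:= SingularValueList[N[{{1, 2}, {2, 4}}]]

Out[2]= {5.}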

If you multiply the vector for each point on a unit sphere in n-dimensional space by an m×n matrix, then you get an ellipsoid in m-dimensional space, whose principal axes have lengths given by the singular values of the matrix.

The 2-norm of a matrix, Norm[m, 2], is the length of the largest principal axis of this ellipsoid, equal to the largest singular value of the matrix. This is also the maximum 2-norm length of m.v for any possible unit vector v.
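For the matrix used above, the 2-norm is just its largest singular value.

In[3]:= Norm[N[{{1, 2}, {3, 4}}], 2]

Out[3]= 5.46499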

The p-norm of a matrix, Norm[m, p], is in general the maximum p-norm length of m.v that can be attained over unit vectors v. The cases most often considered are p = 1, p = 2 and p = ∞. Also sometimes considered is the Frobenius norm Norm[m, "Frobenius"], whose square is the trace of m.m†.
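As an illustration, here are the 1-, ∞- and Frobenius norms of the same matrix; the 1-norm is the maximum absolute column sum, and the ∞-norm the maximum absolute row sum.

In[4]:= m = N[{{1, 2}, {3, 4}}];

In[5]:= {Norm[m, 1], Norm[m, Infinity], Norm[m, "Frobenius"]}

Out[5]= {6., 7., 5.47723}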

LUDecomposition[m]   the LU decomposition of m
CholeskyDecomposition[m]   the Cholesky decomposition of m

Decomposing matrices into triangular forms.

When you create a LinearSolveFunction using LinearSolve[m], this often works by decomposing the matrix m into triangular forms, and sometimes it is useful to be able to get such forms explicitly.
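Given just the matrix, LinearSolve returns a LinearSolveFunction that can be applied repeatedly to different right-hand sides; the displayed form may vary slightly between versions.

In[6]:= f = LinearSolve[m]

Out[6]= LinearSolveFunction[{2, 2}, <>]

In[7]:= f[{5, 6}]

Out[7]= {-4., 4.5}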

LU decomposition effectively factors any square matrix into a product of lower- and upper-triangular matrices. Cholesky decomposition effectively factors any Hermitian positive-definite matrix into a product of a lower-triangular matrix and its Hermitian conjugate, which can be viewed as the analog of finding a square root of a matrix.
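For a machine-precision matrix, LUDecomposition typically returns the two triangular factors packed into a single matrix, together with a pivot permutation and an estimate of the condition number; here partial pivoting puts the second row first.

In[8]:= {lu, perm, cond} = LUDecomposition[m];

In[9]:= {lu, perm}

Out[9]= {{{3., 4.}, {0.333333, 0.666667}}, {2, 1}}

CholeskyDecomposition returns an upper-triangular matrix u such that ConjugateTranspose[u].u reproduces the original matrix.

In[10]:= CholeskyDecomposition[{{2, 1}, {1, 2}}]

Out[10]= {{Sqrt[2], 1/Sqrt[2]}, {0, Sqrt[3/2]}}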

PseudoInverse[m]   the pseudoinverse of m
QRDecomposition[m]   the QR decomposition of m
SingularValueDecomposition[m]   the singular value decomposition of m
SingularValueDecomposition[m, k]   the SVD associated with the k largest singular values of m

Orthogonal decompositions of matrices.

The standard definition for the inverse of a matrix fails if the matrix is not square or is singular. The pseudoinverse of a matrix can however still be defined. It is set up to minimize the sum of the squares of all entries in m.p - I, where p is the pseudoinverse and I is the identity matrix. The pseudoinverse is sometimes known as the generalized inverse, or the Moore-Penrose inverse. It is particularly used in doing problems related to least-squares fitting.
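As an illustration of least-squares fitting, the pseudoinverse of this 3×2 design matrix gives the coefficients of the best-fit line a + b x through data at x = 1, 2, 3.

In[11]:= PseudoInverse[{{1, 1}, {1, 2}, {1, 3}}]

Out[11]= {{4/3, 1/3, -2/3}, {-1/2, 0, 1/2}}

Applying it to the data values {1, 2, 2} yields the least-squares coefficients {a, b}.

In[12]:= %.{1, 2, 2}

Out[12]= {2/3, 1/2}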

QR decomposition writes any matrix m as a product q†.r, where q is an orthonormal matrix, † denotes Hermitian transpose, and r is an upper-triangular matrix, in which all entries below the leading diagonal are zero.
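The two factors returned by QRDecomposition can be recombined to reproduce the original matrix.

In[13]:= {q, r} = QRDecomposition[m];

In[14]:= ConjugateTranspose[q].r

Out[14]= {{1., 2.}, {3., 4.}}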

Singular value decomposition, or SVD, is an underlying element in many numerical matrix algorithms. The basic idea is to write any matrix m in the form u.w.v†, where w is a matrix with the singular values of m on its diagonal, u and v are orthonormal matrices, and v† is the Hermitian transpose of v.
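Here the matrix w carries the singular values found earlier on its diagonal, and the three factors recombine to give the original matrix.

In[15]:= {u, w, v} = SingularValueDecomposition[m];

In[16]:= w

Out[16]= {{5.46499, 0.}, {0., 0.365966}}

In[17]:= u.w.ConjugateTranspose[v]

Out[17]= {{1., 2.}, {3., 4.}}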

JordanDecomposition[m]   the Jordan decomposition of m
SchurDecomposition[m]   the Schur decomposition of m

Functions related to eigenvalue problems.

Most matrices can be reduced to a diagonal matrix of eigenvalues by applying a matrix of their eigenvectors as a similarity transformation. But even when there are not enough eigenvectors to do this, one can still reduce a matrix to a Jordan form in which there are both eigenvalues and Jordan blocks on the diagonal. Jordan decomposition in general writes any matrix m in the form s.j.s^-1, where j is the Jordan form of m.
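For example, this matrix has the repeated eigenvalue 4 but only one independent eigenvector, so its Jordan form consists of a single 2×2 Jordan block.

In[18]:= {s, j} = JordanDecomposition[{{5, 1}, {-1, 3}}];

In[19]:= j

Out[19]= {{4, 1}, {0, 4}}

In[20]:= s.j.Inverse[s]

Out[20]= {{5, 1}, {-1, 3}}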

Numerically more stable is the Schur decomposition, which writes any matrix m in the form q.t.q†, where q is an orthonormal matrix, and t is block upper triangular.
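The factors from SchurDecomposition likewise recombine to give the original matrix.

In[21]:= {q, t} = SchurDecomposition[m];

In[22]:= q.t.ConjugateTranspose[q]

Out[22]= {{1., 2.}, {3., 4.}}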