Additional functionality related to this tutorial has been introduced in subsequent versions of Mathematica. For the latest information, see Matrices and Linear Algebra.

Matrix Computations

This tutorial reviews the functions that Mathematica provides for carrying out matrix computations. Further information on these functions can be found in standard mathematical texts by such authors as Golub and van Loan or Meyer. The operations described in this tutorial are unique to matrices; an exception is the computation of norms, which also extends to scalars and vectors.

Basic Operations

This section gives a review of some basic concepts and operations that will be used throughout the tutorial to discuss matrix operations.

Norms

The norm of a mathematical object is a measurement of the length, size, or extent of the object. In Mathematica norms are available for scalars, vectors, and matrices.

Norm[num]                 norm of a number
Norm[vec]                 2-norm of a vector
Norm[vec, p]              p-norm of a vector
Norm[mat]                 2-norm of a matrix
Norm[mat, p]              p-norm of a matrix
Norm[mat, "Frobenius"]    Frobenius norm of a matrix

Computing norms in Mathematica.

For numbers, the norm is the absolute value.
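Since the original input cells are not reproduced above, a couple of illustrative inputs (chosen here, not taken from the notebook) show the behavior:

    Norm[-7]        (* 7 *)
    Norm[3 - 4 I]   (* 5: the modulus of a complex number *)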

Vector Norms

For vector spaces, norms allow a measure of distance. This allows the definition of many familiar concepts such as neighborhood, closeness of approximation, and goodness of fit. A vector norm is a function ||x|| that satisfies the following relations:

||x|| >= 0, and ||x|| == 0 only when x is the zero vector
||a x|| == |a| ||x|| for any scalar a
||x + y|| <= ||x|| + ||y||

Typically this function is written with the notation ||x||_p. The subscript p is used to distinguish different norms, of which the p-norms are particularly important. For p >= 1, the p-norm is defined as follows:

||x||_p = (|x_1|^p + |x_2|^p + ... + |x_n|^p)^(1/p)

Some common p-norms are the 1-, 2-, and ∞-norms.

In Mathematica, vector p-norms can be computed with the function Norm; the 1-, 2-, and ∞-norms are all available. The 2-norm is particularly useful and is the default. Norms are implemented for vectors with exact numerical entries and for vectors with symbolic entries. In addition, if a symbolic p is used, the result is symbolic.
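An illustrative sketch with an arbitrary vector:

    v = {3, -4, 12};
    Norm[v, 1]           (* 19 *)
    Norm[v]              (* the default 2-norm: 13 *)
    Norm[v, Infinity]    (* 12: the largest absolute entry *)
    Norm[{x, y}, p]      (* a symbolic p gives a symbolic result *)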

Matrix Norms

Matrix norms are used to give a measure of distance in matrix spaces. This is necessary if you want to quantify what it means for one matrix to be near to another, for example, to say that a matrix is nearly singular.

Matrix norms also use the double bar notation of vector norms. One of the most common matrix norms is the Frobenius norm (also called the Euclidean norm):

||A||_F = Sqrt[ Sum over all i, j of |a_ij|^2 ]

Other common norms are the p-norms. These are defined in terms of vector p-norms as follows:

||A||_p = Max over x != 0 of ||A . x||_p / ||x||_p

Thus the matrix p-norms show the maximum expansion that a matrix can apply to any vector.

In Mathematica the Frobenius norm can be computed with Norm[mat, "Frobenius"], and the matrix 2-norm with Norm[mat]. It is also possible to give an argument to specify the 1, 2, or ∞ matrix p-norms. Computing the expansion ||A . x|| / ||x|| for a collection of different vectors x shows that the maximum value obtained is still less than the 2-norm. All the p-norms are supported for exact numerical matrices. However, only the 1 and ∞ matrix p-norms are supported for symbolic matrices.
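An illustrative sketch with an arbitrary matrix:

    m = {{1., 2.}, {3., 4.}};
    Norm[m, "Frobenius"]    (* Sqrt[1 + 4 + 9 + 16] ≈ 5.477 *)
    Norm[m]                 (* the 2-norm: the largest singular value *)
    Norm[m, 1]              (* 6.: the maximum absolute column sum *)
    Norm[m, Infinity]       (* 7.: the maximum absolute row sum *)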

NullSpace

One of the fundamental subspaces associated with each matrix is the null space. Vectors in the null space of a matrix are mapped to zero by the action of the matrix.

NullSpace[m]    a list of basis vectors whose linear combinations satisfy m . x == 0
For some matrices (for instance, nonsingular square matrices), the null space is empty.
Depending on the matrix, the null space may contain one basis vector, two basis vectors, or more.
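A small sketch with a matrix chosen for illustration:

    m = {{1, 2, 3}, {4, 5, 6}, {5, 7, 9}};   (* singular: the third row is the sum of the first two *)
    NullSpace[m]                             (* one basis vector: {{1, -2, 1}} *)
    m . {1, -2, 1}                           (* {0, 0, 0} *)
    NullSpace[IdentityMatrix[3]]             (* {}: a nonsingular matrix has an empty null space *)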

Rank

The rank of a matrix corresponds to the number of linearly independent rows or columns in the matrix.

MatrixRank[mat]    give the rank of the matrix mat

If an m×n matrix has no linear dependencies between its rows, its rank is equal to m and it is said to be full rank. If a matrix has any dependencies in its rows, then its rank is less than m; in this case the matrix is said to be rank deficient. Note that the rank of a matrix is equal to the rank of its transpose. This means that the number of linearly independent rows is equal to the number of linearly independent columns.
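For instance, with matrices chosen for illustration:

    MatrixRank[{{1, 2}, {3, 4}}]                  (* 2: full rank *)
    MatrixRank[{{1, 2}, {2, 4}}]                  (* 1: rank deficient *)
    MatrixRank[Transpose[{{1, 2}, {2, 4}}]]       (* 1: the transpose has the same rank *)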

For an m×n matrix A the following relations hold: Length[NullSpace[A]] + MatrixRank[A] == n, and Length[NullSpace[Transpose[A]]] + MatrixRank[Transpose[A]] == m. From this it follows that the null space of A is empty if and only if the rank is equal to n, and that the null space of the transpose of A is empty if and only if the rank of A is equal to m.

If the rank is equal to the number of columns, it is said to have full column rank. If the rank is equal to the number of rows, it is said to have full row rank. One way to understand the rank of a matrix is to consider the row echelon form.

Reduced Row Echelon Form

A matrix can be reduced to a row echelon form by a combination of row operations that start with a pivot position at the top-left element and subtract multiples of the pivot row from following rows so that all entries in the column below the pivot are zero. The next pivot is chosen by going to the next row and column. If this pivot is zero and any nonzero entries are in the column beneath, the rows are exchanged and the process is repeated. The process finishes when the last row or column is reached.

A matrix is in row echelon form if any row that consists entirely of zeros is followed only by other zero rows and if, whenever the first nonzero entry in row i is in column j, the entries in columns 1 through j of all rows below row i are zero. The row echelon form is not unique, but its shape (in the sense of the positions of the pivots) is unique. It also gives a way to determine the rank of a matrix as the number of nonzero rows.

A matrix is in reduced row echelon form if it is in row echelon form, each pivot is one, and all the entries above (and below) each pivot are zero. It can be formed by a procedure similar to the procedure for the row echelon form, also taking steps to reduce the pivot to one (by division) and reduce entries in the column above each pivot to zero (by subtracting multiples of the current pivot row). The reduced row echelon form of a matrix is unique.

The reduced row echelon form (and row echelon form) give a way to determine the rank of a matrix as the number of nonzero rows. In Mathematica the reduced row echelon form of a matrix can be computed by the function RowReduce.

RowReduce[mat]    give the reduced row echelon form of the matrix mat

If the reduced row echelon form of a matrix has only one nonzero row, the rank of the matrix is 1. For a 3×2 matrix whose columns are linearly independent, the reduced row echelon form has two nonzero rows, so the rank is 2; MatrixRank computes the same value. The reduced row echelon form of the transpose of the matrix also has two nonzero rows. This is consistent with the fact that the rank of a matrix is equal to the rank of its transpose, even when the matrix is rectangular.
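An illustrative 3×2 example:

    m = {{1, 2}, {3, 4}, {5, 6}};
    RowReduce[m]                 (* {{1, 0}, {0, 1}, {0, 0}}: two nonzero rows *)
    MatrixRank[m]                (* 2 *)
    RowReduce[Transpose[m]]      (* also has two nonzero rows *)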

Inverse

The inverse of a square matrix A is defined by

A . Inverse[A] == Inverse[A] . A == I

where I is the identity matrix. The inverse can be computed in Mathematica with the function Inverse.

Inverse[mat]    give the inverse of the matrix mat

For a sample square matrix, multiplying the matrix by its inverse demonstrates that it satisfies the definition. Not all matrices have an inverse; if a matrix does not have an inverse it is said to be singular. If a matrix is rank deficient then it is singular.
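A small illustrative example:

    m = {{1, 2}, {3, 4}};
    inv = Inverse[m]             (* {{-2, 1}, {3/2, -1/2}} *)
    m . inv                      (* {{1, 0}, {0, 1}} *)
    Inverse[{{1, 2}, {2, 4}}]    (* a singular matrix: Inverse issues a message *)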

The matrix inverse can in principle be used to solve the matrix equation

A . x == b

by multiplying both sides by the inverse of A, so that x == Inverse[A] . b.

However, it is always better to solve the matrix equation directly. Such techniques are discussed in the section "Solving Linear Systems".

PseudoInverse

When the matrix is singular or is not square it is still possible to find an approximate inverse that minimizes ||A . x - b||. When the 2-norm is used, this gives a least squares solution, and the matrix that produces it is known as the pseudoinverse.

PseudoInverse[mat]    give the pseudoinverse of the matrix mat

The pseudoinverse can be computed for the singular matrix defined earlier; the product of the matrix with its pseudoinverse is not the identity matrix, but it is "close" to the identity matrix. The pseudoinverse can also be computed for a rectangular matrix, and again the product is quite close to the identity matrix.
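A sketch with arbitrary illustrative matrices:

    s = {{1, 2}, {2, 4}};             (* singular *)
    s . PseudoInverse[s]              (* {{1/5, 2/5}, {2/5, 4/5}}: not the identity, but a projection onto the column space *)
    r = {{1, 2}, {3, 4}, {5, 6}};     (* rectangular with full column rank *)
    PseudoInverse[r] . r              (* exactly the 2×2 identity matrix in this case *)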

The solution found by the pseudoinverse is a least squares solution. These are discussed in more detail under "Least Squares Solutions".

Determinant

The determinant of an n×n matrix A is defined as the sum over all permutations p of {1, ..., n} of Signature[p] a_(1 p(1)) a_(2 p(2)) ... a_(n p(n)).

It can be computed in Mathematica with the function Det.

Det[mat]          determinant of the matrix mat
Minors[mat]       matrix of minors of mat
Minors[mat, k]    kth minors of mat
For a sample 2×2 matrix, Det returns the determinant directly. One useful property of the determinant is that Det[A] == 0 if and only if A is singular. As already pointed out, if the matrix is singular, it does not have full rank.

Minors

A minor of a matrix is the determinant of a k×k submatrix. The function Minors computes all such minors; by default Minors[m] computes the (n-1)×(n-1) minors of an n×n matrix, so Minors[m] is equivalent to Minors[m, n-1]. The size of the submatrices can be controlled by a second argument: the 1×1 minors, for instance, are just the entries of the matrix. In general, Minors[m, k] computes the determinants of all the possible k×k submatrices that can be generated from the matrix m (this is done by picking k different rows and k different columns). For the 2×2 minors of a 4×4 matrix you obtain a 6×6 matrix because there are only 6 different ways to pick 2 rows from 4 rows, and 6 ways to pick 2 columns from 4 columns.

One of the properties of the rank of a matrix is that it is equal to the size of the largest nonsingular submatrix. This can be demonstrated with Minors: for a matrix of rank 2, the determinant of the whole matrix is zero, but when the 2×2 minors are computed, not all of them are zero, confirming that the rank of the matrix is 2.
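For example, with an illustrative 3×3 matrix of rank 2:

    m = {{1, 2, 3}, {4, 5, 6}, {5, 7, 9}};
    Det[m]           (* 0: the matrix is singular *)
    MatrixRank[m]    (* 2 *)
    Minors[m]        (* the matrix of 2×2 minors; not all entries are zero *)
    Minors[m, 1]     (* the 1×1 minors: just the entries of the matrix *)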

Solving Linear Systems

One of the most important uses of matrices is to represent and solve linear systems. This section discusses how to solve linear systems with Mathematica. It makes strong use of LinearSolve, the main function provided for this purpose.

Solving a linear system involves solving a matrix equation A . x == b. Because A is an m×n matrix, this is a set of m linear equations in n unknowns.

When m == n the system is said to be square. If m > n, there are more equations than unknowns and the system is overdetermined. If m < n, there are fewer equations than unknowns and the system is underdetermined.

square             m == n; a solution may or may not exist
overdetermined     m > n; solutions may or may not exist
underdetermined    m < n; no solutions or infinitely many solutions exist
nonsingular        the number of independent equations equals the number of variables and the determinant is nonzero; a unique solution exists
consistent         at least one solution exists
inconsistent       no solutions exist

Classes of linear systems represented by rectangular matrices.

Note that even though you could solve the matrix equation by computing the inverse via x == Inverse[A] . b, this is not a recommended method. You should use a function that is designed to solve linear systems directly, and in Mathematica this is provided by LinearSolve.

LinearSolve[m, b]    a vector x that solves the matrix equation m . x == b
LinearSolve[m]       a function that can be applied repeatedly to many different b

Solving linear systems with LinearSolve.

LinearSolve solves a matrix equation directly, and the solution can be tested by multiplying it back into the matrix to demonstrate that it solves the problem. You can always generate the equivalent linear equations from the input matrices and use Solve to find the same solution. LinearSolve also works when the right-hand side of the matrix equation is itself a matrix, and if the system is overdetermined it will find a solution whenever one exists.
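A sketch with a small illustrative square system:

    m = {{1, 2}, {3, 5}}; b = {7, 11};
    x = LinearSolve[m, b]      (* {-13, 10} *)
    m . x == b                 (* True *)
    Solve[{x1 + 2 x2 == 7, 3 x1 + 5 x2 == 11}, {x1, x2}]    (* the same solution via equations *)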

If no solution can be found, the system is inconsistent. In this case it might still be useful to find a best-fit solution. This is often done with a least squares technique, which is explored under "Least Squares Solutions".

If the system is underdetermined, there is either no solution or infinitely many solutions and LinearSolve will return one.
A system is consistent if and only if the rank of the matrix m equals the rank of the matrix formed by adjoining b as an extra column to m (known as the augmented matrix); comparing these two ranks demonstrates whether a given system is consistent.
LinearSolve can work with all the different types of matrices that Mathematica supports, including symbolic and exact, machine number, and arbitrary precision. It also works with both dense and sparse matrices. When a sparse matrix is used in LinearSolve, special techniques suitable for sparse matrices are used and the result is the solution vector.
option name    default value
Method         Automatic      the method to use for solving
Modulus        0              take the equation to be modulo n
ZeroTest       (# == 0 &)     the function to be used by symbolic methods for testing for zero

Options for LinearSolve.

LinearSolve has three options that allow users more control. The Modulus and ZeroTest options are used by symbolic techniques and are discussed in the section "Symbolic and Exact Matrices". The Method option allows you to choose methods that are known to be suitable for a particular problem; the default setting of Automatic means that LinearSolve chooses a method based on the input.

Singular Matrices

A matrix is singular if its inverse does not exist. One way to test for singularity is to compute a determinant; this will be zero if the matrix is singular. For a singular matrix, many right-hand sides b give a system with no solution; however, for certain values of b a solution will exist.
When there is no solution, the rank of m does not equal that of the augmented matrix; this confirms that the system is not consistent and cannot be solved by LinearSolve. When a solution exists, the rank of m does equal that of the augmented matrix, confirming that the system is consistent and can be solved by LinearSolve.
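For instance, with an illustrative singular 2×2 matrix:

    m = {{1, 2}, {2, 4}};
    LinearSolve[m, {1, 1}]     (* inconsistent: LinearSolve issues a message *)
    LinearSolve[m, {1, 2}]     (* consistent: returns a solution such as {1, 0} *)
    MatrixRank[m] == MatrixRank[Join[m, Transpose[{{1, 2}}], 2]]    (* True: the ranks agree for the consistent case *)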

In those cases where a solution cannot be found, it is still possible to find a solution that makes a best fit to the problem. One important class of best-fit solutions involves least squares solutions and is discussed in "Least Squares Solutions".

Homogeneous Equations

The homogeneous matrix equation involves a zero right-hand side:

m . x == 0

This equation has a nonzero solution if the matrix m is singular. To test whether a matrix is singular, you can compute the determinant.
To find the solutions of the homogeneous equation you can use the function NullSpace. This returns a list of vectors, each of which solves the homogeneous equation; multiplying the matrix into one of these vectors gives a zero vector (for approximate matrices, the function Chop can be used to replace approximate numbers that are close to 0). The solutions of the homogeneous equation can be used to form an infinite number of solutions to an inhomogeneous equation m . x == b: if x solves the inhomogeneous equation, then adding to x an arbitrary factor times a homogeneous solution gives another vector that also solves the matrix equation.
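A sketch using an exact singular matrix chosen for illustration:

    m = {{1, 2}, {2, 4}};
    ns = First[NullSpace[m]]     (* {-2, 1} *)
    m . ns                       (* {0, 0} *)
    x = LinearSolve[m, {1, 2}]   (* a particular solution, {1, 0} *)
    m . (x + 5 ns)               (* still {1, 2}: adding any multiple of ns gives another solution *)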

Estimating and Calculating Accuracy

An important way to quantify the accuracy of the solution of a linear system is to compute the condition number κ(A). This is defined as follows for a suitable choice of norm:

κ(A) = ||A|| ||Inverse[A]||

It can be shown that for the matrix equation A . x == b the relative error in x is κ(A) times the relative error in A and b. Thus, the condition number quantifies the accuracy of the solution. If the condition number of a matrix is large, the matrix is said to be ill-conditioned. You cannot expect a good solution from an ill-conditioned system. For some extremely ill-conditioned systems it is not possible to obtain any solution.

In Mathematica an approximation of the condition number can be computed with the function LinearAlgebra`MatrixConditionNumber.

LinearAlgebra`MatrixConditionNumber[mat]
infinity-norm approximate condition number of a matrix of approximate numbers
LinearAlgebra`MatrixConditionNumber[mat, Norm -> p]
p-norm approximate condition number of a matrix, where p may be 1, 2, or ∞
The condition number can be computed for any matrix of approximate numbers. For a singular matrix the condition number is ∞. A matrix with a large condition number is said to be ill-conditioned, and if a matrix is multiplied by itself, the condition number of the product increases. When you solve a matrix equation involving an ill-conditioned matrix, the result may not be accurate; the solution is less accurate for the product matrix than for the original matrix. The way to avoid these problems is to avoid constructing matrices that may be ill-conditioned, for example, by avoiding multiplying matrices by themselves. Another way to solve such problems more accurately is to use Mathematica's arbitrary-precision computation; this is discussed under "Arbitrary-Precision Matrices".
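As an illustration, the condition number of a 6×6 Hilbert matrix (a standard ill-conditioned example) can be computed from the definition above, or with the approximate function named in the table (assuming it is available as written there):

    m = N[Table[1/(i + j - 1), {i, 6}, {j, 6}]];     (* a Hilbert matrix *)
    Norm[m, Infinity] Norm[Inverse[m], Infinity]     (* the ∞-norm condition number from the definition *)
    LinearAlgebra`MatrixConditionNumber[m]           (* the approximate estimate *)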

Symbolic and Exact Matrices

LinearSolve works for all the different types of matrices that can be represented in Mathematica. These are described in more detail in "Matrix Types".

Here is an example of working with purely symbolic matrices.
After solving, the original matrix can be multiplied into the solution; the result of this multiplication needs some postprocessing with Simplify to reach the expected value. This demonstrates the need for intermediate processing in symbolic linear algebra computations. Without care, these intermediate results can become extremely large and make the computation run slowly.
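A minimal symbolic sketch:

    m = {{a, b}, {c, d}};
    x = LinearSolve[m, {e, f}];
    m . x                  (* not obviously {e, f} until simplified *)
    Simplify[m . x]        (* {e, f} *)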

In order to simplify the intermediate expressions, the ZeroTest option might be useful.

LinearSolve can also be used to solve exact problems.

There are a number of methods that are specific to symbolic and exact computation: "CofactorExpansion", "DivisionFreeRowReduction", and "OneStepRowReduction". These are discussed in the section "Symbolic Methods".

Row Reduction

Row reduction involves adding multiples of rows together so as to produce zero elements when possible. The final matrix is in reduced row echelon form as described previously. If the input matrix is a nonsingular square matrix, the result will be an identity matrix.
Applying RowReduce to a matrix augmented with the right-hand side can be used as a way to solve a system of equations; the result can be compared with the corresponding use of LinearSolve.
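For example, reducing the augmented matrix of an illustrative system:

    m = {{1, 2}, {3, 5}}; b = {7, 11};
    RowReduce[Join[m, Transpose[{b}], 2]]    (* {{1, 0, -13}, {0, 1, 10}}: the solution appears in the last column *)
    LinearSolve[m, b]                        (* {-13, 10} *)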

Saving the Factorization

Many applications of linear systems involve the same matrix but different right-hand sides . Because a significant part of the effort involved in solving the system involves processing , it is common to save the factorization and use it to solve repeated problems. In Mathematica, this is done by using a one-argument form of LinearSolve; this returns a functional that you can apply to different vectors to obtain each solution.
LinearSolve[m] returns a LinearSolveFunction. When you apply this LinearSolveFunction to a particular right-hand side, the solution of the corresponding matrix equation is obtained; a different right-hand side yields a different solution, which solves its own matrix equation.
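An illustrative sketch:

    m = {{1, 2}, {3, 5}};
    f = LinearSolve[m]      (* a LinearSolveFunction storing the factorization of m *)
    f[{7, 11}]              (* {-13, 10} *)
    f[{1, 0}]               (* {-5, 3} *)
    m . f[{1, 0}]           (* {1, 0} *)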

The one-argument form of LinearSolve works in a completely equivalent way to the two-argument form. It works with the same range of input matrices, for example, returning the expected results for symbolic, exact, or sparse matrix input. It also accepts the same options.

One issue with the one-argument form is that for certain input matrices the factorization cannot be saved. For example, if the system is overdetermined, the exact solution is not certain to exist. In this case a message is generated that warns you that the factorization will be repeated each time the functional is applied to a particular right-hand side.
When the factorization cannot be saved, the LinearSolveFunction still works: for a right-hand side for which a solution exists, the solution is returned; for a right-hand side with no solution, an error is encountered.

Methods

LinearSolve provides a number of different techniques to solve matrix equations that are specialized to particular problems. You can select between these using the option Method. In this way a uniform interface is provided to all the functionality that Mathematica provides for solving matrix equations.

The default setting of the option Method is Automatic. As is typical for Mathematica, this means that the system will make an automatic choice of method to use. For LinearSolve if the input matrix is numerical and dense, then a method using LAPACK routines is used; if it is numerical and sparse, a multifrontal direct solver is used. If the matrix is symbolic then specialized symbolic routines are used.

The details of the different methods are now described.

LAPACK

LAPACK is the default method for solving dense numerical matrices. When the matrix is square and nonsingular the routines dgesv, dlange, and dgecon are used for real matrices and zgesv, zlange, and zgecon for complex matrices. When the matrix is nonsquare or singular dgelss is used for real matrices and zgelss for complex matrices. More information on LAPACK is available in the references.

If the input matrix uses arbitrary-precision numbers, then LAPACK algorithms extended for arbitrary-precision computation are used.

Multifrontal

The multifrontal method is a direct solver used by default if the input matrix is sparse; it can also be selected by specifying a method option.

This reads in a sample sparse matrix stored in the Matrix Market format. More information on the importing of matrices is under "Import and Export of Sparse Matrices".
Once the sample matrix has been imported, LinearSolve solves the matrix equation in the usual way.

If the input matrix to the multifrontal method is dense, it is converted to a sparse matrix.

The implementation of the multifrontal method uses the "UMFPACK" library.

Krylov

The Krylov method is an iterative solver that is suitable for large sparse linear systems, such as those arising from numerical solving of PDEs. In Mathematica two Krylov methods are implemented: conjugate gradient (for symmetric positive definite matrices) and BiCGSTAB (for nonsymmetric systems). It is possible to set the method that is used and a number of other parameters by using appropriate suboptions.

option name             default value
MaxIterations           Automatic     the maximum number of iterations
Method                  BiCGSTAB      the method to use for solving
Preconditioner          None          the preconditioner
ResidualNormFunction    Automatic     the norm to minimize
Tolerance               Automatic     the tolerance to use to terminate the iteration

Suboptions of the Krylov method.

The conjugate gradient method is suitable for symmetric positive definite systems, for which it always converges to a solution (though the convergence may be slow); if the matrix is not symmetric positive definite, the conjugate gradient method may not converge to a solution. The default method for Krylov, BiCGSTAB, is more expensive but more generally applicable.

For a matrix that is not symmetric positive definite, the conjugate gradient method does not converge, but the default BiCGSTAB method converges and returns a solution, which can be tested against the original system.
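A sketch of the Krylov method on an illustrative diagonally dominant sparse band matrix (the matrix and sizes are chosen here, not taken from the original cells):

    n = 1000;
    m = SparseArray[{{i_, i_} -> 4., {i_, j_} /; Abs[i - j] == 1 -> -1.}, {n, n}];
    b = Table[1., {n}];
    x = LinearSolve[m, b, Method -> "Krylov"];
    Norm[m . x - b]      (* a small residual indicates convergence *)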
Here is an example that shows the benefit of the Krylov method and the use of a preconditioner. First, a function that builds a structured banded sparse matrix is defined.
Applying this function to build a larger system, with a 10000×10000 matrix, allows a comparison: the default sparse solver (the multifrontal direct solver) is slower than the Krylov method for this problem, and using the ILU preconditioner is faster still. Currently, the preconditioner can only work if the matrix has nonzero diagonal elements.

At present, only the ILU preconditioner is built-in. You can still define your own preconditioner by defining a function that is applied to the input matrix. An example that involves solving a diagonal matrix is shown next.

First the problem is set up and solved without a preconditioner.
First the problem is set up and solved without a preconditioner; when a preconditioner function is used instead, there is a significant improvement in the speed of the computation.

Generally, a problem will not be structured so that it can benefit so much from such a simple preconditioner. However, this example is useful since it shows how to create and use your own preconditioner.

If the input matrix to the Krylov method is dense, the result is still found because the method is based on matrix/vector multiplication.

The Krylov method can be used to solve systems that are too large for a direct solver. However, it is not a general solver, being particularly suitable for those problems that have some form of diagonal dominance.

Cholesky

The Cholesky method is suitable for solving symmetric positive definite systems.
For a symmetric positive definite matrix, selecting the Cholesky method in LinearSolve is faster than the default method. The Cholesky method is also more stable: there are matrices for which the default LU decomposition method reports a potential error, while the Cholesky method succeeds.
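A sketch with a matrix constructed to be symmetric positive definite (an illustrative system, not the original one):

    n = 1000;
    a = RandomReal[{0, 1}, {n, n}];
    spd = (a + Transpose[a])/2 + n IdentityMatrix[n];   (* symmetric positive definite by construction *)
    b = RandomReal[{0, 1}, n];
    x = LinearSolve[spd, b, Method -> "Cholesky"];
    Norm[spd . x - b]                                   (* a small residual *)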

For dense matrices the Cholesky method uses LAPACK functions such as dpotrf and dpotrs for real matrices and zpotrf and zpotrs for complex matrices. For sparse matrices the Cholesky method uses the "TAUCS" library.

Symbolic Methods

There are a number of methods that are specific to symbolic and exact computation: "CofactorExpansion", "DivisionFreeRowReduction", and "OneStepRowReduction". These can be demonstrated by repeating the same symbolic LinearSolve computation with each of the three method settings; all return equivalent results.

Least Squares Solutions

This section follows from the previous section. It is concerned with finding solutions to the matrix equation A . x == b, where A is an m×n matrix with m > n (i.e., there are more equations than unknowns). In this case, there may well be no solution, and LinearSolve will issue an error message.

In these cases it is possible to find a best-fit solution that minimizes ||A . x - b||_p. A particularly common choice of p is 2; this generates a least squares solution. These are often used because the quantity being minimized is differentiable in x and because the 2-norm is preserved under orthogonal transformations.

If the rank of A is n (i.e., it has full column rank), it can be shown (Golub and van Loan) that there is a unique solution to the least squares problem and that it solves the linear system

Transpose[A] . A . x == Transpose[A] . b

These are called the normal equations. Although they can be solved directly to get a least squares solution, this is not recommended, because if the matrix A is ill-conditioned then the product Transpose[A] . A will be even more ill-conditioned. Ways of finding least squares solutions for full rank matrices are explored in "Examples: Full Rank Least Squares Solutions".

It should be noted that best-fit solutions that minimize other norms, such as the 1- and ∞-norms, are also possible. A number of ways to find solutions to overdetermined systems that are suitable for special types of matrices or that minimize other norms are given under "Minimization of 1 and Infinity Norms".

A general way to find a least squares solution to an overdetermined system is to use a singular value decomposition to form a matrix that is known as the pseudoinverse of a matrix. In Mathematica this is computed with PseudoInverse. This technique works even if the input matrix is rank deficient. The basis of the technique is that if A == u . σ . Transpose[v] is the singular value decomposition, then the pseudoinverse is v . σinv . Transpose[u], where σinv is formed by transposing σ and replacing each nonzero singular value with its reciprocal.
Here is an overdetermined system; you can see that the matrix is rank deficient.
In[4]:=
Click for copyable input
Out[6]=
The least squares solution can still be found using PseudoInverse.
In[7]:=
Click for copyable input
Out[7]=
This shows how close the solution actually is.
In[8]:=
Click for copyable input
Out[8]=
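For instance, an illustrative overdetermined system (here with full column rank for simplicity):

    mat = {{1., 1.}, {1., 2.}, {1., 3.}, {1., 4.}};
    rhs = {2., 3., 5., 6.};
    x = PseudoInverse[mat] . rhs     (* the least squares solution *)
    Norm[mat . x - rhs]              (* the size of the residual *)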

Data Fitting

Scientific measurements often result in collections of ordered pairs of data, (t_i, y_i). In order to make predictions at times other than those that were measured, it is common to try to fit a curve through the data points. If the curve is a straight line, then the aim is to find unknown coefficients a and b for which

y_i = a + b t_i

for all the data pairs. Collecting these equations for every data point gives a matrix equation whose coefficient matrix has rows {1, t_i} acting on the unknown vector {a, b}; this is immediately recognized as an overdetermined system of linear equations.

Fitting a more general curve, for example a polynomial

y_i = c_0 + c_1 t_i + c_2 t_i^2 + ... + c_k t_i^k

is equivalent to a matrix equation whose coefficient matrix has rows {1, t_i, t_i^2, ..., t_i^k}; this is a Vandermonde matrix. In fact any function that is linear in the unknown coefficients can be used.

An example of how to find best-fit parameters starts with a set of measurements, which can be plotted graphically. You could solve this with the PseudoInverse technique introduced in "PseudoInverse": first split the data into t and y components, then form the left-hand side matrix, and obtain the best-fit parameters by applying the pseudoinverse of that matrix to the y values. Mathematica also provides the function FindFit, which finds best-fit solutions to data directly; its solution agrees with that found by the PseudoInverse. Finally, a plot can be made that shows the data points and the fitted curve.
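A sketch of both approaches on made-up measurements:

    data = {{1., 2.1}, {2., 2.9}, {3., 5.2}, {4., 5.8}};
    t = data[[All, 1]]; y = data[[All, 2]];
    design = Transpose[{ConstantArray[1., Length[t]], t}];   (* rows of the form {1, t_i} *)
    PseudoInverse[design] . y                                (* least squares {a, b} *)
    FindFit[data, a + b x, {a, b}, x]                        (* the same straight-line fit *)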

The function FindFit is quite general; it can also fit functions to data that is not linear in the parameters.

Eigensystem Computations

The solution of the eigenvalue problem is one of the major areas for matrix computations. It has many applications in physics, chemistry, and engineering. For an n×n matrix A the eigenvalues are the n roots of its characteristic polynomial, Det[A - λ I]. The set of roots, {λ_1, ..., λ_n}, is called the spectrum of the matrix. For each eigenvalue λ_i, the nonzero vectors x_i that satisfy

A . x_i == λ_i x_i

are described as eigenvectors. The matrix of the eigenvectors, if it exists and is nonsingular, may be used as a similarity transformation to form a diagonal matrix with the eigenvalues on the diagonal elements. Many important applications of eigenvalues involve the diagonalization of matrices in this way.

Mathematica has various functions for computing eigenvalues and eigenvectors.

Eigenvalues[m]                    a list of the eigenvalues of m
Eigenvectors[m]                   a list of the eigenvectors of m
Eigensystem[m]                    a list of the form {eigenvalues, eigenvectors}
CharacteristicPolynomial[m, x]    the characteristic polynomial of m

Eigenvalues and eigenvectors.

For a sample 2×2 matrix, Eigenvalues computes the eigenvalues and Eigenvectors computes the eigenvectors; Eigensystem computes both at once. You can confirm that each eigenvalue and its eigenvector satisfy the definition. You should note that Eigenvectors and Eigensystem return a list of eigenvectors; this means that if you want a matrix whose columns are the eigenvectors you must take a transpose. You can also obtain the characteristic polynomial of the matrix with CharacteristicPolynomial; finding its roots recovers the eigenvalues of the matrix.
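An illustrative example:

    m = {{2, 1}, {1, 2}};
    Eigenvalues[m]                          (* {3, 1} *)
    Eigenvectors[m]                         (* {{1, 1}, {-1, 1}} *)
    {vals, vecs} = Eigensystem[m];
    m . vecs[[1]] == vals[[1]] vecs[[1]]    (* True *)
    CharacteristicPolynomial[m, x]          (* 3 - 4 x + x^2 *)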
Eigenvalue computations work for sparse matrices as well as for dense matrices. If one of the eigenvalues of a matrix is zero, the null space of the matrix is not empty.

Certain applications of eigenvalues do not require all of the eigenvalues to be computed. Mathematica provides a mechanism for obtaining only some eigenvalues.

Eigenvalues[m, k]      the largest k eigenvalues of m
Eigenvectors[m, k]     the corresponding eigenvectors of m
Eigenvalues[m, -k]     the smallest k eigenvalues of m
Eigenvectors[m, -k]    the corresponding eigenvectors of m
This is particularly useful if the matrix is very large, for example a 10000×10000 sparse matrix, where only the largest eigenvalue may be needed.
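A sketch with a large sparse matrix whose dominant eigenvalue is deliberately well separated so that the iterative computation converges quickly (an illustrative construction):

    s = SparseArray[{Band[{1, 1}] -> Join[{1000.}, ConstantArray[1., 9999]],
        Band[{1, 2}] -> 0.5, Band[{2, 1}] -> 0.5}, {10000, 10000}];
    Eigenvalues[s, 1]              (* only the largest eigenvalue is computed *)
    Eigenvectors[s, 1] // Short    (* the corresponding eigenvector *)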
option name    default value
Cubics         False          whether to return radicals for 3×3 matrices
Method         Automatic      the method to use for solving
Quartics       False          whether to return radicals for 4×4 matrices
ZeroTest       (# == 0 &)     the function to be used by symbolic methods for testing for zero

Options for Eigensystem.

Eigensystem has four options that allow users more control. The Cubics, Quartics, and ZeroTest options are used by symbolic techniques and are discussed under "Symbolic and Exact Matrices". The Method option allows knowledgeable users to choose methods that are particularly appropriate for their problems; the default setting of Automatic means that Eigensystem makes a choice of a method that is generally suitable.

Eigensystem Properties

This section describes certain properties of eigensystem computations. It should be remembered that because the eigenvalues of an n×n matrix can be associated with the roots of an nth-degree polynomial, there will always be n eigenvalues (counted with multiplicity).

Even if a matrix only contains real numbers, its eigenvalues may not be real.

The eigenvalues of a matrix may be repeated; eigenvalues that are not repeated are called simple. A repeated eigenvalue may have as many eigenvectors as the number of times it is repeated; such an eigenvalue is called semisimple. A repeated eigenvalue may also have fewer eigenvectors than the number of times it is repeated; in this case it is referred to as a defective eigenvalue, and the matrix is referred to as a defective matrix.

A matrix may have a zero eigenvalue and still have a complete set of eigenvectors, or it may have a zero eigenvalue and not have a complete set of eigenvectors.

If a matrix is real and symmetric, all of the eigenvalues are real, and there is an orthonormal basis of eigenvectors. A symmetric real matrix that has positive eigenvalues is known as positive definite; a symmetric real matrix that has nonnegative eigenvalues is known as positive semidefinite.

If a matrix is complex and Hermitian (i.e., equal to its conjugate transpose), all of the eigenvalues are real and there is an orthonormal basis of eigenvectors. A Hermitian matrix that has positive eigenvalues is also known as positive definite.
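A few small illustrations of these properties:

    Eigenvalues[{{0, 1}, {-1, 0}}]     (* a real matrix with complex eigenvalues ±I *)
    Eigensystem[{{1, 1}, {0, 1}}]      (* defective: the repeated eigenvalue 1 has only one independent eigenvector *)
    Eigensystem[{{2, 1}, {1, 2}}]      (* real symmetric: real eigenvalues and orthogonal eigenvectors *)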

Diagonalizing a Matrix

One of the uses of eigensystem computations is to provide a similarity transformation that diagonalizes a matrix.

Similarity transformations preserve a number of useful properties of a matrix such as determinant, rank, and trace.

If the matrix of eigenvectors is singular, then the matrix cannot be diagonalized. This happens if the matrix is defective (i.e., there is an eigenvalue that does not have a complete set of eigenvectors); in that case the matrix of column eigenvectors is singular, and a similarity transformation that diagonalizes the original matrix does not exist.
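An illustrative example:

    m = {{2, 1}, {1, 2}};
    {vals, vecs} = Eigensystem[m];
    p = Transpose[vecs];                (* columns are the eigenvectors *)
    Inverse[p] . m . p                  (* {{3, 0}, {0, 1}}: the diagonalized matrix *)
    Det[Transpose[Eigenvectors[{{1, 1}, {0, 1}}]]]    (* 0: a defective matrix cannot be diagonalized this way *)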

Symbolic and Exact Matrices

As is typical for Mathematica, if a computation is done for a symbolic or exact matrix it will use symbolic computer algebra techniques and return a symbolic or exact result.

For a sample 3×3 matrix of integers, Eigenvalues returns the exact eigenvalues; because the characteristic polynomial is a cubic that does not factor, the result is expressed in terms of Root objects. A machine-precision approximation to the result can be computed by applying N. The result can also be returned in terms of radicals by setting the Cubics option to True.

Generalized Eigenvalues

For n×n matrices a and b the generalized eigenvalues are the roots of the characteristic polynomial Det[a - λ b]. For each generalized eigenvalue λ_i, the nonzero vectors x_i that satisfy

a . x_i == λ_i b . x_i

are described as generalized eigenvectors.

Eigenvalues[{a, b}]     the generalized eigenvalues of a and b
Eigenvectors[{a, b}]    the generalized eigenvectors of a and b
Eigensystem[{a, b}]     the generalized eigensystem of a and b

Generalized eigenvalues and eigenvectors.

For a pair of 2×2 matrices, Eigenvalues[{a, b}] computes the generalized eigenvalues. If the two matrices share a common nonzero null space (for example, a pair of matrices with a common one-dimensional null space), one of the generalized eigenvalues is Indeterminate.

Methods

Eigenvalue computations are solved with a number of different techniques that are specialized to particular problems. You can select between these using the option Method. In this way a uniform interface is provided to all the functionality that Mathematica provides.

The default setting of the option Method is Automatic. As is typical for Mathematica, this means that the system will make an automatic choice of method to use. For eigenvalue computations, when the input is an n×n matrix of machine numbers and the number of eigenvalues requested is less than 20% of n, an Arnoldi method is used. Otherwise, if the input matrix is numerical, then a method using LAPACK routines is used. If the matrix is symbolic, then specialized symbolic routines are used.

The details of the different methods are now described.

LAPACK

LAPACK is the default method for computing the entire set of eigenvalues and eigenvectors. When the matrix is unsymmetric, dgeev is used for real matrices and zgeev is used for complex matrices. For symmetric matrices, dsyevr is used for real matrices and zheevr is used for complex matrices. For generalized eigenvalues the routine dggev is used for real matrices and zggev for complex matrices. More information on LAPACK is available in the references section.

It should be noted that the computation of eigenvalues for a symmetric matrix is faster because Mathematica detects this case and switches to the appropriate method.

If the input matrix uses arbitrary-precision numbers, LAPACK algorithms extended for arbitrary-precision computation are used.

Arnoldi

The Arnoldi method is an iterative method used to compute a finite number of eigenvalues. The implementation of the Arnoldi method uses the "ARPACK" library. The most general problem that can be solved with this technique is to compute a few selected eigenvalues λ and eigenvectors x for the generalized problem

A . x == λ B . x

Because it is an iterative technique and only finds a few eigenvalues, it is often suitable for sparse matrices. One of the features of the technique is the ability to apply a shift σ and work with the equivalent shifted problem

(A - σ B) . x == (λ - σ) B . x

This is effective for finding eigenvalues near to σ.

The following suboptions can be given to the Arnoldi method.

option name       default value
BasisSize         Automatic     the size of the Arnoldi basis
Criteria          Magnitude     the criteria used to select which eigenvalues are returned
MaxIterations     Automatic     the maximum number of iterations
Shift             None          an Arnoldi shift
StartingVector    Automatic     the initial vector to use
Tolerance         Automatic     the tolerance to use to terminate the iteration

Suboptions of the Arnoldi method.

For a small sparse matrix, the largest eigenvalue can be computed by specifying that the Arnoldi method is used, and an eigenvalue near a given value can be computed by supplying that value as a shift. If a shift is given that exactly matches an eigenvalue, an error may result.

For a 1500×1500 tridiagonal matrix with 2 on the diagonal and -1 off the diagonal, the algorithm does not converge with the default settings. When the technique does not converge it still returns a result, but this may not be very accurate. One way to accelerate the convergence is to use the result that was generated as a shift. An alternative way to get convergence is to increase the number of iterations. Convergence may also be achieved by increasing the Arnoldi basis size in addition to the number of iterations; the increased basis size makes the algorithm converge in a smaller number of iterations, and even though each iteration is more expensive, the overall computation is faster. The Arnoldi algorithm with a basis size of 30 is still often faster and less memory intensive than the dense eigenvalue computation. (It should be noted that specific timings on individual machines can vary.)
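A sketch of the Arnoldi method and its suboptions (the suboption names follow the table above; the matrix is the tridiagonal example described in the text):

    s = SparseArray[{Band[{1, 1}] -> 2., Band[{1, 2}] -> -1., Band[{2, 1}] -> -1.}, {1500, 1500}];
    Eigenvalues[s, 1, Method -> "Arnoldi"]                               (* with default settings this may warn that it has not converged *)
    Eigenvalues[s, 1, Method -> {"Arnoldi", "MaxIterations" -> 10000}]   (* more iterations can give convergence *)
    Eigenvalues[s, 1, Method -> {"Arnoldi", "BasisSize" -> 30}]          (* a larger basis can converge in fewer iterations *)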

For many large sparse systems that occur in practical computations the Arnoldi algorithm is able to converge quite quickly.

Symbolic Methods

Symbolic eigenvalue computations work by interpolating the characteristic polynomial.

Matrix Decompositions

This section will discuss a number of standard techniques for working with matrices. These are often used as building blocks for solving matrix problems. The decompositions fall into the categories of factorizations, orthogonal transformations, and similarity transformations.

LU Decomposition

The LU decomposition of a matrix is frequently used as part of a Gaussian elimination process for solving a matrix equation. In Mathematica there is no need to use an LU decomposition to solve a matrix equation, because the function LinearSolve does this for you, as discussed in the section "Solving Linear Systems". Note that if you want to save the factorization of a matrix, so that it can be used to solve the same left-hand side, you can use the one-argument form of LinearSolve, as demonstrated in the section "Saving the Factorization".

The LU decomposition with partial pivoting of a square nonsingular matrix m involves the computation of the unique matrices L, U, and P such that

P . m == L . U

where L is lower triangular with ones on the diagonal, U is upper triangular, and P is a permutation matrix. Once it is found, it can be used to solve the matrix equation m . x == b by finding the solution to the equivalent system

L . U . x == P . b

This can be done by first solving the following for y.

L . y == P . b

Then solve the following for x to get the desired solution.

U . x == y

Because both L and U are triangular matrices, it is particularly easy to solve these systems.

Despite the fact that in Mathematica you do not need to use the LU decomposition for solving matrix equations, Mathematica provides the function LUDecomposition for computing the LU decomposition.

LUDecomposition[m]    the LU decomposition with partial pivoting

LUDecomposition returns a list of three elements. The first element is a combination of upper and lower triangular matrices, the second element is a vector specifying rows used for pivoting (a permutation vector which is equivalent to the permutation matrix), and the third element is an estimate of the condition number.

One of the features of the LU decomposition is that the lower and upper triangular matrices are stored in the same combined matrix. The individual L and U factors can be obtained by element-by-element multiplication of the combined matrix with sparse lower- and upper-triangular patterns; this vectorized operation makes the process more efficient, and these techniques are discussed further in "Programming Efficiency".

Because the original matrix was nonsingular, this is a true factorization: multiplying L and U recreates the original matrix permuted with the row pivot vector, and subtracting the product of L and U from the permuted original matrix gives a difference that is very close to zero.

The diagonal elements of the U matrix are known as the pivots. If any zero pivots emerge during the computation of the LU factorization, the matrix is singular; in such a case LUDecomposition produces a warning.
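A sketch of extracting the factors, following the element-by-element approach described above (the matrix is an arbitrary illustration):

    m = N[{{1, 2, 3}, {4, 5, 6}, {7, 8, 0}}];
    {lu, p, c} = LUDecomposition[m];    (* combined factors, pivot vector, condition estimate *)
    l = lu SparseArray[{i_, j_} /; j < i -> 1, {3, 3}] + IdentityMatrix[3];
    u = lu SparseArray[{i_, j_} /; j >= i -> 1, {3, 3}];
    Chop[l . u - m[[p]]]                (* a zero matrix: L.U equals the row-permuted original *)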

Cholesky Decomposition

The Cholesky decomposition of a symmetric positive definite matrix m is a factorization into a unique upper triangular matrix u such that

Transpose[u] . u == m

This factorization has a number of uses; one of them is that, because it is a triangular factorization, it can be used to solve systems of equations involving symmetric positive definite matrices. (The way that a triangular factorization can be used to solve a matrix equation is shown in the section "LU Decomposition".) If a matrix is known to be of this form, the Cholesky factorization is preferred over the LU factorization because it is faster to compute. If you want to solve a matrix equation using the Cholesky factorization, you can do this directly from LinearSolve using the Cholesky method; this is described in a previous section.

The Cholesky factorization can be computed in Mathematica with the function CholeskyDecomposition.

CholeskyDecomposition[m]    the Cholesky decomposition of the matrix m

Multiplying the transpose of the result by the result itself reconstructs the original matrix. If the matrix is not positive definite, then the Cholesky decomposition does not exist; there are a number of equivalent definitions of positive definite, such as all of the eigenvalues being positive.

One way to test if a matrix is positive definite is to see if the Cholesky decomposition exists.

Note that if the matrix is complex, the factorization is defined using the conjugate transpose: multiplying the conjugate transpose of the result into the result demonstrates that the factorization of a Hermitian positive definite matrix is correct.
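For example:

    m = {{2, 1}, {1, 2}};
    u = CholeskyDecomposition[m];
    Transpose[u] . u == m                 (* True *)
    h = {{2, I}, {-I, 2}};                (* Hermitian positive definite *)
    uh = CholeskyDecomposition[h];
    ConjugateTranspose[uh] . uh == h      (* True *)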

Cholesky and LU Factorizations

For a symmetric positive definite matrix m, the Cholesky factorization is related to the LU factorization (computed without row interchanges) by

m == L . U == L . D . Transpose[L]

where D is the diagonal matrix of pivots (the diagonal of U). The Cholesky factor is then

CholeskyDecomposition[m] == Transpose[L . Sqrt[D]]

This can be demonstrated by computing the LU decomposition of a symmetric positive definite matrix, extracting the L matrix and the diagonal matrix of pivots D, and finally computing L . Sqrt[D]; its transpose is equal to the Cholesky decomposition.

Orthogonalization

Orthogonalization generates an orthonormal basis from an arbitrary basis. An orthonormal basis is a basis, {v_1, ..., v_n}, for which

v_i . v_j == 1 if i == j, and 0 otherwise

In Mathematica a set of vectors can be orthogonalized with the function Orthogonalize.

Orthogonalize[{v1, v2, ...}]    generate an orthonormal set from the given list of real vectors

For a set of three vectors that form a basis for R^3 but all tend to lie in roughly the same direction, Orthogonalize computes an orthonormal basis; a plot of the result shows that the orthonormal vectors are much more spread out. The resulting vectors are orthonormal: the dot product of each vector with itself is 1, and the dot product of a vector with any other vector is 0. Outer can be used to compare all the vectors with each other at once.
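For instance, with three nearly parallel vectors chosen for illustration:

    basis = {{1., 0.9, 1.1}, {1.2, 1., 0.9}, {0.9, 1.1, 1.}};
    ortho = Orthogonalize[basis];
    Chop[Outer[Dot, ortho, ortho, 1]]    (* the 3×3 identity matrix: the vectors are orthonormal *)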

By default a Gram-Schmidt orthogonalization is computed, but a number of other orthogonalizations can be computed.

GramSchmidt
ModifiedGramSchmidt
Reorthogonalization
Householder

Orthogonalization methods.

An alternative method can be selected with the Method option, for example Method -> "Householder".

QR Decomposition

The QR decomposition of a rectangular matrix m with linearly independent columns involves the computation of matrices q and r such that

m == q . r

where the columns of q form an orthonormal basis for the columns of m and r is upper triangular. It can be computed in Mathematica with the function QRDecomposition.

QRDecomposition[m]    the QR decomposition of a matrix

QRDecomposition returns a list of two matrices. In fact the first element of the result is the list of the orthonormal columns of q; to generate the matrix q it is necessary to compute the transpose of this first element. This is a matrix factorization, so multiplying the transpose of the first element into r reproduces m. In addition, the columns of q are orthonormal; because QRDecomposition returns the list of columns, each can be extracted with a single part index, and dot products between them demonstrate the orthonormal properties. The QR decomposition is also defined for rectangular matrices, in which case the number of rows differs from the number of columns.
For an m×n input matrix with m > n (an overdetermined system), the result of QRDecomposition is two matrices: an n×m matrix whose rows are the orthonormal columns of q, and an n×n upper-triangular matrix r. (Note that to generate the matrix q, which is m×n, it is necessary to transpose the first result from QRDecomposition.)
This is often referred to as the thin QR decomposition; see for example Golub and van Loan. If you wish to compute the full QR decomposition, it is possible to pad r with extra rows of zeros and q with extra orthonormal columns. This can be done with a small auxiliary function; the q and r computed by such a function are m×m and m×n, respectively.
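A sketch with an illustrative 3×2 matrix:

    m = {{1., 2.}, {3., 4.}, {5., 6.}};
    {qt, r} = QRDecomposition[m];    (* qt holds the orthonormal columns of q as its rows *)
    Dimensions /@ {qt, r}            (* {{2, 3}, {2, 2}} *)
    Chop[Transpose[qt] . r - m]      (* a zero matrix: the factorization reproduces m *)
    Chop[qt . Transpose[qt]]         (* the 2×2 identity: the columns of q are orthonormal *)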

Solving Systems of Equations

For a square nonsingular matrix, the QR decomposition can be used to solve the matrix equation m . x == b, as is also the case for the LU decomposition. However, the QR decomposition is also useful for solving the matrix equation when the matrix is rectangular.

One particular application of the QR decomposition is to find least squares solutions of an overdetermined system A . x == b by solving the system of normal equations

Transpose[A] . A . x == Transpose[A] . b

Because A == Q . R, this can be rewritten as

Transpose[R] . Transpose[Q] . Q . R . x == Transpose[R] . Transpose[Q] . b

Since Transpose[Q] . Q is the identity, the normal equations simplify to

Transpose[R] . R . x == Transpose[R] . Transpose[Q] . b

and because Transpose[R] is nonsingular this simplifies to

R . x == Transpose[Q] . b

Because R is triangular this is a particularly easy system to solve; sample code to implement this technique is given in "Examples: Least Squares QR".

Singular Value Decomposition

The singular value decomposition of a rectangular matrix m involves the computation of orthogonal matrices u and v and a diagonal matrix σ such that

m == u . σ . Transpose[v]

The diagonal elements of the matrix σ are called the singular values of m. In Mathematica, the function SingularValueList computes the singular values and the function SingularValueDecomposition computes the full singular value decomposition.

SingularValueList[m]a list of the nonzero singular values of m
SingularValueDecomposition[m]the singular value decomposition of m
In[1]:=
Click for copyable input
Out[2]=
In[3]:=
Click for copyable input
Out[4]=
This is a factorization so the original matrix can be restored.
In[5]:=
Click for copyable input
Out[5]//MatrixForm=
Because the matrices U and V are orthogonal, they can be used as an orthogonal transformation of the original matrix to generate the diagonal matrix with the singular values on the diagonal.
In[6]:=
Click for copyable input
Out[6]//MatrixForm=
If the matrix is singular then some of the singular values will be zero.
In[7]:=
Click for copyable input
Out[9]//MatrixForm=
SingularValueList only returns the nonzero singular values.
In[10]:=
Click for copyable input
Out[10]=

Note that if the matrix is complex, the definition of the singular value decomposition uses the conjugate transpose.

There are many applications of the singular value decomposition. The singular values of a matrix A give information on the linear transformation represented by A. For example, the action of A on a unit sphere generates an ellipsoid with semi-axes given by the singular values. The singular values can be used to compute the rank of a matrix: the number of nonzero singular values equals the rank. The 2-norm of a matrix is equal to its largest singular value, and the columns of V that correspond to zero singular values span the null space of the matrix.
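A minimal sketch of these uses, with a hypothetical rank-deficient 3×3 matrix:

    a = N[{{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}];
    {u, w, v} = SingularValueDecomposition[a];
    sv = SingularValueList[a];                (* only the nonzero singular values *)
    rank = Length[sv];                        (* the rank is the number of nonzero singular values *)
    norm2 = First[sv];                        (* the 2-norm is the largest singular value *)
    nullBasis = Transpose[v][[rank + 1 ;;]];  (* the remaining columns of v span the null space *)
    Chop[a.#] & /@ nullBasis                  (* each basis vector is mapped to (numerically) zero *)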

Certain applications of singular values do not require all of the singular values to be computed. Mathematica provides a mechanism for obtaining only some singular values.

SingularValueList[m,k]the largest k singular values of m
SingularValueDecomposition[m,k]the corresponding singular value decomposition of m
SingularValueList[m,-k]the smallest k singular values of m
SingularValueDecomposition[m,-k]the corresponding singular value decomposition of m
This returns the smallest singular value of the input matrix; because it is zero, this demonstrates the matrix is not full rank.
In[11]:=
Click for copyable input
Out[12]=
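For example, a sketch of requesting only the extreme singular values of a hypothetical full-rank matrix:

    a = N[{{2, 1}, {1, 3}}];
    SingularValueList[a, 1]    (* the largest singular value *)
    SingularValueList[a, -1]   (* the smallest singular value *)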

Generalized Singular Values

For an n×p matrix A and an m×p matrix B the generalized singular values are given by the pair of factorizations

A = U D_a X^T and B = V D_b X^T

where U is n×n, V is m×m, and X is p×p; U and V are orthogonal, and X is invertible.

SingularValueList[{a,b}]the generalized singular values of a and b
SingularValueDecomposition[{a,b}]the generalized singular value decomposition of a and b
These are some sample matrices.
In[1]:=
Click for copyable input
This returns the generalized singular values of the matrices a and b.
In[3]:=
Click for copyable input
Out[3]=
This computes the full generalized singular value decomposition of the matrices a and b.
In[4]:=
Click for copyable input
This demonstrates the identity for a.
In[5]:=
Click for copyable input
Out[5]//MatrixForm=
This demonstrates the identity for b.
In[6]:=
Click for copyable input
Out[6]//MatrixForm=
Finally, the singular values are computed by dividing the corresponding diagonal elements.
In[7]:=
Click for copyable input
Out[7]=

Options

The functions SingularValueList and SingularValueDecomposition both take a Tolerance option.

option name    default value
Tolerance    Automatic    the tolerance for treating small singular values as zero

The option controls the size at which singular values are treated as zero. By default, singular values smaller than tol times the largest singular value are dropped, where tol is 100×$MachineEpsilon for machine-number matrices; for arbitrary-precision matrices, the tolerance is determined by the precision p of the matrix.
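For example, a sketch of the effect for a machine-precision matrix with one very small singular value (the matrix is purely illustrative):

    a = DiagonalMatrix[N[{1, 10^-16}]];
    SingularValueList[a]                  (* the tiny value falls below the default tolerance and is dropped *)
    SingularValueList[a, Tolerance -> 0]  (* with zero tolerance both singular values are returned *)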

The smallest singular value of this matrix is just larger than the default setting for the tolerance. It is not treated as being equivalent to a zero singular value.
In[1]:=
Click for copyable input
Out[2]=
Increasing the setting for the tolerance causes the small singular value to be treated as zero.
In[3]:=
Click for copyable input
Out[3]=

Schur Decomposition

The Schur decomposition of a square matrix A involves finding a unitary matrix Q that can be used for a similarity transformation of A to form a block upper triangular matrix T with 1×1 and 2×2 blocks on the diagonal (the 2×2 blocks correspond to complex conjugate pairs of eigenvalues for a real matrix A). A block upper triangular matrix of this form is called upper quasi-triangular.

The Schur decomposition always exists, and so the similarity transformation of A to upper quasi-triangular form always exists. This contrasts with the similarity transformation used to diagonalize a matrix from its eigensystem, which does not always exist.

SchurDecomposition[m]the Schur decomposition of a matrix
This computes the Schur decomposition of a sample matrix. The result is a list of the two matrices.
In[1]:=
Click for copyable input
Out[3]=
The matrix q can be used for a similarity transformation of the original matrix to generate the upper triangular result t.
In[4]:=
Click for copyable input
Out[4]//MatrixForm=
For this particular matrix, a similarity transformation that generates an even simpler form can be found, because the matrix can be diagonalized.
In[5]:=
Click for copyable input
Out[7]//MatrixForm=
This matrix cannot be diagonalized because the matrix of eigenvectors is singular.
In[8]:=
Click for copyable input
Out[10]=
However, the Schur decomposition can be found and the matrix transformed to an upper triangular form.
In[11]:=
Click for copyable input
Out[12]=
In this example, the matrix has complex eigenvalues.
In[13]:=
Click for copyable input
Out[14]=
Now the resulting matrix has a 2×2 block on the diagonal.
In[15]:=
Click for copyable input
Out[16]=
The matrix q can be used for a similarity transformation of the original matrix to generate the upper quasi-triangular result. Note that an upper triangular result (with 1×1 blocks on the diagonal), which may involve complex numbers, can be obtained by setting the option RealBlockForm to False. When the result is upper triangular, the eigenvalues of the matrix are always found on the diagonal.
In[17]:=
Click for copyable input
Out[17]//MatrixForm=
Note that if the matrix is complex the definition of the Schur decomposition uses the conjugate transpose and returns an upper triangular result. This is demonstrated for the following complex matrix.
In[18]:=
Click for copyable input
Out[20]=
This demonstrates that the result satisfies the definition of the Schur decomposition.
In[21]:=
Click for copyable input
Out[21]//MatrixForm=
The diagonal of t contains the eigenvalues of the original matrix.
In[22]:=
Click for copyable input
Out[22]=
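A minimal sketch of these checks, using a hypothetical complex 2×2 matrix:

    m = N[{{1 + I, 2}, {3, 1 - I}}];
    {q, t} = SchurDecomposition[m];
    Chop[q.t.ConjugateTranspose[q] - m]  (* zero matrix: the definition is satisfied *)
    Diagonal[t]                          (* compare with Eigenvalues[m] *)
    Eigenvalues[m]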

Generalized Schur Decomposition

For n×n matrices A and B, the generalized Schur decomposition is defined by the pair of factorizations

A = Q S Z^T and B = Q T Z^T

where Q and Z are unitary, T is upper triangular, and S is upper quasi-triangular.

SchurDecomposition[{a,b}]the generalized Schur decomposition of a and b
These are some sample matrices.
In[1]:=
Click for copyable input
This returns the generalized Schur decomposition.
In[3]:=
Click for copyable input
Out[3]=
This demonstrates the results are consistent with the definition of the decomposition.
In[4]:=
Click for copyable input
Out[4]//MatrixForm=
In[5]:=
Click for copyable input
Out[5]//MatrixForm=

For real input, an upper triangular result (which may involve complex numbers) can be obtained with the option RealBlockForm->False.

Options

SchurDecomposition takes two options.

option name    default value
Pivoting    False    whether to carry out pivoting and scaling
RealBlockForm    True    whether to return 2×2 real blocks for complex eigenvalues

Options for SchurDecomposition.

The Pivoting option can be used to allow pivoting and scaling to improve the quality of the result. When it is set to True, pivoting and scaling may be used, and a matrix d that represents the changes made to the input matrix is returned. With this extra matrix the definition of the Schur decomposition can be seen as follows.

A = d Q T Q^T d^(-1)

The use of pivoting and scaling is now demonstrated for the following matrix. When Pivoting is set to True, pivoting and scaling are used if necessary, and an extra matrix d (which here only represents scaling) is returned.
In[1]:=
Click for copyable input
Out[3]=
This demonstrates the transformation to an upper triangular form.
In[4]:=
Click for copyable input
Out[4]//MatrixForm=
The diagonal elements of t are the eigenvalues of the original matrix.
In[5]:=
Click for copyable input
Out[5]=
When the Schur decomposition is computed without pivoting and scaling, the diagonal elements of t are not as close to the eigenvalues of the original matrix. This demonstrates the utility of the Pivoting option.
In[6]:=
Click for copyable input
Out[7]//MatrixForm=
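A minimal sketch of this usage, under the assumption stated above that with Pivoting->True the result is a list {q, t, d} satisfying m == d.q.t.ConjugateTranspose[q].Inverse[d]:

    m = N[{{1, 10^6}, {10^-6, 2}}];  (* a hypothetical badly scaled matrix *)
    {q, t, d} = SchurDecomposition[m, Pivoting -> True];
    Chop[d.q.t.ConjugateTranspose[q].Inverse[d] - m]  (* zero matrix *)
    Diagonal[t]                                       (* compare with Eigenvalues[m] *)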
The RealBlockForm option controls the generation of the upper quasi-triangular result. If it is set to True, a result that may have 2×2 blocks on the diagonal is generated. If it is set to False, the result is upper triangular with 1×1 blocks on the diagonal (but may involve complex numbers). This is demonstrated for the following real matrix, which has complex eigenvalues and a Schur decomposition with 2×2 blocks on the diagonal.
In[8]:=
Click for copyable input
Out[9]=
Setting RealBlockForm to False generates a matrix t that is upper triangular; both q and t are complex.
In[10]:=
Click for copyable input
Out[11]=
This generates the upper triangular result t.
In[12]:=
Click for copyable input
Out[12]//MatrixForm=
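For example, a sketch of the two forms for a hypothetical real matrix with complex eigenvalues:

    m = N[{{0, -1}, {1, 0}}];
    SchurDecomposition[m][[2]] // MatrixForm  (* a 2×2 real block on the diagonal *)
    {q, t} = SchurDecomposition[m, RealBlockForm -> False];
    t // MatrixForm                           (* complex upper triangular, with the eigenvalues I and -I on the diagonal *)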

Jordan Decomposition

The Jordan decomposition of a square matrix A involves finding a nonsingular matrix S that can be used for a similarity transformation of A to generate a matrix J (known as the Jordan form of A) that has a particularly simple triangular structure, so that A = S J S^(-1).

The Jordan decomposition always exists, but it cannot be computed reliably with floating-point arithmetic. Computation with exact arithmetic avoids these problems.

JordanDecomposition[m]the Jordan decomposition of a matrix
This demonstrates the Jordan form for a sample matrix.
In[1]:=
Click for copyable input
Out[3]//MatrixForm=
The Jordan form has the eigenvalues of the matrix along its diagonal. Defective eigenvalues (those without a complete set of eigenvectors) are grouped into Jordan blocks, marked by 1s just above the diagonal. The Jordan form of the above matrix is shown here.
In[4]:=
Click for copyable input
Out[4]//MatrixForm=
The Jordan form shows that there are two eigenvalues: -1 and 2. The eigenvalue -1 is repeated twice and has a complete set of eigenvectors. The eigenvalue 2 is repeated four times: it appears once in a 1×1 block with its own eigenvector, and then in a 3×3 Jordan block that contributes only one eigenvector. This is demonstrated when the eigensystem of the matrix is computed.
In[5]:=
Click for copyable input
Out[5]=
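A minimal sketch with a small exact matrix (chosen only for illustration) that has a defective eigenvalue:

    m = {{2, 1, 0}, {0, 2, 0}, {0, 0, 3}};
    {s, j} = JordanDecomposition[m];
    j // MatrixForm      (* a 2×2 Jordan block for the eigenvalue 2 and a 1×1 block for 3 *)
    s.j.Inverse[s] == m  (* True: the similarity transformation reconstructs m *)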

Functions of Matrices

The computation of functions of matrices is a general problem with applications in many areas such as control theory. The function of a square matrix is not the same as the application of the function to each element in the matrix. Clearly, element-wise application would not maintain properties consistent with the application of the function to a scalar.

For example, each element of the following matrix is raised to the zero power.
In[1]:=
Click for copyable input
Out[2]=
However, a much better result would be the identity matrix. Mathematica has a function for raising a matrix to a power, and when a matrix is raised to the zero power the result is the identity matrix.
In[3]:=
Click for copyable input
Out[3]=
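A minimal sketch of the comparison with a hypothetical 2×2 matrix:

    m = {{1, 2}, {3, 4}};
    m^0                (* element-wise power: every entry becomes 1 *)
    MatrixPower[m, 0]  (* the identity matrix *)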

There are a number of ways to define functions of matrices; one useful way is to consider a series expansion. For the exponential function this works as follows.

exp(A) = I + A + A^2/2! + A^3/3! + ...

One way to compute this series involves diagonalizing A, so that A = V D V^(-1) and A^n = V D^n V^(-1). Therefore, the exponential of A can be computed as follows.

exp(A) = V exp(D) V^(-1)

This technique can be generalized to other functions by applying the function to the eigenvalues of A on the diagonal of D. Note that while this is one way to define functions of matrices, it does not provide a good way to compute them.
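A sketch of that computation, using a hypothetical diagonalizable 2×2 matrix (as noted, this is only for exposition, not an efficient method):

    a = N[{{1, 2}, {0, 3}}];
    {vals, vecs} = Eigensystem[a];
    v = Transpose[vecs];  (* the columns of v are the eigenvectors *)
    Chop[v.DiagonalMatrix[Exp[vals]].Inverse[v] - MatrixExp[a]]  (* zero matrix *)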

Mathematica does not have a function for computing general functions of matrices, but it has some specific functions.

MatrixPower[m,n]n^(th) matrix power
MatrixExp[m]matrix exponential
m1.m2 matrix multiplication
Inverse[m]matrix inverse
Here is a sample matrix.
In[4]:=
Click for copyable input
This raises the matrix to the fourth power.
In[5]:=
Click for copyable input
Out[5]=
The result is equivalent to squaring the square of the matrix.
In[6]:=
Click for copyable input
Out[6]=
This computes the exponential of the matrix.
In[7]:=
Click for copyable input
Out[7]=
It is equivalent to the computation that uses the eigensystem of the matrix. (It should be noted that this is not an efficient way to compute a function of a matrix; the example here is only for exposition.)
In[8]:=
Click for copyable input
Out[11]=

A technique for computing parametrized functions of matrices by solving differential equations is given in "Examples: Matrix Functions with NDSolve".
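The idea can be sketched as follows (this is not the code from that example): the parametrized exponential MatrixExp[a t] satisfies the matrix differential equation x'(t) == a.x(t) with x(0) equal to the identity matrix, so it can be recovered with NDSolve; the variable names here are illustrative.

    a = N[{{0, 1}, {-1, 0}}];
    sol = NDSolveValue[{x'[t] == a.x[t], x[0] == IdentityMatrix[2]}, x, {t, 0, 1}];
    Max[Abs[sol[1] - MatrixExp[a]]]  (* small: NDSolve reproduces MatrixExp[a] at t == 1 *)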
