This is documentation for Mathematica 5.2, which was
based on an earlier version of the Wolfram Language.

## Methods

LinearSolve provides a number of different techniques for solving matrix equations, each specialized to a particular class of problems. You can select between them with the option Method, which gives a uniform interface to all the functionality that Mathematica provides for solving matrix equations.

The default setting of the option Method is Automatic. As is typical for Mathematica, this means that the system will make an automatic choice of method to use. For LinearSolve if the input matrix is numerical and dense, then a method using LAPACK routines is used; if it is numerical and sparse, a multifrontal direct solver is used. If the matrix is symbolic then specialized symbolic routines are used.

The details of the different methods are now described.

### LAPACK

LAPACK is the default method for solving dense numerical matrices. When the matrix is square and non-singular the routines dgesv, dlange, and dgecon are used for real matrices and zgesv, zlange, and zgecon for complex matrices. When the matrix is non-square or singular dgelss is used for real matrices and zgelss for complex matrices. More information on LAPACK is available in the references section.

If the input matrix uses arbitrary-precision numbers, then LAPACK algorithms extended for arbitrary-precision computation are used.
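As an illustrative sketch (the matrix and right-hand side here are not from the original documentation), a dense machine-precision system is solved via the LAPACK path described above:

```mathematica
(* dense machine-precision system; the default method dispatches to LAPACK *)
m = {{1., 2.}, {3., 4.}};
b = {5., 6.};
x = LinearSolve[m, b];
m . x - b  (* residual; should be (near) zero *)
```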

### Multifrontal

The Multifrontal method is a direct solver used by default if the input matrix is sparse; it can also be selected by specifying a method option.

This reads in a sample sparse matrix stored in the Matrix Market format. More information on the importing of matrices is in the section Import and Export of Sparse Matrices.

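The original input is not shown; a sketch of such an import might look like the following (the file name is hypothetical):

```mathematica
(* import a sparse matrix from a hypothetical Matrix Market file *)
s = Import["sample.mtx", "MTX"];
Dimensions[s]
```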

This solves the matrix equation using the sample matrix.

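A self-contained sketch of such a solve (the tridiagonal matrix here is an illustrative stand-in for the imported sample):

```mathematica
(* illustrative sparse system solved with the multifrontal direct solver *)
s = SparseArray[
   {{i_, i_} -> 4., {i_, j_} /; Abs[i - j] == 1 -> -1.}, {1000, 1000}];
b = Table[1., {1000}];
x = LinearSolve[s, b, Method -> "Multifrontal"];
Max[Abs[s . x - b]]  (* residual; should be small *)
```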

If the input matrix to the Multifrontal method is dense, it is converted to a sparse matrix.

The implementation of the Multifrontal method uses the UMFPACK library.

### Krylov

The Krylov method is an iterative solver suitable for large sparse linear systems, such as those arising from the numerical solution of partial differential equations (PDEs). Mathematica implements two Krylov methods: Conjugate Gradient (for symmetric positive definite matrices) and BiCGSTAB (for non-symmetric systems). The method used, along with a number of other parameters, can be set with the appropriate sub-options.

Sub-options of the Krylov method.

The default Krylov method, BiCGSTAB, is more expensive per iteration but applies to general systems. The ConjugateGradient method is designed for symmetric positive definite systems; for these it always converges to a solution (though convergence may be slow). If the matrix is not symmetric positive definite, ConjugateGradient may fail to converge. In the following example, the matrix is not symmetric positive definite and the ConjugateGradient method does not converge.

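The original input is not shown; a sketch of such a failure, using an illustrative non-symmetric matrix, might look as follows (the solver may issue a message that the iteration did not converge):

```mathematica
(* a non-symmetric matrix; ConjugateGradient is not guaranteed to converge *)
m = N[{{1, 2, 3}, {4, 5, 6}, {7, 8, 10}}];
b = {1., 1., 1.};
LinearSolve[m, b, Method -> {"Krylov", Method -> "ConjugateGradient"}]
```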

The default method, BiCGSTAB, converges and returns the solution.

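A sketch of the corresponding BiCGSTAB solve, with the same illustrative matrix:

```mathematica
(* the default Krylov method, BiCGSTAB, handles non-symmetric systems *)
m = N[{{1, 2, 3}, {4, 5, 6}, {7, 8, 10}}];
b = {1., 1., 1.};
x = LinearSolve[m, b, Method -> "Krylov"]
```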

This tests the solution that was found.

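A self-contained sketch of such a test, repeating the illustrative system:

```mathematica
(* test the solution: the residual m.x - b should be (near) zero *)
m = N[{{1, 2, 3}, {4, 5, 6}, {7, 8, 10}}];
b = {1., 1., 1.};
x = LinearSolve[m, b, Method -> "Krylov"];
Chop[m . x - b]
```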

Here is an example that shows the benefit of the Krylov method and the use of a preconditioner. First, a function that builds a structured banded sparse matrix is defined.


This demonstrates the structure of the matrices generated by this function.


This builds a larger system, with a 10000×10000 matrix.

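The original definitions are not shown; one plausible sketch follows (the function name and the band structure are assumptions, chosen to resemble a discretized 2D Laplacian):

```mathematica
(* hypothetical builder for a banded sparse matrix with bandwidth k *)
makeBanded[n_, k_] := SparseArray[
   {{i_, i_} -> 4.,
    {i_, j_} /; Abs[i - j] == 1 -> -1.,
    {i_, j_} /; Abs[i - j] == k -> -1.}, {n, n}];

(* show the structure of a small instance *)
MatrixForm[makeBanded[10, 3]]

(* build a 10000 x 10000 system *)
n = 10000;
m = makeBanded[n, 100];
b = Table[1., {n}];
```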

This uses the default sparse solver, a multifrontal direct solver.


The Krylov method is faster.


Using the ILU preconditioner is faster still. Currently, the preconditioner only works if the matrix has nonzero diagonal elements.

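The three runs can be sketched together as follows (a hypothetical banded-matrix builder stands in for the original definition, and timings are machine-dependent):

```mathematica
(* hypothetical builder, an assumption; the original is not shown *)
makeBanded[n_, k_] := SparseArray[
   {{i_, i_} -> 4.,
    {i_, j_} /; Abs[i - j] == 1 -> -1.,
    {i_, j_} /; Abs[i - j] == k -> -1.}, {n, n}];
n = 10000;
m = makeBanded[n, 100];
b = Table[1., {n}];

(* default sparse solver: multifrontal direct *)
Timing[x1 = LinearSolve[m, b];]

(* Krylov iterative solver *)
Timing[x2 = LinearSolve[m, b, Method -> "Krylov"];]

(* Krylov with the ILU preconditioner *)
Timing[x3 = LinearSolve[m, b,
    Method -> {"Krylov", "Preconditioner" -> "ILU"}];]
```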

At present, only the ILU preconditioner is built-in. You can still define your own preconditioner by defining a function that is applied to the input matrix. An example that involves solving a diagonal matrix is shown below. First the problem is set up and solved without a preconditioner.


Now a preconditioner function fun is used; this gives a significant improvement in the speed of the computation.

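A sketch of such a setup follows. The diagonal matrix is illustrative, and the form of the preconditioner function fun (a function that receives a vector) is an assumption based on the description above:

```mathematica
(* a large diagonal system *)
n = 100000;
d = N[Range[n]];
m = SparseArray[{{i_, i_} :> N[i]}, {n, n}];
b = Table[1., {n}];

(* solve without a preconditioner *)
Timing[x1 = LinearSolve[m, b, Method -> "Krylov"];]

(* fun applies the inverse of the diagonal, an ideal preconditioner here *)
fun = (#/d) &;
Timing[x2 = LinearSolve[m, b,
    Method -> {"Krylov", "Preconditioner" -> fun}];]
```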

Generally, a problem will not be structured so that it benefits this much from such a simple preconditioner. The example is still useful, however, because it shows how to create and use your own preconditioner.

If the input matrix to the Krylov method is dense, the result is still found because the method is based on matrix/vector multiplication.

The Krylov method can be used to solve systems that are too large for a direct solver. However, it is not a general solver, being particularly suitable for those problems that have some form of diagonal dominance.

### Cholesky

The Cholesky method is suitable for solving symmetric positive definite systems.


The Cholesky method is faster for this matrix.

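A self-contained sketch of such a comparison, using an illustrative symmetric positive definite sparse matrix (timings are machine-dependent):

```mathematica
(* a symmetric positive definite sparse matrix *)
n = 5000;
m = SparseArray[
   {{i_, i_} -> 4., {i_, j_} /; Abs[i - j] == 1 -> -1.}, {n, n}];
b = Table[1., {n}];

(* default method vs. Cholesky *)
Timing[x1 = LinearSolve[m, b];]
Timing[x2 = LinearSolve[m, b, Method -> "Cholesky"];]
```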

The Cholesky method is also more stable. This can be demonstrated with the following matrix.


For this matrix the default LU decomposition method reports a potential error.


The Cholesky method succeeds.

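The original matrix is not shown; an ill-conditioned symmetric positive definite matrix such as a Hilbert matrix illustrates the same point (the LU path may issue a warning about conditioning):

```mathematica
(* Hilbert matrix: symmetric positive definite but ill-conditioned *)
h = N[Table[1/(i + j - 1), {i, 10}, {j, 10}]];
b = Table[1., {10}];
xLU = LinearSolve[h, b];
xChol = LinearSolve[h, b, Method -> "Cholesky"]
```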

For dense matrices the Cholesky method uses LAPACK functions such as dpotrf and dpotrs for real matrices and zpotrf and zpotrs for complex matrices. For sparse matrices the Cholesky method uses the TAUCS library.

### Symbolic Methods

There are a number of methods that are specific to symbolic and exact computation: CofactorExpansion, DivisionFreeRowReduction, and OneStepRowReduction. These can be demonstrated for the following symbolic matrix.


The computation is repeated for all three methods.

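The original matrix is not shown; an illustrative symbolic system demonstrates all three methods (all should return the same solution):

```mathematica
(* an illustrative symbolic matrix and right-hand side *)
m = {{a, b}, {c, d}};
v = {p, q};
LinearSolve[m, v, Method -> "CofactorExpansion"]
LinearSolve[m, v, Method -> "DivisionFreeRowReduction"]
LinearSolve[m, v, Method -> "OneStepRowReduction"]
```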