
3.7.8 Solving Linear Systems

Many calculations involve solving systems of linear equations. In many cases, you will find it convenient to write down the equations explicitly, and then solve them using Solve.

In some cases, however, you may prefer to convert the system of linear equations into a matrix equation, and then apply matrix manipulation operations to solve it. This approach is often useful when the system of equations arises as part of a general algorithm, and you do not know in advance how many variables will be involved.

A system of linear equations can be stated in matrix form as m . x == b, where x is the vector of variables.

Note that if your system of equations is sparse, so that most of the entries in the matrix are zero, then you will usually find it much more efficient to use Solve to work directly with your original equations, without ever converting them to matrix form.

LinearSolve[m, b]   give a vector x which solves the matrix equation m . x == b
NullSpace[m]        give a list of basis vectors whose linear combinations satisfy m . x == 0

Functions for solving linear systems.

Here is a matrix.

In[1]:= m = {{1, 5}, {2, 1}}

Out[1]= {{1, 5}, {2, 1}}

This gives two linear equations.

In[2]:= m . {x, y} == {a, b}

Out[2]= {x + 5 y, 2 x + y} == {a, b}

You can use Solve directly to solve these equations.

In[3]:= Solve[ %, {x, y} ]

Out[3]= {{x -> (-a + 5 b)/9, y -> (2 a - b)/9}}

You can also get the vector of solutions by calling LinearSolve. The result is equivalent to the one you get from Solve.

In[4]:= LinearSolve[m, {a, b}]

Out[4]= {(-a + 5 b)/9, (2 a - b)/9}

Another way to solve the equations is to invert the matrix m, and then multiply {a, b} by the inverse. This is not as efficient as using LinearSolve.

In[5]:= Inverse[m] . {a, b}

Out[5]= {-(a/9) + (5 b)/9, (2 a)/9 - b/9}

RowReduce performs a version of Gaussian elimination and can also be used to solve the equations.

In[6]:= RowReduce[{{1, 5, a}, {2, 1, b}}]

Out[6]= {{1, 0, (-a + 5 b)/9}, {0, 1, (2 a - b)/9}}

If you have a square matrix m with a non-zero determinant, then you can always find a unique solution x to the matrix equation m . x == b for any b. If, however, the matrix m has determinant zero, then there may be either no vector, or an infinite number of vectors x which satisfy m . x == b for a particular b. This occurs when the linear equations embodied in m are not independent.

When m has determinant zero, it is nevertheless always possible to find non-zero vectors x that satisfy m . x == 0. The set of vectors x satisfying this equation form the null space or kernel of the matrix m. Any of these vectors can be expressed as a linear combination of a particular set of basis vectors, which can be obtained using NullSpace[m].

Here is a simple matrix, corresponding to two identical linear equations.

In[7]:= m = {{1, 2}, {1, 2}}

Out[7]= {{1, 2}, {1, 2}}

The matrix has determinant zero.

In[8]:= Det[ m ]

Out[8]= 0

LinearSolve cannot find a solution to the equation m . x == b in this case.

In[9]:= LinearSolve[m, {a, b}]

Out[9]= LinearSolve[{{1, 2}, {1, 2}}, {a, b}]

There is a single basis vector for the null space of m.

In[10]:= NullSpace[ m ]

Out[10]= {{-2, 1}}

Multiplying the basis vector for the null space by m gives the zero vector.

In[11]:= m . %[[1]]

Out[11]= {0, 0}

Here is a simple symbolic matrix with determinant zero.

In[12]:= m = {{a, b, c}, {2 a, 2 b, 2 c}, {3 a, 3 b, 3 c}}

Out[12]= {{a, b, c}, {2 a, 2 b, 2 c}, {3 a, 3 b, 3 c}}

The basis for the null space of m contains two vectors. Any linear combination of these vectors gives zero when multiplied by m.

In[13]:= NullSpace[ m ]

Out[13]= {{-(c/a), 0, 1}, {-(b/a), 1, 0}}

An important feature of LinearSolve and NullSpace is that they work with rectangular, as well as square, matrices.

When you represent a system of linear equations by a matrix equation of the form m . x == b, the number of columns in m gives the number of variables, and the number of rows gives the number of equations. There are a number of cases.

underdetermined     number of independent equations less than the number of variables; no solutions or many solutions may exist
overdetermined      number of independent equations more than the number of variables; solutions may or may not exist
nonsingular         number of independent equations equal to the number of variables, and determinant non-zero; a unique solution exists
consistent          at least one solution exists
inconsistent        no solutions exist

Classes of linear systems represented by rectangular matrices.

This asks for the solution to the inconsistent set of equations x == 1 and x == 0.

In[14]:= LinearSolve[{{1}, {1}}, {1, 0}]

Out[14]= LinearSolve[{{1}, {1}}, {1, 0}]

This matrix represents two equations, for three variables.

In[15]:= m = {{1, 3, 4}, {2, 1, 3}}

Out[15]= {{1, 3, 4}, {2, 1, 3}}

LinearSolve gives one of the possible solutions to this underdetermined set of equations.

In[16]:= v = LinearSolve[m, {1, 1}]

Out[16]= {2/5, 1/5, 0}

When a matrix represents an underdetermined system of equations, the matrix has a non-trivial null space. In this case, the null space is spanned by a single vector.

In[17]:= NullSpace[m]

Out[17]= {{-1, -1, 1}}

If you take the solution you get from LinearSolve, and add any linear combination of the basis vectors for the null space, you still get a solution.

In[18]:= m . (v + 4 %[[1]])

Out[18]= {1, 1}

You can find out the number of redundant equations corresponding to a particular matrix by evaluating Length[NullSpace[m]]. Subtracting this quantity from the number of columns in m gives the rank of the matrix m.
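As an illustrative sketch (not one of the numbered examples above), here is this rank computation for the underdetermined matrix m = {{1, 3, 4}, {2, 1, 3}} used earlier.

m = {{1, 3, 4}, {2, 1, 3}};
nullity = Length[ NullSpace[m] ]      (* dimension of the null space; here 1 *)
rank = Length[ First[m] ] - nullity   (* number of columns minus the nullity; here 2 *)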

LUDecomposition[m]             generate LU decomposition data for the matrix m
LUBackSubstitution[data, b]    use LU decomposition data to solve the equation m . x == b

Efficient repeated solution of linear systems.

If you have a large number of numerical matrix equations to solve of the form m . x == b, all with the same m, then you can often save time by first preprocessing m using LUDecomposition[m], and then taking the data you get and repeatedly applying LUBackSubstitution[data, b] for each b.
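For example, a minimal sketch of this approach (the particular numerical matrix and right-hand sides here are illustrative, not taken from the examples above):

data = LUDecomposition[ N[{{1, 5}, {2, 1}}] ];   (* preprocess the matrix once *)
LUBackSubstitution[ data, {1., 0.} ]             (* solve m . x == {1., 0.} *)
LUBackSubstitution[ data, {0., 1.} ]             (* reuse the same data for a different b *)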
