# Eigensystem

Eigensystem[m]

gives a list {values,vectors} of the eigenvalues and eigenvectors of the square matrix m.

Eigensystem[{m,a}]

gives the generalized eigenvalues and eigenvectors of m with respect to a.

Eigensystem[m,k]

gives the first k eigenvalues of m, with their corresponding eigenvectors.

Eigensystem[{m,a},k]

gives the first k generalized eigenvalues and eigenvectors.

# Details and Options

• Eigensystem finds numerical eigenvalues and eigenvectors if m contains approximate real or complex numbers.
• For approximate numerical matrices m, the eigenvectors are normalized.
• For exact or symbolic matrices m, the eigenvectors are not normalized.
• All the nonzero eigenvectors given are independent. If the number of eigenvectors is equal to the number of nonzero eigenvalues, then corresponding eigenvalues and eigenvectors are given in corresponding positions in their respective lists. The eigenvalues correspond to rows in the eigenvector matrix.
• If there are more eigenvalues than independent eigenvectors, then each extra eigenvalue is paired with a vector of zeros. »
• If they are numeric, eigenvalues are sorted in order of decreasing absolute value.
• The eigenvalues and eigenvectors satisfy the matrix equation m.Transpose[vectors]==Transpose[vectors].DiagonalMatrix[values]. »
• The generalized finite eigenvalues and eigenvectors satisfy m.Transpose[vectors]==a.Transpose[vectors].DiagonalMatrix[values].
• {vals,vecs}=Eigensystem[m] can be used to set vals and vecs to be the eigenvalues and eigenvectors, respectively. »
• Eigensystem[m,spec] is equivalent to applying Take[…,spec] to each element of Eigensystem[m].
• Eigensystem[m,UpTo[k]] gives k eigenvalues and corresponding eigenvectors, or as many as are available.
• SparseArray objects and structured arrays can be used in Eigensystem.
• Eigensystem has the following options and settings:
| option | default value | description |
|---|---|---|
| Cubics | False | whether to use radicals to solve cubics |
| Method | Automatic | select a method to use |
| Quartics | False | whether to use radicals to solve quartics |
| ZeroTest | Automatic | test for when expressions are zero |
• The ZeroTest option only applies to exact and symbolic matrices.
• Explicit Method settings for approximate numeric matrices include:
| method | description |
|---|---|
| "Arnoldi" | Arnoldi iterative method for finding a few eigenvalues |
| "Banded" | direct banded matrix solver for Hermitian matrices |
| "Direct" | direct method for finding all eigenvalues |
| "FEAST" | FEAST iterative method for finding eigenvalues in an interval (applies to Hermitian matrices only) |
• The "Arnoldi" method is also known as a Lanczos method when applied to symmetric or Hermitian matrices.
• The "Arnoldi" and "FEAST" methods take suboptions Method->{"name",opt1->val1,…}, which can be found in the Method subsection.
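The defining relations above can be checked directly. A minimal sketch, using a small exact matrix chosen purely for illustration:

```wolfram
m = {{4, 1}, {2, 3}};           (* illustrative exact matrix *)
{vals, vecs} = Eigensystem[m];

(* rows of vecs are eigenvectors, so the identity uses Transpose *)
m . Transpose[vecs] == Transpose[vecs] . DiagonalMatrix[vals]
(* True *)
```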

# Examples


## Basic Examples(5)

Eigenvalues and eigenvectors computed with machine precision:
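The original In/Out cells are not reproduced here; a representative input, with matrix entries chosen for illustration, would be:

```wolfram
(* machine-precision entries trigger numerical methods; the eigenvectors come back normalized *)
Eigensystem[{{4., 1.}, {2., 3.}}]
```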

Eigenvalues and vectors of an arbitrary-precision matrix:

Exact eigenvalues and eigenvectors:

Symbolic eigenvalues and eigenvectors:

Set vals and vecs to be the eigenvalues and eigenvectors, respectively:
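A sketch of the assignment form, with an illustrative matrix:

```wolfram
{vals, vecs} = Eigensystem[{{4, 1}, {2, 3}}];  (* illustrative matrix *)
vals   (* the eigenvalues *)
vecs   (* the corresponding eigenvectors, one per row *)
```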

## Scope(18)

Find the eigensystem of a machine-precision matrix:

Approximate 18-digit precision eigenvalues and eigenvectors:

Eigensystem of a complex matrix:

Exact eigensystem:

The eigenvalues and eigenvectors of large numerical matrices are computed efficiently:

### Subsets of Eigensystems(5)

Compute the three largest eigenvalues and their corresponding eigenvectors:
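For example (the random matrix here is illustrative, not the notebook's original input):

```wolfram
m = RandomReal[{-1, 1}, {6, 6}];     (* illustrative random matrix *)
{vals, vecs} = Eigensystem[m, 3];    (* three largest-magnitude eigenvalues *)
```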

Visualize the three vectors, using the eigenvalue as a label:

Eigensystem corresponding to the three smallest eigenvalues:

Find the eigensystem corresponding to the four largest eigenvalues, or as many as there are if fewer:
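A sketch of the UpTo form, with an illustrative diagonal matrix:

```wolfram
(* a 3×3 matrix has only three eigenpairs, so UpTo[4] returns all three *)
Eigensystem[N[{{1, 0, 0}, {0, 2, 0}, {0, 0, 3}}], UpTo[4]]
```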

Repeated eigenvalues are considered when extracting a subset of the eigensystem:

Zero vectors are used when there are more eigenvalues than independent eigenvectors:

### Generalized Eigenvalues(4)

Compute machine-precision generalized eigenvalues and eigenvectors:
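A representative call (matrices chosen for illustration):

```wolfram
m = {{1., 2.}, {3., 4.}};
a = {{2., 0.}, {0., 1.}};
Eigensystem[{m, a}]   (* generalized pairs: m.x == λ a.x *)
```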

Generalized exact eigensystem:

Compute the result at finite precision:

Compute a symbolic generalized eigensystem:

Find the two smallest generalized eigenvalues and corresponding generalized eigenvectors:

### Special Matrices(4)

Eigensystems of sparse matrices:
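For instance, with an illustrative diagonal SparseArray built using Band:

```wolfram
(* diagonal sparse matrix; the diagonal values are illustrative *)
s = SparseArray[Band[{1, 1}] -> {1., 2., 3., 4., 5.}];
Eigensystem[s]
```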

Eigensystems of structured matrices:

The units of a QuantityArray object are in the eigenvalues, leaving the eigenvectors dimensionless:

The eigenvectors of IdentityMatrix form the standard basis for a vector space:

Eigenvectors of HilbertMatrix:

If the matrix is first numericized, the eigenvectors (but not eigenvalues) change significantly:

## Options(10)

### Cubics(1)

A 3×3 Vandermonde matrix:

In general, for exact 3×3 matrices the result will be given in terms of Root objects:

To get the result in terms of radicals, use the Cubics option:
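A sketch of the option in use, with an illustrative 3×3 matrix:

```wolfram
m = {{1, 1, 1}, {1, 2, 4}, {1, 3, 9}};   (* illustrative 3×3 matrix *)
Eigensystem[m, Cubics -> True]           (* eigenvalues expressed with radicals *)
```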

Note that the result with Root objects is better suited to subsequent numerical evaluation:

### Method(8)

#### "Arnoldi"(5)

The Arnoldi method can be used for machine- and arbitrary-precision matrices. The implementation of the Arnoldi method is based on the "ARPACK" library. It is most useful for large sparse matrices.

The following suboptions can be specified for the method "Arnoldi":

| suboption | description |
|---|---|
| "BasisSize" | the size of the Arnoldi basis |
| "Criteria" | which criteria to use |
| "MaxIterations" | the maximum number of iterations |
| "Shift" | the Arnoldi shift |
| "StartingVector" | the initial vector to start iterations |
| "Tolerance" | the tolerance used to terminate iterations |
• Possible settings for "Criteria" include:

| setting | description |
|---|---|
| "Magnitude" | based on Abs |
| "RealPart" | based on Re |
| "ImaginaryPart" | based on Im |
| "BothEnds" | a few eigenvalues from both ends of the symmetric real matrix spectrum |
Compute the largest eigenpair of a matrix m using different "Criteria" settings.

By default, "Criteria"->"Magnitude" selects a largest-magnitude eigenpair:

Find the largest real-part eigenpair:

Find the largest imaginary-part eigenpair:

Find two eigenpairs from both ends of the symmetric matrix spectrum:

Use "StartingVector" to avoid randomness:

Different starting vectors may converge to different eigenpairs:

Use "Shift"->μ to shift the eigenvalues by transforming the matrix to m-μ IdentityMatrix[n]. This preserves the eigenvectors but changes each eigenvalue by -μ; the method compensates for the shift when returning results. "Shift" is typically used to find eigenpairs when there is no criterion, such as largest or smallest magnitude, that can select them:

Manually shift the matrix and adjust the resulting eigenvalue:

Automatically shift and adjust the eigenvalue:
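The shifted call can be sketched as follows; the matrix and the shift value 4.5 are arbitrary choices for illustration:

```wolfram
m = N[{{10, 0, 0}, {0, 5, 0}, {0, 0, 1}}];   (* illustrative matrix *)
Eigensystem[m, 1, Method -> {"Arnoldi", "Shift" -> 4.5}]
```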

#### "Banded"(1)

The banded method can be used for real symmetric or complex Hermitian machine-precision matrices. The method is most useful for finding all eigenpairs.

Compute the two largest eigenpairs for a banded matrix:

Check the solution:

#### "FEAST"(2)

The FEAST method can be used for real symmetric or complex Hermitian machine-precision matrices. The method is most useful for finding eigenvalues in a given interval.

The following suboptions can be specified for the method "FEAST":

| suboption | description |
|---|---|
| "ContourPoints" | select the number of contour points |
| "Interval" | an interval for finding eigenvalues |
| "MaxIterations" | the maximum number of refinement loops |
| "NumberOfRestarts" | the maximum number of restarts |
| "SubspaceSize" | the initial size of subspace |
| "Tolerance" | the tolerance to terminate refinement |
| "UseBandedSolver" | whether to use a banded solver |
Compute eigenpairs in a given interval:

Use "Interval" to specify the interval:
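A sketch of the call form, assembled from the suboption table above; the matrix and interval are illustrative:

```wolfram
(* symmetric matrix with eigenvalues 1..5; interval end points are excluded *)
s = N[DiagonalMatrix[Range[5]]];
Eigensystem[s, Method -> {"FEAST", "Interval" -> {1.5, 4.5}}]
```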

The end points of the interval are not included; FEAST finds only eigenvalues strictly inside it.

Check the solution:

### Quartics(1)

A 4×4 matrix:

In general, for a 4×4 matrix, the result will be given in terms of Root objects:

You can get the result in terms of radicals using the Cubics and Quartics options:

## Applications(2)

Here is a quadratic form in 3 dimensions:

Plot the corresponding quadric surface:

Get the symmetric matrix for the quadratic form, using CoefficientArrays:

Numerically compute its eigenvalues and eigenvectors:

Show the principal axes of the ellipsoid:

The Lorenz equations:

Find the Jacobian for the right-hand side of the equations:

Find the equilibrium points:

Find the eigenvalues and eigenvectors of the Jacobian at one in the first octant:

A function that integrates backward from a small perturbation of pt in the direction dir:

Show the stable curve for the equilibrium point on the right:

Find the stable curve for the equilibrium point on the left:

Show the stable curves along with a solution of the Lorenz equations:

## Properties & Relations(4)

Any square matrix satisfies its similarity relation:

Any pair of square matrices satisfies the generalized similarity relation for finite eigenvalues:

The eigensystem for a circulant matrix:

Compute the matrix exponential using diagonalization:

Compute the matrix exponential using MatrixExp:
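The comparison can be sketched as follows, with an illustrative exact matrix:

```wolfram
m = {{4, 1}, {2, 3}};                        (* illustrative exact matrix *)
{vals, vecs} = Eigensystem[m];
v = Transpose[vecs];                         (* eigenvectors as columns *)
viaDiag = v . DiagonalMatrix[Exp[vals]] . Inverse[v];
Simplify[viaDiag == MatrixExp[m]]
(* True *)
```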

The eigenvalues and eigenvectors of a random symmetric matrix:

The eigenvectors are all real:

The numerical eigenvectors are orthonormal to the precision of the computation:

## Possible Issues(6)

Not all matrices have a complete set of eigenvectors:
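The classic case is a Jordan block, where the missing eigenvector is reported as a zero vector:

```wolfram
(* eigenvalue 1 has multiplicity 2 but only one independent eigenvector *)
Eigensystem[{{1, 1}, {0, 1}}]
(* {{1, 1}, {{1, 0}, {0, 0}}} *)
```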

Use JordanDecomposition for exact computation:

Use SchurDecomposition for numeric computation:

The general symbolic case very quickly gets very complicated:

The expression sizes increase faster than exponentially:

Construct a 10,000×10,000 sparse matrix:

The eigenvector matrix is dense and too large to represent explicitly:

Computing a few of the largest or smallest eigenvalues, however, is usually possible:

When eigenvalues are closely grouped the iterative method for sparse matrices may not converge:

The iteration has not converged well after 1000 iterations:

You can give the algorithm a shift near the expected value to speed up convergence:

Generalized exact eigenvalues and eigenvectors cannot be computed for some matrices:

When an eigenvector cannot be determined, a zero vector is returned:

The end points of an interval specified for the FEAST method are not included.

Set up a matrix with eigenvalues at 3 and 9 and find the eigenvalues and eigenvectors in the interval {3,9}, which excludes both:

Enlarge the interval so that FEAST finds the eigenvalues 3 and 9 and their corresponding eigenvectors: