# Eigenvectors

Eigenvectors[m]

gives a list of the eigenvectors of the square matrix m.

Eigenvectors[{m,a}]

gives the generalized eigenvectors of m with respect to a.

Eigenvectors[m,k]

gives the first k eigenvectors of m.

Eigenvectors[{m,a},k]

gives the first k generalized eigenvectors.

# Details and Options

• Eigenvectors finds numerical eigenvectors if m contains approximate real or complex numbers.
• For approximate numerical matrices m, the eigenvectors are normalized.
• For exact or symbolic matrices m, the eigenvectors are not normalized.
• Eigenvectors corresponding to degenerate eigenvalues are chosen to be linearly independent.
• For an n×n matrix, Eigenvectors always returns a list of length n. The list contains each of the independent eigenvectors of the matrix, supplemented if necessary with an appropriate number of zero vectors.
• Eigenvectors with numeric eigenvalues are sorted in order of decreasing absolute value of their eigenvalues.
• Eigenvectors[m,spec] is equivalent to Take[Eigenvectors[m],spec].
• Eigenvectors[m,UpTo[k]] gives k eigenvectors, or as many as are available.
• SparseArray objects and structured arrays can be used in Eigenvectors.
• Eigenvectors has the following options and settings:
| Option | Default | Description |
|---|---|---|
| Cubics | False | whether to use radicals to solve cubics |
| Method | Automatic | method to use |
| Quartics | False | whether to use radicals to solve quartics |
| ZeroTest | Automatic | test to determine when expressions are zero |
• The ZeroTest option only applies to exact and symbolic matrices.
• Explicit Method settings for approximate numeric matrices include:
| Method | Description |
|---|---|
| "Arnoldi" | Arnoldi iterative method for finding a few eigenvalues |
| "Banded" | direct banded matrix solver for Hermitian matrices |
| "Direct" | direct method for finding all eigenvalues |
| "FEAST" | FEAST iterative method for finding eigenvalues in an interval (applies to Hermitian matrices only) |
• The "Arnoldi" method is also known as a Lanczos method when applied to symmetric or Hermitian matrices.
• The "Arnoldi" and "FEAST" methods take suboptions Method->{"name",opt1->val1,…}, which can be found in the Method subsection.

# Examples


## Basic Examples(4)

Machine-precision numerical eigenvectors:
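A minimal sketch, using an assumed machine-precision 2×2 matrix:

```wolfram
(* approximate real entries select the numerical path;
   the returned eigenvectors are normalized *)
Eigenvectors[{{1., 2.}, {3., 4.}}]
```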

Eigenvectors of an arbitrary-precision matrix:

Exact eigenvectors:
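A sketch with an assumed exact integer matrix; exact results are left unnormalized:

```wolfram
(* exact input gives exact, unnormalized eigenvectors *)
Eigenvectors[{{2, 1}, {1, 2}}]
```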

Symbolic eigenvectors:

## Scope(18)

### Basic Uses(5)

Find the eigenvectors of a machine-precision matrix:

Approximate 18-digit precision eigenvectors:

Eigenvectors of a complex matrix:

Exact eigenvectors:

The eigenvectors of large numerical matrices are computed efficiently:

### Subsets of Eigenvectors(5)

Compute the eigenvectors corresponding to the three largest eigenvalues:
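A sketch, assuming a simple diagonal test matrix:

```wolfram
m = DiagonalMatrix[{6., 5., 4., 1., 0.5}];  (* assumed example matrix *)
Eigenvectors[m, 3]  (* vectors for the three largest-magnitude eigenvalues *)
```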

Visualize the three vectors:

Eigenvectors corresponding to the three smallest eigenvalues:

Find the eigenvectors corresponding to the 4 largest eigenvalues, or as many as there are if fewer:
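A sketch with an assumed 2×2 matrix, which has only two eigenvectors:

```wolfram
Eigenvectors[{{1, 2}, {3, 4}}, UpTo[4]]  (* returns the 2 that exist *)
```

UpTo[4] asks for four eigenvectors but does not fail when fewer are available.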

Repeats are considered when extracting a subset of the eigenvalues:

The first two vectors both correspond to the eigenvalue 4:

The third corresponds to the eigenvalue 3:

Zero vectors are used when there are more eigenvalues than independent eigenvectors:
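A classic defective case is the 2×2 Jordan block:

```wolfram
(* eigenvalue 1 has multiplicity 2 but only one independent eigenvector *)
Eigenvectors[{{1, 1}, {0, 1}}]
(* → {{1, 0}, {0, 0}} *)
```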

### Generalized Eigenvalues(4)

Compute machine-precision generalized eigenvectors:
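A sketch with assumed machine-precision matrices m and a:

```wolfram
m = {{1.5, 0.5}, {0.5, 1.5}};  (* assumed example matrices *)
a = {{2., 0.}, {0., 1.}};
Eigenvectors[{m, a}]  (* vectors v solving m.v == λ a.v *)
```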

Generalized exact eigenvectors:

Compute the result at finite precision:

Compute symbolic generalized eigenvectors:

Find the generalized eigenvectors corresponding to the two smallest generalized eigenvalues:

### Special Matrices(4)

Eigenvectors of sparse matrices:
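A sketch using an assumed tridiagonal SparseArray:

```wolfram
s = SparseArray[
   {Band[{1, 1}] -> 2., Band[{1, 2}] -> -1., Band[{2, 1}] -> -1.},
   {100, 100}];
Eigenvectors[s, 2]  (* a few eigenvectors of a large sparse matrix *)
```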

Eigenvectors of structured matrices:

The eigenvectors of IdentityMatrix form the standard basis for a vector space:
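For example:

```wolfram
Eigenvectors[IdentityMatrix[3]]  (* the standard basis vectors *)
```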

Eigenvectors of HilbertMatrix:

## Options(10)

### Cubics(1)

A 3×3 Vandermonde matrix:

In general, for exact 3×3 matrices the result will be given in terms of Root objects:

To get the result in terms of radicals, use the Cubics option:
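A sketch, assuming an exact 3×3 matrix whose characteristic polynomial is an irreducible cubic:

```wolfram
m = {{1, 2, 3}, {4, 5, 6}, {7, 8, 10}};  (* assumed example *)
Eigenvectors[m, Cubics -> True]  (* radicals instead of Root objects *)
```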

Note that the result with Root objects is better suited to subsequent numerical evaluation:

### Method(8)

#### "Arnoldi"(5)

The Arnoldi method can be used for machine- and arbitrary-precision matrices. The implementation of the Arnoldi method is based on the "ARPACK" library. It is most useful for large sparse matrices.

The following suboptions can be specified for the method "Arnoldi":

| Suboption | Description |
|---|---|
| "BasisSize" | the size of the Arnoldi basis |
| "Criteria" | which criteria to use |
| "MaxIterations" | the maximum number of iterations |
| "Shift" | the Arnoldi shift |
| "StartingVector" | the initial vector to start iterations |
| "Tolerance" | the tolerance used to terminate iterations |
• Possible settings for "Criteria" include:

| Setting | Description |
|---|---|
| "Magnitude" | based on Abs |
| "RealPart" | based on Re |
| "ImaginaryPart" | based on Im |
| "BothEnds" | a few eigenvalues from both ends of the symmetric real matrix spectrum |
Compute the largest eigenvectors of a matrix m using different "Criteria" settings:

By default, "Criteria"->"Magnitude" selects an eigenvector corresponding to a largest-magnitude eigenvalue:

Find an eigenvector corresponding to a largest real-part eigenvalue:

Find an eigenvector corresponding to a largest imaginary-part eigenvalue:

Find two eigenvectors from both ends of the matrix spectrum:

Use "StartingVector" to avoid randomness:
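A sketch with an assumed sparse matrix and a fixed all-ones starting vector:

```wolfram
m = SparseArray[
   {Band[{1, 1}] -> 2., Band[{1, 2}] -> -1., Band[{2, 1}] -> -1.},
   {50, 50}];
Eigenvectors[m, 1,
 Method -> {"Arnoldi", "StartingVector" -> ConstantArray[1., 50]}]
```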

Different starting vectors may converge to different eigenvectors:

Use "Shift"->μ to shift the eigenvalues by transforming the matrix m to m-μ1, where 1 is the identity matrix. This preserves the eigenvectors but changes each eigenvalue by -μ. The method compensates for the changed eigenvalues. "Shift" is typically used to find eigenpairs when there is no criterion, such as largest or smallest magnitude, that can select them:

Manually shift the matrix to get the eigenvector:

Automatically shift to get the eigenvectors:

#### "Banded"(1)

The banded method can be used for real symmetric or complex Hermitian machine-precision matrices. The method is most useful for finding all eigenvectors.

Compute the two largest eigenvectors for a banded matrix:

#### "FEAST"(2)

The FEAST method can be used for real symmetric or complex Hermitian machine-precision matrices. The method is most useful for finding eigenvectors in a given interval.

The following suboptions can be specified for the method "FEAST":

| Suboption | Description |
|---|---|
| "ContourPoints" | select the number of contour points |
| "Interval" | interval for finding eigenvalues |
| "MaxIterations" | the maximum number of refinement loops |
| "NumberOfRestarts" | the maximum number of restarts |
| "SubspaceSize" | the initial size of subspace |
| "Tolerance" | the tolerance to terminate refinement |
| "UseBandedSolver" | whether to use a banded solver |
Compute eigenvectors corresponding to eigenvalues in a specified interval:

Use "Interval" to specify the interval:
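A sketch, assuming a symmetric tridiagonal machine-precision matrix whose spectrum lies in (0, 4):

```wolfram
m = SparseArray[
   {Band[{1, 1}] -> 2., Band[{1, 2}] -> -1., Band[{2, 1}] -> -1.},
   {100, 100}];
Eigenvectors[m, Method -> {"FEAST", "Interval" -> {3.5, 4.}}]
```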

### Quartics(1)

A 4×4 matrix:

In general, for a 4×4 matrix, the result will be given in terms of Root objects:

You can get the result in terms of radicals using the Cubics and Quartics options:

## Applications(2)

The eigenvectors of a 3×3 matrix m:

Diagonalize m:

The eigenvalues of a nondiagonalizable matrix:

Find the dimension of the span of all the eigenvectors:

Estimate the probability that a random 4×4 matrix of ones and zeros is not diagonalizable:

## Properties & Relations(2)

Compute the eigenvectors for a random symmetric matrix:

The numerical eigenvectors are orthonormal to the precision of the computation:

Diagonalization of the matrix r:

The diagonal elements are essentially the same as the eigenvalues:

The first eigenvector of a random matrix:

The position of the largest component in v:

Compute the eigenvalue corresponding to eigenvector v:

## Possible Issues(6)

Not all matrices have a complete set of eigenvectors:

Use JordanDecomposition for exact computation:
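A sketch for a defective matrix (the 2×2 Jordan block is an assumed example):

```wolfram
m = {{1, 1}, {0, 1}};  (* assumed defective example *)
{s, j} = JordanDecomposition[m];  (* m == s.j.Inverse[s] *)
j  (* the Jordan form exposes the eigenvalue structure exactly *)
```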

Use SchurDecomposition for numeric computation:

The general symbolic case quickly gets very complicated:

The expression sizes increase faster than exponentially:

Construct a 10,000×10,000 sparse matrix:

The eigenvector matrix is a dense matrix, and too large to represent:

Computing the few eigenvectors corresponding to the largest eigenvalues is much easier:

When eigenvalues are closely grouped, the iterative method for sparse matrices may not converge:

The iteration has not converged well after 1000 iterations:

You can give the algorithm a shift near an expected eigenvalue to speed up convergence:

Generalized exact eigenvalues and eigenvectors cannot be computed for some matrices:

When an eigenvector cannot be determined, a zero vector is returned:

Eigenvectors and Eigenvalues are not absolutely guaranteed to give results in corresponding order:

The sixth and seventh eigenvalues are essentially equal and opposite:

In this particular case, the seventh eigenvector does not correspond to the seventh eigenvalue:

Instead it corresponds to the sixth eigenvalue:

Use Eigensystem[mat] to ensure corresponding results always match:
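A sketch of the matched-pair guarantee, with an assumed matrix:

```wolfram
mat = {{2., 1.}, {1., 2.}};  (* assumed example *)
{vals, vecs} = Eigensystem[mat];
(* for each i, mat.vecs[[i]] == vals[[i]] vecs[[i]] holds to machine precision *)
```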

## Neat Examples(1)

The first four eigenvectors of a banded matrix:

A plot of the first four eigenvectors: