PseudoInverse
PseudoInverse[m]
finds the pseudoinverse of a rectangular matrix m.
Details and Options
- PseudoInverse works on both symbolic and numerical matrices.
- For a square matrix, PseudoInverse gives the Moore–Penrose inverse.
- For numerical matrices, PseudoInverse is based on SingularValueDecomposition.
- PseudoInverse[m,Tolerance->t] specifies that singular values smaller than t times the maximum singular value should be dropped.
- With the default setting Tolerance->Automatic, singular values are dropped when they are less than 100×10^-p, where p is Precision[m].
- For non-singular square matrices M, the pseudoinverse M^(-1) is equivalent to the standard inverse.
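A rough sketch of typical usage (the 2×3 matrix below is just an illustrative choice):
    m = {{1, 2, 3}, {4, 5, 6}};
    p = PseudoInverse[m];                       (* exact pseudoinverse of a 2×3 matrix *)
    m.p.m == m && p.m.p == p                    (* the defining identities hold: True *)
    PseudoInverse[N[m], Tolerance -> 10.^-8]    (* drop singular values below 10^-8 times the largest *)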
Examples
Basic Examples (5)
Find the pseudoinverse of an invertible matrix:
The pseudoinverse is merely the inverse:
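For instance, with an assumed 2×2 matrix:
    m = {{1, 2}, {3, 4}};
    PseudoInverse[m]                  (* {{-2, 1}, {3/2, -1/2}} *)
    PseudoInverse[m] == Inverse[m]    (* True *)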
Find the pseudoinverse of a singular matrix:
The determinant of the matrix is zero, so it does not have a true inverse:
For a pseudoinverse p of the matrix m, both m.p.m==m and p.m.p==p:
However, in this particular case neither m.p nor p.m is an identity matrix:
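A sketch along these lines, using an assumed rank-1 matrix:
    m = {{1, 2}, {2, 4}};
    Det[m]                                                 (* 0, so m has no true inverse *)
    p = PseudoInverse[m]                                   (* {{1/25, 2/25}, {2/25, 4/25}} *)
    m.p.m == m && p.m.p == p                               (* True *)
    {m.p == IdentityMatrix[2], p.m == IdentityMatrix[2]}   (* {False, False} *)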
Find the pseudoinverse of a rectangular matrix:
In this particular case, one of the products of the matrix with its pseudoinverse is an identity matrix:
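A sketch with an assumed 3×2 matrix of full column rank, for which the pseudoinverse acts as a left inverse:
    m = {{1, 0}, {0, 1}, {1, 1}};
    p = PseudoInverse[m]          (* {{2/3, -1/3, 1/3}, {-1/3, 2/3, 1/3}} *)
    p.m == IdentityMatrix[2]      (* True *)
    m.p == IdentityMatrix[3]      (* False: m.p is only a projection *)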
Scope (10)
Basic Uses (6)
Special Matrices (4)
The pseudoinverse of a sparse matrix is returned as a normal matrix:
When possible, the pseudoinverse of a structured matrix is returned as another structured matrix:
IdentityMatrix[n] is its own pseudoinverse:
The pseudoinverse of IdentityMatrix[{m,n}] is its transpose:
Compute the pseudoinverse for HilbertMatrix:
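Illustrative sketches of these cases (the particular sizes are assumptions):
    PseudoInverse[SparseArray[{{1, 2, 3}, {4, 5, 6}}]]                            (* returned as an ordinary dense matrix *)
    PseudoInverse[IdentityMatrix[3]] == IdentityMatrix[3]                         (* True *)
    PseudoInverse[IdentityMatrix[{2, 3}]] == Transpose[IdentityMatrix[{2, 3}]]    (* True *)
    PseudoInverse[HilbertMatrix[3]]                                               (* equals Inverse[HilbertMatrix[3]] exactly *)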
Options (1)
Tolerance (1)
Some singular values are below the default tolerance for machine precision:
Compute the pseudoinverse with the default tolerance:
It is not a true inverse since some singular values were considered to be effectively zero:
Compute the pseudoinverse with no tolerance:
Even though no singular values were considered zero, it is worse due to numerical error:
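One possible version of this comparison, using an assumed ill-conditioned 12×12 Hilbert matrix:
    m = N[HilbertMatrix[12]];
    svals = SingularValueList[m, Tolerance -> 0];
    Min[svals]/Max[svals]                     (* far below the default relative tolerance *)
    p1 = PseudoInverse[m];                    (* default tolerance: tiny singular values dropped *)
    p2 = PseudoInverse[m, Tolerance -> 0];    (* no tolerance: every singular value is inverted *)
    Norm[m.p1.m - m]                          (* small, though p1 is not a true inverse of m *)
    Norm[m.p2.m - m]                          (* typically much larger: roundoff is amplified by the huge entries of p2 *)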
Applications (8)
Equation Solving (4)
Solve the following system of equations using PseudoInverse:
Rewrite the system in matrix form:
The general solution is given by p.b+(IdentityMatrix[n]-p.m).v for an arbitrary vector v, where p=PseudoInverse[m] and b is the constant vector:
Since the arbitrary vector v dropped out, the solution is unique, as can be verified using SolveValues:
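A sketch with an assumed pair of equations:
    m = {{1, 2}, {3, 4}}; b = {5, 6};    (* x + 2 y == 5, 3 x + 4 y == 6 *)
    p = PseudoInverse[m];
    p.b                                  (* {-4, 9/2} *)
    p.m == IdentityMatrix[2]             (* True, so the arbitrary-vector term vanishes *)
    SolveValues[{x + 2 y == 5, 3 x + 4 y == 6}, {x, y}]    (* {{-4, 9/2}} *)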
Find all solutions of the following system of equations:
First, write the coefficient matrix m, the vector of variables x and the constant vector b:
The general solution is given by p.b+(IdentityMatrix[3]-p.m).v for an arbitrary vector v, where p=PseudoInverse[m]:
Although there are three parameters, this solution represents a line:
This is because the null space of m is one-dimensional:
Hence it is possible to reparameterize to eliminate two of the parameters:
This parameterization gives the answer in the same form as SolveValues:
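A sketch with an assumed underdetermined system of two equations in three unknowns:
    m = {{1, 1, 1}, {1, 2, 3}}; b = {1, 2};
    p = PseudoInverse[m];
    gen = p.b + (IdentityMatrix[3] - p.m).{v1, v2, v3};    (* general solution, three parameters *)
    NullSpace[m]                                           (* a basis for the one-dimensional null space *)
    p.b + t First[NullSpace[m]]                            (* the same line, with a single parameter t *)
    SolveValues[{x + y + z == 1, x + 2 y + 3 z == 2}, {x, y, z}]    (* the same line, in terms of the remaining free variable *)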
Find the minimum Frobenius-norm solution to m.x==b, with m and b as follows:
The minimum-norm solution is x=PseudoInverse[m].b:
Compute the Frobenius norm of x:
As in the vector case, the general solution is given by x=PseudoInverse[m].b+(IdentityMatrix[n]-PseudoInverse[m].m).c, with c now an arbitrary matrix:
The minimum occurs when all the entries of c are zero, confirming that PseudoInverse[m].b is the minimum-norm solution:
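A sketch of the matrix case, with an assumed full-row-rank m and a 2×2 right-hand side b:
    m = {{1, 1, 1}, {1, 2, 3}}; b = {{1, 0}, {0, 1}};
    sol = PseudoInverse[m].b;                  (* 3×2 minimum-norm solution *)
    m.sol == b                                 (* True: m has full row rank *)
    Norm[sol, "Frobenius"]
    cmat = Array[c, {3, 2}];                   (* arbitrary 3×2 matrix of symbolic parameters *)
    gen = sol + (IdentityMatrix[3] - PseudoInverse[m].m).cmat;    (* general solution *)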
In this case there is no solution to m.x==b:
An approximate solution that minimizes the norm of m.x-b is given by x=PseudoInverse[m].b:
Compare to general minimization:
A more general least-squares solution is given by x=PseudoInverse[m].b+(IdentityMatrix[3]-PseudoInverse[m].m).v:
Although there are three parameters in v, the solution set represents a line:
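A sketch with an assumed inconsistent, rank-deficient system:
    m = {{1, 1, 0}, {1, 1, 0}, {0, 0, 1}}; b = {1, 2, 0};
    Solve[m.{x1, x2, x3} == b, {x1, x2, x3}]    (* {}: the system has no solution *)
    sol = PseudoInverse[m].b                    (* {3/4, 3/4, 0} *)
    Norm[m.sol - b]                             (* 1/Sqrt[2], the smallest achievable residual *)
    gen = sol + (IdentityMatrix[3] - PseudoInverse[m].m).{v1, v2, v3};
    Simplify[gen]                               (* only the combination v1 - v2 appears: a line *)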
Least Squares and Curve Fitting (4)
For the matrix m and vector b that follow, find a vector x that minimizes Norm[m.x-b]:
One solution, in this case unique, is given by x=PseudoInverse[m].b:
This result could also have been obtained using LeastSquares[m,b]:
Confirm the answer using Minimize:
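A sketch with an assumed tall matrix m and vector b:
    m = {{1, 0}, {0, 1}, {1, 1}}; b = {1, 2, 4};
    sol = PseudoInverse[m].b     (* {4/3, 7/3} *)
    LeastSquares[m, b] == sol    (* True *)
    Minimize[(m.{s, t} - b).(m.{s, t} - b), {s, t}]    (* {1/3, {s -> 4/3, t -> 7/3}} *)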
For the matrices m and b that follow, find a matrix x that minimizes the Frobenius norm of m.x-b:
One solution, in this case unique, is given by x=PseudoInverse[m].b:
This result could also have been obtained using LeastSquares[m,b]:
Confirm the answer using Minimize:
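The matrix version follows the same pattern; the right-hand side below is an assumed 3×2 matrix:
    m = {{1, 0}, {0, 1}, {1, 1}}; b = {{1, 0}, {0, 1}, {1, 2}};
    sol = PseudoInverse[m].b;
    sol == LeastSquares[m, b]       (* True *)
    Norm[m.sol - b, "Frobenius"]    (* the minimal Frobenius-norm residual *)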
PseudoInverse can be used to find a best-fit curve to data. Consider the following data:
Extract the x and y coordinates from the data:
Construct a design matrix, whose columns are 1 and x, for fitting to a line a+b x:
Get the coefficients a and b for a linear least-squares fit:
Verify the coefficients using Fit:
Plot the best-fit curve along with the data:
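A sketch of the whole workflow with assumed sample data:
    data = {{1, 1.2}, {2, 1.9}, {3, 3.1}, {4, 3.9}, {5, 5.1}};
    {xs, ys} = Transpose[data];
    design = Transpose[{ConstantArray[1, Length[xs]], xs}];    (* columns 1 and x *)
    {a, b} = PseudoInverse[design].ys;                         (* intercept a and slope b *)
    Fit[data, {1, x}, x]                                       (* the same line from Fit *)
    Show[ListPlot[data], Plot[a + b t, {t, 0, 6}]]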
Find the best-fit parabola to the following data:
Extract the x and y coordinates from the data:
Construct a design matrix, whose columns are 1, x and x^2, for fitting to a parabola a+b x+c x^2:
Get the coefficients a, b and c for a least-squares fit:
Verify the coefficients using Fit:
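The parabola fit follows the same pattern, with an extra column for x^2 (again with assumed data):
    data = {{-2, 4.2}, {-1, 0.9}, {0, 0.1}, {1, 1.1}, {2, 3.9}};
    {xs, ys} = Transpose[data];
    design = Transpose[{ConstantArray[1, Length[xs]], xs, xs^2}];    (* columns 1, x and x^2 *)
    {a, b, c} = PseudoInverse[design].ys;                            (* coefficients of 1, x and x^2 *)
    Fit[data, {1, x, x^2}, x]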
Properties & Relations (14)
For a nonsingular matrix, the pseudoinverse is the same as the inverse:
PseudoInverse is involutive, i.e. PseudoInverse[PseudoInverse[m]]==m:
PseudoInverse commutes with Transpose, i.e. PseudoInverse[Transpose[m]]==Transpose[PseudoInverse[m]]:
It also commutes with Conjugate, i.e. PseudoInverse[Conjugate[m]]==Conjugate[PseudoInverse[m]]:
Hence it commutes with ConjugateTranspose, i.e. PseudoInverse[ConjugateTranspose[m]]==ConjugateTranspose[PseudoInverse[m]]:
PseudoInverse satisfies the Moore–Penrose equations: m.p.m==m, p.m.p==p, and both m.p and p.m are Hermitian, where p=PseudoInverse[m]:
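A sketch checking these identities for an assumed 2×3 matrix:
    m = {{1, 2, 3}, {4, 5, 6}};
    p = PseudoInverse[m];
    {m.p.m == m, p.m.p == p,
     ConjugateTranspose[m.p] == m.p, ConjugateTranspose[p.m] == p.m}    (* {True, True, True, True} *)
    PseudoInverse[p] == m                          (* True: PseudoInverse is involutive *)
    PseudoInverse[Transpose[m]] == Transpose[p]    (* True: it commutes with Transpose *)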
If MatrixRank[m] equals the number of columns of m, then PseudoInverse[m].m is the identity matrix:
In particular, PseudoInverse[m] is a left-inverse of m:
If MatrixRank[m] equals the number of rows of m, then m.PseudoInverse[m] is the identity matrix:
In particular, PseudoInverse[m] is a right-inverse of m:
For a diagonal matrix d, PseudoInverse[d] is the transpose with nonzero elements inverted:
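For example, with an assumed rectangular diagonal matrix:
    d = {{2, 0, 0}, {0, 5, 0}};
    PseudoInverse[d]    (* {{1/2, 0}, {0, 1/5}, {0, 0}}: transposed shape with nonzero entries inverted *)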
If m has the singular value decomposition m==u.s.ConjugateTranspose[v], then PseudoInverse[m]==v.PseudoInverse[s].ConjugateTranspose[u]:
If a is an n×m matrix and MatrixRank[a]==m, QRDecomposition will give the pseudoinverse:
A normal matrix commutes with its pseudoinverse:
PseudoInverse[m] can be computed as DrazinInverse[ConjugateTranspose[m].m].ConjugateTranspose[m]:
LeastSquares and PseudoInverse can both be used to solve the least-squares problem:
PseudoInverse[m].b gives the minimum-norm x that minimizes the residual Norm[m.x-b]:
Verify that x minimizes the residual:
Adding any vector in the NullSpace of m to x will leave the residual unchanged:
The minimum norm occurs when the null-space component is zero, i.e. when x==PseudoInverse[m].b:
For a vector b and a matrix m with empty null space, PseudoInverse[m].b equals ArgMin[Norm[m.x-b],x]:
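A sketch of these relationships, with an assumed rank-deficient m so that the null-space term matters:
    m = {{1, 1}, {1, 1}, {1, 1}}; b = {1, 2, 3};
    sol = PseudoInverse[m].b               (* {1, 1} *)
    r = Norm[m.sol - b]                    (* Sqrt[2], the minimal residual *)
    ns = First[NullSpace[m]];              (* a basis vector for the null space *)
    Norm[m.(sol + t ns) - b] == r          (* True: the residual is unchanged along the null space *)
    Expand[(sol + t ns).(sol + t ns)]      (* 2 + 2 t^2, so the norm is minimal at t == 0 *)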