This is documentation for an obsolete product.

6 Mathematical Utilities

Signals and Systems contains, in addition to the transforms and analysis tools already described, a variety of functions that are best described as "mathematical utilities". Some of these are essential signal processing operations such as convolution and correlation. Others are utilities for dealing with certain mathematical constructs such as Smith forms of sampling matrices.

6.1 Convolution and Correlation

One of the most widely used mathematical tools in signal processing is convolution. Signals and Systems provides convolution routines for both continuous and discrete signals. It also provides visualization techniques for convolution by way of the "flip-and-slide" technique.

Currently, only time-domain convolution techniques for one-dimensional functions are defined.

Functions to perform convolution.

Convolution is formally defined for continuous functions as

(x * h)(t) == Integrate[x[tau] h[t - tau], {tau, -Infinity, Infinity}]

For discrete functions, the integration is replaced by a summation:

(x * h)[n] == Sum[x[k] h[n - k], {k, -Infinity, Infinity}]

The routines optimize this computation in various ways for piecewise functions and sampled data.
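For finite-length sequences, the discrete convolution sum can be evaluated directly. This Python fragment is an illustration of the computation that the package routines optimize, not part of the package itself.

```python
def discrete_convolution(x, h):
    """Direct evaluation of y[n] = Sum_k x[k] h[n - k] for finite
    sequences x and h, both assumed to start at index 0."""
    y = [0] * (len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        for m, hm in enumerate(h):
            y[k + m] += xk * hm
    return y
```

The output length is len(x) + len(h) - 1, matching the support of the convolution of two finite sequences.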

First, verify that the signal processing functions are available.


The convolution of two truncated exponential sequences can be calculated symbolically.

In[2]:=DiscreteConvolution[
Exp[-a n] UnitStep[n],
Exp[-b n] UnitStep[n],
n]


We can plot the result with particular decay constants substituted. Note that we used discrete functions, which leads to a different result at zero compared to the continuous case.

In[3]:=DiscreteSignalPlot[%/.{a -> .5, b -> .9},
{n, -1, 6}]


Remember from Chapter 3 that the piecewise data types can be converted into functions by the application of Normal.

The Hilbert transform can be defined in terms of a convolution. Here is the Hilbert transform of a unit-width pulse function centered at zero. Normal is used to convert the piecewise data type to a formula.

In[4]:=Normal[ContinuousConvolution[
-1/(Pi t),
ContinuousPulse[1, t + 1/2],
t]]


Here is a sampled data object representing a noisy signal.

In[5]:=data = SampledData[
Table[Sin[x] + Random[Real, {-0.2, 0.2}],
{x, 0, 2 Pi, .1}
], n]

We can plot the signal to understand its characteristics.

In[6]:=DiscreteSignalPlot[data, {n, 0, 60}]


This convolution acts to smooth the data.

In[7]:=newdata = DiscreteConvolution[
SampledData[{0.1, 0.2, 0.4, 0.2, 0.1}, n],
data, n]

We can easily observe the effect on the signal.

In[8]:=DiscreteSignalPlot[newdata, {n, 0, 60}]


The convolution routines allow functions with purely symbolic endpoints for the intervals to be convolved. This simple example convolves two pulses of identical unknown length, and different unknown offsets. Specifying symbolic endpoints can lead to very complex output, and should be used with care.

In[9]:=DiscreteConvolution[
DiscretePulse[a, n + o1],
DiscretePulse[a, n + o2],
n]


Options for convolution.

The convolution functions take only one option. The Justification option determines whether an animation detailing the graphic "flip-and-slide" technique for convolution will be displayed. This method of visualization can be particularly useful in understanding the output. Various graphics and animation options can also be specified. These additional options are passed through to the animation routines, but take their defaults from the AnimateContinuousConvolution and AnimateDiscreteConvolution functions. Note that for SampledData objects in a discrete convolution, a faster vector convolution routine is used that ignores the Justification option. If you wish to create an animation of such a convolution, you will need to convert the data into a sum of DiscreteDelta functions and perform a symbolic convolution.

Functions for animating convolution by the "flip-and-slide" technique.

The functions for animating convolution do not return the mathematical result of the convolution. Instead, they return the frames of the animation, allowing the animation to be performed and modified programmatically if necessary. All the standard graphics and animation options are available to these functions. The defaults set for these functions are those used by ContinuousConvolution or DiscreteConvolution when Justification -> All. The functions also have one additional option called ConvolutionPausedFrames. Several frames of the animation (the comparison of the functions to be convolved, the starting position, and the ending position) are generated multiple times to produce a pause when the animation is running. The ConvolutionPausedFrames option takes a number indicating how many frames should be generated for each of these pauses.

This demonstrates the output available from the Justification option of the convolution functions or the convolution animation functions. A three-unit-long ramp function is being convolved with a pair of superimposed pulses that is five units long.

In[10]:=AnimateContinuousConvolution[
t ContinuousPulse[3, t],
ContinuousPulse[5, t] + ContinuousPulse[2, t],
t, DisplayFunction -> Identity,
ConvolutionPausedFrames -> 1,
PlotStyle -> {{Hue[0]}, {Hue[0.3]}, {Hue[0.6]}},
Frames -> 8]

Functions for autocorrelation.

The standard autocorrelation functions in Signals and Systems are based directly on the convolution routines. Autocorrelation is essentially the convolution of a function with the same function reversed in time.
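For sampled sequences this relationship is easy to verify numerically. The following Python sketch (an illustration, not the package routine) computes the autocorrelation at lag m as the sum of x[n] x[n - m], which is the same as convolving x with its time reverse.

```python
def autocorrelation(x):
    """r[m] = Sum_n x[n] x[n - m] for lags m = -(N-1) .. N-1;
    equivalent to convolving x with the reversed sequence x[::-1]."""
    N = len(x)
    return [sum(x[n] * x[n - m] for n in range(max(m, 0), min(N, N + m)))
            for m in range(-(N - 1), N)]
```

The autocorrelation of a constant pulse is a triangle, mirroring the continuous pulse example that follows.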

Here is the autocorrelation of a simple continuous pulse.

In[11]:=ContinuousAutocorrelation[
ContinuousPulse[4, t],
t]

We can directly display the resulting function.

In[12]:=SignalPlot[%, {t, -5, 5}]


The autocorrelation functions accept the same options as the convolution functions.

Functions to perform cross-correlation.

Cross-correlation is implemented in essentially the same fashion as autocorrelation; that is, it calls the convolution routines fairly directly, and supports any options that the convolution routines support.

This computes the cross-correlation of a short ramp and a single cycle of a sine wave. The output is relatively complex, and suppressed to save space.

In[13]:=ans = DiscreteCrosscorrelation[
n DiscretePulse[10, n],
Sin[Pi n/4] DiscretePulse[8, n],
n];
We can plot the result quite easily. Re is used to eliminate small complex-valued portions introduced by numerical computation.

In[14]:=DiscreteSignalPlot[Re[ans], {n, -10, 10}]


Note that Mathematica Version 4 has fast vector convolution routines available as built-in system functionality. For some applications, you may wish to use this instead of the Signals and Systems symbolic convolution routines.

6.2 General Operations

There are a wide variety of general mathematical operations employed in signal processing algorithms. Signals and Systems makes a number of these available for direct use. A special focus on matrix manipulation employed in multidimensional decimation routines is documented in the next section. This section documents other general math functions.

Set and Interval Functions

Operations on sets and intervals are related in principle. Signals and Systems does not rely heavily on this connection, but does provide some functions that deal with this domain of mathematics. The most important is that for determining the nonzero support of a signal as an interval.

Functions for working with intervals.

The NonZeroExtent function is applied internally by many functions. The nonzero portion of a signal is known as its support. This function attempts to determine the smallest interval that encompasses all nonzero intervals of the input function in terms of a given variable. The interval is one-dimensional, though it may be in terms of other variables. There may be zero-valued intervals in the returned interval. The interval is expressed as a pair of numbers, or as None if the signal is zero everywhere. If the interval cannot be determined, it will be assumed to range from -Infinity to Infinity.
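For sampled data, the idea reduces to finding the first and last nonzero samples. A minimal Python sketch (illustrative only, not the package implementation):

```python
def nonzero_extent(samples):
    """Smallest index interval covering every nonzero sample,
    or None if the signal is identically zero. Zero-valued samples
    may remain inside the returned interval."""
    idx = [i for i, v in enumerate(samples) if v != 0]
    return (idx[0], idx[-1]) if idx else None
```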

The signal given here has an open-ended positive interval.

In[15]:=NonZeroExtent[Sin[x] UnitStep[x], x]


The signal may be discrete. The returned interval may contain zero-valued areas.

In[16]:=NonZeroExtent[
DiscretePulse[4, -n - 3] +
DiscretePulse[4, n - 3],
n]

Here is the support of a rotated two-dimensional pulse, which is a nonseparable signal. Note the dependency of the resulting interval on the second variable.

In[17]:=NonZeroExtent[
RotateSignal[Pi/4, {t1, t2}][
ContinuousPulse[1, t1] ContinuousPulse[1, t2]],
t1]

The extent of a signal with no nonzero component is given as None.

In[18]:=NonZeroExtent[0, t]


The extent of a delta function is given at a point; the interval's endpoints are identical.

In[19]:=NonZeroExtent[
DiracDelta[t - 3],
t]

Functions for working with sets.

SetExclusion is a set-theoretic function that returns the elements appearing in exactly one of the input sets; that is, the union of the sets with every element common to two or more of the sets removed.
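Under this reading, the operation keeps only elements with multiplicity one across the inputs. A Python sketch of that interpretation (not the package code):

```python
from collections import Counter

def set_exclusion(*sets):
    """Elements that appear in exactly one of the input sets:
    the union with every mutually shared element removed."""
    counts = Counter(x for s in sets for x in set(s))
    return sorted(x for x, c in counts.items() if c == 1)
```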

Here is the exclusion of three sets of integers.

In[20]:=SetExclusion[{1,2,3}, {3,4,5}, {6,3,7,8}]


Number-Theoretic Functions

Signals and Systems contains several number-theoretic functions. One context in which these are used is that of computations involving Smith form matrices. (Functions directly involving Smith form matrices are detailed in the section on matrix operations.)

Functions related to number theory in Signals and Systems.

BezoutNumbers[n1, n2, ... ] returns a list of integers {a1, a2, ... } (whose length is equal to the number of inputs to the function) that fulfills the formula

a1 n1 + a2 n2 + ... == GCD[n1, n2, ... ]

For two arguments, this is equivalent to the standard function ExtendedGCD. Note that solutions are not unique.
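The computation generalizes the extended Euclidean algorithm by folding it across the argument list. A Python sketch (producing one of the many valid solution sets, not necessarily the one the package returns):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b), g >= 0."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def bezout_numbers(*ns):
    """Coefficients c with sum(c_i * n_i) == GCD of all the inputs."""
    g, coeffs = ns[0], [1]
    for n in ns[1:]:
        g, x, y = extended_gcd(g, n)
        coeffs = [c * x for c in coeffs] + [y]
    return coeffs
```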

Here are the Bezout numbers corresponding to three integers.

In[21]:=BezoutNumbers[14, 24, 7]


The EuclidFactors function is closely related to BezoutNumbers. Given integers k, p, and q, you can use EuclidFactors to find {a, b} such that a p + b q == k. The integers p and q must be relatively prime. The solution is chosen from a range of integers of length Min[p, q]. The inputs p and q may also be integer matrices of identical size, and k an integer vector, in which case the output will be integer vectors. In this case EuclidFactors is useful in the determination of coset vectors.
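For scalar inputs, the factors follow from the extended Euclidean algorithm. This Python sketch solves a p + b q == k (the argument order and range convention of the actual EuclidFactors function may differ):

```python
def extended_gcd(a, b):
    """(g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def euclid_factors(k, p, q):
    """Solve a*p + b*q == k for relatively prime p and q,
    with a reduced into the range 0 .. q-1."""
    g, x, _ = extended_gcd(p, q)
    if g != 1:
        raise ValueError("p and q must be relatively prime")
    a = (k * x) % q              # a*p == k (mod q) since x*p == 1 (mod q)
    b = (k - a * p) // q         # exact: k - a*p is divisible by q
    return a, b
```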

These are Euclid factors of two integers.

In[22]:=EuclidFactors[6, 7, 4]


Here the computation involves integer matrices.



The function RelativelyPrimeQ is a boolean test that determines whether the two inputs are relatively prime, that is, containing no common prime factors. The inputs may be integers, polynomials, or square integer matrices (resampling matrices).

Here are two integers that are not relatively prime.

In[24]:=RelativelyPrimeQ[9, 42]


Here are two polynomials that are relatively prime.

In[25]:=RelativelyPrimeQ[x^2 + 2, x - 3]


When resampling matrices are being compared, you must choose whether the divisor is on the left or the right side, since matrix multiplication is noncommutative. This is done with the option MatrixFactor, which can be Left or Right. Left is the default value.

These two matrices are relatively prime on the left.

In[26]:=RelativelyPrimeQ[
{{1, -2}, {3, 1}},
{{4, -3}, {5, 5}}
]


However, they are not relatively prime on the right.

In[27]:=RelativelyPrimeQ[
{{1, -2}, {3, 1}},
{{4, -3}, {5, 5}},
MatrixFactor -> Right
]


Special Polynomials

Functions generating particular polynomials.

The Bessel polynomials can be useful in certain filter design computations. These polynomials can be defined in terms of a recursion relation, B[n_, s_] := (2 n - 1) B[n - 1, s] + s^2 B[n - 2, s], with B[0, s_] := 1 and B[1, s_] := s + 1. It is equivalent to truncating a continued fraction expansion of the unit delay Exp[-s]. Because the roots of this polynomial are quite useful in design computations, a function that specifically generates the roots of this polynomial for a given order is also available.
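The recursion translates directly into code. This Python sketch builds the coefficient list (constant term first) of the stated recursion; it is an illustration, not the package's BesselPolynomial.

```python
def bessel_polynomial(n):
    """Coefficients [c0, c1, ..., cn] of B[n, s], using
    B[m] = (2m - 1) B[m-1] + s^2 B[m-2], B[0] = 1, B[1] = s + 1."""
    if n == 0:
        return [1]
    prev2, prev = [1], [1, 1]
    for m in range(2, n + 1):
        cur = [(2 * m - 1) * c for c in prev] + [0]
        for i, c in enumerate(prev2):
            cur[i + 2] += c          # the s^2 B[m-2] term
        prev2, prev = prev, cur
    return prev
```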

Here is the third-order Bessel polynomial.

In[28]:=BesselPolynomial[3, x]


These are the roots of the eighth-order Bessel polynomial.



The rational Chebyshev polynomial is the basis for the response of analog elliptic filters. In Mathematica notation, the magnitude-squared response can be written

1 / (1 + eps^2 RationalChebyshevPolynomial[n, eps, L, w]^2)

where eps and L are dependent upon the filter parameters and the transition ratio. The function requires an integer argument for the order, and will return a function that is the ratio of two polynomials in the given variable.

Here is the third-order polynomial with specific values set for the parameters.

In[30]:=RationalChebyshevPolynomial[3, 2., 5., x]


Here is a plot of the square of the polynomial for particular values of the parameters. Note the use of Evaluate so the polynomial is computed only once, with values substituted into the result. This improves the plotting performance in this case.

In[31]:=Plot[Evaluate[
RationalChebyshevPolynomial[5, 1.3, 207.1, x]^2
], {x, -5, 5},
PlotPoints -> 100]


The polynomial referred to as ZPolynomial is defined as Product[-n - k, {k, 0, m - 1}] for the discrete variable n and integer order m. It has the special property under the Z transform that the transform of the product of this polynomial and a function f[n] is equivalent to z^m D[F[z], {z, m}] (where F[z] is the Z transform of f[n]).
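The definition is a simple finite product. A Python sketch for integer arguments (illustrative only):

```python
def z_polynomial(m, n):
    """Product[-n - k, {k, 0, m - 1}] evaluated at an integer n."""
    prod = 1
    for k in range(m):
        prod *= -n - k
    return prod
```

Equivalently, z_polynomial(m, n) is the rising factorial n (n + 1) ... (n + m - 1) with sign (-1)^m.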

Here is the third-order polynomial.

In[32]:=ZPolynomial[3, n]


Here we see the property of the Z polynomial under the Z transform.

In[33]:=ZTransform[
ZPolynomial[3, n] f[n],
n, z,
TransformPairs -> {f[n] -> F[z]}
]


Polynomial and Rational Function Manipulation

Functions for manipulating polynomials and rational polynomials.

The GetRootList function numerically finds roots of the input polynomial, and returns them in a convenient list form.

Here are the roots of the fourth-order Bessel polynomial.

In[34]:=GetRootList[
BesselPolynomial[4, x],
x]

PartialFractionsDecomposition performs a partial fraction decomposition on the rational polynomial that is given as input. This is more general than the standard Apart function.

For some arguments, PartialFractionsDecomposition is very similar to Apart.

In[35]:=PartialFractionsDecomposition[
(a - b)(b - a)/((a + x)(b + x)),
x]

However, it will work for a wider variety of computations, particularly those involving complex numbers or radicals.

In[36]:=PartialFractionsDecomposition[
1/(x^2 + 2),
x]

ScalingFactor returns the greatest common divisor of the coefficients of powers of the specified variable when they appear as a discrete sum of terms in the input expression. This sort of computation is useful for the similarity property of transforms.

The parameter a appears as a common factor in the terms of this polynomial.

In[37]:=ScalingFactor[
3 a x^2 + a^2 x + a b x^4,
x]

Complex Numbers

Operations to find the symbolic magnitude and phase of expressions.

The usual Mathematica functions Abs and Arg are inert when applied to symbolic expressions; they do not attempt to rewrite the expression in terms of the real and imaginary parts of the symbols it contains. However, ComplexExpand can be used for this purpose. The functions Magnitude and Phase make this easy by calling ComplexExpand for you. All symbols not given in the second argument are assumed to be real-valued; symbols that should be considered complex-valued are placed in a list as the second argument to these functions.

Mathematica does not automatically simplify this expression.

In[38]:=Abs[Exp[I Pi t]]


Magnitude can be used instead.

In[39]:=Magnitude[Exp[I Pi t]]


The variable is assumed to be real.

In[40]:=Phase[Exp[I Pi t]]
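The symbolic results above can be spot-checked numerically. A small Python sketch using the standard cmath module:

```python
import cmath

# For real t, Exp[I Pi t] lies on the unit circle: magnitude 1,
# phase Pi*t (wrapped into (-pi, pi] by cmath.phase).
for t in (0.0, 0.3, 0.9):
    z = cmath.exp(1j * cmath.pi * t)
    assert abs(abs(z) - 1.0) < 1e-12
    assert abs(cmath.phase(z) - cmath.pi * t) < 1e-12
```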


6.3 Matrix Operations

There are a variety of operations on matrices that are especially relevant to multidimensional signal processing. In particular, operations on resampling matrices are useful in the design of multidimensional decimation systems and filter banks.

A few definitions will be particularly useful for this section.

■ A resampling matrix is a square nonsingular integer or rational matrix. It can be employed in a mapping of the integer lattice.

■ Unimodular matrices are matrices whose determinant is -1, 0, or 1. Regular unimodular matrices have determinants that are 1 or -1. Resampling matrices that are unimodular can be used to shuffle an underlying signal lattice without introducing upsampling or decimation.

■ A lattice is an infinite set of discrete n-tuples in an n-dimensional Cartesian space that forms a group under vector addition. That is, given a set of n vectors (that correspond to the columns of an n by n resampling matrix), all tuples in the lattice can be found by integer linear combinations of those vectors. The integer lattice is the set of all integer n-tuples. A sublattice is a lattice contained within another lattice.

■ Given a sublattice characterized by a sampling matrix S in the integer lattice, a coset is a copy of the sublattice formed by shifting by an integer vector. A coset vector is a vector from the origin to a point in a coset. There are Abs[Det[S]] distinct (nonintersecting) cosets in the integer lattice; the distinct coset vectors are a set of coset vectors, one per coset, that can be used to identify the cosets. The distinct coset vectors all lie within the fundamental parallelepiped of the sampling matrix, and can be used to enumerate a block of samples for resampling.

General Matrix Manipulation

Some of the matrix operations in Signals and Systems are applicable to wider classes of matrices than resampling matrices.

General matrix operations.

The DiagonalMatrixQ function is a straightforward identifier of diagonal matrices. It returns True if the matrix is a square matrix with nonzero elements only on the diagonal, and False otherwise.

This matrix is diagonal. There are no constraints on what the diagonal elements may be.

In[41]:=DiagonalMatrixQ[
{{z, 0, 0},
{0, 0.5, 0},
{0, 0, 3/2}}
]

The NormalizeSamplingMatrix function can be used to turn general rational matrices into matrices with integer elements. For square matrices, this operation results in resampling matrices. However, the operation will work on matrices that are not square. The input matrix must have rational numbers as its elements. The result is a pair of matrices, the first of which is diagonal, and the second of which is an integer matrix.
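The factorization amounts to pulling the least common multiple of each row's denominators out as a diagonal scale. A Python sketch with exact rationals (illustrative, not the package routine):

```python
from fractions import Fraction
from math import lcm

def normalize_sampling_matrix(mat):
    """Factor a rational matrix M as D . N where D is diagonal and
    N is an integer matrix: row i is scaled by the lcm L_i of its
    denominators, and D carries the factors 1/L_i on its diagonal."""
    rows = [[Fraction(x) for x in row] for row in mat]
    diag, integer = [], []
    for row in rows:
        L = lcm(*[x.denominator for x in row])
        diag.append(Fraction(1, L))
        integer.append([int(x * L) for x in row])
    return diag, integer
```

With the 2 by 3 matrix of the example, the diagonal factors come out as 1/4 and 1/3.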

Here is a 2 by 3 rational matrix.

In[42]:=mat = {{1/4, 4, 3},
{2/6, 1, 7}};

Now we normalize the matrix.

In[43]:=NormalizeSamplingMatrix[mat]

The normalized matrix can be derived by taking the product of the inverse of the diagonal matrix and the original matrix.

In[44]:=Inverse[First[%]] . mat === Last[%]


Hermite form matrices are integer triangular matrices. These functions return both the regular unimodular matrix that performs the pivoting to produce the triangular matrix, and the triangular matrix that results. ColumnHermiteForm produces an upper triangular matrix, while RowHermiteForm produces a lower triangular matrix. These functions operate only on integer matrices.

Here is a 3 by 3 integer matrix.

In[45]:=mat = {{2, 3, 1},
{6, 3, 2},
{1, 5, 1}};

Here are the permutation matrix and the lower triangular matrix.

In[46]:=RowHermiteForm[mat]

We see that the lower triangular matrix is derived by a matrix multiplication of the original matrix and the permutation matrix.

In[47]:=mat . First[%] === Last[%]


While the matrix needs to have integer elements, it does not have to be square. However, the unimodular matrix that is derived will be square.

In[48]:=RowHermiteForm[
{{2, 6},
{3, 3},
{1, 2}}
]


Resampling Matrix Operations

Functions to determine information about resampling matrices.

It is useful to be able to determine the characteristics of various matrices. ResamplingMatrix determines whether a matrix is a resampling matrix. If it contains symbolic elements such that it could be a resampling matrix, the function returns a logical expression denoting the conditions on the symbolic components required to make the matrix a resampling matrix. RegularUnimodularResamplingMatrixQ returns True if the matrix is a regular unimodular resampling matrix, and False otherwise. The CommutableResamplingMatricesQ function returns True if the two matrices used as inputs are commutable when the first is treated as a downsampler and the second as an upsampler, and False otherwise.

Here is a regular unimodular resampling matrix.

In[49]:=mat = {{1,1},{1,2}};

The inverse of such a matrix is also a regular unimodular resampling matrix.

In[50]:=RegularUnimodularResamplingMatrixQ[Inverse[mat]]

Another characteristic of these matrices is that when one is used as a downsampler, it is commutable with its inverse used as an upsampler, and vice versa.

In[51]:=CommutableResamplingMatricesQ[
mat, Inverse[mat]
]


When given a square matrix with symbolic components, ResamplingMatrix will return the conditions on those components required to make the matrix a resampling matrix. Otherwise, it returns True or False.

In[52]:=ResamplingMatrix[
{{a, b},
{c, d}}
]


Functions for finding common multiples and divisors of resampling matrices.

A number of mathematical operations can be performed on resampling matrices. A common operation is to determine common divisors and multiples of such matrices. Because matrix multiplication is not commutative, these operations divide into determining right and left multiples (and divisors). These divisors and multiples are also resampling matrices. They are not, however, unique; they may be multiplied (on the right for right multiples or left divisors, and on the left for left multiples or right divisors) by regular unimodular resampling matrices to obtain other common multiples or divisors. A regular unimodular resampling matrix plays much the same role that 1 does in determining greatest common divisors of integers. Computationally, the GCLD is related to the LCRM, and the GCRD and LCLM are derived by a dual algorithm.

Here is the greatest common left divisor of two matrices.



Here is the least common left multiple of two matrices.



Operations on resampling matrices.

In general, a resampling operation can be represented not only by an integer matrix, but a rational nonsingular square matrix. Such an operation is equivalent to a cascaded upsampling and downsampling operation by integer matrices. FactorResamplingMatrix allows you to convert a rational resampling matrix into normal integer resampling matrices. Since it is the inverse of the resampling matrix that is used in upsampling operations, FactorResamplingMatrix returns that inverse.

Here is a rational resampling matrix factored into the inverse of an upsampling matrix and a downsampling matrix.

In[55]:={invupsamp, downsamp} =
FactorResamplingMatrix[
{{3/2, 3/5},
{6/7, 8}}
]

We see that we can get back the original matrix by application of matrix multiplication.

In[56]:=invupsamp . downsamp


The resampling matrix corresponding to the upsampler is the inverse of the matrix that FactorResamplingMatrix returns.

In[57]:=Inverse[invupsamp]

An n by n resampling matrix corresponds to an n-dimensional signal processing operation. Each diagonal element is equivalent to an operation within a particular dimension, while the elements off the diagonal apply to coupling between the dimensions. An operation is separable only if the matrix is diagonal. In some situations when using Signals and Systems, it is useful to associate symbolic variables with each dimension. The ReorderResampling operation provides a way to reorder a resampling matrix based on those symbolic names. Given a matrix with a corresponding list of variables for each dimension, a permuted list of those variables can be given which will cause ReorderResampling to swap rows and columns to match the new variable positions. It returns the reordered matrix, the original variable list, and the reordered variable list.

Here is a reordering of a second-order matrix, corresponding to a two-dimensional operation.

In[58]:=ReorderResampling[
{{1, 2}, {3, 4}},
{n1, n2},
{n2, n1}
]


Although this operation is intended for resampling matrices, it will work on arbitrary square matrices. Here is a permuted symbolic 3 by 3 matrix representing a resampling matrix.

In[59]:=ReorderResampling[
{{a, b, c},
{d, e, f},
{g, h, i}},
{n1, n2, n3},
{n3, n2, n1}
]


To understand the ResamplingMatrixMod and DistinctCosetVectors functions, it may be useful to review the definitions at the start of this section. DistinctCosetVectors returns a list of Abs[Det[matrix]] distinct coset vectors for the lattice corresponding to an input resampling matrix. ResamplingMatrixMod of a vector v and resampling matrix S is defined as v - S . Floor[Inverse[S] . v]. The result will be a distinct coset vector.
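The modulus formula can be evaluated exactly with rational arithmetic. This Python sketch handles the 2 by 2 case (illustrative only, not the package routine):

```python
from fractions import Fraction
from math import floor

def resampling_matrix_mod(v, S):
    """v - S . Floor[Inverse[S] . v] for a 2 by 2 integer
    resampling matrix S; the result is a distinct coset vector."""
    (a, b), (c, d) = S
    det = a * d - b * c
    w = [Fraction(d * v[0] - b * v[1], det),      # Inverse[S] . v,
         Fraction(-c * v[0] + a * v[1], det)]     # computed exactly
    f = [floor(x) for x in w]
    return [v[0] - (a * f[0] + b * f[1]),
            v[1] - (c * f[0] + d * f[1])]
```

Reducing any integer vector this way lands in one of the Abs[Det[S]] cosets; for the quincunx matrix below there are three.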

There can be only one distinct coset vector for a regular unimodular resampling matrix. This vector will always be {0,0}.

In[60]:=DistinctCosetVectors[
{{1, 1},
{2, 1}}
]


This resampling matrix (which corresponds to a quincunx downsampler) has three distinct cosets.

In[61]:=DistinctCosetVectors[
{{1, 1},
{2, -1}}
]


Here is the modulus of a vector with respect to the resampling matrix used in the previous example.

In[62]:=ResamplingMatrixMod[
{23, 32},
{{1, 1},
{2, -1}}
]


Smith Form Matrices

Function for computing various Smith form matrices.

It is useful in many design situations to be able to perform computations on a multidimensional signal processing operation by separating the variables. When operations are expressed as resampling matrices, a system in which the variables can be separated is written as a diagonal matrix; if elements off the diagonal are nonzero, the system is not separable. For an arbitrary resampling matrix, it is possible to decompose it into the product of a regular unimodular resampling matrix, a diagonal (separable) resampling matrix, and another regular unimodular resampling matrix. The unimodular matrices essentially reshuffle elements of the lattice, and the diagonal matrix performs the signal processing task. This form of decomposition is called Smith decomposition, and the resulting triad of resampling matrices is the Smith form of the original resampling matrix. The decomposition is usually expressed as S == U . D . V, with U and V being the unimodular matrices, and D being the diagonal matrix.
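The structure of the decomposition can be illustrated with a small integer-matrix routine. This Python sketch diagonalizes A as U . D . V with unimodular U and V by tracked row and column operations; it is an illustration only, and omits the divisibility and ordering normalizations that the package's variants of the Smith form provide.

```python
def smith_decomposition(A):
    """Return (U, D, V) with A == U . D . V, U and V unimodular,
    and D diagonal. Sketch only: divisibility of successive diagonal
    entries (the full Smith normal form condition) is not enforced."""
    m, n = len(A), len(A[0])
    D = [row[:] for row in A]
    U = [[int(i == j) for j in range(m)] for i in range(m)]
    V = [[int(i == j) for j in range(n)] for i in range(n)]

    def swap_rows(i, j):              # D -> E D, so U -> U E (E self-inverse)
        D[i], D[j] = D[j], D[i]
        for r in U:
            r[i], r[j] = r[j], r[i]

    def swap_cols(i, j):              # D -> D E, so V -> E V
        for r in D:
            r[i], r[j] = r[j], r[i]
        V[i], V[j] = V[j], V[i]

    def add_row(i, j, k):             # row_i += k row_j; U: col_j -= k col_i
        D[i] = [a + k * b for a, b in zip(D[i], D[j])]
        for r in U:
            r[j] -= k * r[i]

    def add_col(i, j, k):             # col_j += k col_i; V: row_i -= k row_j
        for r in D:
            r[j] += k * r[i]
        V[i] = [a - k * b for a, b in zip(V[i], V[j])]

    for t in range(min(m, n)):
        piv = [(abs(D[i][j]), i, j) for i in range(t, m)
               for j in range(t, n) if D[i][j]]
        if not piv:
            break
        _, i, j = min(piv)
        swap_rows(t, i)
        swap_cols(t, j)
        done = False
        while not done:               # clear row t and column t of D
            done = True
            for i in range(t + 1, m):
                q = D[i][t] // D[t][t]
                if q:
                    add_row(i, t, -q)
                if D[i][t]:           # nonzero remainder: smaller pivot found
                    swap_rows(t, i)
                    done = False
            for j in range(t + 1, n):
                q = D[t][j] // D[t][t]
                if q:
                    add_col(t, j, -q)
                if D[t][j]:
                    swap_cols(t, j)
                    done = False
    return U, D, V
```

Each elementary operation applied to D is compensated by the inverse operation on U or V, so the product U . D . V is invariant throughout.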

The Smith form of a matrix is not unique, as there are many more degrees of freedom in the new matrices than the original matrix. Signals and Systems provides ways of computing several variant Smith forms meeting different criteria by setting the options of SmithFormDecomposition.

The Method option is the main control over what variant of the decomposition is performed. When set to Normalized, the standard Smith form is computed. The algorithm is based on that used in Integer and Mixed Programming: Theory and Applications, by Arnold A. Kaufmann and A. Arnaud Henry-Labordere (Academic Press, 1977). Ordered rearranges the elements on the diagonal to be positive and in ascending order. Reduced generates a matrix where each element on the diagonal is a factor of the succeeding element. It is based on processing the ordered form. The equalized form is computed by Method -> Equalized, in which the elements on the diagonal are made to be as close to each other in value as possible. This routine is based on an algorithm by Brian L. Evans, Jürgen Teich, and Ton A. Kalker.

Here is the Smith normal form of a resampling matrix.

In[63]:=mats = SmithFormDecomposition[
{{1, 2, 3},
{3, 4, 1},
{-1, 3, 2}}
]

We can check that the matrices produced have the characteristics we desire.

In[64]:={RegularUnimodularResamplingMatrixQ[#1],
DiagonalMatrixQ[#2],
RegularUnimodularResamplingMatrixQ[#3]}& @@ mats


We can also check that the product of the decomposition still forms the original matrix.

In[65]:=newmat = Dot @@ mats


Here is the ordered form of the same matrix. For small matrices, the Smith normal form will be close to that of the ordered form, and the ordered form will usually be identical to the reduced form.

In[66]:=SmithFormDecomposition[
{{1, 2, 3},
{3, 4, 1},
{-1, 3, 2}},
Method -> Ordered
]


The equalized form is quite different, however. The elements on the diagonal will usually not be identical, but they will be much closer than with the original Smith form. This modified form is useful in the design of multidimensional filter banks. Note that minimization must be used to get the standard definition of the equalized form. This option is explained in more detail below.

In[67]:=SmithFormDecomposition[
{{1, 2, 3},
{3, 4, 1},
{-1, 3, 2}},
Method -> Equalized,
MinimizeType -> FrobeniusNorm
]


The matrix product of the equalized decomposition is still that of the original matrix.

In[68]:=Dot @@ %


If given a rational matrix, a variation of the Smith form called the Smith-McMillan form can be calculated. The computation is performed by multiplying the matrix elements by the least common multiple of the denominators of the rational elements. This forms an integer matrix which is then used in the computation of the Smith form. The diagonal matrix which results is divided by the multiple, forming a separable rational matrix.

A Smith-McMillan form can be computed from a rational matrix.

In[69]:=SmithFormDecomposition[
{{1/2, 3/4},
{2/5, 1/3}}
]


Options for Smith form decomposition.

The Justification option is much the same as in other functions from Signals and Systems. It can take the values None, Automatic, and All, which control how much information about the steps being taken in the computation is printed.

Here we see the pivots taken in the computation of a Smith form of this matrix.

In[70]:=SmithFormDecomposition[
{{4, 3}, {2, 5}},
Justification -> Automatic
]


The Method option we have already seen in action. It is the switch for the various types of Smith forms included.

Additional variants of the Smith form can be computed by use of the MinimizeType option. Two forms of minimization, FrobeniusNorm and LeastSquares, can be employed to minimize the unimodular matrices in the Smith form.

MinimizeType -> FrobeniusNorm attempts to minimize the regular unimodular matrix U or V in the sense of its Frobenius norm. You choose which of these should be minimized by setting the MinimizeSide option to Left or Right, respectively. The default is Left. The algorithm performing this minimization computes the Smith form based on a lattice-reduced form of the input matrix (computed via the standard function LatticeReduce).

Here is the Smith form generated with a minimized Frobenius norm for the right unimodular matrix V.

In[71]:=SmithFormDecomposition[
{{2, 1}, {3, 5}},
MinimizeType -> FrobeniusNorm,
MinimizeSide -> Right
]


The LeastSquares type of minimization will try to construct the Smith form with the chosen side as close to an identity matrix as possible (in a least-squares sense). Again, use the MinimizeSide option to choose whether it is the left or the right resampling matrix that is to be minimal.

Here is a decomposition.

In[72]:=SmithFormDecomposition[
{{2, 1}, {3, 5}}
]


This is the result when the left matrix is minimized in the least-squares sense.

In[73]:=SmithFormDecomposition[
{{2, 1}, {3, 5}},
MinimizeType -> LeastSquares
]


Decomposing a matrix based on a precomputed Smith form.

In some situations, it is useful to be able to compute the Smith form for one matrix while reusing the U or V unimodular matrix from another Smith form decomposition. The function ConstrainedSmithFormDecomposition attempts to do this with the specified precomputed Smith form decomposition.

Here is a Smith form decomposition of the ordered type.

In[74]:=decomp = SmithFormDecomposition[
{{1, 2},{4, 3}},
Method -> Ordered
]

This attempts to decompose the second matrix assuming that the U matrices of the two decompositions are identical. The new decomposition is not a Smith form, as seen by the False that appears in the answer.

In[75]:=ConstrainedSmithFormDecomposition[
{{5, 1},{6, 2}},
decomp
]


The right shuffling matrix for the second decomposition was not unimodular. Hence, a valid Smith form did not result.

In[76]:=RegularUnimodularResamplingMatrixQ[
%[[2, 3]]
]


Note that the decomposition generated is still correct in that the matrix product of the decomposition corresponds to the original matrix.

The option for ConstrainedSmithFormDecomposition.

We can also attempt to constrain the decomposition by presupposing the right resampling matrices to be identical.

In[77]:=ConstrainedSmithFormDecomposition[
{{5, 1},{6, 2}},
decomp,
MatrixFactor -> Right
]

