"OrthogonalProjection" Method for NDSolve

Introduction

Consider the matrix differential equation

Y'(t) = F(t, Y(t)), Y(0) = Y_0,

where the initial value Y_0 ∈ ℝ^(m×p) is given. Assume that Y_0^T Y_0 = I, that the solution has the property of preserving orthonormality, Y(t)^T Y(t) = I, and that it has full rank p ≤ m for all t ≥ 0.

From a numerical perspective, a key issue is how to integrate an orthogonal matrix differential system in such a way that the numerical solution remains orthogonal. Several strategies are possible. One approach, suggested in [DRV94], is to use an implicit Runge-Kutta method (such as the Gauss scheme). Some alternative strategies are described in [DV99] and [DL01].

The approach taken here is to use any reasonable numerical integration method and then postprocess using a projective procedure at the end of each integration step.

An important feature of this implementation is that the basic integration method can be any built-in numerical method, or even a user-defined procedure. In the following examples an explicit Runge-Kutta method is used for the basic time stepping. If greater accuracy is required, an extrapolation method could equally well be used, simply by setting the appropriate Method option.
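For instance, a call of the following form selects an explicit Runge-Kutta base method. The skew-symmetric matrix a, the initial value, and the integration interval are purely illustrative, and the matrix-valued setup assumes NDSolve's support for array-valued unknowns; it is a sketch, not the example used later in this tutorial:

    a = {{0, 1, 1}, {-1, 0, 1}, {-1, -1, 0}};   (* illustrative skew-symmetric matrix *)
    NDSolve[{y'[t] == a.y[t], y[0] == IdentityMatrix[3]}, y, {t, 0, 2},
      Method -> {"OrthogonalProjection",
        Method -> "ExplicitRungeKutta",         (* basic time-stepping method *)
        "Dimensions" -> {3, 3}}]                (* dimensions of the matrix unknown *)

Replacing the submethod setting with Method -> "Extrapolation" would select an extrapolation method for the basic time stepping instead.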

Projection Step

At the end of each numerical integration step you need to transform the approximate solution matrix of the differential system to obtain an orthogonal matrix. This can be carried out in several ways (see for example [DRV94] and [H97]):

- QR decomposition
- singular value decomposition
- Newton or Schulz iteration

Schulz Iteration

The approach chosen here is based on the Schulz iteration, which works directly for rectangular m × p matrices. In contrast, Newton iteration for m × p matrices needs to be preceded by a QR decomposition.

Comparison with direct computation based on the singular value decomposition is also given.

The Schulz iteration is given by

Y_(i+1) = Y_i + Y_i (I - Y_i^T Y_i) / 2.
The Schulz iteration has an arithmetic operation count per iteration of O(m p^2) floating-point operations, dominated by two matrix products, and is therefore rich in matrix multiplication [H97].

In a practical implementation, GEMM-based level 3 BLAS of LAPACK [LAPACK99] can be used in conjunction with architecture-specific optimizations via the Automatically Tuned Linear Algebra Software [ATLAS00]. Such considerations mean that the arithmetic operation count of the Schulz iteration is not necessarily an accurate reflection of the observed computational cost. A useful bound on the departure from orthonormality of Y, measured as ||Y^T Y - I||, is given in [H89]. Comparison with the Schulz iteration gives the stopping criterion ||E_i||_F = ||I - Y_i^T Y_i||_F ≤ τ for some tolerance τ.

Standard Formulation

Assume that an initial value y_n for the current solution of the ODE is given, together with a solution y_(n+1) = y_n + Δy_n from a one-step numerical integration method. Assume that an absolute tolerance τ for controlling the Schulz iteration is also prescribed.

The following algorithm can be used for implementation; a Wolfram Language sketch is given after the steps.

Step 1. Set Y_0 = y_(n+1) and i = 0.

Step 2. Compute E_i = I - Y_i^T Y_i.

Step 3. Compute Y_(i+1) = Y_i + Y_i E_i / 2.

Step 4. If ||E_i|| ≤ τ or i = i_max, then return Y_(i+1).

Step 5. Set i = i + 1 and go to step 2.
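A direct transcription of these steps as a standalone sketch (the function name schulzProject and the arguments tol and maxit are illustrative; the built-in method performs this projection internally):

    schulzProject[y0_?MatrixQ, tol_, maxit_] :=
      Module[{y = y0, e, i = 0, done = False},                        (* Step 1 *)
        While[!done,
          e = IdentityMatrix[Last[Dimensions[y]]] - Transpose[y].y;   (* Step 2 *)
          y = y + y.e/2;                                              (* Step 3 *)
          If[Norm[e, "Frobenius"] <= tol || i == maxit,               (* Step 4 *)
            done = True,
            i++]];                                                    (* Step 5: next iteration *)
        y]

For a nearly orthonormal approximation yapprox, a call such as schulzProject[yapprox, 10.^-12, 10] returns an orthonormalized matrix.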

Increment Formulation

NDSolve uses compensated summation to reduce the effect of rounding errors made by repeatedly adding the contribution of small quantities Δy_n to y_n at each integration step [H96]. Therefore, the increment Δy_n is returned by the base integrator.

An appropriate orthogonal correction ΔY_i for the projective iteration can be determined using the following algorithm; a sketch is given after the steps.

Step 1. Set ΔY_0 = 0 and i = 0.

Step 2. Set Y_i = ΔY_i + y_(n+1).

Step 3. Compute E_i = I - Y_i^T Y_i.

Step 4. Compute ΔY_(i+1) = ΔY_i + Y_i E_i / 2.

Step 5. If ||E_i|| ≤ τ or i = i_max, then return ΔY_(i+1) + Δy_n.

Step 6. Set i = i + 1 and go to step 2.
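A corresponding sketch of the increment formulation, again with illustrative names (yn stands for the current solution y_n and dyn for the increment Δy_n returned by the base integrator):

    schulzCorrection[yn_?MatrixQ, dyn_?MatrixQ, tol_, maxit_] :=
      Module[{dy = 0*yn, y, e, i = 0, done = False},                  (* Step 1 *)
        While[!done,
          y = dy + yn + dyn;                                          (* Step 2 *)
          e = IdentityMatrix[Last[Dimensions[y]]] - Transpose[y].y;   (* Step 3 *)
          dy = dy + y.e/2;                                            (* Step 4 *)
          If[Norm[e, "Frobenius"] <= tol || i == maxit,               (* Step 5 *)
            done = True,
            i++]];                                                    (* Step 6: next iteration *)
        dy + dyn]       (* corrected increment, to be added to yn by compensated summation *)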

This modified algorithm is used in "OrthogonalProjection" and shows an advantage of using an iterative process over a direct process, since it is not obvious how an orthogonal correction can be derived for direct methods.

Examples

Orthogonal Error Measurement

A function to compute the Frobenius norm of a matrix can be defined in terms of the Norm function as follows:
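A minimal definition along these lines (the name FrobeniusNorm is chosen for illustration):

    FrobeniusNorm[m_?MatrixQ] := Norm[m, "Frobenius"]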
An upper bound on the departure from orthonormality of Y can then be measured using this function [H97]:
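For example, the following illustrative definition measures ||Y^T Y - I|| in the Frobenius norm:

    OrthogonalError[m_?MatrixQ] :=
      FrobeniusNorm[Transpose[m].m - IdentityMatrix[Last[Dimensions[m]]]]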
This defines the utility function for visualizing the orthogonal error during a numerical integration:
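One possible sketch of such a utility samples the solution on a uniform grid and plots the orthogonal error on a logarithmic scale; the function name, the sampling grid, and the plot options are all illustrative, and a utility based on the integrator's own step data could be defined similarly:

    OrthogonalErrorPlot[yfun_, {t0_, t1_}, n_: 200] :=
      ListLogPlot[
        Table[{t, Max[OrthogonalError[yfun[t]], $MinMachineNumber]},
          {t, t0, t1, (t1 - t0)/n}],
        Joined -> True, AxesLabel -> {"t", "orthogonal error"}]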

Square Systems

This example concerns the solution of a matrix differential system on the orthogonal group (see [Z98]).

The matrix differential system is given by

with

and

The solution evolves as

The eigenvalues of are , , . Thus as approaches , two of the eigenvalues of approach -1. The numerical integration is carried out on the interval :
This computes the solution using an explicit Runge-Kutta method. The appropriate initial step size and method order are selected automatically, and the step size may vary throughout the integration interval so as to satisfy local relative and absolute error tolerances. Alternatively, the order of the method could be specified by using a Method option:
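Since the matrix system of this example is not reproduced above, the following sketch uses an illustrative skew-symmetric matrix and integration interval rather than the system from [Z98], and assumes NDSolve's support for array-valued unknowns:

    a = {{0, 1, 1}, {-1, 0, 1}, {-1, -1, 0}};   (* illustrative skew-symmetric matrix *)
    solrk = First[NDSolve[{y'[t] == a.y[t], y[0] == IdentityMatrix[3]}, y, {t, 0, 2},
        Method -> "ExplicitRungeKutta"]];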
This computes the orthogonal error, or absolute deviation from the orthogonal manifold, as the integration progresses. The error is of the order of the local accuracy of the numerical method:
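Continuing the sketch, the error of the unprojected solution can be visualized with the plotting utility defined earlier:

    OrthogonalErrorPlot[y /. solrk, {0, 2}]   (* departure from orthogonality over the interval *)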
This computes the solution using an orthogonal projection method with an explicit Runge-Kutta method used for the basic integration step. The initial step size and method order are the same as before, but the step size sequence in the integration may differ:
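In the sketch, the corresponding call wraps "OrthogonalProjection" around the same explicit Runge-Kutta base method:

    solproj = First[NDSolve[{y'[t] == a.y[t], y[0] == IdentityMatrix[3]}, y, {t, 0, 2},
        Method -> {"OrthogonalProjection", Method -> "ExplicitRungeKutta",
          "Dimensions" -> {3, 3}}]];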
Using the orthogonal projection method, the orthogonal error is reduced to approximately the level of roundoff in IEEE double-precision arithmetic:
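The orthogonal error of the projected solution in the sketch can be inspected in the same way:

    OrthogonalErrorPlot[y /. solproj, {0, 2}]   (* departure from orthogonality after projection *)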
The Schulz iteration, using the incremental formulation, generally yields smaller errors than the direct singular value decomposition.

Rectangular Systems

The following example shows how the implementation of the orthogonal projection method also works for rectangular matrix differential systems. Formally stated, the interest is in solving ordinary differential equations on the Stiefel manifold, the set of m × p orthogonal matrices with p < m.

Definition. The Stiefel manifold of m × p orthogonal matrices is the set V_(m,p)(ℝ) = {Y ∈ ℝ^(m×p) : Y^T Y = I_p}, 1 ≤ p ≤ m, where I_p is the p × p identity matrix.

Solutions that evolve on the Stiefel manifold find numerous applications such as eigenvalue problems in numerical linear algebra, computation of Lyapunov exponents for dynamical systems and signal processing.

Consider an example adapted from [DL01]:

where , , with and .

The exact solution is given by

Normalizing as

it follows that the normalized solution satisfies the following weak skew-symmetric system on the Stiefel manifold:

In the following example, the system is solved on the interval with and dimension :
This computes the exact solution which can be evaluated throughout the integration interval:
This computes the solution using an explicit Runge-Kutta method:
This computes the componentwise absolute global error at the end of the integration interval:
This computes the orthogonal error, a measure of the deviation from the Stiefel manifold:
This computes the solution using an orthogonal projection method with an explicit Runge-Kutta method as the basic numerical integration scheme:
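The system from [DL01] is not reproduced above, so the following self-contained sketch illustrates the rectangular usage with an invented 4 × 2 system whose exact flow stays on the Stiefel manifold (a skew-symmetric 4 × 4 coefficient matrix applied to an orthonormal 4 × 2 initial value); note the "Dimensions" -> {m, p} setting:

    m = 4; p = 2;
    a = {{0, 1, 0, 0}, {-1, 0, 1, 0}, {0, -1, 0, 1}, {0, 0, -1, 0}};  (* skew-symmetric *)
    y0 = IdentityMatrix[m][[All, 1 ;; p]];                            (* orthonormal 4 x 2 initial value *)
    solstiefel = First[NDSolve[{y'[t] == a.y[t], y[0] == y0}, y, {t, 0, 5},
        Method -> {"OrthogonalProjection", Method -> "ExplicitRungeKutta",
          "Dimensions" -> {m, p}}]];
    OrthogonalError[(y /. solstiefel)[5.]]                            (* deviation from the Stiefel manifold *)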
The componentwise absolute global error at the end of the integration interval is roughly the same as before since the absolute and relative tolerances used in the numerical integration are the same:
Using the orthogonal projection method, however, the deviation from the Stiefel manifold is reduced to the level of roundoff:

Implementation

The implementation of the method "OrthogonalProjection" has three basic components: initialization of the basic numerical integration method, invocation of the basic method to carry out each integration step, and orthogonal projection of the result of each step using the Schulz iteration.

Option Summary

option name              default value          description
"Dimensions"             {}                     specify the dimensions of the matrix differential system
"IterationSafetyFactor"  1/10                   specify the safety factor to use in the termination criterion for the Schulz iteration
"MaxIterations"          Automatic              specify the maximum number of iterations to use in the Schulz iteration
Method                   "StiffnessSwitching"   specify the method to use for the numerical integration

Options of the method "OrthogonalProjection".