The Design of the NDSolve Framework


Supporting a large number of numerical integration methods for differential equations is a lot of work.

In order to cut down on maintenance and duplication of code, common components are shared between methods.

This approach also allows code optimization to be carried out in just a few central routines.

The principal features of the NDSolve framework are:

  • Uniform design and interface
  • Code reuse (common code base)
  • Object orientation (method property specification and communication)
  • Data hiding
  • Separation of method initialization phase and run-time computation
  • Hierarchical and reentrant numerical methods
  • Uniform treatment of rounding errors (see [HLW02], [SS03], and the references therein)
  • Vectorized framework based on a generalization of the BLAS model [LAPACK99] using optimized in-place arithmetic
  • Tensor framework that allows families of methods to share one implementation
  • Type and precision dynamic for all methods
  • Plug-in capabilities that allow user extensibility and prototyping
  • Specialized data structures

Common Time Stepping

A common time-stepping mechanism is used for all one-step methods. The routine handles a number of different criteria including:

  • Step sizes in a numerical integration do not become too small, as may happen when solving stiff systems
  • Step sizes do not change sign unexpectedly, which may be a consequence of user programming error
  • Step sizes are not increased after a step rejection
  • Step sizes are not decreased drastically toward the end of an integration
  • Specified (or detected) singularities are handled by restarting the integration
  • Divergence of iterations in implicit methods (e.g. when using fixed, large step sizes) is detected and handled
  • Unrecoverable integration errors (e.g. numerical exceptions) are trapped and reported
  • Rounding error feedback (compensated summation) is particularly advantageous for high-order methods or methods that conserve specific quantities during the numerical integration
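Several of these criteria can be seen working together in a minimal adaptive time-stepping loop. The following Python code is a language-agnostic sketch, not NDSolve's actual implementation; the explicit Euler base method, safety factors, and thresholds are illustrative choices. It combines step-doubling error estimation, rejection handling that avoids growing the step immediately after a rejection, endpoint clamping, an underflow guard, and compensated (Kahan) summation of the integration time:

```python
import math

def integrate(f, y0, t0, t1, h0, tol=1e-8):
    """Toy adaptive integrator: explicit Euler with step doubling."""
    t, y, h = t0, y0, h0
    comp = 0.0              # compensated-summation carry for t
    just_rejected = False
    while t1 - t > 1e-12 * (t1 - t0):
        h = min(h, t1 - t)  # never step past the endpoint
        # One full step vs. two half steps gives an error estimate.
        y_full = y + h * f(t, y)
        y_half = y + 0.5 * h * f(t, y)
        y_two = y_half + 0.5 * h * f(t + 0.5 * h, y_half)
        err = abs(y_two - y_full)
        if err <= tol:
            y = y_two
            dt = h - comp   # Kahan (compensated) summation of t
            t_new = t + dt
            comp = (t_new - t) - dt
            t = t_new
            if not just_rejected:  # no step growth right after a rejection
                h *= min(2.0, 0.9 * math.sqrt(tol / max(err, 1e-300)))
            just_rejected = False
        else:
            h *= 0.5        # reject the step and retry
            just_rejected = True
            if h < 1e-14 * max(abs(t), 1.0):
                raise RuntimeError("step size underflow; system may be stiff")
    return y
```

A production framework would of course use higher-order base methods and more refined controllers, but the bookkeeping shown here is the part that all one-step methods can share.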

Data Encapsulation

Each method has its own data object that contains information that is needed for the invocation of the method. This includes, but is not limited to, coefficients, workspaces, step-size control parameters, step-size acceptance/rejection information, and Jacobian matrices. This is a generalization of the ideas used in codes like LSODA ([H83], [P83]).
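In a language with explicit data structures, such a per-method data object might be sketched as follows. This is a hypothetical Python illustration; the class and field names are invented for this example and do not correspond to NDSolve internals:

```python
from dataclasses import dataclass, field

@dataclass
class MethodData:
    """Everything a method needs between steps travels in one structure."""
    coefficients: tuple                            # e.g. Butcher-tableau entries
    workspace: list = field(default_factory=list)  # reusable stage storage
    h: float = 0.0                                 # current step size
    rejections: int = 0                            # step-rejection statistics
    jacobian: object = None                        # cached Jacobian, if any

    def accept_step(self, h_new):
        self.h = h_new

    def reject_step(self):
        self.rejections += 1
```

Because the solver only ever passes this object back to the method that created it, each method is free to store whatever state it needs without the framework knowing its layout, which is the data-hiding aspect listed above.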

Method Hierarchy

Methods are reentrant and hierarchical, meaning that one method can call another. This is a generalization of the ideas used in the Generic ODE Solving System, Godess (see [O95], [O98], and the references therein), which is implemented in C++.
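The key idea can be sketched as a uniform step interface through which methods invoke one another. The Python code below is an illustration of the concept, not Godess or NDSolve code; the class names and the equal-fraction composition rule are invented for the example:

```python
class ExplicitEuler:
    def step(self, f, t, y, h):
        return y + h * f(t, y)

class ExplicitMidpoint:
    def step(self, f, t, y, h):
        k = f(t + 0.5 * h, y + 0.5 * h * f(t, y))
        return y + h * k

class Sequence:
    """A method built from other methods: it advances each submethod
    through an equal fraction of the step via the same uniform interface,
    so composites nest to arbitrary depth."""
    def __init__(self, *methods):
        self.methods = methods

    def step(self, f, t, y, h):
        frac = h / len(self.methods)
        for m in self.methods:
            y = m.step(f, t, y, frac)
            t += frac
        return y
```

Since `Sequence` itself satisfies the `step` interface, a `Sequence` can contain another `Sequence`, which is exactly the reentrant, hierarchical structure described above.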

Initial Design

The original method framework design allowed a number of methods to be invoked in the solver.



First Revision

This was later extended to allow one method to call another in a sequential fashion, with an arbitrary number of levels of nesting.


The construction of compound integration methods is particularly useful in geometric numerical integration.


Second Revision

A more general tree invocation process was required to implement composition methods.


This is an example of a method composed with its adjoint.
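For a concrete instance of this idea: the adjoint of the explicit Euler method is the implicit Euler method, and composing explicit Euler over half a step with its adjoint over the other half yields the symmetric implicit midpoint method. For the scalar linear equation y' = a y both half steps have closed forms, so the identity can be checked directly. The Python helpers below are invented names for this illustration:

```python
def euler_half(a, y, h):
    """Explicit Euler over h/2 for y' = a*y."""
    return (1.0 + 0.5 * h * a) * y

def adjoint_euler_half(a, y, h):
    """The adjoint of explicit Euler is implicit Euler; for y' = a*y
    the implicit half step solves in closed form."""
    return y / (1.0 - 0.5 * h * a)

def composed(a, y, h):
    # method composed with its adjoint over successive half steps
    return adjoint_euler_half(a, euler_half(a, y, h), h)

def implicit_midpoint(a, y, h):
    # for y' = a*y the midpoint equation solves to this rational form
    return (1.0 + 0.5 * h * a) / (1.0 - 0.5 * h * a) * y
```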

Current State

The tree invocation process was extended to allow for a subfield to be solved by each method, instead of the entire vector field.

This example turns up in the ABC Flow subsection of "Composition and Splitting Methods for NDSolve".
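The flavor of solving a subfield with each method can be seen in a splitting step for the harmonic oscillator x' = v, v' = -x, a much simpler example than the ABC flow. The Python helper below is an illustration; here the Lie-Trotter composition of the two exactly solvable subflows happens to reproduce the symplectic Euler method:

```python
def split_step(x, v, h):
    """One Lie-Trotter splitting step for x' = v, v' = -x:
    the exact flow of the subfield (v, 0) for time h,
    followed by the exact flow of the subfield (0, -x)."""
    x = x + h * v       # subfield 1: x' = v, v' = 0
    v = v - h * x       # subfield 2: x' = 0, v' = -x (uses updated x)
    return x, v
```

Because each substep is an exact flow, the composition inherits geometric properties (here, symplecticity), which is why this tree-with-subfields structure matters for geometric integration.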


User Extensibility

Built-in methods can be used as building blocks for the efficient construction of special-purpose (compound) integrators. User-defined methods can also be added.
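One common way to support user-defined methods is a registry keyed by method name, so that user classes implementing the uniform step interface become available alongside built-in ones. The sketch below is a hypothetical Python illustration of that idea, not NDSolve's actual plug-in mechanism:

```python
METHOD_REGISTRY = {}

def register_method(name):
    """Decorator registering a user-defined method class under a name
    (hypothetical mechanism, for illustration only)."""
    def wrap(cls):
        METHOD_REGISTRY[name] = cls
        return cls
    return wrap

@register_method("MyEuler")
class MyEuler:
    def step(self, f, t, y, h):
        return y + h * f(t, y)

def make_method(name, *args, **kwargs):
    """Look up a registered method by name and instantiate it."""
    return METHOD_REGISTRY[name](*args, **kwargs)
```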

Method Classes

Methods such as "ExplicitRungeKutta" include a number of schemes of different orders. Moreover, alternative coefficient choices can be specified by the user. This is a generalization of the ideas found in RKSUITE [BGS93].

Automatic Selection and User Controllability

The framework provides automatic step-size selection and method-order selection. Methods are user-configurable via method options.

For example, a user can select the class of methods, and the code will automatically attempt to ascertain the "optimal" order according to the problem, the relative and absolute local error tolerances, and the initial step-size estimate.

Here is a list of the options appropriate for "ExplicitRungeKutta".



In order to illustrate the low-level behavior of some methods, such as stiffness switching or order variation that occurs at run time, a new NDSolve`MethodMonitor has been added.

This fits between the relatively coarse resolution of StepMonitor and the fine resolution of EvaluationMonitor.

This feature is not officially documented and the functionality may change in future versions.

Shared Features

These features are not necessarily restricted to NDSolve since they can also be used for other types of numerical methods.

  • Function evaluation is performed using a NumericalFunction object that dynamically changes type as needed, such as when IEEE floating-point overflow or underflow occurs. It also calls the Wolfram Language compiler Compile for efficiency when appropriate.
  • Jacobian evaluation uses symbolic differentiation or finite difference approximations, including automatic or user-specifiable sparsity detection.
  • Dense linear algebra is based on LAPACK, and sparse linear algebra uses special-purpose packages such as UMFPACK.
  • Common subexpressions in the numerical evaluation of the function representing a differential system are detected and collected to avoid repeated work.
  • Other supporting functionality that has been implemented is described in "Norms in NDSolve".

    This system dynamically switches type from real to complex during the numerical integration, automatically recompiling as needed.

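The finite difference fallback for Jacobian evaluation mentioned above can be sketched as follows. This is a dense, pure-Python illustration without the sparsity handling; the function and parameter names are invented for the example:

```python
def fd_jacobian(f, y, eps=1e-7):
    """Forward-difference approximation to J[i][j] = d f_i / d y_j,
    used when symbolic differentiation of f is not possible."""
    f0 = f(y)
    n, m = len(f0), len(y)
    J = [[0.0] * m for _ in range(n)]
    for j in range(m):
        h = eps * max(abs(y[j]), 1.0)   # increment scaled to y[j]
        yp = list(y)
        yp[j] += h
        fj = f(yp)
        for i in range(n):
            J[i][j] = (fj[i] - f0[i]) / h
    return J
```

With a known sparsity pattern, structurally orthogonal columns can share a single perturbed evaluation of f, which is what makes sparsity detection worthwhile for large systems.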

Some Basic Methods

Order  Method
  1    Explicit Euler
  2    Explicit Midpoint
  1    Backward or Implicit Euler (1-stage RadauIIA)
  2    Implicit Midpoint (1-stage Gauss)
  2    Trapezoidal (2-stage Lobatto IIIA)
  1    Linearly Implicit Euler
  2    Linearly Implicit Midpoint

Some of the one-step methods that have been implemented.

Here I denotes the identity matrix and J denotes the Jacobian matrix J = ∂f/∂y.
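In the scalar case, a linearly implicit Euler step replaces the nonlinear solve of implicit Euler by a single linear solve with the Jacobian: y_{n+1} = y_n + h f(y_n)/(1 - h J). The Python sketch below uses invented names; for systems, the division becomes a linear solve with the matrix I - hJ:

```python
def linearly_implicit_euler(f, dfdy, y, h):
    """One linearly implicit Euler step for scalar y' = f(y):
    solve (1 - h*J) * dy = h * f(y) with J = f'(y), then return y + dy."""
    J = dfdy(y)
    dy = h * f(y) / (1.0 - h * J)
    return y + dy
```

For a linear problem such as y' = -y this coincides with implicit Euler, since the linearization is exact there.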

Although the implicit midpoint method has not been implemented as a separate method, it is available through the one-stage Gauss scheme of the "ImplicitRungeKutta" method.
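As an illustration of how the one-stage Gauss scheme yields the implicit midpoint rule, the stage equation k = f(t + h/2, y + h k/2) can be solved by fixed-point iteration when the problem is nonstiff. The Python code below is a sketch; a production implicit solver would instead use Newton iteration with the Jacobian so that stiff problems also converge:

```python
def implicit_midpoint_step(f, t, y, h, tol=1e-12, max_iter=50):
    """One implicit midpoint (1-stage Gauss) step for scalar y' = f(t, y),
    solving the stage equation k = f(t + h/2, y + h*k/2) by fixed point."""
    k = f(t, y)                      # initial guess: explicit slope
    for _ in range(max_iter):
        k_new = f(t + 0.5 * h, y + 0.5 * h * k)
        if abs(k_new - k) < tol:
            k = k_new
            break
        k = k_new
    return y + h * k
```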