Rounding-error feedback (compensated summation) is particularly advantageous for high-order methods and for methods that conserve specific quantities during the numerical integration.
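To illustrate the idea, here is a minimal Python sketch of Kahan compensated summation, the basic form of rounding-error feedback: the rounding error of each update is captured and fed back into the next one. This is an illustrative sketch of the technique, not NDSolve's implementation.

```python
def kahan_step(y, c, dy):
    """Add dy to y with compensated (Kahan) summation.

    c carries the low-order rounding error of previous additions
    and feeds it back into the next one."""
    t = dy - c          # correct the increment by the stored error
    s = y + t           # low-order bits of t may be lost in this add
    c = (s - y) - t     # recover exactly the bits that were lost
    return s, c

# Accumulate 10**7 increments of 1e-7; the exact result is 1.0.
# A naive running sum drifts noticeably, while the compensated sum
# stays accurate to machine precision.
y, c = 0.0, 0.0
naive = 0.0
for _ in range(10**7):
    y, c = kahan_step(y, c, 1e-7)
    naive += 1e-7
```

The same feedback idea pays off in a long integration, where millions of small step updates are accumulated into the solution values.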
Each method has its own data object that contains information that is needed for the invocation of the method. This includes, but is not limited to, coefficients, workspaces, step-size control parameters, step-size acceptance/rejection information, and Jacobian matrices. This is a generalization of the ideas used in codes like LSODA ([H83], [P83]).
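As a hypothetical sketch of what such a per-method data object might hold, consider the following Python dataclass; the field names are illustrative only and do not correspond to NDSolve's actual internals.

```python
from dataclasses import dataclass, field
from typing import Any, Optional

# Hypothetical sketch of a per-method data object.
# Field names are illustrative, not NDSolve's internals.
@dataclass
class MethodData:
    coefficients: Any                              # e.g. Butcher-tableau entries
    workspace: list = field(default_factory=list)  # reusable stage storage
    safety_factor: float = 0.9                     # step-size control parameter
    rejected_steps: int = 0                        # acceptance/rejection bookkeeping
    jacobian: Optional[Any] = None                 # cached Jacobian matrix, if any

# Each method instance carries its own state, so several methods can be
# active at once (for example, nested inside one another) without
# interfering with each other's data.
euler = MethodData(coefficients=[[1.0]])
```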
Methods are reentrant and hierarchical, meaning that one method can call another. This is a generalization of the ideas used in the Generic ODE Solving System, Godess (see [O95], [O98], and the references therein), which is implemented in C++.
The method framework was designed from the outset to allow any number of methods to be invoked within the solver.
Built-in methods can be used as building blocks for the efficient construction of special-purpose (compound) integrators. User-defined methods can also be added.
Methods such as "ExplicitRungeKutta" include a number of schemes of different orders. Moreover, alternative coefficient choices can be specified by the user. This is a generalization of the ideas found in RKSUITE [BGS93].
Automatic Selection and User Controllability
The framework provides automatic step-size selection and method-order selection. Methods are user-configurable via method options.
For example, a user can select the class of "ExplicitRungeKutta" methods, and the code will automatically attempt to ascertain the "optimal" order according to the problem, the relative and absolute local error tolerances, and the initial step-size estimate.
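A standard building block behind such automation is the elementary step-size update for a method of order p: if err is the local error estimate scaled by the tolerances, the next step size is h multiplied by safety * err^(-1/(p+1)), clamped to avoid erratic changes. The following Python sketch shows this textbook controller; it is illustrative and not NDSolve's actual controller, and the parameter names and default values are assumptions.

```python
def new_step_size(h, err, p, safety=0.9, fac_min=0.2, fac_max=5.0):
    """Elementary step-size controller for a method of order p.

    err is the local error estimate already scaled by the relative and
    absolute tolerances, so err <= 1 means the step is accepted."""
    if err == 0.0:
        return h * fac_max            # error negligible: grow as fast as allowed
    factor = safety * err ** (-1.0 / (p + 1))
    # Clamp the growth/shrink factor so one bad estimate cannot cause
    # a wild change in step size.
    factor = min(fac_max, max(fac_min, factor))
    return h * factor

h_accept = new_step_size(0.1, 1.0, 4)   # borderline step: shrink by the safety factor
h_reject = new_step_size(0.1, 1e6, 4)   # huge error: shrink, limited by fac_min
```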
Here is a list of options appropriate for "ExplicitRungeKutta".
To illustrate the low-level behavior of some methods, such as stiffness switching or order variation occurring at run time, a new "MethodMonitor" has been added.
This fits between the relatively coarse resolution of "StepMonitor" and the fine resolution of "EvaluationMonitor".
This feature is not officially documented and the functionality may change in future versions.
These features are not necessarily restricted to NDSolve since they can also be used for other types of numerical methods.
Function evaluation is performed using a NumericalFunction that dynamically changes type as needed, such as when IEEE floating-point overflow or underflow occurs. It also calls the Wolfram Language's compiler Compile for efficiency when appropriate.
Jacobian evaluation uses symbolic differentiation or finite difference approximations, including automatic or user-specifiable sparsity detection.
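As an illustration of the finite-difference approach, here is a minimal Python sketch of a forward-difference Jacobian that can exploit a user-supplied sparsity pattern by skipping structurally zero entries. It is a simplified sketch only; production codes additionally group columns by graph coloring so that several columns share one perturbation.

```python
def fd_jacobian(f, y, eps=1e-8, sparsity=None):
    """Forward-difference approximation of the Jacobian J[i][j] = df_i/dy_j.

    sparsity, if given, is a set of (i, j) pairs marking structurally
    nonzero entries; all other entries are skipped and left at zero.
    Illustrative sketch (real codes group columns by graph coloring)."""
    f0 = f(y)
    n, m = len(f0), len(y)
    J = [[0.0] * m for _ in range(n)]
    for j in range(m):
        yp = list(y)
        yp[j] += eps                  # perturb one component at a time
        fp = f(yp)
        for i in range(n):
            if sparsity is None or (i, j) in sparsity:
                J[i][j] = (fp[i] - f0[i]) / eps
    return J

# Example: f(y) = [y0^2, y0*y1] has Jacobian [[2*y0, 0], [y1, y0]],
# so entry (0, 1) is structurally zero and can be skipped.
def f(y):
    return [y[0] ** 2, y[0] * y[1]]

J = fd_jacobian(f, [3.0, 2.0], sparsity={(0, 0), (1, 0), (1, 1)})
```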
Dense linear algebra is based on LAPACK, and sparse linear algebra uses special-purpose packages such as UMFPACK.
Common subexpressions in the numerical evaluation of the function representing a differential system are detected and collected to avoid repeated work.
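The effect of collecting common subexpressions can be seen in a small Python sketch: for a right-hand side where sin(x*y) appears in two components, hoisting it into a local variable halves the number of transcendental evaluations. NDSolve performs this detection symbolically; the counter below merely makes the saving visible.

```python
import math

# Raw right-hand side: f1 = sin(x*y) + x, f2 = sin(x*y) * y.
# The shared subexpression sin(x*y) is evaluated twice unless collected.
calls = 0

def tracked_sin(v):
    global calls
    calls += 1
    return math.sin(v)

def rhs_naive(x, y):
    # Evaluates the shared subexpression twice.
    return (tracked_sin(x * y) + x, tracked_sin(x * y) * y)

def rhs_cse(x, y):
    s = tracked_sin(x * y)        # common subexpression computed once
    return (s + x, s * y)

calls = 0
naive = rhs_naive(1.2, 0.7)
naive_calls = calls
calls = 0
cse = rhs_cse(1.2, 0.7)
cse_calls = calls
```

Both versions return identical values, but the collected form does half the transcendental work per evaluation, which adds up over the many function calls of an integration.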
Other supporting functionality that has been implemented is described in "Norms in NDSolve".
This system dynamically switches type from real to complex during the numerical integration, automatically recompiling as needed.
Some Basic Methods
Backward or Implicit Euler (1-stage RadauIIA): y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})
Implicit Midpoint (1-stage Gauss): y_{n+1} = y_n + h f(t_n + h/2, (y_n + y_{n+1})/2)
Trapezoidal (2-stage Lobatto IIIA): y_{n+1} = y_n + h/2 (f(t_n, y_n) + f(t_{n+1}, y_{n+1}))
Linearly Implicit Euler: (I - h J)(y_{n+1} - y_n) = h f(t_n, y_n)
Linearly Implicit Midpoint: (I - h/2 J)(y_{n+1} - y_n) = h f(t_n + h/2, y_n)

Some of the one-step methods that have been implemented.
Here I denotes the identity matrix, and J = ∂f/∂y denotes the Jacobian matrix.
Although the implicit midpoint method has not been implemented as a separate method, it is available through the one-stage Gauss scheme of the "ImplicitRungeKutta" method.
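To make the scheme concrete, here is a Python sketch of one implicit midpoint step, solved by simple fixed-point iteration for a scalar equation. This is an illustrative sketch, not NDSolve's implementation; for stiff problems a Newton iteration on the implicit equation would be used instead.

```python
def implicit_midpoint_step(f, t, y, h, tol=1e-12, max_iter=50):
    """One step of the implicit midpoint rule (1-stage Gauss, order 2):

        y_{n+1} = y_n + h f(t_n + h/2, (y_n + y_{n+1})/2)

    solved here by fixed-point iteration on a scalar equation."""
    y_new = y + h * f(t, y)                       # explicit Euler predictor
    for _ in range(max_iter):
        y_next = y + h * f(t + h / 2, (y + y_new) / 2)
        if abs(y_next - y_new) < tol:
            return y_next
        y_new = y_next
    return y_new

# Example: y' = -y, y(0) = 1, integrated to t = 1 with h = 0.1.
# Each midpoint step multiplies y by (1 - h/2)/(1 + h/2).
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = implicit_midpoint_step(lambda t, y: -y, t, y, h)
    t += h
```

After ten steps the result agrees with the exact value exp(-1) ≈ 0.36788 to about three decimal places, consistent with the method's second-order accuracy.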