"ExplicitRungeKutta" Method for NDSolve

Introduction

This loads packages containing some test problems and utility functions:
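The original cell is not reproduced here. A minimal sketch of the intended initialization, assuming the standard add-on packages that supply the test problems (GetNDSolveProblem) and the comparison utilities (CompareMethods, StepDataPlot) used later in this tutorial:

    Needs["DifferentialEquations`NDSolveProblems`"];
    Needs["DifferentialEquations`NDSolveUtilities`"];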

Euler's Method

One of the first and simplest methods for solving initial value problems was proposed by Euler:
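The missing display is the familiar forward Euler update, written here with step size h for the initial value problem y'(t) = f(t, y(t)):

    y_{n+1} = y_n + h f(t_n, y_n).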

Euler's method is not very accurate.

Local accuracy is measured by how many terms of the Taylor expansion of the solution are matched. Euler's method is first-order accurate, so that errors occur one order higher, starting at powers of h^2.

Euler's method is implemented in NDSolve as "ExplicitEuler":
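A hedged example of the kind of call intended here; the test equation and the step size are illustrative choices, not taken from the original cell:

    NDSolve[{y'[t] == -y[t], y[0] == 1}, y, {t, 0, 1},
      Method -> "ExplicitEuler", StartingStepSize -> 1/10]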

Generalizing Euler's Method

The idea of Runge-Kutta methods is to take successive (weighted) Euler steps to approximate a Taylor series. In this way function evaluations (and not derivatives) are used.

For example, consider the one-step formulation of the midpoint method.
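The one-step formulation referred to is:

    y_{n+1} = y_n + h f(t_n + h/2, y_n + (h/2) f(t_n, y_n)).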

The midpoint method can be shown to have a local error of O(h^3), so it is second-order accurate.

The midpoint method is implemented in NDSolve as "ExplicitMidpoint":
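Again, a hedged example of the calling syntax with an illustrative test equation and step size:

    NDSolve[{y'[t] == -y[t], y[0] == 1}, y, {t, 0, 1},
      Method -> "ExplicitMidpoint", StartingStepSize -> 1/10]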

Runge-Kutta Methods

Extending the approach in (1), repeated function evaluation can be used to obtain higher-order methods.

Denote the Runge-Kutta method for the approximate solution of an initial value problem y'(t) = f(t, y(t)) at t_{n+1} = t_n + h by

    y_{n+1} = y_n + h Σ_{i=1}^{s} b_i k_i,
    k_i = f(t_n + c_i h, g_i),    g_i = y_n + h Σ_{j=1}^{s} a_{ij} k_j,

where s is the number of stages.

It is generally assumed that the row-sum conditions hold:

    c_i = Σ_{j=1}^{s} a_{ij},    i = 1, ..., s.

These conditions effectively determine the points in time at which the function is sampled and are a particularly useful device in the derivation of high-order Runge-Kutta methods.

The coefficients of the method are free parameters that are chosen to satisfy a Taylor series expansion through some order in the time step h. In practice other conditions such as stability can also constrain the coefficients.

Explicit Runge-Kutta methods are a special case where the coefficient matrix A = (a_{ij}) is strictly lower triangular:

    a_{ij} = 0 for j >= i.

It has become customary to denote the method coefficients c_i, a_{ij}, and b_i using a Butcher table, which has the following form for explicit Runge-Kutta methods.
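A sketch of the missing table, with the abscissas c_i in the left column, the strictly lower-triangular entries a_{ij} in the body, and the weights b_i along the bottom:

    0      |
    c_2    | a_21
    c_3    | a_31   a_32
    ...    | ...            ...
    c_s    | a_s1   a_s2    ...   a_s,s-1
    ---------------------------------------------
           | b_1    b_2     ...   b_{s-1}   b_s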

The row-sum conditions can be visualized as summing across the rows of the table.

Notice that a consequence of explicitness is c_1 = 0, so that the function is sampled at the beginning of the current integration step.

Example

The Butcher table for the explicit midpoint method (2) is given by:
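For reference, the table is:

    0     |
    1/2   | 1/2
    ------------------
          | 0     1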

FSAL Schemes

A particularly interesting special class of explicit Runge-Kutta methods, used in most modern codes, is that for which the coefficients have a special structure known as First Same As Last (FSAL):

    a_{s,i} = b_i for i = 1, ..., s-1,    b_s = 0,    c_s = 1.

For consistent FSAL schemes the Butcher table (3) has the form:

The advantage of FSAL methods is that the function value at the end of one integration step is the same as the first function value at the next integration step.

The function values at the beginning and end of each integration step are required anyway when constructing the InterpolatingFunction that is used for dense output in NDSolve.

Embedded Pairs and Local Error Estimation

An efficient means of obtaining local error estimates for adaptive step-size control is to consider two methods of different orders p and q that share the same coefficient matrix (and hence the same function values).

These give two solutions:
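The two solutions referred to are built from the same stage values k_i, using weights b_i for the order-p formula and b*_i for the order-q formula:

    y_{n+1}  = y_n + h Σ_{i=1}^{s} b_i  k_i      (order p)
    y*_{n+1} = y_n + h Σ_{i=1}^{s} b*_i k_i      (order q)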

A commonly used notation is p(q), typically with q = p - 1 or q = p + 1.

In most modern codes, including the default choice in NDSolve, the solution is advanced with the more accurate formula so that p = q + 1, which is known as local extrapolation.

The vector of coefficients e_i = b_i - b*_i gives an error estimator, avoiding subtractive cancellation of y_n in floating-point arithmetic when forming the difference between (4) and (5):

    err_n = h Σ_{i=1}^{s} e_i k_i.

The quantity ||err_n|| gives a scalar measure of the error that can be used for step-size selection.

Step Control

The classical Integral (or I) step-size controller uses the formula

    h_{n+1} = h_n (1 / ||err_n||)^(1/(k+1)),

where k = min(p, q).

The error estimate is therefore used to determine the next step size from the current step size.

The norm notation ||.|| is explained within "Norms in NDSolve".

Overview

Explicit Runge-Kutta pairs of orders 2(1) through 9(8) have been implemented.

Formula pairs have the following properties:

Example

Define the Brusselator ODE problem, which models a chemical reaction:
This solves the system using an explicit Runge-Kutta method:
Extract the interpolating functions from the solution:
Plot the solution components:
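The original cells are not reproduced; the following is a sketch of equivalent steps with the Brusselator written out explicitly (the parameter values and initial conditions are illustrative assumptions, not necessarily those used in the original example):

    (* Brusselator reaction equations with A = 1, B = 3 *)
    brusselator = {x'[t] == 1 + x[t]^2 y[t] - 4 x[t],
       y'[t] == 3 x[t] - x[t]^2 y[t],
       x[0] == 3/2, y[0] == 3};
    sol = First[NDSolve[brusselator, {x, y}, {t, 0, 20},
       Method -> "ExplicitRungeKutta"]];
    {xif, yif} = {x, y} /. sol;   (* the InterpolatingFunction objects *)
    Plot[{xif[t], yif[t]}, {t, 0, 20}]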

Method Comparison

Sometimes you may be interested to find out what methods are being used in NDSolve.

Here you can see the coefficients of the default 2(1) embedded pair:
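The cell is missing. If the default coefficient function is the one named in the option summary at the end of this tutorial, the cell presumably evaluated something like the following; both the NDSolve` context and the argument convention (order, then precision, with Infinity giving exact values) are assumptions:

    NDSolve`EmbeddedExplicitRungeKuttaCoefficients[2, Infinity]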

You also may want to compare some of the different methods to see how they perform for a specific problem.

Utilities

You will make use of a utility function CompareMethods for comparing various methods. Some NDSolve features of this function that are useful for comparing methods are:

Reference Solution

A number of examples for comparing numerical methods in the literature rely on the fact that a closed-form solution is available, which is obviously quite limiting.

In NDSolve it is possible to obtain very accurate approximations using arbitrary-precision, adaptive step size, adaptive order methods based on "Extrapolation".

The following reference solution is computed with a method that switches between a pair of "Extrapolation" methods, depending on whether the problem appears to be stiff:
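A sketch of how such a reference solution might be computed for the Brusselator system defined above; "StiffnessSwitching" alternates between nonstiff and stiff "Extrapolation" submethods, and the precision and goal settings are illustrative:

    refsol = First[NDSolve[brusselator, {x, y}, {t, 0, 20},
       Method -> "StiffnessSwitching", WorkingPrecision -> 32,
       AccuracyGoal -> 20, PrecisionGoal -> 20]];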

Automatic Order Selection

When you select "DifferenceOrder"->Automatic, the code will automatically attempt to choose the optimal order method for the integration.

Two algorithms have been implemented for this purpose and are described within "SymplecticPartitionedRungeKutta Method for NDSolve".

Example 1

Here is an example that compares built-in methods of various orders, together with the method that is selected automatically.

This selects the order of the methods to choose between and makes a list of method options to pass to NDSolve:
Compute the number of integration steps, function evaluations, and the endpoint global error:
Display the results in a table:
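A sketch of these three steps, reusing the Brusselator system and the reference solution refsol from above. The order range is illustrative, and the counting is done with the documented StepMonitor and EvaluationMonitor options rather than with the internals of CompareMethods:

    orders = Join[Range[3, 9], {Automatic}];
    methods = Table[{"ExplicitRungeKutta", "DifferenceOrder" -> p,
        "StiffnessTest" -> False}, {p, orders}];
    (* steps, function evaluations, and endpoint error in x for one method *)
    compare[m_] := Module[{steps = 0, evals = 0, sol},
      sol = First[NDSolve[brusselator, {x, y}, {t, 0, 20}, Method -> m,
         StepMonitor :> steps++, EvaluationMonitor :> evals++]];
      {steps, evals, Abs[(x[20] /. sol) - (x[20] /. refsol)]}];
    TableForm[compare /@ methods,
      TableHeadings -> {orders, {"Steps", "Evaluations", "Error"}}]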

The default method has order nine, which is close to the optimal order of eight in this example. One function evaluation is needed during the initialization phase to determine the order.

Example 2

A limitation of the previous example is that it did not take into account the accuracy of the solution obtained by each method, so that it did not give a fair reflection of the cost.

Rather than taking a single tolerance to compare methods, it is preferable to use a range of tolerances.

The following example compares various "ExplicitRungeKutta" methods of different orders using a variety of tolerances.

This selects the order of the methods to choose between and makes a list of method options to pass to NDSolve:
The data comparing accuracy and work is computed using CompareMethods for a range of tolerances:
The work-error comparison data for the various methods is displayed in the following logarithmic plot, where the global error is displayed on the vertical axis and the number of function evaluations on the horizontal axis. The default order selected (displayed in red) can be seen to increase as the tolerances are increased:
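A sketch of a work-error sweep along these lines, reusing methods and refsol from above; the tolerance range, the measured quantity (the endpoint error in x), and the plotting choices are illustrative:

    workError[m_] := Table[Module[{evals = 0, sol},
        sol = First[NDSolve[brusselator, {x, y}, {t, 0, 20}, Method -> m,
           AccuracyGoal -> tol, PrecisionGoal -> tol,
           EvaluationMonitor :> evals++]];
        {evals, Abs[(x[20] /. sol) - (x[20] /. refsol)]}], {tol, 3, 8}];
    data = workError /@ methods;
    ListLogLogPlot[data, Joined -> True,
      AxesLabel -> {"Function evaluations", "Global error"}]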

The order-selection algorithms are heuristic in that the optimal order may change through the integration but, as the examples illustrate, a reasonable default choice is usually made.

Ideally, a selection of different problems should be used for benchmarking.

Coefficient Plug-in

The implementation of "ExplicitRungeKutta" provides a default method pair at each order.

Sometimes, however, it is convenient to use a different method, for example:

The Classical Runge-Kutta Method

This shows how to define the coefficients of the classical explicit Runge-Kutta method of order four, approximated to precision p:
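A sketch of such a definition. The layout assumed here is {amat, bvec, cvec}: the strictly lower-triangular rows of the coefficient matrix (starting with the second stage), the weights, and the abscissas without the leading zero. Treat the exact layout expected by the "Coefficients" option as an assumption:

    ClassicalRungeKuttaCoefficients[4, p_] :=
      With[{
        amat = {{1/2}, {0, 1/2}, {0, 0, 1}},
        bvec = {1/6, 1/3, 1/3, 1/6},
        cvec = {1/2, 1/2, 1}},
       N[{amat, bvec, cvec}, p]];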

The method has no embedded error estimate and hence there is no specification of the coefficient error vector. This means that the method is invoked with fixed step sizes.

Here is an example of the calling syntax:
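A hedged example using the coefficients just defined; the test equation and step size are illustrative:

    NDSolve[{y'[t] == -y[t], y[0] == 1}, y, {t, 0, 1},
      Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 4,
        "Coefficients" -> ClassicalRungeKuttaCoefficients},
      StartingStepSize -> 1/10]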

ode23

This defines the coefficients for a 3(2) FSAL explicit Runge-Kutta pair.

The third-order formula is due to Ralston, and the embedded method was derived by Bogacki and Shampine [BS89a].

This defines a function for computing the coefficients to a desired precision:
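A sketch of the Bogacki-Shampine 3(2) coefficients. The weights and abscissas are the published values; the fourth entry is assumed to be the error-estimate vector (the difference of the third- and second-order weights), and the overall {amat, bvec, cvec, evec} layout follows the format assumed for the classical method above:

    ode23Coefficients[3, p_] :=
      With[{
        amat = {{1/2}, {0, 3/4}, {2/9, 1/3, 4/9}},
        bvec = {2/9, 1/3, 4/9, 0},
        cvec = {1/2, 3/4, 1},
        evec = {-5/72, 1/12, 1/9, -1/8}},
       N[{amat, bvec, cvec, evec}, p]];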

The method is used in the Texas Instruments TI-85 pocket calculator, Matlab, and RKSUITE [S94]. Unfortunately it does not allow for the form of stiffness detection that has been chosen.

A Method of Fehlberg

This defines the coefficients for a 4(5) explicit Runge-Kutta pair of Fehlberg that was popular in the 1960s [F69].

The fourth-order formula is used to propagate the solution, and the fifth-order formula is used only for the purpose of error estimation.

This defines the function for computing the coefficients to a desired precision:
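A sketch with the published Fehlberg 4(5) coefficients; the fourth-order weights are the propagating formula and the error vector is taken as the difference of the fifth- and fourth-order weights (both the sign convention and the {amat, bvec, cvec, evec} layout are assumptions):

    FehlbergCoefficients[4, p_] :=
      With[{
        amat = {{1/4},
          {3/32, 9/32},
          {1932/2197, -7200/2197, 7296/2197},
          {439/216, -8, 3680/513, -845/4104},
          {-8/27, 2, -3544/2565, 1859/4104, -11/40}},
        bvec = {25/216, 0, 1408/2565, 2197/4104, -1/5, 0},
        cvec = {1/4, 3/8, 12/13, 1, 1/2},
        evec = {1/360, 0, -128/4275, -2197/75240, 1/50, 2/55}},
       N[{amat, bvec, cvec, evec}, p]];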

In contrast to the classical Runge-Kutta method of order four, the coefficients include an additional entry that is used for error estimation.

The Fehlberg method is not a FSAL scheme since the coefficient matrix is not of the form (6); it is a six-stage scheme, but it requires seven function evaluations per step because of the function evaluation that is required at the end of the step to construct the InterpolatingFunction.

A Dormand-Prince Method

Here is how to define a 5(4) pair of Dormand and Prince coefficients [DP80]. This is currently the method used by ode45 in Matlab.

This defines a function for computing the coefficients to a desired precision:
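A sketch with the published Dormand-Prince 5(4) (DOPRI5) coefficients; as above, the {amat, bvec, cvec, evec} layout and the sign convention for the error vector (fifth- minus fourth-order weights) are assumptions, while the numerical values are the standard published ones:

    DOPRICoefficients[5, p_] :=
      With[{
        amat = {{1/5},
          {3/40, 9/40},
          {44/45, -56/15, 32/9},
          {19372/6561, -25360/2187, 64448/6561, -212/729},
          {9017/3168, -355/33, 46732/5247, 49/176, -5103/18656},
          {35/384, 0, 500/1113, 125/192, -2187/6784, 11/84}},
        bvec = {35/384, 0, 500/1113, 125/192, -2187/6784, 11/84, 0},
        cvec = {1/5, 3/10, 4/5, 8/9, 1, 1},
        evec = {71/57600, 0, -71/16695, 71/1920, -17253/339200, 22/525, -1/40}},
       N[{amat, bvec, cvec, evec}, p]];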

The Dormand-Prince method is a FSAL scheme since the coefficient matrix is of the form (7); it is a seven-stage scheme, but effectively uses only six function evaluations.

Here is how the coefficients of Dormand and Prince can be used in place of the built-in choice. Since the structure of the coefficients includes an error vector, the implementation is able to ascertain that adaptive step sizes can be computed:
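A hedged example using the DOPRICoefficients function sketched above; because an error vector is present, no fixed step size needs to be specified:

    NDSolve[{y'[t] == -y[t], y[0] == 1}, y, {t, 0, 1},
      Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 5,
        "Coefficients" -> DOPRICoefficients}]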

Method Comparison

Here you solve a system using several explicit Runge-Kutta pairs.

For the Fehlberg 4(5) pair, the option "EmbeddedDifferenceOrder" is used to specify the order of the embedded method:
The Dormand and Prince 5(4) pair is defined as follows:
The 5(4) pair of Bogacki and Shampine is the default order-five method:
Put the methods and some descriptive names together in a list:
Compute the number of integration steps, function evaluations, and the endpoint global error:
Display the results in a table:
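A sketch of the preceding steps, reusing the coefficient functions sketched earlier together with the compare helper, the Brusselator system, and the reference solution from the earlier examples; the variable names and labels are illustrative:

    fehlberg45 = {"ExplicitRungeKutta", "DifferenceOrder" -> 4,
       "Coefficients" -> FehlbergCoefficients,
       "EmbeddedDifferenceOrder" -> 5, "StiffnessTest" -> False};
    dopri54 = {"ExplicitRungeKutta", "DifferenceOrder" -> 5,
       "Coefficients" -> DOPRICoefficients, "StiffnessTest" -> False};
    bs54 = {"ExplicitRungeKutta", "DifferenceOrder" -> 5, "StiffnessTest" -> False};
    pairs = {fehlberg45, dopri54, bs54};
    names = {"Fehlberg 4(5)", "Dormand-Prince 5(4)", "Bogacki-Shampine 5(4)"};
    TableForm[compare /@ pairs,
      TableHeadings -> {names, {"Steps", "Evaluations", "Error"}}]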

The default method was the least expensive and provided the most accurate solution.

Method Plug-in

This shows how to implement the classical explicit Runge-Kutta method of order four using the method plug-in environment.

This definition is optional since the method in fact has no data. However, any expression can be stored inside the data object. For example, the coefficients could be approximated here to avoid coercion from rational to floating-point numbers at each integration step:
The actual method implementation is written using a stepping procedure:
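The original cell used the NDSolve method plug-in framework, whose exact step signature is not reproduced here. As a framework-independent illustration of how textbook-like the stepping procedure is, here is the same classical fourth-order step written as a plain function (this is not the plug-in API, only the arithmetic that a plug-in step would perform):

    (* one classical Runge-Kutta step of size h for y'[t] == f[t, y] *)
    crk4Step[f_, t_, h_, y_] :=
      Module[{k1, k2, k3, k4},
        k1 = f[t, y];
        k2 = f[t + h/2, y + h k1/2];
        k3 = f[t + h/2, y + h k2/2];
        k4 = f[t + h, y + h k3];
        y + h (k1 + 2 k2 + 2 k3 + k4)/6];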

Notice that the implementation closely resembles the description that you might find in a textbook. There are no memory allocation/deallocation statements or type declarations, for example. In fact the implementation works for machine real numbers or machine complex numbers, and even using arbitrary-precision software arithmetic.

Here is an example of the calling syntax. For simplicity the method only uses fixed step sizes, so you need to specify what step sizes to take:
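Since the plug-in calling syntax is not reproduced here, the following hedged stand-in simply drives the crk4Step function above with fixed steps; it illustrates the fixed-step requirement, not the NDSolve interface:

    (* ten fixed steps of size 1/10 for y' == -y, y(0) == 1 *)
    With[{h = 1/10, f = Function[{t, y}, -y]},
      NestList[{#[[1]] + h, crk4Step[f, #[[1]], h, #[[2]]]} &, {0., 1.}, 10]]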

Many of the methods that have been built into NDSolve were first prototyped using top-level code before being implemented in the kernel for efficiency.

Stiffness

Stiffness is a combination of problem, initial data, numerical method, and error tolerances.

Stiffness can arise, for example, in the translation of diffusion terms by divided differences into a large system of ODEs.

In order to understand more about the nature of stiffness it is useful to study how methods behave when applied to a simple problem.

Linear Stability

Consider applying a Runge-Kutta method to a linear scalar equation known as Dahlquist's equation:

    y'(t) = λ y(t),    y(0) = 1,

with λ a complex constant.

The result is a rational function R(z), a ratio of polynomials in z = h λ (see for example [L87]).

This utility function finds the linear stability function for Runge-Kutta methods. The form depends on the coefficients and is a polynomial if the Runge-Kutta method is explicit.
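A sketch of such a utility, based on the standard closed form R(z) = 1 + z b.(I - z A)^(-1).1; the coefficient matrix is expected in the compact lower-triangular form used by the coefficient functions above and is padded to a full square matrix first:

    RungeKuttaLinearStabilityFunction[amat_, bvec_, z_] :=
      Module[{s = Length[bvec], a},
        (* full s x s strictly lower-triangular matrix *)
        a = PadRight[Prepend[amat, {}], {s, s}];
        Simplify[1 + z bvec.Inverse[IdentityMatrix[s] - z a].ConstantArray[1, s]]];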

Here is the stability function for the fifth-order scheme in the Dormand-Prince 5(4) pair:
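Applying the utility to the Dormand-Prince coefficients sketched earlier (kept exact by requesting precision Infinity; use a high finite precision instead if that is not accepted) should reproduce the well-known DOPRI5 stability polynomial:

    {amat, bvec, cvec, evec} = DOPRICoefficients[5, Infinity];
    RungeKuttaLinearStabilityFunction[amat, bvec, z]
    (* expected result: 1 + z + z^2/2 + z^3/6 + z^4/24 + z^5/120 + z^6/600 *)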

The following package is useful for visualizing linear stability regions for numerical methods for differential equations:
You can now visualize the absolute stability region, the set of points z in the complex plane for which |R(z)| <= 1:
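The package cell is not reproduced here; a package-free sketch using RegionPlot gives the same kind of picture for the DOPRI5 stability polynomial computed above:

    R[z_] = 1 + z + z^2/2 + z^3/6 + z^4/24 + z^5/120 + z^6/600;
    RegionPlot[Abs[R[x + I y]] <= 1, {x, -5, 2}, {y, -4, 4},
      FrameLabel -> {"Re(z)", "Im(z)"}]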

Depending on the magnitude of λ in (8), if you choose the step size h such that |R(h λ)| <= 1, then errors in successive steps will be damped, and the method is said to be absolutely stable.

If λ is large in magnitude, then step-size selection will be restricted by stability and not by local accuracy.

Stiffness Detection

The device for stiffness detection that is used with the option "StiffnessTest" is described within "StiffnessTest Method Option for NDSolve".

Recast in terms of explicit Runge-Kutta methods, the quantity used for stiffness detection can be formulated as

    ρ = ||f(t_n + c_s h, g_s) - f(t_n + c_{s-1} h, g_{s-1})|| / ||g_s - g_{s-1}||,

with the stage values g_{s-1} and g_s defined in (9).

The difference g_s - g_{s-1} can be shown to correspond to a number of applications of the power method applied to the Jacobian of the problem.

The difference is therefore a good approximation of the eigenvector corresponding to the leading eigenvalue.

The product h ρ gives an estimate that can be compared to the stability boundary in order to detect stiffness.

An s-stage explicit Runge-Kutta method has a form suitable for (10) if c_{s-1} = c_s = 1.

The default embedded pairs used in "ExplicitRungeKutta" all have the form (11).

An important point is that (12) is very cheap and convenient; it uses already available information from the integration and requires no additional function evaluations.

Another advantage of (13) is that it is straightforward to make use of consistent FSAL methods (14).

Examples

Select a stiff system modeling a chemical reaction:

This applies a built-in explicit Runge-Kutta method to the stiff system.

By default stiffness detection is enabled, since it only has a small impact on the running time:
The coefficients of the Dormand-Prince 5(4) pair are of the form (15) so stiffness detection is enabled:
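A sketch of the preceding steps. The Robertson chemical kinetics system is written out explicitly here as an illustrative stand-in for whatever problem the original cell selected; DOPRICoefficients is the function sketched earlier:

    robertson = {y1'[t] == -y1[t]/25 + 10^4 y2[t] y3[t],
       y2'[t] == y1[t]/25 - 10^4 y2[t] y3[t] - 3*10^7 y2[t]^2,
       y3'[t] == 3*10^7 y2[t]^2,
       y1[0] == 1, y2[0] == 0, y3[0] == 0};
    NDSolve[robertson, {y1, y2, y3}, {t, 0, 3/10},
      Method -> "ExplicitRungeKutta"];
    NDSolve[robertson, {y1, y2, y3}, {t, 0, 3/10},
      Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 5,
        "Coefficients" -> DOPRICoefficients}];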
Since no "LinearStabilityBoundary" property has been specified, a default value is chosen. In this case the value corresponds to a generic method of order 5:
You can set up an equation in terms of the linear stability function and solve it exactly to find the point where the contour crosses the negative real axis:
The default generic value is very slightly smaller in magnitude than the computed value:
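A sketch of how the crossing point can be computed for the fifth-order Dormand-Prince stability polynomial R(z); on the negative real axis the boundary of the stability region is where |R(z)| returns to 1, so the equation solved here is R(z) == 1 with z < 0 (an assumption about what the original cell did):

    R[z_] = 1 + z + z^2/2 + z^3/6 + z^4/24 + z^5/120 + z^6/600;
    boundary = z /. First[Solve[R[z] == 1 && z < 0, z, Reals]];
    N[boundary]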

In general, there may be more than one point of intersection, and it may be necessary to choose the appropriate solution.

The following definition sets the value of the linear stability boundary:
Using the new value for this example does not affect the time at which stiffness is detected:

The Fehlberg 4(5) method does not have the correct coefficient structure (16) required for stiffness detection, since c_{s-1} ≠ c_s.

The default value "StiffnessTest"->Automatic checks to see if the method coefficients provide a stiffness detection capability; if they do, then stiffness detection is enabled.

Step Control Revisited

There are some reasons to look at alternatives to the standard Integral step controller (17) when considering mildly stiff problems.

This system models a chemical reaction:
This defines an explicit Runge-Kutta method based on the Dormand-Prince coefficients that does not use stiffness detection:
This solves the system and plots the step sizes that are taken using the utility function StepDataPlot:
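A sketch of these steps, reusing the Robertson system written out above; StepDataPlot is the utility from the NDSolveUtilities` package (its exact calling convention is assumed), and the method is named IERK to match the reference in the PI example below:

    IERK = {"ExplicitRungeKutta", "DifferenceOrder" -> 5,
       "Coefficients" -> DOPRICoefficients, "StiffnessTest" -> False};
    solI = NDSolve[robertson, {y1, y2, y3}, {t, 0, 3/10}, Method -> IERK];
    StepDataPlot[solI]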

Solving a stiff or mildly stiff problem with the standard step-size controller leads to large oscillations in the step-size sequence and sometimes to a number of undesirable step-size rejections.

The study of this issue is known as step-control stability.

It can be studied by matching the linear stability regions for the high- and low-order methods in an embedded pair.

One approach to addressing the oscillation is to derive special methods, but this compromises the local accuracy.

PI Step Control

An appealing alternative to Integral step control (18) is Proportional-Integral or PI step control.

In this case the step size is selected using the local error in two successive integration steps according to the formula:

    h_{n+1} = h_n (1 / ||err_n||)^(k1/(k+1)) (||err_{n-1}|| / ||err_n||)^(k2/(k+1)),

where, as before, k = min(p, q).

This has the effect of damping oscillations in the step size and hence gives a smoother step-size sequence.

Note that Integral step control (19) is a special case of (20), corresponding to k1 = 1 and k2 = 0, and is used if a step is rejected.

The option "StepSizeControlParameters"->{k1,k2} can be used to specify the values of k1 and k2.

The scaled error estimate ||err_{n-1}|| in (21) is taken to be 1 for the first integration step.

Examples

Stiff Problem

This defines a method similar to IERK that uses the option "StepSizeControlParameters" to specify a PI controller.

Here you use generic control parameters suggested by Gustafsson:

This specifies the step-control parameters:
Solving the system again, it can be observed that the step-size sequence is now much smoother:
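A sketch of these steps, reusing IERK and the Robertson system from above; the Gustafsson-style parameter values are assumed to be the {3/10, 2/5} pair quoted in the option summary at the end of this tutorial:

    PIERK = Join[IERK, {"StepSizeControlParameters" -> {3/10, 2/5}}];
    solPI = NDSolve[robertson, {y1, y2, y3}, {t, 0, 3/10}, Method -> PIERK];
    StepDataPlot[solPI]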

Nonstiff Problem

In general the I step controller (22) is able to take larger steps for a nonstiff problem than the PI step controller (23) as the following example illustrates.

Select and solve a nonstiff system using the I step controller:
Using the PI step controller the step sizes are slightly smaller:
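A sketch of the comparison on a simple nonstiff system; the harmonic oscillator used here is an illustrative stand-in for whatever problem the original cells selected, and the two controllers are selected explicitly through "StepSizeControlParameters":

    nonstiff = {y''[t] + y[t] == 0, y[0] == 1, y'[0] == 0};
    solI = NDSolve[nonstiff, y, {t, 0, 10},
       Method -> {"ExplicitRungeKutta", "StepSizeControlParameters" -> {1, 0}}];
    solPI = NDSolve[nonstiff, y, {t, 0, 10},
       Method -> {"ExplicitRungeKutta", "StepSizeControlParameters" -> {3/10, 2/5}}];
    StepDataPlot /@ {solI, solPI}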

For this reason, the default setting for "StepSizeControlParameters" is Automatic, which is interpreted as: use the I controller values {1, 0} if stiffness detection is active and the PI controller values {3/10, 2/5} otherwise.

Fine-Tuning

Instead of using (26) directly, it is common practice to use safety factors to ensure that the error is acceptable at the next step with high probability, thereby preventing unwanted step rejections.

The option "StepSizeSafetyFactors"->{s1,s2} specifies the safety factors to use in the step-size estimate so that (27) becomes:

Here s1 is an absolute factor and s2 typically scales with the order of the method.

The option "StepSizeRatioBounds"->{srmin,srmax} specifies bounds on the next step size to take such that:

Option Summary

option name (default value): description
"Coefficients" (EmbeddedExplicitRungeKuttaCoefficients): specify the coefficients of the explicit Runge-Kutta method
"DifferenceOrder" (Automatic): specify the order of local accuracy
"EmbeddedDifferenceOrder" (Automatic): specify the order of the embedded method in a pair of explicit Runge-Kutta methods
"StepSizeControlParameters" (Automatic): specify the PI step-control parameters (see (28))
"StepSizeRatioBounds" ({,4}): specify the bounds on a relative change in the new step size (see (29))
"StepSizeSafetyFactors" (Automatic): specify the safety factors to use in the step-size estimate (see (30))
"StiffnessTest" (Automatic): specify whether to use the stiffness detection capability

Options of the method "ExplicitRungeKutta".

The default setting of Automatic for the option "DifferenceOrder" selects the default coefficient order based on the problem, initial values, and local error tolerances, balanced against the work of the method for each coefficient set.

The default setting of Automatic for the option "EmbeddedDifferenceOrder" specifies that the default order of the embedded method is one lower than the method order. This depends on the value of the "DifferenceOrder" option.

The default setting of Automatic for the option "StepSizeControlParameters" uses the values {1,0} if stiffness detection is active and {3/10,2/5} otherwise.

The default setting of Automatic for the option "StepSizeSafetyFactors" uses the values {17/20,9/10} if the I step controller (31) is used and {9/10,9/10} if the PI step controller (32) is used. The step controller used depends on the values of the options "StepSizeControlParameters" and "StiffnessTest".

The default setting of Automatic for the option "StiffnessTest" will activate the stiffness test if the coefficients have the form (33).