Norms in NDSolve

NDSolve uses norms of error estimates to determine when solutions satisfy error tolerances. In nearly all cases the norm has been weighted, or scaled, such that it is less than 1 if error tolerances have been satisfied and greater than 1 if error tolerances are not satisfied. One significant advantage of such a scaled norm is that a given method can be written without explicit reference to tolerances: the satisfaction of tolerances is found by comparing the scaled norm to 1, thus simplifying the code required for checking error estimates within methods.

Suppose that v is a vector and u is a reference vector used to compute weights (typically u is an approximate solution vector). Then the scaled vector to which the norm is applied has components:

	w_i = v_i/(tr |u_i| + ta)	(1)

where the absolute and relative tolerances ta and tr are derived respectively from the AccuracyGoal->ag and PrecisionGoal->pg options by ta = 10^-ag and tr = 10^-pg.
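As a concrete sketch, the scaled components can be computed directly; the vectors here are hypothetical, and the tolerance values assume the default machine-precision goals ag = pg = 8:

```mathematica
(* Hypothetical error estimate v and reference solution u *)
v = {1.*^-9, 2.*^-7};
u = {1., 100.};
(* Tolerances for the (assumed) default goals ag = pg = 8 *)
ta = 10.^-8; tr = 10.^-8;
(* Scaled components w_i = v_i/(tr |u_i| + ta) *)
w = v/(tr Abs[u] + ta)
```

An infinity-norm of w less than 1 then indicates that the error estimate satisfies the tolerances.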

The actual norm used is determined by the setting for the NormFunction option given to NDSolve.

option name	default value
NormFunction	Automatic	a function to use to compute norms of error estimates in NDSolve

NormFunction option to NDSolve.

The setting for the NormFunction option can be any function that returns a scalar for a vector argument and satisfies the properties of a norm. If you specify a function that does not satisfy the required properties of a norm, NDSolve will almost certainly run into problems, and any answer it returns is likely to be incorrect.

The default value of Automatic means that NDSolve may use different norms for different methods. Most methods use an infinity-norm, but the IDA method for DAEs uses a 2-norm because that helps maintain smoothness in the merit function for finding roots of the residual. It is strongly recommended that you use a scaled p-norm, Norm[#,p]/Length[#]^(1/p)&, for some particular value of p. For this reason, you can also use the shorthand NormFunction->p in place of NormFunction->(Norm[#,p]/Length[#]^(1/p)&). The implementations for the most commonly used values p = 1, p = 2, and p = Infinity have been specially optimized for speed.

This compares the overall error for computing the solution to the simple harmonic oscillator over 100 cycles with different norms specified:
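A sketch of such a comparison; the equation, integration range, and measured quantity here are assumptions for illustration:

```mathematica
(* Harmonic oscillator x'' == -x over 100 cycles (t from 0 to 200 Pi);
   the exact solution is Cos[t], so x[200 Pi] should be 1 *)
TableForm[
 Table[
  sol = First[NDSolve[{x''[t] == -x[t], x[0] == 1, x'[0] == 0},
     x, {t, 0, 200 Pi}, NormFunction -> p, MaxSteps -> Infinity]];
  {p, Abs[(x[200 Pi] /. sol) - 1]},
  {p, {1, 2, Infinity}}]]
```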

The reason that the error decreases with increasing p is that the norms are normalized by multiplying with n^(-1/p), where n is the length of the vector. This normalization is often important in NDSolve because in many cases an attempt is being made to check the approximation to a function, where more points should give a better approximation, and hence less error.

Consider a finite difference approximation to the first derivative of a periodic function f given by f'(x_i) ≈ (f(x_{i+1}) - f(x_i))/h on a grid with uniform spacing h, where x_i = i h. In the Wolfram Language, this can easily be computed using ListCorrelate.
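A minimal sketch of such an approximation, assuming the kernel {-1, 1} and cyclic overhangs in ListCorrelate:

```mathematica
(* Periodic forward-difference approximation to f' on a uniform grid *)
fdd[values_, h_] := ListCorrelate[{-1, 1}, values, {1, 1}]/h
```

The overhang specification {1, 1} makes ListCorrelate treat the data cyclically, which matches the periodicity of f.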

This computes the error of the first derivative approximation for the cosine function on a grid with 16 points covering the interval [0, 2π):
This computes the error of the first derivative approximation for the cosine function on a grid with 32 points covering the interval [0, 2π):
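A sketch of both computations, assuming the interval [0, 2π) and a periodic forward-difference approximation defined with ListCorrelate:

```mathematica
fdd[values_, h_] := ListCorrelate[{-1, 1}, values, {1, 1}]/h
(* Error vector: approximate derivative of Cos minus the exact derivative -Sin *)
err[n_] := With[{h = N[2 Pi/n]},
  With[{grid = h Range[0, n - 1]},
   fdd[Cos[grid], h] + Sin[grid]]]
err16 = err[16];
err32 = err[32];
(* The maximum pointwise error roughly halves when the grid is refined *)
{Max[Abs[err16]], Max[Abs[err32]]}
```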

It is quite apparent that the pointwise error is significantly less with a larger number of points.

The 2-norms of the error vectors are of the same order of magnitude:
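A self-contained sketch of this comparison, under the same assumptions as before (interval [0, 2π), forward-difference approximation):

```mathematica
fdd[values_, h_] := ListCorrelate[{-1, 1}, values, {1, 1}]/h
err[n_] := With[{h = N[2 Pi/n]},
  With[{grid = h Range[0, n - 1]},
   fdd[Cos[grid], h] + Sin[grid]]]
(* Unnormalized 2-norms: comparable despite the finer grid *)
{Norm[err[16]], Norm[err[32]]}
```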

The norms of the vectors are comparable because the number of components in the vector has increased, so the usual linear algebra norm does not properly reflect the convergence. Normalizing by multiplying by n^(-1/p) (1/√n for the 2-norm) reflects the convergence in the function space properly.

The normalized 2 norms of the vectors reflect the convergence to the actual function. Since the approximation is first order, doubling the number of grid points should approximately halve the error:
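A self-contained sketch of the normalized comparison, again assuming the interval [0, 2π) and the forward-difference approximation:

```mathematica
fdd[values_, h_] := ListCorrelate[{-1, 1}, values, {1, 1}]/h
err[n_] := With[{h = N[2 Pi/n]},
  With[{grid = h Range[0, n - 1]},
   fdd[Cos[grid], h] + Sin[grid]]]
(* Normalized 2-norms: the second value is roughly half the first,
   consistent with first-order convergence *)
{Norm[err[16]]/Sqrt[16.], Norm[err[32]]/Sqrt[32.]}
```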

Note that if you specify a function as an option value and you intend to use it for PDE or function approximation solutions, you should be sure to include a proper normalization in the function.


Methods that have error control need to determine whether a step satisfies local error tolerances. To simplify checking this, the utility function ScaledVectorNorm applies the scaling (1) and computes the norm. The table includes the formulas for specific values of p for reference.

ScaledVectorNorm[p,{tr,ta}][v,u]	compute the normalized p-norm of the vector v, scaled using (1) with reference vector u and relative and absolute tolerances tr and ta
ScaledVectorNorm[fun,{tr,ta}][v,u]	compute the norm of the vector v using scaling (1) with reference vector u, relative and absolute tolerances tr and ta, and norm function fun
ScaledVectorNorm[2,{tr,ta}][v,u]	compute sqrt((1/n) Σ_{i=1}^{n} (v_i/(tr |u_i| + ta))^2), where n is the length of the vectors v and u
ScaledVectorNorm[∞,{tr,ta}][v,u]	compute max_{1≤i≤n} |v_i|/(tr |u_i| + ta), where n is the length of the vectors v and u


This sets up a scaled vector norm object with the default machine-precision tolerances used in NDSolve:
This applies the scaled norm object with a sample error and solution reference vector:
Because of the absolute tolerance term, the value comes out reasonably even if some of the components of the reference solution are zero:
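A sketch of these three steps; it assumes the utility lives in the NDSolve` context and that the default machine-precision tolerances are ta = tr = 10^-8:

```mathematica
(* Scaled infinity-norm object with (assumed) default
   machine-precision tolerances *)
snorm = NDSolve`ScaledVectorNorm[Infinity, {10.^-8, 10.^-8}];
(* Apply to a sample error vector with a solution reference vector;
   a result below 1 means the tolerances are satisfied *)
snorm[{2.*^-9, 3.*^-9}, {1., 2.}]
(* A zero component in the reference vector is handled gracefully,
   because the absolute tolerance term keeps the denominator nonzero *)
snorm[{2.*^-9, 3.*^-9}, {1., 0.}]
```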

When setting up a method for NDSolve, you can get the appropriate ScaledVectorNorm object to use from the "Norm" method function of the NDSolve`StateData object.

Here is an NDSolve`StateData object:
This gets the appropriate scaled norm to use from the state data:
This applies it to a sample error vector using the initial condition as reference vector:
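A sketch of this sequence; the exact calling conventions of NDSolve`ProcessEquations, the "Norm" method function, and the "SolutionVector" accessor are assumptions here, and the error vector is hypothetical:

```mathematica
(* Process equations to get an NDSolve`StateData object *)
state = First[NDSolve`ProcessEquations[
   {y''[t] == -y[t], y[0] == 1, y'[0] == 0}, y, {t, 0, 2 Pi}]];
(* Get the scaled norm the chosen method will use (assumed invocation) *)
snorm = state@"Norm"[];
(* Apply it to a hypothetical error vector, using the initial
   solution vector as the reference *)
snorm[{1.*^-9, 2.*^-9}, state@"SolutionVector"["Forward"]]
```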