NIntegrate Integration Rules

Introduction

An integration rule computes an estimate of an integral over a region, typically using a weighted sum. In the context of NIntegrate usage, an integration rule object provides both an integral estimate and an error estimate as a measure of the integral estimate's accuracy.

The following general notes pertain to weighted sum type integration rules such as "GaussKronrodRule" and "MultidimensionalRule". A separate discussion applies to other types of rules such as "LevinRule".

An integration rule samples the integrand at a set of points x_1, x_2, …, x_n, called sampling points. In the literature these are also called abscissas. Corresponding to each sampling point x_i there is a weight number w_i. An integration rule estimates the integral with the weighted sum ∑_(i=1)^n w_i f(x_i). An integration rule is a functional, that is, it maps functions over the interval (or a more general region) into real numbers.

If a rule R is applied to a function f over the region V, this will be denoted as R(f, V), where f is the integrand.

The sampling points of the rules considered below are chosen to compute estimates for integrals either over the interval [0, 1], the unit cube [0, 1]^d, or the "centered" unit cube [-1/2, 1/2]^d, where d is the dimension of the integral. So if V is one of these regions, R(f) will be used instead of R(f, V). When these rules are applied to other regions, their abscissas and estimates need to be scaled accordingly.

The integration rule R is said to be exact for the function f if R(f) = ∫_V f(x) dx.

The application of an integration rule R to a function f will be referred to as an integration of f; for example, "when f is integrated by R, we get R(f)."

A one-dimensional integration rule is said to be of degree n if it integrates exactly all polynomials of degree n or less, and fails to do so for at least one polynomial of degree n+1.

A multidimensional integration rule is said to be of degree n if it integrates exactly all monomials of degree n or less, and fails to do so for at least one monomial of degree n+1; that is, the rule is exact for all monomials of the form x_1^(α_1) x_2^(α_2) ⋯ x_d^(α_d), where d is the dimension, α_i ≥ 0, and ∑_(i=1)^d α_i ≤ n.

A null rule of degree n will integrate to zero all monomials of degree n or less and will fail to do so for at least one monomial of degree n+1. Each null rule may be thought of as the difference between a basic integration rule and an appropriate integration rule of a lower degree.

If the set of sampling points X_1 of a rule R_1 of degree m contains the set of sampling points X_2 of a rule R_2 of a lower degree n, that is, X_2 ⊂ X_1, then R_2 is said to be embedded in R_1. This will be denoted as R_2 ⊂ R_1.

An integration rule of degree m that is a member of a family of rules with a common derivation and properties but different degrees will be denoted as R(m), where R might be chosen to identify the family. (For example, a trapezoidal rule of degree 4 might be referred to as T(4).)

If each rule in a family is embedded in another rule in the same family, then the rules of that family are called progressive. (For any given R(m_1) there exists R(m_2), m_2 > m_1, for which R(m_1) ⊂ R(m_2).)

An integration rule is of open type if the integrand is not evaluated at the end points of the interval. It is of closed type if it uses integrand evaluations at the interval end points.

An NIntegrate integration rule object has one integration rule for the integral estimate and one or several null rules for the error estimate. The sampling points of the integration rule and the null rules coincide. It should be clear from the context whether "integration rule" or "rule" would mean an NIntegrate integration rule object, or an integration rule in the usual mathematical sense.

Integration Rule Specification

All integration rules described below, except "MonteCarloRule", are to be used by the adaptive strategies of NIntegrate. In NIntegrate, all Monte Carlo strategies, crude and adaptive, use "MonteCarloRule".

Changing the integration rule component of an integration strategy produces a different integration algorithm.

The way to specify what integration rule the adaptive strategies in NIntegrate (see "Global Adaptive Strategy" and "Local Adaptive Strategy") should use is through a Method suboption.

Here is an example of using a specific integration rule with the "GlobalAdaptive" strategy.

Here is an example of using the same integration rule with a different strategy ("LocalAdaptive").
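The two examples above can be sketched as follows; the original output cells are lost, so the integrand and the choice of "GaussKronrodRule" are illustrative.

```mathematica
(* "GaussKronrodRule" used with the "GlobalAdaptive" strategy *)
NIntegrate[Sqrt[x], {x, 0, 1},
 Method -> {"GlobalAdaptive", Method -> "GaussKronrodRule"}]

(* the same rule used with the "LocalAdaptive" strategy *)
NIntegrate[Sqrt[x], {x, 0, 1},
 Method -> {"LocalAdaptive", Method -> "GaussKronrodRule"}]
```

Both calls approximate the exact value 2/3.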

If NIntegrate is given a method option that has only an integration rule specification (other than "MonteCarloRule"), then that rule is used with the "GlobalAdaptive" strategy. The two inputs below are equivalent.

For this integration only the integration rule is specified. The "GlobalAdaptive" strategy is used by default.

For this integration an integration strategy and an integration rule are specified.
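The equivalence can be sketched as follows; the integrand and rule are illustrative.

```mathematica
(* rule only; "GlobalAdaptive" is used by default *)
NIntegrate[Sqrt[x], {x, 0, 1}, Method -> "GaussKronrodRule"]

(* equivalent explicit strategy + rule specification *)
NIntegrate[Sqrt[x], {x, 0, 1},
 Method -> {"GlobalAdaptive", Method -> "GaussKronrodRule"}]
```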

For "MonteCarloRule", the adaptive Monte Carlo strategy is automatically used. The following two commands are equivalent.

For this Monte Carlo integration only the "MonteCarloRule" is specified.

For this Monte Carlo integration a Monte Carlo integration strategy and "MonteCarloRule" are specified.
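A sketch of the two equivalent Monte Carlo invocations; the integrand is illustrative, and "AdaptiveMonteCarlo" is the adaptive Monte Carlo strategy referred to above.

```mathematica
(* rule only; the adaptive Monte Carlo strategy is used automatically *)
NIntegrate[Sqrt[x], {x, 0, 1}, Method -> "MonteCarloRule"]

(* equivalent explicit strategy + rule specification *)
NIntegrate[Sqrt[x], {x, 0, 1},
 Method -> {"AdaptiveMonteCarlo", Method -> "MonteCarloRule"}]
```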

"TrapezoidalRule"

The trapezoidal rule for integral estimation is one of the simplest and oldest rules (possibly used by the Babylonians and certainly by the ancient Greek mathematicians):

∫_0^1 f(x) dx ≈ 1/2 (f(0) + f(1)).

The compounded trapezoidal rule is a Riemann sum of the form

∫_0^1 f(x) dx ≈ h (1/2 f(x_0) + f(x_1) + f(x_2) + ⋯ + f(x_(n-1)) + 1/2 f(x_n)),

where x_i = i h, h = 1/n.

If the Method option is given the value "TrapezoidalRule", the compounded trapezoidal rule is used to estimate each subinterval formed by the integration strategy.

A "TrapezoidalRule" integration:
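A sketch of such an invocation; the integrand is illustrative.

```mathematica
(* a "TrapezoidalRule" integration *)
NIntegrate[Sqrt[x], {x, 0, 1}, Method -> "TrapezoidalRule"]
```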
option name | default value | description
"Points" | 5 | number of coarse trapezoidal points
"RombergQuadrature" | True | whether Romberg quadrature should be used
"SymbolicProcessing" | Automatic | number of seconds to do symbolic preprocessing

"TrapezoidalRule" options.

The trapezoidal rule and its compounded (multipanel) extension are not very accurate. (The compounded trapezoidal rule is exact for linear functions and converges at least as fast as 1/n^2, if the integrand has a continuous second derivative [DavRab84].) The accuracy of the multipanel trapezoidal rule can be increased using Romberg quadrature.

Since the abscissas of T(k) are a subset of those of T(2k-1), the difference |T(2k-1)(f) - T(k)(f)| can be taken to be an error estimate of the integral estimate T(2k-1)(f), and can be computed without extra integrand evaluations.

The option "Points" -> k can be used to specify how many coarse points are used. The total number of points used by "TrapezoidalRule" is 2k-1.

This verifies that the sampling points follow the equally spaced pattern described above.

"TrapezoidalRule" can be used for multidimensional integrals too.

Here is a multidimensional integration with "TrapezoidalRule".
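A sketch of a multidimensional invocation; the integrand is illustrative, not the original (lost) example.

```mathematica
(* multidimensional "TrapezoidalRule" integration; exact value is 1/4 *)
NIntegrate[x y, {x, 0, 1}, {y, 0, 1}, Method -> "TrapezoidalRule"]
```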

Remark: NIntegrate has both a trapezoidal rule and a trapezoidal strategy; see "Trapezoidal Strategy" in the tutorial "Integration Strategies". All internally implemented integration rules of NIntegrate have the suffix -Rule. So "TrapezoidalRule" is used to specify the trapezoidal integration rule, and "Trapezoidal" is used to specify the trapezoidal strategy.

Romberg Quadrature

The idea of the Romberg quadrature is to use a linear combination of the trapezoidal estimates T(h) and T(h/2) that eliminates the same-order terms of their truncation errors.

From the Euler-Maclaurin formula [DavRab84] we have

∫_0^1 f(x) dx = T(h) + c h^2 + O(h^4),

where T(h) denotes the compounded trapezoidal estimate with step h, and the constant c depends on the derivatives of f but not on h.

Hence we can write

∫_0^1 f(x) dx = T(h) + c h^2 + O(h^4),
∫_0^1 f(x) dx = T(h/2) + c (h/2)^2 + O(h^4).

The h^2 terms of the equations above can be eliminated if the first equation is subtracted from four times the second equation. The result is

∫_0^1 f(x) dx = (4 T(h/2) - T(h))/3 + O(h^4).

This example shows that a trapezoidal rule using Romberg quadrature gives better performance than the standard trapezoidal rule, with a result closer to the exact value.

Here is an integration with a trapezoidal rule that does not use Romberg quadrature.
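The comparison can be sketched as follows; the integrand is illustrative.

```mathematica
(* with Romberg quadrature; exact value is 2/3 *)
NIntegrate[Sqrt[x], {x, 0, 1},
 Method -> {"TrapezoidalRule", "RombergQuadrature" -> True}]

(* without Romberg quadrature *)
NIntegrate[Sqrt[x], {x, 0, 1},
 Method -> {"TrapezoidalRule", "RombergQuadrature" -> False}]
```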

"TrapezoidalRule" Sampling Points and Weights

The following calculates the trapezoidal sampling points, weights, and error weights for a given precision.

Here is how the Romberg quadrature weights and error weights can be derived.
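A sketch of the first computation, assuming the internal, undocumented function NIntegrate`TrapezoidalRuleData[points, precision], which returns the abscissas, the weights, and the error weights:

```mathematica
(* abscissas, weights, and error weights of the trapezoidal rule with
   5 coarse points, to 20 digits; NIntegrate`TrapezoidalRuleData is
   internal and undocumented *)
{absc, weights, errWeights} = NIntegrate`TrapezoidalRuleData[5, 20]
```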

"NewtonCotesRule"

Newton-Cotes integration formulas are formulas of interpolatory type with sampling points that are equally spaced.

The Newton-Cotes quadrature for NIntegrate can be specified with the Method option value "NewtonCotesRule".
option name | default value | description
"Points" | 3 | number of coarse Newton-Cotes points
"Type" | Closed | type of the Newton-Cotes rule
"SymbolicProcessing" | Automatic | number of seconds to do symbolic preprocessing

"NewtonCotesRule" options.

Let the interval of integration, [a, b], be divided into n-1 subintervals of equal length by the points

x_i = a + (i - 1) h,  i = 1, …, n,  h = (b - a)/(n - 1).

Then the integration formula of interpolatory type is given by

∫_a^b f(x) dx ≈ ∑_(i=1)^n w_i f(x_i),

where the weights are

w_i = ∫_a^b l_i(x) dx,

with l_i(x) the Lagrange interpolation basis polynomials for the nodes x_1, …, x_n.

When n is large, the Newton-Cotes n-point coefficients are large and are of mixed sign.


Since this may lead to large losses of significance by cancellation, a high-order Newton-Cotes rule must be used with caution.

"NewtonCotesRule" Sampling Points and Weights

The following calculates the Newton-Cotes sampling points, weights, and error weights for a given precision.
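A sketch of this computation, assuming the internal, undocumented function NIntegrate`NewtonCotesRuleData[points, precision]:

```mathematica
(* Newton-Cotes abscissas, weights, and error weights for 5 coarse
   points, to 20 digits; NIntegrate`NewtonCotesRuleData is internal *)
NIntegrate`NewtonCotesRuleData[5, 20]
```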

"GaussBerntsenEspelidRule"

Gaussian quadrature uses optimal sampling points (through polynomial interpolation) to form a weighted sum of the integrand values over these points. On a subset of these sampling points a lower order quadrature rule can be made. The difference between the two rules can be used to estimate the error. Berntsen and Espelid derived error estimation rules by removing the central point of Gaussian rules with odd number of sampling points.

The Gaussian quadrature for NIntegrate can be specified with the Method option value "GaussBerntsenEspelidRule".
option name | default value | description
"Points" | Automatic | number of Gauss points
"SymbolicProcessing" | Automatic | number of seconds to do symbolic preprocessing

"GaussBerntsenEspelidRule" options.

A Gaussian rule G(n) of n points for integrand f is exact for polynomials of degree 2n-1 (i.e., G(f) = ∫_0^1 f(x) dx if f is a polynomial of degree at most 2n-1).

Gaussian rules are of open type since the integrand is not evaluated at the end points of the interval. (Lobatto rules, Clenshaw-Curtis rules, and the trapezoidal rule are of closed type since they use integrand evaluations at the interval end points.)

For the Gaussian rule G(n), with sampling points x_1, …, x_n, Berntsen and Espelid have derived an error estimate functional based on a divided differences functional applied to the integrand over the sampling points (see [Ehrich2000]).

(The original formulas in [Ehrich2000] are for sampling points in [-1, 1]; here the sampling points are in [0, 1].)

This example shows the number of sampling points used by NIntegrate with various values of the option "Points".
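One way to observe the sampling-point counts is with NIntegrate's EvaluationMonitor option; the integrand and the range of "Points" values are illustrative.

```mathematica
(* count integrand evaluations for several "Points" settings *)
Table[
 Block[{count = 0},
  NIntegrate[Sqrt[x], {x, 0, 1},
   Method -> {"GaussBerntsenEspelidRule", "Points" -> k},
   EvaluationMonitor :> count++];
  {k, count}],
 {k, 5, 25, 5}]
```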

"GaussBerntsenEspelidRule" Sampling Points and Weights

The following calculates the Gaussian abscissas, weights, and Berntsen-Espelid error weights for a given number of coarse points and precision.
In[26]:=
Click for copyable input
Out[27]=
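A sketch of this computation, assuming the internal, undocumented function NIntegrate`GaussBerntsenEspelidRuleData[points, precision]:

```mathematica
(* Gaussian abscissas, weights, and Berntsen-Espelid error weights for
   5 coarse points, to 20 digits; this internal function is assumed to
   follow the NIntegrate`<Rule>Data[points, precision] pattern *)
NIntegrate`GaussBerntsenEspelidRuleData[5, 20]
```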

The Berntsen-Espelid error weights are implemented below.

This implements the divided differences.

This computes the abscissas and the weights of the Gaussian rule.

This computes the Berntsen-Espelid error weights.

"GaussKronrodRule"

Gaussian quadrature uses optimal sampling points (through polynomial interpolation) to form a weighted sum of the integrand values over these points. The Kronrod extension of a Gaussian rule adds new sampling points in between the Gaussian points and forms a higher-order rule that reuses the Gaussian rule integrand evaluations.

The Gauss-Kronrod quadrature for NIntegrate can be specified with the Method option value "GaussKronrodRule".
option name | default value | description
"Points" | Automatic | number of Gauss points that will be extended with Kronrod points
"SymbolicProcessing" | Automatic | number of seconds to do symbolic preprocessing

"GaussKronrodRule" options.

A Gaussian rule G(n) of n points for integrand f is exact for polynomials of degree 2n-1, that is, G(f) = ∫_0^1 f(x) dx if f is a polynomial of degree at most 2n-1.

Gauss-Kronrod rules are of open type since the integrand is not evaluated at the end points of the interval.

The Kronrod extension GK(n) of a Gaussian rule G(n) with n points adds n+1 points to it, and the extended rule is exact for polynomials of degree 3n+1 if n is even, or 3n+2 if n is odd. The weights associated with a Gaussian rule change in its Kronrod extension.

Since the abscissas of G(n) are a subset of those of GK(n), the difference |GK(n)(f) - G(n)(f)| can be taken to be an error estimate of the integral estimate GK(n)(f), and can be computed without extra integrand evaluations.

This example shows the number of sampling points used by NIntegrate with various values of the option "Points".

For an implementation description of Kronrod extensions of Gaussian rules, see [PiesBrand74].

"GaussKronrodRule" Sampling Points and Weights

The following calculates the Gauss-Kronrod abscissas, weights, and error weights for a given number of coarse points and precision.
In[34]:=
Click for copyable input
Out[35]=
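A sketch of this computation, assuming the internal, undocumented function NIntegrate`GaussKronrodRuleData[points, precision]:

```mathematica
(* Gauss-Kronrod abscissas, weights, and error weights for 5 Gauss
   points, to 20 digits; NIntegrate`GaussKronrodRuleData is internal *)
{absc, weights, errWeights} = NIntegrate`GaussKronrodRuleData[5, 20]
```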

The calculations below demonstrate the degree of the Gauss-Kronrod integration rule (see above).

This computes the degree of the Gauss-Kronrod integration rule.

This defines a function.

The command below implements the integration rule weighted sums for the integral estimate, ∑ w_i f(x_i), and the error estimate, |∑ e_i f(x_i)|, where x_i are the abscissas, w_i are the weights, and e_i are the error weights.

These are the integral and error estimates computed with the rule.

The integral estimate coincides with the exact result.

The error estimate is not zero since the embedded Gauss rule is exact only for polynomials of degree 2n-1 or less. If we integrate a polynomial of such a degree, the error estimate becomes zero.

This defines a function.

These are the integral and error estimates computed with the rule.

Here is the exact result using Integrate.
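The degree demonstration above can be sketched as follows; the rule size is illustrative, and NIntegrate`GaussKronrodRuleData is internal and undocumented.

```mathematica
(* a Gauss-Kronrod rule with n Gauss points has degree 3n+1 (n even)
   or 3n+2 (n odd); here n = 5, so x^17 is integrated exactly *)
n = 5;
deg = If[EvenQ[n], 3 n + 1, 3 n + 2];
{absc, weights, errWeights} = NIntegrate`GaussKronrodRuleData[n, 20];
f[x_] := x^deg;
integral = weights.(f /@ absc);        (* integral estimate *)
error = Abs[errWeights.(f /@ absc)]    (* error estimate *)
```

The integral estimate should agree with ∫_0^1 x^17 dx = 1/18 to the working precision; the error estimate is nonzero because the embedded Gauss rule has the lower degree 2n-1.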

"LobattoKronrodRule"

The Lobatto integration rule is a Gauss-type rule with preassigned abscissas. It uses the end points of the integration interval and optimal sampling points inside the interval to form a weighted sum of the integrand values over these points. The Kronrod extension of a Lobatto rule adds new sampling points in between the Lobatto rule points and forms a higher-order rule that reuses the Lobatto rule integrand evaluations.

NIntegrate uses the Kronrod extension of the Lobatto rule if the Method option is given the value "LobattoKronrodRule".
option name | default value | description
"Points" | 5 | number of Gauss-Lobatto points that will be extended with Kronrod points
"SymbolicProcessing" | Automatic | number of seconds to do symbolic preprocessing

"LobattoKronrodRule" options.

A Lobatto rule of n points for integrand f is exact for polynomials of degree 2n-3 (i.e., the rule gives the exact integral if f is a polynomial of degree at most 2n-3).

The Kronrod extension of a Lobatto rule with n points adds n-1 points to it, and the extended rule is exact for polynomials of degree 3n-3 if n is even, or 3n-2 if n is odd. The weights associated with a Lobatto rule change in its Kronrod extension.

As with "GaussKronrodRule", the number of Gauss-Lobatto points is specified with the option "Points". If "LobattoKronrodRule" is invoked with "Points" -> n, the total number of rule points will be 2n-1.

This example shows the number of sampling points used by NIntegrate with various values of the option "Points".

Since the Lobatto rule is a closed rule, the integrand needs to be evaluated at the end points of the interval. If there is a singularity at these end points, NIntegrate will ignore it.

For an implementation description of Kronrod extensions of Lobatto rules, see [PiesBrand74].

"LobattoKronrodRule" Sampling Points and Weights

The following calculates the Lobatto-Kronrod abscissas, weights, and error weights for a given number of coarse points and precision.
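A sketch of this computation, assuming the internal, undocumented function NIntegrate`LobattoKronrodRuleData[points, precision]:

```mathematica
(* Lobatto-Kronrod abscissas, weights, and error weights for 5 Lobatto
   points, to 20 digits; NIntegrate`LobattoKronrodRuleData is internal *)
NIntegrate`LobattoKronrodRuleData[5, 20]
```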

The calculations below demonstrate the degree of the Lobatto-Kronrod integration rule (see above).

This computes the degree of the Lobatto-Kronrod integration rule.

This defines a function.

The command below implements the integration rule weighted sums for the integral estimate, ∑ w_i f(x_i), and the error estimate, |∑ e_i f(x_i)|, where x_i are the abscissas, w_i are the weights, and e_i are the error weights.

These are the integral and error estimates computed with the rule.

The preceding integral estimate coincides with the exact result.

The preceding error estimate is not zero since the embedded Lobatto rule is exact only for polynomials of degree 2n-3 or less. If we integrate a polynomial of such a degree, the error estimate becomes zero.

This defines a function.

These are the integral and error estimates computed with the rule.

Here is the exact result using Integrate.

"ClenshawCurtisRule"

A Clenshaw-Curtis rule uses sampling points derived from the Chebyshev polynomial approximation of the integrand.

The Clenshaw-Curtis quadrature for NIntegrate can be specified with the Method option value "ClenshawCurtisRule".
option name | default value | description
"Points" | 5 | number of coarse Clenshaw-Curtis points
"SymbolicProcessing" | Automatic | number of seconds to do symbolic preprocessing

"ClenshawCurtisRule" options.

Theoretically a Clenshaw-Curtis rule with n sampling points is exact for polynomials of degree n or less. In practice, though, Clenshaw-Curtis rules can achieve accuracy close to that of the Gaussian rules [Evans93][OHaraSmith68]. The error of the Clenshaw-Curtis formula is analyzed in [OHaraSmith68].

The sampling points of the classical Clenshaw-Curtis rule are zeros of Chebyshev polynomials. The sampling points of a practical Clenshaw-Curtis rule are chosen to be Chebyshev polynomial extremum points. The classical Clenshaw-Curtis rules are not progressive, but the practical Clenshaw-Curtis rules are [DavRab84][KrUeb98].

Let CC(k) denote a practical Clenshaw-Curtis rule of k sampling points for the function f.

The progressive property means that the sampling points of CC(k) are a subset of the sampling points of CC(2k-1). Hence the difference |CC(2k-1)(f) - CC(k)(f)| can be taken to be an error estimate of the integral estimate CC(2k-1)(f), and can be computed without extra integrand evaluations.

The NIntegrate option Method -> {"ClenshawCurtisRule", "Points" -> k} uses a practical Clenshaw-Curtis rule with 2k-1 points.
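A sketch of such an invocation; the integrand and "Points" value are illustrative.

```mathematica
(* a "ClenshawCurtisRule" integration *)
NIntegrate[Sqrt[x], {x, 0, 1},
 Method -> {"ClenshawCurtisRule", "Points" -> 10}]
```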
This example shows the number of sampling points used by NIntegrate with various values of the option "Points".

"ClenshawCurtisRule" Sampling Points and Weights

Here are the sampling points and the weights of the Clenshaw-Curtis rule for a given coarse number of points and precision.

Here is another way to compute the sampling points, as rescaled Chebyshev extremum points.

This defines a function.

These are the integral and error estimates computed with the rule.

Here is the exact value computed by Integrate.

"MultipanelRule"

"MultipanelRule" combines into one rule the applications of a one-dimensional integration rule over two or more adjacent intervals. An application of the original rule to any of the adjacent intervals is called a panel.

Here is an example of an integration with "MultipanelRule".
option name | default value | description
Method | "NewtonCotesRule" | integration rule specification that provides the abscissas, weights, and error weights for a single panel
"Panels" | 5 | number of panels
"SymbolicProcessing" | Automatic | number of seconds to do symbolic preprocessing

"MultipanelRule" options.

Let the unit interval [0, 1] be partitioned into k sub-intervals by the points 0 = y_0 < y_1 < ⋯ < y_k = 1.

If we have the rule

R(f) = ∑_(i=1)^n w_i f(x_i)

for the interval [0, 1], it can be transformed into a rule for the interval [y_(j-1), y_j],

∑_(i=1)^n (y_j - y_(j-1)) w_i f(y_(j-1) + (y_j - y_(j-1)) x_i).

Summing the transformed rules over all k panels gives the k-panel integration rule based on R.

If R is closed, that is, R has 0 and 1 as sampling points, then the rightmost sampling point of each panel coincides with the leftmost sampling point of the next panel, and the number of sampling points of the k-panel rule can be reduced from k n to k(n-1)+1. (This is done in the implementation of "MultipanelRule".)

More about the theory of multipanel rules, also referred to as compounded or composite rules, can be found in [KrUeb98] and [DavRab84].

"MultipanelRule" Sampling Points and Weights

The sampling points and the weights of the multipanel rule can be obtained with this command.

Here are the abscissas and weights of a Gauss-Kronrod rule.

The multipanel rule abscissas can be obtained using Rescale.

This shows how to derive the multipanel rule weights from the original weights.
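The rescaling step can be sketched as follows, assuming the internal, undocumented NIntegrate`GaussKronrodRuleData; the panel count is illustrative.

```mathematica
(* one-panel Gauss-Kronrod abscissas and weights on [0, 1] *)
{absc, weights, errWeights} =
  NIntegrate`GaussKronrodRuleData[5, MachinePrecision];

(* rescale the abscissas into each of 2 equal panels to get the
   multipanel rule abscissas; each panel's weights are scaled by 1/2 *)
panels = 2;
Flatten[Table[Rescale[absc, {0, 1}, {(j - 1)/panels, j/panels}],
  {j, panels}]]
```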

"CartesianRule"

A -dimensional Cartesian rule has sampling points that are a Cartesian product of the sampling points of one-dimensional rules. The weight associated with a Cartesian rule sampling point is the product of the one-dimensional rule weights that correspond to its coordinates.

The Cartesian product integration for NIntegrate can be specified with the Method option value "CartesianRule".
option name | default value | description
Method | "GaussKronrodRule" | a rule or a list of rules with which the Cartesian product rule will be formed
"SymbolicProcessing" | Automatic | number of seconds to do symbolic preprocessing

"CartesianRule" options.

For example, suppose we have the one-dimensional formulas

∫_0^1 f(x) dx ≈ ∑_(i=1)^(n_1) w_i f(x_i),
∫_0^1 f(y) dy ≈ ∑_(j=1)^(n_2) v_j f(y_j),
∫_0^1 f(z) dz ≈ ∑_(l=1)^(n_3) u_l f(z_l),

that are exact for polynomials of degree d_1, d_2, and d_3, respectively. Then it is not difficult to see that the formula with n_1 n_2 n_3 points,

∫_0^1 ∫_0^1 ∫_0^1 f(x, y, z) dx dy dz ≈ ∑_(i=1)^(n_1) ∑_(j=1)^(n_2) ∑_(l=1)^(n_3) w_i v_j u_l f(x_i, y_j, z_l),

is exact for polynomials in x, y, z of degree min(d_1, d_2, d_3). Note that the weight associated with the abscissa (x_i, y_j, z_l) is w_i v_j u_l.

The general Cartesian product formula for d one-dimensional rules, the i-th of which has n_i sampling points and corresponding weights, is the d-fold analog of the formula above: the sampling points are all d-tuples of the one-dimensional sampling points, and the weight of each tuple is the product of the corresponding one-dimensional weights.

Here is a visualization of a Cartesian product rule integration, with different one-dimensional rules used along the x and y axes.

Cartesian rules are applicable for relatively low dimensions, since for higher dimensions they are subject to "combinatorial explosion". For example, a five-dimensional Cartesian product of identical one-dimensional rules each having n sampling points would have n^5 sampling points.

NIntegrate uses a Cartesian product rule if the integral is multidimensional and the Method option is given a one-dimensional rule or a list of one-dimensional rules.

Here is an example specifying Cartesian product rule integration with a single one-dimensional integration rule.

Here is an example specifying Cartesian product rule integration with a list of one-dimensional integration rules.

Here is another example specifying Cartesian product rule integration with a list of one-dimensional integration rules.
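These specifications can be sketched as follows; the integrand and the particular rules are illustrative, and the nested Method syntax is assumed to follow the strategy-with-rule pattern used earlier in this tutorial.

```mathematica
(* a single one-dimensional rule used along every axis *)
NIntegrate[Sqrt[x + y], {x, 0, 1}, {y, 0, 1},
 Method -> "GaussKronrodRule"]

(* different one-dimensional rules along different axes *)
NIntegrate[Sqrt[x + y], {x, 0, 1}, {y, 0, 1},
 Method -> {"CartesianRule",
   Method -> {"GaussKronrodRule", "ClenshawCurtisRule"}}]
```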

More about Cartesian rules can be found in [Stroud71].

"CartesianRule" Sampling Points and Weights

The sampling points and the weights of the rule can be obtained with the internal command NIntegrate`CartesianRuleData.

NIntegrate`CartesianRuleData keeps the abscissas and the weights of each one-dimensional rule separated; otherwise the result might be too large for higher dimensions.

The results of NIntegrate`CartesianRuleData can be put into flattened point-weight form with a function that forms all point tuples and the products of the corresponding weights.

"MultidimensionalRule"

A fully symmetric integration rule for the cube consists of sets of points with the following properties: (1) all points in a set can be generated by permutations and/or sign changes of the coordinates of any fixed point from that set; (2) all points in a set have the same weight associated with them.

The fully symmetric multidimensional integration (fully symmetric cubature) for NIntegrate can be specified with the Method option value "MultidimensionalRule".

A set of points of a fully symmetric integration rule that satisfies the preceding properties is called an orbit. A point of an orbit, (x_1, x_2, …, x_d), for the coordinates of which the inequality x_1 ≥ x_2 ≥ ⋯ ≥ x_d ≥ 0 holds, is called a generator. (See [KrUeb98][GenzMalik83].)

option name | default value | description
"Generators" | 5 | number of generators of the fully symmetric rule
"SymbolicProcessing" | Automatic | number of seconds to do symbolic preprocessing

"MultidimensionalRule" options.

If an integration rule has K orbits denoted Ω_1, Ω_2, …, Ω_K, and the i-th of them, Ω_i, has a weight w_i associated with it, then the integral estimate is calculated with the formula

∫_V f(x) dx ≈ ∑_(i=1)^K w_i ∑_(x ∈ Ω_i) f(x).

A null rule of degree n will integrate to zero all monomials of degree n or less and will fail to do so for at least one monomial of degree n+1. Each null rule may be thought of as the difference between a basic integration rule and an appropriate integration rule of a lower degree.

The "MultidimensionalRule" object of NIntegrate is basically an interface to three different integration rule objects that combine an integration rule and one or several null rules. Their numbers of generators and orders are summarized in the table below. The rule objects with 6 and 9 generators use three null rules, each of which is a linear combination of two null rules. The null rule linear combinations are used in order to avoid phase errors. See [BerntEspGenz91] for more details about how the null rules are used.

Number of generators and orders of the fully symmetric rules of NIntegrate:

number of generators | integration rule order | order of each of the null rules | described in
5 | 7 | 5 | [GenzMalik80]
6 | 7 | 5, 3, 1 | [GenzMalik83][BerntEspGenz91]
9 | 9 | 7, 5, 3 | [GenzMalik83][BerntEspGenz91]

This is the number of sampling points used by NIntegrate with its fully symmetric multidimensional integration rules for integrals over the unit cube in various dimensions.

"MultidimensionalRule" Sampling Points and Weights

This subsection gives an example of a calculation of an integral estimate with a fully symmetric multidimensional rule.

Here is the parameter for the number of generators.

This function takes a generator point and creates its orbit.

These are the generators and weights for the given number of generators.

This computes the orbit of each generator.

This defines a function.

This applies the multidimensional rule.

Here is the exact result.

This makes graphics primitives for the points of the orbits.

Here is how the different orbits look.

Here are all rule points together.
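The orbit-generation step above can be sketched as follows; this is a hedged sketch, and the actual internal generator data of the rules is not reproduced here.

```mathematica
(* orbit of a generator point: all coordinate permutations combined
   with all sign changes, with duplicates removed *)
orbit[gen_List] :=
 Union @@ (Permutations /@ ((# gen) & /@ Tuples[{1, -1}, Length[gen]]))

(* example: the orbit of the generator {1, 1/2} has 8 points *)
orbit[{1, 1/2}]
```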

"LevinRule"

A Levin-type rule estimates the integral of an oscillatory function by approximating the antiderivative as a product of a polynomial and the oscillatory part of the integrand.

Here is a highly oscillatory integral computed using "LevinRule".
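A sketch of such an invocation; the integrand is illustrative, not the original (lost) example.

```mathematica
(* a highly oscillatory integrand handled by "LevinRule" *)
NIntegrate[BesselJ[0, 1000 x], {x, 0, 1}, Method -> "LevinRule"]
```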

By default, "LevinRule" automatically detects the oscillatory part of the integrand and applies the collocation method described below to form an integral estimate. Options can be used to specify the oscillatory part and to control the detailed behavior of the numerical algorithm, including whether to adaptively switch to a non-oscillatory alternative integration rule.

option name | default value | description
"Points" | Automatic | number of coarse collocation points
"EndpointLimitTerms" | Automatic | polynomial extrapolation order at a non-numerical endpoint
"TimeConstraint" | 10 | maximum time solving collocation equations per integration step
"MethodSwitching" | True | whether to automatically switch to the non-oscillatory rule
"OscillationThreshold" | 10 | approximate number of oscillations above which to apply Method
Method | Automatic | alternative non-oscillatory rule
"LevinFunctions" | Automatic | oscillatory functions to include in the kernel
"MaxOrder" | 50 | maximum differential order of the kernel
"ExcludedForms" | {} | forms to exclude from the kernel
"AdditiveTerm" | Automatic | additive non-oscillatory part of the integrand
"Amplitude" | Automatic | coefficients of the kernel in the integrand
"DifferentialMatrix" | Automatic | matrix form of the linear ODE satisfied by the kernel
"Kernel" | Automatic | explicit oscillatory kernel

"LevinRule" options.

Levin-Type Collocation Method

Scope

Basic Levin-type methods for one-dimensional numerical integration [Levin96] apply to oscillatory integrals of the form

∫_a^b f(x) g(x) dx,

where the oscillatory part g(x) must satisfy a linear ordinary differential equation (linear ODE) of order n written in matrix form as

w'(x) = A(x) w(x),

where g is one of the components w_k of the vector w(x) = (w_1(x), …, w_n(x)). The vector of functions w(x) is the oscillatory "kernel" and the matrix A(x) is the "differential matrix".

For example, if g(x) = Sin[ω x], the oscillatory kernel may be

w(x) = (Sin[ω x], Cos[ω x]),

with the differential matrix

A(x) = {{0, ω}, {-ω, 0}}.

Levin-type methods also handle the more general case

∫_a^b F(x).w(x) dx,

where in the simpler case above we typically have F(x) = (f(x), 0, …, 0). The vector of functions F(x) is the "amplitude".

Most of the oscillatory special functions of mathematics, such as Exp, BesselJ, AiryAi, and so on, satisfy a linear ODE and can thus be integrated using Levin-type methods. Furthermore, any product, sum, or integer power of such functions, or their compositions with any non-rapidly oscillatory function, also satisfies a linear ODE. See also DifferentialRootReduce.

The method can be applied to a slightly more general class of integrals,

∫_a^b (h(x) + F(x).w(x)) dx,

where the "additive term" h(x) is handled separately by a traditional quadrature method such as "GaussKronrodRule".

Levin-type methods are effective when the amplitude F(x), the additive term h(x), and the differential matrix A(x) are not themselves rapidly oscillatory [Levin97].

Collocation Method

In a Levin-type collocation method, an antiderivative of the integrand is sought of the form v(x).w(x) for some unknown functions v_k(x):

d/dx (v(x).w(x)) = F(x).w(x).

Differentiating both sides with respect to x, and using w'(x) = A(x) w(x), we find that all of the w_k cancel, and the functions v_k satisfy the non-oscillatory differential equations

v'(x) + A(x)^T v(x) = F(x).

These equations can be solved for the v_k using a collocation method in which we approximate each v_k as a linear combination of some simple set of predetermined basis functions u_j(x):

v_k(x) ≈ ∑_(j=1)^m c_(k,j) u_j(x).

In "LevinRule", the basis functions u_j are chosen to be Chebyshev polynomials ChebyshevT[j, x] (suitably rescaled to the integration interval). Substituting this into the differential equations for the v_k yields linear equations for the collocation coefficients c_(k,j):

∑_(j=1)^m c_(k,j) u_j'(x) + ∑_(l=1)^n A(x)_(l,k) ∑_(j=1)^m c_(l,j) u_j(x) = F_k(x).

Since A(x), F(x), and the u_j are all known functions, we can evaluate these linear equations at m collocation nodes x_i inside the integration range. This yields n m linear equations for the n m coefficients c_(k,j). Upon solving these equations numerically, we have an approximate form for the v_k, and we may directly evaluate the integral using the fundamental theorem of calculus, as

∫_a^b F(x).w(x) dx ≈ v(b).w(b) - v(a).w(a).

NIntegrate uses the abscissas of the corresponding "GaussKronrodRule" as the collocation nodes x_i.

"LevinRule" computes the integral estimate using m collocation nodes, where m is derived from the value of the "Points" method option. A lower order integral estimate is also computed using every second collocation node. The difference between these two integral estimates is taken as an estimate of the error of the algorithm.

Compute an integral estimate using 23 collocation nodes on each integration subregion, using 11 collocation nodes to form a lower order integral estimate for error estimation.

Specifying Oscillatory Kernel

By default, "LevinRule" automatically selects an oscillatory kernel comprising as much of the integrand as possible. This behavior can be modified in detail with options.

If the integrand contains an oscillatory function that is not highly oscillatory over the integration region, it may be more efficient to exclude it from the oscillatory kernel. For example, Sin[x] is not highly oscillatory over a region of length of order 1.

The method option can be used to specify the oscillatory part of the integrand.

Use "LevinRule" with the kernel Sin[1000 x]. The factor Sin[x] is not included in the oscillatory kernel, and is instead included in the amplitude.
In[157]:=
Out[157]=

The method option can be used to specify parts of the integrand that should not be included in the automatically detected oscillatory kernel.

Here is another way to specify the same kernel as above, using a different method option.
In[158]:=
Out[158]=

The method option "LevinFunctions" can be used to specify which types of functions should be included in the automatically detected oscillatory kernel. A setting that is a list of specific functions detects only those functions.

Use "LevinRule" only on Exp and Sin. The BesselJ is ignored.
In[28]:=
Out[28]=

A setting that is a list of named groups detects functions from those groups.

Use "LevinRule" while detecting only trig-related and exp-related functions. The BesselJ is not included in the oscillatory kernel.
In[29]:=
Out[29]=

Named groups recognized by the "LevinFunctions" method option.

By default, the setting for "LevinFunctions" includes all but three of the named groups. This setting corresponds to elementary and special functions that are oscillatory in some part of their domain.

NIntegrate`LevinIntegrand Object

It is possible to inspect in detail the oscillatory kernel, differential matrix, amplitude, and additive term detected by NIntegrate using the internal function NIntegrate`LevinIntegrand, which returns an object representing a fully processed integrand to which a Levin-type collocation method can be applied.

Here is the processed NIntegrate`LevinIntegrand object decomposing the integrand Sqrt[x] + x ⅇ^(ⅈ x) BesselJ[0, 100 x] into an oscillatory kernel and other parts.
In[4]:=
Out[4]=

Properties of an NIntegrate`LevinIntegrand object li are accessed using the form li["prop"].

Here is the oscillatory kernel for the integrand above.
In[6]:=
Out[6]=
Here is the differential matrix for the same integrand. First is used because in the multidimensional case there is a different differential matrix for each dimension of integration.
In[9]:=
Out[9]//MatrixForm=
Here is how to verify the Levin differential equation for this oscillatory kernel.
In[13]:=
Out[13]=
In[14]:=
Out[14]=
A full list of properties can be accessed with li["Properties"].
In[15]:=
Out[15]=

The property gives the main parts of the NIntegrate`LevinIntegrand that are used in the Levin-type collocation algorithm.

Show the additive term and amplitude in addition to the oscillatory kernel and differential matrix for the integrand above.
In[16]:=
Out[16]=

Using this property we can quickly see the effect of different method options on the oscillatory kernel detection, such as the "AdditiveTerm" option.

Here are two different decompositions of the same integrand. By default both the Sin[x] and Cos[x] terms are included.
In[19]:=
Out[19]=
With the setting "AdditiveTerm"->Sin[x], the Sin[x] term is not included in the detected kernel.
In[20]:=
Out[20]=

We can see how to use the "LevinFunctions" option to include or exclude groups of functions from the integrand.

Here is how to see the effect of different settings for the "LevinFunctions" method option. By default Log[x] is excluded from the detected kernel, since it is not an oscillatory function.
In[43]:=
Out[43]=
With the setting "LevinFunctions"->All, Log[x] is included in the detected kernel.
In[44]:=
Out[44]=

It is possible to completely specify all four parts of the integrand decomposition (additive term, amplitude, kernel, and differential matrix). This can be used for efficient multiple calls to NIntegrate with similar integrands.

Here is how to use "LevinRule" with a completely manual specification of the integrand decomposition.
In[24]:=
Out[24]=

Note that, in this case, the first argument to NIntegrate is completely ignored.

The following integration is equivalent to that above. The first argument to NIntegrate is ignored.
In[25]:=
Out[25]=

"MonteCarloRule"

A Monte Carlo rule estimates an integral by forming a uniformly weighted sum of integrand evaluations over random (or quasi-random) sampling points.

Here is an example of using "MonteCarloRule" with 1000 sampling points.
In[97]:=
Out[97]=
option name | default value | description
"Points" | 100 | number of sampling points
"PointGenerator" | Random | sampling point coordinate generator
"AxisSelector" | Automatic | selection algorithm for the splitting axis when global adaptive Monte Carlo integration is used
"SymbolicProcessing" | Automatic | number of seconds to do symbolic preprocessing

"MonteCarloRule" options.

In Monte Carlo methods [KrUeb98], the d-dimensional integral ∫_V f(x) dx is interpreted as the following expected (mean) value:

    ∫_V f(x) dx = Vol(V) E(f),

where E(f) is the mean value of the function f interpreted as a random variable, with respect to the uniform distribution on V, that is, the distribution with probability density Vol(V)^(-1) χ_V(x). Here χ_V denotes the characteristic function of the region V; Vol(V) denotes the volume of V.

The crude Monte Carlo estimate of the expected value E(f) is obtained by taking N independent random vectors x_1, x_2, …, x_N with density Vol(V)^(-1) χ_V(x) (that is, the vectors are uniformly distributed on V), and making the estimate

    E(f) ≈ MC_N(f) = (1/N) Sum[f(x_i), {i, 1, N}].   (5)

Remark: The function Vol(V)^(-1) χ_V(x) is a valid probability density function because it is non-negative on the whole of R^d and its integral over R^d is 1.

According to the strong law of large numbers, the convergence

    MC_N(f) -> E(f) as N -> ∞

happens with probability 1. The strong law of large numbers does not provide information about the error E(f) - MC_N(f), so a probabilistic estimate is used.

Let Var(f) be defined as

    Var(f) = σ²(f) = E((f - E(f))²).

Formula (5) is an unbiased estimator of E(f) (that is, the expectation of MC_N(f) over different sets of x_i is E(f)) and its variance is

    Var(MC_N(f)) = Var(f)/N = σ²(f)/N,   (6)

where σ²(f) denotes the variance of f. The standard error of MC_N(f) is thus σ(f)/Sqrt[N].

In practice the variance Var(f) is not known, so it is estimated with the formula

    Var(f) ≈ s² = (1/(N - 1)) Sum[(f(x_i) - MC_N(f))², {i, 1, N}].   (7)

The standard error of MC_N(f) is then

    SE(MC_N(f)) = s/Sqrt[N].   (8)

The result of the Monte Carlo estimation can be written as MC_N(f) ± SE(MC_N(f)).
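Formulas (5), (7), and (8) take only a few lines to implement. Here is an illustrative Python sketch (the helper `crude_monte_carlo` is hypothetical, not NIntegrate code) that estimates the area π/4 of the quarter disk inside the unit square:

```python
import numpy as np

def crude_monte_carlo(f, dim, n, rng):
    """Crude Monte Carlo estimate of the integral of f over the unit cube
    [0, 1]^dim (so Vol(V) = 1).  Returns (estimate, standard_error)."""
    x = rng.random((n, dim))              # uniform samples in the unit cube
    vals = f(x)
    est = vals.mean()                     # MC_N(f), formula (5)
    se = vals.std(ddof=1) / np.sqrt(n)    # s/Sqrt[N], formulas (7) and (8)
    return est, se

# Indicator of the quarter disk x^2 + y^2 <= 1; its integral over [0,1]^2 is Pi/4
rng = np.random.default_rng(0)
est, se = crude_monte_carlo(
    lambda x: (x[:, 0] ** 2 + x[:, 1] ** 2 <= 1).astype(float), 2, 100_000, rng)
```

The result is then reported as `est ± se`, mirroring MC_N(f) ± SE(MC_N(f)) above.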

It can be seen from Equation (6) that the convergence rate of the crude Monte Carlo estimation does not depend on the dimension d of the integral, and if N sampling points are used then the convergence rate is O(1/Sqrt[N]).

The NIntegrate integration rule "MonteCarloRule" calculates the estimates MC_N(f) and SE(MC_N(f)).

The estimates can be improved incrementally. That is, if we have the estimates MC_N(f) and SE(MC_N(f)), and a new additional set of sample function values f(x_(N+1)), …, f(x_(N+M)), then using (7) and (8) we have

    MC_(N+M)(f) = (N MC_N(f) + Sum[f(x_i), {i, N+1, N+M}]) / (N + M),

and SE(MC_(N+M)(f)) follows from (8) once the running sums Sum[f(x_i)] and Sum[f(x_i)²] are updated with the new values.

To compute the estimates MC_(N+M)(f) and SE(MC_(N+M)(f)), it is not necessary to know the random points used to compute the estimates MC_N(f) and SE(MC_N(f)).
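The incremental update can be sketched with the standard pairwise merge of means and sums of squared deviations; the following Python helper (`merge_estimates` is a hypothetical name, not NIntegrate's internal code) combines an old estimate with a new batch of values and reproduces the full-sample estimates exactly, without the old sample points:

```python
import numpy as np

def merge_estimates(n1, mean1, se1, new_vals):
    """Combine an existing Monte Carlo estimate (n1 points, mean mean1,
    standard error se1) with a batch of new sample values.

    Returns (n, mean, se) for the combined sample."""
    n2 = len(new_vals)
    mean2 = np.mean(new_vals)
    n = n1 + n2
    mean = (n1 * mean1 + n2 * mean2) / n
    # Recover the old sum of squared deviations: se^2 = s^2/n1, s^2 = ss/(n1-1)
    ss1 = se1 ** 2 * n1 * (n1 - 1)
    ss2 = np.sum((new_vals - mean2) ** 2)
    delta = mean2 - mean1
    ss = ss1 + ss2 + delta ** 2 * n1 * n2 / n  # pairwise merge of deviations
    se = np.sqrt(ss / (n - 1) / n)             # formulas (7) and (8)
    return n, mean, se
```

Only the count, the mean, and the standard error of the old batch are needed, which is exactly the point made above.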

"AxisSelector"

When used for multidimensional global adaptive integration, "MonteCarloRule" chooses the splitting axis of an integration subregion it is applied to in two ways: (i) by random selection, or (ii) by minimizing the sum of the variances of the integral estimates of each half of the subregion, if the subregion is divided along that axis. The splitting axis is selected after the integral estimation.

The random axis selection is done in the following way. "MonteCarloRule" keeps a set S of axes available for selection; initially S contains all axes. An axis is selected at random from S and then excluded from it. After the next integral estimation, another axis is selected from S and excluded, and so forth. If S becomes empty, it is refilled with all axes.

The minimization-of-variance axis selection is done in the following way. During the integration over the region, a subset of the sampling points and their integrand values is stored. Then, for each axis, the variances of the two subregions that splitting along that axis would produce are estimated using the stored sampling points and corresponding integrand values. The axis for which the sum of these variances is minimal is chosen as the splitting axis, since this means that if the region is split on that axis, the new integration error estimate will be minimal. If it happens that for some axis all stored points are clustered in one of the half-regions, then that axis is selected for splitting.
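The selection rule just described can be sketched as follows (illustrative Python, not NIntegrate's internal code; as a simplifying assumption, the subregion midpoint along each axis is approximated by the mean of the stored coordinates):

```python
import numpy as np

def min_variance_axis(points, values):
    """Pick a splitting axis for a subregion from stored samples.

    points : array of shape (N, d), stored sampling points
    values : array of shape (N,), the corresponding integrand values

    For each axis, split the stored sample at the (approximate) midpoint and
    sum the variances of the two halves; return the axis with the smallest
    sum.  If all points fall in one half along some axis, return that axis."""
    best_axis, best_score = 0, np.inf
    for axis in range(points.shape[1]):
        mid = points[:, axis].mean()          # proxy for the region midpoint
        left = values[points[:, axis] <= mid]
        right = values[points[:, axis] > mid]
        if len(left) == 0 or len(right) == 0:
            return axis                       # clustered in one half-region
        score = (left.var(ddof=1) if len(left) > 1 else 0.0)
        score += (right.var(ddof=1) if len(right) > 1 else 0.0)
        if score < best_score:
            best_axis, best_score = axis, score
    return best_axis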

option value | description
Random | random splitting axis selection
MinVariance or {MinVariance, "SubsampleFraction"->frac} | splitting axis selection that minimizes the sum of variances of the new regions

"AxisSelector" option values.

option name | default value | description
"SubsampleFraction" | 1/10 | fraction of the sampling points used to determine the splitting axis

MinVariance options.

This is an example of using the "AxisSelector" option.
In[98]:=
Out[98]=

In the examples below, the two axis selection algorithms are compared. In general, the minimization-of-variance selection uses fewer sampling points. Nevertheless, it also slows down the application of "MonteCarloRule", so for integrals for which both axis selection methods would result in the same number of sampling points, it is faster to use random axis selection. Also, using a larger fraction of the sampling points to determine the splitting axis in minimization-of-variance selection makes the integration slower.

Consider the following function.
In[2]:=
Out[3]=
These are the adaptive Monte Carlo integration sampling points for the function above with random choice of splitting axis.
In[4]:=
Out[5]=
These are the sampling points with the choice of splitting axes that minimizes the variance. Compared to the previous Monte Carlo integration, the sampling points of this one are more concentrated around the circle, and there are about half as many of them.
In[6]:=
Out[6]=
Here is an adaptive Monte Carlo integration that uses random axis selection.
In[104]:=
Out[104]=
Here is an adaptive Monte Carlo integration for the preceding integral that uses the minimization of variance axis selection and is slower than using random axis selection.
In[105]:=
Out[105]=
Using a larger fraction of stored points for the minimization of variance axis choice slows down the integration.
In[106]:=
Out[106]=

Comparisons of the Rules

All integration rules, except "MonteCarloRule", are to be used by adaptive strategies in NIntegrate. Changing the type and the number of points of the integration rule component of an integration strategy yields a different integration algorithm. In general, these different integration algorithms perform differently for different integrals. Naturally, the following questions arise.

1.  Is there a type of rule that is better than other types for any integral or for integrals of a certain type?

2.  Given an integration strategy, what rules perform better with it? For what integrals?

3.  Given an integral, an integration strategy, and an integration rule, what number of points in the rule will minimize the total number of sampling points used to reach an integral estimate that satisfies the precision goal?

For a given integral and integration strategy the integration rule which achieves a result that satisfies the precision goal with the smallest number of sampling points is called the best integration rule. There are several factors that determine the best integration rule.

4.  In general the higher the degree of the rule the faster the integration will be for smooth integrands and for higher-precision goals. On the other hand, the rule degree might be too high for the integrand and hence too many sampling points might be used when the adaptive strategies work around, for example, the integrand's discontinuities.

5.  The error estimation functional of a rule significantly influences the total amount of work done by the integration strategy. Rules with a smaller number of points might lead (1) to a wrong result because of underestimation of the error, or (2) to the application of too many sampling points because of overestimation of the error. (See "Examples of Pathological Behavior".) Further, the error estimation functional might be computed with one or several embedded null rules. In general, the larger the number of null rules, the better the error estimation: fewer phase errors are expected. The number of the null rules and the weights assigned to them in the sum that computes the error estimate determine the sets of integrals that are pathological or hard to compute for that rule. (Some of the multidimensional rules of NIntegrate use several embedded null rules to compute the error estimate. All of the one-dimensional integration rules of NIntegrate use only one null rule.)

6.  Local adaptive strategies are more effective with closed rules that have their sampling points more uniformly distributed (for example, ) than with open rules (for example, ) and closed rules that have sampling points distributed in a non-uniform way (for example, ).

7.  The percentage of points reused by the strategy can greatly influence which rule is best. For one-dimensional integrals, the "LocalAdaptive" strategy reuses all points of the closed rules, while the "GlobalAdaptive" strategy throws away almost all points of the regions that need improvement of their error estimate.

Number of Points in a Rule

This subsection demonstrates with examples that the higher the degree of the rule, the faster the integration will be for smooth integrands and for higher precision goals. It also shows examples in which the degree of the rule is too high for the integrand, so too many sampling points are used while the adaptive strategies work around the integrand's discontinuities. All examples use Gaussian rules with the Berntsen-Espelid error estimate.
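The effect is easy to reproduce with plain Gauss-Legendre rules (illustrative Python using numpy's `leggauss`, standing in for NIntegrate's Gaussian rules): the error for a smooth integrand collapses quickly as points are added, while an integrand with a discontinuous derivative improves only slowly.

```python
import numpy as np

def gauss_error(f, exact, n):
    """Absolute error of the n-point Gauss-Legendre rule for
    Integrate[f[x], {x, -1, 1}], given the exact value."""
    x, w = np.polynomial.legendre.leggauss(n)
    return abs(np.dot(w, f(x)) - exact)

# Smooth integrand Exp[x]: exact integral is e - 1/e
smooth = [gauss_error(np.exp, np.e - 1 / np.e, n) for n in (2, 4, 8)]
# Integrand Abs[x] with a discontinuous derivative: exact integral is 1
kink = [gauss_error(np.abs, 1.0, n) for n in (2, 4, 8)]
```

For Exp[x] the 8-point rule is already accurate to machine precision, while for Abs[x] the error decays only algebraically in the number of points.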

Here is the error of an n-point Gaussian rule applied to a function f on the interval [a, b]:

    E_n(f) = ((b - a)^(2n+1) (n!)^4) / ((2n + 1) ((2n)!)^3) f^(2n)(ξ),   a < ξ < b.

(See [DavRab84].)

Here is a function that calculates the error of a rule for the integral , using the exact value computed by Integrate for comparison.
In[107]:=
This defines a list of functions.
In[108]:=
Here are plots of the functions in the interval .
In[109]:=
Out[109]=
Here is the computation of the errors of for , , , and for a range of points.
In[110]:=
Here are plots of how the logarithm of the error decreases for each of the functions. It can be seen that the integral estimates of discontinuous functions and functions with discontinuous derivatives improve slowly when the number of points is increased.
In[111]:=
Out[117]=

Minimal Number of Sampling Points

Here is a function that finds the number of sampling points used in an integration.
In[118]:=
This finds the number of sampling points used for a range of precision goals and a range of integration rule coarse points.
In[19]:=
This finds for each precision the minimum total number of sampling points. This way the number of coarse integration rule points used is also found.
In[121]:=
This is a plot of the precision goal and the number of integration rule points with which the minimum number of total sampling points was used.
In[122]:=
Out[128]=

Rule Comparison

Here is a function that calculates the error of a rule for the integral , using the exact value computed by Integrate for comparison.
In[129]:=
This defines a list of functions.
In[130]:=
Here are plots of the functions in the interval .
In[131]:=
Out[131]=
This is the computation of the errors of , , , and for each of the integrals , , , and for a range of points.
In[132]:=
Here are plots of how the logarithms of the errors decrease for each rule and each function.
In[136]:=
Out[136]=

Examples of Pathological Behavior

Tricking the Error Estimator

In this subsection an integral is discussed that NIntegrate underestimates with its default settings, since it fails to detect part of the integrand. The part is detected if the precision goal is increased.

The Wrong Estimation

Consider the following function.
In[13]:=
Here is its exact integral over .
In[138]:=
Out[138]=
NIntegrate gives the estimate.
In[139]:=
Out[139]=
This is too inaccurate when compared to the exact value.
In[140]:=
Out[140]=
Here is the plot of the function; with the default options, the plot also misses part of the function.
In[141]:=
Out[141]=

Better Results

Better results can be achieved by raising the precision goal with the NIntegrate option PrecisionGoal and by increasing the recursion depth with MaxRecursion.
In[17]:=
Out[17]=
This is a table that finds the precision goal for which no good results are computed.
In[18]:=
Out[18]=
If the plot points are increased, the plot of the function looks different.
In[144]:=
Out[144]=
Here is the zoomed plot of the spike that Plot is missing with the default options.
In[145]:=
Out[145]=
If this part of the function is integrated separately, the result matches the quantity that is "lost" (or "missed") by NIntegrate with the default option settings.
In[146]:=
Out[146]=
In[147]:=
Out[147]=

Why the Estimator Is Misled

These are the abscissas and weights of a Gauss-Kronrod rule used by default by NIntegrate.
In[146]:=
This defines a function for application of the rule.
In[147]:=
This finds the points at which the adaptive strategy samples the integrand.
In[148]:=
This is a plot of the sampling points. The vertical axis gives the order in which the points were used to evaluate the integrand.
In[149]:=
Out[149]=

It can be seen in the preceding plot that NIntegrate does extensive computation around the top of the second spike. NIntegrate does not do as much computation around the spike that it misses.

These are the Gauss-Kronrod and Gauss abscissas in the last set of sampling points.
In[150]:=
Out[150]=
Out[151]=
Here the integrand is applied over the abscissas.
In[152]:=
Here is a polynomial approximation of the integrand over the abscissas.
In[154]:=
These plots show that the two polynomial approximations almost coincide over the subregion.
In[156]:=
Out[156]=
Out[158]=
If the polynomials are integrated over the subregion where the spike is located, the difference between them, which NIntegrate uses as an error estimate, is very small.
In[159]:=
Out[159]=
Out[160]=
Out[161]//FullForm=

Since the difference is the error estimate assigned to that subregion, with the default precision goal NIntegrate never picks the subregion up for further integration refinement.
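The mechanism can be reproduced without NIntegrate: a narrow spike that falls between the sampling points of both a lower- and a higher-order rule makes the two estimates agree closely, so their difference (the error estimate) is tiny even though both estimates are badly wrong. Here is an illustrative Python sketch that uses a 5-point and an 11-point Gauss-Legendre rule in place of the embedded Gauss/Gauss-Kronrod pair:

```python
import numpy as np

def gauss(f, n):
    """n-point Gauss-Legendre estimate of Integrate[f[x], {x, -1, 1}]."""
    x, w = np.polynomial.legendre.leggauss(n)
    return np.dot(w, f(x))

# A narrow spike centered between the sampling points of both rules
spike = lambda x: np.exp(-((x - 0.3) / 0.01) ** 2)
true_val = 0.01 * np.sqrt(np.pi)       # essentially the full Gaussian integral

est_lo, est_hi = gauss(spike, 5), gauss(spike, 11)
error_estimate = abs(est_hi - est_lo)  # difference of "embedded" estimates
true_error = abs(true_val - est_hi)    # actual error of the better estimate
```

Both rules nearly miss the spike, so `error_estimate` is orders of magnitude smaller than `true_error`; an adaptive strategy relying on it would never refine this region.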

Phase Errors

This subsection discusses causes for which integration rules might seriously underestimate or overestimate the actual error of their integral estimates. A similar discussion is given in [LynKag76].

This defines a function.
In[162]:=
Consider the numerical and symbolic evaluations of the integral of over the region .
In[163]:=
Out[163]=
In[164]:=
Out[164]=
They differ significantly. The precision goal requested is 2, but the relative error is much higher than 10^-2.
In[165]:=
Out[165]=

(Note that NIntegrate gives correct results for higher-precision goals.)

Below is an explanation of why this happens.

Let the integration rule R1 be embedded in the rule R2. The error estimate |R2(f) - R1(f)| of the integral estimate R2(f) can accidentally be too small compared with the exact error |∫f - R2(f)|.

To demonstrate this, consider the Gauss-Kronrod rule with 11 sampling points that has an embedded Gauss rule with 5 sampling points. (This is the rule used in the two integrations above.)
In[166]:=
This defines a function that applies the rule.
In[167]:=
This is the integral of previously defined.
In[168]:=
We can plot the estimated error and the real error for different values of the function's parameter. That is, we plot the absolute difference between the Gauss-Kronrod estimate and the embedded Gauss estimate, and the absolute difference between the exact integral and the Gauss-Kronrod estimate.
In[169]:=
Out[169]=

In the plot above, the blue graph shows the estimated error and the red graph shows the actual error.

You can see that the value of the parameter used in the integration above is very close to one of the local minima of the estimated error.

A one-dimensional quadrature rule can be seen as the result of integrating a polynomial fitted through the rule's abscissas and the integrand values at them. We can further inspect the actual fitting polynomials for the integration above.

In[170]:=

In the plots below the function is plotted in red, the Gauss polynomial is plotted in blue, the Gauss-Kronrod polynomial is plotted in violet, the Gauss sampling points are in black, and the Gauss-Kronrod sampling points are in red.

You can see that since the peak of the integrand falls approximately halfway between two abscissas, its approximation, and hence the integral estimate, is an underestimate.

In[172]:=
Out[172]=

Conversely, you can see that since the peak of the integrand falls approximately on one of the abscissas, its approximation is an overestimate.

In[173]:=
Out[173]=
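The under- and overestimation can be reproduced with a bare 5-point Gauss-Legendre rule (illustrative Python, standing in for the embedded Gauss rule above): moving a smooth peak between two abscissas makes the rule underestimate the integral, while centering the peak on an abscissa makes it overestimate.

```python
import numpy as np

# 5-point Gauss-Legendre rule on [-1, 1]; abscissas are 0, ±0.53847, ±0.90618
x5, w5 = np.polynomial.legendre.leggauss(5)

def gauss5(f):
    """5-point Gauss-Legendre estimate of Integrate[f[x], {x, -1, 1}]."""
    return np.dot(w5, f(x5))

def peak(c):
    """Gaussian peak of width 0.1 centered at c."""
    return lambda x: np.exp(-((x - c) / 0.1) ** 2)

true_val = 0.1 * np.sqrt(np.pi)  # the peak's full integral; tails negligible

between = gauss5(peak(0.26924))  # peak midway between two abscissas
on_node = gauss5(peak(0.53847))  # peak sitting on an abscissa
```

The estimate `between` is far below `true_val` and `on_node` is far above it, which is exactly the phase-error behavior illustrated by the plots above.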

Index of Technical Terms

Abscissas

Degree of a one-dimensional integration rule

Degree of a multidimensional integration rule

Exact rule

Embedded rule

Null rule

Product rule

Progressive rule

Sampling points
