NIntegrate Integration Strategies
Introduction
An integration strategy is an algorithm that attempts to compute integral estimates that satisfy user-specified precision or accuracy goals.
An integration strategy prescribes how to manage and create new elements of a set of disjoint subregions of the initial integral region. Each subregion might have its own integrand and integration rule associated with it. The integral estimate is the sum of the integral estimates of all subregions. Integration strategies use integration rules to compute the subregion integral estimates. An integration rule samples the integrand with a set of points, called abscissas (or sampling points).
To improve an integral estimate the integrand should be sampled at additional points. There are two principal approaches: (i) adaptive strategies try to identify the problematic integration areas and concentrate the computational efforts (i.e., sampling points) on them; (ii) nonadaptive strategies increase the number of sampling points over the whole region in order to compute a higher degree integration rule estimate that reuses the integrand evaluations of the former integral estimate.
Both approaches can use symbolic preprocessing and variable transformation or sequence summation acceleration to achieve faster convergence.
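The reuse idea behind the nonadaptive approach can be sketched in Python. This is an illustrative sketch only, not NIntegrate's actual rule family: the trapezoidal rule with interval halving stands in for the nested rules NIntegrate uses, and each refinement evaluates the integrand only at the new midpoints while reusing all earlier evaluations.

```python
import math

def refine_trapezoid(f, a, b, tol=1e-10, max_levels=25):
    """Nonadaptive refinement: each level halves the step and reuses all
    previous integrand evaluations, evaluating f only at the new midpoints."""
    h = b - a
    estimate = h * (f(a) + f(b)) / 2.0
    for level in range(1, max_levels + 1):
        h /= 2.0
        # only the midpoints of the previous panels are new sampling points
        new = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (level - 1) + 1))
        refined = estimate / 2.0 + h * new
        if abs(refined - estimate) < tol * abs(refined):
            return refined
        estimate = refined
    return estimate

val = refine_trapezoid(math.sin, 0.0, math.pi)  # exact value is 2
```

Each refinement doubles the number of panels but performs only as many new evaluations as there are new midpoints; higher-degree rules built on nested points reuse evaluations in the same spirit.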
In the following integration, the symbolic piecewise preprocessor in NIntegrate recognizes the integrand as a piecewise function, and the integration is done separately over the regions for which x≥1 and the regions for which x≤1, each with the corresponding branch of the integrand.
Here is a plot of all sampling points used in the integration. The integrand is sampled at the x coordinates in the order given by the y coordinates (in the plot). It can be seen that the sampling points are concentrated near the singular point 1. The patterns formed by the sampling points in the upper part of the plot differ from the patterns in the lower part of the plot because a singularity handler is applied.
The section
"Adaptive Strategies" gives a general description of the adaptive strategies. The default (main) strategy of
NIntegrate is global adaptive, which is explained in the section
"Global Adaptive Strategy". Complementary to it is the local adaptive strategy, which is explained in the section
"Local Adaptive Strategy". Both adaptive strategies use singularity handling mechanisms, which are explained in the section
"Singularity Handling".
The strategies
NIntegrate uses for special types of integrals (or integrands) are explained in the corresponding sections:
"Duffy's coordinates strategy",
"Oscillatory strategies", and
"Cauchy principal value integration".
Here is a table with links to descriptions of the built-in integration strategies of NIntegrate.
Adaptive Strategies
Adaptive strategies try to concentrate computational efforts where the integrand is discontinuous or has some other kind of singularity. Adaptive strategies differ by the way they partition the integration region into disjoint subregions. The integral estimates of each subregion contribute to the total integral estimate.
The basic assumption for the adaptive strategies is that for a given integration rule R and integrand f, if an integration region V is partitioned into, say, two disjoint subregions V_{1} and V_{2}, with V = V_{1} ∪ V_{2} and V_{1} ∩ V_{2} = ∅, then the sum of the integral estimates of R over V_{1} and V_{2} is closer to the actual integral ∫_{V} f dx. In other words,

|R_{V_{1}}(f) + R_{V_{2}}(f) - ∫_{V} f dx| ≤ |R_{V}(f) - ∫_{V} f dx|, (1)

and (1) will imply that the sum of the error estimates for R_{V_{1}}(f) and R_{V_{2}}(f) is smaller than the error estimate of R_{V}(f):

errorEst(R_{V_{1}}(f)) + errorEst(R_{V_{2}}(f)) ≤ errorEst(R_{V}(f)). (2)
Hence an adaptive strategy has these components [
MalcSimp75]:
(i) an integration rule to compute the integral and error estimates over a region;
(ii) a method for deciding which elements of a set of regions
to partition/subdivide;
(iii) stopping criteria for deciding when to terminate the adaptive strategy algorithm.
Global Adaptive Strategy
A global adaptive strategy reaches the required precision and accuracy goals of the integral estimate by recursively bisecting the subregion with the largest error estimate into two halves and computing integral and error estimates for each half.
  
Method  Automatic  integration rule used to compute integral and error estimates over each subregion 
"SingularityDepth"  Automatic  number of recursive bisections before applying a singularity handler 
"SingularityHandler"  Automatic  singularity handler 
"SymbolicProcessing"  Automatic  number of seconds to do symbolic preprocessing 
"GlobalAdaptive" options.
"GlobalAdaptive" is the default integration strategy of
NIntegrate. It is used for both one-dimensional and multidimensional integration.
"GlobalAdaptive" works with both
Cartesian product rules and
fully symmetric multidimensional
rules.
"GlobalAdaptive" uses a data structure called a "heap" to keep the set of regions partially sorted, with the largest error region being at the top of the heap. In the main loop of the algorithm the largest error region is bisected in the dimension that is estimated to be responsible for most of its error.
It can be said that the algorithm produces the leaves of a binary tree, the nodes of which are the regions. The children of a node/region are its subregions obtained after bisection.
After a bisection of a region and the subsequent integration over the new (sub)regions, new global integral and global error estimates are computed, which are sums of the integral and error estimates of all regions that are leaves of the binary tree.
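The main loop just described can be sketched in Python. This is a rough illustration, not NIntegrate's implementation: Python's heapq module provides the heap, an embedded Simpson pair stands in for the Gauss–Kronrod rules that supply the per-region estimates, and a plain relative tolerance stands in for the precision and accuracy goals.

```python
import heapq

def simpson_with_error(f, a, b):
    """Integral and error estimate on [a, b]: one Simpson application
    compared against two half-interval applications (a stand-in for the
    embedded rule pairs NIntegrate actually uses)."""
    m = (a + b) / 2.0
    coarse = (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))
    fine = (b - a) / 12.0 * (f(a) + 4.0 * f((a + m) / 2.0) + 2.0 * f(m)
                             + 4.0 * f((m + b) / 2.0) + f(b))
    return fine, abs(fine - coarse)

def global_adaptive(f, a, b, tol=1e-10, max_steps=10000):
    """Bisect the largest-error region (kept on top of a heap) until the
    global error estimate satisfies a relative precision goal."""
    integral, error = simpson_with_error(f, a, b)
    heap = [(-error, a, b, integral, error)]   # max-heap via negated error
    for _ in range(max_steps):
        if error <= tol * abs(integral):       # global stopping criterion
            break
        _, ra, rb, rint, rerr = heapq.heappop(heap)
        m = (ra + rb) / 2.0
        integral -= rint
        error -= rerr
        for sa, sb in ((ra, m), (m, rb)):
            sint, serr = simpson_with_error(f, sa, sb)
            heapq.heappush(heap, (-serr, sa, sb, sint, serr))
            integral += sint
            error += serr
    return integral, error

est, err = global_adaptive(lambda x: x ** 0.5, 0.0, 1.0)  # exact value 2/3
```

The heap keeps the set of regions partially sorted, so finding the largest-error region is cheap; the global sums are updated incrementally after each bisection rather than recomputed over all leaves.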
Each region has a record of how many bisections are made per dimension in order to produce it. When a region has been produced through too many bisections a singularity flattening algorithm is applied to it; see
Singularity Handling.
"GlobalAdaptive" stops if the following expression is true:
where
pg and
ag are precision and accuracy goals.
The strategy also stops when the number of recursive bisections of a region exceeds a certain number (see
MinRecursion and MaxRecursion), or when the global integration error oscillates too much (see
"MaxErrorIncreases").
Theoretical and practical evidence show that the global adaptive strategies have in general better performance than the local adaptive strategies [
MalcSimp75][
KrUeb98].
MinRecursion and MaxRecursion
The minimal and maximal depths of the recursive bisections are given by the values of the options
MinRecursion and
MaxRecursion.
If for any subregion the number of bisections in any of the dimensions is greater than
MaxRecursion then the integration by
"GlobalAdaptive" stops.
Setting
MinRecursion to a positive integer forces recursive bisection of the integration regions before the integrand is ever evaluated. This can be done to ensure that a narrow spike in the integrand is not missed. (See
Tricking the error estimator.)
For multidimensional integration an effort is made to bisect in each dimension for each level of recursion in
MinRecursion.
"MaxErrorIncreases"
Since (2) is expected to hold in "GlobalAdaptive", the global error is expected to decrease after the bisection of the largest-error region and the integration over its new parts. In other words, the global error is expected to decrease more or less monotonically with respect to the number of integration steps.
The global error might oscillate due to
phase errors of the integration rules. Still, the global error is assumed at some point to start decreasing monotonically.
Below are listed cases in which this assumption might become false.
(i) The actual integral is zero.
(ii) The specified working precision is not dense enough for the specified precision goal.
(iii) The integration is badly conditioned [
KrUeb98]. For example, the reason might be that the integrand is defined by complicated expressions or in terms of approximate solutions of mathematical problems (such as differential equations or nonlinear algebraic equations).
The strategy
"GlobalAdaptive" keeps track of the number of times the total error estimate has not decreased after the bisection of the region with the largest error estimate. When that number becomes bigger than the value of the
"GlobalAdaptive" option
"MaxErrorIncreases", the integration stops with a message (
NIntegrate::eincr).
The default value of "MaxErrorIncreases" is 400 for one-dimensional integrals and 2000 for multidimensional integrals.
The following integration invokes the message NIntegrate::eincr, with the default value of "MaxErrorIncreases".
Increasing "MaxErrorIncreases" silences the NIntegrate::eincr message.
The result compares well with the exact value.
Example Implementation of a Global Adaptive Strategy
This computes Gauss–Kronrod abscissas, weights, and error weights. 
This is a definition of a function that applies the integration rule with abscissas and weights computed to the function f over the interval {a, b}. 
This is a definition of a simple global adaptive algorithm that finds the integral of the function f over the interval {aArg, bArg} with relative error tol. 
This defines an integrand. 
The global adaptive strategy defined earlier gives the following result.
The relative error is within the prescribed tolerance.
Local Adaptive Strategy
In order to reach the required precision and accuracy goals of the integral estimate, a local adaptive strategy recursively partitions regions into smaller disjoint subregions and computes integral and error estimates for each of them.
  
Method  Automatic  integration rule used to compute integral and error estimates over the subregions 
"SingularityDepth"  Automatic  number of recursive bisections before applying a singularity handler 
"SingularityHandler"  Automatic  singularity handler 
"Partitioning"  Automatic  how to partition the regions in order to improve their integral estimate 
"InitialEstimateRelaxation"  True  attempt to adjust the magnitude of the initial integral estimate in order to avoid unnecessary computation 
"SymbolicProcessing"  Automatic  number of seconds to do symbolic preprocessing 
"LocalAdaptive" options.
Like
"GlobalAdaptive",
"LocalAdaptive" can be used for both onedimensional and multidimensional integration.
"LocalAdaptive" works with both
Cartesian product rules and
fully symmetric multidimensional
rules.
The
"LocalAdaptive" strategy has an initialization routine and a Recursive Routine (RR). RR produces the leaves of a tree, the nodes of which are regions. The children of a node/region are subregions obtained by its partition. RR takes a region as an argument and returns an integral estimate for it.
RR uses an integration rule to compute integral and error estimates of the region argument. If the error estimate is too big, RR calls itself on the region's disjoint subregions obtained by partition. The sum of the integral estimates returned from these recursive calls becomes the region's integral estimate.
RR makes the decision to continue the recursion knowing only the integral and error estimates of the region at which it is executed. (This is why the strategy is called "local adaptive.")
The initialization routine computes an initial estimation of the integral over the initial regions. This initial integral estimate is used in the stopping criteria of RR: if the error of a region is significant compared to the initial integral estimate then that region is partitioned into disjoint regions and RR is called on them; if the error is insignificant the recursion stops.
The error estimate of a region, regionError, is considered insignificant if

regionError ≤ eps |initialEstimate|. (3)

The stopping criterion (3) will compute the integral to the working precision. Since you want to compute the integral estimate to user-specified precision and accuracy goals, the following stopping criterion is used instead:

regionError ≤ max(10^{-ag}, |initialEstimate| 10^{-pg}), (4)

where eps is the smallest number such that 1 + eps ≠ 1 at the working precision, and pg and ag are the user-specified precision and accuracy goals.
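For machine doubles, the quantity eps just described can be computed directly; a small Python illustration:

```python
def working_precision_eps():
    """Smallest power of 2 such that 1 + eps != 1 in double precision."""
    eps = 1.0
    while 1.0 + eps / 2.0 != 1.0:
        eps /= 2.0
    return eps

eps = working_precision_eps()  # 2**-52 for IEEE double precision
```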
The recursive routine of
"LocalAdaptive" stops the recursion if:
1. there are no numbers of the specified working precision between the region's boundaries;
2. the maximum recursion level is reached;
3. the error of the region is insignificant, i.e., the criterion (4) is true.
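The recursive routine can be sketched in Python. This is illustrative only: a Simpson pair stands in for NIntegrate's rules, and only the third stopping test, with a plain relative tolerance against the initial whole-interval estimate, is modeled.

```python
import math

def rule_pair(f, lo, hi):
    """Simpson estimate plus a crude embedded error estimate."""
    m = (lo + hi) / 2.0
    coarse = (hi - lo) / 6.0 * (f(lo) + 4.0 * f(m) + f(hi))
    fine = (hi - lo) / 12.0 * (f(lo) + 4.0 * f((lo + m) / 2.0) + 2.0 * f(m)
                               + 4.0 * f((m + hi) / 2.0) + f(hi))
    return fine, abs(fine - coarse)

def local_adaptive(f, a, b, tol=1e-10, max_depth=40):
    initial_estimate, _ = rule_pair(f, a, b)
    scale = abs(initial_estimate)      # errors are judged against this

    def recurse(lo, hi, depth):
        estimate, err = rule_pair(f, lo, hi)
        # local decision: the region's error is insignificant relative to
        # the initial whole-interval estimate, so the recursion stops here
        if err <= tol * scale or depth >= max_depth:
            return estimate
        m = (lo + hi) / 2.0
        return recurse(lo, m, depth + 1) + recurse(m, hi, depth + 1)

    return recurse(a, b, 0)

val = local_adaptive(math.exp, 0.0, 1.0)  # exact value is e - 1
```

Note how the decision to recurse uses only the region's own estimates plus the fixed initial estimate; no global error sum is maintained, which is what makes the strategy "local".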
MinRecursion and MaxRecursion
The options
MinRecursion and
MaxRecursion for
"LocalAdaptive" have the same meaning and functionality as they do for
"GlobalAdaptive". See
MinRecursion and MaxRecursion.
"InitialEstimateRelaxation"
After the first recursion is finished, a better integral estimate, I_{2}, will be available. That better estimate is compared with the two integral estimates, I_{1} and I_{1e}, that the integration rule has used to give the integral estimate (I_{1}) and the error estimate (|I_{1} - I_{1e}|) for the initial step. If

|I_{2} - I_{1}| / |I_{1} - I_{1e}| < 1,

then the integral estimate integralEst in (5) can be increased, that is, the condition (6) is relaxed, by scaling integralEst with the factor |I_{1} - I_{1e}| / |I_{2} - I_{1}|, since |I_{2} - I_{1}| / |I_{1} - I_{1e}| < 1 means that the rule's integral estimate is more accurate than what the rule's error estimate predicts.
"Partitioning"
"LocalAdaptive" has the option
"Partitioning" to specify how to partition the regions that do not satisfy (
7). For onedimensional integrals, if
"Partitioning" is set to
Automatic,
"LocalAdaptive" partitions a region between the sampling points of the (rescaled) integration rule. In this way, if the integration rule is of
closed type, every integration value can be reused. If
"Partitioning" is given a list of integers
{p_{1}, p_{2}, ..., p_{n}} with length
n that equals the number of integral variables, each dimension
i of the integration region is divided into
p_{i} equal parts. If
"Partitioning" is given an integer
p, all dimensions are divided into
p equal parts.
Consider the following function.
These are the sampling points used by "LocalAdaptive" with its automatic region partitioning. It can be seen that the sampling points of each recursion level are between the sampling points of the previous recursion level.
These are the sampling points used by "LocalAdaptive" integration which partitions the regions with large error into three subregions. The patterns formed clearly show the three next recursion level subregions of each region of the first and second recursion levels.
Multidimensional example of using the "Partitioning" option. To make the plot, the sampling points of the first region to be integrated, [0, 1]×[0, 1], are removed.
Reuse of Integrand Values
With its default partitioning settings for one-dimensional integrals, "LocalAdaptive" reuses the integrand values at the end points of the subintervals whose integral and error estimates do not satisfy (8).
Sampling points of the integration by "LocalAdaptive". The variable rulePoints determines the number of points in the integration rule used by "LocalAdaptive".
The percent of reused points in the integration.
Example Implementation of a Local Adaptive Strategy
This computes Clenshaw–Curtis abscissas, weights, and error weights. 
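The original Wolfram Language cell is not reproduced here; as a stand-in, the Clenshaw–Curtis abscissas and weights on [-1, 1] can be computed in Python with the standard cosine-sum formula (the error weights of the example are omitted):

```python
import math

def clenshaw_curtis(n):
    """Nodes and weights of the (n+1)-point Clenshaw-Curtis rule on [-1, 1]
    for even n, via the standard cosine-sum weight formula."""
    assert n >= 2 and n % 2 == 0
    nodes, weights = [], []
    for k in range(n + 1):
        theta = math.pi * k / n
        c = 1.0 if k in (0, n) else 2.0   # endpoint weights are halved
        s = sum((1.0 if j == n // 2 else 2.0) * math.cos(2.0 * j * theta)
                / (4.0 * j * j - 1.0) for j in range(1, n // 2 + 1))
        nodes.append(math.cos(theta))
        weights.append(c / n * (1.0 - s))
    return nodes, weights

nodes, weights = clenshaw_curtis(4)
```

For n = 2 this reduces to Simpson's rule with weights {1/3, 4/3, 1/3}, and for any even n the weights sum to 2, the length of [-1, 1].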
This is a definition of a function that applies the integration rule, with the abscissas and weights computed in the previous example, to the function f over the interval {a, b}. 
This defines a simple local adaptive algorithm that finds the integral of the function f over the interval {aArg, bArg} with relative error tol. 
The local adaptive strategy gives the result.
The relative error is within the prescribed tolerance.
"GlobalAdaptive" versus "LocalAdaptive"
In general the global adaptive strategy has better performance than the local adaptive one. In some cases though the local adaptive strategy is more robust and/or gives better performance.
There are two main differences between
"GlobalAdaptive" and
"LocalAdaptive":
1.
"GlobalAdaptive" stops when the sum of the errors of all regions satisfies the precision goal, while
"LocalAdaptive" stops when the error of each region is small enough compared to an estimate of the integral.
2. To improve the integral estimate, "GlobalAdaptive" bisects the region with the largest error, while "LocalAdaptive" partitions all regions whose error is not small enough.
For multidimensional integrals
"GlobalAdaptive" is much faster because
"LocalAdaptive" does partitioning along each axis, so the number of regions can explode combinatorically.
Why and how a global adaptive strategy is faster for one-dimensional smooth integrands is proved (and explained) in [MalcSimp75].
When "LocalAdaptive" is faster and performs better than "GlobalAdaptive", it is because the precision-goal stopping criteria and the partitioning strategy of "LocalAdaptive" are better suited to the integrand's nature. Another factor is the ability of "LocalAdaptive" to reuse the integrand values at all points already sampled. "GlobalAdaptive" is able to reuse very few integrand values (at most 3 per rule application, and 0 for the default one-dimensional rule, the Gauss–Kronrod rule).
The following subsection demonstrates the performance differences between
"GlobalAdaptive" and
"LocalAdaptive".
"GlobalAdaptive" Is Generally Better than "LocalAdaptive"
The table that follows, with timing ratios and numbers of integrand evaluations, demonstrates that
"GlobalAdaptive" is better than
"LocalAdaptive" for the most common cases. All integrals considered are onedimensional over
[0, 1], because (i) for multidimensional integrals the performance differences should be expected to deepen, since
"LocalAdaptive" partitions the regions along each axis, and (ii) onedimensional integrals over different ranges can be rescaled to be over
[0, 1].
Here are the definitions of some functions, precision goals, number of integrations, and the integration rule. The variable integrationRule can be changed in order to compare the profiling runs with the same integration rule. The last function is derived from e^{-x} sin(x) by the variable change that transforms [0, 1) into [0, ∞).
Exact integral values. All integrals are over [0, 1]. 
"GlobalAdaptive" timings. 
"GlobalAdaptive" function evaluations. 
"LocalAdaptive" function evaluations. 
Table with the timing ratios and integrand evaluations.
Table with the errors of the integrations. Both "GlobalAdaptive" and "LocalAdaptive" reach the required precision goals.
Singularity Handling
The adaptive strategies of
NIntegrate speed up their convergence through variable transformations at the integration region boundaries and userspecified singular points or manifolds. The adaptive strategies also ignore the integrand evaluation results at singular points.
Singularity specification is discussed in "User-Specified Singularities".
Multidimensional singularity handling with variable transformations should be used with caution; see
"IMT Multidimensional Singularity Handling". Coordinate change for a multidimensional integral can simplify or eliminate singularities; see
"Duffy's Coordinates for Multidimensional Singularity Handling".
For details about how
NIntegrate ignores singularities see
"Ignoring the Singularity".
The computation of Cauchy principal value integrals is described in
"Cauchy Principal Value Integration".
User-Specified Singularities
Point Singularities
If it is known where the singularities occur, they can be specified in the ranges of integration, or through the option
Exclusions.
Here is an example of an integral that has two singular points at and .
Here is an example of a two-dimensional integral with a singular point at (1, 1).
Here is an example of an integral that has two singular points at and specified with the Exclusions option.
Here is an example of a two-dimensional integral with a singular point at (1, 1) specified with the Exclusions option.
Curve, Surface, and Hypersurface Singularities
Singularities over curves, surfaces, or hypersurfaces in general can be specified through the option
Exclusions with their equations. Such singularities, generally, cannot be specified using variable ranges.
This two-dimensional function is singular along the curve x^{2}+y^{2}=1.
NIntegrate will reach convergence much more slowly if no singularity specification is given.
Here is an example of a case in which a singular curve can be specified with the variable ranges. If x ∈ [0, 2] and y ∈ [0, 2], this would not be possible; see the following example.
Example Implementation of Curve, Surface, and Hypersurface Singularity Handling
If the curve, surface, or hypersurface on which the singularities occur is known in implicit form (i.e., it can be described as a single equation) the function
Boole can be used to form integration regions that have the singular curves, surfaces, or hypersurfaces as boundaries.
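The idea can be illustrated in Python without NIntegrate: split the region with an indicator (Boole-like) function so that the singular curve lies only on subregion boundaries, and ignore evaluations that land exactly on the curve. The integrand 1/sqrt(|x^2 + y^2 - 1|) below is a hypothetical choice made for illustration.

```python
import math

def grid_integrate(g, n=400):
    """Crude midpoint-grid estimate of g over [0, 2] x [0, 2]."""
    h = 2.0 / n
    return sum(g(h * (i + 0.5), h * (j + 0.5))
               for i in range(n) for j in range(n)) * h * h

def f(x, y):
    # hypothetical integrand, singular along the circle x^2 + y^2 = 1;
    # evaluations exactly on the curve are ignored (returned as 0)
    d = abs(x * x + y * y - 1.0)
    return 0.0 if d == 0.0 else 1.0 / math.sqrt(d)

# Boole-style splitting: the two sides of the curve are integrated
# separately, so the singular curve is a boundary of each subregion
inside = grid_integrate(lambda x, y: f(x, y) if x * x + y * y < 1.0 else 0.0)
outside = grid_integrate(lambda x, y: f(x, y) if x * x + y * y > 1.0 else 0.0)
total = inside + outside
```

With the curve on the boundary, each subintegral has an integrable edge singularity instead of an interior one, which is the situation the singularity handlers are designed for.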
This two-dimensional function has singular points along the curve x^{2}+y^{2}=1.
This two-dimensional function has singular points along the curve x + (1 - y)^{2} = 1.
Again, using Boole the integral is calculated quickly.
This is how the sampling points of the integration look.
Here is a function that takes a singular curve, surface, or hypersurface specification and uses the function Boole to make integration regions that have the singularities on their boundaries. 
This defines a three-dimensional function. 
Here is the integral of a three-dimensional function with singular points along the surface x + (1 - y)^{2} + (1 - z)^{2} = 1.
These are the sampling points of the integration.
"SingularityHandler" and "SingularityDepth"
Adaptive strategies improve the integral estimate by region bisection. If a subregion of an adaptive strategy is obtained through the number of bisections specified by the option "SingularityDepth", it is decided that the subregion has a singularity. Then the integration over that subregion is done with the singularity handler specified by "SingularityHandler".
  
"SingularityDepth"  Automatic  number of recursive bisections before applying a singularity handler 
"SingularityHandler"  Automatic  singularity handler 
"GlobalAdaptive" and "LocalAdaptive" singularity handling options.
If there is an integrable singularity at the boundary of a given region of integration, bisection could easily recur to
MaxRecursion before convergence occurs. To deal with these situations the adaptive strategies of
NIntegrate use variable transformations (
IMT,
"DoubleExponential",
SidiSin) to speed up the integration convergence, or a region transformation (
Duffy's coordinates) that relaxes the order of the singularity. The theoretical background of the variable transformation singularity handlers is given by the Euler–Maclaurin formula [DavRab84].
Use of the IMT Variable Transformation
The IMT variable transformation is the variable transformation in a quadrature method proposed by Iri, Moriguti, and Takasawa, called in the literature the IMT rule [
DavRab84][
IriMorTak70]. The IMT rule
is based upon the idea of transforming the independent variable in such a way that all derivatives of the new integrand vanish at the end points of the integration interval. A trapezoidal rule is then applied to the new integrand, and under proper conditions high accuracy of the result might be attained [
IriMorTak70][
Mori74].
Here is a numerical integration that uses the IMT variable transformation for singularity handling.
  
"TuningParameters"  10  a pair of numbers {a, p} that are the tuning parameters in the IMT transformation formula ; if only a number a is given, it is interpreted as {a, 1} 
IMT singularity handler options.
Adaptive strategies of
NIntegrate employ only the transformation of
the
IMT rule. With the decision that a region might have a singularity, the IMT transformation is applied to its integrand. The integration continues, though not with a trapezoidal rule, but with the same integration rule used before the transformation. (Singularity handling with
"DoubleExponential" switches to a trapezoidal integration rule.)
Also, adaptive strategies of
NIntegrate use a variant of the original IMT transformation, with the transformed integrand vanishing only at one of the ends.
The IMT transformation φ_{a, p}(t): (0, 1] → (0, 1], a > 0, p > 0, is defined; one form with the required vanishing-derivative property is φ_{a, p}(t) = exp(a (1 - t^{-p})). 
The parameters
a and
p are called tuning parameters [
MurIri82].
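For illustration, assume the one-sided IMT variant φ_{a,p}(t) = exp(a (1 - t^{-p})); the exact formula NIntegrate uses may differ, but it shares the properties discussed here. A Python sketch shows how the substitution x = φ(t) removes the endpoint singularity of x^{-1/2} on (0, 1]:

```python
import math

def imt(t, a=1.0, p=1.0):
    """Assumed IMT-style transformation of (0, 1] onto (0, 1]; all
    derivatives vanish at t = 0, so sampling points pile up near 0."""
    return math.exp(a * (1.0 - t ** -p))

def transformed_integrand(t, a=1.0, p=1.0):
    # f(imt(t)) * imt'(t) for f(x) = x**-0.5, combined analytically so
    # the huge and tiny factors cancel before overflow can occur
    return a * p * t ** (-p - 1.0) * math.exp(0.5 * a * (1.0 - t ** -p))

def midpoint(g, n=2000):
    h = 1.0 / n
    return h * sum(g(h * (i + 0.5)) for i in range(n))

value = midpoint(transformed_integrand)  # integral of x**-0.5 on (0,1] is 2
```

The transformed integrand vanishes, together with its derivatives, at t = 0, so even a plain midpoint rule now converges quickly where the original integrand was unbounded.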
The limit of the derivative of the IMT transformation is 0.
Here is the plot of the IMT transformation.
From the graph above it follows that the transformed sampling points are much denser around 0. This means that if the integrand is singular at 0 it will be sampled more effectively, since a larger part of the integration rule sampling points will contribute large integrand values to the integration rule's integral estimate.
Since for any given working precision the numbers around 0 are much denser than the numbers around 1, after a region bisection the adaptive strategies of NIntegrate reverse the bisection variable of the subregion that has the right end of the bisected interval. This can be seen from the following plot.
No other singularity handler is applied to the subregions of a region to which the IMT variable transformation has been applied.
IMT Transformation by Example
Consider the function over (0, 1] that has a singularity at 0.
Assume the integration is done with "GlobalAdaptive", with singularity handler IMT and singularity depth 4. After four bisections "GlobalAdaptive" will have a region with boundaries {0, 1/16} that contains the singular end point. For that region the IMT variable transformation will change its boundaries to {0, 1} and its integrand to the following.
Here is the plot of the new integrand.
The singularity is smashed!
Some of the sampling points, though, become too close to the singular end, and therefore special care should be taken for sampling points that coincide with the singular point because of the IMT transformation.
NIntegrate ignores evaluations at singular points; see
"Ignoring the Singularity".
The Gauss–Kronrod sampling points for the region {0, 1/16} and the derivatives of the rescaling follow.
With the IMT transformation, these are the sampling points and derivatives.
Here is the integral estimate with the IMT transformation.
The estimate calculated with the IMT variable transformation is much closer to the exact value.
Use of DoubleExponential Quadrature
When adaptive strategies use the IMT variable transformation they do not change the integration rule on the IMT-transformed regions. In contrast to this, you can use both a variable transformation and a different integration rule on the regions considered to have a singularity. (This is more in the spirit of the IMT rule [DavRab84].) This is exactly what happens when double-exponential quadrature is used: double-exponential quadrature uses the trapezoidal rule.
NIntegrate can use double-exponential quadrature for singularity handling only for one-dimensional integration.
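A compact Python sketch of double-exponential (tanh-sinh) quadrature illustrates the idea: the trapezoidal rule is applied with step h in the transformed variable, and nodes that round onto the interval endpoints are skipped, mirroring how evaluations at singular points are ignored. The step and truncation level below are illustrative choices, not NIntegrate's settings.

```python
import math

def tanh_sinh(f, a, b, h=0.25, k_max=24):
    """Double-exponential (tanh-sinh) quadrature on (a, b): trapezoidal
    rule with step h in the transformed variable; endpoint singularities
    are damped by the transformation."""
    total = 0.0
    for k in range(-k_max, k_max + 1):
        t = k * h
        u = math.tanh(0.5 * math.pi * math.sinh(t))   # node in (-1, 1)
        if abs(u) >= 1.0:   # node indistinguishable from an endpoint: skip
            continue
        w = (0.5 * math.pi * math.cosh(t)
             / math.cosh(0.5 * math.pi * math.sinh(t)) ** 2)
        x = 0.5 * (b - a) * u + 0.5 * (a + b)
        total += w * f(x)
    return 0.5 * (b - a) * h * total

# endpoint singularity at 0: the integral of x**-0.5 over (0, 1] is 2
approx = tanh_sinh(lambda x: x ** -0.5, 0.0, 1.0)
```

Halving h reuses every node already evaluated, which is the reuse property exploited when the trapezoidal rule refines its estimates step by step.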
Here is a numerical integration that uses double-exponential quadrature for singularity handling.
IMT versus "DoubleExponential" versus No Singularity Handling for OneDimensional Integrals
Both singularity handlers (
IMT and
"DoubleExponential") are applied to regions that are obtained through too many bisections (as specified by
"SingularityDepth").
The main difference between them is that
IMT does not change the integration rule used to compute integral estimates on the region it is applied to—
IMT is only a variable transformation. On the other hand,
"DoubleExponential" uses both variable transformation and a different integration rule—the trapezoidal rule—to compute integral estimates on the region it is applied to. In other words, the singularity handler
"DoubleExponential" delegates the integration to the doubleexponential quadrature as described in
DoubleExponential Strategy.
As a consequence, a region to which the
IMT singularity handler is applied is still going to be subject to bisection by the adaptive integration strategy. Therefore, until the precision goal is reached the integrand evaluations done before the last bisection will be thrown away. On the other hand, a region to which the
"DoubleExponential" singularity handler is applied will not be bisected. The trapezoidal rule quadrature used by
"DoubleExponential" will compute integral estimates over the region with an increasing number of sampling points at each step, completely reusing the integrand evaluations of the sampling points from the previous steps.
So, if the integrand is "very" analytic (i.e., no rapid or sudden changes of the integrand and its derivatives wrt the integration variable) over the regions with end point singularity, the
"DoubleExponential" singularity handler is going to be much faster than the
IMT singularity handler. In the cases where the integrand is not analytic in the region given to the
"DoubleExponential" singularity handler, or the double transformation of the integrand converges too slowly, it is better to switch to the
IMT singularity handler. This is done if the option
"SingularityHandler" is set to
Automatic.
Following are tables that compare the
IMT,
"DoubleExponential", and
Automatic singularity handlers applied at different depths of bisection.
This loads a package that defines the profiling function NIntegrateProfile that gives the number of sampling points and the time needed by a numerical integration command. 
Table for a "very" analytical integrand that the "DoubleExponential" singularity handler easily computes.
Table for an integrand, , that does not have a singularity and has a nearly discontinuous derivative (i.e., it is not "very" analytical). The Automatic singularity handler starts with "DoubleExponential" and then switches to IMT.
A table for an integrand, , for which the Automatic singularity handler starts with "DoubleExponential" and then switches to IMT.
IMT Multidimensional Singularity Handling
When used for multidimensional integrals, the IMT singularity handler speeds up the integration process only when the singularity is along one of the axes. When the singularity is at a corner of the integration region, using IMT is counterproductive. The function
NIntegrateProfile defined earlier is used in the following examples.
The number of integrand evaluations and timings for an integrand that has a singularity only along the x axis. The default (automatic) singularity handler chooses to apply IMT to regions obtained after the default (four) bisections.
The number of integrand evaluations and timings for an integrand that has a singularity only along the x axis with no singularity handler application.
The number of integrand evaluations and timings for an integrand that has a singularity at a corner of the integration region. The default (automatic) singularity handler chooses to apply the singularity handler DuffyCoordinates to regions obtained after the default (four) bisections.
The number of integrand evaluations and timings for an integrand that has a singularity at a corner of the integration region. IMT is applied to regions obtained after the default (four) bisections.
The number of integrand evaluations and timings for an integrand that has a singularity at a corner of the integration region with no singularity handler application.
Duffy's Coordinates for Multidimensional Singularity Handling
Duffy's coordinates is a technique that transforms an integrand over a square, cube, or hypercube with a singular point in one of the corners into an integrand with a singularity over a line, which might be easier to integrate.
The following integration uses Duffy's coordinates.
The following integration does not use Duffy's coordinates.
The
NIntegrate strategies
"GlobalAdaptive" and
"LocalAdaptive" apply the Duffy's coordinates technique only at the corners of the integration region.
When the singularity of a multidimensional integral occurs at a point, the coupling of the variables will make the singularity variable transformations used in onedimensional integration counterproductive. A variable transformation that has a geometrical nature, proposed by Duffy in [
Duffy82], makes a change of variables that replaces a point singularity at a corner of the integration region with a "softer" one on a plane.
If
d is the dimension of integration and
r=(x_1^2+x_2^2+...+x_d^2)^{1/2}, then Duffy's coordinates is a suitable technique for singularities of the following types (see again [
Duffy82]):
1. r^α,
r^α ln r,
α > -d;
3. (c_1 x_1^{α_1}+c_2 x_2^{α_2}+...+c_d x_d^{α_d})^β,
α_i > 0,
β > -d,
c_i > 0, i ∈ [1, d].
For example, consider the integral
If the integration region
(0, 1]× (0, x] is changed to
(0, 1]× (0, 1] with the rule
y→ x y, the Jacobian of which is
x, the integral becomes
The last integral has no singularities at all!
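The change of variables above can be checked numerically. The integrand chosen below, f(x, y) = 1/sqrt(x^2 + y^2), is an assumed illustration (the document's formula is not reproduced in this text); after the rule y → x y with Jacobian x, the transformed integrand x f(x, x y) = 1/sqrt(1 + y^2) is smooth on the unit square, so even a plain midpoint rule recovers the exact value ln(1 + sqrt(2)).

```python
import math

# Duffy-style change of variables for I = ∫_0^1 ∫_0^x f(x, y) dy dx:
# substituting y -> x*y (Jacobian x) maps the region onto the unit square,
# giving I = ∫_0^1 ∫_0^1 f(x, x*y) * x dy dx.
# Illustrative integrand (an assumption, the document's formula is elided):
# f(x, y) = 1/sqrt(x^2 + y^2); the transformed integrand is
# x / (x*sqrt(1 + y^2)) = 1/sqrt(1 + y^2), with no singularity at all.

def f(x, y):
    return 1.0 / math.sqrt(x * x + y * y)

def transformed(x, y):
    return f(x, x * y) * x          # integrand after y -> x*y, times Jacobian

def midpoint_2d(g, n):
    """Composite midpoint rule over the unit square with n x n cells."""
    h = 1.0 / n
    return sum(g((i + 0.5) * h, (j + 0.5) * h)
               for i in range(n) for j in range(n)) * h * h

estimate = midpoint_2d(transformed, 200)
exact = math.log(1.0 + math.sqrt(2.0))   # ∫_0^1 dy/sqrt(1+y^2) = asinh(1)
print(estimate, exact)
```

Note that the midpoint rule is applied to the transformed integrand only; applying it to f directly over the triangular region would have to cope with the singularity at the origin.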
Now consider the integral
which is equivalent to the sum
The first integral of that sum is transformed as in (
9); for the second one, though, the change of
(0, 1]× (1, x] into
(0, 1]× (0, 1] by
y→x+(1-x)y has the Jacobian
1-x, which will not bring the desired cancellation of terms. Fortunately, a change of the order of integration:
makes the second integral amenable to the transformation in (
10):
(In the second integral in equation (3) the variables were permuted; this is not necessary for the mathematical equivalence, but it makes computing the integrals faster.)
So the integral (
11) can be rewritten as an integral with no singularities:
If the integration variables were not permuted in (
12), the integral (
13) would be rewritten as
That is a more complicated integral, as its integrand is not simple along both axes; consequently it is harder to compute than the former one.
Here is the number of sampling points for the simpler integral.
Out[58]=  

Here is the number of sampling points for the more complicated integral.
Out[59]=  

NIntegrate uses a generalization to arbitrary dimension of the technique in the example above. (In [
Duffy82] only the three-dimensional case is described.) An
example implementation, together with a description of the generalization, is given below.
Here is a table that compares the different singularity handlings for . (The profiling function NIntegrateProfile defined earlier is used.)
Out[75]=  

Duffy's Coordinates Strategy
When Duffy's coordinates are applicable, a numerical integration result is obtained faster if Duffy's coordinate change is made before the actual integration begins. Making the transformation beforehand, though, requires knowledge at which corner(s) of the integration region the singularities occur. The
"DuffyCoordinates" strategy in
NIntegrate facilitates such a pre-integration transformation.
Here is an example with an integrand that has singularities at two different corners of its integration region.
Out[84]=  

  
Method  {"GlobalAdaptive","SingularityDepth">}  the strategy with which the integration will be made after applying Duffy's coordinates transformation 
"Corners"  All  a vector or a list of vectors that specify the corner(s) to apply the Duffy's coordinates transformation to; the elements of the vectors are either 0 or 1; each vector length equals the dimension of the integral 
"DuffyCoordinates" options.
The first thing
"DuffyCoordinates" does is to rescale the integral into one that is over the unit hypercube (or square, or cube). If only one corner is specified
"DuffyCoordinates" applies Duffy's coordinates transformation as described
earlier. If more than one corner is specified, the unit hypercube of the previous step is partitioned into disjoint cubes with side length one-half. Consider the integrals over these disjoint cubes. Duffy's coordinates transformation is applied to the ones that have a vertex specified as singular. The rest are transformed into integrals over the unit cube. Since all integrals at this point have the unit cube as their integration region, they are summed, and that sum is given to
NIntegrate with a
Method option that is the same as the one given to
"DuffyCoordinates".
The actual integrand used by
"DuffyCoordinates" can be obtained through
NIntegrate`DuffyCoordinatesIntegrand, which has the same arguments as
NIntegrate.
Here is an example for the "DuffyCoordinates" integrand of a threedimensional function that is singular at one of the corners of the integration region.
Out[78]=  

Here is an example for the "DuffyCoordinates" integrand for a twodimensional function that is singular at two of the corners of the integration region.
Out[79]=  

Integration with "DuffyCoordinates".
Out[80]=  

Integration with the default NIntegrate option settings, which is much slower than the previous one.
Out[81]=  

Here is another example of a speedup by "DuffyCoordinates".
Out[82]=  

Integration with the default NIntegrate option settings, which is much slower than the previous one.
Out[83]=  

Duffy's Coordinates Generalization and Example Implementation
The implementation is based on the following two theorems.
Theorem 1: A
d-dimensional cube can be divided into
d disjoint, geometrically equivalent
d-dimensional pyramids (with
(d-1)-dimensional cubes as bases) and with coinciding apexes.
Proof: Assume the side length of the cube is
1, the cube has a vertex at the origin, and the coordinates of all other vertices are
1 or
0. Consider the
(d-1)-dimensional cube walls
w_s = {c_1, ..., c_{s-1}, 1, c_{s+1}, ..., c_d}, where
c_i ∈ [0, 1]. Their number is exactly
d, and the origin does not belong to them. Each of the
w_s walls can form a pyramid with the origin. This proves the theorem.
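Theorem 1 can be spot-checked numerically. The sketch below (an illustration, not part of NIntegrate) classifies uniform random points in the unit cube by their largest coordinate: each of the d pyramids should capture a 1/d share of the volume.

```python
import random

# Monte Carlo check of Theorem 1: the pyramids P_s = {x in [0,1]^d :
# x_i <= x_s for all i} are disjoint up to measure zero, and each has
# volume 1/d.  Classify uniform random points by their largest coordinate.

def pyramid_volumes(d, n, seed=7):
    rng = random.Random(seed)
    counts = [0] * d
    for _ in range(n):
        x = [rng.random() for _ in range(d)]
        counts[x.index(max(x))] += 1        # wall index s = argmax_i x_i
    return [c / n for c in counts]

vols = pyramid_volumes(3, 200_000)
print(vols)        # each fraction should be close to 1/3
```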
Here is a plot that illustrates the theorem in 3D.
Out[92]=  

If the
d axes are denoted
x_1, x_2, ..., x_d, the pyramid formed with the wall
w_1 = {1, c_2, ..., c_d} can be described as
0 ≤ x_1 ≤ 1, 0 ≤ x_i ≤ x_1, i ∈ {2, ..., d}. Let
σ^i denote the permutation derived after rotating
{1, ..., d} cyclically
i times to the left (i.e., applying
RotateLeft to
{1, ..., d}
i times). Then the following theorem holds:
Theorem 2: For any integral over the unit cube the following equalities hold:
Proof: The first equality follows from Theorem 1. The second equality is just a change of variables that transforms a pyramid to a cube.
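Theorem 2 translates directly into code. The sketch below (an illustration, not the Mathematica implementation given later in this section) maps each pyramid to the unit cube by x_1 = t_1, x_i = t_1 t_i with Jacobian t_1^{d-1}, sums over the d cyclic rotations of the coordinates, and checks the result for an assumed test integrand f(x, y) = 1/sqrt(x^2 + y^2), whose integral over the unit square has the known value 2 ln(1 + sqrt(2)).

```python
import itertools, math

def duffy_integrate(f, d, n):
    """Integrate f over [0,1]^d by summing the d Duffy-transformed pyramid
    integrals, each evaluated with a composite midpoint rule (n^d cells)."""
    h = 1.0 / n
    nodes = [(i + 0.5) * h for i in range(n)]
    total = 0.0
    for t in itertools.product(nodes, repeat=d):
        base = [t[0]] + [t[0] * ti for ti in t[1:]]   # x_1=t_1, x_i=t_1*t_i
        jac = t[0] ** (d - 1)                         # Jacobian t_1^(d-1)
        for s in range(d):
            x = base[s:] + base[:s]                   # cyclic permutation
            total += f(*x) * jac
    return total * h ** d

# Point singularity at the origin; each transformed integrand is smooth.
f = lambda x, y: 1.0 / math.sqrt(x * x + y * y)
est = duffy_integrate(f, 2, 400)
exact = 2.0 * math.log(1.0 + math.sqrt(2.0))          # known closed form
print(est, exact)
```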
Here is a function that gives the rules and the Jacobian for the transformation of a hypercube with a specified side into a region. 
Here is an example of unitsquare to infinite region rescaling.
Out[96]=  

Here is a function that computes the integrals obtained by the Duffy's coordinates technique when the singularity is at the origin. 
Here is a function that computes the integrals obtained by the Duffy's coordinates technique for a specified corner of the hypercube where the singularity occurs. 
Using Duffy's coordinates is much faster than using no singularity handling (see the next example).
Out[110]=  

Integration using no singularity handling.
Out[111]=  

Of course, the internal implementation of NIntegrate gives similar performance results.
Out[107]=  

Ignoring the Singularity
Another way of handling a singularity is to ignore it.
NIntegrate ignores sampling points that coincide with a singular point.
Consider the following integral that has a singular point at
1.
The integrand goes to
-∞ when the integration variable is close to
1.
With its default options
NIntegrate has a sampling point at
1, as can be seen from the following.
But for
NIntegrate[Log[(1-x)^2], {x, 0, 2}] the evaluation monitor has not picked a sampling point that is
1.
Sampling points that belong to the interval [1-10^{-6}, 1+10^{-6}].
Out[120]=  

In other words, the singularity at
1 is ignored. Ignoring the singularity is equivalent to having an integrand that is zero at the singular sampling point.
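The "treat the singular value as zero" behavior can be imitated with a few lines of code. The sketch below (an assumption-level illustration, not NIntegrate's implementation) integrates log((1-x)^2) over [0, 2], whose exact value is -4, with a midpoint rule whose integrand returns 0 at the singular point.

```python
import math

# "Ignoring the singularity": sample the integrand as usual, but treat its
# value at a singular abscissa as 0.  The midpoint rule below integrates
# log((1-x)^2) over [0, 2]; the exact value is -4.

def f(x):
    if x == 1.0:                 # singular point: ignore it (treat as 0)
        return 0.0
    return math.log((1.0 - x) ** 2)

def midpoint(g, a, b, n):
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

est = midpoint(f, 0.0, 2.0, 20000)
print(est)                       # slowly approaches -4
```

The convergence is slow, which matches the remark below that specifying the singular point in the variable range is far more efficient.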
Note that the integral is easily integrated if the singular point is specified in the variable range. Following are the numbers of sampling points and timings for
NIntegrate with the singular and nonsingular range specifications.
Integration with the singular point specified.
Out[123]=  

Integration by ignoring the singularity.
Out[122]=  

A more interesting example of ignoring the singularity is using Bessel functions in the denominator of the integrand.
The result can be checked using
NIntegrate with singular range specification with the zeros of
BesselJ[2, x] (see
BesselJZero).
Needless to say, the last integration required the calculation of the
BesselJ zeros. The former one "just integrates" without any integrand analysis.
Ignoring the singularity may not work with oscillating integrands.
For example, these two integrals are equivalent.
Out[126]=  

However, if the integrand is monotonic in a neighborhood of its singularity, or more precisely, if it can be majorized by a monotonic integrable function, it can be shown that by ignoring the singularity, convergence will be reached.
For theoretical justification and practical recommendations of ignoring the singularity see [
DavRab65IS] and [
DavRab84].
Automatic Singularity Handling
OneDimensional Integration
When the option
"SingularityHandler" is set to
Automatic for a onedimensional integral,
"DoubleExponential" is used on regions that are obtained by
"SingularityDepth" number of partitionings. As explained
earlier, this region will not be partitioned further as long as the
"DoubleExponential" singularity handler works over it. If the error estimate computed by
"DoubleExponential" does not evolve in a way predicted by the theory of the
doubleexponential quadrature, the singularity handling for this region is switched to
IMT.
As explained in
"Convergence Rate", the following dependency of the error is expected with respect to the number of doubleexponential sampling points:
where
c is a positive constant. Consider the relative errors
E_{m} and
E_{n} of two consecutive doubleexponential quadrature calculations, made with
m and
n sampling points, respectively, for which
m<n. Assuming
E_{m}<1,
E_{n}<1, and
E_{m}>E_{n} it should be expected that
The switch from
"DoubleExponential" to
IMT happens when:
(i) the region error estimate is larger than the absolute value of the region integral estimate (hence the relative error is not smaller than
1);
(ii) the inequality (2) is not true in two different instances;
(iii) the integrand values calculated with the doubleexponential transformation do not decay fast enough.
Here is an example of a switch from "DoubleExponential" to IMT singularity handling. On the plot the integrand is sampled at the x coordinates in the order of the y coordinates. The patterns of the sampling points show the change from Gaussian quadrature (y ∈ [0, 97]) to double-exponential quadrature (y ∈ [98, 160]), which is later replaced by Gaussian quadrature using the IMT variable transformation (y ∈ [160, 400]).
Out[145]=  

Multidimensional Integration
When the option
"SingularityHandler" is set to
Automatic for a multidimensional integral, both
"DuffyCoordinates" and
IMT are used.
A region needs to meet the following conditions in order for
"DuffyCoordinates" to be applied:
the region is obtained by
"SingularityDepth" number of bisections (or partitionings) along each axis;
the region is a corner of one of the initial integration regions (the specified integration region can be partitioned into integration regions by piecewise handling or by userspecified singularities).
A region needs to meet the following conditions in order for
IMT to be applied:
the region is obtained by
"SingularityDepth" number of bisections (or partitionings) along predominantly one axis;
the region is not a corner region and it is on a side of one of the initial integration regions.
In other words,
IMT is applied to regions that are derived through
"SingularityDepth" number of partitionings but do not satisfy the conditions of the
"DuffyCoordinates" automatic application.
IMT is effective if the singularity is along one of the axes. Using
IMT for point singularities can be counterproductive.
Sampling points of two-dimensional integration, , with Automatic (left) and "DuffyCoordinates" (right) singularity handling. It can be seen that the automatic singularity handling uses almost twice as many points as "DuffyCoordinates". To illustrate the effect of the singularity handlers, they are applied after two bisections.
Out[134]=  

Timings for the integral, , with singularity handlers Automatic, "DuffyCoordinates", and IMT and with no singularity handling. The integral has a point singularity at {0, 0}.
Out[47]//TableForm= 
 

Timings for the integral, , singular along the y axis, with singularity handlers Automatic, "DuffyCoordinates", and IMT, and with no singularity handling.
Out[46]//TableForm= 
 

Cauchy Principal Value Integration
To evaluate the Cauchy principal value of an integral,
NIntegrate uses the strategy
PrincipalValue.
Cauchy principal value integration with singular point at 2.
Out[153]=  

In
NIntegrate,
PrincipalValue uses the strategy specified by its
Method option to work directly on those regions where there is no difficulty, and to pair values symmetrically about the specified singularities in order to take advantage of the cancellation of the positive and negative values.
  
Method  Automatic  method specification used to compute estimates over subregions 
SingularPointIntegrationRadius  Automatic  a number ε_1 or a list of numbers {ε_1, ε_2, ..., ε_n} that correspond to the singular points b_1, b_2, ..., b_n in the range specification; with each pair (b_i, ε_i) an integral of the form  is formed 
"PrincipalValue" options.
where each of the integrals is evaluated using
NIntegrate with
Method->methodspec. If
ε is not given explicitly, a value is chosen based upon the differences
b-a and
c-b. The option
SingularPointIntegrationRadius can take a list of numbers whose length equals the number of singular points. For the derivation of the formula see [
DavRab84].
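The symmetric pairing can be sketched in a few lines. The example below is an illustration (not NIntegrate's code): it computes the principal value of the integral of x/(x-2) over [0, 5], splitting off a symmetric interval of radius 1 around the pole at 2, where the paired values f(2+t) + f(2-t) cancel the 1/t blow-up analytically; the exact value is 5 + 2 ln(3/2).

```python
import math

# Principal-value sketch: around the singular point b, integrate the paired
# values f(b+t) + f(b-t) over a symmetric interval of radius r, where the
# positive and negative parts cancel; the remaining pieces are regular.
# Example: PV ∫_0^5 x/(x-2) dx = 5 + 2 ln(3/2).

def f(x):
    return x / (x - 2.0)

def midpoint(g, a, b, n=2000):
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

b, r = 2.0, 1.0
core = midpoint(lambda t: f(b + t) + f(b - t), 0.0, r)   # paired, cancels 1/t
outer = midpoint(f, 0.0, b - r) + midpoint(f, b + r, 5.0)
pv = core + outer
exact = 5.0 + 2.0 * math.log(1.5)
print(pv, exact)
```

The open midpoint rule is used for the core so that t = 0, where the unpaired terms are individually infinite, is never sampled.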
This finds the Cauchy principal value of .
Out[14]=  

Here is the Cauchy principal value of . Note that there are two singularities that need to be specified.
Out[114]=  

This checks the value. The result would be 0 if everything were done exactly.
Out[31]=  

It should be noted that the singularities must be located exactly. Since the algorithm pairs together the points on both sides of the singularity, if the singularity is slightly mislocated the cancellation will not be sufficiently good near the pole and the result can be significantly in error if
NIntegrate converges at all.
Sampling Points Visualization
Consider the calculation of the principal value of
The following examples show two ways of visualizing the sampling points. The first shows the sampling points used. Since the integrand is modified in order to do the principal value integration, it might be desired to see the points at which the original integrand is evaluated. This is shown on the second example.
Here are sampling points used by NIntegrate. There are no points over the interval , because of the PrincipalValue integration, and there are sampling points over .
Out[156]=  

This defines a function which accumulates the argument values given to the integrand. 
Here are the points at which the integrand has been evaluated. Note the symmetric pattern over the interval .
Out[168]=  

DoubleExponential Strategy
The double-exponential quadrature consists of applying the trapezoidal rule after a variable transformation. The double-exponential quadrature was proposed by Mori and Takahasi in 1974, and it was inspired by the so-called
IMT rule and TANH rule. The transformation is given the name "double-exponential" since its derivative decreases double-exponentially when the integrand's variable approaches the ends of the integration region.
  
"ExtraPrecision"  50  maximum extra precision to be used internally 
"SymbolicProcessing"  Automatic  number of seconds to do symbolic preprocessing 
"DoubleExponential" options.
The doubleexponential strategy can be used for onedimensional and multidimensional integration. When applied to multidimensional integrals it uses the Cartesian product of the trapezoidal rule.
A double-exponential transformation
x = ψ(t) transforms the integral
where
(a, b) can be finite, half-infinite (
b = ∞), or infinite (
a = -∞, b = ∞). The integrand
f(x) must be analytic in
(a, b) and might have a singularity at one or both of the end points.
The transformed integrand decreases double-exponentially, that is,
|f(ψ(t)) ψ'(t)| ≈ exp(-c exp(|t|)) as
t → ±∞.
The function
ψ(t) is analytic in
(-∞, ∞). It is known that for an integral like (
14) of an analytic integrand the trapezoidal rule is an optimal rule [
Mori74].
The transformations used for the different types of integration regions are:
where
a and
b are finite numbers.
The trapezoidal rule is applied to (
15):
The terms in (
16) decay doubleexponentially for large enough
i. Therefore the summation in (
17) is cut off at the terms that are too small to contribute to the total sum. (A criterion similar to (
18) for the local adaptive strategy is used. See also the following
doubleexponential example implementation.)
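The scheme can be condensed into a short sketch. The code below is an illustration under stated assumptions (it uses the standard finite-interval tanh-sinh transformation x = (a+b)/2 + ((b-a)/2) tanh((π/2) sinh t), which is one common choice of double-exponential transformation, with a fixed step and truncation range rather than the adaptive cutoff criterion described above). The endpoint singularity of x^(-1/2) is absorbed by the double-exponentially decaying weight.

```python
import math

# Minimal tanh-sinh (double-exponential) rule for ∫_a^b f(x) dx:
# substitute x = (a+b)/2 + ((b-a)/2)*tanh((π/2)*sinh t) and apply the
# trapezoidal rule in t.  The fixed step h and range tmax are chosen so
# that the truncated terms are negligible in double precision.
# Example: ∫_0^1 x^(-1/2) dx = 2.

def tanh_sinh(f, a, b, h=0.0625, tmax=4.5):
    total = 0.0
    n = int(tmax / h)
    for k in range(-n, n + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        # stable form of a + (b-a)*(1 + tanh(u))/2 near the endpoint a:
        x = a + (b - a) / (1.0 + math.exp(-2.0 * u))
        w = (b - a) * 0.25 * math.pi * math.cosh(t) / math.cosh(u) ** 2
        total += w * f(x)
    return h * total

est = tanh_sinh(lambda x: 1.0 / math.sqrt(x), 0.0, 1.0)
print(est)        # close to 2 despite the endpoint singularity
```

The stable expression for x avoids evaluating f at exactly x = a when tanh(u) rounds to -1 for large |u|.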
The strategy
"DoubleExponential" employs the doubleexponential quadrature.
The
"DoubleExponential" strategy works best for analytic integrands; see
"Comparison of DoubleExponential and Gaussian Quadrature".
"DoubleExponential" uses the Cartesian product of doubleexponential quadratures for multidimensional integrals.
Cartesian doubleexponential quadrature.
Out[48]=  

As with the other Cartesian product rules, if
"DoubleExponential" is used for dimensions higher than three, it might be very slow due to combinatorial explosion.
The following plot illustrates the Cartesian product character of the "DoubleExponential" multidimensional integration.
Out[50]=  

Doubleexponential quadrature can be used for singularity handling in adaptive strategies; see
"Singularity Handling".
MinRecursion and MaxRecursion
The option
MinRecursion has the same meaning and functionality as it does for
"GlobalAdaptive" and
"LocalAdaptive" described in "
MinRecursion and MaxRecursion".
MaxRecursion for
"DoubleExponential" restricts how many times the trapezoidal quadrature estimates are improved; see
"Example Implementation of DoubleExponential Quadrature".
Comparison of DoubleExponential and Gaussian Quadrature
The
"DoubleExponential" strategy works best for analytic integrands. For example, the following integral is done by
"DoubleExponential" three times faster than the Gaussian quadrature (using a global adaptive algorithm).
Integration with "DoubleExponential".
Out[215]=  

Integration with Gaussian quadrature. (The default strategy of NIntegrate, "GlobalAdaptive", uses by default a Gauss-Kronrod integration rule with 5 Gaussian points and 6 Kronrod points.)
Out[51]=  

Since
"DoubleExponential" converges doubleexponentially with respect to the number of evaluation points, increasing the precision goal slightly increases the work done by
"DoubleExponential". This is illustrated for two integrals,
and
. Each table entry shows the error and number of evaluations.
Doubleexponential quadrature and Gaussian quadrature for . Increasing the precision goal does not change the number of sampling points used by "DoubleExponential".
Out[219]=  

Doubleexponential quadrature and Gaussian quadrature for . Increasing the precision goal does not change the number of sampling points used by "DoubleExponential". (The integrations are done without symbolic preprocessing.)
Out[222]=  

On the other hand, for nonanalytic integrands
"DoubleExponential" is quite slow, and a global adaptive algorithm using Gaussian quadrature can resolve the singularities easily.
"DoubleExponential" needs more than 10000 integrand evaluations to compute this integral with a nonanalytic integrand.
Out[53]=  

Gaussian quadrature is much faster for the integral.
Out[54]=  

Further,
"DoubleExponential" might be slowed down by integrands that have nearly discontinuous derivatives, that is, integrands that are not "very" analytical.
Here is an example with a not "very" analytical integrand.
Out[226]=  

Again, Gaussian quadrature is much faster.
Out[227]=  

Here are the plots of the integrand and its derivative.
Out[228]=  

Convergence Rate
This section demonstrates that the asymptotic error of the doubleexponential quadrature in terms of the number
n of evaluation points used is
where
c is a positive constant.
This defines a double-exponential integration function that returns an integral estimate and the number of points used. 
This finds the errors and number of evaluation points for a range of step sizes of the trapezoidal rule. 
This fits through the logarithms of the errors; see ( 19). 
Here is the fitted function. The additive term 30.48 is just a translation parameter.
Out[240]=  

You see that the errors fit the model ( 20):
Out[241]=  

Example Implementation of DoubleExponential Quadrature
Following is an example implementation of the doubleexponential quadrature with the finite region variable transformation (transformation (
21) earlier).
This is a definition of a function that applies the trapezoidal rule to a transformed integrand. The function uses ( 22) and is made to reuse integral estimates computed with a step twice as large. 
This is a definition of a simple doubleexponential strategy, which finds the integral of the function f over the finite interval {a, b} with relative error tol. 
This defines a function that is singular at 0. 
The two results are the same.
This defines an oscillating function. 
The relative error is within the prescribed tolerance.
Out[182]=  

"Trapezoidal" Strategy
The
"Trapezoidal" strategy gives optimal convergence for analytic periodic integrands when the integration interval is exactly one period.
  
"ExtraPrecision"  50  maximum extra precision to be used internally 
"SymbolicProcessing"  Automatic  number of seconds to do symbolic preprocessing 
"Trapezoidal" options.
"Trapezoidal" takes the same options as
"DoubleExponential". If the integration ranges are infinite or semiinfinite,
"Trapezoidal" becomes
"DoubleExponential".
For theoretical background, examples, and explanations of periodic functions integration (with trapezoidal quadrature) see [
Weideman2002].
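The optimal convergence for periodic integrands is easy to observe. The sketch below is an illustration (the integrand is an assumed choice, not taken from the document): the trapezoidal rule over exactly one period of the analytic periodic function 1/(2 + cos θ) converges geometrically to the known value 2π/√3.

```python
import math

# For an analytic periodic integrand integrated over exactly one period,
# the trapezoidal rule converges geometrically in the number of points.
# Illustrative integrand: ∫_0^{2π} dθ/(2 + cos θ) = 2π/√3.

def periodic_trapezoid(f, period, n):
    h = period / n
    # endpoints coincide for a periodic integrand, so equal weights suffice
    return h * sum(f(k * h) for k in range(n))

exact = 2.0 * math.pi / math.sqrt(3.0)
for n in (4, 8, 16, 32):
    est = periodic_trapezoid(lambda t: 1.0 / (2.0 + math.cos(t)),
                             2.0 * math.pi, n)
    print(n, abs(est - exact))   # the error shrinks geometrically
```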
Out[109]=  
Here is a table that shows the number of sampling points for different values of the parameter t used by "GlobalAdaptive" and "Trapezoidal" respectively for the integral .
Out[35]//TableForm= 
 

Example Implementation
This function makes a trapezoidal quadrature integral estimate with specified points. 
This function improves a trapezoidal quadrature integral estimate using sampling points between the old ones. 
This function is an interface to the preceding one. 
Here is a definition of a (Bessel) function. 
Here is the trapezoidal quadrature estimate.
Out[274]=  

The relative error is within the prescribed tolerance.
Out[279]=  
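A Python analogue of the progressive refinement used above can be written in a few lines. The integrand choice is an assumption for illustration: the integral representation J_0(1) = (1/π) ∫_0^π cos(sin θ) dθ, which is periodic and analytic, so each halving of the step reuses all previous samples and only the midpoints are new.

```python
import math

# Progressive trapezoidal refinement: T(h/2) = T(h)/2 + (h/2) Σ f(midpoints),
# so halving the step reuses every previously computed integrand value.
# Illustrative integrand: J_0(1) = (1/π) ∫_0^π cos(sin θ) dθ.

def f(t):
    return math.cos(math.sin(t))

def progressive_trapezoid(g, a, b, tol=1e-12, max_level=20):
    h = b - a
    estimate = 0.5 * h * (g(a) + g(b))
    for _ in range(max_level):
        cells = round((b - a) / h)
        mids = sum(g(a + (k + 0.5) * h) for k in range(cells))
        new = 0.5 * estimate + 0.5 * h * mids
        if abs(new - estimate) < tol * abs(new):
            return new
        estimate, h = new, 0.5 * h
    return estimate

j0 = progressive_trapezoid(f, 0.0, math.pi) / math.pi
# independent check: J_0(1) = Σ (-1)^k / (4^k (k!)^2)
series = sum((-1) ** k / (4.0 ** k * math.factorial(k) ** 2)
             for k in range(12))
print(j0, series)
```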

Oscillatory Strategies
The oscillatory strategies of
NIntegrate are for one-dimensional integrals. Generally in quadrature, the algorithms for finite-region integrals are different from the algorithms for infinite regions.
NIntegrate uses Chebyshev expansions of the integrand and the global adaptive integration strategy for finite region oscillatory integrals. For infinite oscillatory integrals
NIntegrate uses either a modification of the doubleexponential algorithm or sequence summation acceleration over the sequence of integrals with regions between the integrand's zeros.
Here is an example that uses both algorithms.
Out[13]=  

NIntegrate automatically detects oscillatory (onedimensional) integrands, and automatically decides which algorithm to use according to the integrand's range.
The integrals detected as being of oscillatory type have the form
in which the oscillating kernel
k (x) is of the form:
1. sin ( x^{p}+c),
cos ( x^{p}+c),
^{ xp} for
(a, b) finite;
2. sin ( x^{p}+c),
cos ( x^{p}+c),
^{ xp},
J_{} ( x^{p}+c),
Y_{} ( x^{p}+c),
H_{}^{ (1)} ( x^{p}+c),
H_{}^{ (2)} ( x^{p}+c),
j_{} ( x^{p}+c), or
y_{} ( x^{p}+c) for
(a, b) infinite or semiinfinite.
In these oscillating kernel forms
,
c and
are real constants, and
p is a positive integer.
Finite Region Oscillatory Integration
Modified ClenshawCurtis quadrature ([
PiesBrand75][
PiesBrand84]) is for finite region onedimensional integrals of the form
where
a,
b,
k,
c,
p are finite real numbers.
The modified ClenshawCurtis quadrature rule approximates
f(x) with a single polynomial through a Chebyshev polynomial expansion. This leads to simplified computations because of the orthogonality of the Chebyshev polynomials with sine and cosine functions. The modified Clenshaw-Curtis quadrature rule is used with the strategy
"GlobalAdaptive". For smooth
f (x) the modified ClenshawCurtis quadrature is usually superior [
KrUeb98] to other approaches for oscillatory integration (such as Filon's quadrature and multi-panel integration between the zeros of the integrand).
Modified ClenshawCurtis quadrature is quite good for highly oscillating integrals of the form (
23). For example, modified ClenshawCurtis quadrature uses less than a hundred integrand evaluations for both
and
.
Number of integrand evaluations for modified ClenshawCurtis quadrature for slowly oscillating kernel.
Out[1]=  

Timing and integral estimates for modified ClenshawCurtis quadrature for slowly oscillating kernel.
Out[3]=  

Number of integrand evaluations for modified ClenshawCurtis quadrature for highly oscillating kernel.
Out[5]=  

Timing and integral estimates for modified ClenshawCurtis quadrature for highly oscillating kernel.
Out[6]=  

On the other hand, without symbolic preprocessing, the default
NIntegrate method—
"GlobalAdaptive" strategy with a GaussKronrod rule—uses thousands of evaluations for
, and it cannot integrate
.
Number of integrand evaluations for Gaussian quadrature for slowly oscillating kernel.
Out[7]=  

Timing and integral estimates for Gaussian quadrature for slowly oscillating kernel.
Out[8]=  

Number of integrand evaluations for Gaussian quadrature for highly oscillating kernel.
Out[9]=  

Timing and integral estimates for Gaussian quadrature for highly oscillating kernel.
Out[10]=  

Extrapolating Oscillatory Strategy
The
NIntegrate strategy
"ExtrapolatingOscillatory" is for oscillating integrals in infinite onedimensional regions. The strategy uses sequence convergence acceleration for the sum of the sequence that consists of each of the integrals with regions between two consecutive zeros of the integrand.
Here is an example of an integration using "ExtrapolatingOscillatory".
Out[294]=  

  
Method  GlobalAdaptive  integration strategy used to integrate between the zeros and which will be used if ExtrapolatingOscillatory fails 
"SymbolicProcessing"  Automatic  number of seconds to do symbolic processing 
"ExtrapolatingOscillatory" options.
where the function
k (x) is the oscillating kernel and the function
f (x) is smooth. Let
z_{i} be the zeros of
k (x) enumerated from the lower (finite) integration bound, that is, the inequality
a≤ z_{1}< z_{2}<...< z_{i} <... holds. If the integral (
24) converges then the sequence
converges too. The elements of the sequence (
25) are the partial sums of the sequence
Often a good estimate of the limit of the sequence (
26) can be computed with relatively few elements of it through some convergence acceleration technique.
The
"Oscillatory" strategy uses
NSum with Wynn's extrapolation method for the integrals in (
27). Each integral in (
28) is calculated by
NIntegrate without oscillatory methods.
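The whole approach fits in a short sketch. The code below is an illustration of the extrapolation idea (Wynn's epsilon algorithm implemented directly, with Simpson's rule standing in for the non-oscillatory sub-integrations): it evaluates ∫_0^∞ (sin x)/x dx = π/2 from the integrals between consecutive zeros kπ.

```python
import math

# Extrapolating-oscillatory sketch for ∫_0^∞ (sin x)/x dx = π/2:
# integrate between consecutive zeros kπ and (k+1)π with Simpson's rule,
# then accelerate the alternating sequence of partial sums with Wynn's
# epsilon algorithm.

def f(x):
    return 1.0 if x == 0.0 else math.sin(x) / x

def simpson(g, a, b, n=256):          # n must be even
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4.0 * sum(g(a + k * h) for k in range(1, n, 2))
    s += 2.0 * sum(g(a + k * h) for k in range(2, n, 2))
    return s * h / 3.0

partial, s = [], 0.0
for k in range(12):                   # partial sums up to the k-th zero
    s += simpson(f, k * math.pi, (k + 1) * math.pi)
    partial.append(s)

def wynn_epsilon(seq):
    """Wynn's epsilon algorithm; returns the highest even-column estimate."""
    prev = [0.0] * (len(seq) + 1)     # epsilon_{-1} column
    curr = list(seq)                  # epsilon_0 column (the partial sums)
    best = curr[-1]
    for m in range(1, len(seq)):
        nxt = []
        for i in range(len(curr) - 1):
            d = curr[i + 1] - curr[i]
            if d == 0.0:              # converged to machine precision
                return best
            nxt.append(prev[i + 1] + 1.0 / d)
        prev, curr = curr, nxt
        if m % 2 == 0:                # even columns are the estimates
            best = curr[-1]
    return best

est = wynn_epsilon(partial)
print(est, math.pi / 2.0)
```

The raw partial sums converge only like 1/k, while the extrapolated value is accurate to many digits with a dozen terms.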
The
"Oscillatory" strategy applies its algorithm to oscillating kernels
k (x) in (
29) that are of the form
sin(ω x^p + c),
cos(ω x^p + c),
J_ν(ω x^p + c),
Y_ν(ω x^p + c),
H_ν^(1)(ω x^p + c),
H_ν^(2)(ω x^p + c),
j_ν(ω x^p + c), or
y_ν(ω x^p + c), where
ω, c,
p, and
ν are real constants.
Example Implementation
The following example implementation illustrates how the
"Oscillatory" strategy works.
Here is a definition of an oscillation function that will be integrated over the interval [0, ∞). The zeros of the oscillating function sin(ω x) are . 
Here is a plot of the oscillatory function in the interval [0, 10].
Out[89]=  

This is a definition of a function that integrates between two consecutive zeros. The zeros of the oscillating function k(x) = sin(ω x) are . 
Here is the integral estimate computed by sequence convergence acceleration (extrapolation).
Out[6]=  

Here is the exact integral value.
Out[7]=  

The integral estimate is very close to the exact value.
Out[8]=  

Here is another check using the "ExtrapolatingOscillatory" strategy.
Out[94]=  

The integral estimate by "ExtrapolatingOscillatory" is very close to the exact value.
Out[95]=  

DoubleExponential Oscillatory Integration
The strategy
"DoubleExponentialOscillatory" is for slowly decaying oscillatory integrals over onedimensional infinite regions that have integrands of the form
sin(ω x^p + c) f(x),
cos(ω x^p + c) f(x), or
e^{i ω x^p} f(x), where
x is the integration variable, and
ω, c, p are constants.
Integration with "DoubleExponentialOscillatory".
Out[2]=  

  
Method  None  integration strategy which will be used if "DoubleExponentialOscillatory" fails 
"TuningParameters"  Automatic  tuning parameters of the error estimation 
"SymbolicProcessing"  Automatic  number of seconds to do symbolic processing 
Options of "DoubleExponentialOscillatory".
"DoubleExponentialOscillatory" is based on the strategy
"DoubleExponential", but instead of using a transformation that approaches the ends of the integration interval double-exponentially,
"DoubleExponentialOscillatory" uses a transformation that approaches the zeros of
sin(ω x^p + c) and
cos(ω x^p + c) double-exponentially. The theoretical foundations and properties of the algorithm are explained in [
OouraMori91], [
OouraMori99], [
MoriOoura93]. The implementation of
"DoubleExponentialOscillatory" uses the formulas and the integrator design in [
OouraMori99].
The algorithm of
"DoubleExponentialOscillatory" will be explained using the sine integral
Consider the following transformation
where
α and
β are constants satisfying
The parameters
α and
β are chosen to satisfy
(taken from [
OouraMori99]).
Transformation (
30) is applied to (
31) to obtain
Note that
disappeared in the sine term. The trapezoidal formula with equal mesh size
h applied to (
32) gives
which is approximated with the truncated series sum
M and
h are chosen to satisfy
The integrand decays doubleexponentially at large negative
n as can be seen from (
33). While the doubleexponential transformation, (
34) in the section
"DoubleExponential Strategy", also makes the integrand decay doubleexponentially at large positive
t, the transformation (
35) does not make the integrand decay at large positive
t. Instead it makes the sampling points approach the zeros of
sin(ω x) double-exponentially at large positive
t. Moreover
As is explained in [
OouraMori99], since
sin(ω x) is linear near any of its zeros, the integrand decreases double-exponentially as
x approaches a zero of
sin(ω x). This is the sense in which (
36) is considered a doubleexponential formula.
The relative error is assumed to satisfy
In [
OouraMori99] the suggested value for
A is
5.
Since the
DEO (I_{s}, h, N) formulas cannot be made progressive,
"DoubleExponentialOscillatory" (as proposed in [
OouraMori99]) computes between
2 and
4 integration estimates with different
h. If the desired relative error is
the integration steps are the following:
1. Choose
M=M_{1} such that
and compute (
37) with
M=M_{1}. Let the result be
I_{M1}.
2. Next, set
M_{2}=2M_{1}, and compute (
38) with
M=M_{2}. Let the result be
I_{M2}. The relative error of the first integration step
is assumed to be
ε_{M1}. From (
39)
ε_{M2} ≈ ε_{M1}^2, and therefore, if
is satisfied, where
is a robustness factor (by default
10)
"DoubleExponentialOscillatory" exits with result
I_{M2}.
3. If (
40) does not hold, compute
and compute (
41) with
M=M_{3}. If
"DoubleExponentialOscillatory" exits with result
I_{M3}.
4. If (
42) does not hold, compute
and compute (
43) with
M=M_{4}. Let the result be
I_{M4}. If
does not hold,
"DoubleExponentialOscillatory" issues the message
NIntegrate::deoncon. If the value of the
"DoubleExponentialOscillatory" method option is
None, then
I_{M4} is returned. Otherwise
"DoubleExponentialOscillatory" will return the result of
NIntegrate called with the
"DoubleExponentialOscillatory" method option.
the transformation corresponding to (
44) is
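The doubling-and-compare control flow of steps 1-4 can be sketched as follows. This is an illustrative Python sketch of the error control only, not the Wolfram Language implementation; the quadrature itself is abstracted into the `estimate` callback, and the function name is ours.

```python
def deo_control_loop(estimate, eps, robustness=10, m1=8):
    """Sketch of the "DoubleExponentialOscillatory" error-control loop:
    compute between 2 and 4 integral estimates with a doubled truncation
    order M, stopping when the extrapolated relative error satisfies the
    goal.  `estimate(M)` returns the truncated-series integral estimate."""
    m = m1
    prev = estimate(m)
    for _ in range(3):                 # at most 4 estimates in total
        m *= 2
        curr = estimate(m)
        # relative error of the previous step
        err1 = abs(curr - prev) / max(abs(curr), 1e-300)
        # the error of the current step is assumed to be roughly err1**2
        if robustness * err1 ** 2 <= eps:
            return curr, True
        prev = curr
    return prev, False                 # convergence not reached
```

A usage example: for a geometric partial sum 1 - 2^-M the loop converges after one doubling for a modest error goal.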
Generalized Integrals
Here is the symbolic computation of the regularized divergent integral .
Out[110]=  

"DoubleExponentialOscillatory" computes the nonregularized integral above in a generalized sense.
Out[111]=  

More about the properties of "DoubleExponentialOscillatory" for divergent Fourier-type integrals can be found in [MoriOoura93].
Non-Algebraic Multiplicand
Symbolic integration of an oscillatory integral.
Out[116]=  

If the oscillatory kernel is multiplied by a non-algebraic function, "DoubleExponentialOscillatory" still gives a good result.
Out[117]=  

Plots of the integrand and its oscillatory kernel.
Out[119]=  

Crude Monte Carlo and Quasi Monte Carlo Strategies
The crude Monte Carlo algorithm estimates a given integral by averaging integrand values over uniformly distributed random points in the integral's region. The number of points is incremented until the estimated standard deviation is small enough to satisfy the specified precision or accuracy goals. A Monte Carlo algorithm is called a quasi Monte Carlo algorithm if it uses equidistributed, deterministically generated sequences of points instead of uniformly distributed random points.
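The crude Monte Carlo loop just described can be sketched in Python. This is an illustrative sketch, not NIntegrate's implementation; the function name and the batching scheme are ours.

```python
import math
import random

def crude_monte_carlo(f, a, b, accuracy_goal=0.01, batch=1000,
                      max_points=50000, seed=1):
    """Crude Monte Carlo estimate of the integral of f over [a, b]:
    average integrand values over uniform random points, adding points
    until the estimated standard deviation of the estimate meets the
    accuracy goal (or the point budget is exhausted)."""
    rng = random.Random(seed)
    values = []
    while len(values) < max_points:
        values += [f(rng.uniform(a, b)) for _ in range(batch)]
        n = len(values)
        mean = sum(values) / n
        var = sum((v - mean) ** 2 for v in values) / (n - 1)
        # standard deviation of the integral estimate
        stderr = (b - a) * math.sqrt(var / n)
        if stderr <= accuracy_goal:
            break
    return (b - a) * mean, stderr
```

For example, integrating x^2 over [0, 1] (exact value 1/3) typically stops after the first batch, since the estimated standard deviation already meets the default goal.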
Here is a crude Monte Carlo integration.
Out[3]=  

Here is a crude quasi Monte Carlo integration.
Out[4]=  

  
Method  "MonteCarloRule"  Monte Carlo rule specification 
MaxPoints  50000  maximum number of sampling points 
"RandomSeed"  Automatic  a seed to reset the random generator 
"Partitioning"  1  partitioning of the integration region along each axis 
"SymbolicProcessing"  0  number of seconds to do symbolic preprocessing 
"MonteCarlo" options.
  
MaxPoints  50000  maximum number of sampling points 
"Partitioning"  1  partitioning of the integration region along each axis 
"SymbolicProcessing"  0  number of seconds to do symbolic preprocessing 
"QuasiMonteCarlo" options.
In Monte Carlo methods [KrUeb98] the d-dimensional integral ∫_V f(x) dx is interpreted as the following expected (mean) value:

∫_V f(x) dx = vol(V) E(f),

where E(f) is the mean value (the expectation) of the function f interpreted as a random variable with respect to the uniform distribution on V, that is, the distribution with probability density vol(V)^(-1) Boole(x∈V). Here Boole(x∈V) denotes the characteristic function of the region V, and vol(V) denotes the volume of V.
The crude Monte Carlo estimate is made with the integration rule
"MonteCarloRule". The formulas for the integral and error estimation are given in the section
"MonteCarloRule" in the tutorial
"NIntegrate Integration Rules".
If the original integration region V is partitioned into a set of disjoint subregions V_i, V=⋃_{i=1}^m V_i, then the integral estimate is the sum of the estimates over the subregions. The number of sampling points used on each subregion can in general be different, but in the Monte Carlo algorithms all n_i are equal (n_1=n_2=...=n_m). The partitioning V=⋃_i V_i is called stratification, and each V_i is called a stratum. Stratification can be used to improve crude Monte Carlo estimations. (The adaptive Monte Carlo algorithm uses recursive stratification.)
AccuracyGoal and PrecisionGoal
The default values for AccuracyGoal and PrecisionGoal are Infinity and 2, respectively, when NIntegrate's Monte Carlo algorithms are used.
MaxPoints
The option MaxPoints specifies the maximum number of (pseudo)random sampling points to be used to compute the Monte Carlo estimate of an integral.
Here is an example in which the maximum number of sampling points is reached and NIntegrate stops with a message.
Out[261]=  

"RandomSeed"
The value of the option "RandomSeed" is used to seed the random generator that produces the sampling points. In that respect the use of "RandomSeed" in the Monte Carlo methods is similar to the use of SeedRandom and RandomReal.
By using
"RandomSeed" the results of a Monte Carlo integration can be reproduced. The results of the following two runs are identical.
The following shows the first 20 points used in the Monte Carlo integrations.
Out[66]=  

Stratified Crude Monte Carlo Integration
In stratified sampling Monte Carlo integration you break the region into several subregions and apply the crude Monte Carlo estimate on each subregion separately.
From the expected (mean) value formula, Equation (45) at the beginning of "Crude Monte Carlo and Quasi Monte Carlo Strategies", you have the following. Let the region V be bisected into two half-regions, V_1 and V_2. Let E_i(f) be the expectation of f on V_i, and Var_i(f) be the variance of f on V_i. From the theorem in [PrFlTeuk92] you can see that stratified sampling gives a variance that is never larger than the crude Monte Carlo sampling variance.
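The variance reduction can also be checked empirically with a small Python sketch (illustrative only; the helper names are ours). The same total number of points is spread either over the whole interval (strata=1, the crude estimator) or evenly over several strata, and the empirical standard deviations of the two estimators are compared.

```python
import random

def stratified_estimate(f, n, strata, seed):
    """One Monte Carlo estimate of the integral of f over [0, 1] using
    `strata` equal subintervals with n/strata points each;
    strata=1 gives the crude (non-stratified) estimate."""
    rng = random.Random(seed)
    h = 1.0 / strata
    total = 0.0
    for i in range(strata):
        a = i * h
        m = n // strata
        total += h * sum(f(rng.uniform(a, a + h)) for _ in range(m)) / m
    return total

def empirical_std(strata, runs=200, n=100):
    """Empirical standard deviation of the estimator over many seeded runs,
    for the test integrand x^2."""
    f = lambda x: x * x
    vals = [stratified_estimate(f, n, strata, seed) for seed in range(runs)]
    mean = sum(vals) / runs
    return (sum((v - mean) ** 2 for v in vals) / (runs - 1)) ** 0.5
```

With four strata the empirical standard deviation comes out clearly smaller than the crude one, as the theorem predicts.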
There are two ways to specify strata for the
"MonteCarlo" strategy. One is to specify "singular" points in the variable range specifications, the other is to use the method suboption
"Partitioning".
Stratified crude Monte Carlo integration using variable ranges specifications.
Out[124]=  

Stratified crude Monte Carlo integration using the suboption "Partitioning".
Out[123]=  

If "Partitioning" is given a list of integers {p_1, p_2, ..., p_n} whose length n equals the number of integration variables, each dimension i of the integration region is divided into p_i equal parts. If "Partitioning" is given an integer p, all dimensions are divided into p equal parts.
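The semantics of "Partitioning" can be sketched in Python (an illustrative helper of ours, not the internal implementation): each axis range is split into p_i equal parts, and the strata are the Cartesian product of the per-axis pieces.

```python
from itertools import product

def strata_cells(ranges, partitioning):
    """Split each (a, b) axis range into p_i equal parts (an integer
    means the same p for every axis) and return all resulting cells."""
    if isinstance(partitioning, int):
        partitioning = [partitioning] * len(ranges)
    axes = []
    for (a, b), p in zip(ranges, partitioning):
        h = (b - a) / p
        axes.append([(a + i * h, a + (i + 1) * h) for i in range(p)])
    # a cell is one subinterval per axis
    return list(product(*axes))
```

For example, partitioning [0,1]×[0,2] with {2, 4} yields 2·4=8 rectangular strata.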
This graph demonstrates the stratified sampling specified with "Partitioning". Each cell contains 3 points, as specified by the "MonteCarloRule" option "Points".
Out[100]=  

Stratified Monte Carlo sampling can be specified if the integration variable ranges are given with intermediate singular points.
Stratified Monte Carlo sampling through specification of intermediate singular points.
Out[23]=  

Stratified sampling improves the efficiency of the crude Monte Carlo estimation: if the number of strata is s, the standard deviation of the stratified Monte Carlo estimation is s times smaller than the standard deviation of the crude Monte Carlo estimation. (See the following example.)
The following benchmark shows that stratification speeds up the convergence.
Out[121]=  

Convergence Speedup of the Stratified Monte Carlo Integration
The following example confirms that if the number of strata is s, the standard deviation of the stratified Monte Carlo estimation is s times smaller than the standard deviation of the crude Monte Carlo estimation.
Here is a stratified integration definition based on the expected (mean) value formula (46). 
Here the integral above is approximated with 1000 points for the number of strata running from 1 to 40. 
These are the ratios between the standard deviations of the stratified estimations and the standard deviation of the non-stratified, crude Monte Carlo estimation. 
Note that ratios[[i]] is the ratio for the Monte Carlo estimation with i strata. This allows you to try a least-squares fit of the function to ratios.
Out[128]=  

The fit shows a coefficient very close to 1, which confirms the rule of thumb that s strata give s-times faster convergence. This is the plot of the ratios and the least-squares fit.
Out[130]=  

Global Adaptive Monte Carlo and Quasi Monte Carlo Strategies
The global adaptive Monte Carlo and quasi Monte Carlo strategies recursively bisect the subregion with the largest variance estimate into two halves, and compute integral and variance estimates for each half.
Here is an example of adaptive Monte Carlo integration.
Out[1]=  

Adaptive (quasi) Monte Carlo uses the crude (quasi) Monte Carlo estimation rule on each subregion. The process of subregion bisection and subsequent integration of the halves is expected to reduce the global variance, and it is referred to as recursive stratified sampling. It is motivated by a theorem that states that if a region is partitioned into disjoint subregions, the random variable variance over the whole region is greater than or equal to the sum of the random variable variances over each subregion. (See "Stratified Monte Carlo Integration" in the section "Crude Monte Carlo and Quasi Monte Carlo Strategies".)
The global adaptive Monte Carlo strategy
"AdaptiveMonteCarlo" is similar to
"GlobalAdaptive". There are some important differences though.
1. "AdaptiveMonteCarlo" does not use singularity flattening, and does not have detectors for slow convergence and noisy integration.
2. "AdaptiveMonteCarlo" chooses randomly the bisection dimension. To avoid irregular separation of different coordinates a dimension recurs only if other dimensions have been chosen for bisection.
3. "AdaptiveMonteCarlo" can be tuned to bisect the subregions away from the middle. More at
"BisectionDithering".
MinRecursion and MaxRecursion
The options
MinRecursion and
MaxRecursion for the adaptive Monte Carlo methods have the same meaning and functionality as they do for
"GlobalAdaptive". See
"MinRecursion and MaxRecursion".
"Partitioning"
The option
"Partitioning" of
"AdaptiveMonteCarlo" provides initial stratification of the integration. It has the same meaning and functionality as
"Partitioning" of the strategy
"MonteCarlo".
"BisectionDithering"
When the integrand has some special symmetry that puts significant parts of it in the middle of the region, it is better if the bisection is done slightly away from the middle. The option "BisectionDithering"->dith specifies that the splitting point along the chosen splitting dimension should be at fraction 1/2+dith of the side instead of at 1/2. The sign of dith is changed in a random manner. The default value of "BisectionDithering" is 1/10. The allowed values for dith are reals with absolute value smaller than 1/2, so that the splitting point stays inside the region.
The integral is seriously underestimated if no bisection dithering is used, that is, if "BisectionDithering" is given the value 0.
Out[199]=  

The following picture shows why the integral is underestimated. The black points are the integration sampling points. It can be seen that half of the peak of the integrand is undersampled.
Out[207]=  

Specifying bisection dithering of 10% gives a satisfactory estimation.
Out[212]=  

It can be seen on this plot, that the peak of the integrand is sampled better.
Out[216]=  

Choice of Bisection Axis
For multidimensional integrals the adaptive Monte Carlo algorithm chooses the splitting axis of an integration region in two ways: (i) by random selection, or (ii) by minimizing the variance of the integral estimates of each half. The axis selection is the responsibility of the "MonteCarloRule".
Example: Comparison with Crude Monte Carlo
Generally, the "AdaptiveMonteCarlo" strategy performs better than "MonteCarlo". This is demonstrated with the examples in this subsection.
It can be seen from the following profiling that
"AdaptiveMonteCarlo" uses nearly three times fewer sampling points than the crude
"MonteCarlo" strategy.
These are the sampling points and timing for "MonteCarlo".
Out[219]=  

These are the sampling points and timing for "AdaptiveMonteCarlo".
Out[220]=  

Here is the timing for 100 integrations with "MonteCarlo".
Out[222]=  

The "MonteCarlo" integration compares well with the exact result. The numbers below show the error of the mean of the integral estimates, the mean of the relative errors of the integral estimates, and the variance of the integral estimates.
Out[223]=  

Here is the timing for 100 integrations with "AdaptiveMonteCarlo", which is several times faster than "MonteCarlo" integrations.
Out[233]=  

The "AdaptiveMonteCarlo" integration result compares well with the exact result. The numbers below show the error of the mean of the integral estimates, the mean of the relative errors of the integral estimates, and the variance of the integral estimates.
Out[234]=  

"MultiPeriodic"
The strategy "MultiPeriodic" transforms all integrals into integrals over the unit cube and periodizes the integrands to be one-periodic with respect to each integration variable. Different periodizing functions (or none) can be applied to different variables.
"MultiPeriodic" works for integrals with dimension less than or equal to twelve. If
"MultiPeriodic" is given, integrals with higher dimension the
"MonteCarlo" strategy is used.
Here is an example of integration with "MultiPeriodic".
Out[2]=  

  
"Transformation"  SidiSin  periodizing transformation applied to the integrand 
"MinPoints"  0  minimal number of sampling points 
MaxPoints  10^{5}  maximum number of sampling points 
"SymbolicProcessing"  Automatic  number of seconds to be used for symbolic preprocessing 
"MultiPeriodic" can be seen as a multidimensional generalization of the strategy
"Trapezoidal". It can also be seen as a quasi Monte Carlo method.
"MultiPeriodic" uses lattice integration rules; see [
SloanJoe94] [
KrUeb98].
Here an integration lattice in ℝ^d, d∈ℕ, is understood to be a discrete subset of ℝ^d that is closed under addition and subtraction and that contains ℤ^d. A lattice integration rule [SloanJoe94] is a rule of the form

Q(f) = 1/N ∑_{i=1}^{N} f(x_i),

where {x_1, x_2, ..., x_N} are all the points of an integration lattice contained in [0, 1)^d.
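For a concrete example, here is a Python sketch of a classical rank-1 lattice rule on [0, 1)^2, the Fibonacci lattice from the lattice-rule literature [SloanJoe94]. It is illustrative only and is not the set of rules used internally by "MultiPeriodic".

```python
import math

def fibonacci_lattice_rule(f, m):
    """Rank-1 lattice rule on [0, 1)^2 with N = F_m points and generator
    z = (1, F_{m-1}):  Q(f) = (1/N) * sum_i f({i/N}, {i*z/N})."""
    fib = [1, 1]
    while len(fib) <= m:
        fib.append(fib[-1] + fib[-2])
    n, z = fib[m], fib[m - 1]
    return sum(f((i / n) % 1.0, (i * z / n) % 1.0) for i in range(n)) / n
```

Lattice rules are very accurate for smooth one-periodic integrands; for the trigonometric polynomial (1+sin 2πx)(1+sin 2πy), whose integral over the unit square is 1, the rule is essentially exact.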
If "MultiPeriodic" is called on a d-dimensional integral, the option "Transformation" takes a list of one-argument functions {f_1, f_2, ..., f_d} that are used to transform the corresponding variables. If "Transformation" is given a list of length l smaller than d, then the last function, f_l, is used for the last d-l integration variables. If "Transformation" is given a single function, that function is used to transform all the variables.
Let d be the dimension of the integral. If d=1, "MultiPeriodic" calls "Trapezoidal" after applying the periodizing transformation. For dimensions higher than 12, "MonteCarlo" is called without applying periodizing transformations.
"MultiPeriodic" uses the socalled
2^{d} copy rules for
2≤d≤12. For each
2≤d≤12 "MultiPeriodic" has a set of copy rules that are used to compute a sequence of integral estimates. The rules with a smaller number of points are used first. If the error estimate of a rule satisfies the precision goal, or if the difference of two integral estimates in the sequence satisfies the precision goal, the integration stops.
Number of points for the 2^{d} copy rules in the rule sets for different dimensions.
Out[7]=  

Comparison with "MultiDimensionalRule"
Generally, "MultiPeriodic" is slower than "GlobalAdaptive" using "MultiDimensionalRule". For computing high-dimensional integrals with lower precision, however, "MultiPeriodic" might give results faster.
This defines a function of eight arguments. 
Timing in seconds for computing using "MultiPeriodic" and "GlobalAdaptive" with "MultiDimensionalRule".
Out[12]//TableForm= 
 

Number of integrand evaluations for computing using "MultiPeriodic" and "GlobalAdaptive" with "MultiDimensionalRule".
Out[13]//TableForm= 
 

Preprocessors
The capabilities of all strategies are extended through symbolic preprocessing of the integrals. The preprocessors can be seen as strategies that delegate integration to other strategies (preprocessors included).
"SymbolicPiecewiseSubdivision"
"SymbolicPiecewiseSubdivision" is a preprocessor that divides an integral with a piecewise integrand into integrals with disjoint integration regions on each of which the integrand is not piecewise.
  
Method  Automatic  integration strategy or preprocessor to which the integration will be passed 
"ExpandSpecialPiecewise"  True  which piecewise functions should be expanded 
TimeConstraint  5  the maximum number of seconds for which the piecewise subdivision will be attempted 
"MaxPiecewiseCases"  100  the maximum number of subregions the piecewise preprocessor can return 
"SymbolicProcessing"  Automatic  number of seconds to do symbolic preprocessing 
Options of "SymbolicPiecewiseSubdivision".
As was mentioned at the beginning of the tutorial,
NIntegrate is able to integrate simultaneously integrals with disjoint domains each having a different integrand. Hence, after the preprocessing with
"SymbolicPiecewiseSubdivision" the integration continues in the same way as if, say,
NIntegrate were given ranges with singularity specifications (which can be seen as specifying integrals with disjoint domains with the same integrand). For example, the strategy
"GlobalAdaptive" tries to improve the integral estimate of the region with the largest error through bisection, and will choose that largest error region regardless of which integrand it corresponds to.
Below are the sampling points for the numerical estimation of the integral . On the plot, the integrand is sampled at the x coordinates in the order of the ord coordinates. It can be seen that "GlobalAdaptive" alternates sampling for the piece with sampling for the piece .
Out[13]=  

Here are the sampling points for the numerical estimation of the integral. The integrand is plotted on the left; the sampling points are plotted on the right. The integral has been partitioned into four subintegrals, which is why the sampling points form a different pattern for -1≤x≤1.
Out[17]=  

"ExpandSpecialPiecewise"
In some cases it is preferable to do piecewise expansion only over certain piecewise functions. In these cases the option "ExpandSpecialPiecewise" can be given a list of functions over which to do the piecewise expansion.
This Monte Carlo integral is done faster with piecewise expansion only over Boole.
Out[19]=  

Here is a Monte Carlo integration with piecewise expansion over both Boole and Abs.
Out[20]=  

"EvenOddSubdivision"
"EvenOddSubdivision" is a preprocessor that reduces the integration region if the region is symmetric around the origin and the integrand is determined to be even or odd. The convergence of odd integrals is verified by default.
  
Method  Automatic  integration strategy or preprocessor to which the integration will be passed 
VerifyConvergence  Automatic  should the convergence be verified if an odd integral is detected 
"SymbolicProcessing"  Automatic  number of seconds to do symbolic preprocessing 
Options of "EvenOddSubdivision".
When the integrand is an even function and the integration region is symmetric around the origin, the integral can be computed by integrating only on some part of the integration region and multiplying with a corresponding factor.
Here is a plot of an even function and the sampling points without any preprocessing.
Out[24]=  

These are the sampling points used by NIntegrate after "EvenOddSubdivision" has been applied. Note that the sampling points are only in the region [0, ]×[0, ].
Out[25]=  

Transformation Theorem
The preprocessor
"EvenOddSubdivision" is based on the following theorem.
Theorem: Given the d-dimensional integral

∫_{a_1}^{b_1} ∫_{a_2(x_1)}^{b_2(x_1)} ... ∫_{a_d(x_1, ..., x_{d-1})}^{b_d(x_1, ..., x_{d-1})} f(x_1, ..., x_d) dx_d ... dx_1,

assume that for some i∈{1, 2, ..., d} these equalities hold:

a) a_i(x_1, ..., x_{i-1}) = -b_i(x_1, ..., x_{i-1});

b) for all j>i, j∈{1, 2, ..., d}, a_j(x_1, ..., -x_i, ..., x_{j-1}) = a_j(x_1, ..., x_i, ..., x_{j-1}) and b_j(x_1, ..., -x_i, ..., x_{j-1}) = b_j(x_1, ..., x_i, ..., x_{j-1}).

In other words, the range of x_i is symmetric around the origin, and the boundaries of the variables x_j, j>i, are even functions wrt x_i. Then:

a) the integral is equivalent to twice the integral with the range of x_i restricted to [0, b_i] if the integrand is even wrt x_i, that is, f(x_1, ..., -x_i, ..., x_d) = f(x_1, ..., x_i, ..., x_d);

b) the integral is equivalent to 0 if the integrand is odd wrt x_i, that is, f(x_1, ..., -x_i, ..., x_d) = -f(x_1, ..., x_i, ..., x_d).
Note that the theorem above can be applied several times over an integral.
To illustrate the theorem, consider the following integral. It is symmetric along y, and the integrand and the bounds of z are even functions wrt y.
Here is a plot of the sampling points without the application of "EvenOddSubdivision" (black) and with "EvenOddSubdivision" applied (red).
Out[28]=  

If the bounds of z are not even functions wrt y, then the symmetry along y is broken. For example, the following integral has no symmetry NIntegrate can exploit.
Here is a plot of the sampling points with "EvenOddSubdivision" applied (red). The region has no symmetry along y.
Out[30]=  

"VerifyConvergence"
Consider the following divergent integral. NIntegrate detects the integrand as an odd function over a symmetric domain and tries to verify the convergence of the corresponding half-range integral. Since no convergence is reached, as indicated by the ncvb message, NIntegrate gives the message oidiv that the integral might be divergent.
Out[31]=  

If the option VerifyConvergence is set to False, no convergence verification (and hence no integrand evaluation) is done after the integral is found to be odd.
Out[32]=  

"OscillatorySelection"
"OscillatorySelection" is a preprocessor that selects specialized algorithms for efficient evaluation of onedimensional oscillating integrals, the integrands of which are products of a trigonometric or Bessel function and a nonoscillating or a much slower oscillating function.
  
"BesselInfiniteRangeMethod"  "ExtrapolatingOscillatory" 
  specialized integration algorithm for infinite region integrals with Bessel functions 
"FourierFiniteRangeMethod"  Automatic  specialized integration algorithm for Fourier integrals over finite ranges 
"FourierInfiniteRangeMethod"  {"DoubleExponentialOscillatory", Method>"ExtrapolatingOscillatory"} 
  specialized integration algorithm for Fourier integrals over infinite regions 
Method  "GlobalAdaptive"  integration strategy or preprocessor to which the integration will be passed 
"TermwiseOscillatory"  False  if the value of this option is True then the algorithm is selected for each term
in a sum of oscillatory functions 
"SymbolicProcessing"  Automatic  number of seconds to do symbolic processing 
Options of "OscillatorySelection".
"OscillatorySelection" is used by default.
Out[33]=  

Without the "OscillatorySelection" preprocessor NIntegrate does not reach convergence with its default option settings.
Out[34]=  

The preprocessor
"OscillatorySelection" is designed to work with the internal output of the
"SymbolicPiecewiseSubdivision" preprocessor.
"OscillatorySelection" itself partitions oscillatory integrals that include the origin or have oscillatory kernels that need to be expanded or transformed into forms for which the oscillatory algorithms are designed.
Here is a piecewise function integration in which all methods of "OscillatorySelection" are used. For this integral the preprocessor "SymbolicPiecewiseSubdivision" divides the integral into four different integrals; for each of these integrals "OscillatorySelection" selects an appropriate algorithm.
Out[1]=  

The following table shows the names of the
"OscillatorySelection" options used to specify the algorithms for each subinterval in the integral above.
x∈(-∞, 0]  "BesselInfiniteRangeMethod" 
x∈[0, 20]  "FourierFiniteRangeMethod" 
x∈[30, ∞)  "FourierInfiniteRangeMethod" 
x∈[20, 30]  Method 
In this example "DoubleExponentialOscillatory" is called twice. "DoubleExponentialOscillatory" is a special algorithm for Fourier integrals, and the formula e^(2 i x^2) = cos(2x^2) + i sin(2x^2) makes the integrand a sum of two Fourier integrands.
Out[35]//InputForm= 
 

To demonstrate that "OscillatorySelection" has used the formula e^(2 i x^2) = cos(2x^2) + i sin(2x^2), here is the integral above split "by hand." The result is identical to the last result.
Out[36]//InputForm= 
 

The value
Automatic for the option
"FourierFiniteRangeMethod" means that if the integration strategy specified with the option
Method is one of
"GlobalAdaptive" or
"LocalAdaptive" then that strategy will be used for the finite range Fourier integration, otherwise
"GlobalAdaptive" will be used.
Here is a piecewise function integration that uses "DoubleExponential" strategy for the nonoscillatory integral and "LocalAdaptive" for the finite range oscillatory integral.
Out[37]=  

These are the sampling points of the preceding integration and of the same integral computed with default option settings. The pattern over [0, 20] in the left picture is typical for the local adaptive quadrature; the recursive partitioning into three parts can be seen (because of the option "Partitioning"->3 given to "LocalAdaptive"). The pattern over [0, 20] in the right picture comes from "GlobalAdaptive". The pattern over [20, 40] in the first picture is typical for the double-exponential quadrature. The same pattern can be seen in the second picture over [20, 21+1/4], since "GlobalAdaptive" by default uses the "DoubleExponential" singularity handler.
Out[42]=  

If the application of a particular oscillatory method is desired for a particular type of oscillatory integrals, either the corresponding options of
"OscillatorySelection" should be changed, or the
Method option in
NIntegrate should be used without the preprocessor
"OscillatorySelection".
Here is a piecewise function integration that uses "ExtrapolatingOscillatory" for any of the infinite range oscillatory integrals.
Out[10]=  

If "ExtrapolatingOscillatory" is given as the method, "OscillatorySelection" uses it for infinite range oscillatory integration.
Out[1]=  

The integration above is faster with the default options of NIntegrate. For this integral "OscillatorySelection", which is applied by default, uses "DoubleExponentialOscillatory".
Out[2]=  

Working with Sums of Oscillating Terms
In many cases it is useful to apply the oscillatory algorithms to integrands that are sums of oscillating functions. That is, each term of such integrands is a product of an oscillating function and a less oscillating one. (See
"Oscillatory Strategies" for the forms recognized as oscillatory by
NIntegrate.)
Here is an example of integration that applies the specialized oscillatory NIntegrate algorithms to each term of the integrand.
Out[4]=  

By default this option is set to False, and the integral cannot be computed.
Out[5]=  

The option "TermwiseOscillatory" is set by default to False, since splitting the integrals can in some cases lead to divergent results.
Here is a convergent integral. If it is split into two integrals each will be divergent.
Out[6]=  

If "TermwiseOscillatory">True is used the result is some big number (and lots of messages).
Out[8]=  

"UnitCubeRescaling"
"UnitCubeRescaling" is a preprocessor that transforms the integration region into a unit cube or hypercube. The variables of the original integrand are replaced the result is multiplied by the Jacobian of the transformation.
  
"FunctionalRangesOnly"  True  what ranges should be transformed to the unit cube 
Method  "GlobalAdaptive"  integration strategy or preprocessor to which the integration will be passed 
"SymbolicProcessing"  Automatic  number of seconds to do symbolic processing 
Options of "UnitCubeRescaling".
This uses unit cube rescaling and it is faster than the computation that follows.
Out[10]=  

This integration does not use unit cube rescaling. It is done approximately three times slower than the previous one.
Out[11]=  

"UnitCubeRescaling" transforms the integral
into an integral over the hypercube
[0, 1]^{d}. Assuming that
a_{1} and
b_{1} are finite and
a_{i }, b_{i}, i=2, ..., d are piecewise continuous functions the transformation used by
"UnitCubeRescaling" is
The Jacobian of this transformation is
If for the
i^{th} axis one or both of
a_{i} and
b_{i} are infinite, then the formula for
x_{i} in (
47) is a nonaffine transformation that maps
[0, 1] into
.
NIntegrate uses the following transformations:
where
.
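For finite and functional bounds, the per-axis affine map and the accumulated Jacobian can be sketched in Python. This is an illustrative analogue of the FRangesToCube function defined later in this section; the name cube_to_region is ours.

```python
def cube_to_region(t, bounds):
    """Map a point t in [0,1]^d to the region
        a_1 <= x_1 <= b_1,  a_i(x_1,...,x_{i-1}) <= x_i <= b_i(...),
    returning the image point and the Jacobian prod_i (b_i - a_i).
    Each bound may be a constant or a callable of the previous x's."""
    x, jac = [], 1.0
    for t_i, (a, b) in zip(t, bounds):
        a_i = a(*x) if callable(a) else a
        b_i = b(*x) if callable(b) else b
        x.append(a_i + (b_i - a_i) * t_i)   # affine map along this axis
        jac *= b_i - a_i                    # accumulate the Jacobian
    return x, jac
```

For example, for the triangle 0≤x≤1, 0≤y≤x, the cube point (1/2, 1/2) maps to (1/2, 1/4) with Jacobian 1/2.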
Applying "UnitCubeRescaling" makes the integrand more complicated if the integration region boundaries are constants (finite or infinite); since NIntegrate has efficient affine and infinite internal variable transformations, the integration process would become slower. If some of the integration region boundaries are functions, applying "UnitCubeRescaling" can make the integration faster, since the computations that involve the integration variables are done only when the integrand is evaluated. Because of these performance considerations, "UnitCubeRescaling" has the option "FunctionalRangesOnly". If "FunctionalRangesOnly" is set to True, the rescaling is applied only to multidimensional functional ranges.
This integration uses unit cube rescaling.
Out[12]=  

This integration does not use unit cube rescaling. It is done approximately two times faster than the previous one.
Out[13]=  

Example Implementation
The transformation process used by
"UnitCubeRescaling" is the same as the following one implemented by the function
FRangesToCube (also defined in
"Duffy's Coordinates Generalization and Example Implementation").
This function provides the transformation (48) and its Jacobian (49) for a list of integration ranges and a list of rectangular parallelepiped sides or a hypercube side. 
Each step of the transformation (50) can be done with Rescale.
Out[17]=  
Note that for a given axis i, the transformation rules already derived for axes 1, ..., i-1 need to be applied to the original boundaries before the rescaling of the boundaries along the i-th axis.
The transformation rules and the Jacobian for [0, 1]×[0, 1]→ [0, 1]×[a (x), b (x)].
Out[19]=  
Out[20]=  

Application of the transformation to a function.
Out[21]=  

The transformation rules and the Jacobian for [0, 1]×[0, 1]→ [0, ]×[a (x), b (x)].
Out[23]=  
Out[24]=  

The transformation rules and the Jacobian for [0, 1]×[0, 1]→ [a_{0}, b_{0}]×[a_{1} (x), b_{1} (x)].
Out[26]=  
Out[27]=  

"SymbolicPreprocessing"
"SymbolicPreprocessing" is a composite preprocessor made to simplify the switching on and off of the other preprocessors.
  
Method  Automatic  integration strategy or preprocessor to which the integration will be passed 
"SymbolicPiecewiseSubdivision"  True  piecewise subdivision 
"EvenOddSubdivision"  True  evenodd subdivision 
"OscillatorySelection"  True  detection of products with an oscillatory function 
"UnitCubeRescaling"  Automatic  rescaling to the unit hypercube 
"SymbolicProcessing"  Automatic  number of seconds to do symbolic processing 
"SymbolicPreprocessing" options.
When
"UnitCubeRescaling" is set to
Automatic it is applied only to multidimensional functional ranges.
Here is an example of the integration of with different combinations of preprocessor application.
Out[44]=  

Examples and Applications
ClosedContour Integrals
This function calculates the mass of a closed contour given in polar coordinates parametrization. 
Here is the circumference approximation of the ellipse with radii 2 and 3 using the same function.
Out[45]=  

The approximation compares quite well with the exact value.
Out[46]=  

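The calculation can be sketched in Python (an illustrative analogue of the Mathematica function above, not its actual code; the midpoint rule and a central-difference derivative stand in for NIntegrate, and all names are ours).

```python
import math

def contour_mass(r, rho, n=20000):
    """For a closed curve given in polar form by r(theta), the mass with
    line density rho is the line integral
        integral_0^{2 pi} rho(theta) * sqrt(r^2 + r'^2) dtheta,
    computed here with the midpoint rule and a central-difference r'."""
    h = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        th = (i + 0.5) * h
        dr = (r(th + 1e-6) - r(th - 1e-6)) / 2e-6   # numerical r'(theta)
        total += rho(th) * math.sqrt(r(th) ** 2 + dr ** 2) * h
    return total

# polar form of the ellipse with radii 2 and 3
ellipse_r = lambda th: 6.0 / math.sqrt((3 * math.cos(th)) ** 2
                                       + (2 * math.sin(th)) ** 2)
```

With density 1 this gives the circumference; for the ellipse with radii 2 and 3 the result agrees with the known perimeter, about 15.8654.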
Fourier Series Calculation
This is a Mathematica function that calculates a truncated Fourier series approximation of a function. 
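The same construction can be sketched in Python (an illustrative analogue of the Mathematica function, with a simple trapezoidal quadrature standing in for NIntegrate; all names are ours).

```python
import math

def fourier_series(f, L, nterms, nquad=2000):
    """Truncated Fourier series of f on [-L, L]:
        f(x) ~ a0/2 + sum_k (a_k cos(k pi x / L) + b_k sin(k pi x / L)),
    with the coefficients computed by trapezoidal quadrature."""
    h = 2 * L / nquad
    xs = [-L + i * h for i in range(nquad + 1)]

    def integral(g):
        vals = [g(x) for x in xs]
        return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

    a = [integral(lambda x, k=k: f(x) * math.cos(k * math.pi * x / L)) / L
         for k in range(nterms + 1)]
    b = [integral(lambda x, k=k: f(x) * math.sin(k * math.pi * x / L)) / L
         for k in range(1, nterms + 1)]

    def approx(x):
        s = a[0] / 2
        for k in range(1, nterms + 1):
            s += (a[k] * math.cos(k * math.pi * x / L)
                  + b[k - 1] * math.sin(k * math.pi * x / L))
        return s
    return approx
```

For instance, a 30-term approximation of x^2 on [-1, 1] is close to the function at interior points.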
This is a plot of x^3+x^2 and its Fourier series approximation.
Out[85]=  

This calculates a 60-term Fourier approximation over [-4, 4] using NIntegrate. If Integrate is used, the calculation will be very slow.
Out[86]=  

This is a plot of the function and its Fourier series approximation.
Out[87]=  
