A.9.4 Numerical and Related Functions
Number representation and numerical evaluation
Large integers and high-precision approximate numbers are stored as arrays of base 2^16 or 2^32 digits, depending on the lengths of machine integers.
Precision is internally maintained as a floating-point number.
IntegerDigits, RealDigits and related base conversion functions use recursive divide-and-conquer algorithms. Similar algorithms are used for number input and output.
N uses an adaptive procedure to increase its internal working precision in order to achieve whatever overall precision is requested.
Floor, Ceiling and related functions use an adaptive procedure similar to N to generate exact results from exact input.
Multiplication of large integers and high-precision approximate numbers is done using interleaved schoolbook, Karatsuba, three-way Toom-Cook and number-theoretic transform algorithms.
Machine-code optimization for specific architectures is achieved by using GMP.
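The splitting idea behind these schemes can be illustrated with a minimal Python sketch of Karatsuba multiplication; the 2^32 cutoff and the fallback to the builtin multiply are simplifications chosen here, not the thresholds an actual implementation would use:

```python
def karatsuba(x, y):
    """Karatsuba multiplication of nonnegative integers: one split costs
    three half-size multiplies instead of four."""
    if x < 2 ** 32 or y < 2 ** 32:          # small case: fall back to builtin multiply
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)     # x = xh*2^n + xl
    yh, yl = y >> n, y & ((1 << n) - 1)     # y = yh*2^n + yl
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    c = karatsuba(xh + xl, yh + yl) - a - b # cross terms from one extra multiply
    return (a << (2 * n)) + (c << n) + b
```

The same recursion generalizes to the three-way Toom-Cook split, and transform methods take over when operands grow larger still.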
Integer powers are found by a left-right binary decomposition algorithm.
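A left-right binary decomposition scans the exponent's bits from the most significant end, squaring at every bit and multiplying when the bit is set. A minimal sketch (for exponents of at least 1):

```python
def power(b, e):
    """Left-to-right binary (square-and-multiply) exponentiation, e >= 1."""
    result = b
    for bit in bin(e)[3:]:        # bits of e below the leading 1
        result *= result          # square for every bit
        if bit == '1':
            result *= b           # extra multiply when the bit is set
    return result
```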
Reciprocals and rational powers of approximate numbers use Newton's method.
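For the reciprocal, Newton's method applied to f(x) = 1/x - a gives the division-free iteration x <- x*(2 - a*x), whose error squares at every step. A sketch in machine floats (the frexp-based initial guess and the fixed step count are choices made here for illustration):

```python
import math

def reciprocal(a, steps=6):
    """Approximate 1/a for a > 0 by the division-free Newton iteration
    x <- x*(2 - a*x), which converges quadratically."""
    m, e = math.frexp(a)          # a = m * 2**e with 0.5 <= m < 1
    x = math.ldexp(1.0, -e)       # start within a factor of 2 of 1/a
    for _ in range(steps):
        x = x * (2.0 - a * x)
    return x
```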
Exact roots start from numerical estimates.
Significance arithmetic is used for all arithmetic with approximate numbers beyond machine precision.
Random uses the Wolfram rule 30 cellular automaton generator for integers.
It uses a Marsaglia-Zaman subtract-with-borrow generator for real numbers.
GCD interleaves the HGCD algorithm, the Jebelean-Sorenson-Weber accelerated GCD algorithm, and a combination of Euclid's algorithm and an algorithm based on iterative removal of powers of 2.
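The "iterative removal of powers of 2" component corresponds to the binary (Stein) GCD, which replaces divisions by shifts and subtractions. A minimal sketch:

```python
def binary_gcd(a, b):
    """GCD of nonnegative integers by iterative removal of powers of 2."""
    if a == 0:
        return b
    if b == 0:
        return a
    shift = ((a | b) & -(a | b)).bit_length() - 1   # common factors of 2
    a >>= (a & -a).bit_length() - 1                 # make a odd
    while b:
        b >>= (b & -b).bit_length() - 1             # make b odd
        if a > b:
            a, b = b, a
        b -= a                                      # even difference of odds
    return a << shift
```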
PrimeQ first tests for divisibility using small primes, then uses the Miller-Rabin strong pseudoprime test base 2 and base 3, and then uses a Lucas test.
As of 1997, this procedure is known to be correct only for n < 10^16, and it is conceivable that for larger n it could claim a composite number to be prime.
The package NumberTheory`PrimeQ` contains a much slower algorithm which has been proved correct for all n. It can return an explicit certificate of primality.
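The strong pseudoprime stage can be sketched as follows (the Lucas stage is omitted; the default bases 2 and 3 match the description above). The composite 1373653 = 829 x 1657, the smallest strong pseudoprime to bases 2 and 3 simultaneously, illustrates why these bases alone are not conclusive:

```python
def miller_rabin(n, bases=(2, 3)):
    """Strong pseudoprime test: False is a proof of compositeness, True
    only means 'probable prime' with respect to the bases tried."""
    if n < 2:
        return False
    for p in (2, 3):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:             # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a witnesses that n is composite
    return True
```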
FactorInteger switches between trial division, Pollard p-1, Pollard rho and quadratic sieve algorithms.
The package NumberTheory`FactorIntegerECM` contains an elliptic curve algorithm suitable for factoring some very large integers.
Prime and PrimePi use sparse caching and sieving. For large n, the Lagarias-Miller-Odlyzko algorithm for PrimePi is used, based on asymptotic estimates of the density of primes, and is inverted to give Prime.
LatticeReduce uses the Lenstra-Lenstra-Lovasz lattice reduction algorithm.
To find a requested number of terms ContinuedFraction uses a modification of Lehmer's indirect method, with a self-restarting divide-and-conquer algorithm to reduce the numerical precision required at each step.
ContinuedFraction uses recurrence relations to find periodic continued fractions for quadratic irrationals.
FromContinuedFraction uses iterated matrix multiplication optimized by a divide-and-conquer method.
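The matrix formulation behind this can be sketched in Python: each term a contributes a factor [[a, 1], [1, 0]], and the first column of the running product holds the convergent p/q. The sketch below multiplies left to right; the divide-and-conquer optimization groups these products into balanced halves.

```python
from fractions import Fraction

def from_cf(terms):
    """Evaluate [a0; a1, a2, ...] as a product of 2x2 matrices [[a,1],[1,0]];
    (p, q) is the first column and (pp, qq) the second."""
    p, q, pp, qq = 1, 0, 0, 1     # start from the identity matrix
    for a in terms:
        p, pp = a * p + pp, p
        q, qq = a * q + qq, q
    return Fraction(p, q)
```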
Most combinatorial functions use sparse caching and recursion.
Factorial, Binomial and related functions use a divide-and-conquer algorithm to balance the number of digits in subproducts.
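The balancing idea is to split the product range recursively so that the two subproducts have comparable digit counts, which is what lets fast multiplication pay off. A sketch for the factorial (the base-case cutoff of 4 is arbitrary):

```python
def balanced_factorial(n):
    """n! by recursively splitting the product range into balanced halves."""
    def prod(lo, hi):
        if hi - lo <= 4:                  # small range: multiply directly
            r = 1
            for k in range(lo, hi + 1):
                r *= k
            return r
        mid = (lo + hi) // 2
        return prod(lo, mid) * prod(mid + 1, hi)
    return prod(1, n) if n > 1 else 1
```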
Fibonacci[n] uses an iterative method based on the binary digit sequence of n.
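One such binary-digit method is fast doubling, which maintains the pair (F(k), F(k+1)) and uses the identities F(2k) = F(k)(2F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2 while scanning the bits of n. A sketch:

```python
def fibonacci(n):
    """Fibonacci by fast doubling over the binary digits of n."""
    a, b = 0, 1                   # invariant: (a, b) = (F(k), F(k+1))
    for bit in bin(n)[2:]:
        a, b = a * (2 * b - a), a * a + b * b   # k -> 2k
        if bit == '1':
            a, b = b, a + b                     # 2k -> 2k + 1
    return a
```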
PartitionsP[n] uses Euler's pentagonal formula for small n, and the non-recursive Hardy-Ramanujan-Rademacher method for larger n.
ClebschGordan and related functions use generalized hypergeometric series.
Elementary transcendental functions
Exponential and trigonometric functions use Taylor series, stable recursion by argument doubling, and functional relations.
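The combination of a short series with argument doubling can be sketched for the exponential: reduce the argument to x/2^k, sum a few Taylor terms there, then square k times using exp(2y) = exp(y)^2. The reduction target of 1/8 and the term count are choices made here for illustration:

```python
import math

def exp_by_doubling(x, terms=10):
    """exp(x): short Taylor series on the reduced argument x/2^k, followed
    by k squarings via the doubling identity exp(2y) = exp(y)**2."""
    k = int(abs(x)).bit_length() + 3          # make |x|/2^k below about 1/8
    y = x / 2 ** k
    s = sum(y ** n / math.factorial(n) for n in range(terms))
    for _ in range(k):
        s *= s
    return s
```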
Log and inverse trigonometric functions use Taylor series and functional relations.
Values of constants are cached once computed.
Binary splitting is used to subdivide computations of constants.
Pi is computed using the Chudnovsky formula.
E is computed from its series expansion.
EulerGamma uses the Brent-McMillan algorithm.
Catalan is computed from a linearly convergent Ramanujan sum.
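Binary splitting can be illustrated on the series for e. The recursion below returns a pair (P, Q) with Q = (a+1)(a+2)...b and P/Q = sum of a!/k! for k from a+1 to b, so the whole partial sum is assembled from balanced halves with a single fraction formed at the end; this formulation is a textbook sketch, not the specific splitting Mathematica applies:

```python
import math
from fractions import Fraction

def e_by_binary_splitting(n):
    """Approximate e = 1 + sum_{k=1}^{n} 1/k! by binary splitting."""
    def split(a, b):
        if b - a == 1:
            return 1, b                       # contributes 1/b relative to a!
        m = (a + b) // 2
        p1, q1 = split(a, m)
        p2, q2 = split(m, b)
        return p1 * q2 + p2, q1 * q2          # combine the two half-range sums
    p, q = split(0, n)
    return 1 + Fraction(p, q)
```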
For machine precision most special functions use Mathematica-derived rational minimax approximations. The notes that follow apply mainly to arbitrary precision.
Orthogonal polynomials use stable recursion formulas for polynomial cases and hypergeometric functions in general.
Gamma uses recursion, functional equations and the Binet asymptotic formula.
Incomplete gamma and beta functions use hypergeometric series and continued fractions.
PolyGamma uses Euler-Maclaurin summation, functional equations and recursion.
PolyLog uses Euler-Maclaurin summation, expansions in terms of incomplete gamma functions and numerical quadrature.
Zeta and related functions use Euler-Maclaurin summation and functional equations. Near the critical strip they also use the Riemann-Siegel formula.
StieltjesGamma uses Keiper's algorithm based on numerical quadrature of an integral representation of the zeta function.
The error function and functions related to exponential integrals are all evaluated using incomplete gamma functions.
The inverse error functions use binomial search and a high-order generalized Newton's method.
Bessel functions use series and asymptotic expansions. For integer orders, some also use stable forward recursion.
The hypergeometric functions use functional equations, stable recurrence relations, series expansions and asymptotic series. Methods from NSum and NIntegrate are also sometimes used.
ProductLog uses high-order Newton's method starting from rational approximations and asymptotic expansions.
Elliptic integrals are evaluated using the descending Gauss transformation.
Elliptic theta functions use series summation with recursive evaluation of series terms.
Other elliptic functions mostly use arithmetic-geometric mean methods.
Mathieu functions use Fourier series. The Mathieu characteristic functions use generalizations of Blanch's Newton method.
With Method->Automatic, NIntegrate uses GaussKronrod in one dimension, and MultiDimensional otherwise.
If an explicit setting for MaxPoints is given, NIntegrate by default uses Method->QuasiMonteCarlo.
GaussKronrod: adaptive Gaussian quadrature with error estimation based on evaluation at Kronrod points.
DoubleExponential: non-adaptive double-exponential quadrature.
Trapezoidal: elementary trapezoidal method.
Oscillatory: transformation to handle certain integrals containing trigonometric and Bessel functions.
MultiDimensional: adaptive Genz-Malik algorithm.
MonteCarlo: non-adaptive Monte Carlo.
QuasiMonteCarlo: non-adaptive Halton-Hammersley-Wozniakowski algorithm.
Numerical sums and products
If the ratio test does not give 1, the Wynn epsilon algorithm is applied to a sequence of partial sums or products.
Otherwise Euler-Maclaurin summation is used with Integrate or NIntegrate.
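The Wynn epsilon algorithm builds a table of columns from the partial sums; the even-numbered columns give increasingly accelerated estimates of the limit. A minimal sketch in machine floats (a robust implementation would also guard against vanishing differences):

```python
def wynn_epsilon(partial_sums):
    """Wynn's epsilon algorithm applied to a sequence of partial sums;
    returns the deepest even-column extrapolated value."""
    n = len(partial_sums)
    prev = [0.0] * (n + 1)            # the epsilon_{-1} column is all zeros
    curr = list(partial_sums)         # the epsilon_0 column
    for _ in range(n - 1):
        nxt = [prev[i + 1] + 1.0 / (curr[i + 1] - curr[i])
               for i in range(len(curr) - 1)]
        prev, curr = curr, nxt
    return curr[0] if (n - 1) % 2 == 0 else prev[0]
```

Applied to the slowly convergent alternating harmonic series, eleven partial sums already recover log 2 to several digits.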
Numerical differential equations
For ordinary differential equations, NDSolve by default uses an LSODA approach, switching between a non-stiff Adams method and a stiff Gear backward differentiation formula method.
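As a toy illustration of the non-stiff multistep side of such a method (not LSODA itself), here is the two-step Adams-Bashforth scheme, bootstrapped with a single Euler step; a production solver would add variable step sizes, error control, and order selection:

```python
def adams_bashforth2(f, t0, y0, h, steps):
    """Two-step Adams-Bashforth for y' = f(t, y):
    y_{n+1} = y_n + h*(3*f_n - f_{n-1})/2."""
    t = t0
    f_prev = f(t, y0)
    ys = [y0, y0 + h * f_prev]        # one Euler step supplies the second value
    t += h
    for _ in range(steps - 1):
        f_curr = f(t, ys[-1])
        ys.append(ys[-1] + h * (3 * f_curr - f_prev) / 2)
        t += h
        f_prev = f_curr
    return ys
```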
For linear boundary value problems the Gel'fand-Lokutsiyevskii chasing method is used.
Differential-algebraic equations use IDA, based on repeated BDF and Newton iteration methods.
For 1+1-dimensional PDEs the method of lines is used.
NDSolve supports explicit Method settings that cover most known methods from the literature.
The code for NDSolve and related functions is about 1400 pages long.
Approximate equation solving and optimization
Polynomial root finding is done based on the Jenkins-Traub algorithm.
For sparse linear systems, Solve and NSolve use several efficient numerical methods, mostly based on Gauss factoring with Markowitz products (approximately 250 pages of code).
For systems of algebraic equations, NSolve computes a numerical Gröbner basis using an efficient monomial ordering, then uses eigensystem methods to extract numerical roots.
FindRoot uses a damped Newton's method, the secant method and Brent's method.
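Of these, the secant method is the simplest to sketch: it is Newton's method with the derivative replaced by a finite difference through the two most recent iterates. The tolerance and iteration cap below are arbitrary choices for illustration:

```python
import math

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant iteration for a root of f, starting from two initial points."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:                  # flat secant line: stop
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

# example: solve cos(x) = x starting from x = 0 and x = 1
```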
With Method->Automatic and two starting values, FindMinimum uses Brent's principal axis method. With one starting value for each variable, FindMinimum uses BFGS quasi-Newton methods, with a limited memory variant for large systems.
If the function to be minimized is a sum of squares, FindMinimum uses the Levenberg-Marquardt method (Method->LevenbergMarquardt).
LinearProgramming uses simplex and revised simplex methods, and with Method->"InteriorPoint" uses primal-dual interior point methods.
For linear cases, NMinimize and NMaximize use the same methods as LinearProgramming. For nonlinear cases, they use Nelder-Mead methods, supplemented by differential evolution, especially when integer variables are present.
Fourier uses the FFT algorithm with decomposition of the length into prime factors. When the prime factors are large, fast convolution methods are used to maintain asymptotic complexity.
For real input, Fourier uses a real transform method.
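The core decomposition can be sketched for the power-of-two case with the radix-2 Cooley-Tukey recursion; general lengths factor into primes, as described above, and this sketch follows the engineering sign convention rather than Mathematica's default FourierParameters:

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])               # transform of even-indexed samples
    odd = fft(x[1::2])                # transform of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```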
ListConvolve and ListCorrelate use FFT algorithms when possible. For exact integer inputs, enough digits are computed to deduce exact integer results.
InterpolatingFunction uses divided differences to construct Lagrange or Hermite interpolating polynomials.
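The divided-difference construction can be sketched as follows: the table is built in place, and the resulting Newton-form coefficients are evaluated Horner-style.

```python
def newton_interpolation(xs, ys):
    """Return an evaluator for the Newton-form interpolating polynomial
    built from a divided-difference table."""
    n = len(xs)
    coef = list(ys)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):         # update table in place
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    def p(x):
        r = coef[-1]
        for i in range(n - 2, -1, -1):            # nested (Horner) evaluation
            r = r * (x - xs[i]) + coef[i]
        return r
    return p
```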
Fit works using singular value decomposition. FindFit uses the same method for the linear least-squares case, the Levenberg-Marquardt method for nonlinear least-squares, and general FindMinimum methods for other norms.
CellularAutomaton uses bit-packed parallel operations with bit slicing. For elementary rules, absolutely optimal Boolean functions are used, while for totalistic rules, just-in-time-compiled bit-packed tables are used. In two dimensions, sparse bit-packed arrays are used when possible, with only active clusters updated.
Approximate numerical linear algebra
Machine-precision matrices are typically converted to a special internal representation for processing.
SparseArray with rules involving patterns uses cylindrical algebraic decomposition to find connected array components. Sparse arrays are stored internally using compressed sparse row formats, generalized for tensors of arbitrary rank.
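The compressed sparse row layout stores one row-pointer array delimiting each row's slice of two parallel arrays of column indices and values. A minimal sketch of the matrix (rank-2) case:

```python
def to_csr(dense):
    """Compress a dense matrix into CSR arrays (values, col_index, row_ptr):
    row i's nonzeros occupy positions row_ptr[i]:row_ptr[i+1]."""
    values, col_index, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_index.append(j)
        row_ptr.append(len(values))   # close off this row's slice
    return values, col_index, row_ptr
```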
For dense arrays, LAPACK algorithms extended for arbitrary precision are used when appropriate.
BLAS technology is used to optimize for particular machine architectures.
LUDecomposition, Inverse, RowReduce and Det use Gaussian elimination with partial pivoting. LinearSolve uses the same methods, together with iterative improvement for high-precision numbers.
For sparse arrays, LinearSolve uses UMFPACK multifrontal direct solver methods and with Method->"Krylov" uses Krylov iterative methods preconditioned by an incomplete LU factorization.
Eigenvalues and Eigenvectors use ARPACK Arnoldi methods.
SingularValueDecomposition uses the QR algorithm with Givens rotations. PseudoInverse, NullSpace and MatrixRank are based on SingularValueDecomposition.
QRDecomposition uses Householder transformations.
SchurDecomposition uses QR iteration.
MatrixExp uses Schur decomposition.
Exact numerical linear algebra
Inverse and LinearSolve use efficient row reduction based on numerical approximation.
With Modulus->n, modular Gaussian elimination is used.
Det uses modular methods and row reduction, constructing a result using the Chinese Remainder Theorem.
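A sketch of the two ingredients: the determinant is computed by elimination over GF(p) for several primes, and the residues are then combined by the Chinese Remainder Theorem (shown here for one pair of coprime moduli; a full implementation would iterate this and map large residues back to signed values):

```python
def det_mod(m, p):
    """Determinant of an integer matrix mod a prime p, by Gaussian
    elimination over GF(p)."""
    a = [[x % p for x in row] for row in m]
    n, det = len(a), 1
    for i in range(n):
        pivot = next((r for r in range(i, n) if a[r][i]), None)
        if pivot is None:
            return 0
        if pivot != i:
            a[i], a[pivot] = a[pivot], a[i]
            det = -det                      # row swap flips the sign
        det = det * a[i][i] % p
        inv = pow(a[i][i], -1, p)           # modular inverse (Python 3.8+)
        for r in range(i + 1, n):
            f = a[r][i] * inv % p
            for c in range(i, n):
                a[r][c] = (a[r][c] - f * a[i][c]) % p
    return det % p

def crt_pair(r1, m1, r2, m2):
    """Combine x = r1 (mod m1) and x = r2 (mod m2), m1 and m2 coprime."""
    t = (r2 - r1) * pow(m1, -1, m2) % m2
    return r1 + m1 * t, m1 * m2
```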
Eigenvalues works by interpolating the characteristic polynomial.
MatrixExp uses Putzer's method or Jordan decomposition.