A.9.4 Numerical and Related Functions
Number representation and numerical evaluation
Large integers and high-precision approximate numbers are stored as arrays of digits in a base determined by the length of machine integers.
Precision is internally maintained as a floating-point number.
IntegerDigits, RealDigits and related base conversion functions use recursive divide-and-conquer algorithms. Similar algorithms are used for number input and output.
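The divide-and-conquer idea behind such base conversion can be sketched as follows (an illustrative Python sketch, not the actual implementation; the function name `digits` and the particular splitting strategy are assumptions):

```python
def digits(n, base=10):
    """Digits of a nonnegative integer n in the given base, most
    significant first, via recursive divide-and-conquer: split n
    at a power of the base near the middle and recurse on each half."""
    if n < base:
        return [n]
    # Find a power base**h that splits n roughly in half.
    h = 1
    while base ** (2 * h) <= n:
        h *= 2
    hi, lo = divmod(n, base ** h)
    lo_digits = digits(lo, base)
    # Pad the low half to exactly h digits before concatenating.
    return digits(hi, base) + [0] * (h - len(lo_digits)) + lo_digits
```

Splitting the number in half at each level means the expensive divisions act on operands of balanced size, which is what makes the recursive scheme faster than peeling off one digit at a time.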
N uses an adaptive procedure to increase its internal working precision in order to achieve whatever overall precision is requested.
Floor, Ceiling and related functions use an adaptive procedure similar to N to generate exact results from exact input.
Basic arithmetic
Multiplication of large integers and high-precision approximate numbers is done using interleaved schoolbook, Karatsuba, three-way Toom-Cook and number-theoretic transform algorithms.
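The Karatsuba step in such an interleaved scheme can be sketched in Python (an illustrative sketch only; the cutoff value and function name are assumptions, and a real implementation works on digit arrays rather than Python integers):

```python
def karatsuba(x, y):
    """Karatsuba multiplication of nonnegative integers: split each
    operand in half and trade one multiplication for extra additions,
    doing 3 recursive products instead of 4."""
    if x < 2 ** 32 or y < 2 ** 32:       # small operands: fall back to machine multiply
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    x1, x0 = x >> m, x & ((1 << m) - 1)  # x = x1*2^m + x0
    y1, y0 = y >> m, y & ((1 << m) - 1)  # y = y1*2^m + y0
    z2 = karatsuba(x1, y1)
    z0 = karatsuba(x0, y0)
    z1 = karatsuba(x1 + x0, y1 + y0) - z2 - z0
    return (z2 << (2 * m)) + (z1 << m) + z0
```

Interleaving means each algorithm is used in the size range where it wins: schoolbook for small operands, Karatsuba and Toom-Cook in the middle, and transform-based methods for very large ones.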
Machine-code optimization for specific architectures is achieved by using GMP.
Integer powers are found by a left-right binary decomposition algorithm.
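Left-right binary decomposition scans the exponent's bits from the most significant end, squaring at each step and multiplying in the base when a bit is set. A minimal Python sketch (function name assumed):

```python
def power(b, e):
    """Left-to-right binary powering: for each binary digit of the
    exponent, square the accumulator; when the digit is 1, also
    multiply by the base."""
    result = 1
    for bit in bin(e)[2:]:   # e's binary digits, most significant first
        result *= result
        if bit == '1':
            result *= b
    return result
```

This uses O(log e) multiplications instead of the e - 1 needed by repeated multiplication.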
Reciprocals and rational powers of approximate numbers use Newton's method.
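For the reciprocal, Newton's method applied to f(x) = 1/x - a gives the division-free iteration x <- x(2 - ax), which roughly doubles the number of correct digits per step. An illustrative Python sketch (the starting estimate and function name are assumptions; the real implementation works at increasing internal precision):

```python
def reciprocal(a, steps=6):
    """Approximate 1/a by Newton's iteration x <- x*(2 - a*x).
    Each step roughly doubles the number of correct digits."""
    x = 1.0 / int(a + 1)          # crude initial estimate (assumed choice)
    for _ in range(steps):
        x = x * (2.0 - a * x)     # no division in the iteration itself
    return x
```

The quadratic convergence is what makes it practical to compute reciprocals and rational powers to very high precision: only O(log p) iterations are needed for p digits.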
Exact roots start from numerical estimates.
Significance arithmetic is used for all arithmetic with approximate numbers beyond machine precision.
Pseudorandom numbers
Random uses the Wolfram rule 30 cellular automaton generator for integers.
It uses a Marsaglia-Zaman subtract-with-borrow generator for real numbers.
Number-theoretical functions
GCD interleaves the HGCD algorithm, the Jebelean-Sorenson-Weber accelerated GCD algorithm, and a combination of Euclid's algorithm and an algorithm based on iterative removal of powers of 2.
PrimeQ first tests for divisibility using small primes, then uses the Miller-Rabin strong pseudoprime test base 2 and base 3, and then uses a Lucas test.
As of 1997, this procedure was known to be correct only for numbers below a certain bound, and it is conceivable that for larger numbers it could claim a composite number to be prime.
The package NumberTheory`PrimeQ` contains a much slower algorithm that has been proved correct for all inputs. It can return an explicit certificate of primality.
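The layered structure of the PrimeQ test can be sketched in Python (an illustrative sketch, not the actual implementation; the function names and small-prime list are assumptions, and the final Lucas test is omitted):

```python
def strong_pseudoprime(n, base):
    """Miller-Rabin strong pseudoprime test to one base.
    Returns False only if n is certainly composite."""
    d, s = n - 1, 0
    while d % 2 == 0:            # write n-1 = d * 2^s with d odd
        d //= 2
        s += 1
    x = pow(base, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):       # repeated squaring, looking for -1
        x = x * x % n
        if x == n - 1:
            return True
    return False

def probably_prime(n):
    """Trial division by small primes, then strong tests base 2 and 3.
    (A Lucas test would follow in the full procedure.)"""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    return strong_pseudoprime(n, 2) and strong_pseudoprime(n, 3)
```

The cheap trial-division stage filters out most composites before the modular exponentiations are ever attempted.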
FactorInteger switches between trial division, Pollard p-1, Pollard rho and quadratic sieve algorithms.
The package NumberTheory`FactorIntegerECM` contains an elliptic curve algorithm suitable for factoring some very large integers.
Prime and PrimePi use sparse caching and sieving. For large n, the Lagarias-Miller-Odlyzko algorithm for PrimePi is used, based on asymptotic estimates of the density of primes, and is inverted to give Prime.
LatticeReduce uses the Lenstra-Lenstra-Lovász lattice reduction algorithm.
To find a requested number of terms, ContinuedFraction uses a modification of Lehmer's indirect method, with a self-restarting divide-and-conquer algorithm to reduce the numerical precision required at each step.
ContinuedFraction uses recurrence relations to find periodic continued fractions for quadratic irrationals.
FromContinuedFraction uses iterated matrix multiplication optimized by a divide-and-conquer method.
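The matrix formulation can be sketched as follows (illustrative Python, using the standard correspondence between a continued fraction term a and the 2x2 matrix [[a,1],[1,0]]; the function name `from_cf` is an assumption):

```python
from fractions import Fraction

def from_cf(terms):
    """Value of a finite continued fraction [a0; a1, a2, ...] as the
    ratio of the first-column entries of the product of the matrices
    [[a,1],[1,0]], multiplied by divide-and-conquer."""
    def matprod(lo, hi):
        if hi - lo == 1:
            a = terms[lo]
            return (a, 1, 1, 0)                  # matrix [[a,1],[1,0]]
        mid = (lo + hi) // 2
        (a, b, c, d) = matprod(lo, mid)          # left half
        (e, f, g, h) = matprod(mid, hi)          # right half
        return (a * e + b * g, a * f + b * h,
                c * e + d * g, c * f + d * h)
    p, _, q, _ = matprod(0, len(terms))
    return Fraction(p, q)
```

Balancing the product tree keeps the integer operands in each multiplication of comparable size, so fast integer multiplication applies at every level.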
Combinatorial functions
Most combinatorial functions use sparse caching and recursion.
Factorial, Binomial and related functions use a divide-and-conquer algorithm to balance the number of digits in subproducts.
Fibonacci[n] uses an iterative method based on the binary digit sequence of n.
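One way to iterate over the binary digits of n is the fast-doubling scheme, shown here as an illustrative Python sketch (the identities used, F(2k) = F(k)(2F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2, are standard; whether this is the exact method used is an assumption):

```python
def fibonacci(n):
    """Fibonacci number F(n) by processing the binary digits of n,
    maintaining the pair (F(k), F(k+1)) and doubling k at each bit."""
    a, b = 0, 1                      # (F(0), F(1))
    for bit in bin(n)[2:]:           # most significant bit first
        # doubling step: k -> 2k
        a, b = a * (2 * b - a), a * a + b * b
        if bit == '1':               # shift step: 2k -> 2k+1
            a, b = b, a + b
    return a
```

Only O(log n) big-integer multiplications are needed, compared with n additions for the naive recurrence.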
PartitionsP[n] uses Euler's pentagonal formula for small n, and the non-recursive Hardy-Ramanujan-Rademacher method for larger n.
ClebschGordan and related functions use generalized hypergeometric series.
Elementary transcendental functions
Exponential and trigonometric functions use Taylor series, stable recursion by argument doubling, and functional relations.
Log and inverse trigonometric functions use Taylor series and functional relations.
Mathematical constants
Values of constants are cached once computed.
Binary splitting is used to subdivide computations of constants.
Pi is computed using the Chudnovsky formula.
E is computed from its series expansion.
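Combining the two points above, the series e = sum of 1/k! can be summed by binary splitting. In this illustrative Python sketch (function names and the exact pair representation are assumptions), each subrange [a, b) is represented by an integer pair (P, Q) with sum_{k=a+1..b} 1/((a+1)(a+2)...k) = P/Q, and adjacent ranges are merged with big-integer multiplications:

```python
from fractions import Fraction

def e_approx(n):
    """e as the sum of 1/k! for k < n (n >= 2), combined by binary
    splitting so that all divisions are deferred to the very end."""
    def split(a, b):
        if b - a == 1:
            return 1, a + 1                # single term 1/(a+1)
        m = (a + b) // 2
        p1, q1 = split(a, m)
        p2, q2 = split(m, b)
        # P/Q = P1/Q1 + (1/Q1)*(P2/Q2) = (P1*Q2 + P2)/(Q1*Q2)
        return p1 * q2 + p2, q1 * q2
    p, q = split(0, n - 1)
    return 1 + Fraction(p, q)              # the k=0 term contributes 1
```

Deferring the division keeps all intermediate work in exact integer arithmetic of balanced size, which is the point of binary splitting for constants.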
EulerGamma uses the Brent-McMillan algorithm.
Catalan is computed from a linearly convergent Ramanujan sum.
Special functions
For machine precision, most special functions use Mathematica-derived rational minimax approximations. The notes that follow apply mainly to arbitrary precision.
Orthogonal polynomials use stable recursion formulas for polynomial cases and hypergeometric functions in general.
Gamma uses recursion, functional equations and the Binet asymptotic formula.
Incomplete gamma and beta functions use hypergeometric series and continued fractions.
PolyGamma uses Euler-Maclaurin summation, functional equations and recursion.
PolyLog uses Euler-Maclaurin summation, expansions in terms of incomplete gamma functions and numerical quadrature.
Zeta and related functions use Euler-Maclaurin summation and functional equations. Near the critical strip they also use the Riemann-Siegel formula.
StieltjesGamma uses Keiper's algorithm based on numerical quadrature of an integral representation of the zeta function.
The error function and functions related to exponential integrals are all evaluated using incomplete gamma functions.
The inverse error functions use binomial search and a high-order generalized Newton's method.
Bessel functions use series and asymptotic expansions. For integer orders, some also use stable forward recursion.
The hypergeometric functions use functional equations, stable recurrence relations, series expansions and asymptotic series. Methods from NSum and NIntegrate are also sometimes used.
ProductLog uses high-order Newton's method starting from rational approximations and asymptotic expansions.
Elliptic integrals are evaluated using the descending Gauss transformation.
Elliptic theta functions use series summation with recursive evaluation of series terms.
Other elliptic functions mostly use arithmetic-geometric mean methods.
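The arithmetic-geometric mean (AGM) converges quadratically, which makes it attractive for high-precision work. As one classical example (an illustrative Python sketch, not the actual implementation), the complete elliptic integral K(k) satisfies K(k) = pi / (2 * AGM(1, sqrt(1 - k^2))):

```python
import math

def elliptic_K(k):
    """Complete elliptic integral of the first kind, K(k), computed
    from the arithmetic-geometric mean of 1 and sqrt(1 - k^2)."""
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:
        # one AGM step: arithmetic and geometric means of the pair
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return math.pi / (2.0 * a)
```

Each AGM step roughly doubles the number of agreeing digits of a and b, so only a handful of iterations are needed even at high precision.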
Mathieu functions use Fourier series. The Mathieu characteristic functions use generalizations of Blanch's Newton method.
Numerical integration
With Method->Automatic, NIntegrate uses GaussKronrod in one dimension, and MultiDimensional otherwise.
If an explicit setting for MaxPoints is given, NIntegrate by default uses Method->QuasiMonteCarlo.
GaussKronrod: adaptive Gaussian quadrature with error estimation based on evaluation at Kronrod points.
DoubleExponential: non-adaptive double-exponential quadrature.
Trapezoidal: elementary trapezoidal method.
Oscillatory: transformation to handle certain integrals containing trigonometric and Bessel functions.
MultiDimensional: adaptive Genz-Malik algorithm.
MonteCarlo: non-adaptive Monte Carlo.
QuasiMonteCarlo: non-adaptive Halton-Hammersley-Wozniakowski algorithm.
Numerical sums and products
If the ratio test does not give 1, the Wynn epsilon algorithm is applied to a sequence of partial sums or products.
Otherwise EulerMaclaurin summation is used with Integrate or NIntegrate.
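Wynn's epsilon algorithm extrapolates the limit of a slowly convergent sequence of partial sums. The following Python sketch builds the epsilon table column by column (illustrative only; the function name and the choice of which table entry to return are assumptions):

```python
def wynn_epsilon(partial_sums):
    """Wynn's epsilon algorithm: build the epsilon table from a
    sequence of partial sums using
    eps_{k+1}(i) = eps_{k-1}(i+1) + 1/(eps_k(i+1) - eps_k(i)),
    and return the last entry of the highest even-order column
    (odd columns are only auxiliary quantities)."""
    e0 = [0.0] * (len(partial_sums) + 1)   # the eps_{-1} column (all zeros)
    e1 = list(partial_sums)                # the eps_0 column
    best = e1[-1]
    while len(e1) > 1:
        e2 = [e0[i + 1] + 1.0 / (e1[i + 1] - e1[i])
              for i in range(len(e1) - 1)]
        e0, e1 = e1, e2
        if len(e1) % 2 == len(partial_sums) % 2:   # even-order column
            best = e1[-1]
    return best
```

On the alternating series for log 2, eight partial sums accurate to only one digit yield an accelerated value accurate to several more.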
Numerical differential equations
For ordinary differential equations, NDSolve by default uses an LSODA approach, switching between a non-stiff Adams method and a stiff Gear backward differentiation formula method.
For linear boundary value problems the Gel'fand-Lokutsiyevskii chasing method is used.
Differentialalgebraic equations use IDA, based on repeated BDF and Newton iteration methods.
For 1+1-dimensional PDEs the method of lines is used.
NDSolve supports explicit Method settings that cover most known methods from the literature.
The code for NDSolve and related functions is about 1400 pages long.
Approximate equation solving and optimization
Polynomial root finding is done based on the Jenkins-Traub algorithm.
For sparse linear systems, Solve and NSolve use several efficient numerical methods, mostly based on Gauss factoring with Markowitz products (approximately 250 pages of code).
For systems of algebraic equations, NSolve computes a numerical Gröbner basis using an efficient monomial ordering, then uses eigensystem methods to extract numerical roots.
FindRoot uses a damped Newton's method, the secant method and Brent's method.
With Method->Automatic and two starting values, FindMinimum uses Brent's principal axis method. With one starting value for each variable, FindMinimum uses BFGS quasi-Newton methods, with a limited memory variant for large systems.
If the function to be minimized is a sum of squares, FindMinimum uses the Levenberg-Marquardt method (Method->LevenbergMarquardt).
LinearProgramming uses simplex and revised simplex methods, and with Method->"InteriorPoint" uses primal-dual interior point methods.
For linear cases, NMinimize and NMaximize use the same methods as LinearProgramming. For nonlinear cases, they use Nelder-Mead methods, supplemented by differential evolution, especially when integer variables are present.
Data manipulation
Fourier uses the FFT algorithm with decomposition of the length into prime factors. When the prime factors are large, fast convolution methods are used to maintain asymptotic complexity.
For real input, Fourier uses a real transform method.
ListConvolve and ListCorrelate use FFT algorithms when possible. For exact integer inputs, enough digits are computed to deduce exact integer results.
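The FFT-based approach can be sketched in Python (illustrative only; function names, the radix-2 restriction, and the use of floating-point transforms with final rounding are assumptions about one way to realize the idea of deducing exact integer results):

```python
import cmath

def fft(a, invert=False):
    """Radix-2 Cooley-Tukey FFT; the length of a must be a power of 2.
    The inverse transform omits the final division by n."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def list_convolve(x, y):
    """Convolution of integer lists: zero-pad to a power of 2,
    transform, multiply pointwise, inverse-transform, and round
    back to exact integers."""
    n = 1
    while n < len(x) + len(y) - 1:
        n *= 2
    fx = fft([complex(v) for v in x] + [0j] * (n - len(x)))
    fy = fft([complex(v) for v in y] + [0j] * (n - len(y)))
    prod = fft([a * b for a, b in zip(fx, fy)], invert=True)
    return [round(v.real / n) for v in prod[:len(x) + len(y) - 1]]
```

Rounding is valid only while the floating-point error stays below 1/2; for larger inputs one must carry enough working digits, which is the point made above about exact integer results.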
InterpolatingFunction uses divided differences to construct Lagrange or Hermite interpolating polynomials.
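The divided-difference construction for the Lagrange case can be sketched as follows (illustrative Python; the function name and in-place update order are assumptions):

```python
def newton_interpolation(xs, ys):
    """Build the Newton-form interpolating polynomial through the
    points (xs[i], ys[i]) via divided differences, and return a
    function that evaluates it."""
    n = len(xs)
    coef = list(ys)                  # overwritten with divided differences
    for j in range(1, n):
        # update from the bottom up so lower-order entries survive
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    def p(t):
        # Horner-style evaluation of the Newton form
        acc = coef[-1]
        for i in range(n - 2, -1, -1):
            acc = acc * (t - xs[i]) + coef[i]
        return acc
    return p
```

Divided differences make it cheap to add one more interpolation point: only a single new diagonal of the table is needed, rather than a full refit.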
Fit works using singular value decomposition. FindFit uses the same method for the linear least-squares case, the Levenberg-Marquardt method for nonlinear least-squares, and general FindMinimum methods for other norms.
CellularAutomaton uses bit-packed parallel operations with bit slicing. For elementary rules, absolutely optimal Boolean functions are used, while for totalistic rules, just-in-time compiled bit-packed tables are used. In two dimensions, sparse bit-packed arrays are used when possible, with only active clusters updated.
Approximate numerical linear algebra
Machineprecision matrices are typically converted to a special internal representation for processing.
SparseArray with rules involving patterns uses cylindrical algebraic decomposition to find connected array components. Sparse arrays are stored internally using compressed sparse row formats, generalized for tensors of arbitrary rank.
For dense arrays, LAPACK algorithms extended for arbitrary precision are used when appropriate.
BLAS technology is used to optimize for particular machine architectures.
LUDecomposition, Inverse, RowReduce and Det use Gaussian elimination with partial pivoting. LinearSolve uses the same methods, together with iterative improvement for highprecision numbers.
For sparse arrays, LinearSolve uses UMFPACK multifrontal direct solver methods, and with Method->"Krylov" uses Krylov iterative methods preconditioned by an incomplete LU factorization.
Eigenvalues and Eigenvectors use ARPACK Arnoldi methods.
SingularValueDecomposition uses the QR algorithm with Givens rotations. PseudoInverse, NullSpace and MatrixRank are based on SingularValueDecomposition.
QRDecomposition uses Householder transformations.
SchurDecomposition uses QR iteration.
MatrixExp uses Schur decomposition.
Exact numerical linear algebra
Inverse and LinearSolve use efficient row reduction based on numerical approximation.
With Modulus->n, modular Gaussian elimination is used.
Det uses modular methods and row reduction, constructing a result using the Chinese Remainder Theorem.
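The modular scheme can be sketched in Python (an illustrative sketch, not the actual implementation; function names are assumptions, and the sketch requires the chosen primes to multiply to more than twice the absolute value of the determinant):

```python
from functools import reduce

def det_mod(matrix, p):
    """Determinant mod a prime p, by Gaussian elimination over GF(p)."""
    m = [row[:] for row in matrix]
    n, det = len(m), 1
    for c in range(n):
        pivot = next((r for r in range(c, n) if m[r][c] % p), None)
        if pivot is None:
            return 0
        if pivot != c:
            m[c], m[pivot] = m[pivot], m[c]
            det = -det                       # row swap flips the sign
        det = det * m[c][c] % p
        inv = pow(m[c][c], p - 2, p)         # pivot inverse mod p (Fermat)
        for r in range(c + 1, n):
            f = m[r][c] * inv % p
            for k in range(c, n):
                m[r][k] = (m[r][k] - f * m[c][k]) % p
    return det % p

def det_crt(matrix, primes):
    """Reassemble the integer determinant from its residues modulo
    several primes via the Chinese Remainder Theorem."""
    M = reduce(lambda a, b: a * b, primes)
    x = 0
    for p in primes:
        Mp = M // p
        x = (x + det_mod(matrix, p) * Mp * pow(Mp, p - 2, p)) % M
    return x - M if x > M // 2 else x        # map to the symmetric range
```

Working modulo word-sized primes keeps every elimination step in machine arithmetic; only the final CRT reconstruction touches big integers.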
Eigenvalues works by interpolating the characteristic polynomial.
MatrixExp uses Putzer's method or Jordan decomposition.
