A.9.4 Numerical and Related Functions
Large integer multiplication uses the Karatsuba algorithm.
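The splitting step behind Karatsuba multiplication, which trades one of the four half-size products for a few extra additions, can be sketched as follows. This is a minimal Python illustration (the function name is mine; the kernel's actual C implementation is far more elaborate):

```python
def karatsuba(x, y):
    # Split each operand into high and low halves at bit position m, so that
    # x*y = a*2^(2m) + b*2^m + c with a = hx*hy, c = lx*ly,
    # and b = (hx+lx)*(hy+ly) - a - c, needing only three recursive products.
    if x < 10 or y < 10:
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    hx, lx = x >> m, x & ((1 << m) - 1)
    hy, ly = y >> m, y & ((1 << m) - 1)
    a = karatsuba(hx, hy)
    c = karatsuba(lx, ly)
    b = karatsuba(hx + lx, hy + ly) - a - c
    return (a << (2 * m)) + (b << m) + c
```

Replacing four half-size multiplications with three gives the O(n^log2(3)) ≈ O(n^1.585) running time that makes the method worthwhile for large integers.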
Integer powers are found by an algorithm based on Horner's rule.
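Reading the exponent's binary digits through Horner's rule yields the familiar left-to-right square-and-multiply scheme. A Python sketch for nonnegative exponents (the function name is mine):

```python
def power(b, n):
    # Horner's rule on the binary digits of n: since
    # n = ((d_k * 2 + d_{k-1}) * 2 + ...) with digits d_i,
    # b^n is built by squaring for each digit and multiplying when it is 1.
    r = 1
    for d in bin(n)[2:]:
        r = r * r
        if d == '1':
            r = r * b
    return r
```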
Exact roots start from numerical estimates.
Significance arithmetic is used for computations with approximate numbers.
Basic arithmetic uses approximately 400 pages of C source code.
Random uses the Wolfram rule 30 cellular automaton generator for integers.
It uses a Marsaglia-Zaman subtract-with-borrow generator for real numbers.
GCD uses the Jebelean-Weber accelerated GCD algorithm, together with a combination of Euclid's algorithm and an algorithm based on iterative removal of powers of 2.
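The combination of Euclid's algorithm with iterative removal of powers of 2 can be illustrated by a binary-Euclidean hybrid: shifts strip factors of 2 cheaply, and a subtraction step reduces the odd parts. A hedged Python sketch (the function name is mine, and the accelerated Jebelean-Weber machinery is omitted entirely):

```python
def gcd_binary_euclid(a, b):
    # Strip the common power of 2 once, then repeatedly make b odd by
    # shifting and reduce with a Euclid-style subtraction (which makes
    # the new b even again, so more factors of 2 can be removed).
    if a == 0:
        return b
    if b == 0:
        return a
    shift = ((a | b) & -(a | b)).bit_length() - 1   # common power of 2
    a >>= (a & -a).bit_length() - 1                 # make a odd
    while b:
        b >>= (b & -b).bit_length() - 1             # make b odd
        if a > b:
            a, b = b, a
        b -= a                                      # even (or zero) result
    return a << shift
```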
PrimeQ first tests for divisibility using small primes, then uses the Miller-Rabin strong pseudoprime test base 2 and base 3, and then uses the Lucas test.
As of 1995, this procedure is known to be correct only for n < 10^16, and it is conceivable that for larger n it could claim a composite number to be prime.
The package NumberTheory`PrimeQ` contains a much slower algorithm that has been proved correct for all n. It can return an explicit certificate of primality.
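The first two stages of the PrimeQ procedure, trial division by small primes followed by strong pseudoprime tests to bases 2 and 3, can be sketched in Python. The function names are mine, and the Lucas test stage is omitted, so this sketch alone is weaker than the procedure described above:

```python
def strong_pseudoprime(n, base):
    # Write n - 1 = d * 2^s with d odd, then check the Miller-Rabin
    # condition: base^d == 1, or base^(d*2^r) == -1 for some r < s (mod n).
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(base, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def probable_prime(n):
    # Trial division by a few small primes, then strong tests to bases 2 and 3.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    return strong_pseudoprime(n, 2) and strong_pseudoprime(n, 3)
```

The composite 2047 = 23 * 89 passes the base-2 test but is caught by base 3, which is why more than one base (and the Lucas test) is used.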
FactorInteger switches between removing small primes by trial division and using the Pollard p-1, Pollard rho and continued fraction algorithms.
The package NumberTheory`FactorIntegerECM` contains an elliptic curve algorithm suitable for factoring some very large integers.
Prime and PrimePi use sparse caching and sieving. For large n, the Lagarias-Miller-Odlyzko algorithm for PrimePi is used, based on asymptotic estimates of the density of primes, and is inverted to give Prime.
LatticeReduce uses the Lenstra-Lenstra-Lovasz lattice reduction algorithm.
Most of the combinatorial functions use sparse caching and recursion.
Fibonacci[n] uses an iterative method based on the binary digit sequence of n.
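One such scheme processes the binary digits of n with the doubling identities F(2k) = F(k)(2F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2. A Python sketch (the function name is mine):

```python
def fibonacci(n):
    # Invariant: (a, b) = (F(k), F(k+1)). Each binary digit of n
    # doubles k; a 1 digit additionally advances k by one.
    a, b = 0, 1
    for bit in bin(n)[2:]:
        a, b = a * (2 * b - a), a * a + b * b   # k -> 2k
        if bit == '1':
            a, b = b, a + b                     # 2k -> 2k + 1
    return a
```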
PartitionsP[n] uses Euler's pentagonal formula for small n, and the non-recursive Hardy-Ramanujan-Rademacher method for larger n.
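Euler's pentagonal formula gives the recurrence p(m) = sum over k >= 1 of (-1)^(k+1) [p(m - k(3k-1)/2) + p(m - k(3k+1)/2)]. A Python sketch of the small-n path (the function name is mine; the Hardy-Ramanujan-Rademacher method is not shown):

```python
def partitions(n):
    # Tabulate p(0..n) with Euler's pentagonal number recurrence; the
    # generalized pentagonal numbers k(3k-1)/2 and k(3k+1)/2 index the terms.
    p = [1] + [0] * n
    for m in range(1, n + 1):
        k, total = 1, 0
        while True:
            g1 = k * (3 * k - 1) // 2
            if g1 > m:
                break
            sign = -1 if k % 2 == 0 else 1
            total += sign * p[m - g1]
            g2 = k * (3 * k + 1) // 2
            if g2 <= m:
                total += sign * p[m - g2]
            k += 1
        p[m] = total
    return p[n]
```

Only O(sqrt(m)) terms contribute for each m, so the whole table costs O(n^1.5) additions.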
ClebschGordan and related functions use generalized hypergeometric series.
Elementary transcendental functions
Exponential and trigonometric functions use Taylor series, stable recursion by argument doubling, and functional relations.
Log and inverse trigonometric functions use Taylor series and functional relations.
Values of constants are cached once computed.
Pi uses the Chudnovsky formula (roughly 14 digits per step).
E is computed from its continued fraction expansion.
EulerGamma uses the Brent-McMillan algorithm.
For machine precision, most special functions use Mathematica-derived rational minimax approximations. The notes that follow apply mainly to arbitrary precision.
Orthogonal polynomials use stable recursion formulas for polynomial cases and hypergeometric functions in general.
Gamma uses recursion, functional equations and the Binet asymptotic formula.
Incomplete gamma and beta functions use hypergeometric series and continued fractions.
PolyGamma uses Euler-Maclaurin summation, functional equations and recursion.
PolyLog uses Euler-Maclaurin summation, expansions in terms of incomplete gamma functions and numerical quadrature.
Zeta and related functions use Euler-Maclaurin summation and functional equations. Near the critical strip they also use the Riemann-Siegel formula.
StieltjesGamma uses Keiper's algorithm based on numerical quadrature of an integral representation of the zeta function.
The error function and functions related to exponential integrals are all evaluated using incomplete gamma functions.
The inverse error functions use binomial search and a high-order generalized Newton's method.
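The idea can be sketched in Python with plain bisection for bracketing followed by an ordinary first-order Newton iteration, rather than the high-order method used internally. The function name is mine, and erf'(x) = 2/sqrt(pi) * exp(-x^2) supplies the Newton derivative:

```python
import math

def inverse_erf(y, tol=1e-14):
    # Bracket the root of erf(x) - y by bisection on [-6, 6], where erf is
    # monotone, then polish with Newton's method.
    lo, hi = -6.0, 6.0
    for _ in range(20):                     # coarse bisection
        mid = (lo + hi) / 2
        if math.erf(mid) < y:
            lo = mid
        else:
            hi = mid
    x = (lo + hi) / 2
    for _ in range(50):                     # Newton refinement
        f = math.erf(x) - y
        x_new = x - f * math.sqrt(math.pi) / 2 * math.exp(x * x)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x
```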
Bessel functions use series and asymptotic expansions. For integer orders, some also use stable forward recursion.
The hypergeometric functions use functional equations, stable recurrence relations, series expansions and asymptotic series. Methods from NSum and NIntegrate are also sometimes used.
ProductLog uses high-order Newton's method starting from rational approximations and asymptotic expansions.
Elliptic integrals are evaluated using the descending Gauss transformation.
Elliptic theta functions use series summation with recursive evaluation of series terms.
Other elliptic functions mostly use arithmetic-geometric mean methods.
Mathieu functions use Fourier series. The Mathieu characteristic functions use generalizations of Blanch's Newton method.
With Method->Automatic, NIntegrate uses GaussKronrod in one dimension, and MultiDimensional otherwise.
If an explicit setting for MaxPoints is given, NIntegrate by default uses Method->QuasiMonteCarlo.
GaussKronrod: adaptive Gaussian quadrature with error estimation based on evaluation at Kronrod points.
DoubleExponential: non-adaptive double-exponential quadrature.
Trapezoidal: elementary trapezoidal method.
Oscillatory: transformation to handle integrals containing trigonometric and Bessel functions.
MultiDimensional: adaptive Genz-Malik algorithm.
MonteCarlo: non-adaptive Monte Carlo.
QuasiMonteCarlo: non-adaptive Halton-Hammersley-Wozniakowski algorithm.
Numerical sums and products
If the ratio test does not give 1, the Wynn epsilon algorithm is applied to a sequence of partial sums or products.
Otherwise Euler-Maclaurin summation is used with Integrate or NIntegrate.
Numerical differential equations
With Method->Automatic, NDSolve switches between a non-stiff Adams method and a stiff Gear method, both based on LSODE.
Adams: implicit Adams method with order between 1 and 12.
Gear: backward difference formula method with order between 1 and 5.
RungeKutta: Fehlberg order 4-5 Runge-Kutta method for non-stiff equations.
For linear boundary value problems the Gel'fand-Lokutsiyevskii chasing method is used.
For 1+1-dimensional PDEs the method of lines is used.
The code for NDSolve is about 500 pages long.
Approximate equation solving and minimization
Polynomial root finding is based on the Jenkins-Traub algorithm.
For sparse linear systems, Solve and NSolve use several efficient numerical methods, mostly based on Gauss factoring with Markowitz products (approximately 250 pages of code).
FindRoot uses a damped Newton's method, the secant method and Brent's method.
With Method->Automatic, FindMinimum uses various methods due to Brent: the conjugate gradient method in one dimension, and a modification of Powell's method in several dimensions.
With Method->Newton, FindMinimum uses Newton's method. With Method->QuasiNewton, FindMinimum uses the BFGS version of the quasi-Newton method.
ConstrainedMax and related functions use an enhanced version of the simplex algorithm.
Fourier uses the FFT algorithm with decomposition of the length into prime factors.
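For the power-of-2 factor, the decomposition reduces to the radix-2 Cooley-Tukey recursion. A Python sketch of that special case only (the function name is mine; general lengths and Fourier's 1/sqrt(n) normalization are not handled):

```python
import cmath

def fft(a):
    # Recursive radix-2 Cooley-Tukey: split into even- and odd-indexed
    # halves, transform each, and combine with twiddle factors e^(-2*pi*i*k/n).
    n = len(a)
    if n == 1:
        return a[:]
    even, odd = fft(a[0::2]), fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```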
InterpolatingFunction uses divided differences to construct Lagrange or Hermite interpolating polynomials.
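The Lagrange case can be sketched: divided-difference coefficients are built in place, then the Newton form is evaluated Horner-style. A Python illustration (the function name is mine; the Hermite case with repeated nodes and derivative data is not handled):

```python
def newton_interpolation(xs, ys):
    # c[k] becomes the divided difference f[x_0, ..., x_k]; updating the
    # array from the back reuses lower-order differences already stored.
    n = len(xs)
    c = list(ys)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - k])

    def p(x):
        # Horner-style evaluation of c[0] + c[1](x-x_0) + c[2](x-x_0)(x-x_1) + ...
        r = c[-1]
        for i in range(n - 2, -1, -1):
            r = r * (x - xs[i]) + c[i]
        return r

    return p
```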
Fit works by computing the product of the response vector with the pseudoinverse of the design matrix.
Approximate numerical linear algebra
Machine-precision matrices are typically converted to a special internal representation for processing.
Algorithms similar to those of LINPACK, EISPACK and LAPACK are used when appropriate.
LUDecomposition, Inverse, RowReduce and Det use Gaussian elimination with partial pivoting. LinearSolve uses the same methods, together with iterative improvement for high-precision numbers.
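Elimination with partial pivoting can be sketched in a few lines: at each column the largest remaining pivot is swapped into place, rows below are cleared, and back substitution recovers the solution. A Python illustration (the function name is mine; the iterative-improvement stage is omitted):

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting on the augmented matrix,
    # followed by back substitution.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[piv] = M[piv], M[k]          # bring largest pivot up
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```

Choosing the largest available pivot bounds the growth of the multipliers, which is what makes the method numerically trustworthy in floating point.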
SingularValues uses the QR algorithm with Givens rotations. PseudoInverse and NullSpace are based on SingularValues.
QRDecomposition uses Householder transformations.
SchurDecomposition uses QR iteration.
MatrixExp uses Schur decomposition.
Exact numerical linear algebra
Inverse and LinearSolve use efficient row reduction based on numerical approximation.
With Modulus->n, modular Gaussian elimination is used.
Det uses modular methods and row reduction, constructing a result using the Chinese Remainder Theorem.
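The modular route can be sketched: compute the determinant modulo several primes by Gaussian elimination over Z/p, then recombine the residues with the Chinese Remainder Theorem, taking the representative of smallest absolute value. A Python illustration (function names and the particular primes are mine, chosen only so their product exceeds twice any test determinant):

```python
def det_mod(A, p):
    # Gaussian elimination over Z/p (p prime); det is the signed product
    # of the pivots. pow(a, p - 2, p) gives the modular inverse by Fermat.
    n = len(A)
    M = [[a % p for a in row] for row in A]
    det = 1
    for k in range(n):
        piv = next((i for i in range(k, n) if M[i][k]), None)
        if piv is None:
            return 0
        if piv != k:
            M[k], M[piv] = M[piv], M[k]
            det = -det % p                  # row swap flips the sign
        det = det * M[k][k] % p
        inv = pow(M[k][k], p - 2, p)
        for i in range(k + 1, n):
            f = M[i][k] * inv % p
            for j in range(k, n):
                M[i][j] = (M[i][j] - f * M[k][j]) % p
    return det

def det_crt(A, primes=(10007, 10009, 10037, 10039)):
    # Merge the residues pairwise with the CRT, then map the result from
    # [0, N) to the symmetric range so negative determinants come out right.
    N, x = 1, 0
    for p in primes:
        r = det_mod(A, p)
        x = x + N * ((r - x) * pow(N, p - 2, p) % p)
        N *= p
    return x if x <= N // 2 else x - N
```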
Eigenvalues works by interpolating the characteristic polynomial.
MatrixExp uses Putzer's method or Jordan decomposition.