This is documentation for Mathematica 4, which was based on an earlier version of the Wolfram Language.

Numerics in Mathematica

Mark Sofroniou

Mathematica gained several important new numerical features in Versions 3 and 4. They include a new input syntax for floating-point numbers, enhancements to the significance arithmetic used to track the propagation of errors through numerical functions, efficient new techniques for handling exact numeric quantities, a caching mechanism for numerical expressions, and enhancements to the numericalization function N.

This article introduces these new numerical features. Much of the functionality discussed here began with the pioneering work of Jerry B. Keiper. Parts of this article are based upon his technical report [Keiper 1994]. We will not cover the significant enhancements to the numerical compiler, the new and enhanced standard numerical packages, or the several new numerical functions in the latest version.

To set the new features in context, we begin with a summary of the arithmetic model used in Mathematica.

Significance Arithmetic

Mathematica's bigfloat arithmetic model is a variant of interval arithmetic where a single floating point number is used to represent the error. This is formally known as significance or range arithmetic [Keiper 1995]. When the number of significant digits is not too small, this model accurately parallels interval arithmetic whilst being considerably more efficient. The model can be used to reflect the conditioning of an algorithm and therefore gives a good indication of how many digits in a result can be trusted (see [Sofroniou 1996] for an example). This mechanism has important implications, such as the accurate determination of branch cuts in numerical functions.

The scale of a number is the number of digits to the left of the decimal point. We can define the decimal scale as follows.
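The stripped definition might have looked something like this (a sketch; the name scale is ours):

```mathematica
(* real-valued decimal scale of a number; for x >= 1, Floor[scale[x]] + 1
   counts the digits to the left of the decimal point *)
scale[x_] := Log[10, Abs[x]]
```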

Then the model used in Mathematica's bigfloat arithmetic is the relation

Precision == Scale + Accuracy.

For reasons of backward compatibility, Precision and Accuracy return integer results even though these values are stored internally as floats. The equality above holds only for the real-valued extensions of these functions.

Picture the decimal digits of a number laid out around the decimal point: the Accuracy is the number of digits to the right of the decimal point and the Precision is the total number of significant digits. Of course, the scale of a number can be negative, as can the precision and accuracy.

No attempt is made to keep track of the precision or accuracy of machine floats. The assumption is that if you are using machine numbers, you are primarily concerned with efficiency. Furthermore, machine arithmetic is contagious. The result of an operation that involves a machine number should, if possible, be a machine number.

We need a mechanism to enable errors to be propagated automatically through computations. For simple functions, the propagated error can be easy to determine using the condition number [Skeel and Keiper 1993]. However, as expressions become very large, it can become tedious or impractical to undertake a rigorous numerical stability analysis.

Our goal is to find a way to represent a number so that we can independently certify its accuracy or precision, and hence determine its error, regardless of the means by which it was computed. Given this information, we need to equip our numerical functions with the ability to propagate errors. Mathematica's numerical functions are designed to reflect the correct amount of uncertainty for a given input. The elementary arithmetic functions Plus and Times then propagate errors between functions assuming that errors are independent.

The way that Mathematica measures errors is to associate with a number self-contained and readily accessible information about the degree of uncertainty. Each bigfloat x is effectively a pseudo-interval x ± 10^-a, where a is the Accuracy. Thus, the Accuracy is the negative of the scale of the error. The Accuracy and Precision represent the scales of the absolute and relative errors: the scale of the absolute error is Log[10, Abs[error]] and the scale of the relative error is Log[10, Abs[error/x]]. Since the Accuracy and the Precision are actually fields in the bignum structure, the scale can be immediately ascertained as the difference of these two quantities.

The amount of precision that you should expect in a result varies according to each function; certain functions can even increase the amount of precision. The way that errors are propagated can be illustrated using the differential quotient, which we can write as

Δf ≈ f'[z] Δz,

where Δ (CapitalDelta) represents the difference operator. The differential quotient is a linearized approximation estimating the variation of a function. As an example, the error function Erf can give a result that is much more precise than the input.


The scale of the error is

The following function projects an error interval for an arbitrary differentiable function f at a point z using the differential quotient.

We use this function to propagate the error for Erf at the value z.

Recompute the scale of the error and compare it to the Accuracy of the result returned by Erf.
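The stripped computations above might be reconstructed along these lines (the input z = 2 with 20 digits, and the function name errorInterval, are our assumptions):

```mathematica
z = SetPrecision[2, 20];     (* a 20-digit input *)
w = Erf[z]                   (* the result carries more than 20 digits *)
Accuracy[w]                  (* the negative scale of the error in w *)

(* project an error interval for f at z via the differential quotient *)
errorInterval[f_, z_] := Abs[f'[z]] 10^(-Accuracy[z])

err = errorInterval[Erf, z]; (* propagated error for Erf at z *)
N[Log[10, err]]              (* compare with -Accuracy[w] above *)
```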

To illustrate the correspondence between significance arithmetic and interval arithmetic, we adapt an example from [Jacobson 1992]. Given a floating point number, double it and subtract the original number. Repeating this process we might expect to continue to lose the least significant bit from our internal binary representation. The result obtained using Mathematica can be more pessimistic.

Begin with a number with 30 decimal digits.

Using fixed precision we might expect to lose 25 binary digits in 25 iterations, which is roughly 7.5 decimal digits.

The result is actually worse, in that we have lost around 12 significant decimal digits.

The scale of the error is 19 digits to the right of the decimal point.

Now, let us repeat the example using exact interval arithmetic. Using an exact interval avoids the outward rounding of inexact endpoints that would otherwise occur at each step of the computation. The midpoint of the interval is

and the offset (half the width of the interval) is

Here is an interval with the same uncertainty as our original number.

We repeat the previous iteration using the exact interval.

Next, we determine the scale of the width of the interval and find that it is very close indeed to the accuracy of the value that was obtained using Mathematica's significance arithmetic.
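The whole experiment might be sketched as follows (starting from 1/3 with 30 digits is our assumption; losing Log[10, 3] ≈ 0.48 digits per step over 25 steps accounts for the roughly 12 digits lost):

```mathematica
x = SetPrecision[1/3, 30];        (* a number with 30 decimal digits *)
y = Nest[2 # - # &, x, 25];       (* double and subtract, 25 times *)
Precision[y]                      (* around 12 digits have been lost *)
Accuracy[y]                       (* the scale of the error *)

(* the same iteration using exact interval arithmetic *)
mid = SetPrecision[x, Infinity];  (* exact midpoint *)
off = 10^(-Accuracy[x])/2;        (* offset: half the interval width *)
int = Interval[{mid - off, mid + off}];
res = Nest[2 # - # &, int, 25];
N[Log[10, Max[res] - Min[res]]]   (* very close to -Accuracy[y] *)
```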

It is well known that interval arithmetic can sometimes give overly pessimistic results, a phenomenon referred to as the dependency problem. As we have seen, the assumption of independence of errors in Mathematica's significance arithmetic can have a similar effect. Some heuristics for rearranging expressions to improve the bounds given by interval arithmetic are discussed in [Hansen 1992]. Iterative algorithms must generally be recast for interval computations. A user who has information about how errors are propagated can manually specify how errors combine, using the functions SetPrecision or SetAccuracy. A user also has the choice of working with fixed-precision arithmetic by setting values for $MinPrecision and $MaxPrecision.

The primary goal of significance arithmetic is to give a good estimate of the number of correct digits in a result, whilst being efficient and accurate most of the time. The arithmetic model works well provided that the length of the interval representing the error is short relative to the magnitude of the numbers represented. However, some problems require a rigorous error analysis. For such problems, bona fide interval arithmetic can be used.

Here are some more examples to illustrate various aspects of the arithmetic model. Mathematica's accuracy and precision are computed internally as double precision numbers, although for reasons of backward compatibility the functions Accuracy and Precision round results to the nearest integer value.

An option can be used to switch off rounding.

Now, the precision and accuracy are real-valued functions.

This switches rounding back on again.

The precision of a bigfloat can be negative (although negative precision can be a little confusing). The default value of $MinPrecision bounds numbers with negative precision. Here is the default behavior when the set precision is negative.

We can change the default to allow numbers to attain negative precision.

Numbers with zero or negative precision are printed as zero, since none of the digits are known to be correct.

This restores the system default value.
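A sketch of the sequence, assuming the default setting $MinPrecision = 0:

```mathematica
SetPrecision[1, -5]        (* clipped at the default $MinPrecision of 0 *)
$MinPrecision = -Infinity; (* allow numbers to attain negative precision *)
SetPrecision[1, -5]        (* zero or negative precision prints as zero *)
$MinPrecision = 0;         (* restore the system default *)
```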

The precisions of the real and imaginary parts of a complex number are independent. The scale of a complex number is given by the absolute value Abs, the Accuracy is the minimum of the accuracies of the real and imaginary parts, and the Precision is calculated as the sum of the accuracy and the scale. To illustrate, let us define a complex number whose real and imaginary parts have different scales and precisions.

Computing the decimal Scale, the Accuracy, and the Precision shows the connection between these quantities.
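For instance (the particular real and imaginary parts are our choice):

```mathematica
z = SetPrecision[100, 25] + I SetPrecision[1/1000, 12];
{N[Log[10, Abs[z]]], Accuracy[z], Precision[z]}
(* Precision[z] is the sum of Accuracy[z] and the scale Log[10, Abs[z]] *)
```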

New Syntax of Floating-Point Numbers

InputForm and FullForm have been changed to address several problems. In the past, the FullForm of a very large or small number was given in "scientific" notation with a raised exponent. This raised exponent made the character string unparsable and FullForm could not be used as input. In the latest versions, FullForm of floating-point numbers has been changed to be the same as InputForm.

Another limitation was that in scientific notation, numbers became multiplication problems. For example, to evaluate 1.234*10^56789, Mathematica would multiply the two numbers 1.234 and 10^56789. The multiplication of the inexact and exact numbers was not so much of a problem as the computation and representation of the exponent 10^56789, which requires 56790 decimal digits. This was wasteful of time and memory.

A further problem with the earlier InputForm was that the specification of certain types of numbers was ambiguous. InputForm specified the precision of arbitrary-precision numbers by counting digits and did not display hidden digits. By merely counting digits, arbitrary-precision numbers with low precision were given in a form that could potentially be mistaken for machine numbers.

The new InputForm of a floating-point number consists of three fields: the mantissa, the precision (or accuracy), and the exponent. The mantissa and precision are separated by a backquote symbol (`) and the exponent is preceded by *^. For example, the input syntax for the bigfloat 1.1 with 6.25 digits of precision is 1.1`6.25.

An empty exponent field indicates that the exponent is zero; an empty precision field indicates a machine number. The machine number 1.25 is 1.25`,

and the bigfloat 3.0 with 7.7124 digits of precision is 3`7.7124.

Not all decimal numbers are exactly representable as (finite) binary numbers. This gives the machine number closest to the decimal number 0.1.

The representation error is not apparent because it is smaller than machine precision and is masked by the output formatting code. In contrast, the bigfloat 1`100*^-1 is guaranteed to have 100 decimal digits. Using bigfloats, representation errors can thus be made small enough for all practical purposes. In addition, we could use rational arithmetic for exact representation.
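The hidden digits can be exposed by padding the machine number with extra (binary) digits:

```mathematica
SetPrecision[0.1, 40]   (* reveals the binary representation error in 0.1 *)
```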

Unfortunately, the concept of precision has a singularity at zero. We need a way to specify zeros to various accuracies. The syntax for specifying the accuracy of a number is the same as that for specifying the precision, except that ` is replaced by ``. InputForm only uses the accuracy notation to represent the number zero, but the parser will accept it as a specification of accuracy for other numbers as well. Here is an inexact zero with accuracy 23.2 digits: 0``23.2.

This bigfloat, 3``7.7124, has accuracy 7.7124 digits. Notice how significant trailing zeros are displayed.

This bigfloat has a negative scale and accuracy 5.6 digits. Significant leading zeros are displayed.

Because there is no attempt to keep track of significant digits with machine arithmetic, the machine numbers 3` and 3`` are exactly the same: each gives a machine number without regard to either the precision or the accuracy.

To maintain compatibility both with earlier versions of Mathematica and with other programs, the current version has a global variable, $NumberMarks, which can have one of three possible values: True, False, or Automatic. When the value is True, the syntax used involves ` and *^ for the InputForm of all floating point numbers. The value False indicates that the syntax is not to be used at all. The default value is Automatic, which indicates that the syntax is used for bigfloats but not for machine numbers.



You can turn off the syntax for both machine and arbitrary-precision numbers.

Now, neither machine numbers nor arbitrary-precision numbers are shown with the new syntax.


This resets the default.
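The sequence might look like this:

```mathematica
$NumberMarks = False;
InputForm[{N[Pi], N[Pi, 30]}]   (* no ` or *^ marks appear *)
$NumberMarks = Automatic;       (* restore the default *)
```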

Various functions like OpenWrite and OpenAppend have the option NumberMarks, which can take the value $NumberMarks or any of its three valid values. Likewise, all streams have the option NumberMarks with the same four valid values.

The N Function

A frustration with N in previous versions has been that it would sometimes give a result with less precision than was requested. For example, if you asked for N[Pi - q, 20] where q is a rational number very near Pi, you used to get a result with much less precision than 20 digits. This is because N was applied to each of Pi and q, and the result was the difference, which lost digits due to cancellation. Now, if digits are lost due to cancellation, the working precision is increased in an effort to get the requested precision in the result.

To achieve the desired precision or accuracy, the new algorithm for N attempts to anticipate the required precision with which to call a fixed-precision numericalization routine. This is done by keeping track of the deficiency in the resulting precision or accuracy as a function of the precision used. Normally, but not always, this function will be linear with slope one.

For example, this rational number is close to Pi.

Here is a 20-digit approximation to the difference Pi-q.
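A reconstruction (the particular rational q is our choice):

```mathematica
q = Round[Pi 10^30]/10^30;   (* a rational number within 10^-30 of Pi *)
N[Pi - q, 20]                (* 20 correct digits despite the cancellation *)
```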

In trying to achieve a requested precision, Mathematica stops increasing the working precision when the extra precision used reaches $MaxExtraPrecision. The default value of this global parameter is 50 digits, but you can set it to higher or lower values. For example, this constructs a rational number that is even closer to Pi.

Set $MaxExtraPrecision to 200 digits.

The difference can now be computed to 20 digits of precision.

This resets the default value.
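The sequence of steps might look like this (again, the particular q is our choice):

```mathematica
q = Round[Pi 10^100]/10^100;   (* a rational within 10^-100 of Pi *)
N[Pi - q, 20]                  (* fails with the default 50 extra digits *)
$MaxExtraPrecision = 200;
N[Pi - q, 20]                  (* now the 20 digits are obtained *)
$MaxExtraPrecision = 50;       (* restore the default value *)
```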

Of course, this approach could lead to difficulties: the two numeric quantities could have exactly the same value and no amount of increased precision would give a satisfactory result. This expression is equal to zero, but inexact heuristics cannot be used to determine that fact.

You should realize that if we compute a numerical approximation using machine numbers, there is no guarantee that any of the digits in the result are correct. The result can depend upon the number of bits of the mantissa, which effectively determines the precision of a machine's floating-point arithmetic. It can also depend strongly on the order that arithmetic operations are performed.

Here, we explicitly request 20 significant digits in the result. Mathematica gives up when the working precision used exceeds 70 digits. In contrast to the results of machine arithmetic, this result is consistent across different computer platforms because the arbitrary-precision arithmetic is implemented in software.

In fact, using algebraic transformations on nested radicals, Mathematica can now show that the expression is indeed equal to zero.
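A classic instance of such a hidden zero (the particular expression is our choice) is Sqrt[2] + Sqrt[3] - Sqrt[5 + 2 Sqrt[6]]:

```mathematica
expr = Sqrt[2] + Sqrt[3] - Sqrt[5 + 2 Sqrt[6]];
N[expr, 20]        (* gives up once the working precision exceeds 70 digits *)
RootReduce[expr]   (* algebraic transformation shows the expression is 0 *)
```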

There are other situations in which N may be unable to achieve a requested precision, for example when an expression contains low-precision inexact values. Here is a product with an inexact imaginary part.

Here, we ask for 100 digits of precision, but the result only has machine precision.
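For instance (the particular product is hypothetical):

```mathematica
expr = Pi (2 + 0.5 I)   (* the machine number 0.5 limits the precision *)
N[expr, 100]            (* the result has only machine precision *)
```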

Sometimes, you may want to avoid using Mathematica's new dynamic precision control mechanism. For example, you may already know how well or how badly conditioned your algorithm is or you may wish to use fixed-precision arithmetic. In such cases, you can set $MaxExtraPrecision to zero to indicate that N should not increment precision. This setting gives the same results as you would have obtained in previous versions of Mathematica.

Here is the evaluation of the difference Pi-q that we considered above using fixed-precision arithmetic. There are no significant digits in the result, so it displays as zero. Despite the fact that Mathematica may have stored the previous approximation to Pi-q (see the section on numerical caching), the system detects that global settings have changed and repeats the computation using fixed precision.
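A sketch of the fixed-precision computation:

```mathematica
q = Round[Pi 10^30]/10^30;   (* a rational approximation to Pi *)
$MaxExtraPrecision = 0;      (* use fixed-precision numericalization *)
N[Pi - q, 20]                (* no significant digits: displays as zero *)
$MaxExtraPrecision = 50;     (* restore the default *)
```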

A variant of this technology is used in many instances in order to adopt canonical forms for expressions. Canonical forms are essential because they assist in many types of expression simplification. The following radical is normalized by determining the sign of its argument. Since the argument is ascertained to be negative, the sign is extracted.

If we were to encounter a hidden numeric zero as the argument, this canonicalization could become quite costly and futile. Therefore, a milder version of the mechanism used by N is adopted internally by many functions. Even though the argument here is negative, the sign is not extracted.

An N-value associated to a symbol is a number that specifies how N should evaluate expressions involving the symbol. For example, here is an N-value defined for the symbol x.

This value is used in the numericalization of an expression involving x.

This gives the list of N-values of x.
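The three steps might be reconstructed as follows (the value 2.3 is arbitrary):

```mathematica
N[x] = 2.3;        (* define an N-value for the symbol x *)
N[1 + x^2]         (* the N-value is used during numericalization *)
NValues[x]         (* the list of N-values of x *)
NValues[x] = {};   (* clear the definition again *)
```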

Strictly speaking, the precision of an exact zero is indeterminate, while the precision of a floating-point zero is 0. Because the precision of 0 is not well defined, the defaults are that N[0] yields a machine inexact zero and N[0, prec] gives the integer 0.

However, one might sometimes prefer that N[0] give a different result. What that value should be is often dependent on the intended context. Another new feature of the function N is that the values of N[0] and N[0, prec] have been made user-definable. To set a value for N[0], first unprotect the system symbol Integer.

Assign a value to N[0].

N[0] is now the machine integer 0.

This is how the definition is stored.

The two-argument form of N is unchanged.

We can define a rule for the two-argument form.

Now the two-argument form works.

This removes the rules that were defined and restores the system default behavior.
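The whole sequence might look like this (choosing the integer 0 as the value of N[0] is our assumption):

```mathematica
Unprotect[Integer];
N[0] = 0;                (* N[0] now yields the integer 0 *)
NValues[Integer]         (* how the definition is stored *)
N[0, prec_] := 0;        (* a rule for the two-argument form *)
N[0, 30]                 (* now the two-argument form works *)
NValues[Integer] = {};   (* remove the rules; restore default behavior *)
Protect[Integer];
```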

Because precision and accuracy are stored internally as floating-point numbers, rather than being computed by simply counting digits, a rule such as N[0, 23] = foo will not work. You need to give the precision as a floating-point number, as in N[0, 23.] = foo.

N works by recursively going inside an expression and then numericalizing from the leaves upward through the expression tree. It is sometimes useful to be able to control the evaluation of N on specific parts of an expression. The attributes NHoldFirst, NHoldRest, and NHoldAll can be used for this purpose. For example, since arrays and functions have the same syntax in Mathematica, we may wish to specify that certain expressions are indexed variables and that the indices should be treated differently. We can use the NHoldAll attribute to protect the indices of an array from numericalization.
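For example (the symbol a is a hypothetical indexed variable):

```mathematica
SetAttributes[a, NHoldAll];   (* protect the indices of a from N *)
N[a[1] + a[2] + Pi]           (* the indices 1 and 2 remain exact integers *)
```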

Numeric Quantities

Mathematica has several kinds of expressions: numbers (either exact or inexact), expressions that represent numeric quantities but aren't numbers according to NumberQ (such as Pi or Sin[2]), and expressions that do not represent numbers even after passing through N (such as Sin[x + y]). When combining numbers with any of the four arithmetic operations, the result is always a number (except when division by zero, overflow, or underflow occurs). If the numbers are all exact, the result will likewise be exact. If inexact numbers are involved, the result will be inexact, unless all of the inexactness is lost due to multiplication by an exact zero. For example:

In previous versions, arithmetic expressions involving non-number numeric quantities, such as 0.5 + Sin[2], were returned unevaluated. The current version of Mathematica can evaluate such expressions efficiently to get an inexact number whose precision is determined using machine arithmetic or by the accuracy of the arguments.

To perform this addition, the exact numeric quantity Sin[2] must be "coerced" into an inexact number with a certain precision. Here is another example.

The scale of the numeric quantity Pi^100 is very large. The amount of precision required to coerce it is determined by the accuracy of the inexact number.
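The two coercions might be illustrated as follows (the second input is our guess at the stripped example):

```mathematica
0.5 + Sin[2]                 (* Sin[2] is coerced at machine precision *)
SetAccuracy[1, 20] + Pi^100  (* Pi^100 is coerced to roughly 70 digits:
                                its scale (about 50) plus the accuracy 20 *)
```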

In order to distinguish numeric quantities from other expressions, Mathematica needs to be able to tell quickly whether an expression would become a number if it were encountered by the function N. This is done by marking numeric quantities during evaluation. The new predicate NumericQ checks if an expression is marked as numeric.

How does Mathematica decide whether an expression is numeric? NumericQ acts on expressions recursively. All numbers, as well as the constants Catalan, Degree, E, EulerGamma, GoldenRatio, and Pi, are numeric quantities. Any expression built up from numeric quantities using functions with the new attribute NumericFunction is also a numeric quantity. For example, Sin is a NumericFunction, so Sin[2] is a numeric quantity.

On the other hand, f is not a NumericFunction, so f[2] is not a numeric quantity.
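For example:

```mathematica
NumericQ[Sin[2]]   (* True: Sin has the NumericFunction attribute *)
NumericQ[f[2]]     (* False: f is not a numeric function *)
```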

NumericQ and NumericFunction provide an efficient mechanism for predicting whether an expression would become a number if it were encountered by N. However, this mechanism is not foolproof. NumericQ[Sin[2, 3]] gives True, but N[Sin[2, 3]] is simply Sin[2., 3.], not a number. Likewise, NumericQ can give false negatives. For example, with this definition of N[f[x]],

the expression f[2] just evaluates to itself and is not a numeric quantity.

Nevertheless, N[f[2]] evaluates to a number.

This clears the N-value associated with f.
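The false-negative example might be reconstructed as follows (the particular N-value is our choice):

```mathematica
N[f[x_]] := x^2;   (* an N-value for f, invisible to NumericQ *)
NumericQ[f[2]]     (* False: f[2] is not marked as numeric *)
N[f[2]]            (* nevertheless evaluates to a number *)
NValues[f] = {};   (* clear the N-value associated with f *)
```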

In writing programs, you should be aware that NumericQ may not always give an accurate prediction of the effects of N. If you must be certain of the effect of N, you should actually use N and check the result.

We can use the NumericFunction attribute in conjunction with SetPrecision to specify the conditioning of a function. The following function takes an inexact number as input and returns a number whose precision depends on the input precision.

The requested 200 digits of precision cannot be attained, so Mathematica gives up when the result exceeds 125 digits.

If we raise the extra precision by a sufficient amount, we can obtain an answer to the desired precision.


Of course, we can always define a function that is so badly conditioned that no amount of precision will give us a satisfactory result. Here is such a function.

Reset the system default value.
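One possible reconstruction (the definitions are guesses chosen to match the behavior described above):

```mathematica
SetAttributes[h, NumericFunction];
h[x_Real] := SetPrecision[x, Precision[x] - 125];  (* loses 125 digits *)

N[h[2], 200]              (* unattainable with the default extra precision *)
$MaxExtraPrecision = 200;
N[h[2], 200]              (* the requested 200 digits are now reached *)
$MaxExtraPrecision = 50;  (* restore the system default value *)

(* so badly conditioned that no amount of precision helps *)
hopeless[x_Real] := SetPrecision[x, 10];
SetAttributes[hopeless, NumericFunction];
```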

Many of the built-in functions, like Equal, Less, and Mod, support the new numeric functionality. Numeric quantities with different numerical values can be tested for equality or inequality. For example, we can compare Pi with the rational approximation defined above.
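For example (the particular q is our choice):

```mathematica
q = Round[Pi 10^30]/10^30;   (* a rational approximation to Pi *)
Pi == q                      (* resolved numerically: False *)
Pi != q                      (* True *)
```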

In expressions that contain inexact numbers, exact numeric quantities are coerced appropriately. The amount of precision used in the coercion is determined by the Accuracy of the inexact numbers present.

The right-hand side of this equality is a machine number, but a fairly large amount of precision is needed to try to verify whether the equality holds. The result here indicates that the inputs cannot be distinguished.

In fact, the amount of precision required to ascertain if a numeric quantity is not equal to a machine zero is the negative of the scale of the minimum machine number.

Accuracy gives us the same information directly.
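That is:

```mathematica
Accuracy[0.]   (* about 308, the negative scale of $MinMachineNumber *)
```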

If an equality or inequality cannot be evaluated using maximum working precision, the input is returned unevaluated.

If the previous inequality is exact on both sides, then Mathematica will continue to raise precision until the current value of $MaxExtraPrecision is exceeded.

Here the left- and right-hand sides are in fact different, and the inequality is resolved.

The functions Mod, Round, Floor, and Ceiling also attempt to resolve numeric quantities numerically.

The new functions IntegerPart and FractionalPart are closely related to Floor and Mod.
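For instance:

```mathematica
Mod[10, Pi]                             (* resolved numerically: 10 - 3 Pi *)
{IntegerPart[Pi], FractionalPart[Pi]}   (* IntegerPart[Pi] gives 3 *)
```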

Expressions involving inverse trigonometric functions of trigonometric functions and logarithms of exponentials are often closely related to Mod. When such expressions are numeric quantities, they can be evaluated numerically.

When Mathematica sorts a list of expressions, all of the numbers come before all of the non-numbers. Real numbers are ordered by value and complex numbers by the value of their real parts. Numbers are followed by symbols, which in turn are followed by normal expressions. However, numeric quantities that are not numbers (according to NumberQ) are sorted lexicographically. Thus, they might be sandwiched between non-numeric quantities.

The numbers themselves are sorted by value, but the numeric quantities are ordered lexicographically.

By specifying an ordering function you can sort real-valued numeric quantities by magnitude, but this is slower than the canonical ordering.
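For example (the particular list is our choice):

```mathematica
Sort[{2., 1/2, Pi, Sqrt[2], x, E}]   (* numbers first, by value; the numeric
                    quantities E, Pi, and Sqrt[2] follow lexicographically,
                    mixed in with the symbol x *)
Sort[{Pi, Sqrt[2], E}, Less]         (* order numeric quantities by magnitude *)
```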

Because of the non-uniqueness of representation and the limitations of using numerical heuristics to distinguish expressions, it is not always possible to order numeric expressions uniquely according to magnitude.

The extension of predicates to handle numeric quantities is useful, for example, for the assumptions mechanism in Integrate. Previous versions of Integrate returned only generic solutions. Now, Integrate returns multiple-valued solutions in the form of conditional statements.

If a is Pi, the correct solution branch is generated because the assertion Re[Pi] > 0 can be evaluated.
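The example might look like this (the integrand is our choice):

```mathematica
Integrate[Exp[-a x], {x, 0, Infinity}]
  (* a conditional result, valid when Re[a] > 0 *)
Integrate[Exp[-Pi x], {x, 0, Infinity}]
  (* Re[Pi] > 0 evaluates, so the branch 1/Pi is returned *)
```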

The fact that multiple solutions are represented as conditional statements means that you can programmatically obtain all solution branches.

Numerical Caching

The new numeric technology gives us much more flexibility, but occasionally at considerable additional expense. In order to recover some of the arithmetic efficiency of Version 2.2, Mathematica now makes extensive use of a cache table to store results of numerical computations. Since common structures often appear at various levels of an expression and across computations, storing results helps to avoid duplication of effort.

As you perform numerical computations involving numeric quantities, you will often see that expressions are faster to evaluate in subsequent calls. Here is a simple compound numeric expression.

If we evaluate the same expression with the same or lower precision, it is retrieved from the cache and returned with that precision.
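For example (the particular expression and precision are arbitrary):

```mathematica
expr = Sqrt[2] + Exp[Pi];
Timing[N[expr, 5000];]   (* the first call does the work *)
Timing[N[expr, 5000];]   (* retrieved from the cache: much faster *)
Timing[N[expr, 4000];]   (* lower precision is also served from the cache *)
```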

Numeric constants have their own local caching mechanism, which stores the highest-precision result obtained in a session.

This is a mechanism that clears the entire numerical cache table.


Locally cached values are not affected when the cache table is cleared.

The numerical cache uses hashing of numeric quantities to determine an index to a two-dimensional look-up table. Each entry in the cache is a structure that contains various fields of information, including a pointer to the numeric quantity and the approximate numerical value. The working precision used to obtain the entry and the precision of the entry are also stored, in order to be able to ascertain any deficiency in precision. In addition, the value of $MaxExtraPrecision used to obtain an entry is stored, as is a system time-stamp of the head of the expression.

Because the numerical cache is finite in size, we need to decide which values to keep and which values to update once the cache is full. Therefore, the cache also stores the reference count of each entry and a numerical time stamp for the most recent reference. Items are selected for removal using a heuristic measure to determine the oldest, least referenced, lowest precision entry.

To illustrate how these cached quantities are used, let us define a numeric function and set a value in the cache by numericalizing with N.

Now, we numericalize to higher precision. The cached value is flushed and a new value is set in its place.

Next, we redefine the function, but maintain the same pattern in the left-hand side of the definition. This replaces the definition in the list of DownValues rather than creating a new entry and also changes the time stamp of the symbol g.

When we now evaluate g[3] numerically, the change in the definition causes the old value to be flushed, even though we are requesting lower precision.
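The sequence might be reconstructed as follows (the function g is hypothetical):

```mathematica
SetAttributes[g, NumericFunction];
g[x_Real] := x + 1;
N[g[3], 20]          (* sets a value in the cache *)
N[g[3], 40]          (* higher precision: the cached value is replaced *)
g[x_Real] := x + 2;  (* same pattern: the DownValue is replaced in place
                        and the time stamp of g changes *)
N[g[3], 10]          (* the stale cached value is flushed and recomputed *)
```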


Summary

Mathematica contains considerable enhancements to its basic arithmetic. Many types of arithmetic are supported. A new syntax for numbers provides more efficient and flexible input. The function N now uses an adaptive-precision algorithm that attempts to return a result to the requested number of digits. Attributes can be used to restrict the application of N to specific parts of expressions. Since there might be no way to distinguish two forms of an expression numerically, the system parameter $MaxExtraPrecision has been introduced to bound the amount of extra precision used internally. The predicate NumericQ detects numeric quantities and this information can be propagated through expressions using the NumericFunction attribute. Numerical heuristics are used to determine relations involving numeric quantities.

For recent references on the nuances of machine-precision arithmetic and the accuracy and stability of numerical algorithms, the interested reader is encouraged to consult [Acton 1996; Goldberg 1991; Higham 1996; Koren 1993] and the references therein.


Acknowledgments

Many of the features and much of the functionality discussed here began with the pioneering work of Jerry B. Keiper. I deeply regret that he is not still with us to see many of his ideas come to fruition.

I would like to thank Michael Trott for several insightful examples and Professor Benno Fuchssteiner for suggesting the inclusion of an example of the error propagation mechanism used by Mathematica's numerical functions.


References

Acton, F.S. 1996. Real Computing Made Real: Preventing Errors in Scientific and Engineering Calculations. Princeton University Press.

Goldberg, D. 1991. What every computer scientist should know about floating point arithmetic. ACM Computing Surveys 23(1): 5--48.

Hansen, E. 1992. Global Optimization Using Interval Analysis. Marcel Dekker, New York.

Higham, N.J. 1996. Accuracy and Stability of Numerical Algorithms. SIAM, Philadelphia.

Jacobson, D. 1992. Floating point in Mathematica. The Mathematica Journal 2(3): 42--46.

Keiper, J.B. 1994. New numerical features of Mathematica. Wolfram Research Technical Report.

Keiper, J.B. 1995. Interval arithmetic in Mathematica. The Mathematica Journal 5(2): 66--71. Reprinted from the proceedings of the International Conference on Numerical Analysis with Automatic Result Verification, Interval Computations 3, 1993.

Koren, I. 1993. Computer Arithmetic Algorithms. Prentice Hall. Englewood Cliffs, New Jersey.

Skeel, R.D., and J.B. Keiper. 1993. Elementary Numerical Computing with Mathematica. McGraw Hill, New York.

Sofroniou, M. 1996. Automatic precision control. The Mathematica Journal 6(1): 29--30.
