# Arbitrary‐Precision Numbers

When you do calculations with arbitrary‐precision numbers, the Wolfram Language keeps track of precision at all points. In general, the Wolfram Language tries to give you results which have the highest possible precision, given the precision of the input you provided.

The Wolfram Language treats arbitrary‐precision numbers as representing the values of quantities where a certain number of digits are known, and the rest are unknown. In general, an arbitrary‐precision number x is taken to have Precision[x] digits which are known exactly, followed by an infinite number of digits which are completely unknown.
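As a minimal illustration of this model, Precision reports how many digits of a number are taken as known:

```wolfram
x = N[Pi, 30]   (* pi evaluated to 30 digits of precision *)
(* 3.14159265358979323846264338328 *)

Precision[x]
(* 30. *)
```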

When you do a computation, the Wolfram Language keeps track of which digits in your result could be affected by unknown digits in your input. It sets the precision of your result so that no affected digits are ever included. This procedure ensures that all digits returned by the Wolfram Language are correct, whatever the values of the unknown digits may be.

If you want the Wolfram Language to treat a particular number as known *exactly*, then you have to say so explicitly.

In many computations, the precision of the results you get progressively degrades as a result of "roundoff error". A typical case of this occurs if you subtract two numbers that are close together: the leading digits cancel, so the result depends on the low‐order digits in each number, and typically has far fewer digits of precision than either of the original numbers.
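The loss of digits in such a subtraction can be seen directly (a small sketch; the exact precision reported may vary slightly):

```wolfram
a = N[Sqrt[2] + 10^-20, 30];   (* agrees with Sqrt[2] to about 20 digits *)
b = N[Sqrt[2], 30];

a - b             (* the difference is about 1.0*10^-20 *)
Precision[a - b]  (* roughly 10: some 20 of the 30 input digits are lost *)
```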

The precision of the output from a function can depend in a complicated way on the precision of the input. Functions that vary rapidly typically give less precise output, since the variation of the output associated with uncertainties in the input is larger. Functions that are close to constants can actually give output that is more precise than their input.
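For example (a sketch; the reported precisions are approximate):

```wolfram
(* Sin varies on a scale of order 1, while the argument's absolute
   uncertainty is about 10^(10-30), so many digits are lost *)
Precision[Sin[N[10^10, 30]]]     (* roughly 20 *)

(* ArcTan is nearly constant for large arguments, so the output
   can come out more precise than the input *)
Precision[ArcTan[N[10^10, 30]]]  (* roughly 40 *)
```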

It is worth realizing that different ways of doing the same calculation can end up giving you results with very different precisions. Typically, once you lose precision in a calculation, it is essentially impossible to regain it; in losing precision, you are effectively losing information about your result.

The fact that different ways of doing the same calculation can give you different numerical answers means, among other things, that comparisons between approximate real numbers must be treated with care. In testing whether two real numbers are "equal", the Wolfram Language effectively finds their difference, and tests whether the result is "consistent with zero" to the precision given.
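For instance, two approximations of the same quantity at different precisions compare as equal, since their difference is consistent with zero at the precision given:

```wolfram
N[Pi, 20] == N[Pi, 30]
(* True *)
```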

The internal algorithms that the Wolfram Language uses to evaluate mathematical functions are set up to maintain as much precision as possible. In most cases, built‐in Wolfram Language functions will give you results that have as much precision as can be justified on the basis of your input. In some cases, however, it is simply impractical to do this, and the Wolfram Language will give you results that have lower precision. If you give higher‐precision input, the Wolfram Language will use higher precision in its internal calculations, and you will usually be able to get a higher‐precision result.

| | |
|---|---|
| `N[expr]` | evaluate expr numerically to machine precision |
| `N[expr,n]` | evaluate expr numerically, trying to get a result with n digits of precision |

If you start with an expression that contains only integers and other exact numeric quantities, then N[expr,n] will in almost all cases succeed in giving you a result to n digits of precision. You should realize, however, that to do this the Wolfram Language sometimes has to perform internal intermediate calculations to much higher precision.

The global variable $MaxExtraPrecision specifies how many additional digits should be allowed in such intermediate calculations.

| variable | default value | |
|---|---|---|
| `$MaxExtraPrecision` | 50 | maximum additional precision to use |

Controlling precision in intermediate calculations.
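A sketch of how this limit comes into play (the exact messages and thresholds may depend on the version):

```wolfram
(* evaluating Sin[10^50] to 20 digits requires on the order of 50 extra
   digits internally, which can exceed the default limit of 50 *)
N[Sin[10^50], 20]

(* raising the limit locally allows the computation to complete *)
Block[{$MaxExtraPrecision = 200}, N[Sin[10^50], 20]]
```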

Even when you are doing computations that give exact results, the Wolfram Language occasionally uses approximate numbers for some of its internal calculations, so the value of $MaxExtraPrecision can have an effect even then.

In doing calculations that degrade precision, it is possible to end up with numbers that have no significant digits at all. But even in such cases, the Wolfram Language still maintains information on the accuracy of the numbers. Given a number with no significant digits but accuracy a, the Wolfram Language can still tell that the actual value of the number must lie within about 10^-a of zero. The Wolfram Language by default prints such numbers in the form 0.×10^-a.

One subtlety in characterizing numbers by their precision is that any number that is consistent with zero must be treated as having zero precision. The reason for this is that such a number has no digits that can be recognized as significant, since all its known digits are just zero.
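A number consistent with zero arises, for example, when you subtract two approximations of the same quantity (the values reported may vary slightly):

```wolfram
d = N[Pi, 20] - N[Pi, 30];

Precision[d]  (* 0. -- no significant digits *)
Accuracy[d]   (* roughly 20 -- the scale of the remaining uncertainty *)
d             (* printed in a form like 0.*10^-20 *)
```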

If you do computations whose results are likely to be near zero, it can be convenient to specify the accuracy, rather than the precision, that you want to get.

| | |
|---|---|
| `N[expr,p]` | evaluate expr to precision p |
| `N[expr,{p,a}]` | evaluate expr to at most precision p and accuracy a |
| `N[expr,{Infinity,a}]` | evaluate expr to any precision but to accuracy a |

Specifying accuracy as well as precision.
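For instance, asking for accuracy 20 requests about 20 correct digits after the decimal point, whatever the overall magnitude of the result (a sketch):

```wolfram
N[Pi, {Infinity, 20}]   (* pi to about 20 digits past the decimal point *)
Accuracy[%]             (* roughly 20 *)
```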

When the Wolfram Language works out the potential effect of unknown digits in arbitrary‐precision numbers, it assumes by default that these digits are completely independent in different numbers. While this assumption will never yield too high a precision in a result, it may lead to unnecessary loss of precision.

In particular, if two numbers are generated in the same way in a computation, some of their unknown digits may be equal. Then, when these numbers are, for example, subtracted, the unknown digits may cancel. By assuming that the unknown digits are always independent, however, the Wolfram Language will miss such cancellations.

Numerical algorithms sometimes rely on cancellations between unknown digits in different numbers yielding results of higher precision. If you can be sure that certain unknown digits will eventually cancel, then you can explicitly introduce fixed digits in place of the unknown ones. You can carry these fixed digits through your computation, then let them cancel, and get a result of higher precision.

| | |
|---|---|
| `SetPrecision[x,n]` | create a number with n decimal digits of precision, padding with base‐2 zeros if necessary |
| `SetAccuracy[x,n]` | create a number with n decimal digits of accuracy |

Functions for modifying precision and accuracy.
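Basic usage of these functions (a minimal sketch):

```wolfram
x = N[Pi, 20];

Precision[SetPrecision[x, 40]]  (* 40. *)
Accuracy[SetAccuracy[x, 40]]    (* 40. *)
```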

SetPrecision works by adding digits which are zero in base 2. Sometimes, the Wolfram Language stores slightly more digits in an arbitrary‐precision number than it displays, and in such cases, SetPrecision will use these extra digits before introducing zeros.
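For instance, applying SetPrecision to the machine number 0.1 exposes its exact binary value, extended with binary zeros:

```wolfram
SetPrecision[0.1, 30]
(* roughly 0.100000000000000005551115123126, the exact value of the
   binary machine number closest to 1/10 *)
```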

| variable | default value | |
|---|---|---|
| `$MaxPrecision` | Infinity | maximum total precision to be used |
| `$MinPrecision` | 0 | minimum precision to be used |

Global precision control parameters.

By making the global assignment $MinPrecision=n, you can effectively apply SetPrecision[expr,n] at every step in a computation. This means that even when the number of correct digits in an arbitrary‐precision number drops below n, the number will always be padded to have n digits.

If you set $MaxPrecision=n as well as $MinPrecision=n, then you can force all arbitrary‐precision numbers to have a fixed precision of n digits. In effect, this makes the Wolfram Language treat arbitrary‐precision numbers in much the same way as it treats machine numbers, but with more digits of precision.

Fixed‐precision computation can make some calculations more efficient, but without careful analysis you can never be sure how many digits are correct in the results you get.
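A sketch of fixed-precision arithmetic (the variable names are illustrative):

```wolfram
Block[{$MinPrecision = 20, $MaxPrecision = 20},
  a = N[10^10 + Pi, 20];
  b = N[10^10, 20];
  Precision[a - b]]
(* 20. -- the difference is padded back to 20 digits instead of
   dropping to roughly 10 *)
```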