Functions with Sensitive Dependence on Their Input
Functions that are specified by simple algebraic formulas tend to be such that when their input is changed only slightly, their output also changes only slightly. But functions that are instead based on executing procedures quite often show almost arbitrarily sensitive dependence on their input. Typically the reason this happens is that the procedure "excavates" progressively less and less significant digits in the input.
This shows successive steps in a simple iterative procedure with input 0.1111.
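The procedure in question can be taken to be repeated application of FractionalPart[2x], as discussed below; a minimal sketch using NestList:

```mathematica
(* iterate x -> FractionalPart[2 x], starting from 0.1111; 10 steps is an illustrative choice *)
NestList[FractionalPart[2 #] &, 0.1111, 10]
```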
Here is the result with input 0.1112; it progressively diverges from the result with input 0.1111.
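The same iteration with the slightly perturbed input:

```mathematica
(* same iteration, input changed in the fourth decimal digit *)
NestList[FractionalPart[2 #] &, 0.1112, 10]
```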
The action of FractionalPart[2x] is particularly simple in terms of the binary digits of the number x: it just drops the first one, and shifts the remaining ones to the left. After several steps, this means that the results one gets are inevitably sensitive to digits that are far to the right, and have an extremely small effect on the original value of x.
This shows the shifting process achieved by FractionalPart[2x] on the first 8 binary digits of the input.
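One way to see the shift, using RealDigits to extract binary digits (the input N[Pi/4, 20] here is an illustrative choice, not the one in the original):

```mathematica
(* compare the first 8 binary digits before and after one application of FractionalPart[2 x] *)
x = N[Pi/4, 20];
RealDigits[x, 2, 8]                     (* digits of x in base 2 *)
RealDigits[FractionalPart[2 x], 2, 8]   (* same digits, shifted left by one *)
```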
If you give input only to a particular precision, you are effectively specifying only a certain number of digits. And once all these digits have been "excavated" you can no longer get accurate results, since to do so would require knowing more digits of your original input. So long as you use arbitrary-precision numbers, Mathematica automatically keeps track of this kind of degradation in precision, indicating a number with no remaining significant digits by a form such as 0.×10^1, as discussed in "Arbitrary-Precision Numbers".
Successive steps yield numbers of progressively lower precision, and eventually no precision at all.
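A sketch of this with an arbitrary-precision starting value (the input N[1/9, 20] and the step count are illustrative choices):

```mathematica
(* each step consumes one binary digit of the 20-digit input, so precision steadily falls *)
NestList[FractionalPart[2 #] &, N[1/9, 20], 70]
```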
This asks for the precision of each number. Zero precision indicates that there are no correct significant digits.
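Applying Precision to each result makes the degradation explicit (same assumed input as above):

```mathematica
(* precision drops by about log10(2) ~ 0.3 digits per step, eventually reaching zero *)
steps = NestList[FractionalPart[2 #] &, N[1/9, 20], 70];
Map[Precision, steps]
```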
This shows that the exact result is a periodic sequence.
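With the exact rational input 1/9, no precision is lost and the periodicity is evident:

```mathematica
(* exact arithmetic: the sequence repeats with period 6 *)
NestList[FractionalPart[2 #] &, 1/9, 12]
(* 1/9, 2/9, 4/9, 8/9, 7/9, 5/9, 1/9, 2/9, 4/9, 8/9, 7/9, 5/9, 1/9 *)
```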
It is important to realize that if you use approximate numbers of any kind, then in an example like the one above you will always eventually run out of precision. But so long as you use arbitrary-precision numbers, Mathematica will explicitly show you any decrease in precision that is occurring. However, if you use machine-precision numbers, then Mathematica will not keep track of precision, and you cannot tell when your results become meaningless.
If you use machine-precision numbers, Mathematica will no longer keep track of any degradation in precision.
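A sketch of the machine-precision case (step count chosen so that all ~53 binary digits of the input are consumed):

```mathematica
(* machine arithmetic: full-looking numbers are reported even after every
   digit of the original input has been used up, so later results are meaningless *)
NestList[FractionalPart[2 #] &, 0.1111, 60]
```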
By iterating the operation FractionalPart[2x] you extract successive binary digits in whatever number you start with. And if these digits are apparently random—as in a number like π—then the results will be correspondingly random. But if the digits have a simple pattern—as in any rational number—then the results you get will be correspondingly simple.
By iterating an operation such as FractionalPart[3/2 x] it turns out, however, to be possible to get seemingly random sequences even from very simple input. This is an example of a very general phenomenon first identified by Stephen Wolfram in the mid-1980s, which has nothing directly to do with sensitive dependence on input.
This generates a seemingly random sequence, even starting from simple input.
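A sketch with the exact integer 1 as input:

```mathematica
(* exact arithmetic: each result is an exact rational, yet the sequence looks random *)
NestList[FractionalPart[3/2 #] &, 1, 10]
```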
After the values have been computed, one can safely find numerical approximations to them.
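Wrapping the exact results in N is safe, since the exact computation has already been done:

```mathematica
(* compute exactly first, then convert to numerical approximations *)
N[NestList[FractionalPart[3/2 #] &, 1, 10]]
```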
Here are the last 5 results after 1000 iterations, computed using exact numbers.
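A sketch of the long exact computation:

```mathematica
(* 1000 exact iterations; keep only the last 5 results and approximate them *)
N[Take[NestList[FractionalPart[3/2 #] &, 1, 1000], -5]]
```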
Using machine-precision numbers gives completely incorrect results.
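The same computation started from the machine number 1. instead of the exact integer 1:

```mathematica
(* machine precision: after 1000 iterations these bear no relation to the exact results *)
Take[NestList[FractionalPart[3/2 #] &, 1., 1000], -5]
```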
Many kinds of iterative procedures yield functions that depend sensitively on their input. Such functions also arise when one looks at solutions to differential equations. In effect, varying the independent parameter in the differential equation is a continuous analog of going from one step to the next in an iterative procedure.
This finds a solution to the Duffing equation with initial condition 1.
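A sketch using NDSolve, assuming a standard form of the forced Duffing equation; the damping and forcing coefficients (0.15 and 0.3), the second initial condition, and the time range are illustrative choices:

```mathematica
(* forced Duffing equation with x[0] == 1; coefficients are assumed, not from the original *)
sol = NDSolve[{x''[t] + 0.15 x'[t] - x[t] + x[t]^3 == 0.3 Cos[t],
    x[0] == 1, x'[0] == 0}, x, {t, 0, 50}]
```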
Here is a plot of the solution.
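Plotting the interpolating function returned by NDSolve (same assumed equation as above):

```mathematica
sol = NDSolve[{x''[t] + 0.15 x'[t] - x[t] + x[t]^3 == 0.3 Cos[t],
    x[0] == 1, x'[0] == 0}, x, {t, 0, 50}];
Plot[Evaluate[x[t] /. sol], {t, 0, 50}]
```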
Here is the same equation with initial condition 1.001.
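The same assumed equation, with the initial condition perturbed to 1.001:

```mathematica
sol2 = NDSolve[{x''[t] + 0.15 x'[t] - x[t] + x[t]^3 == 0.3 Cos[t],
    x[0] == 1.001, x'[0] == 0}, x, {t, 0, 50}];
Plot[Evaluate[x[t] /. sol2], {t, 0, 50}]
```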
The solution progressively diverges from the one shown above.