

3.9.10 Advanced Topic: Functions with Sensitive Dependence on Their Input

Functions that are specified by simple algebraic formulas tend to be such that when their input is changed only slightly, their output also changes only slightly. But functions that are instead based on executing procedures quite often show almost arbitrarily sensitive dependence on their input. Typically the reason this happens is that the procedure "excavates" progressively less and less significant digits in the input.

This shows successive steps in a simple iterative procedure with input 0.1111.

In[1]:= NestList[FractionalPart[2 #]&, 0.1111, 10]

Out[1]=

Here is the result with input 0.1112. Progressive divergence from the result with input 0.1111 is seen.

In[2]:= NestList[FractionalPart[2 #]&, 0.1112, 10]

Out[2]=

The action of FractionalPart[2 x] is particularly simple in terms of the binary digits of the number x: it just drops the first one, and shifts the remaining ones to the left. After several steps, this means that the results one gets are inevitably sensitive to digits that are far to the right, and have an extremely small effect on the original value of x.

This shows the shifting process achieved by FractionalPart[2 x] in the first 8 binary digits of x.

In[3]:= RealDigits[Take[%, 5], 2, 8, -1]

Out[3]=
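As a supplementary check (a minimal sketch, not part of the original session; the starting value 0.58203125 is chosen here only because its binary digits are easy to read off), one can compare the digits of a single number before and after one application of FractionalPart[2 x].

RealDigits[0.58203125, 2, 8, -1]                      (* binary digits .10010101 *)
RealDigits[FractionalPart[2 * 0.58203125], 2, 8, -1]  (* leading digit dropped, rest shifted left *)

The second list of digits is the first one with its leading 1 dropped and the remaining digits shifted one place to the left.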

If you give input only to a particular precision, you are effectively specifying only a certain number of digits. And once all these digits have been "excavated" you can no longer get accurate results, since to do so would require knowing more digits of your original input. So long as you use arbitrary-precision numbers, Mathematica automatically keeps track of this kind of degradation in precision.

Successive steps yield numbers of progressively lower precision, and eventually no precision at all is left.

In[4]:= NestList[FractionalPart[40 #]&, N[1/9, 20], 20]

Out[4]=

This asks for the precision of each number. Mathematica generates a warning for numbers where no precision is left.

In[5]:= Map[Precision, %]

Out[5]=
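The rate of this degradation is easy to estimate (a supplementary calculation, not part of the original session): each application of FractionalPart[40 x] multiplies by 40, and so discards roughly Log[10, 40] decimal digits of precision, which means that 20 digits of input should last for only about a dozen steps.

N[20 / Log[10, 40]]   (* roughly 12.5 steps before 20 digits are used up *)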

This shows that the exact result is a periodic sequence.

In[6]:= NestList[FractionalPart[40 #]&, 1/9, 10]

Out[6]=

It is important to realize that if you use approximate numbers of any kind, then in an example like the one above you will always eventually run out of precision. But so long as you use arbitrary-precision numbers, Mathematica will explicitly show you any decrease in precision that is occurring. However, if you use machine-precision numbers, then Mathematica will not keep track of precision, and you cannot tell when your results become meaningless.

If you use machine-precision numbers, Mathematica will no longer keep track of any degradation in precision.

In[7]:= NestList[FractionalPart[40 #]&, N[1/9], 20]

Out[7]=
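One way to confirm this (a supplementary sketch, not part of the original session) is to ask for the precision of the machine-precision results: every entry reports the same fixed machine precision, even though the later values are in fact meaningless.

Map[Precision, NestList[FractionalPart[40 #]&, N[1/9], 20]]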

By iterating the operation FractionalPart[2 x] you extract successive binary digits in whatever number you start with. And if these digits are apparently random—as in a number such as π—then the results will be correspondingly random. But if the digits have a simple pattern—as in any rational number—then the results you get will be correspondingly simple.
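As a quick illustration (a minimal sketch, not part of the original session), iterating FractionalPart[2 x] on the rational number 1/3, whose binary digits repeat with period two, gives a sequence that simply alternates between 1/3 and 2/3.

NestList[FractionalPart[2 #]&, 1/3, 6]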

By iterating an operation such as FractionalPart[3/2 x], however, it turns out to be possible to get seemingly random sequences even from very simple input. This is an example of a very general phenomenon first identified by me in the mid-1980s, which has nothing directly to do with sensitive dependence on input.

This generates a seemingly random sequence, even starting from simple input.

In[8]:= NestList[FractionalPart[3/2 #]&, 1, 15]

Out[8]=

After the values have been computed, one can safely find numerical approximations to them.

In[9]:= N[%]

Out[9]=

Here are the last 5 results after 1000 iterations, computed using exact numbers.

In[10]:= Take[N[NestList[FractionalPart[3/2 #]&, 1, 1000]], -5]

Out[10]=

Using machine-precision numbers gives completely incorrect results.

In[11]:= Take[NestList[FractionalPart[3/2 #]&, 1., 1000], -5]

Out[11]=
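The reason machine precision fails here can be seen from the exact values (a supplementary sketch, not part of the original session): the denominator of the exact result after n steps is 2^n, so reproducing the result of 1000 steps requires keeping track of about 1000 binary digits, far more than the 50 or so binary digits carried by a machine-precision number.

Denominator[Nest[FractionalPart[3/2 #]&, 1, 50]] == 2^50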

Many kinds of iterative procedures yield functions that depend sensitively on their input. Such functions also arise when one looks at solutions to differential equations. In effect, varying the independent parameter in the differential equation is a continuous analog of going from one step to the next in an iterative procedure.

This finds a solution to the Duffing equation with initial condition x'[0] == 1.

In[12]:= NDSolve[{x''[t] + 0.15 x'[t] - x[t] + x[t]^3 == 0.3 Cos[t],
           x[0] == -1, x'[0] == 1}, x, {t, 0, 50}]

Out[12]=

Here is a plot of the solution.

In[13]:= Plot[Evaluate[x[t] /. %], {t, 0, 50}]

Out[13]=

Here is the same equation, but with initial condition x'[0] == 1.001.

In[14]:= NDSolve[{x''[t] + 0.15 x'[t] - x[t] + x[t]^3 == 0.3 Cos[t],
           x[0] == -1, x'[0] == 1.001}, x, {t, 0, 50}]

Out[14]=

The solution progressively diverges from the one shown above.

In[15]:= Plot[Evaluate[x[t] /. %], {t, 0, 50}]

Out[15]=
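To see the divergence directly (a supplementary sketch, not part of the original session; sol1 and sol2 are names introduced here), one can keep both solutions and plot their difference, which grows from essentially zero to a size comparable to the solutions themselves.

sol1 = NDSolve[{x''[t] + 0.15 x'[t] - x[t] + x[t]^3 == 0.3 Cos[t],
          x[0] == -1, x'[0] == 1}, x, {t, 0, 50}];
sol2 = NDSolve[{x''[t] + 0.15 x'[t] - x[t] + x[t]^3 == 0.3 Cos[t],
          x[0] == -1, x'[0] == 1.001}, x, {t, 0, 50}];
Plot[Evaluate[First[x[t] /. sol1] - First[x[t] /. sol2]], {t, 0, 50}]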
