
NumericalMath`Approximations`

A degree (m, k) rational function is the ratio of a degree m polynomial to a degree k polynomial; for example, (1 + 2x)/(1 - x + 3x^2) is a degree (1, 2) rational function. Because rational functions only use the elementary arithmetic operations, they are very easy to evaluate numerically. The polynomial in the denominator allows you to approximate functions that have rational singularities. For these reasons a rational function is frequently useful in numerical work to approximate a given function.
There are various methods to perform this approximation. The methods differ in how they interpret the notion of the "goodness" of the approximation. Each method is useful for certain classes of problems. You can use this package to compute general rational interpolations and minimax approximations. The package Calculus`Pade` contains functions that perform Padé approximations and economized rational approximations.
There is a related class of approximation questions that involves the interpolation or fitting of a set of data points by an approximating function. In this type of situation you should use the built-in functions Fit, InterpolatingPolynomial, and Interpolation. For more information, see the section covering numerical operations on data in The Mathematica Book.
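  • For instance, data given as {x, y} pairs can be fitted by a linear combination of basis functions with the built-in Fit function. This brief example is illustrative; the data is made up:

    data = Table[{x, Exp[x]}, {x, 0., 2., 0.25}];
    Fit[data, {1, x, x^2}, x]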


RationalInterpolation[f, {x, m, k}, {x1, x2, ... , xm+k+1}]
  give the degree (m, k) rational interpolant to f in the variable x, using the interpolation points x1, ... , xm+k+1
RationalInterpolation[f, {x, m, k}, {x, x0, x1}]
  give the degree (m, k) rational interpolant to f, with the m + k + 1 interpolation points chosen automatically from the interval x0 to x1

Rational interpolations.

One way to approximate a given function by a rational function is to choose a set of values for the independent variable and then construct a rational function that agrees with the given function at this set of values. This is what is done by RationalInterpolation.
There are two ways of using RationalInterpolation. If you just specify a range in the independent variable, the set of values is chosen automatically in a way that ensures a reasonable approximation for the degree of approximation you have chosen. You can also give an explicit list of the values to be used. Note that in this case, if you ask for a degree (m, k) approximation, you must specify a list of m + k + 1 values for the independent variable.

  • This loads the package.
  • In[1]:= <<NumericalMath`Approximations`

  • This gives a rational interpolation of degree (2, 4) to Exp[x] at 7 equally spaced points between 0 and 2.
  • In[2]:= ri1 = RationalInterpolation[
    Exp[x], {x, 2, 4},
    {0, 1/3, 2/3, 1, 4/3, 5/3, 2}]

    Out[2]=

  • This plots the difference between the function and its approximation. Note that the error tends to get larger near the endpoints.
  • In[3]:= Plot[ri1 - Exp[x], {x, 0, 2}]


  • Here Mathematica automatically chooses the interpolation points.
  • In[4]:= ri2 =
    RationalInterpolation[Exp[x],
    {x, 2, 4}, {x, 0, 2}]

    Out[4]=

  • The interpolation points are somewhat more bunched at the ends of the interval. This usually results in a smaller maximum error.
  • In[5]:= Plot[ri2 - Exp[x],
    {x, 0, 2}]



    Bias  (default 0)   bias the automatic choice of interpolation points to the left or the right

    Options for rational approximations.

    When you specify a range of values for RationalInterpolation, the interpolation points are chosen automatically. The option Bias allows you to bias the interpolation points to the right or to the left. Values for Bias must be numbers between -1 and 1. A positive value causes the points to be biased to the right. The default is Bias -> 0, which causes the points to be symmetrically distributed.

  • When you bias the distribution of the points to the right, you get smaller errors there and larger errors to the left.
  • In[6]:= ri3 = RationalInterpolation[
    Exp[x], {x, 2, 4}, {x, 0, 2},
    Bias -> .25]

    Out[6]=

  • This shows the influence of the bias.
  • In[7]:= Plot[ri3 - Exp[x],
    {x, 0, 2}]


    When you use RationalInterpolation, you get a rational function that agrees with the given function at a set of points. This guarantees that the rational function is, in one sense, close to the given function. A stronger requirement for a good rational approximation is that the rational function be close to the given function over an entire interval. This type of rational approximation is produced by MiniMaxApproximation, so named because it minimizes the maximum value of the relative error between the approximation and the given function. That is, the minimax approximation to a function f is the rational function r of the given degree that minimizes the maximum value of the quantity |1 - r(x)/f(x)| over the interval under consideration. Note that the term minimax approximation is also sometimes used for the rational function that minimizes the absolute error rather than the relative error used here.


    MiniMaxApproximation[f, {x, {x0, x1}, m, k}]
      find the degree (m, k) rational function in x that minimizes the maximum relative error in approximating f on the interval x0 to x1
    MiniMaxApproximation[f, approx, {x, {x0, x1}, m, k}]
      continue the iteration, starting from approx, a result previously returned by MiniMaxApproximation

    The minimax approximation.

    MiniMaxApproximation works using an iterative scheme. The first step is to construct a rational approximation using RationalInterpolation. This first approximation is then used to generate a better approximation using a scheme based on Remes's algorithm. Generating the new approximation consists of adjusting the choice of the interpolation points in a way that ensures that the relative error will diminish.
    MiniMaxApproximation returns a list with two parts: a list of the points at which the maximum error occurs and a list consisting of the rational approximation and the value of the maximum error. This extra information is provided not so much for the user's information as to provide the capability of restarting the procedure without having to start back at the beginning. This is useful because the algorithm is iterative, and if convergence does not occur before MaxIterations is reached, the incomplete answer is returned along with a warning.

  • This gives a list containing the points where the maximum error occurs and the desired approximation, along with the value of the maximum error.
  • In[8]:= mmlist = MiniMaxApproximation[Exp[x],
    {x, {0, 2}, 2, 4}]

    Out[8]=

  • This extracts the rational approximation.
  • In[9]:= mmfunc = mmlist[[2, 1]]

    Out[9]=

  • Here is a plot of the relative error in the approximation over the interval (0, 2). Reducing the error at any one of the extrema will force the error to increase at one of the others.
  • In[10]:= Plot[1 - mmfunc/Exp[x], {x, 0, 2}]


    Because MiniMaxApproximation tries to minimize the maximum of the relative error, it is not possible to find a minimax approximation to a function that has a zero in the interval in question. The rational approximation would have to be exactly zero at the zero of the function, or the relative error would be infinite. It is still possible to deal with such functions, but the zero must be divided out of the function and then multiplied back into the rational function.

  • At x = Pi/2 the relative error is infinite.
  • In[11]:= MiniMaxApproximation[
    Cos[x], {x, {1, 2}, 2, 4}]

    MiniMaxApproximation::van: Failed to locate the extrema in 20 iterations. The function Cos[x] may be vanishing on the interval {1, 2} or the WorkingPrecision may be insufficient to get convergence.

    Out[11]=

  • Dividing by (x-Pi/2) "cancels" the zero and there is now no problem computing the approximation.
  • In[12]:= MiniMaxApproximation[
    Cos[x]/(x-Pi/2),{x,{1,2},2,4}][[2,1]]

    Out[12]=

  • Multiplying the approximation by (x-Pi/2) then gives the minimax approximation to Cos[x].
  • In[13]:= mmacos = % N[x - Pi/2]

    Out[13]=

  • This plots the relative error.
  • In[14]:= Plot[1 - mmacos/Cos[x],
    {x, 1, 2}]


    There are several ways in which MiniMaxApproximation can fail. In these cases, you will often get a message indicating what probably went wrong and what you can do to avoid the problem. For example, if you ask for a minimax approximation of degree (m, k), MiniMaxApproximation looks for a rational function such that the relative error oscillates in sign and achieves its extreme magnitude m + k + 2 times. Sometimes the extreme values occur more times than this. It may be possible to design a more robust algorithm to deal with this problem, but in practice it is usually simpler to ask for a minimax approximation of a different degree.

  • When you try to compute this approximation you get a warning. Notice that there is not a single error, but a list of errors corresponding to the abscissas in the first part.
  • In[15]:= MiniMaxApproximation[
    Cos[x]/(x - Pi/2), {x, {1, 2}, 2, 2}]

    MiniMaxApproximation::extalt: The extrema of the error do not alternate in sign. It may be that MiniMaxApproximation has lost track of the extrema by going too fast. If so try increasing the values in the option Brake. It may be that the WorkingPrecision is insufficient. Otherwise there is an extra extreme value of the error, and MiniMaxApproximation cannot deal with this problem.

    Out[15]=

  • This extracts the approximation and cancels the factor.
    In[16]:= badmma = %[[2,1]] N[x - Pi/2]

    Out[16]=

  • Notice that the relative error has seven local extrema rather than the expected six.
  • In[17]:= Plot[1 - badmma/Cos[x], {x, 1, 2}]


  • After changing the degree of the requested approximation, you no longer have the problem.
  • In[18]:= MiniMaxApproximation[
    Cos[x]/(x - Pi/2),
    {x, {1, 2}, 2, 3}][[2, 1]]

    Out[18]=

    Another thing that can happen is that the initial rational interpolation can have a zero in the denominator somewhere in the interval. In such cases it is usually easiest to ask for a minimax approximation of different degree. Occasionally, however, this approach does not solve the problem.
    A trick that will sometimes work is to start with an approximation that is valid over a shorter interval. Because the minimax approximation will usually change continuously as you lengthen the interval, you can use the approximation for the shorter interval as a starting point for an approximation on a slightly longer interval. By slowly stretching the interval, you may be able to eventually get a minimax approximation that is valid over the interval you desire.
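  • Here is a minimal sketch of this stretching idea. The function Exp[x], the degrees, and the step size are illustrative assumptions, not part of the package documentation:

    (* start with an approximation valid on a short interval *)
    approx = MiniMaxApproximation[Exp[x], {x, {0, 1/2}, 2, 4}];
    (* stretch the interval in steps, restarting from the previous result *)
    Do[approx = MiniMaxApproximation[Exp[x], approx, {x, {0, xmax}, 2, 4}],
      {xmax, 1, 2, 1/2}]
    approx[[2, 1]]   (* the approximation valid on {0, 2} *)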
    MiniMaxApproximation has several options that give the user control over how it works and two options that help in diagnosing failures.


    Bias  (default 0)   bias the choice of the initial interpolation points
    Brake   apply braking to the changes between successive iterations
    Derivatives   function giving the values of the function being approximated and of its first two derivatives
    MaxIterations   maximum number of iterations after the braking stops
    PlotFlag  (default False)   whether to plot the relative error at each iteration
    PrintFlag  (default False)   whether to print diagnostic information as the iteration proceeds

    Options for minimax approximations.

    MiniMaxApproximation works by first finding a rational interpolation to the given function and then perturbing the coefficients until the error is equi-oscillatory. The option Bias is used to control the initial rational approximation in exactly the same way it is used in RationalInterpolation.
    As MiniMaxApproximation proceeds, it alternately locates the extrema of the relative error and then perturbs the coefficients in an effort to equalize the magnitudes of the extrema. If the extrema move only a small amount from one iteration to the next, their previous positions can be used as starting values in locating their new positions. If they move too much from one iteration to the next, MiniMaxApproximation gets lost. The way MiniMaxApproximation knows it is lost is if the extrema do not alternate in sign, two extrema are the same, or their abscissas are not in sorted order.
    The way to prevent MiniMaxApproximation from getting lost is to set the option Brake. Brake acts as a braking mechanism on the changes from one iteration to the next, but its influence eventually dies off to the point where there is no braking effect. When the algorithm has almost converged, there is no need to provide braking, because the changes are very small.
    A value for Brake must be a list of two positive integers. The first integer specifies how many iterations are to be affected by the braking, and the second integer specifies how much braking force is to be applied to the first iteration. Brake is much more important for minimax approximations of high degree, because in this case, the abscissas of the extrema are very close together.
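  • For example, you could apply braking to the first ten iterations with an initial braking force of 10; the particular values and degrees here are illustrative assumptions:

    MiniMaxApproximation[Exp[x], {x, {0, 2}, 4, 4}, Brake -> {10, 10}]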
    To perform its iterative scheme, MiniMaxApproximation must know the first two derivatives of the function being approximated. If Mathematica cannot compute the derivatives analytically, you must specify them explicitly. A related situation arises when the derivatives can be found, but computing them separately involves a lot of redundant work. For example, in finding a minimax approximation to Exp[x], Mathematica evaluates Exp[x] to find the value of the function, evaluates it again to find the value of the first derivative, and again to find the value of the second derivative. A much simpler way is for you to specify a function that returns the list of these three values for each value of x. This is the purpose of the option Derivatives.
    There are two things to be aware of when you use this option. First, the function should not be allowed to evaluate until its argument is actually a number, or the whole purpose of using the option will be defeated. Second, the function must be applicable to lists in the same way that a list of functions is applicable to lists.

  • This prevents derivs from evaluating unless x is a number.
  • In[19]:= derivs[x_?NumberQ] :=
    Block[{exp = Exp[x]}, {exp, exp, exp}]

  • This makes derivs behave correctly for arguments that are lists of numbers.
  • In[20]:= derivs[x_List] := Transpose[Map[derivs, x]]

  • Here is what happens if the derivatives are evaluated separately.
  • In[21]:= {Exp[x], Exp[x], Exp[x]} /. x-> {1,2,3}

    Out[21]=

  • We get the same behavior with our function.
  • In[22]:= derivs[x] /. x-> {1,2,3}

    Out[22]=

  • MiniMaxApproximation works with our definition for the derivatives.
  • In[23]:= MiniMaxApproximation[
    Exp[x], {x, {-1, 1}, 2, 2},
    Derivatives -> derivs[x]]

    Out[23]=

    To prevent infinite iteration, MiniMaxApproximation has the option MaxIterations. If convergence does not occur before the number of iterations after the braking stops reaches MaxIterations, a warning is returned along with the current approximation. If the problem is simply slow convergence, you can restart the iteration from the current approximation by inserting the approximation that was returned as the second argument to MiniMaxApproximation. You may find it useful to begin the new iteration with different options.

  • To get an example of a poor approximation, you can choose a small value of MaxIterations, a large bias, and no braking.
  • In[24]:= approx =
    MiniMaxApproximation[
    Exp[x], {x, {-1, 1}, 2, 2},
    Brake -> {0,0},
    MaxIterations -> 2, Bias -> .4]

    MiniMaxApproximation::conv: Warning: convergence was not complete.

    Out[24]=

  • The result of the previous approximation attempt is used as the starting point of the new iteration by inserting it as a second argument.
  • In[25]:= MiniMaxApproximation[
    Exp[x], approx,
    {x, {-1, 1}, 2, 2},
    Brake -> {0,0}]

    Out[25]=

    PrintFlag and PlotFlag are options to MiniMaxApproximation that are useful for diagnosing the reason for failure. Setting either of these options will have no effect on the calculations. If PlotFlag is set to True, a plot of the relative error in each iterated rational approximation will be generated. If these plots change dramatically from one iteration to the next, you should probably increase the braking. If PrintFlag is set to True, two things will happen. First, as the extrema are being located for the first time, lists of the changes in the approximations to the abscissas will be printed out. The numbers in these lists should rapidly decrease once they get reasonably small. After the extrema are located for the first time, lists of ordered pairs are printed consisting of abscissas of the extrema of the relative error and the value of the relative error at each of those abscissas.
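  • For example, to watch the iteration as it proceeds you could evaluate the following; this call is illustrative, not from the original text:

    MiniMaxApproximation[Exp[x], {x, {0, 2}, 2, 4},
      PlotFlag -> True, PrintFlag -> True]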


    GeneralRationalInterpolation[{fx, fy}, {t, m, k}, x, {t1, t2, ... , tm+k+1}]
      give the degree (m, k) rational interpolant in the variable x to the function whose graph is traced by {fx, fy} as the parameter t varies, using the parameter values t1, ... , tm+k+1
    GeneralRationalInterpolation[{fx, fy}, {t, m, k}, x, {t, t0, t1}]
      give the interpolant with the parameter values chosen automatically from the interval t0 to t1

    General rational interpolations.

    There are certain approximation problems in which you will want to use GeneralRationalInterpolation instead of RationalInterpolation. For example, you should use GeneralRationalInterpolation if you want to find a rational interpolating function to the inverse of a function that can only be evaluated by using FindRoot. In such a case, RationalInterpolation would be very slow.
    GeneralRationalInterpolation lets you do more general approximation problems by allowing the function that is to be approximated to be given parametrically. For example, the graph of the function Sqrt[1 - x^2] is just the upper half of the unit circle. This can also be described parametrically as {Cos[t], Sin[t]}, where 0 <= t <= Pi. Thus you can compute an approximation to Sqrt[1 - x^2] by specifying the function as {Cos[t], Sin[t]}. In the general case, when you specify the function in GeneralRationalInterpolation as {fx, fy}, the expressions fx and fy are functions of the parameter t, and the function that is interpolated is the one whose graph is traced by {fx, fy} as t ranges over the specified values.
    Note that you must always specify a symbol for the independent variable; using the parametric variable as the independent variable would be incorrect. GeneralRationalInterpolation takes the same options as RationalInterpolation.

  • This gives an approximation to the function whose graph is the upper half circle.
  • In[26]:= gri1 =
    GeneralRationalInterpolation[
    {Cos[t], Sin[t]}, {t, 2, 4}, x,
    Table[i Pi/6, {i,0,6}]]

    Out[26]=

  • The error is quite large near the endpoints.
  • In[27]:= Plot[gri1 - Sqrt[1 - x^2],
    {x, -1, 1}, PlotRange -> All]


  • You don't need to specify the interpolation points explicitly.
  • In[28]:= gri2 =
    GeneralRationalInterpolation[
    {Cos[t], Sin[t]},{t, 2, 4}, x,
    {t, 0, Pi}]

    Out[28]=

  • As is the case with RationalInterpolation, the error is often smaller if the interpolation points are not given explicitly. The points chosen automatically tend to be better distributed.
  • In[29]:= Plot[gri2 - Sqrt[1 - x^2], {x, -1, 1}]



    GeneralMiniMaxApproximation[{fx, fy}, {t, {t0, t1}, m, k}, x]
      find the degree (m, k) rational function of x that minimizes the maximum relative error in approximating the function whose graph is traced by {fx, fy} for t between t0 and t1
    GeneralMiniMaxApproximation[{fx, fy, g}, {t, {t0, t1}, m, k}, x]
      find the approximation with the error measured relative to g

    General minimax approximations.

    The function that is to be approximated is specified in GeneralMiniMaxApproximation in the same way as it is in GeneralRationalInterpolation. The options for GeneralMiniMaxApproximation are the same as for MiniMaxApproximation.

  • This gives a degree (2, 4) minimax approximation to the inverse of Exp, that is, to Log.
  • In[30]:= gmma1 =
    GeneralMiniMaxApproximation[
    {Exp[t], t},
    {t, {1, 2}, 2, 4}, x]

    Out[30]=

  • Since there is an easy way to evaluate the inverse of Exp, it is also possible to use MiniMaxApproximation for this problem. The only difference between the solutions is in the abscissas of the extrema.
  • In[31]:= MiniMaxApproximation[
    Log[x],
    {x, {E, E^2}, 2, 4}]

    Out[31]=

  • This extracts the minimax approximation.
  • In[32]:= log = gmma1[[2,1]];

  • Here is a plot of the relative error in the approximation over the interval (E, E^2).
  • In[33]:= Plot[1 - log/Log[x], {x, E, E^2}]


    If you use the option Derivatives with GeneralMiniMaxApproximation, you must specify a list of derivatives for both parts of the parametrically defined function.

  • This prevents gderivs from acting on symbols.
  • In[34]:= gderivs[t_?NumberQ] :=
    Block[{exp = Exp[t]},
    {{exp, exp, exp}, {t,1,0}}
    ]

  • This makes gderivs have the correct behavior when a list is given as an argument.
  • In[35]:= gderivs[t_List] := Transpose[
    Map[gderivs, t],{3,1,2}]

  • We get the same result again.
  • In[36]:= GeneralMiniMaxApproximation[
    {Exp[t], t},
    { t, {1,2}, 2, 4}, x,
    Derivatives -> gderivs[t]]

    Out[36]=

    Another situation in which GeneralMiniMaxApproximation is useful is when you want to do an approximation and measure its goodness of fit using the absolute error instead of the relative error that is used by default. If you want to use a different measure of the error, you can specify it as the third part of the parametrically defined function. If the function is given as {fx, fy, g} and z is the rational minimax approximation to the function that relates fy to fx, then the error that is minimized is (fy - z(fx))/g. If g is not specified, it is taken to be the same as fy, and it is the maximum relative error that is minimized. If you want to minimize the absolute error instead, simply use the constant 1 for g.

  • This gives a rational minimax approximation to the inverse of Exp with the maximum absolute error minimized.
  • In[37]:= gmma2 =
    GeneralMiniMaxApproximation[
    {Exp[t], t, 1},
    {t, {1, 2}, 2, 4}, x]

    Out[37]=

  • This extracts the approximation.
  • In[38]:= log = gmma2[[2,1]];

  • Now the absolute error is equi-oscillatory.
  • In[39]:= Plot[log - Log[x], {x, E, E^2}]


    If you get an approximation for which the error is unacceptably large, there are several things you can do. If you increase the degree of the numerator and/or the denominator, the minimax error will usually decrease; it cannot increase. Even shifting a degree from the numerator to the denominator or vice versa can affect the size of the error. If the interval for which the approximation is to be valid is very long, it is probably a good idea to look at the asymptotic behavior of the function and choose degrees for the numerator and denominator that give the correct asymptotic behavior. For example, to get an approximation to ArcTan[x] for large x, the degrees of the numerator and denominator should be the same, since for large x the function has a nearly constant value of Pi/2.
    Another way to decrease the error is to shorten the interval for which the approximation is to be valid. Often it is better to have several low-degree approximations to cover a long interval than a single high-degree approximation. The extra storage required is not that much, and each of the simpler approximations will evaluate faster.
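  • Here is a sketch of covering a long interval with several low-degree pieces. The function, the subintervals, and the degrees are illustrative assumptions:

    (* three degree-(2, 2) pieces covering the interval {0, 3} *)
    pieces = Table[
      MiniMaxApproximation[Exp[x], {x, {a, a + 1}, 2, 2}][[2, 1]],
      {a, 0, 2}];
    (* evaluate the piece whose subinterval contains x0 *)
    approxExp[x0_?NumberQ] := pieces[[Min[Floor[x0], 2] + 1]] /. x -> x0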
    To get an optimal minimax approximation you also need to take into account the numerical behavior of the final approximation. It is usually a good idea to define the function so that the variable appearing in the approximation takes values near the origin. Thus, instead of finding a minimax approximation to a function on an interval centered away from the origin, it is usually better to shift the variable and find a minimax approximation to the shifted function on the corresponding interval centered at the origin.
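  • As a minimal illustration of such shifting (the function Log and the intervals here are assumptions chosen for the example), instead of approximating Log[x] on the interval (3/2, 5/2), approximate Log[x + 2] on (-1/2, 1/2) and then shift back:

    MiniMaxApproximation[Log[x + 2], {x, {-1/2, 1/2}, 2, 4}][[2, 1]] /. x -> x - 2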
    In cases where you want to avoid all potential problems associated with loss of digits due to subtractive cancellation, you may even want to do some shifting of the variable after the approximation is found. For example, a rational minimax approximation to Exp[x] on an interval around the origin typically has positive coefficients in the numerator and coefficients with alternating signs in the denominator. The relative error in the approximation itself is tiny, but using the approximation in the given form could result in a larger relative error, since a few bits can be lost to cancellation near the endpoints: cancellation occurs in the numerator near the left endpoint, where x is negative, and in the denominator near the right endpoint. By re-expanding the numerator as a polynomial in the variable shifted to the left endpoint and the denominator as a polynomial in the variable shifted to the right endpoint, each sum can be arranged to add terms of one sign only, so that no cancellation can occur. Of course, all of this must be done to much higher precision to ensure that the coefficients themselves are correct to the required precision.
    It is also very important to consider the zeros of the function and its asymptotic behavior. The simplicity of the resulting minimax approximation is greatly affected by the extent to which you can trivially make the function look like a constant function. As an example, to find a minimax approximation to Gamma[1/2, x, Infinity] for moderate to large x, you can instead consider the function Gamma[1/2, x, Infinity] Exp[x] Sqrt[x + 4/7] (cf. Abramowitz and Stegun, Handbook of Mathematical Functions, 6.5.31; the 4/7 was chosen empirically). This function varies by only a few percent over the interval in question; it is much easier to find a minimax approximation to it than to the original function.
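  • A sketch of how this preconditioning might be used; the interval and degrees are illustrative assumptions, and such a computation may need options like Brake or WorkingPrecision adjusted:

    (* the preconditioned function is nearly constant on the interval *)
    flat = Gamma[1/2, x, Infinity] Exp[x] Sqrt[x + 4/7];
    mm = MiniMaxApproximation[flat, {x, {1, 10}, 2, 2}][[2, 1]];
    (* undo the preconditioning to recover an approximation to the original function *)
    approxGamma = mm Exp[-x]/Sqrt[x + 4/7]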
    If you are attempting to minimize the maximum relative error and there are zeros of the function in the interval in question, you will have to divide out the zeros first. If singularities occur in the interval, they will have to be eliminated as well, either by multiplying by a zero or by subtracting them away.