Function Approximations Package

This loads the package.
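Needs["FunctionApproximations`"]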

Economized Rational Approximations Using Padé Approximations

EconomizedRationalApproximation[f,{x,{xmin,xmax},m,k}]
give the economized rational approximation to f that is good in the range xmin to xmax and has degree (m,k)

Economized rational approximations.

A Padé approximation is very accurate near the center of expansion, but the error increases rapidly as you get farther away. If you are willing to sacrifice some of the goodness of fit near the center of expansion, it is possible to obtain a better fit over the entire interval under consideration. This is what the other types of approximations do.

With an economized rational approximation, the idea is to start with a Padé approximation and perturb it with a Chebyshev polynomial in such a way as to reduce the leading coefficient in the error. This perturbation causes the error terms that had vanished to reappear. However, the magnitude of the error does not increase very much near the center of expansion, and this small increase is compensated for by a decrease in the error farther away. With EconomizedRationalApproximation, you specify the interval over which the approximation is to work rather than the center of expansion. As the length of the interval goes to zero, the economized rational approximation approaches the Padé approximation.

Here is the Padé approximation of degree (2,4) to e^x, expanded about the center of the interval of interest.
This plots the difference between the Padé approximation and the true function. Notice that the approximation is very good near the center of expansion, but the error increases rapidly as you move away.
Here is the economized rational approximation of degree (2,4) to e^x over the same interval.
This gives the difference between the true function and the economized rational approximation. In general, you may even get a small nonvanishing constant term.
The error has been spread out compared to the Padé approximation plotted above.
Even though the error at the endpoints is not particularly small, it is considerably smaller than what PadeApproximant gives.
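The steps above might look as follows; this is a sketch in which the expansion point 1, the interval {0, 2}, and the degree (2,4) are illustrative choices rather than the values used in the original examples.

pade = PadeApproximant[Exp[x], {x, 1, {2, 4}}];
Plot[pade - Exp[x], {x, 0, 2}]    (* error grows away from the expansion point *)
era = EconomizedRationalApproximation[Exp[x], {x, {0, 2}, 2, 4}];
Plot[Exp[x] - era, {x, 0, 2}]     (* error spread more evenly over the interval *)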

Rational Interpolation and Minimax Approximations

A degree (m,k) rational function is the ratio of a degree m polynomial to a degree k polynomial. Because rational functions only use the elementary arithmetic operations, they are very easy to evaluate numerically. The polynomial in the denominator allows you to approximate functions that have rational singularities. For these reasons, a rational function is frequently useful in numerical work to approximate a given function.

There are various methods to perform this approximation. The methods differ in how they interpret the notion of the goodness of the approximation, and each is useful for certain classes of problems. You can use this package to compute general rational interpolations and minimax approximations. Padé approximations are performed by the built-in function PadeApproximant.

There is a related class of approximation questions that involves the interpolation or fitting of a set of data points by an approximating function. In this type of situation, you should use the built-in functions Fit, InterpolatingPolynomial, and Interpolation. For more information, see "Numerical Operations on Data".

RationalInterpolation[f,{x,m,k},{x1,x2,…,xm+k+1}]
give a rational interpolation of degree (m,k) to the points (xi,f(xi))
RationalInterpolation[f,{x,m,k},{x,xmin,xmax}]
give a rational interpolation with the points xi chosen automatically

Rational interpolations.

One way to approximate a given function by a rational function is to choose a set of values for the independent variable and then construct a rational function that agrees with the given function at this set of values. This is what is done by RationalInterpolation.

There are two ways of using RationalInterpolation. If you just specify a range in the independent variable, the set of values is chosen automatically in a way that ensures a reasonable approximation for the degree of approximation you have chosen. You can also give an explicit list of the values to be used. Note that in this case, if you ask for a degree (m,k) approximation, you must specify a list of m+k+1 values for the independent variable.

This gives a rational interpolation of degree (2,4) to e^x at seven equally spaced points between 0 and 2.
This plots the difference between the function and its approximation. Note that the error tends to get larger near the endpoints.
Here the Wolfram Language automatically chooses the interpolation points.
The interpolation points are somewhat more bunched at the ends of the interval. This usually results in a smaller maximum error.
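The two calling patterns might be used as follows; this sketch uses the degree and interval from the text, with Range[0, 2, 1/3] supplying the seven equally spaced points.

ri1 = RationalInterpolation[Exp[x], {x, 2, 4}, Range[0, 2, 1/3]];
    (* seven explicit, equally spaced points between 0 and 2 *)
ri2 = RationalInterpolation[Exp[x], {x, 2, 4}, {x, 0, 2}];
    (* interpolation points chosen automatically *)
Plot[Exp[x] - ri1, {x, 0, 2}]    (* error tends to be largest near the endpoints *)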
option name    default value
WorkingPrecision    MachinePrecision    number of digits of precision to use
Bias    0    bias in the automatic choice of interpolation points

Options for rational approximations.

When you specify a range of x values for RationalInterpolation, the interpolation points are chosen automatically. The option Bias allows you to bias the interpolation points to the right or to the left. Values for Bias must be numbers between -1 and 1. A positive value causes the points to be biased to the right. The default is Bias->0, which causes the points to be symmetrically distributed.

When you bias the distribution of the points to the right, you get smaller errors there and larger errors to the left.
This shows the influence of the bias.
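A sketch of the effect; the bias value 0.25 is an arbitrary illustration.

RationalInterpolation[Exp[x], {x, 2, 4}, {x, 0, 2}, Bias -> 0.25]
    (* interpolation points shifted toward the right end of the interval *)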

When you use RationalInterpolation, you get a rational function that agrees with the given function at a set of points. This guarantees that the rational function is, in one sense, close to the given function. A stronger requirement for a good rational approximation is that the rational function be close to the given function over the entire interval. This type of rational approximation is produced by MiniMaxApproximation, so named because it minimizes the maximum value of the relative error between the approximation and the given function. That is, the minimax approximation to a function f is the rational function r of the given degree that minimizes the maximum value of the quantity |(f(x)-r(x))/f(x)| over the interval under consideration. Note that the term minimax approximation is also sometimes used for the rational function that minimizes the absolute error rather than the relative error used here.

MiniMaxApproximation[f,{x,{xmin,xmax},m,k}]
give the minimax approximation to f of degree (m,k) on the interval from xmin to xmax
MiniMaxApproximation[f,approx,{x,{xmin,xmax},m,k}]
give the minimax approximation starting the iterative algorithm with approx

The minimax approximation.

MiniMaxApproximation works using an iterative scheme. The first step is to construct a rational approximation using RationalInterpolation. This first approximation is then used to generate a better approximation using a scheme based on Remes's algorithm. Generating the new approximation consists of adjusting the choice of the interpolation points in a way that ensures that the relative error will diminish.

MiniMaxApproximation returns a list with two parts: a list of the points at which the maximum error occurs and a list consisting of the rational approximation and the value of the maximum error. This extra information is provided not so much for the user's information, but to provide the capability of restarting the procedure without having to start back at the beginning. This is useful because the algorithm is iterative, and if convergence does not occur before MaxIterations is reached, the incomplete answer is returned along with a warning.

This gives a list containing the points where the maximum error occurs and the desired interpolation, along with the value of the error.
This extracts the rational approximation.
Here is a plot of the relative error in the approximation over the interval. Reducing the error at any one of the extrema would force the error to increase at one of the others.
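The steps above might look as follows; the interval {0, 2} and degree (2,4) are illustrative choices.

mma = MiniMaxApproximation[Exp[x], {x, {0, 2}, 2, 4}];
approx = mma[[2, 1]]    (* the rational approximation itself *)
Plot[1 - approx/Exp[x], {x, 0, 2}]    (* equioscillating relative error *)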

Because MiniMaxApproximation tries to minimize the maximum of the relative error, it is not possible to find a minimax approximation to a function that has a zero in the interval in question. The rational approximation would have to be exactly zero at the zero of the function, or the relative error would be infinite. It is still possible to deal with such functions, but the zero must be divided out of the function and then multiplied back into the rational function.

At x = Pi/2 the relative error is infinite, since Cos[x] has a zero there.
Dividing by (x-Pi/2) cancels the zero, and there is now no problem computing the approximation.
Multiplying the approximation by (x-Pi/2) then gives the minimax approximation to Cos[x].
This plots the relative error.
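A sketch of this workaround for Cos[x], whose zero at x = Pi/2 lies inside an interval such as {1, 2}; the interval and degree are illustrative.

mma = MiniMaxApproximation[Cos[x]/(x - Pi/2), {x, {1, 2}, 2, 4}];
approx = (x - Pi/2) mma[[2, 1]]    (* minimax-style approximation to Cos[x] *)
Plot[1 - approx/Cos[x], {x, 1, 2}]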

There are several ways in which MiniMaxApproximation can fail. In these cases, you will usually get a message indicating what probably went wrong and what you can do to avoid the problem. For example, if you ask for a minimax approximation of degree (m,k), MiniMaxApproximation looks for a rational approximation such that the relative error oscillates in sign and the extreme values are achieved m+k+2 times. Sometimes the extreme values occur more than m+k+2 times. It may be possible to design a more robust algorithm to deal with this problem, but in practice it is usually simpler just to ask for a minimax approximation of a different degree.

When you try to compute this approximation you get a warning. Notice that there is not a single error, but a list of errors corresponding to the abscissas in the first part.
This extracts the approximation and cancels the factor.
Notice that the relative error has seven local extrema rather than the expected six.
After changing the degree of the requested approximation, you no longer have the problem.

Another thing that can happen is that the initial rational interpolation can have a zero in the denominator somewhere in the interval. In such cases, it is usually easiest to ask for a minimax approximation of different degree. Occasionally, however, this approach does not solve the problem.

A trick that will sometimes work is to start with an approximation that is valid over a shorter interval. Because the minimax approximation will usually change continuously as you lengthen the interval, you can use the approximation for the shorter interval as a starting point for an approximation on a slightly longer interval. By slowly stretching the interval, you may be able to eventually get a minimax approximation that is valid over the interval you desire.
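A sketch of this interval-stretching trick, using the second-argument restart form; the function, degrees, and interval endpoints are illustrative.

approx = MiniMaxApproximation[Exp[x], {x, {0, 1}, 2, 2}];
Do[
  approx = MiniMaxApproximation[Exp[x], approx, {x, {0, xmax}, 2, 2}],
  {xmax, {2, 3, 4}}]    (* each result seeds the iteration on a longer interval *)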

MiniMaxApproximation has several options that give the user control over how it works and two options that help in diagnosing failures.

option name    default value
WorkingPrecision    MachinePrecision    number of digits of precision to use
Bias    0    bias in the automatic choice of interpolation points
Brake    {5,5}    braking to apply on the iterative algorithm
MaxIterations    20    maximum number of iterations after braking has ended
Derivatives    Automatic    a function to use for the derivatives
PrintFlag    False    whether to print information about the relative error at each step in the iteration
PlotFlag    False    whether to plot the relative error at each step in the iteration

Options for minimax approximations.

MiniMaxApproximation works by first finding a rational interpolation to the given function and then perturbing the coefficients until the error is equioscillatory. The option Bias is used to control the initial rational approximation in exactly the same way it is used in RationalInterpolation.

As MiniMaxApproximation proceeds, it alternately locates the extrema of the relative error and then perturbs the coefficients in an effort to equalize the magnitudes of the extrema. If the extrema move only a small amount from one iteration to the next, their previous positions can be used as starting values in locating their new positions. If they move too much from one iteration to the next, MiniMaxApproximation gets lost. The way MiniMaxApproximation knows it is lost is if the extrema do not alternate in sign, two extrema are the same, or their abscissas are not in sorted order.

The way to prevent MiniMaxApproximation from getting lost is to set the option Brake. Brake acts as a braking mechanism on the changes from one iteration to the next, but its influence eventually dies off to the point where there is no braking effect. When the algorithm has almost converged, there is no need to provide braking, because the changes are very small.

A value for Brake must be a list of two positive integers. The first integer specifies how many iterations are to be affected by the braking, and the second integer specifies how much braking force is to be applied to the first iteration. Brake is much more important for minimax approximations of high degree, because in this case, the abscissas of the extrema are very close together.

To perform its iterative scheme, MiniMaxApproximation must know the first two derivatives of the function being approximated. If the Wolfram Language cannot compute the derivatives analytically, you must specify them explicitly. A related situation is when the derivatives can be found, but calculating them involves a lot of redundant work. For example, in trying to find a minimax approximation to e^x, the Wolfram Language needs to evaluate e^x to find the value of the function, again to find the value of the first derivative, and again to find the value of the second derivative. It is much simpler for the user to specify a function that returns a list of these three values for each value of x. This is the purpose of the option Derivatives.

When you use this option, be aware that the function you supply should not be allowed to evaluate until x is actually a number, or the whole purpose of using the option will be defeated.

This prevents derivs from evaluating unless x is a number.
MiniMaxApproximation works with the definition for derivs.
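The definition described in the text might look like this, a sketch based on the e^x example above; the exact form Derivatives -> derivs[x] is an assumption.

derivs[x_?NumberQ] := {Exp[x], Exp[x], Exp[x]}
    (* evaluates only for numeric x, returning {f, f', f''} *)
MiniMaxApproximation[Exp[x], {x, {0, 2}, 2, 4}, Derivatives -> derivs[x]]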

To prevent infinite iteration, MiniMaxApproximation has the option MaxIterations. After the braking stops, if convergence does not occur before the number of iterations reaches MaxIterations, a warning is returned along with the current approximation. If the problem is simply slow convergence, you can restart the iteration from the current approximation by inserting the approximation that was returned as the second argument to MiniMaxApproximation. You may find it useful to begin the new iteration with different options.

To get an example of a poor approximation, you can choose a small value of MaxIterations, a large bias, and no braking.
The result of the previous approximation attempt is used as the starting point of the new iteration by inserting it as a second argument.
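A sketch of deliberately stalling the iteration and then restarting it; the option values are illustrative.

bad = MiniMaxApproximation[Exp[x], {x, {0, 2}, 2, 4},
        MaxIterations -> 2, Bias -> 0.6, Brake -> {0, 0}]
    (* returns an incomplete answer with a warning *)
MiniMaxApproximation[Exp[x], bad, {x, {0, 2}, 2, 4}]
    (* restarts from the previous result *)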

PrintFlag and PlotFlag are options to MiniMaxApproximation that are useful for diagnosing the reason for failure. Setting either of these options will have no effect on the calculations. If PlotFlag is set to True, a plot of the relative error in each iterated rational approximation will be generated. If these plots change dramatically from one iteration to the next, you should probably increase the braking. If PrintFlag is set to True, two things will happen. First, as the extrema are being located for the first time, lists of the changes in the approximations to the abscissas are printed. The numbers in these lists should rapidly decrease once they get reasonably small. After the extrema are located for the first time, lists of ordered pairs are printed, consisting of abscissas of the extrema of the relative error and the value of the relative error at each of those abscissas.

GeneralRationalInterpolation[{fx,fy},{t,m,k},x,{t1,t2,…,tm+k+1}]
give a rational interpolation of degree (m,k) to a function of x whose graph is given parametrically as a function of t
GeneralRationalInterpolation[{fx,fy},{t,m,k},x,{t,tmin,tmax}]
give a rational interpolation with the points ti chosen automatically

General rational interpolations.

There are certain approximation problems in which you will want to use GeneralRationalInterpolation instead of RationalInterpolation. For example, you should use GeneralRationalInterpolation if you want to find a rational interpolating function to the inverse of a function that can only be evaluated by using FindRoot. In such a case, RationalInterpolation would be very slow.

GeneralRationalInterpolation lets you do more general approximation problems by allowing the function that is to be approximated to be given parametrically. For example, the graph of the function Sqrt[1-x^2] is just the upper half of the unit circle. This can also be described parametrically as (x,y) = (Cos[t], Sin[t]) where 0 ≤ t ≤ Pi. Thus you can compute an approximation to Sqrt[1-x^2] by specifying the function as {Cos[t],Sin[t]}.

In the general case, when you specify the functions in GeneralRationalInterpolation as {fx,fy}, the expressions fx and fy are functions of t. The function that is interpolated is the one whose graph is given parametrically by (x,y) = (fx(t), fy(t)) for tmin ≤ t ≤ tmax.

Note that you must always specify a symbol for the independent variable; using the parametric variable as the independent variable would be incorrect. GeneralRationalInterpolation takes the same options as RationalInterpolation.

This gives an approximation to the function whose graph is the upper half-circle.
The error is quite large near the endpoints.
You do not need to specify the interpolation points explicitly.
As is the case with RationalInterpolation, the error is often smaller if the interpolation points are not given explicitly. The points chosen automatically tend to be better distributed.
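For the half-circle example, the two calling patterns might look as follows; the degree (2,4) and the seven explicit parameter values are illustrative choices.

gri1 = GeneralRationalInterpolation[{Cos[t], Sin[t]}, {t, 2, 4}, x,
         Range[0, Pi, Pi/6]]    (* explicit parameter values *)
gri2 = GeneralRationalInterpolation[{Cos[t], Sin[t]}, {t, 2, 4}, x,
         {t, 0, Pi}]            (* points chosen automatically *)
Plot[Sqrt[1 - x^2] - gri2, {x, -1, 1}]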
GeneralMiniMaxApproximation[{fx,fy},{t,{tmin,tmax},m,k},x]
give the rational approximation of degree (m,k) to a function of x whose graph is given parametrically as a function of t
GeneralMiniMaxApproximation[{fx,fy},approx,{t,{tmin,tmax},m,k},x]
give the minimax approximation starting the iterative algorithm with approx
GeneralMiniMaxApproximation[{fx,fy,g},{t,{tmin,tmax},m,k},x]
give the rational approximation computing the error using a factor g(t)

General minimax approximations.

The function that is to be approximated is specified in GeneralMiniMaxApproximation in the same way as it is in GeneralRationalInterpolation. The options for GeneralMiniMaxApproximation are the same as for MiniMaxApproximation.

This gives a degree (2,4) minimax approximation to the inverse of Exp.
Since there is an easy way to evaluate the inverse of Exp, it is also possible to use MiniMaxApproximation for this problem. The only difference between the solutions is in the abscissas of the extrema.
This extracts the minimax approximation.
Here is a plot of the relative error in the approximation over the interval.
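A sketch for the inverse of Exp (that is, Log); the parameter interval {1, 2} and degree (2,4) are illustrative choices.

gmma = GeneralMiniMaxApproximation[{Exp[t], t}, {t, {1, 2}, 2, 4}, x];
approx = gmma[[2, 1]]    (* approximates Log[x] for x between E and E^2 *)
Plot[1 - approx/Log[x], {x, E, E^2}]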

If you use the option Derivatives with GeneralMiniMaxApproximation, you must specify a list of derivatives for both parts of the parametrically defined function.

This prevents gderivs from acting on symbols.
You get the same result again.
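The analogous definition for the parametric case might look as follows; the exact structure expected by Derivatives here, a list {fx, fx', fx''} followed by {fy, fy', fy''}, is an assumption.

gderivs[t_?NumberQ] := {{Exp[t], Exp[t], Exp[t]}, {t, 1, 0}}
    (* derivatives for both parts of the parametric function *)
GeneralMiniMaxApproximation[{Exp[t], t}, {t, {1, 2}, 2, 4}, x,
  Derivatives -> gderivs[t]]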

Another situation in which GeneralMiniMaxApproximation is useful is when you want to do an approximation and measure its goodness of fit using the absolute error instead of the relative error that is used by default. If you want to use a different metric for the error, you can specify it as the third part of the parametrically defined function. If the function is given as {fx, fy, g} and r is the rational minimax approximation to the function that relates fy to fx, then the error that is minimized is (fy(t) - r(fx(t)))/g(t). If g is not specified, it is taken to be the same as fy, and it is the maximum relative error that is minimized. If you want to minimize the absolute error, simply use the constant 1 for g.

This gives a rational minimax approximation to the inverse of Exp with the maximum absolute error minimized.
This extracts the approximation.
Now the absolute error is equioscillatory.
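With g set to the constant 1, the absolute rather than relative error is equalized; this sketch continues the example above.

gmma = GeneralMiniMaxApproximation[{Exp[t], t, 1}, {t, {1, 2}, 2, 4}, x];
approx = gmma[[2, 1]];
Plot[Log[x] - approx, {x, E, E^2}]    (* equioscillating absolute error *)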

If you get an approximation for which the error is unacceptably large, there are several things you can do. If you increase the degree of the numerator and/or the denominator, the minimax error will usually decrease; it cannot increase. Even shifting a degree from the numerator to the denominator or vice versa can affect the size of the error. If the interval over which the approximation is to be valid is very long, it is probably a good idea to look at the asymptotic behavior of the function and choose degrees for the numerator and denominator that give the correct asymptotic behavior. For example, to approximate a function that approaches a finite nonzero limit for large x, the degrees of the numerator and denominator should be the same, since the ratio of two polynomials of equal degree likewise approaches a constant for large x.

Another way to decrease the error is to shorten the interval over which the approximation is to be valid. Often it is better to have several low-degree approximations covering a long interval than a single high-degree approximation. The extra storage required is not that much, and each of the simpler approximations will evaluate faster.

To get an optimal minimax approximation, you need to take into account the numerical behavior of the final approximation. It is usually a good idea to define the function so that the variable appearing in the approximation has values near the origin. Thus, instead of finding a minimax approximation to a function on an interval far from the origin, it is better to shift the variable first and find a minimax approximation to the shifted function on an interval near the origin.

In cases where you want to avoid all potential problems associated with loss of digits due to subtractive cancellation, you may even want to do some shifting of the variable after the approximation is found. For example, a rational minimax approximation may have all positive coefficients in the numerator and coefficients with alternating signs in the denominator. The relative error in the approximation itself may be tiny, but using the approximation in the given form could result in a larger relative error, since a few bits can be lost to cancellation near the endpoints: with such coefficients, cancellation occurs in the numerator near the left endpoint and in the denominator near the right endpoint. By rewriting the numerator as a polynomial in the distance from the left endpoint and the denominator as a polynomial in the distance from the right endpoint, all coefficients in both the numerator and the denominator become positive, and the shifted variables take values between 0 and the length of the interval; no cancellation can occur. Of course, all of this must be done to much higher precision to ensure that the coefficients themselves are correct to the required precision.

It is also very important to consider the zeros of the function and its asymptotic behavior. The simplicity of the resulting minimax approximation is greatly affected by the extent to which you can trivially make the function look like a constant function. As an example, to find a minimax approximation to Gamma[1/2,x,Infinity] on an interval extending toward infinity, you can consider the function Gamma[1/2,x,Infinity] Exp[x] Sqrt[x+4/7] (cf. Abramowitz and Stegun, Handbook of Mathematical Functions, 6.5.31; the 4/7 was chosen empirically). This function varies only a few percent over the interval in question; it is much easier to find a minimax approximation to this function than to the original function.

If you are attempting to minimize the maximum relative error and the function has zeros in the interval in question, you will have to divide out the zeros first. If singularities occur in the interval, they must be eliminated as well, either by multiplying the function by a factor that vanishes at the singularity or by subtracting the singular part away.

Finding Roots Using Interpolation

The function FindRoot is useful for finding a root of a function. It is fairly robust and will almost always find a root if it is given a sufficiently close starting value and the root is simple (or if it is multiple and the option DampingFactor is appropriately set). To achieve this robustness, FindRoot makes compromises and is not particularly conservative about the number of function evaluations it uses. There are cases, however, where you know that the function is very well behaved, but evaluating it is extremely expensive, particularly for very high precision. In such cases, InterpolateRoot may be more efficient.

InterpolateRoot looks at the previous evaluations of the function, say f(a), f(b), f(c), and f(d), and forms the interpolating polynomial that passes through the points (f(a),a), (f(b),b), (f(c),c), and (f(d),d). The algorithm gets the next approximation for the root of f by evaluating the interpolating polynomial at 0. It turns out that using all of the previous data is not the best strategy. While the convergence rate increases with the use of additional data points, the rate is never greater than quadratic. Further, the more data points used, the less robust the algorithm becomes. InterpolateRoot uses only the previous four data points, since there is almost no benefit to using more.

InterpolateRoot[f,{x,a,b}]find a root of the function f near the starting points a and b
InterpolateRoot[eqn,{x,a,b}]find a root of the equation eqn near the starting points a and b

Root finding with InterpolateRoot.

option name    default value
AccuracyGoal    Automatic    the desired accuracy in the root being sought
MaxIterations    15    maximum number of function evaluations before giving up
WorkingPrecision    40    the maximum precision to use in the arithmetic calculations
ShowProgress    False    whether to print intermediate results and other information as the algorithm progresses

Options for InterpolateRoot.

The Automatic choice for AccuracyGoal means that the AccuracyGoal is taken to be 20 digits less than the WorkingPrecision. Note that AccuracyGoal as used in InterpolateRoot is different from AccuracyGoal in FindRoot. FindRoot is a much more general function that works for systems of equations, where justifying an accuracy in the value of the root itself is too difficult; FindRoot merely stops when the value of the function is sufficiently small. InterpolateRoot is much more specialized: it works only for a single function of a single variable at a simple root, and it assumes that the function is very well behaved. In such cases it is quite easy to justify an accuracy in the value of the root itself.

Set up a function whose root you wish to find, with a counter n to determine the number of evaluations.
This uses FindRoot to find Log[2].
Here is the number of function evaluations to get the result.
InterpolateRoot requires fewer function evaluations to get the same result.
You can observe how the approximations converge to the root.
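A sketch of such a comparison; the function Exp[x] - 2, whose root is Log[2], matches the text, while the starting values are illustrative.

n = 0; f[x_?NumberQ] := (n++; Exp[x] - 2)
FindRoot[f[x], {x, 0, 1}, WorkingPrecision -> 40]
n    (* number of function evaluations used by FindRoot *)
n = 0; InterpolateRoot[f[x], {x, 0, 1}]
n    (* typically fewer evaluations for the same result *)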

Approximating Integrals with Interpolating Functions in the Integrand

The Wolfram Language function NIntegrate uses algorithms that assume that the integrand is smooth to at least several orders. InterpolatingFunction objects typically do not satisfy this assumption; they are continuous, but only piecewise smooth. The algorithms used by NIntegrate converge very slowly when applied to InterpolatingFunction objects, especially in several dimensions. NIntegrate allows the domain of integration to be broken up into several pieces and the integral evaluated over each piece. If the pieces of the domain correspond to the pieces over which the InterpolatingFunction is smooth, NIntegrate will converge much more rapidly. NIntegrateInterpolatingFunction automatically breaks up the domain of integration.

NIntegrateInterpolatingFunction[expr,{x,xmin,xmax}]
find a numerical approximation to an integral with InterpolatingFunction objects in the integrand
NIntegrateInterpolatingFunction[expr,{x,xmin,xmax},{y,ymin,ymax},…]
find a numerical approximation to a multidimensional integral with InterpolatingFunction objects in the integrand

Numerical approximations to integrals with InterpolatingFunction objects in the integrand.

NIntegrateInterpolatingFunction uses the function NIntegrate, but it breaks up the domain into sections where the InterpolatingFunction object is smooth.

This creates an InterpolatingFunction object approximating an oscillatory function in two dimensions.
This gives the time used to evaluate the integral, together with the result of the integral.
NIntegrate produces almost exactly the same result, but takes much longer because the convergence is poor if the domain is not properly broken up.
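A sketch of such a comparison; the oscillatory function and grid spacing are illustrative.

data = Flatten[
   Table[{{x, y}, Sin[3 x] Cos[4 y]}, {x, 0., 2., 0.1}, {y, 0., 2., 0.1}], 1];
if = Interpolation[data];    (* piecewise-smooth InterpolatingFunction *)
Timing[NIntegrateInterpolatingFunction[if[x, y], {x, 0, 2}, {y, 0, 2}]]
Timing[NIntegrate[if[x, y], {x, 0, 2}, {y, 0, 2}]]
    (* nearly the same value, but much slower *)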

If you simply need the integral of an InterpolatingFunction object itself (as opposed to an integrand that merely contains one), it is better to use Integrate, because this gives a result that is exact for the polynomial approximation used in the InterpolatingFunction object.

Order Star Plots

Analysis of the stability of numerical methods for solving differential equations is a considerably more difficult task than finding the order of approximation. This difficulty is reflected in the fact that there are many different ways of defining stability. This package assists in the determination of stability regions for numerical methods.

Stability regions are important because they reflect the rate at which errors are propagated in the approximate solution. Just as there are absolute and relative errors in numerical analysis, there are also absolute and relative measures of stability. This package renders order stars, which are useful in examining the relative stability of a method. Furthermore, by specifying the comparison function to be identically 1, you can also draw regions of absolute stability.

A given numerical method for a problem can be recast into the framework of approximation theory. The goal is then to study how well this approximant behaves when compared with the solution. There is a kind of paradox here, for if the solution were known, then you would have no need to resort to a numerical approximation. However, you want to establish a framework that applies to any problem in a given class. Since generically analytic solutions to problems cannot be found, it is common to study how a numerical method behaves when it is applied to a linearized system. In the area of ordinary differential equations, for example, you might be interested in solutions to the system of equations

y'(t) = f(y(t))

for a generally nonlinear f. It is common to replace this system by a scalar linear problem that you can solve, namely,

y'(t) = λ y(t),   y(0) = 1   (1)

where λ is considered to be a complex constant and the initial condition has been fixed so that the equation is uniquely determined. Stability analysis is now a study of how well a numerical solution behaves when applied to the simplified differential equation (1). Equation (1) is often referred to as the scalar linear test problem, or Dahlquist's equation.

The discussion which follows concentrates on how to use the package. You should keep in mind that although the focus is on the behavior of an approximant, the underlying interest is in the numerical method from which the approximant arose when applied to some problem. For more information on this correspondence, stability analysis, and the theory of order stars, see the references at the end of this section.

OrderStarPlot[r,f]
draw the order star depicting the region where |r/f|<1, for the functions r and f
OrderStarPlot[r,f,z]
draw the region in the complex z plane where |r/f|<1, where r and f are functions of z
OrderStarPlot[r,f,OrderStarKind->Second]
draw the order star depicting the region where Re(r-f)<0

Drawing order stars.

Padé approximations are rational polynomial approximants in which all parameters are chosen to maximize the order at some local expansion point. Certain numerical methods, such as Runge-Kutta methods, are related to Padé approximants to the exponential.

This constructs a Padé approximant to the exponential function. The function for doing this is loaded automatically. This approximant corresponds to the forward Euler method.
This is the relative stability region, or order star of the first kind, for the forward Euler method. The pole of the approximant is highlighted.
This is the absolute stability region for the forward Euler method, obtained as a relative comparison with 1.
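A sketch of these plots; the forward Euler approximant is the degree (1,0) Padé approximant to e^z, namely 1 + z.

approx = PadeApproximant[Exp[z], {z, 0, {1, 0}}]    (* gives 1 + z *)
OrderStarPlot[approx, Exp[z], z]    (* order star of the first kind *)
OrderStarPlot[approx, 1, z]         (* absolute stability region *)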
OrderStarInterpolation    specify whether to display points where r and f are equal
OrderStarKind    specify which kind (First or Second) of order star to draw
OrderStarLegend    specify whether to include a plot legend containing the various symbols used
OrderStarPoles    specify whether to indicate the poles of the approximant and/or the function
OrderStarZeros    specify whether to indicate the zeros of the approximant and/or the function
OrderStarSymbolSize    specify the size of the symbols used to indicate zeros, poles, and interpolation points
OrderStarSymbolThickness    specify the line thickness of the symbols used to indicate zeros, poles, and interpolation points

Options unique to OrderStarPlot.

When you ask for certain features to be displayed, and the Wolfram Language is unable to find these features, you will obtain the message OrderStarPlot::sols containing more specific information relating to the problem. Solve may also issue messages, such as when inverse functions are being used.

OrderStarPlot uses heuristics to determine the independent variable; essentially, it looks for a symbol without which the given expressions will not evaluate numerically. For a very complicated expression you can save time by specifying the variable explicitly. If there is any ambiguity in the variable choice, the input returns unevaluated and an appropriate warning message is issued.

This indicates the variable to use and highlights the points where r and f are equal. Finding such points may not be possible in general if the relationship between the functions is nonalgebraic.

In addition to True and False, the options OrderStarInterpolation, OrderStarPoles, and OrderStarZeros can take lists of coordinate pairs specifying points that cannot be found automatically, and OrderStarLegend can take scaled coordinates specifying the position and size of the legend. As well as resizing the plot legend this way, you can specify additional information for the legend, such as the style and size of the font to use.

The position of the legend is given in scaled coordinates, using the same syntax as that of Rectangle. Font style and size information are also specified, and the symbols used to represent zeros and poles are increased in size.

In addition to the many options unique to OrderStarPlot, there are several that are simply passed to ContourPlot and used to produce the plot.

AspectRatio    specify the aspect ratio of the plot
Axes    specify whether to draw axes
AxesOrigin    specify the intersection of the axes
ClippingStyle    specify the style of what should be drawn when curves or surfaces would extend beyond the plot range
ColorFunction    specify a function to apply to color the stability and instability regions
FrameTicks    specify whether or where to place tick marks around the frame
MaxRecursion    specify how many recursive subdivisions can be made
PlotPoints    specify the number of sample points to use in constructing the contour plot
PlotRange    specify the plot range to be used in the plot
Ticks    specify whether or where to place tick marks along the axes

Options common to many graphics functions.

An important issue is whether an order star crosses the imaginary axis. Additionally, it may be interesting to illustrate symmetry around the real axis. In order to facilitate these comparisons, OrderStarPlot uses graphics options to render axes which pass through the origin.

OrderStarPlot resolves fine features by making use of the MaxRecursion and PlotPoints options of ContourPlot.

The default plot region is determined from essential features of the order star. However, this default can be overridden using standard Graphics options.

This defines a function.
This constructs a Padé approximant to the function.
You can change the default plot region and plot density using standard options. However, in doing so you can make the plot quite jagged.
Increasing MaxRecursion or PlotPoints can resolve this issue.
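For example, this sketch uses illustrative choices of function, degree, and plot options; the original example's function has been elided, so the exponential stands in for it here.

approx = PadeApproximant[Exp[z], {z, 0, {2, 3}}];
OrderStarPlot[approx, Exp[z], z, PlotRange -> {{-8, 8}, {-8, 8}}]
    (* enlarging the plot range can make the boundary jagged *)
OrderStarPlot[approx, Exp[z], z, PlotRange -> {{-8, 8}, {-8, 8}},
  PlotPoints -> 60, MaxRecursion -> 3]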

Order stars of the second kind are useful in the study of stability for linear multistep methods.

You can also plot the order star of the second kind.
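Continuing the sketch above:

OrderStarPlot[approx, Exp[z], z, OrderStarKind -> Second]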

Order stars provide a means of determining, at a glance, many important features of interest, such as A-stability. Furthermore, by considering a relative comparison with a function, order stars manage to encode the order of accuracy of a numerical scheme into the stability region.

Here only the zeros and poles of the approximant are shown, since the exponential function has no finite zeros or poles. The numerical method corresponding to this approximant is A-stable, since the approximant has no poles in the left half-plane. Furthermore, a count of the sectors adjoining the origin tells you that the order of approximation is 5 (one less than the number of sectors).

Expositions of the theory of stability of numerical methods can be found in [1], [2], and [3]. More examples of applications of the package are provided in [4].