AsymptoticLess

AsymptoticLess[f,g,x->x*]

gives conditions for f(x)≺g(x), or f(x)=o(g(x)), as x->x*.

AsymptoticLess[f,g,{x1,…,xn}->{x1*,…,xn*}]

gives conditions for f(x1,…,xn)≺g(x1,…,xn), or f=o(g), as {x1,…,xn}->{x1*,…,xn*}.

Details and Options

  • Asymptotic less is also expressed as f is little-o of g, f is of order less than g, f grows slower than g, and f is dominated by g. The point x* is often assumed from context.
  • Asymptotic less is an order relation and means |f(x)|<=c |g(x)| when x is near x* for all constants c>0.
  • Typical uses include expressing simple strict upper bounds for functions and sequences near some point. It is frequently used for solutions to equations and to give simple strict upper bounds for computational complexity.
  • For a finite limit point x* and {x1*,…,xn*}:
  • AsymptoticLess[f[x],g[x],x->x*]    for all c>0 there exists delta(c,x*)>0 such that 0<|x-x*|<delta(c,x*) implies |f(x)|<=c |g(x)|
    AsymptoticLess[f[x1,…,xn],g[x1,…,xn],{x1,…,xn}->{x1*,…,xn*}]    for all c>0 there exists delta(c,x*)>0 such that 0<||{x1-x1*,…,xn-xn*}||<delta(c,x*) implies |f(x1,…,xn)|<=c |g(x1,…,xn)|
  • For an infinite limit point:
  • AsymptoticLess[f[x],g[x],x->∞]    for all c>0 there exists T(c) such that x>T(c) implies |f(x)|<=c |g(x)|
    AsymptoticLess[f[x1,…,xn],g[x1,…,xn],{x1,…,xn}->{∞,…,∞}]    for all c>0 there exists T(c) such that x1>T(c),…,xn>T(c) implies |f(x1,…,xn)|<=c |g(x1,…,xn)|
  • AsymptoticLess[f[x],g[x],x->x*] holds if and only if Limit[Abs[f[x]/g[x]],x->x*]==0, provided g[x] does not have an infinite set of zeros near x*. This equivalence is illustrated in the sketch following this list.
  • The following options can be given:
  • Assumptions    $Assumptions    assumptions on parameters
    Direction    Reals    direction to approach the limit point
    GenerateConditions    Automatic    generate conditions for parameters
    Method    Automatic    method to use
    PerformanceGoal    "Quality"    what to optimize
  • Possible settings for Direction include:
  • Reals or "TwoSided"    from both real directions
    "FromAbove" or -1    from above or larger values
    "FromBelow" or +1    from below or smaller values
    Complexes    from all complex directions
    Exp[I θ]    in the direction Exp[I θ]
    {dir1,…,dirn}    use direction diri for variable xi independently
  • Direction->Exp[I θ] at x* indicates the direction tangent of a curve approaching the limit point x*.
  • Possible settings for GenerateConditions include:
  • Automatic    non-generic conditions only
    True    all conditions
    False    no conditions
    None    return unevaluated if conditions are needed
  • Possible settings for PerformanceGoal include $PerformanceGoal, "Quality" and "Speed". With the "Quality" setting, AsymptoticLess typically solves more problems or produces simpler results, but it potentially uses more time and memory.
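As a minimal illustration of these definitions and of the relation to Limit (the inputs below are illustrative, not taken from the original notebook):

AsymptoticLess[x^3, x^2, x -> 0]   (* x^3 is dominated by x^2 near 0; should evaluate to True *)
AsymptoticLess[x^2, E^x, x -> Infinity]   (* polynomial versus exponential growth; should evaluate to True *)
Limit[Abs[x^3/x^2], x -> 0]   (* the ratio tends to 0, consistent with the first comparison *)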

Examples


Basic Examples  (2)

Verify that as :

Visualize the two functions:

Verify that as :

Visualize the two functions:

Scope  (9)

Compare functions that are not strictly positive:

Show diverges more slowly than at the origin:

Answers may be Boolean expressions rather than explicit True or False:

When comparing functions with parameters, conditions for the result may be generated:

By default, a two-sided comparison of the functions is made:

When comparing larger values of x, the function is indeed smaller than x^2 UnitStep[x]:

The relationship fails for smaller values of x:

Visualize the two functions:

Functions like Sqrt may have the same relationship in both real directions along the negative reals:

If approached from above in the complex plane, the same relationship is observed:

However, approaching from below in the complex plane produces a different result:

This is due to a branch cut where the imaginary part of Sqrt reverses sign as the negative real axis is crossed:

Hence, the relationship does not hold in the complex plane in general:

Visualize the relative sizes of the functions when approached from the four real and imaginary directions:

Compare multivariate functions:

Visualize the norms of the two functions:

Compare multivariate functions at infinity:

Use parameters when comparing multivariate functions:
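As an illustrative multivariate comparison (the functions are chosen for this sketch), the product x^2 y^2 is asymptotically smaller than the sum x^2 + y^2 at the origin:

AsymptoticLess[x^2 y^2, x^2 + y^2, {x, y} -> {0, 0}]   (* should evaluate to True *)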

Options  (10)

Assumptions  (1)

Specify conditions on parameters using Assumptions:

Different assumptions can produce different results:
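A hypothetical illustration of the effect of Assumptions (the symbolic exponent a and the comparison are chosen for this sketch):

AsymptoticLess[x^a, x^2, x -> Infinity, Assumptions -> a < 2]   (* should evaluate to True *)
AsymptoticLess[x^a, x^2, x -> Infinity, Assumptions -> a > 2]   (* should evaluate to False *)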

Direction  (5)

Test the relation from below:

Equivalently:

Test the relation from above:

Equivalently:

Test the relation at piecewise discontinuities:

Since it fails in one direction, the two-sided result is false as well:

Visualize the two functions and their ratio:

The relationship at a pole is independent of the direction of approach:

Test the relation at a branch cut:

Compute the relation, approaching from different quadrants:

Approaching the origin from the first quadrant:

Equivalently:

Approaching the origin from the second quadrant:

Approach the origin from the right half-plane:

Approaching the origin from the bottom half-plane:

Visualize the ratio of the functions, which becomes small near the origin except along :
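A sketch of direction-dependent behavior with a piecewise function chosen for illustration; x UnitStep[x] vanishes identically to the left of the origin, so the comparison with x^2 depends on the side of approach:

AsymptoticLess[x UnitStep[x], x^2, x -> 0, Direction -> "FromBelow"]   (* the first function is identically zero on that side; should evaluate to True *)
AsymptoticLess[x UnitStep[x], x^2, x -> 0, Direction -> "FromAbove"]   (* the ratio behaves like 1/x; should evaluate to False *)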

GenerateConditions  (3)

Return a result without stating conditions:

This result is only valid if n>0:

Return unevaluated if the results depend on the value of parameters:

By default, conditions are generated that guarantee a result:

By default, conditions are not generated if only special values invalidate the result:

With GenerateConditions->True, even these non-generic conditions are reported:
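An illustrative call with a symbolic parameter (the exponent n here is hypothetical); with GenerateConditions->True the answer is typically returned together with the parameter conditions under which it holds, for example a condition such as n<2 in a comparison like this:

AsymptoticLess[x^n, x^2, x -> Infinity, GenerateConditions -> True]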

PerformanceGoal  (1)

Use PerformanceGoal to avoid potentially expensive computations:

The default setting uses all available techniques to try to produce a result:
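A sketch of the option usage (the particular comparison is chosen for illustration); with the "Speed" setting the call may return unevaluated rather than spend time on a difficult comparison:

AsymptoticLess[Gamma[x], Exp[x^2], x -> Infinity, PerformanceGoal -> "Speed"]
AsymptoticLess[Gamma[x], Exp[x^2], x -> Infinity]   (* the default "Quality" setting tries harder and should return True *)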

Applications  (18)

Basic Applications  (4)

Show that at 0:

Establish this symbolically:

Visualize the functions:

This relationship extends to all real powers:

Visualize functions with fractional and negative powers:

Show that at :

Establish this symbolically:

Visualize the relationship in a logarithmic plot:

This relationship extends to all real powers:

Visualize functions with fractional and negative powers:

Show that at 0:

This relationship can be inferred from the fact that |x^2 Sin[1/x]| <= x^2:

Show that at :

Absolute and Relative Errors  (2)

A function g approximates f as x->x* with small absolute error if f-g≺1 as x->x*. Show that approximates with small absolute error as :

Show that also approximates with small absolute error as :

Show that approximates with small absolute error as :

Show that approximates with small absolute error as :
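As an illustrative check of this definition (the functions below are chosen for this sketch, not taken from the original examples), 1+x approximates E^x near 0 with small absolute error, since the error is of order x^2:

AsymptoticLess[E^x - (1 + x), 1, x -> 0]   (* the error tends to zero; should evaluate to True *)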

A function g approximates f as x->x* with small relative error if (f-g)/g≺1 as x->x*. Show that approximates with small relative error as :

Show that approximates with small relative error as :

But the preceding approximation does not have small absolute error:

Similarly, Stirling's formula approximation to the factorial has small relative error as n->∞:

But not small absolute error:
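A sketch of the Stirling comparison under the definitions above, taking the standard leading term Sqrt[2 Pi n] (n/E)^n as the approximation (an assumption about the truncation used):

stirling = Sqrt[2 Pi n] (n/E)^n;
AsymptoticLess[n! - stirling, stirling, n -> Infinity]   (* small relative error; should evaluate to True *)
AsymptoticLess[n! - stirling, 1, n -> Infinity]   (* but the absolute error grows; should evaluate to False *)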

Asymptotic Approximation  (6)

Let f be a function and g an approximation to f near x*. Then the approximation is asymptotic if f-g≺g at x*. In other words, the remainder or error is asymptotically smaller than the approximation. Show that is an asymptotic approximation to at :

Show that is an asymptotic approximation to at :

Show that is not an asymptotic approximation to at :
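A self-contained illustration of the definition with functions chosen for this sketch (not the original examples), one positive case and one negative case:

g1 = 1 + x + x^2/2;
AsymptoticLess[E^x - g1, g1, x -> 0]   (* the remainder is O(x^3) while g1 -> 1; should evaluate to True *)
g2 = x^2;
AsymptoticLess[(x + x^2) - g2, g2, x -> 0]   (* the error x dominates x^2 near 0; should evaluate to False *)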

Show that Stirling's formula is an asymptotic approximation to as :

Show that is an asymptotic approximation to PrimePi[x] as x->∞:

Series generates asymptotic approximations to elementary and special functions. For instance, generate a degree-10 approximation to at :

Show that the series is asymptotic:
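An illustrative version of this check with a specific function (Exp is an assumption for this sketch, not necessarily the function used in the original example):

ser = Normal[Series[Exp[x], {x, 0, 10}]];
AsymptoticLess[Exp[x] - ser, ser, x -> 0]   (* the degree-10 remainder is O(x^11); should evaluate to True *)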

Determine an asymptotic series of Cot[x] at 0:

Show that a series of Gamma[x] is asymptotic at -1:

Find an asymptotic approximation of at 0:

There can be subtleties with asymptotic approximation when the function to be approximated approaches zero infinitely many times in every neighborhood of the approximation point. As an example, consider the asymptotic expansion of BesselJ[1, x] near ∞:

The approximation is not asymptotic because at every zero of the Bessel function, the approximation is not quite zero:

Despite the ratio generally approaching zero, the condition |BesselJ[1, x] - g(x)| <= c |g(x)|, with g the approximation, is violated infinitely many times:

On the other hand, consider the approximation of the never-zero Hankel function HankelH1[1, x]:

This approximation is asymptotic:

So is the approximation of the Hankel function of the second kind, HankelH2[1, x]:

As BesselJ[1, x] == (HankelH1[1, x] + HankelH2[1, x])/2, its approximation can be understood as nearly asymptotic, being the sum of two such approximations:

Alternatively, consider the asymptotic approximation of 1 + BesselJ[1, x] near ∞:

This is a true asymptotic approximation:

The ratio of the approximation and the function consistently approaches 1:
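A sketch of these comparisons using the standard leading-order large-x forms (the explicit formulas below are assumptions based on the classical expansions, not the series computed in the original notebook):

jApprox = Sqrt[2/(Pi x)] Cos[x - 3 Pi/4];        (* leading behavior of BesselJ[1, x] *)
hApprox = Sqrt[2/(Pi x)] Exp[I (x - 3 Pi/4)];    (* leading behavior of HankelH1[1, x] *)
AsymptoticLess[BesselJ[1, x] - jApprox, jApprox, x -> Infinity]   (* expected False: the bound fails near the zeros of the approximation *)
AsymptoticLess[HankelH1[1, x] - hApprox, hApprox, x -> Infinity]  (* expected True: the Hankel function never vanishes *)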

Use AsymptoticIntegrate to generate asymptotic approximations to definite integrals. For instance, find an asymptotic approximation to as and compare to the exact value:

Create an asymptotic approximation with a smaller number of terms:

This approximation is asymptotic to the exact integral as well as the first approximation:

Use AsymptoticIntegrate to generate asymptotic approximations to indefinite integrals, though there is a need to account for the constant of integration. Consider an approximation of as :

Show that the approximation is asymptotic:

However, the approximation is asymptotic after matching values at the expansion point:

Compute two different asymptotic approximations of as :

There is no symbolic result to compare to, but the process can be shown to be asymptotic:

Use AsymptoticDSolveValue to generate asymptotic approximations to a differential equation:

There is no exact result to compare to, but the process can be shown to be asymptotic:

Compare with the values of a numerical solution obtained using NDSolveValue:
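A minimal sketch of this workflow with a hypothetical ODE chosen for illustration: two truncations of the series solution are computed with AsymptoticDSolveValue, and the lower-order one is checked to be asymptotic to the higher-order one:

eqn = {y'[x] == y[x]^2 + x, y[0] == 1};
approx5 = AsymptoticDSolveValue[eqn, y[x], {x, 0, 5}];
approx8 = AsymptoticDSolveValue[eqn, y[x], {x, 0, 8}];
AsymptoticLess[approx8 - approx5, approx5, x -> 0]   (* the extra terms are higher order; should evaluate to True *)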

Asymptotic Scales & Series  (2)

A sequence of functions {φ1, φ2, …} is an asymptotic scale at x* iff φn+1 ≺ φn at x* for every n. Write a function that can check whether a finite list of functions is an asymptotic scale:
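One possible sketch of such a checker (the name asymptoticScaleQ and its implementation are illustrative, not the definition from the original notebook); it tests that each function in the list is asymptotically greater than the next:

asymptoticScaleQ[funs_List, x_ -> x0_] :=
  AllTrue[Partition[funs, 2, 1], AsymptoticLess[Last[#], First[#], x -> x0] === True &]

asymptoticScaleQ[{1/x^3, 1/x^2, 1/x, 1, x, x^2, x^3}, x -> 0]   (* should evaluate to True *)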

Integer powers give an asymptotic scale at 0, with 1/x^3 ≻ 1/x^2 ≻ 1/x ≻ 1 ≻ x ≻ x^2 ≻ x^3 as x->0:

They also form an asymptotic scale at ∞, but in the opposite order:

Powers of 1/Log[x] form an asymptotic scale, with 1 ≻ 1/log(x) ≻ 1/log^2(x) ≻ 1/log^3(x) as x->∞:

Form an asymptotic scale using powers of x and Log[x], with 1 ≻ x ≻ x^2 log(x) ≻ x^2 ≻ x^3 log(x) ≻ x^3 ≻ x^4 log(x) as x->0:

Show that as if :

Write a function that sorts functions by the asymptotic ordering at some point :
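One possible sketch of such a sorter (the name asymptoticSort is illustrative); it orders the functions so that each is asymptotically less than the next:

asymptoticSort[funs_List, x_ -> x0_] :=
  Sort[funs, AsymptoticLess[#1, #2, x -> x0] === True &]

asymptoticSort[{x^3, 1/x, x, 1/x^2}, x -> 0]   (* should give {x^3, x, 1/x, 1/x^2} *)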

Generate a random list of integer power functions:

Asymptotically sort the list at :

Asymptotically sort the same list at :

Computational Complexity  (4)

Simple sorting algorithms (bubble sort, insertion sort) take approximately a n^2 steps to sort n objects, while optimal general algorithms (heap sort, merge sort) take approximately b n Log[n] steps to do the sort. Show that optimal algorithms always take less time to sort large enough collections of objects:

Certain special algorithms (counting sort, radix sort), where there is information ahead of time about the possible inputs, can run in c n time. These are even faster than the optimal algorithms when they can be used:

Visualize the growth of the three time scales:
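A sketch of these comparisons, with a, b, c standing for the positive constants mentioned above:

AsymptoticLess[b n Log[n], a n^2, n -> Infinity, Assumptions -> a > 0 && b > 0]   (* n log n grows more slowly than n^2; should evaluate to True *)
AsymptoticLess[c n, b n Log[n], n -> Infinity, Assumptions -> b > 0 && c > 0]     (* linear time grows more slowly still; should evaluate to True *)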

In a bubble sort, adjoining neighbors are compared and swapped if they are out of order. After one pass of n-1 comparisons, the largest element is at the end. The process is then repeated on the remaining n-1 elements, and so forth, until only two elements at the very beginning remain. If a comparison and swap takes c steps, the total number of steps for the sort is:

The polynomial is asymptotically approximated by just the quadratic term:
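A sketch of this computation, with c the per-comparison cost from the description above:

steps = Sum[c (n - k), {k, 1, n - 1}]   (* gives c n (n - 1)/2 *)
AsymptoticLess[steps - c n^2/2, c n^2/2, n -> Infinity, Assumptions -> c > 0]   (* the lower-order part is negligible; should evaluate to True *)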

In a merge sort, the list of elements is split in two, each half is sorted, and then the two halves are combined. Thus, the time T[n] to do the sort will be the sum of some constant time b to compute the middle, 2 T[n/2] to sort the two halves, and some multiple a n of the number of elements to combine the two halves:

Solve the recurrence equation to find the time t to sort n elements:

This expression is asymptotically approximated by just the last term in the expression:
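A sketch under the stated assumptions: the closed form below follows from the recurrence T(n) == 2 T(n/2) + a n + b with T(1) == c for n a power of 2 (written out here rather than produced by RSolveValue), and its dominant term is a n Log[2, n]:

t = a n Log[2, n] + (b + c) n - b;   (* assumed closed form of the merge-sort time *)
AsymptoticLess[t - a n Log[2, n], a n Log[2, n], n -> Infinity, Assumptions -> a > 0 && b > 0 && c > 0]   (* should evaluate to True *)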

The traveling salesperson problem (TSP) consists of finding the shortest route connecting n cities. A naive algorithm is to try all routes. The Held-Karp algorithm improves that to roughly 2^n n^2 steps. Show that 2^n n^2 ≺ n! and hence the Held-Karp algorithm is faster:

Both algorithms show that the complexity class of TSP is no worse than EXPTIME, which consists of problems that can be solved in time 2^p(n) for some polynomial p(n). For Held-Karp, using p(n)=c n for some c>1 suffices:

For the factorial, it is necessary to use a higher-degree polynomial, for example p(n)=n^2:

Approximate solutions can be found in polynomial time, so the approximate TSP is in the complexity class P of problems solvable in polynomial time. Any polynomial algorithm is eventually faster than an exponential one:
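A sketch of these comparisons (the counts are the ones quoted above; d stands for an arbitrary fixed polynomial degree):

AsymptoticLess[2^n n^2, n!, n -> Infinity]                    (* Held-Karp beats the naive search; should evaluate to True *)
AsymptoticLess[2^n n^2, 2^(2 n), n -> Infinity]               (* and fits the EXPTIME bound with p(n) = 2 n; should evaluate to True *)
AsymptoticLess[n^d, 2^n, n -> Infinity, Assumptions -> d > 0] (* any fixed-degree polynomial is beaten by an exponential; should evaluate to True *)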

Properties & Relations  (10)

AsymptoticLess is irreflexive, i.e. f≺f never holds:

It is transitive, i.e. f≺g and g≺h implies f≺h:

And it is asymmetric, i.e. f≺g implies that g≺f does not hold:
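An illustrative check of these properties with particular functions chosen for this sketch:

AsymptoticLess[x^2, x^2, x -> 0]   (* irreflexivity; should evaluate to False *)
AsymptoticLess[x^3, x^2, x -> 0] && AsymptoticLess[x^2, x, x -> 0] && AsymptoticLess[x^3, x, x -> 0]   (* transitivity in action; should evaluate to True *)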

AsymptoticLess[f[x],g[x],x->x0] iff Limit[Abs[f[x]/g[x]],x->x0]==0:

In particular, the limit has to exist:

If , then :

If , then :

If , then :

If , then :

If , then :

If and , then :

If and , then :

If and , then :


Text

Wolfram Research (2018), AsymptoticLess, Wolfram Language function, https://reference.wolfram.com/language/ref/AsymptoticLess.html.

CMS

Wolfram Language. 2018. "AsymptoticLess." Wolfram Language & System Documentation Center. Wolfram Research. https://reference.wolfram.com/language/ref/AsymptoticLess.html.

APA

Wolfram Language. (2018). AsymptoticLess. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/AsymptoticLess.html

BibTeX

@misc{reference.wolfram_2024_asymptoticless, author="Wolfram Research", title="{AsymptoticLess}", year="2018", howpublished="\url{https://reference.wolfram.com/language/ref/AsymptoticLess.html}", note="[Accessed: 21-November-2024]"}

BibLaTeX

@online{reference.wolfram_2024_asymptoticless, organization={Wolfram Research}, title={AsymptoticLess}, year={2018}, url={https://reference.wolfram.com/language/ref/AsymptoticLess.html}, note={[Accessed: 21-November-2024]}}