AsymptoticLess
AsymptoticLess[f,g,x→x^{*}]
gives conditions for f≺g or f=o(g) as x→x^{*}.
AsymptoticLess[f,g,{x_{1},…,x_{n}}→{x_{1}^{*},…,x_{n}^{*}}]
gives conditions for f≺g or f=o(g) as {x_{1},…,x_{n}}→{x_{1}^{*},…,x_{n}^{*}}.
Details and Options
 Asymptotic less is also expressed as f is little-o of g, f is of order less than g, f grows slower than g, and f is dominated by g. The point x^{*} is often assumed from context.
 Asymptotic less is an order relation and means Abs[f[x]]≤c Abs[g[x]] when x is near x^{*}, for all constants c>0.
 Typical uses include expressing simple strict upper bounds for functions and sequences near some point. It is frequently used for solutions to equations and to give simple strict upper bounds for computational complexity.
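A minimal sketch of such a strict bound (the functions here are illustrative, not from the original page):

```wolfram
(* Log[x] is a strictly slower-growing function than x at infinity *)
AsymptoticLess[Log[x], x, x -> Infinity]
(* True *)
```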
 For a finite limit point x^{*} and {x_{1}^{*},…,x_{n}^{*}}:

AsymptoticLess[f[x],g[x],x→x^{*}]    for all ε>0 there exists δ>0 such that 0<Abs[x-x^{*}]<δ implies Abs[f[x]]≤ε Abs[g[x]]
AsymptoticLess[f[x_{1},…,x_{n}],g[x_{1},…,x_{n}],{x_{1},…,x_{n}}→{x_{1}^{*},…,x_{n}^{*}}]    for all ε>0 there exists δ>0 such that 0<Norm[{x_{1},…,x_{n}}-{x_{1}^{*},…,x_{n}^{*}}]<δ implies Abs[f[x_{1},…,x_{n}]]≤ε Abs[g[x_{1},…,x_{n}]]

 For an infinite limit point:

AsymptoticLess[f[x],g[x],x→∞]    for all ε>0 there exists c such that x>c implies Abs[f[x]]≤ε Abs[g[x]]
AsymptoticLess[f[x_{1},…,x_{n}],g[x_{1},…,x_{n}],{x_{1},…,x_{n}}→{∞,…,∞}]    for all ε>0 there exists c such that x_{1}>c,…,x_{n}>c implies Abs[f[x_{1},…,x_{n}]]≤ε Abs[g[x_{1},…,x_{n}]]

 AsymptoticLess[f[x],g[x],x→x^{*}] holds if and only if Limit[Abs[f[x]/g[x]],x→x^{*}]==0 when g[x] does not have an infinite set of zeros near x^{*}.
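A quick illustration of this limit characterization (example functions assumed, not from the original page):

```wolfram
(* x^2 ≺ x as x -> 0, and the corresponding ratio limit vanishes *)
AsymptoticLess[x^2, x, x -> 0]       (* True *)
Limit[Abs[x^2/x], x -> 0]            (* 0 *)
```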
 The following options can be given:

Assumptions    $Assumptions    assumptions on parameters
Direction    Reals    direction to approach the limit point
GenerateConditions    Automatic    generate conditions for parameters
Method    Automatic    method to use
PerformanceGoal    "Quality"    what to optimize

 Possible settings for Direction include:

Reals or "TwoSided"    from both real directions
"FromAbove" or -1    from above or larger values
"FromBelow" or +1    from below or smaller values
Complexes    from all complex directions
Exp[I θ]    in the direction Exp[I θ]
{dir_{1},…,dir_{n}}    use direction dir_{i} for variable x_{i} independently

 Direction→Exp[I θ] at x^{*} indicates the direction tangent of a curve approaching the limit point x^{*}.
 Possible settings for GenerateConditions include:

Automatic    nongeneric conditions only
True    all conditions
False    no conditions
None    return unevaluated if conditions are needed

 Possible settings for PerformanceGoal include $PerformanceGoal, "Quality" and "Speed". With the "Quality" setting, AsymptoticLess typically solves more problems or produces simpler results, but it potentially uses more time and memory.
Examples
Basic Examples (2)
Scope (9)
Compare functions that are not strictly positive:
Show diverges more slowly than at the origin:
Answers may be Boolean expressions rather than explicit True or False:
When comparing functions with parameters, conditions for the result may be generated:
By default, a two-sided comparison of the functions is made:
When comparing larger values of , is indeed smaller than :
The relationship fails for smaller values of :
Functions like Sqrt may have the same relationship in both real directions along the negative reals:
If approached from above in the complex plane, the same relationship is observed:
However, approaching from below in the complex plane produces a different result:
This is due to a branch cut where the imaginary part of Sqrt reverses sign as the axis is crossed:
Hence, the relationship does not hold in the complex plane in general:
Visualize the relative sizes of the functions when approached from the four real and imaginary directions:
Compare multivariate functions:
Visualize the norms of the two functions:
Options (10)
Assumptions (1)
Specify conditions on parameters using Assumptions:
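A sketch of this usage (the parameter a and the bound a<1 are assumptions for illustration):

```wolfram
(* With a < 1 assumed, x^a grows strictly slower than x at infinity *)
AsymptoticLess[x^a, x, x -> Infinity, Assumptions -> a < 1]
(* True *)
```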
Direction (5)
Test the relation at piecewise discontinuities:
Since it fails in one direction, the two-sided result is false as well:
Visualize the two functions and their ratio:
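As a hypothetical instance of such a discontinuity (the page's own functions were not preserved), a relation can hold from one side of a jump and fail from the other:

```wolfram
(* A piecewise function that is quadratic on the right, linear on the left *)
f[x_] := Piecewise[{{x^2, x > 0}, {x, x <= 0}}];
AsymptoticLess[f[x], x, x -> 0, Direction -> "FromAbove"]  (* True: x^2 ≺ x *)
AsymptoticLess[f[x], x, x -> 0, Direction -> "FromBelow"]  (* False: x ⊀ x *)
AsymptoticLess[f[x], x, x -> 0]                            (* False two-sided *)
```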
The relationship at a pole is independent of the direction of approach:
Test the relation at a branch cut:
Compute the relation, approaching from different quadrants:
Approaching the origin from the first quadrant:
Approaching the origin from the second quadrant:
Approaching the origin from the right half-plane:
Approaching the origin from the bottom half-plane:
Visualize the ratio of the functions, which becomes small near the origin except along :
GenerateConditions (3)
Return a result without stating conditions:
This result is only valid if n>0:
Return unevaluated if the results depend on the value of parameters:
By default, conditions are generated that guarantee a result:
By default, conditions are not generated if only special values invalidate the result:
With GenerateConditions→True, even these nongeneric conditions are reported:
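A sketch of the option in use, with a symbolic exponent a assumed for illustration:

```wolfram
(* Conditions on the parameter are reported alongside the result *)
AsymptoticLess[x^a, x^2, x -> Infinity, GenerateConditions -> True]
(* reports a condition such as a < 2 *)
```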
PerformanceGoal (1)
Use PerformanceGoal to avoid potentially expensive computations:
The default setting uses all available techniques to try to produce a result:
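A sketch of the option usage (the functions are assumed for illustration); with the "Speed" setting, the comparison may return unevaluated rather than attempt costly transformations:

```wolfram
AsymptoticLess[Gamma[x], x^x, x -> Infinity, PerformanceGoal -> "Speed"]
```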
Applications (18)
Basic Applications (4)
This relationship extends to all real powers:
Visualize functions with fractional and negative powers:
Visualize the relationship in a logarithmic plot:
This relationship extends to all real powers:
Visualize functions with fractional and negative powers:
Absolute and Relative Errors (2)
A function g[x] approximates f[x] as x→x^{*} with small absolute error if f[x]-g[x]≺1 as x→x^{*}. Show that approximates with small absolute error as :
Show that also approximates with small absolute error as :
Show that approximates with small absolute error as :
Show that approximates with small absolute error as :
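A sketch under the definition above, using the classic quadratic approximation of Cos (an assumed example, since the page's own functions were stripped):

```wolfram
(* The error Cos[x] - (1 - x^2/2) is O(x^4), hence ≺ 1 at the origin *)
AsymptoticLess[Cos[x] - (1 - x^2/2), 1, x -> 0]
(* True *)
```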
A function g[x] approximates f[x] as x→x^{*} with small relative error if (f[x]-g[x])/g[x]≺1 as x→x^{*}. Show that approximates with small relative error as :
Show that approximates with small relative error as :
But the preceding approximation does not have small absolute error:
Similarly, Stirling's formula approximation to has small relative error as :
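The Stirling check can be sketched directly, writing the formula as Sqrt[2 Pi n] (n/E)^n:

```wolfram
(* The error of Stirling's formula is asymptotically smaller than n! itself *)
AsymptoticLess[n! - Sqrt[2 Pi n] (n/E)^n, n!, n -> Infinity]
(* True *)
```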
Asymptotic Approximation (6)
Let f[x] be a function and g[x] an approximation to f[x] near x^{*}. Then the approximation is asymptotic if f[x]-g[x]≺g[x] at x^{*}. In other words, the remainder or error is asymptotically smaller than the approximation. Show that is an asymptotic approximation to at :
Show that is an asymptotic approximation to at :
Show that is not an asymptotic approximation to at :
Show that Stirling's formula is an asymptotic approximation to as :
Show that is an asymptotic approximation to as :
Series generates asymptotic approximations to elementary and special functions. For instance, generate a degree-10 approximation to at :
Show that the series is asymptotic:
Determine an asymptotic series of Cot[x] at 0:
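A sketch of this Cot computation and the asymptotic check that follows it (the truncation order is assumed):

```wolfram
approx = Normal[Series[Cot[x], {x, 0, 3}]];   (* 1/x - x/3 - x^3/45 *)
(* The remainder is asymptotically smaller than the approximation at 0 *)
AsymptoticLess[Cot[x] - approx, approx, x -> 0]
(* True *)
```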
Show that a series of Gamma[x] is asymptotic at 1:
Find an asymptotic approximation of at 0:
There can be subtleties with asymptotic approximation when the function to be approximated approaches zero infinitely many times in every neighborhood of the approximation point. As an example, consider the asymptotic expansion of near :
The approximation is not asymptotic because at every zero of the Bessel function, the approximation is not quite zero:
Despite the ratio generally approaching zero, is violated infinitely many times:
On the other hand, consider the approximation of the never-zero Hankel function :
This approximation is asymptotic:
So is the approximation of the Hankel function of the second kind, :
As , its approximation can be understood as nearly asymptotic, being the sum of two such approximations:
Alternatively, consider the asymptotic approximation of near :
This is a true asymptotic approximation:
The limit of the ratio of the approximation and the function approaches 1 consistently:
Use AsymptoticIntegrate to generate asymptotic approximations to definite integrals. For instance, find an asymptotic approximation to as and compare to the exact value:
Create an asymptotic approximation with a smaller number of terms:
This approximation is asymptotic to the exact integral as well as the first approximation:
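A sketch of this workflow on a Watson's lemma-type integral (the integrand and expansion orders are assumed for illustration):

```wolfram
(* Two- and one-term expansions of the same integral as s -> Infinity *)
a2 = AsymptoticIntegrate[E^(-s t)/(1 + t), {t, 0, Infinity}, {s, Infinity, 2}];
a1 = AsymptoticIntegrate[E^(-s t)/(1 + t), {t, 0, Infinity}, {s, Infinity, 1}];
(* The extra term is asymptotically smaller than the leading approximation *)
AsymptoticLess[a2 - a1, a1, s -> Infinity]
(* True *)
```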
Use AsymptoticIntegrate to generate asymptotic approximations to indefinite integrals, though there is a need to account for the constant of integration. Consider an approximation of as :
Show that the approximation as given is not asymptotic:
However, the approximation is asymptotic after matching values at the expansion point:
Compute two different asymptotic approximations of as :
There is no symbolic result to compare to, but the process can be shown to be asymptotic:
Use AsymptoticDSolveValue to generate asymptotic approximations to a differential equation:
There is no exact result to compare to, but the process can be shown to be asymptotic:
Compare with the values of a numerical solution obtained using NDSolveValue:
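A sketch of the whole chain with a simple equation assumed for illustration:

```wolfram
(* Degree-3 series solution of y' == y, y[0] == 1, i.e. 1 + x + x^2/2 + x^3/6 *)
sol = AsymptoticDSolveValue[{y'[x] == y[x], y[0] == 1}, y[x], {x, 0, 3}];
(* The remainder relative to the exact solution E^x is asymptotically small *)
AsymptoticLess[E^x - sol, sol, x -> 0]   (* True *)
(* Compare with a numerical solution at a sample point *)
num = NDSolveValue[{y'[x] == y[x], y[0] == 1}, y, {x, 0, 1}];
{sol /. x -> 0.1, num[0.1]}
```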
Asymptotic Scales & Series (2)
A sequence of functions φ_{1},φ_{2},… is an asymptotic scale at x^{*} iff φ_{n+1}≺φ_{n} at x^{*}. Write a function that can check whether a finite list of functions is an asymptotic scale:
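One way such a checker could be sketched (the name asymptoticScaleQ is hypothetical):

```wolfram
(* True when each function is asymptotically less than its predecessor *)
asymptoticScaleQ[funs_List, lim_Rule] :=
  AllTrue[Partition[funs, 2, 1],
    AsymptoticLess[#[[2]], #[[1]], lim] === True &]

asymptoticScaleQ[{1/x, 1, x, x^2}, x -> 0]
(* True *)
```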
Integer powers give an asymptotic scale at , with as :
They also form an asymptotic scale at , but in the opposite order:
Powers of form an asymptotic scale, with as :
Form an asymptotic scale using and , with as :
Write a function that sorts functions by the asymptotic ordering at some point :
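A possible sketch of such a sorter (the name sortAsymptotic is hypothetical, and pairwise comparability is assumed):

```wolfram
(* Order functions so that slower-growing ones come first *)
sortAsymptotic[funs_List, lim_Rule] :=
  Sort[funs, AsymptoticLess[#1, #2, lim] === True &]

sortAsymptotic[{x^3, x, x^7, x^2}, x -> Infinity]
(* {x, x^2, x^3, x^7} *)
```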
Generate a random list of integer power functions:
Computational Complexity (4)
Simple sorting algorithms (bubble sort, insertion sort) take approximately a n^{2} steps to sort n objects, while optimal general algorithms (heap sort, merge sort) take approximately b n Log[n] steps to do the sort. Show that optimal algorithms always take less time to sort large enough collections of objects:
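A sketch of this comparison (the constants a and b are symbolic, assumed positive):

```wolfram
(* n Log[n] is asymptotically less than n^2, so optimal sorts win eventually *)
AsymptoticLess[b n Log[n], a n^2, n -> Infinity,
  Assumptions -> {a > 0, b > 0}]
(* True *)
```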
Certain special algorithms (counting sort, radix sort), where there is information ahead of time about the possible inputs, can run in c n time. These are even faster than the optimal algorithms when they can be used:
Visualize the growth of the three time scales:
In a bubble sort, adjoining neighbors are compared and swapped if they are out of order. After one pass of n-1 comparisons, the largest element is at the end. The process is then repeated on the remaining n-1 elements, and so forth, until only the two elements at the very beginning remain. If a comparison and swap takes c steps, the total number of steps for the sort is:
The polynomial is asymptotically approximated by just the quadratic term:
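A sketch of this computation (the step cost c is assumed positive):

```wolfram
(* Total steps: c (n-1) + c (n-2) + ... + c, i.e. c n (n-1)/2 *)
total = Sum[c k, {k, 1, n - 1}];
(* The leftover -c n/2 is dominated by the quadratic term *)
AsymptoticLess[total - c n^2/2, c n^2/2, n -> Infinity,
  Assumptions -> c > 0]
(* True *)
```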
In a merge sort, the list of elements is split in two, each half is sorted, and then the two halves are combined. Thus, the time T[n] to do the sort will be the sum of some constant time b to compute the middle, 2T[n/2] to sort each half, and some multiple a n of the number of elements to combine the two halves:
Solve the recurrence equation to find the time T[n] to sort n elements:
This expression is asymptotically approximated by just the last term in the expression:
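A sketch using the closed form of the recurrence (derived here by hand, not taken from the page; a, b, c are assumed positive):

```wolfram
(* T[n] = a n Log2[n] + (b + c) n - b solves T[n] == 2 T[n/2] + a n + b, T[1] == c *)
T[n_] := a n Log[2, n] + (b + c) n - b;
Simplify[T[n] - (2 T[n/2] + a n + b), Assumptions -> n > 0]   (* 0 *)
(* The n Log[n] term dominates the remaining linear terms *)
AsymptoticLess[T[n] - a n Log[2, n], a n Log[2, n], n -> Infinity,
  Assumptions -> {a > 0, b > 0, c > 0}]
(* True *)
```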
The traveling salesperson problem (TSP) consists of finding the shortest route connecting n cities. A naive algorithm is to try all n! routes. The Held–Karp algorithm improves that to roughly 2^{n} n^{2} steps. Show that 2^{n} n^{2}≺n! and hence the Held–Karp algorithm is faster:
Both algorithms show that the complexity class of TSP is no worse than EXPTIME, which consists of problems that can be solved in time 2^{p(n)} for some polynomial p(n). For Held–Karp, a linear polynomial p(n) suffices:
For the factorial, it is necessary to use a higher-degree polynomial, for example p(n)=n^{2}:
Approximate solutions can be found in time, so the approximate TSP is in the complexity class P of problems solvable in polynomial time. Any polynomial algorithm is faster than an exponential one, or :
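The polynomial-versus-exponential claim can be sketched with a concrete pair (the degree 100 is an arbitrary illustration):

```wolfram
(* Even a very high-degree polynomial is dominated by an exponential *)
AsymptoticLess[n^100, 2^n, n -> Infinity]
(* True *)
```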
Properties & Relations (10)
AsymptoticLess is irreflexive, i.e. f≺f never holds:
It is transitive, i.e. f≺g and g≺h implies f≺h:
And it is antisymmetric, i.e. f≺g implies that g≺f does not hold:
AsymptoticLess[f[x],g[x],x→x_{0}] iff Limit[Abs[f[x]/g[x]],x→x_{0}]==0:
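An illustrative pair for this equivalence (functions assumed, not from the original page):

```wolfram
(* 1 ≺ E^x at infinity, and the corresponding ratio limit is 0 *)
AsymptoticLess[1, E^x, x -> Infinity]    (* True *)
Limit[Abs[1/E^x], x -> Infinity]         (* 0 *)
```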