Delay Differential Equations
Comparison and Contrast with ODEs | The Method of Steps | Propagation and Smoothing of Discontinuities | Examples | Storing History Data
A delay differential equation is a differential equation where the time derivatives at the current time depend on the solution and possibly its derivatives at previous times:

x′(t) = f(t, x(t), x(t − τ1), …, x(t − τn), x′(t − σ1), …, x′(t − σm))

Instead of a simple initial condition, an initial history function x(t) = ϕ(t) for t ≤ t0 needs to be specified. The quantities τi ≥ 0 and σj ≥ 0 are called the delays or time lags. The delays may be constants, functions τi(t) and σj(t) of t (time-dependent delays), or functions τi(t, x(t)) and σj(t, x(t)) of t and x(t) (state-dependent delays). Delay equations with delays σj of the derivatives are referred to as neutral delay differential equations (NDDEs).
The equation processing code in NDSolve has been designed so that you can input a delay differential equation in essentially mathematical notation.
x[t-τ] | dependent variable x with delay τ |
x[t /; t ≤ t0] == ϕ | specification of the initial history function as the expression ϕ for t ≤ t0
Inputting delays and initial history.
Currently, the implementation for DDEs in NDSolve only supports constant delays.
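For example, a minimal sketch of the input syntax above (the equation, delay, and history here are illustrative choices):

(* x'(t) = -x(t - 1) for t >= 0 with constant initial history x(t) = 1 for t <= 0 *)
sol = NDSolve[{x'[t] == -x[t - 1], x[t /; t <= 0] == 1}, x, {t, 0, 5}];
Plot[Evaluate[x[t] /. sol], {t, 0, 5}]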
For simplicity, this documentation is written assuming that integration always proceeds from smaller to larger t. However, NDSolve supports integration in the other direction if the initial history function is given for values above t0 and the delays are negative.
Comparison and Contrast with ODEs
While DDEs look a lot like ODEs, the theory for them is considerably more complicated, and there are some surprising differences from ODEs. This section shows a few examples of the differences.
As long as the initial history function ϕ satisfies ϕ(t0) = 1, the solution for t ≥ t0 is identically 1. [Z06] With ODEs, you can always integrate backward in time from a solution to obtain the initial condition; for a DDE, the initial history function generally cannot be recovered from the solution in this way.
For small enough delays, the solutions are monotonic; for somewhat larger delays, the solutions oscillate; and for large enough delays, the solutions approach a limit cycle. Of course, for the corresponding scalar ODE, the solutions are monotonic regardless of the parameter values.
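As an illustration, the following sketch uses the delayed logistic equation x′(t) = x(t)(1 − x(t − τ)) as an assumed stand-in (the equation actually used above is not reproduced here):

(* small delay: monotone approach to 1; moderate delay: damped oscillation; larger delay: limit cycle *)
logSol[τ_] := NDSolveValue[{x'[t] == x[t] (1 - x[t - τ]), x[t /; t <= 0] == 1/10}, x, {t, 0, 50}]
Plot[Evaluate[{logSol[0.3][t], logSol[1][t], logSol[1.7][t]}], {t, 0, 50}, PlotRange -> All]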
This simple scalar delay differential equation has chaotic solutions, and the motion shown above looks very much like Brownian motion. [S07] As the delay is increased, a limit cycle appears, followed eventually by a period-doubling cascade leading to chaos.
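A sketch of this kind of computation, assuming the scalar equation from [S07] has the form x′(t) = sin(x(t − τ)) (the delay and history values below are illustrative):

(* for large enough delay, this scalar DDE has chaotic, random-walk-like solutions *)
xs = NDSolveValue[{x'[t] == Sin[x[t - 8]], x[t /; t <= 0] == 0.1}, x, {t, 0, 500}, MaxSteps -> 10^6];
Plot[xs[t], {t, 0, 500}, PlotPoints -> 200]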
Stability is much more complicated for delay equations as well. It is well known that the linear ODE test equation x′(t) = λ x(t) has asymptotically stable solutions if Re(λ) < 0 and is unstable if Re(λ) > 0.
The closest corresponding DDE is x′(t) = λ x(t − τ). Even if you consider just real λ and τ, the situation is no longer so clear-cut. Shown below are some plots of solutions indicating this.
So, for the same value of λ, the solution can be stable or unstable depending on the value of the delay τ. A Manipulate is set up below so that you can investigate the (λ, τ) parameter plane.
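A sketch of such a Manipulate, assuming the real test equation x′(t) = λ x(t − τ) with constant history 1 (the exact equation, history, and control ranges are assumptions here):

(* explore how the stability of x'(t) = λ x(t - τ) depends on λ and τ *)
Manipulate[
 Module[{xs = NDSolveValue[{x'[t] == λ x[t - τ], x[t /; t <= 0] == 1}, x, {t, 0, 25}]},
  Plot[xs[t], {t, 0, 25}, PlotRange -> All]],
 {{λ, -1}, -3, 1}, {{τ, 1}, 0.1, 3}]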
Propagation and Smoothing of Discontinuities
The way discontinuities are propagated by the delays is an important feature of DDEs and has a profound effect on numerical methods for solving them.
In the example above, x(t) is continuous, but there is a jump discontinuity in x′(t) at the initial time t0: approaching from the left, the value is the derivative of the initial history function, ϕ′(t0), while approaching from the right, the value is given by the DDE itself, and the two generally differ.
Near t0 + τ, the delayed term x(t − τ) is continuous because x is continuous at t0, and so x′ is continuous at t0 + τ.
Differentiating the equation, we can conclude that x′′(t) depends on x′(t − τ), so x′′ has a jump discontinuity at t0 + τ. Using essentially the same argument as above, we can conclude that at t0 + 2τ the second derivative is continuous.
Similarly, the kth derivative x^(k) is continuous at t0 + k τ or, in other words, at t0 + k τ the solution x is k times continuously differentiable. This is referred to as smoothing and holds generally for non-neutral delay equations. In some cases the smoothing can be faster than one order per interval. [Z06]
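The smoothing can be seen by plotting successive derivatives of a solution. The particular equation below, x′(t) = −x(t − 1) with constant history 1, is an illustrative choice, not necessarily the example referred to above:

(* x' has a jump at t = 0, x'' at t = 1, x''' at t = 2, and so on *)
xs = NDSolveValue[{x'[t] == -x[t - 1], x[t /; t <= 0] == 1}, x, {t, 0, 4}];
GraphicsRow[{Plot[xs'[t], {t, 0, 4}], Plot[xs''[t], {t, 0, 4}], Plot[xs'''[t], {t, 0, 4}]}]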
For neutral delay equations the situation is quite different.
It is easy to see that the solution is piecewise smooth, with x itself continuous. However, the derivative x′ has a jump discontinuity at every non-negative integer.
In general, there is no smoothing of discontinuities for neutral DDEs.
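For example, with the neutral equation x′(t) = −x′(t − 1) and history x(t) = −t for t ≤ 0 (an illustrative choice, not the example used above), the unit jump in x′ at t = 0 recurs undiminished at every positive integer:

(* the derivative of the solution never gets smoother; it jumps at every non-negative integer *)
xs = NDSolveValue[{x'[t] == -x'[t - 1], x[t /; t <= 0] == -t}, x, {t, 0, 4}];
Plot[xs'[t], {t, 0, 4}, Exclusions -> Range[0, 4]]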
The propagation of discontinuities is very important from the standpoint of numerical solvers. If the possible discontinuity points are ignored, the order of the solver will be reduced. If a discontinuity point is known, a more accurate solution can be found by integrating just up to the discontinuity point and then restarting the method just past the point with the new function values. This way, the integration method is used on smooth parts of the solution, leading to better accuracy and fewer rejected steps. From any given discontinuity point, future discontinuity points can be determined from the delays and detected by treating them as events to be located.
When there are multiple delays, the propagation of discontinuities can become quite complicated.
It is clear from the plot that there is a discontinuity at each non-negative integer, as would be expected from the neutral delay. However, looking at the second and third derivatives, it is clear that there are also discontinuities at points propagated from the jump discontinuities by the other delay.
In fact, there is a whole tree of discontinuities that are propagated forward in time. A way of determining and displaying the discontinuity tree for a solution interval is shown in the subsection below.
Discontinuity Tree
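The code from the original subsection is not reproduced here. The following is a minimal sketch of the underlying computation (the function name and the delays in the usage line are illustrative): it collects the discontinuity points generated from the initial time t0 by repeatedly adding the constant delays, up to the end of the interval.

(* every known discontinuity point p spawns new ones at p + τ for each constant delay τ > 0 *)
discontinuityPoints[delays_List, {t0_, tend_}] :=
 Module[{points = {t0}, new = {t0}},
  While[new =!= {},
   new = Complement[Select[Union @@ Outer[Plus, new, delays], # <= tend &], points];
   points = Union[points, new]];
  Sort[points, Less]]

(* example with two constant delays on the interval [0, 5] *)
discontinuityPoints[{1, Sqrt[2]}, {0, 5}]

This returns the propagated points as a sorted list; displaying them as a tree is a matter of also recording which point each new point was generated from.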
Storing History Data
Once the solution has advanced beyond the first discontinuity point, some of the delayed values that need to be computed are outside the domain of the initial history function, and the computed solution needs to be used to get the values, typically by interpolating between steps previously taken. For the DDE solution to be accurate, it is essential that the interpolation be as accurate as the method. This is achieved by using dense output for the ODE integration method (the output you get if you use the option InterpolationOrder->All in NDSolve). NDSolve has a general algorithm for obtaining dense output from most methods, so you can use just about any method as the integrator. Some methods, including the default for DDEs, have their own way of getting dense output, which is usually more efficient than the general method. Methods of low enough order, such as "ExplicitRungeKutta" with "DifferenceOrder"->3, can just use a cubic Hermite polynomial as the dense output, so there is essentially no extra cost in keeping the history.
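For example, a particular integrator can be selected with the Method option (the equation here is illustrative):

(* use third-order explicit Runge-Kutta; its dense output is just a cubic Hermite polynomial *)
NDSolve[{x'[t] == -x[t - 1], x[t /; t <= 0] == 1}, x, {t, 0, 10},
 Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 3}]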
Since the history data is accessed frequently, it needs to have a quick lookup mechanism to determine which step to interpolate within. In NDSolve, this is done with a binary search mechanism and the search time is negligible compared with the cost of actual function evaluation.
The data for each successful step is saved before attempting the next step, and is saved in a data structure that can repeatedly be expanded efficiently. When NDSolve produces the solution, it simply takes this data and restructures it into an InterpolatingFunction object, so DDE solutions are always returned with dense output.
The Method of Steps
For constant delays, it is possible to determine the entire set of discontinuity points in advance as fixed times. The idea of the method of steps is to simply integrate over the smooth intervals between successive discontinuity points and restart on the next interval, being sure to reevaluate the function from the right. As long as the intervals do not get too small, the method works quite well in practice.
The method currently implemented for NDSolve is based on the method of steps.
Symbolic Method of Steps
This section defines a symbolic method of steps that illustrates how the method works. Note that to keep the code simpler and more to the point, it does not do any real argument checking. Also, the data structure and lookup for the history are not implemented in an efficient way, but for symbolic solutions this is a minor issue.
The method will also work for neutral DDEs.
The symbolic method will also work with symbolic parameter values as long as DSolve is still able to find the solution.
The reason the code was designed to take lists was so that it would work with systems.
Since the method computes the discontinuity tree, it will also work for multiple constant delays. However, with multiple delays, the solution may become quite complicated quickly and DSolve can bog down with huge expressions.
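The full list-based implementation described above is not reproduced here. As a minimal sketch of the idea, the following function (all names are illustrative) handles only a scalar DDE with a single constant, non-neutral delay: on each interval it substitutes the known history into the delayed term and calls DSolve on the resulting ODE.

(* sketch of the method of steps for a scalar DDE x'[t] == rhs with one constant delay tau *)
(* phi is the history, given as an expression in t valid for t <= t0 *)
methodOfSteps[rhs_, x_, t_, {t0_, tend_}, tau_, phi_] :=
 Module[{hist = phi, a = t0, b, piece},
  While[a < tend,
   b = Min[a + tau, tend];
   (* on [a, b] the delayed value x[t - tau] is known from the history computed so far, *)
   (* so the DDE reduces to an ODE that DSolve can handle *)
   piece = DSolveValue[
     {x'[t] == (rhs /. x[t - tau] :> (hist /. t -> t - tau)),
      x[a] == (hist /. t -> a)}, x[t], t];
   (* extend the history with the new piece, valid for t > a *)
   hist = Simplify[Piecewise[{{hist, t <= a}}, piece]];
   a = b];
  Function @@ {t, hist}]

(* example: x'(t) = -x(t - 1) with history x(t) = 1 for t <= 0 *)
xsol = methodOfSteps[-x[t - 1], x, t, {0, 3}, 1, 1];
xsol[5/2]

The nested Piecewise expression built up here plays the role of the history data structure; it is exactly the kind of expression that can grow quickly with multiple delays, as noted above.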
Examples
Lotka–Volterra Equations with Delay
The Lotka–Volterra system models the growth and interaction of animal species assuming that the effect of one species on another is continuous and immediate. A delayed effect of one species on another can be modeled by introducing time lags in the interaction terms.
With no delays, the system reduces to the classical Lotka–Volterra equations, which have an invariant that is constant along solutions for all time, and there is a (neutrally) stable periodic solution.
In this example, the effect of even a small delay is to destabilize the periodic orbit. With different parameters in the delayed Lotka–Volterra system it has been shown that there are globally attractive equilibria. [TZ08]
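The system used above is not reproduced here; the following sketch uses an assumed delayed form with illustrative parameter values, placing the delay in the interaction terms:

(* a delayed Lotka-Volterra system; the delayed form, delay, and initial history are illustrative *)
{xs, ys} = NDSolveValue[{
    x'[t] == x[t] (1 - y[t - 1/10]),
    y'[t] == y[t] (x[t - 1/10] - 1),
    x[t /; t <= 0] == 1/2, y[t /; t <= 0] == 1/4},
   {x, y}, {t, 0, 40}];
ParametricPlot[{xs[t], ys[t]}, {t, 0, 40}, PlotRange -> All]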
Enzyme Kinetics
This system models enzyme kinetics, where a substrate supply is maintained at a constant level and molecules of the end product inhibit one of the reaction steps. [HNW93]
The system has an equilibrium at which all of the time derivatives vanish.
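The input for this system is not reproduced above. The following sketch shows how a system of this general shape can be set up in NDSolve; the inhibition term, supply rate, delay, and initial values are purely illustrative and not necessarily those of [HNW93]:

(* four-stage reaction chain in which the delayed end product y4 inhibits the first step *)
inhib[s_] := 1/(1 + 0.0005 s^3);   (* assumed form of the inhibition term *)
sol = NDSolveValue[{
    y1'[t] == 10.5 - inhib[y4[t - 4]] y1[t],
    y2'[t] == inhib[y4[t - 4]] y1[t] - y2[t],
    y3'[t] == y2[t] - y3[t],
    y4'[t] == y3[t] - 0.5 y4[t],
    y1[t /; t <= 0] == 60, y2[t /; t <= 0] == 10,
    y3[t /; t <= 0] == 10, y4[t /; t <= 0] == 20},
   {y1, y2, y3, y4}, {t, 0, 160}];
Plot[Evaluate[Through[sol[t]]], {t, 0, 160}, PlotRange -> All]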
Mackey–Glass Equation
The Mackey–Glass equation was proposed to model the production of white blood cells. There are both periodic and chaotic solutions.
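The Mackey–Glass equation is typically written as x′(t) = β x(t − τ)/(1 + x(t − τ)^n) − γ x(t). The parameter values in the sketch below (β = 0.2, γ = 0.1, n = 10, τ = 17) are commonly used ones that give chaotic solutions and are not necessarily those used in the original example:

(* Mackey-Glass equation with commonly used chaotic parameter values and constant history 0.5 *)
mg = NDSolveValue[{x'[t] == 0.2 x[t - 17]/(1 + x[t - 17]^10) - 0.1 x[t],
    x[t /; t <= 0] == 0.5}, x, {t, 0, 600}, MaxSteps -> 10^6];
Plot[mg[t], {t, 0, 600}]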
It is interesting to check the accuracy of the chaotic solution.
By the end of the interval, the differences between the methods are of order 1. Such large deviations are typical for chaotic systems, and in practice it is usually neither possible nor necessary to obtain a highly accurate solution over a long interval. However, if you do want a high-quality solution, NDSolve allows you to use higher precision. For DDEs with higher precision, the "StiffnessSwitching" method is recommended.
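A sketch of such a higher-precision computation for the Mackey–Glass equation above (the working precision and interval are illustrative; the coefficients are written as exact rationals so that they do not limit the precision):

(* solve with 32-digit working precision using the "StiffnessSwitching" method *)
mg32 = NDSolveValue[{x'[t] == (2/10) x[t - 17]/(1 + x[t - 17]^10) - (1/10) x[t],
    x[t /; t <= 0] == 1/2}, x, {t, 0, 100},
   Method -> "StiffnessSwitching", WorkingPrecision -> 32];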