# IDA Method for NDSolve

The IDA package is part of the SUNDIALS (SUite of Nonlinear and DIfferential/ALgebraic equation Solvers) developed at the Center for Applied Scientific Computing of Lawrence Livermore National Laboratory. As described in the IDA user guide [HT99], "IDA is a general purpose solver for the initial value problem for systems of differential-algebraic equations (DAEs). The name IDA stands for Implicit Differential-Algebraic solver. IDA is based on DASPK ...". DASPK [BHP94], [BHP98] is a Fortran code for solving large-scale differential-algebraic systems.

In the Wolfram Language, an interface has been provided to the IDA package so that, rather than writing a C function to evaluate the residual and compiling the program, you simply give your equations to NDSolve, and the Wolfram Language generates the residual function automatically from them.

IDA solves the DAE system

$$F(t, y, \dot y) = 0, \qquad y(t_0) = y_0, \quad \dot y(t_0) = \dot y_0 \tag{1}$$

with Backward Differentiation Formula (BDF) methods of orders 1 through 5, implemented using a variable-step form. The BDF of order $q$ at time $t_n$ is given by the formula

$$\sum_{i=0}^{q} \alpha_{n,i}\, y_{n-i} = h_n\, \dot y_n, \qquad h_n = t_n - t_{n-1}.$$

The coefficients $\alpha_{n,i}$ depend on the order $q$ and on past step sizes. Applying the BDF to the DAE (1) gives a system of nonlinear equations to solve at each step:

$$G(y_n) = F\Big(t_n,\; y_n,\; h_n^{-1} \sum_{i=0}^{q} \alpha_{n,i}\, y_{n-i}\Big) = 0.$$

The solution of the nonlinear system is achieved by Newton-type methods, typically using an approximation to the Jacobian

$$J = \frac{\partial F}{\partial y} + c\, \frac{\partial F}{\partial \dot y}, \qquad c = \frac{\alpha_{n,0}}{h_n}. \tag{2}$$
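The BDF/Newton structure described above can be sketched concretely. The following Python program is an illustration only (it is not how NDSolve or IDA is implemented): it takes order-1 BDF (backward Euler) steps on a small, made-up index-1 DAE in residual form, solving the nonlinear system at each step with Newton iterations built on the matrix $J = \partial F/\partial y + c\,\partial F/\partial \dot y$ with $c = 1/h$.

```python
import numpy as np

# A small made-up index-1 DAE in residual form F(t, y, y') = 0:
#   y0' + y0   = 0   (differential equation, exact solution y0 = e^-t)
#   y1 - y0^2  = 0   (algebraic constraint)
def F(t, y, yp):
    return np.array([yp[0] + y[0], y[1] - y[0]**2])

# Newton iteration matrix J = dF/dy + c*dF/dy'
def J(t, y, yp, c):
    dFdy = np.array([[1.0, 0.0], [-2.0*y[0], 1.0]])
    dFdyp = np.array([[1.0, 0.0], [0.0, 0.0]])
    return dFdy + c*dFdyp

def bdf1_step(t, y_prev, h):
    """One order-1 BDF (backward Euler) step: y' ~ (y - y_prev)/h, so c = 1/h."""
    c = 1.0/h
    y = y_prev.copy()  # use the previous value as the initial Newton guess
    for _ in range(20):
        yp = c*(y - y_prev)
        dy = np.linalg.solve(J(t + h, y, yp, c), F(t + h, y, yp))
        y = y - dy
        if np.linalg.norm(dy) < 1e-12:
            break
    return y

# Integrate from t = 0 to t = 1 with fixed steps (IDA itself varies h and order)
y = np.array([1.0, 1.0])  # consistent initial values
h = 0.01
for k in range(100):
    y = bdf1_step(k*h, y, h)
```

At $t = 1$ the computed values approximate $(e^{-1}, e^{-2})$ to first-order accuracy, and the algebraic constraint is satisfied to Newton tolerance. IDA's higher-order BDFs follow the same residual/Newton structure, just with more past values in the sum.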

"Its [IDA's] most notable feature is that, in the solution of the underlying nonlinear system at each time step, it offers a choice of Newton/direct methods or an Inexact Newton/Krylov (iterative) method." [HT99] In the Wolfram Language, you can access these solvers using method options or use the default Wolfram Language LinearSolve, which switches automatically to direct sparse solvers for large problems.

At each step of the solution, IDA computes an estimate $E_n$ of the local truncation error, and the step size and order are chosen so that the weighted norm $\lVert E_n / w_n \rVert$ is less than 1. The component $w_{n,i}$ of $w_n$ is given by

$$w_{n,i} = 10^{-\mathrm{acc}} + 10^{-\mathrm{prec}}\, \lvert y_{n,i} \rvert.$$

The values prec and acc are taken from the NDSolve option settings PrecisionGoal->prec and AccuracyGoal->acc.
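The acceptance test can be sketched in Python as follows. This is an illustration under stated assumptions: the weight form below is the assumed NDSolve-style combination of the PrecisionGoal and AccuracyGoal settings, and the norm used is the weighted root-mean-square norm that IDA itself uses.

```python
import numpy as np

def error_weights(y, prec, acc):
    # Assumed weight form: w_i = 10^-acc + 10^-prec * |y_i|
    return 10.0**(-acc) + 10.0**(-prec)*np.abs(y)

def wrms_norm(v):
    # Weighted root-mean-square norm, as used by IDA
    return np.sqrt(np.mean(v**2))

def accept_step(E, y, prec=8, acc=8):
    """A step is accepted when the weighted norm ||E/w|| is less than 1."""
    return wrms_norm(E/error_weights(y, prec, acc)) < 1.0
```

For example, with goals of 8 digits, an error estimate of about $10^{-9}$ on a unit-sized solution component passes the test, while $10^{-5}$ does not, so the step would be redone with a smaller step size or different order.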

Because IDA provides a great deal of flexibility, particularly in the way nonlinear equations are solved, there are a number of method options that allow you to control how this is done. You can give method options to "IDA" by specifying the NDSolve option Method->{"IDA", ida method options}.

In[1]:= |

Out[1]= |

| IDA method option name | default value | |
| --- | --- | --- |
| "ImplicitSolver" | "Newton" | how to solve the implicit equations |
| "MaxDifferenceOrder" | 5 | the maximum order of the BDF to use |

Method options for the "IDA" method.

When strict accuracy of intermediate values computed with the InterpolatingFunction object returned from NDSolve is important, use the NDSolve option setting InterpolationOrder->All. This uses interpolation based on the order of the method, sometimes called dense output, to represent the solution between time steps. By default, NDSolve stores a minimal amount of data, representing the solution just well enough for graphical purposes; keeping the amount of data small saves both memory and time for more complicated solutions.
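The effect is analogous to the following Python sketch (not the actual InterpolatingFunction machinery): reusing the derivative values the solver already knows at each step, as dense output does, reconstructs the solution between steps far more accurately than interpolating the stored values alone.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Hypothetical coarse solver steps for y' = -y, y(0) = 1 on [0, 5]
t = np.linspace(0.0, 5.0, 8)
y = np.exp(-t)

tt = np.linspace(0.0, 5.0, 500)
exact = np.exp(-tt)

# Minimal output: interpolate only the stored values (piecewise linear here)
err_minimal = np.max(np.abs(np.interp(tt, t, y) - exact))

# Dense-output analog: also use the derivative values y' = -y known at the steps
dense = CubicHermiteSpline(t, y, -y)
err_dense = np.max(np.abs(dense(tt) - exact))
```

With these coarse steps, the Hermite reconstruction using derivative data is better than the value-only reconstruction by well over an order of magnitude, at the cost of storing the extra derivative data.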

As an example which highlights the difference between minimal output and full method interpolation order, consider tracking a quantity derived from the solution of a simple linear equation where the exact solution can be computed using DSolve.

In[2]:= |

In[3]:= |

In[4]:= |

Out[4]= |

The exact solution will be compared with solutions computed with and without dense output.

In[5]:= |

Out[5]= |

In[6]:= |

Out[6]= |

In[7]:= |

Out[7]= |

From the plot, it is quite apparent that when the time steps get large, the default solution output has a much larger error between time steps. The dense output solution represents the solution computed by the solver even between time steps. Keep in mind, however, that the dense output solution takes up much more space.

In[8]:= |

Out[8]= |

When the quantity you want to derive from the solution is complicated, you can ensure that it is locally kept within tolerances by giving it as an algebraic quantity, forcing the solver to keep its error under control.

In[9]:= |

Out[9]= |

In[10]:= |

Out[10]= |

In[11]:= |

Out[11]= |

You can see from the plot that the error is somewhat smaller when the quantity is computed algebraically along with the solution.

The remainder of this documentation will focus on suboptions of the two possible settings for the "ImplicitSolver" option, which can be "Newton" or "GMRES". With "Newton", the Jacobian or an approximation to it is computed and the linear system is solved to find the Newton step. "GMRES", on the other hand, is an iterative method; rather than computing the entire Jacobian, a directional derivative is computed for each iterative step.

The "Newton" method has one method option, "LinearSolveMethod", which you can use to tell the Wolfram Language how to solve the linear equations required to find the Newton step. There are several possible values for this option.

| value | |
| --- | --- |
| Automatic | this is the default; it automatically switches between the Wolfram Language LinearSolve and the "Band" method depending on the bandwidth of the Jacobian, and for large systems with sparse Jacobians, LinearSolve in turn switches to a direct sparse solver |
| "Band" | use the IDA band method (see the IDA user guide for more information) |
| "Dense" | use the IDA dense method (see the IDA user guide for more information) |

Possible settings for the "LinearSolveMethod" option.

The "GMRES" method may be substantially faster, but it is typically quite a bit trickier to use because, to be really effective, it usually requires a preconditioner, which can be supplied via a method option. There are also some other method options that control the Krylov subspace process. To use these, refer to the IDA user guide [HT99].

| GMRES method option name | default value | |
| --- | --- | --- |
| "Preconditioner" | Automatic | a Wolfram Language function that returns another function that applies the preconditioner |
| "OrthogonalizationType" | "ModifiedGramSchmidt" | can also be "ClassicalGramSchmidt" (see variable gstype in the IDA user guide) |
| "MaxKrylovSubspaceDimension" | Automatic | maximum Krylov subspace dimension (see variable maxl in the IDA user guide) |
| "MaxKrylovRestarts" | Automatic | maximum number of restarts (see variable maxrs in the IDA user guide) |

Method options for the "GMRES" implicit solver.
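The roles of the subspace-dimension and restart options can be seen in any GMRES implementation. As a Python stand-in (SciPy's GMRES, not the IDA linear solver, applied to a made-up matrix in place of the Newton-step system), the restart parameter plays the role of the maximum Krylov subspace dimension (maxl) and the iteration cap plays the role of the number of restarts (maxrs):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# A stand-in for the Newton-step linear system J.dx = r
n = 200
J = diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
r = np.ones(n)

# restart ~ maximum Krylov subspace dimension ("maxl"),
# maxiter ~ maximum number of restart cycles ("maxrs")
dx, info = gmres(J, r, restart=30, maxiter=50)
```

A larger subspace dimension generally improves convergence per iteration at the cost of storing more basis vectors; restarting bounds that storage.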

As an example problem, consider a two-dimensional Burgers' equation

$$u_t + u\,(u_x + u_y) = \nu\,(u_{xx} + u_{yy}).$$

This can typically be solved with an ordinary differential equation solver, but in this example two things are achieved by using the DAE solver. First, boundary conditions are enforced as algebraic conditions. Second, NDSolve is forced to use conservative differencing by using an algebraic term. For comparison, a known exact solution will be used for initial and boundary conditions.

In[12]:= |

In[13]:= |

In[14]:= |

In[15]:= |

Out[15]= |

You can see from the plot that, with the small viscosity ν used, there is a fairly steep front. This front moves with constant speed.

In[16]:= |

Out[16]= |

Since there is an exact solution to compare to, the overall solution accuracy can be compared as well.

In[17]:= |

In[18]:= |

Out[18]= |

In the following, a comparison will be made with different settings for the options of the IDA method. To emphasize the option settings, a function will be defined to time the computation of the solution and give the maximal error.

In[19]:= |

In[20]:= |

Out[20]= |

In[21]:= |

Out[21]= |

The "Band" method is not very effective because the bandwidth of the Jacobian is relatively large, partly because of the fourth-order derivatives used and partly because the one-sided stencils used near the boundary add width at the top and bottom. You can specify the bandwidth explicitly.
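The structural reason bandwidth is large for two-dimensional discretizations is easy to see with a simpler stencil, used here purely for illustration: a five-point Laplacian on an n×n grid has nonzeros n entries away from the diagonal, so any band narrower than that discards all coupling in one of the two directions. A quick check in Python:

```python
import numpy as np
from scipy.sparse import diags, identity, kron

n = 10
# 1D second-difference matrix, and the 2D five-point Laplacian via Kronecker products
D2 = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
L2d = kron(identity(n), D2) + kron(D2, identity(n))

# Half-bandwidth: the farthest nonzero entry from the main diagonal
rows, cols = L2d.nonzero()
half_bandwidth = int(np.max(np.abs(rows - cols)))
```

The half-bandwidth equals n, the full grid width in one direction, even though each row has only five nonzeros; higher-order stencils like the ones in this example widen the band further.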

In[22]:= |

Out[22]= |

While the solution time was smaller, notice that the error is slightly greater and the total number of time steps is much greater. If the problem had been stiffer, the iterations would likely not have converged, because the band approximation was missing coupling information from the other spatial direction. Ideally, the bandwidth should not eliminate information from an entire dimension.

In[23]:= |

Out[23]= |

In[24]:= |

Out[24]= |

With the more appropriate setting of the bandwidth, the solution is still slightly slower than in the default case. The "Band" method can sometimes be effective on two-dimensional problems, but it is usually most effective on one-dimensional problems.

In[25]:= |

Out[25]= |

This is incredibly slow! For this very reason, using the "GMRES" method without a preconditioner is not recommended. However, finding a good preconditioner is usually not trivial. For this example, a diagonal preconditioner will be used.

The setting of the "Preconditioner" option should be a function f that accepts the four arguments given to it by NDSolve, such that f[t, x, x', c] returns another function that applies the preconditioner to the residual vector. (See the IDA user guide [HT99] for details on how the preconditioner is used.) The arguments t, x, x', and c are the current time, solution vector, solution-derivative vector, and the constant c in formula (2) above. For example, if you can determine a procedure that generates an appropriate preconditioner matrix P as a function of these arguments, you could use

"Preconditioner"->Function[{t,x,xp,c},LinearSolve[P[t,x,xp,c]]]

to produce a LinearSolveFunction object that effectively inverts the preconditioner matrix P. Typically, each time the preconditioner function is set up, it is applied to the residual vector several times, so using some sort of factorization, such as the one contained in a LinearSolveFunction, is a good idea.

For the diagonal case, the inverse can be effected simply by using the reciprocal. The most difficult part of setting up a diagonal preconditioner is keeping in mind that values on the boundary should not be affected by it.
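In Python terms, the same shape can be sketched as follows. This is an analogy only: the real option receives a Wolfram Language function, and the tridiagonal matrix below is a made-up stand-in for the Jacobian. The setup function takes the solver state (t, x, x', c) and returns a function that applies the approximate inverse, with the boundary weights set to 1 so boundary residuals pass through unchanged; the diagonal "inverse" is just the reciprocal.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, LinearOperator

# Stand-in Newton-step system J.dx = r, with the first/last rows as "boundary" rows
n = 100
J = diags([-1.0, 3.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
r = np.ones(n)

def make_preconditioner(t, x, xp, c):
    """Analog of the "Preconditioner" setting: given the solver state,
    return a function that applies an approximate inverse of J."""
    d = J.diagonal().copy()
    d[0] = d[-1] = 1.0     # leave boundary components untouched
    dinv = 1.0/d           # diagonal inverse is just the reciprocal
    return lambda v: dinv*v

apply_M = make_preconditioner(0.0, np.zeros(n), np.zeros(n), 1.0)
M = LinearOperator((n, n), matvec=apply_M)
dx, info = gmres(J, r, M=M)
```

Even a crude diagonal scaling like this can markedly reduce the number of Krylov iterations compared to unpreconditioned GMRES on poorly scaled systems.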

In[26]:= |

Out[26]//Short= |

In[27]:= |

Out[27]//Short= |

In[28]:= |

In[29]:= |

Out[29]= |

Thus, even with a crude preconditioner, the method computes the solution faster than using the direct sparse solvers.

For PDE discretizations with higher-order temporal derivatives or systems of PDEs, you may need to look at the corresponding NDSolve`StateData object to determine how the variables are ordered so that you can get the structural form of the preconditioner correctly.