# Numerical Solution of Boundary Value Problems (BVP)

## "Shooting" Method

The shooting method works by considering the boundary conditions as a multivariate function of initial conditions at some point, reducing the boundary value problem to finding the initial conditions that give a root. Its advantage is that it exploits the speed and adaptivity of methods for initial value problems. Its disadvantage is that it is not as robust as finite difference or collocation methods: the associated initial value problems may have growing modes that make them inherently unstable, even though the BVP itself may be quite well posed and stable.

The shooting method looks for initial conditions $X(t_0) = c$ so that the boundary conditions $B = 0$ are satisfied. Since you are varying the initial conditions, it makes sense to think of the solution $X = X_c(t)$ as a function of them, so shooting can be thought of as finding $c$ such that the solution of

$$X_c'(t) = F\left(t, X_c(t)\right), \qquad X_c(t_0) = c$$

satisfies the boundary conditions, $B(X_c) = 0$, where $F$ is the right-hand side of the ODE system $X'(t) = F(t, X(t))$ and $B$ denotes the boundary conditions.
After setting up the function for the boundary condition residual, the problem is effectively passed to FindRoot to find the initial conditions giving the root. The default method is to use Newton's method, which involves computing the Jacobian. While the Jacobian can be computed using finite differences, the sensitivity of solutions of an initial value problem (IVP) to their initial conditions may be too great to obtain reasonably accurate derivative values that way, so it is advantageous to compute the Jacobian as a solution to ODEs.
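The examples in this tutorial use Mathematica's NDSolve. Purely as an illustration of the idea (not of NDSolve's implementation), here is a minimal shooting sketch in Python using SciPy for an assumed toy problem, $y'' = \tfrac{3}{2}y^2$ with $y(0) = 4$, $y(1) = 1$, whose classical solution $y = 4/(1+t)^2$ has initial slope $y'(0) = -8$:

```python
# Shooting method sketch: find the slope s = y'(0) so that the IVP solution
# satisfies the right-hand boundary condition y(1) = 1.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import root_scalar

def shoot(s):
    # Integrate the IVP y'' = 1.5*y^2 with y(0) = 4, y'(0) = s.
    sol = solve_ivp(lambda t, u: [u[1], 1.5 * u[0]**2],
                    (0.0, 1.0), [4.0, s], rtol=1e-10, atol=1e-10)
    return sol.y[0, -1] - 1.0   # boundary residual at t = 1

# Bracket the root of the residual and solve for it.
res = root_scalar(shoot, bracket=[-10.0, -5.0], method='brentq')
s_star = res.root
print(s_star)   # close to -8, matching y = 4/(1+t)^2
```

The one-dimensional root bracket works here because the residual changes sign on the chosen interval; for systems, a multivariate root finder (the analogue of FindRoot) takes the place of `root_scalar`.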

### Linearization and Newton's Method

Linear problems can be described by

$$X'(t) = J(t)\,X(t) + F_0(t), \qquad B_1 X(t_1) + B_2 X(t_2) = b,$$

where $J(t)$ is a matrix and $F_0(t)$ is a vector, both possibly depending on $t$, $b$ is a constant vector, and $B_1$ and $B_2$ are constant matrices.

Let $Y(t) = \partial X_c(t)/\partial c$. Then, differentiating both the IVP and the boundary conditions with respect to $c$ gives

$$Y'(t) = J(t)\,Y(t), \qquad Y(t_0) = I, \qquad \frac{\partial B}{\partial c} = B_1 Y(t_1) + B_2 Y(t_2),$$

where $B(c) = B_1 X_c(t_1) + B_2 X_c(t_2) - b$ is the boundary condition residual.
Since $B$ is linear when thought of as a function of $c$, you have $B(c) = B(c_0) + \left(B_1 Y(t_1) + B_2 Y(t_2)\right)(c - c_0)$, so the value of $c$ for which $B(c) = 0$ satisfies

$$\left(B_1 Y(t_1) + B_2 Y(t_2)\right)(c - c_0) = -B(c_0)$$

for any particular initial condition $c_0$.

For nonlinear problems, let $J(t) = \partial F/\partial X$ be the Jacobian for the nonlinear ODE system, and let $B_j = \partial B/\partial X(t_j)$ be the Jacobian of the boundary condition at $t_j$. Then computation of $Y$ for the linearized system gives the Jacobian $B_1 Y(t_1) + B_2 Y(t_2)$ for the nonlinear system at a particular initial condition, leading to the Newton iteration

$$c_{n+1} = c_n - \left(B_1 Y(t_1) + B_2 Y(t_2)\right)^{-1} B(c_n).$$
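To make the linear case concrete, here is a small Python sketch on an assumed toy problem (not the tutorial's code): for $X' = JX$ with boundary conditions $B_1 X(0) + B_2 X(1) = b$, integrating the variational matrix $Y' = JY$, $Y(0) = I$ alongside $X$ gives the boundary Jacobian $B_1 Y(0) + B_2 Y(1)$, and since the problem is linear, a single Newton-type correction lands on the solution up to the IVP tolerance:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Linear BVP: x'' = x, x(0) = 0, x(1) = 1, written as X' = J X with
# X = (x, x'), and boundary conditions in the form B1 X(0) + B2 X(1) = b.
J = np.array([[0.0, 1.0], [1.0, 0.0]])
B1 = np.array([[1.0, 0.0], [0.0, 0.0]])
B2 = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([0.0, 1.0])

def rhs(t, u):
    # Integrate X and the flattened 2x2 variational matrix Y together:
    # X' = J X,  Y' = J Y,  Y(0) = I.
    X, Y = u[:2], u[2:].reshape(2, 2)
    return np.concatenate([J @ X, (J @ Y).ravel()])

c0 = np.zeros(2)                        # any particular initial condition
u0 = np.concatenate([c0, np.eye(2).ravel()])
sol = solve_ivp(rhs, (0.0, 1.0), u0, rtol=1e-10, atol=1e-10)
X1, Y1 = sol.y[:2, -1], sol.y[2:, -1].reshape(2, 2)

residual = B1 @ c0 + B2 @ X1 - b        # B(c0)
M = B1 + B2 @ Y1                        # B1 Y(0) + B2 Y(1), with Y(0) = I
c = c0 + np.linalg.solve(M, -residual)  # one exact correction step
print(c)   # (x(0), x'(0)); exact answer is (0, 1/sinh(1))
```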

### "StartingInitialConditions"

For boundary value problems, there is no guarantee of uniqueness as there is in the initial value problem case. The shooting method will find only one solution. Just as you can affect the particular solution FindRoot gets for a system of nonlinear algebraic equations by changing the starting values, you can change the solution that the shooting method finds by giving different initial conditions to start the iterations from.

"StartingInitialConditions" is an option of the "Shooting" method that allows you to specify the values and position of the initial conditions to start the shooting process from.

The shooting method by default starts with zero initial conditions so that if there is a zero solution, it will be returned.


By default, the shooting method starts from the left side of the interval and shoots forward in time. There are cases where it is advantageous to go backward, or even to start from a point somewhere in the middle of the interval.

Consider a linear boundary value problem that depends on a parameter $\lambda$ and whose general solution contains exponential modes growing in the forward direction as well as a mode growing in the backward direction.

For moderate values of $\lambda$, the initial value problem starting at the left endpoint becomes unstable because of the forward-growing modes. Similarly, starting at the right endpoint, instability arises from the backward-growing mode, though this growth is not as severe as in the forward direction. Beyond some value of $\lambda$, shooting will not be able to get a good solution because the sensitivity in either direction will be too great. However, up to that point, choosing a point in the interval that balances the growth in the two directions will give the best solution.


Shooting from the left endpoint, the method gives warning messages about an ill-conditioned matrix and about the boundary conditions not being satisfied as well as they should be. This is because a small error at the starting point is amplified by the exponential growth of the forward-growing modes. Since the reciprocal of this amplification factor is of the same order of magnitude as the local truncation error, visible errors such as those seen in the plot are not surprising. In the reverse direction, the magnification is much smaller, so the solution should be much better.
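The amplification effect is easy to see numerically. In this illustrative Python sketch (an assumed toy equation, not the example above), for $x'' = 100\,x$ a perturbation $\epsilon$ in the initial slope at $t = 0$ grows to roughly $\epsilon \sinh(10)/10 \approx 1100\,\epsilon$ by $t = 1$, so forward shooting loses about three digits of accuracy in the boundary values:

```python
import numpy as np
from scipy.integrate import solve_ivp

def x_at_1(slope):
    # Integrate x'' = 100 x forward from x(0) = 0 with the given slope;
    # the exact solution is x(t) = slope * sinh(10 t) / 10.
    sol = solve_ivp(lambda t, u: [u[1], 100.0 * u[0]],
                    (0.0, 1.0), [0.0, slope], rtol=1e-12, atol=1e-12)
    return sol.y[0, -1]

eps = 1e-8
amplification = abs(x_at_1(1.0 + eps) - x_at_1(1.0)) / eps
print(amplification)   # about sinh(10)/10, i.e. roughly 1100
```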


A good point to choose is one that balances the sensitivity in the two directions. Starting from such a point, the error remains under reasonable control even for larger values of $\lambda$.


### Option Summary

| option name | default value | |
| --- | --- | --- |
| "StartingInitialConditions" | Automatic | the initial conditions to initiate the shooting method from |
| "ImplicitSolver" | Automatic | the method to use for solving the implicit equation defined by the boundary conditions; this should be an acceptable value for the Method option of FindRoot |
| "MaxIterations" | Automatic | how many iterations to use for the implicit solver method |
| "Method" | Automatic | the method to use for integrating the system of ODEs |

Options of the "Shooting" method.

## "Chasing" Method

The method of chasing came from a manuscript of Gelfand and Lokutsiyevskii first published in English in [BZ65] and further described in [Na79]. The idea is to establish a set of auxiliary problems that can be solved to find initial conditions at one of the boundaries. Once the initial conditions are determined, the usual methods for solving initial value problems can be applied. The chasing method is, in effect, a shooting method that uses the linearity of the problem to good advantage.

The method applies to linear boundary value problems; consider the linear differential system

$$x'(t) = A(t)\,x(t) + f(t), \tag{1}$$

where $x$ is a vector function, $A(t)$ is the coefficient matrix, and $f(t)$ is the inhomogeneous coefficient vector, with linear boundary conditions

$$b_j \cdot x(t_j) = \beta_j, \qquad j = 1, 2, \ldots, n, \tag{2}$$

where each $b_j$ is a coefficient vector. From this, construct the augmented homogeneous system

$$y(t) = \begin{pmatrix} 1 \\ x(t) \end{pmatrix}, \qquad y'(t) = \hat A(t)\,y(t), \qquad \hat A(t) = \begin{pmatrix} 0 & 0 \\ f(t) & A(t) \end{pmatrix}.$$

The chasing method amounts to finding, for each boundary condition, a vector function $v_j(t)$ such that $v_j(t) \cdot y(t) = 0$ and $v_j(t_j) = (-\beta_j, b_j)$. Once the functions $v_j$ are known, if there is a full set of boundary conditions, solving the system

$$v_j(t_0) \cdot y(t_0) = 0, \qquad j = 1, 2, \ldots, n, \tag{3}$$

can be used to determine initial conditions $x(t_0)$ that can be used with the usual initial value problem solvers. Note that the solution to system (3) is nontrivial because the first component of $y$ is always 1.

Thus, solving the boundary value problem is reduced to solving the auxiliary problems for the $v_j$. Differentiating the equation $v_j(t) \cdot y(t) = 0$ gives

$$v_j'(t) \cdot y(t) + v_j(t) \cdot y'(t) = 0; \tag{4}$$

substituting the differential equation $y'(t) = \hat A(t)\,y(t)$ for the augmented system,

$$\left(v_j'(t) + \hat A(t)^T v_j(t)\right) \cdot y(t) = 0.$$

Since this should hold for all solutions $y$, you have the initial value problem for $v_j$,

$$v_j'(t) = -\hat A(t)^T v_j(t), \qquad v_j(t_j) = (-\beta_j, b_j). \tag{5}$$

Given the point $t_0$ at which you want to have initial conditions for the solution, *Mathematica* just uses NDSolve to solve the auxiliary problems for the $v_j$ by integrating them to $t_0$. The results are then combined into the matrix of (3), which is solved for $x(t_0)$ to obtain the initial value problem that NDSolve integrates to give the returned solution.
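As an illustration of these auxiliary problems, here is a Python sketch of the linear chasing idea on an assumed toy problem (not Mathematica's implementation): for $x'' = x$ with $x(0) = 0$, $x(1) = 1$, integrating $v_j' = -\hat A^T v_j$ from each boundary to $t_0 = 0$ and then solving (3) recovers the correct initial slope $x'(0) = 1/\sinh(1)$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy linear BVP: x'' = x, x(0) = 0, x(1) = 1. In first-order form,
# x = (x1, x2) with x1' = x2, x2' = x1, and the augmented variable
# y = (1, x1, x2) satisfies y' = Ah y (the inhomogeneous part f is 0 here).
Ah = np.array([[0.0, 0.0, 0.0],
               [0.0, 0.0, 1.0],
               [0.0, 1.0, 0.0]])

def chase(tj, vj, t0):
    # Auxiliary (adjoint) problem: v' = -Ah^T v with v(tj) = (-beta_j, b_j).
    if tj == t0:
        return vj
    sol = solve_ivp(lambda t, v: -Ah.T @ v, (tj, t0), vj,
                    rtol=1e-10, atol=1e-10)
    return sol.y[:, -1]

# Boundary conditions b_j . x(t_j) = beta_j:
v1 = chase(0.0, np.array([0.0, 1.0, 0.0]), 0.0)   # x1(0) = 0
v2 = chase(1.0, np.array([-1.0, 1.0, 0.0]), 0.0)  # x1(1) = 1

# System (3): v_j(t0) . (1, x0) = 0, solved for the initial state x0.
V = np.vstack([v1, v2])
x0 = np.linalg.solve(V[:, 1:], -V[:, 0])
print(x0)   # (x(0), x'(0)); exact answer is (0, 1/sinh(1))
```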

This variant of the method is further described in and used by the *MathSource* package [R98], which also allows you to solve linear eigenvalue problems.

There is an alternative, nonlinear way to set up the auxiliary problems that is closer to the original method proposed by Gelfand and Lokutsiyevskii. Assume that the boundary conditions are linearly independent (if not, then the problem is insufficiently specified). Then each $v_j$ has at least one nonzero component; without loss of generality, assume it is the $k$th. Normalizing by it, you can represent $v_j$ in terms of its other components as $w_j = v_j / v_{j,k}$, so that $w_{j,k} \equiv 1$. Using (5) and replacing $v_j$ by $v_{j,k}\, w_j$, and thinking of $v_{j,k}$ in terms of the other components of $v_j$, you get the nonlinear equation

$$\tilde w_j'(t) = -\hat A_*(t)^T w_j(t) + \left(a_k(t) \cdot w_j(t)\right) \tilde w_j(t),$$

where $\tilde w_j$ is $w_j$ with the $k$th component (which is identically 1) removed, $\hat A_*$ is $\hat A$ with the $k$th column removed, and $a_k$ is the $k$th column of $\hat A$. The nonlinear method can be more numerically stable than the linear method, but it has the disadvantage that integration along the real line may lead to singularities. This problem can be eliminated by integrating on a contour in the complex plane. However, integration in the complex plane typically has more numerical error than a simple integration along the real line, so in practice, the nonlinear method does not typically give results better than the linear method. For this reason, and because it is also generally faster, the default for *Mathematica* is to use the linear method.
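The singularities that can arise in the nonlinear formulation are of the same kind as in a scalar Riccati equation. This Python sketch (a generic illustration, not one of the auxiliary systems above) shows a quadratic right-hand side producing a pole at finite time: $w' = 1 + w^2$, $w(0) = 0$ has solution $w = \tan t$, singular at $t = \pi/2$, so a real-line integrator cannot continue past that point:

```python
from scipy.integrate import solve_ivp

# w' = 1 + w^2, w(0) = 0 has the solution w = tan(t), with a pole at pi/2,
# so integration on the real line must fail before reaching t = 1.6.
sol = solve_ivp(lambda t, w: 1.0 + w[0]**2, (0.0, 1.6), [0.0],
                rtol=1e-8, atol=1e-8)
print(sol.success, sol.t[-1])   # integration stalls just before pi/2
```

Moving the integration path into the complex plane around the pole, as the nonlinear chasing method does, avoids this breakdown at the cost of extra numerical error.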


In general, you cannot expect the boundary value equations to be satisfied to the close tolerance of Equal.


They are typically satisfied only to tolerances comparable to those given by the AccuracyGoal and PrecisionGoal options of NDSolve. Usually, the actual accuracy and precision are somewhat lower, because these goals are used for local, not global, error control.


When you give NDSolve a problem that has no solution, numerical error may make it appear to be a solvable problem. Typically, NDSolve will issue a warning message.


In such cases, NDSolve may not be able to integrate over the entire interval because no solution actually exists.

Another situation in which the equations can be ill-conditioned is when the boundary conditions do not give a unique solution.


Typically the default values *Mathematica* uses work fine, but you can control the chasing method by giving NDSolve the option Method->{"Chasing", chasing options}. The possible chasing options are shown in the following table.

| option name | default value | |
| --- | --- | --- |
| Method | Automatic | the numerical method to use for computing the initial value problems generated by the chasing algorithm |
| "ExtraPrecision" | 0 | number of digits of extra precision to use for solving the auxiliary initial value problems |
| "ChasingType" | "LinearChasing" | the type of chasing to use, which can be either "LinearChasing" or "NonlinearChasing" |

Options of the "Chasing" method of NDSolve.

The "NonlinearChasing" setting itself has two options.

| option name | default value | |
| --- | --- | --- |
| "ContourType" | "Ellipse" | the shape of contour to use when integration in the complex plane is required |
| "ContourRatio" | 1/10 | the ratio of the width to the length of the contour; typically a smaller number gives more accurate results, but yields more numerical difficulty in solving the equations |

Options for the "NonlinearChasing" setting of the "Chasing" method.

These options can be useful in cases where there is a strong sensitivity to the computed initial conditions.


Increasing the extra precision beyond a certain point will not help, because a significant part of the error results from computing the solution once the initial conditions are found. To reduce this part of the error, you need to give more stringent AccuracyGoal and PrecisionGoal options to NDSolve.


## Boundary Value Problems with Parameters

In many of the applications where boundary value problems arise, there may be undetermined parameters, such as eigenvalues, in the problem itself that may be a part of the desired solution. By introducing the parameters as dependent variables, the problem can often be written as a boundary value problem in standard form.

For example, the flow in a channel can be modeled by

$$f'''(\eta) - R\left[\left(f'(\eta)\right)^{2} - f(\eta)\,f''(\eta)\right] + R\,A = 0, \qquad f(0) = f'(0) = 0, \quad f(1) = 1, \quad f'(1) = 0,$$

where $R$ (the Reynolds number) is given, but $A$ is to be determined.

To find the solution $f$ and the value of $A$, just add the equation $A'(\eta) = 0$.
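The same trick can be sketched with SciPy's BVP solver on an assumed toy eigenvalue problem (not the channel-flow model): for $y'' = -\lambda y$ with $y(0) = 0$, $y'(0) = 1$, $y(\pi) = 0$, treating $\lambda$ as an extra dependent variable with $\lambda' = 0$ turns the problem into a standard BVP whose solution carries $\lambda = 1$ along with $y = \sin t$:

```python
import numpy as np
from scipy.integrate import solve_bvp

def rhs(t, u):
    # State u = (y, y', lam); the unknown parameter obeys lam' = 0.
    y, yp, lam = u
    return np.vstack([yp, -lam * y, np.zeros_like(lam)])

def bc(ua, ub):
    # Three conditions for three first-order equations:
    # y(0) = 0, y'(0) = 1 (normalization), y(pi) = 0.
    return np.array([ua[0], ua[1] - 1.0, ub[0]])

t = np.linspace(0.0, np.pi, 20)
u_guess = np.vstack([np.sin(t), np.cos(t), np.ones_like(t)])  # rough guess
sol = solve_bvp(rhs, bc, t, u_guess)
lam = sol.y[2, 0]
print(sol.status, lam)   # status 0 on success; lam converges to 1
```

The normalization condition $y'(0) = 1$ plays the same role as the extra boundary condition needed in the channel-flow problem: with one more unknown ($\lambda$ or $A$), one more condition is required to make the augmented BVP well posed.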
