# The Affine Covariant Newton Method

The affine covariant Newton method presented here is an implementation of the NLEQ_ERR algorithm [Deu06] and is available as a method option to FindRoot. To illustrate how the algorithm works and its advantages, some simple examples of the affine covariant Newton solver will be shown. Its main application area is solving discretized nonlinear partial differential equations. These systems of equations can become large, and it is important to pay attention to evaluation costs, both in computational speed and in the amount of memory needed to solve these equations.

We start by specifying an example system of equations for which the roots are sought. Consider the nonlinear system of equations:

e^{x-2} - y = 0

y^{2} - x = 0

Given the variables and starting values, FindRoot will find the roots for this system of equations.

To make things a bit easier and more uniform throughout this tutorial, set up a vector-valued function f that takes a vector argument and evaluates all equations.

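The Wolfram Language definitions are not reproduced here, but the same setup can be sketched in a language-neutral way. The following Python/NumPy version of the vector-valued function f and its analytic Jacobian for the system above is an illustration only, not the FindRoot setup itself:

```python
import numpy as np

def f(v):
    # The example system:  e^(x-2) - y == 0  and  y^2 - x == 0
    x, y = v
    return np.array([np.exp(x - 2) - y, y**2 - x])

def jacobian(v):
    # Analytic Jacobian of f, used by Newton-type methods
    x, y = v
    return np.array([[np.exp(x - 2), -1.0],
                     [-1.0,          2.0 * y]])
```

One root of this system lies near (x, y) ≈ (0.019, 0.138).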

For comparisons in the following text, it is useful to have a reference solution that is known to be very precise.

Here is a helper function that makes calling FindRoot easier for the type of root finding this tutorial is presenting.


Newton methods, such as the default method of FindRoot, need to evaluate the function f and to compute and evaluate the Jacobian of f. The evaluation of the Jacobian can be computationally expensive. Evaluating the Jacobian cannot be avoided in the type of Newton methods discussed here; however, there exist algorithms that minimize the number of Jacobian evaluations. To investigate the evaluations needed by FindRoot, monitors such as EvaluationMonitor and StepMonitor can be used.

It is instructive to monitor the number of function evaluations, the number of steps needed to get to a solution and the number of Jacobian evaluations during the root-finding process.

Seven steps and ten function evaluations were needed to find a solution. Sometimes a trial step is made and then rejected; a rejected step does not increase the step count, but it does increase the function evaluation count. Seven Jacobian evaluations were also needed to complete the solution process.
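The kind of bookkeeping that EvaluationMonitor and StepMonitor provide can be mimicked with counters in a plain undamped Newton iteration. The Python sketch below is an illustration of such counting, not the FindRoot implementation; the functions f and jac encode the example system:

```python
import numpy as np

def newton_with_counts(f, jac, x0, tol=1e-8, max_steps=50):
    # Undamped Newton iteration that counts steps, function
    # evaluations, and Jacobian evaluations.
    x = np.asarray(x0, dtype=float)
    n_f = n_jac = n_steps = 0
    while n_steps < max_steps:
        fx = f(x)
        n_f += 1
        if np.linalg.norm(fx) < tol:
            break
        J = jac(x)
        n_jac += 1
        x = x + np.linalg.solve(J, -fx)
        n_steps += 1
    return x, n_steps, n_f, n_jac

f = lambda v: np.array([np.exp(v[0] - 2) - v[1], v[1]**2 - v[0]])
jac = lambda v: np.array([[np.exp(v[0] - 2), -1.0], [-1.0, 2.0 * v[1]]])
root, steps, nf, nj = newton_with_counts(f, jac, [1.0, 1.0])
```

In this simple scheme every step evaluates the Jacobian exactly once; a method with rejected trial steps would additionally raise the function evaluation count without raising the step count, which is the pattern the monitors reveal.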

FindRoot has several method options, among them "AffineCovariantNewton". The affine covariant Newton method is explained in detail in [Deu06].

The affine covariant Newton method needed more steps and function evaluations but fewer Jacobian evaluations. In a case such as solving nonlinear partial differential equations, where evaluating the Jacobian is computationally expensive, not having to evaluate the Jacobian as often can make a big difference in being able to find a solution to the system of equations efficiently.

Note that the precision of the affine covariant solution is less than what the default FindRoot method found, but it is still within the default settings for PrecisionGoal. Before we look at that, however, we inspect the "AffineCovariantNewton" method option "BroydenUpdates", which can be set to True or False. Broyden updates are a relatively inexpensive way of estimating updated factors of the Jacobian without having to refactor the Jacobian. The affine covariant Newton solver estimates when to use Broyden updates and computes them efficiently. If during the solution process the nonlinearity increases and Broyden updates are no longer useful, the algorithm switches back to computing the Jacobians. The Broyden update algorithm implemented is the QNERR algorithm [Deu06].

With Broyden updates switched off, finding a solution takes fewer steps and function evaluations but more Jacobian evaluations than when Broyden updates are done.

Coming back to the earlier observation about the precision of the computed solution: the precision can be increased by raising the PrecisionGoal.

The affine covariant Newton method is an error norm–controlled algorithm and, as such, does not monitor an AccuracyGoal. In other words, the algorithm monitors only the error and not the residual. The error is associated with PrecisionGoal, while the residual is associated with AccuracyGoal.
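The distinction can be made concrete with a small example using invented numbers: for an ill-conditioned Jacobian, the residual ||f(x)|| may already be tiny while the error, as estimated by the Newton correction ||J(x)^{-1} f(x)||, is still large. A Python sketch:

```python
import numpy as np

# Hypothetical, badly scaled Jacobian and a small residual near a root
J = np.array([[1.0, 0.0],
              [0.0, 1e-6]])
fx = np.array([1e-8, 1e-8])

residual_norm = np.linalg.norm(fx)                       # what a residual test would check
error_estimate = np.linalg.norm(np.linalg.solve(J, fx))  # what an error norm control checks
```

Here the residual is about 1.4*10^{-8}, while the error estimate is about 10^{-2}; a purely residual-based stopping test would stop far too early.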

#### Method Options

The affine covariant Newton method takes a number of method options in addition to the options FindRoot itself accepts. First, a list of the available method options is presented. Then these options are illustrated with examples, together with typical scenarios where they may be beneficial.

The following method options can be given for Method -> "AffineCovariantNewton" in FindRoot.

| option name | default value | |
| --- | --- | --- |
| "BroydenUpdates" | Automatic | whether to use Broyden updates |
| "InitialDampingFactor" | Automatic | the damping factor in the first step |
| "InverseJacobian" | Automatic | specify the inverse of the Jacobian if known |
| "LinearSolver" | Automatic | specify a linear solver function or options |
| "MinimalDampingFactor" | Automatic | how small the damping factor can become |

Method options for Method -> "AffineCovariantNewton" in FindRoot.

##### "BroydenUpdates"

Broyden updates are a way to reuse a previously computed Jacobian. Reusing a previously computed Jacobian is much more efficient than recomputing a new one. The downside is that a Broyden update may not be as precise as a newly computed Jacobian. The affine covariant Newton method has a mechanism that switches to Broyden updates when it is safe to do so and may switch back to computing Jacobians when Broyden updates are no longer usable. Nevertheless, Broyden updates can be disabled.
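The core of any Broyden scheme is a rank-one secant update of the current Jacobian approximation. The Python sketch below shows the classical "good Broyden" update; the QNERR algorithm actually updates a factorization recursively rather than the matrix itself, so this is only an illustration of the underlying idea:

```python
import numpy as np

def broyden_update(J, dx, df):
    # Rank-one update so that the new matrix satisfies the secant
    # condition  J_new @ dx == df, reusing J instead of recomputing
    # any derivatives.
    dx = np.asarray(dx, dtype=float)
    df = np.asarray(df, dtype=float)
    return J + np.outer(df - J @ dx, dx) / (dx @ dx)
```

Given the step dx just taken and the observed change df in the function values, the update costs only an outer product, versus a full Jacobian evaluation and refactorization.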

##### "MinimalDampingFactor"

During step k of the solution process, the equation J(x_k) Δx_k = -f(x_k) is solved. The Jacobian J(x_k) is inverted to obtain a new increment Δx_k. This increment is then added to the previous solution to obtain an updated solution x_{k+1} = x_k + λ_k Δx_k. λ_k is a damping factor between 0 and 1 that restricts the size of the step taken. For extreme nonlinearities, the default minimal damping factor λ_min can be reduced. The default value is 10^{-4}. Often, this should not be necessary.
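A minimal damped Newton iteration along these lines can be sketched in Python. The damping strategy below (halve λ on a rejected trial step, grow it again on success, with a simple residual monotonicity test) is a crude stand-in for the affine covariant estimates NLEQ_ERR actually uses; the parameters lam0 and lam_min merely play the roles of "InitialDampingFactor" and "MinimalDampingFactor":

```python
import numpy as np

def damped_newton(f, jac, x0, lam0=0.01, lam_min=1e-4, tol=1e-10, max_steps=200):
    # Solve J(x_k) dx_k = -f(x_k), then update x_{k+1} = x_k + lam_k * dx_k.
    x = np.asarray(x0, dtype=float)
    lam = lam0
    for _ in range(max_steps):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        dx = np.linalg.solve(jac(x), -fx)
        while lam >= lam_min:
            trial = x + lam * dx
            if np.linalg.norm(f(trial)) < np.linalg.norm(fx):
                x = trial
                lam = min(1.0, 2.0 * lam)   # accept; dare a larger step next time
                break
            lam *= 0.5                       # reject the trial step
        else:
            raise RuntimeError("damping factor fell below lam_min")
    return x

f = lambda v: np.array([np.exp(v[0] - 2) - v[1], v[1]**2 - v[0]])
jac = lambda v: np.array([[np.exp(v[0] - 2), -1.0], [-1.0, 2.0 * v[1]]])
root = damped_newton(f, jac, [0.5, 0.5])
```

With λ = 1 this reduces to the ordinary Newton step; small λ values trade progress per step for robustness against strong nonlinearity.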

##### "InitialDampingFactor"

When it is known that the nonlinearity is not too extreme, a larger initial damping factor can be chosen. The default initial damping factor is λ_0 = 1/100. A chosen value should be larger than the "MinimalDampingFactor" λ_min but smaller than or equal to 1: λ_min < λ_0 ≤ 1.

A larger initial damping factor can save Jacobian evaluations.

##### "InverseJacobian"

The Newton algorithm actually needs the inverse of the Jacobian to do its job. This inverse is computed by creating a LinearSolveFunction. In some cases, the inverse Jacobian may be known or easy to compute. In such a scenario, the inverse Jacobian can be passed to the "AffineCovariantNewton" method via a method option.
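The benefit is easiest to see for a problem whose Jacobian inverse is available in closed form. In the Python sketch below (an invented linear example, not Wolfram Language), f(x) = A.x - b has the constant Jacobian A, so its inverse can be computed once and reused, and no factorization is needed inside the iteration:

```python
import numpy as np

# Invented linear test problem: f(x) = A @ x - b has constant Jacobian A
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
f = lambda x: A @ x - b

A_inv = np.linalg.inv(A)   # the "known" inverse Jacobian, computed once

x = np.zeros(2)
x = x - A_inv @ f(x)       # a single Newton step lands on the root exactly
```

For nonlinear problems the inverse would be a function of x, but the same saving applies: no LinearSolveFunction has to be constructed.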

In[23]:= |

Specifying the inverse of the Jacobian does not have any effect on the number of Jacobian evaluations. The fact that the count of Jacobian evaluations is shown as 0 is an unfortunate shortcoming of the way the inverse Jacobian interacts with FindRoot.

##### "LinearSolver"

It is possible to change the linear solver or the options used by LinearSolve to construct the inverse of the Jacobian. This is done with the "LinearSolver" method option.

When an iterative method like "Krylov" is specified as a method to LinearSolve, the factorization of the Jacobian is not stored when possible but is computed anew every time it is needed.
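The effect of an iterative linear solver can be sketched in Python with a hand-written conjugate gradient: the Jacobian enters only through matrix-vector products, so nothing is factored or stored between uses. For simplicity this applies CG to the normal equations of the Newton step; it illustrates the idea of a Krylov solver, not the solver LinearSolve actually employs:

```python
import numpy as np

def cg(matvec, b, tol=1e-12, max_iter=50):
    # Conjugate gradient for an SPD operator given only as a matvec;
    # no matrix is ever formed or factored.
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:
            break
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

f = lambda v: np.array([np.exp(v[0] - 2) - v[1], v[1]**2 - v[0]])
jac = lambda v: np.array([[np.exp(v[0] - 2), -1.0], [-1.0, 2.0 * v[1]]])

x = np.array([1.0, 1.0])
for _ in range(40):
    J, fx = jac(x), f(x)
    # Newton step via the normal equations J^T J dx = -J^T f,
    # solved matrix-free with CG
    dx = cg(lambda v: J.T @ (J @ v), -J.T @ fx)
    x = x + dx
```

For large sparse Jacobians this trade-off (repeated matvecs instead of a stored factorization) is exactly what makes iterative solvers attractive in memory-constrained PDE problems.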