There are close connections between finding a local minimum and solving a set of nonlinear equations. Given a set of n equations r(x) == 0 in n unknowns x, seeking a solution is equivalent to minimizing the sum of squares ||r(x)||^2 when the residual is zero at the minimum, so there is a particularly close connection to the Gauss-Newton methods. In fact, when the Jacobian is square and nonsingular, the Gauss-Newton step for local minimization and the Newton step for the nonlinear equations are exactly the same. Also, for a smooth function f, Newton's method for local minimization of f is the same as Newton's method for the nonlinear equations ∇f(x) == 0. Not surprisingly, many aspects of the algorithms are similar; however, there are also important differences.
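The coincidence of the two steps can be checked numerically. The sketch below uses a hypothetical two-dimensional residual system (chosen only for illustration, not taken from the text) and compares the Newton step, which solves J s == -r, with the Gauss-Newton step for minimizing ||r(x)||^2, which solves (JᵀJ) s == -Jᵀr; for a square, nonsingular Jacobian these agree.

```python
import numpy as np

# Hypothetical square residual system r(x) == 0, used only for illustration:
#   r1 = x^2 + y^2 - 4,   r2 = x*y - 1
def r(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

def J(v):
    # Jacobian of r
    x, y = v
    return np.array([[2.0 * x, 2.0 * y],
                     [y,       x      ]])

v = np.array([2.0, 0.3])
Jv = J(v)

# Newton step for the nonlinear equations: solve J s == -r
newton_step = np.linalg.solve(Jv, -r(v))

# Gauss-Newton step for minimizing ||r||^2: solve (J^T J) s == -J^T r
gauss_newton_step = np.linalg.solve(Jv.T @ Jv, -Jv.T @ r(v))

print(np.allclose(newton_step, gauss_newton_step))  # the two steps coincide
```

Multiplying the Gauss-Newton normal equations by J⁻ᵀ recovers the Newton system, which is why the two steps agree whenever J is invertible.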
Another feature shared with minimization algorithms is the need for some kind of step control. Typically, step control is based on the same methods used for minimization, except that it is applied to a merit function, usually the 2-norm squared of the residual, ||r(x)||_2^2.
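As a sketch of this idea (an assumption about the general scheme, not FindRoot's actual implementation), a damped Newton iteration can backtrack along the Newton step until the merit function ||r(x)||_2^2 decreases. The residual system and all function names below are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical residual system: exp(x) - y == 0, x + y - 2 == 0
def r(v):
    x, y = v
    return np.array([np.exp(x) - y, x + y - 2.0])

def J(v):
    x, y = v
    return np.array([[np.exp(x), -1.0],
                     [1.0,        1.0]])

def merit(v):
    # merit function: 2-norm squared of the residual, ||r(x)||^2
    return float(r(v) @ r(v))

def damped_newton(v, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        if merit(v) < tol:
            break
        step = np.linalg.solve(J(v), -r(v))
        t = 1.0
        # step control: halve the step until the merit function decreases
        while merit(v + t * step) >= merit(v) and t > 1e-8:
            t *= 0.5
        v = v + t * step
    return v

sol = damped_newton(np.array([0.0, 0.0]))
print(sol, merit(sol))
```

Practical codes use a sufficient-decrease condition rather than plain decrease, but the simple halving loop shows how a minimization-style line search carries over to root finding.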
|"Newton"||use the exact Jacobian or a finite difference approximation to solve for the step based on a locally linear model|
|"Secant"||work without derivatives by constructing a secant approximation to the Jacobian using past steps; requires two starting conditions in each dimension|
|"Brent"||method in one dimension that maintains bracketing of roots; requires two starting conditions that bracket a root|
Basic method choices for FindRoot.
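In one dimension, the derivative-free idea behind the "Secant" method reduces to the classical secant iteration: the derivative in Newton's step is replaced by a finite difference built from the two most recent iterates, which is why two starting conditions are required. The sketch below is a minimal illustration of that iteration, not FindRoot's implementation, which generalizes it to a secant approximation of the full Jacobian.

```python
import math

def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """One-dimensional secant iteration from two starting points."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            return x1
        # Newton's step with f'(x1) replaced by the secant slope
        # (f1 - f0) / (x1 - x0) built from the two latest iterates
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, f(x1)
    return x1

root = secant(math.cos, 1.0, 2.0)  # solves cos(x) == 0 near pi/2
print(root)
```

Note that, unlike Brent's method, this iteration does not maintain a bracket, so it can diverge from poor starting points; bracketing is exactly what the "Brent" choice adds in one dimension.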