This is documentation for Mathematica 6, which was
based on an earlier version of the Wolfram Language.
Mathematica Tutorial

# Introduction to Solving Nonlinear Equations

There are some close connections between finding a "local minimum" and solving a set of nonlinear equations. Given a set of n equations in n unknowns, seeking a solution of r(x)=0 is equivalent to minimizing the sum of squares r(x).r(x) when the residual is zero at the minimum, so there is a particularly close connection to the Gauss-Newton methods. In fact, the Gauss-Newton step for local minimization and the Newton step for nonlinear equations are exactly the same. Also, for a smooth function, "Newton's method" for local minimization is the same as Newton's method applied to the nonlinear equations ∇f=0. Not surprisingly, many aspects of the algorithms are similar; however, there are also important differences.
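The equivalence of the two steps can be checked numerically. Since this tutorial is about the Wolfram Language, the following Python sketch is purely illustrative; the 2x2 system r(x) and the starting point are hypothetical, chosen only to show that for a square, nonsingular Jacobian the Newton step for r(x)=0 and the Gauss-Newton step for minimizing r(x).r(x) coincide.

```python
import numpy as np

# Hypothetical system r(x) = 0 (not from the tutorial):
#   r1 = x0^2 + x1^2 - 4,  r2 = x0 - x1
def r(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])

def jac(x):
    return np.array([[2.0*x[0], 2.0*x[1]],
                     [1.0,      -1.0]])

x = np.array([3.0, 1.0])
J, res = jac(x), r(x)

# Newton step for the equations: solve J s = -r
newton_step = np.linalg.solve(J, -res)

# Gauss-Newton step for minimizing r.r: solve (J^T J) s = -J^T r
gn_step = np.linalg.solve(J.T @ J, -J.T @ res)

# With J square and nonsingular, (J^T J)^{-1} J^T = J^{-1},
# so the two steps are identical
print(np.allclose(newton_step, gn_step))  # True
```

The identity holds exactly when the Jacobian is square and invertible; for overdetermined least-squares problems only the Gauss-Newton form applies.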
Another thing these methods have in common with minimization algorithms is the need for some kind of "step control". Typically, step control is based on the same methods as for minimization, except that they are applied to a merit function, usually the smooth 2-norm squared, r(x).r(x).
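One simple form of step control is backtracking on the merit function: take the full Newton step, and halve it until the merit function decreases. The Python sketch below reuses a hypothetical 2x2 system; it is a minimal illustration of the idea, not FindRoot's actual step-control algorithm.

```python
import numpy as np

# Hypothetical system (not from the tutorial); root at (sqrt 2, sqrt 2)
def r(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])

def jac(x):
    return np.array([[2.0*x[0], 2.0*x[1]],
                     [1.0,      -1.0]])

def damped_newton(r, jac, x, tol=1e-12, max_iter=50):
    """Newton iteration with backtracking on the merit function
    m(x) = r(x).r(x) -- a minimal sketch of step control."""
    for _ in range(max_iter):
        res = r(x)
        m = res @ res
        if m < tol:
            return x
        step = np.linalg.solve(jac(x), -res)
        # halve the step until the merit function decreases
        t = 1.0
        while t > 1e-8:
            x_new = x + t * step
            if r(x_new) @ r(x_new) < m:
                break
            t *= 0.5
        x = x_new
    return x

root = damped_newton(r, jac, np.array([3.0, 1.0]))
print(root)  # close to (1.41421356, 1.41421356)
```

Far from the root the damping guards against steps that would increase the residual; near the root, t = 1 is always accepted and the iteration reduces to pure Newton with quadratic convergence.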
| Method | Description |
| --- | --- |
| "Newton" | use the exact Jacobian or a finite difference approximation to solve for the step based on a locally linear model |
| "Secant" | work without derivatives by constructing a secant approximation to the Jacobian using n past steps; requires two starting conditions in each dimension |
| "Brent" | method in one dimension that maintains bracketing of roots; requires two starting conditions that bracket a root |

Basic method choices for FindRoot.
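To make the derivative-free idea behind the secant method concrete, here is an illustrative one-dimensional Python sketch (not the Wolfram Language implementation): the derivative is replaced by the finite slope through the two most recent points, which is why two starting conditions are required.

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """One-dimensional secant iteration: no derivatives of f are
    needed, but two starting conditions are required."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            return x1
        # secant step: replace f'(x1) by the slope (f1 - f0)/(x1 - x0)
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, f(x1)
    return x1

# hypothetical example: root of x^2 - 2 starting from x = 1 and x = 2
root = secant(lambda x: x**2 - 2.0, 1.0, 2.0)
print(root)  # close to sqrt(2) = 1.41421356...
```

Unlike "Brent", this iteration does not maintain a bracket around the root, so convergence is not guaranteed for badly chosen starting conditions; bracketing methods trade some speed for that robustness.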