Trust Region Methods
A trust region method maintains a region around the current search point within which the quadratic model for local minimization is "trusted" to be correct; steps are chosen to stay within this region. The size of the region is modified during the search, based on how well the model agrees with actual function evaluations.
Typically, the trust region is taken to be an ellipsoid such that ‖D p‖ ≤ Δ, where D is a diagonal scaling (often taken from the diagonal of the approximate Hessian) and Δ is the trust region radius, which is updated at each step.
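As a rough illustration of the scaled constraint ‖D p‖ ≤ Δ, the sketch below checks whether a proposed step lies in the region and, if not, simply rescales it back to the boundary. This is a simplification for illustration only: as described next, the actual method chooses an approximate minimizer of the quadratic model on the boundary rather than a plain rescaling, and none of the names here come from Mathematica.

```python
import numpy as np

# Illustrative sketch (not Mathematica's implementation): restrict a proposed
# step p to the ellipsoidal trust region ||D p|| <= delta, where D is a
# diagonal scaling, by rescaling p back to the boundary if it lies outside.
def restrict_to_region(p, D_diag, delta):
    scaled_norm = np.linalg.norm(D_diag * p)   # ||D p||
    if scaled_norm <= delta:
        return p                               # model step is already trusted
    return p * (delta / scaled_norm)           # take a step to the boundary

p = np.array([3.0, 4.0])        # proposed step, ||p|| = 5
D_diag = np.array([1.0, 1.0])   # identity scaling for simplicity
print(restrict_to_region(p, D_diag, 5.0))   # inside (on the boundary): unchanged
print(restrict_to_region(p, D_diag, 2.5))   # outside: rescaled to length 2.5
```

With a nontrivial D, the region is a true ellipsoid, so the allowed step length differs by direction.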
When the step based on the quadratic model alone lies within the trust region, then, assuming the function value gets smaller, that step will be chosen. Thus, just as with "line search" methods, the step control does not interfere with the convergence of the algorithm near a minimum where the quadratic model is good. When the step based on the quadratic model lies outside the trust region, a step just up to the boundary of the trust region is chosen, such that the step is an approximate minimizer of the quadratic model on the boundary of the trust region.
Once a step p_k is chosen, the function is evaluated at the new point, and the actual function value is checked against the value predicted by the quadratic model. What is actually computed is the ratio of actual to predicted reduction, ρ_k = (f(x_k) - f(x_k + p_k)) / (q_k(0) - q_k(p_k)), where q_k denotes the quadratic model at step k. If ρ_k is close to 1, then the quadratic model is quite a good predictor and the region can be increased in size. On the other hand, if ρ_k is too small, the region is decreased in size. When ρ_k is below a threshold η, the step is rejected and recomputed. You can control this threshold with the method option "AcceptableStepRatio"→η. Typically the value of η is quite small, to avoid rejecting steps that would make progress toward a minimum. However, if obtaining the quadratic model at a point is quite expensive (e.g., evaluating the Hessian takes a relatively long time), a larger value of η will reduce the number of Hessian evaluations, but it may increase the number of function evaluations.
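The ρ-based accept/shrink/expand logic can be sketched as follows. The shrink and expand thresholds (0.25, 0.75) and factors (1/4, 2) are common textbook choices and are assumptions here; they are not necessarily the values Mathematica uses internally.

```python
import numpy as np

# Sketch of one iteration of rho-based trust region step control.  Constants
# 0.25/0.75 and the factors 1/4 and 2 are textbook choices, assumed here.
def trust_region_update(f, x, p, predicted_reduction, delta, eta=1e-4):
    actual_reduction = f(x) - f(x + p)
    rho = actual_reduction / predicted_reduction
    if rho < 0.25:
        delta *= 0.25                # poor model agreement: shrink the region
    elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
        delta *= 2.0                 # good agreement on the boundary: expand
    if rho > eta:
        x = x + p                    # accept the step
    # otherwise the step is rejected and recomputed with the smaller radius
    return x, delta, rho

# For an exactly quadratic f the model predicts perfectly, so rho == 1:
f = lambda x: float(x @ x)
x0 = np.array([2.0, 2.0])
p = -x0                              # Newton step for this f
x1, delta, rho = trust_region_update(f, x0, p,
                                     predicted_reduction=f(x0) - 0.0,
                                     delta=float(np.linalg.norm(p)))
print(rho, x1)                       # 1.0 [0. 0.]
```

Since ρ = 1 and the step reaches the boundary, the region is expanded and the step accepted.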
To start the trust region algorithm, an initial radius Δ_0 needs to be determined. By default, Mathematica uses the size of the step based on the model (1), restricted by a fairly loose relative step size limit. However, in some cases, this may take you out of the region you are primarily interested in, so you can specify a starting radius Δ_0 using the option "StartingScaledStepSize"→Δ_0. The option contains Scaled in its name because the trust region radius works through the diagonal scaling D, so this is not an absolute step size.
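A small numeric illustration of why the radius is "scaled" rather than absolute: with D built from the Hessian diagonal (a common scaling choice, assumed here for illustration), the same Δ permits different amounts of absolute movement along different coordinates.

```python
import numpy as np

# The radius delta bounds the *scaled* step ||D p||, so the same delta allows
# different absolute movement per coordinate.  Taking D from the square root
# of a (hypothetical) Hessian diagonal is an assumed, common scaling choice.
hessian_diag = np.array([100.0, 1.0])   # high curvature in the first direction
D = np.sqrt(hessian_diag)
delta = 1.0

# Largest absolute step allowed along each coordinate axis alone,
# from |D_i * t| <= delta  =>  |t| <= delta / D_i:
max_abs_step = delta / D
print(max_abs_step)   # smaller absolute steps where curvature is high
```

Here a unit scaled radius allows an absolute step of only 0.1 in the steep direction but 1.0 in the flat one.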
This loads a package that contains some utility functions: 
This shows the steps and evaluations taken during a search for a local minimum of a function similar to Rosenbrock's function, using Newton's method with trust region step control.
The plot looks quite bad because the search has extended over such a large region that the fine structure of the function cannot really be seen on that scale.
This shows the steps and evaluations for the same function, but with a restricted initial trust region radius Δ_0. Here the search stays much closer to the initial condition and follows the narrow valley.
It is also possible to set an overall maximum bound for the trust region radius by using the option "MaxScaledStepSize"→Δ_max, so that for any step, Δ_k ≤ Δ_max.
Trust region methods can also have difficulties with functions that are not smooth due to numerical roundoff in the function computation. When the function is not sufficiently smooth, the radius of the trust region keeps being reduced until eventually it is effectively zero.
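This failure mode can be demonstrated with a toy one-dimensional search. Quantizing the function values stands in for roundoff error (an illustrative model, not Mathematica's arithmetic): near the minimum the true reduction falls below the quantization step, so the computed reduction is zero, ρ_k fails the acceptance test, and the radius collapses.

```python
import numpy as np

# Quantizing f's values mimics roundoff: near the minimum of f(x) = x^2 the
# true reduction (~1e-12 here) is below the quantization step (1e-10), so the
# measured reduction is 0, every step is rejected, and delta shrinks to
# (effectively) zero.  This is a toy model, not Mathematica's behavior.
def rounded_f(x, eps=1e-10):
    return np.round(x * x / eps) * eps

x, delta = 1e-6, 1e-6
for _ in range(50):
    p = -min(abs(x), delta) * np.sign(x)   # Newton step for x^2, capped at delta
    predicted = x * x - (x + p) ** 2       # reduction predicted by the exact model
    actual = rounded_f(x) - rounded_f(x + p)
    if predicted <= 0 or actual / predicted < 1e-4:
        delta *= 0.25                      # step rejected: shrink the region
    else:
        x = x + p                          # step accepted
print(delta)                               # effectively zero after 50 rejections
```

Every proposed step is rejected because the quantized function never appears to decrease, so the radius is quartered 50 times.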
This finds a local minimum for the function using the default method. The default method in this case is the (trust region) Levenberg-Marquardt method, since the function is a sum of squares.
The message means that the size of the trust region has become effectively zero relative to the size of the search point, so steps taken would have negligible effect.
Note: on some platforms the message may not show up, because the conditions leading to it depend on numerical uncertainty, which can vary with subtle differences in machine arithmetic between platforms.
This makes a plot of the variation of the function along the x_1 direction at the final point found.
The plot along one direction makes it fairly clear why no more improvement is possible. Part of the reason the Levenberg-Marquardt method gets into trouble in this situation is that convergence is relatively slow because the residual is nonzero at the minimum. With Newton's method, the convergence is faster, and the full quadratic model allows for a better estimate of step size, so that FindMinimum can have more confidence that the default tolerances have been satisfied.
The following table summarizes the options for controlling trust region step control.
  
"AcceptableStepRatio"  1/10000  the threshold , such that when the actual to prediction reduction _{k}≥, the search is moved to the computed step 
"MaxScaledStepSize"   the value _{max}, such that the trust region size _{k}<_{max} for all steps 
"StartingScaledStepSize"  Automatic  the initial trust region size _{0} 
Method options for "StepControl">"TrustRegion".