Test Problems
All of the test problems presented in [MGH81] have been coded into Mathematica in the Optimization`UnconstrainedProblems` package. A data structure is used so that the problems can be processed for solution and testing with FindMinimum and FindRoot in a seamless way. The lists of problems for FindMinimum and FindRoot are in $FindMinimumProblems and $FindRootProblems, respectively, and a problem can be accessed using GetFindMinimumProblem and GetFindRootProblem.
$FindMinimumProblems  List of problems that are appropriate for FindMinimum 
GetFindMinimumProblem[prob]  Get the problem prob using the default size and starting values in a FindMinimumProblem data structure 
GetFindMinimumProblem[prob,{n,m}]  Get the problem prob with n variables such that it is a sum of m squares, in a FindMinimumProblem data structure 
GetFindMinimumProblem[prob,size,start]  Get the problem prob with the given size and starting value start in a FindMinimumProblem data structure 
FindMinimumProblem[f,vars,opts,prob,size]  A data structure that contains a minimization problem to be solved by FindMinimum 
Accessing FindMinimum problems.
$FindRootProblems  List of problems that are appropriate for FindRoot 
GetFindRootProblem[prob]  Get the problem prob using the default size and starting values in a FindRootProblem data structure 
GetFindRootProblem[prob,n]  Get the problem prob with n variables (and n equations) in a FindRootProblem data structure 
GetFindRootProblem[prob,n,start]  Get the problem prob with size n and starting value start in a FindRootProblem data structure 
FindRootProblem[f,vars,opts,prob,size]  A data structure that contains a root-finding problem to be solved by FindRoot 
Accessing FindRoot problems.
GetFindMinimumProblem and GetFindRootProblem both pass on options to be used by other commands. They also accept the option Variables->vars, which specifies what variables to use for the problems.
Variables  X_#&  A function that is applied to the integers 1, …, n to generate the variables for a problem with n variables, or a list of length n containing the variables 
Specifying variable names.
This gets the Beale problem in a FindMinimumProblem data structure.
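A sketch of the corresponding input (the symbol p is an assumption; Beale is the problem name from [MGH81] as coded in the package):

```mathematica
(* Load the package of [MGH81] test problems *)
Needs["Optimization`UnconstrainedProblems`"]

(* Beale problem with default size and starting values *)
p = GetFindMinimumProblem[Beale]
```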
This gets the Powell singular function problem in a FindRootProblem data structure.
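The input might look like the following sketch (the symbol q and the exact problem name PowellSingular are assumptions):

```mathematica
Needs["Optimization`UnconstrainedProblems`"]

(* Powell singular function with default size and starting values *)
q = GetFindRootProblem[PowellSingular]
```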
Once you have a
FindMinimumProblem or
FindRootProblem object, in addition to simply solving the problem, there are various tests that you can run.
ProblemSolve[p,opts]  Solve the problem in p, giving the same output as FindMinimum or FindRoot 
ProblemStatistics[p,opts]  Solve the problem, giving a list {sol, stats}, where sol is the output of ProblemSolve[p] and stats is a list of rules indicating the number of steps and evaluations used 
ProblemTime[p,opts]  Solve the problem, giving a list {sol, Time->time}, where sol is the output of ProblemSolve[p] and time is the time taken to solve the problem; if time is less than a second, the problem will be solved multiple times to get an average timing 
ProblemTest[p,opts]  Solve the problem, giving a list of rules including the step and evaluation statistics and time from ProblemStatistics[p] and ProblemTime[p], along with rules indicating the accuracy and precision of the solution as compared with a reference solution 
FindMinimumPlot[p,opts]  Plot the steps and evaluation points for solving a FindMinimumProblem p 
FindRootPlot[p,opts]  Plot the steps and evaluation points for solving a FindRootProblem p 
Operations with FindMinimumProblem and FindRootProblem data objects.
Any of the commands shown previously can take options that are passed directly to FindMinimum or FindRoot, overriding any options for these functions that may have been specified when the problem was set up.
This uses FindRoot to solve the Powell singular function problem and gives the root.
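A sketch of such an input (the problem name PowellSingular is an assumption):

```mathematica
Needs["Optimization`UnconstrainedProblems`"]
q = GetFindRootProblem[PowellSingular];

(* Same output form as FindRoot: a list of rules for the variables *)
ProblemSolve[q]
```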
This does the same as above, but includes statistics on steps and evaluations required.
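This might be sketched as follows, under the same assumed problem name:

```mathematica
Needs["Optimization`UnconstrainedProblems`"]
q = GetFindRootProblem[PowellSingular];

(* {sol, stats}: the solution plus rules counting steps and evaluations *)
ProblemStatistics[q]
```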
This uses FindMinimum to solve the Beale problem and averages the timing over several trials to get the average time it takes to solve the problem.
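A sketch of the timing input (the symbol p and problem name Beale are assumptions):

```mathematica
Needs["Optimization`UnconstrainedProblems`"]
p = GetFindMinimumProblem[Beale];

(* {sol, Time -> time}; fast problems are solved repeatedly to average the timing *)
ProblemTime[p]
```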
This uses FindMinimum to solve the Beale problem, compares the result with a reference solution, and gives a list of rules indicating the results of the test.
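This can be sketched as:

```mathematica
Needs["Optimization`UnconstrainedProblems`"]
p = GetFindMinimumProblem[Beale];

(* Rules for steps, evaluations, time, and accuracy/precision
   relative to a reference solution *)
ProblemTest[p]
```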
ProblemTest gives a way to easily compare two different methods for the same problem.
This uses FindMinimum to solve the Beale problem using Newton's method, compares the result with a reference solution, and gives a list of rules indicating the results of the test.
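A sketch of the same test with Newton's method, passing a Method option that overrides any method set up with the problem:

```mathematica
Needs["Optimization`UnconstrainedProblems`"]
p = GetFindMinimumProblem[Beale];

(* Options given here are passed through to FindMinimum *)
ProblemTest[p, Method -> "Newton"]
```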
Most of the rules returned by these functions are self-explanatory, but a few require some description. Here is a table clarifying those rules.
"FunctionAccuracy"  The accuracy of the function value, -Log[10, error in f] 
"FunctionPrecision"  The precision of the function value, -Log[10, relative error in f] 
"SpatialAccuracy"  The accuracy in the position of the minimizer or root, -Log[10, error in x] 
"SpatialPrecision"  The precision in the position of the minimizer or root, -Log[10, relative error in x] 
"Messages"  A list of messages issued during the solution of the problem 
A very useful comparison is to see how a variety of methods perform on a particular problem. This is easy to do by setting up a FindMinimumProblem object and mapping a problem test over a list of methods.
This gets the Chebyquad problem. The output has been abbreviated to save space.
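A sketch of the input (the problem name Chebyquad and symbol cq are assumptions):

```mathematica
Needs["Optimization`UnconstrainedProblems`"]

(* Chebyquad problem; Short keeps the large output brief *)
cq = GetFindMinimumProblem[Chebyquad];
Short[cq]
```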
Here is a list of possible methods. 
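One plausible list, using method names that FindMinimum accepts:

```mathematica
(* Methods to compare on a single problem *)
methods = {"Gradient", "ConjugateGradient", "QuasiNewton", "Newton",
   "LevenbergMarquardt"};
```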
This makes a table comparing the different methods in terms of accuracy and computation time.
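A sketch of how such a table could be built, using the "SpatialAccuracy" and "Time" rules described earlier (the problem name Chebyquad is an assumption):

```mathematica
Needs["Optimization`UnconstrainedProblems`"]
cq = GetFindMinimumProblem[Chebyquad];
methods = {"Gradient", "ConjugateGradient", "QuasiNewton", "Newton",
   "LevenbergMarquardt"};

(* One row per method: spatial accuracy and timing from ProblemTest *)
TableForm[
 Map[{"SpatialAccuracy", "Time"} /. ProblemTest[cq, Method -> #] &, methods],
 TableHeadings -> {methods, {"SpatialAccuracy", "Time"}}]
```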
It is possible to generate tables of how a particular method does on a variety of problems by mapping over the names in
$FindMinimumProblems or
$FindRootProblems.
This sets up a function that tests a problem with FindMinimum using its default settings, except with a large setting for MaxIterations so that the default (Levenberg-Marquardt) method can run to convergence.
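A sketch of such a test function (the name TestDefault is hypothetical):

```mathematica
Needs["Optimization`UnconstrainedProblems`"]

(* Test one named problem with the default method; allow many iterations *)
TestDefault[prob_] := Flatten[{prob,
   {"SpatialAccuracy", "Time"} /.
    ProblemTest[GetFindMinimumProblem[prob], MaxIterations -> 1000]}]
```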
This makes a table showing some of the results from testing all the problems in $FindMinimumProblems. It may take several minutes to run.
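This might be sketched as follows, mapping a self-contained test over the problem names (the rule name "SpatialAccuracy" comes from the table earlier in this section):

```mathematica
Needs["Optimization`UnconstrainedProblems`"]

(* One row per problem in $FindMinimumProblems; this can take several minutes *)
TableForm[
 Map[{#, "SpatialAccuracy" /.
     ProblemTest[GetFindMinimumProblem[#], MaxIterations -> 1000]} &,
  $FindMinimumProblems],
 TableHeadings -> {None, {"Problem", "SpatialAccuracy"}}]
```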
The two cases where the spatial accuracy is shown as ERROR are linear problems, which do not have an isolated minimizer. The one case with quite poor spatial accuracy has multiple minimizers, and the method went to a different minimum than the reference one. Many of these functions have multiple local minima, so be aware that the error may be reported as large only because a method went to a different minimum than the reference one.