Test Problems

All the test problems presented in [MGH81] have been coded into the Wolfram Language in the Optimization`UnconstrainedProblems` package. A data structure is used so that the problems can be processed for solution and testing with FindMinimum and FindRoot in a seamless way. The lists of problems appropriate for FindMinimum and FindRoot are in $FindMinimumProblems and $FindRootProblems, respectively, and a problem can be accessed using GetFindMinimumProblem and GetFindRootProblem.
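For example, once the package is loaded, the problem lists can be examined directly. The following is a minimal sketch; the package context shown is an assumption and may differ in your installation.

<< Optimization`UnconstrainedProblems`    (* load the test problem package; context name assumed *)
Short[$FindMinimumProblems]               (* abbreviated list of FindMinimum test problems *)
Short[$FindRootProblems]                  (* abbreviated list of FindRoot test problems *)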

$FindMinimumProblems    list of problems that are appropriate for FindMinimum
GetFindMinimumProblem[prob]    get the problem prob using the default size and starting values in a data structure
GetFindMinimumProblem[prob,{n,m}]    get the problem prob with n variables such that it is a sum of m squares in a data structure
GetFindMinimumProblem[prob,size,start]    get the problem prob with given size and starting value start in a data structure
FindMinimumProblem[f,vars,opts,prob,size]    a data structure that contains a minimization problem to be solved by FindMinimum

Accessing FindMinimum problems.
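As a rough sketch of these call forms (Beale and Chebyquad are problem names used elsewhere in this section; the sizes shown are illustrative assumptions, not prescribed values):

p = GetFindMinimumProblem[Beale];              (* default size and starting values *)
q = GetFindMinimumProblem[Chebyquad, {9, 9}];  (* 9 variables written as a sum of 9 squares; size assumed valid *)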

$FindRootProblems    list of problems that are appropriate for FindRoot
GetFindRootProblem[prob]    get the problem prob using the default size and starting values in a data structure
GetFindRootProblem[prob,n]    get the problem prob with n variables (and n equations) in a data structure
GetFindRootProblem[prob,n,start]    get the problem prob with size n and starting value start in a data structure
FindRootProblem[f,vars,opts,prob,size]    a data structure that contains a root-finding problem to be solved by FindRoot

Accessing FindRoot problems.
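A corresponding sketch for root-finding problems; PowellSingular is assumed to be the symbol for the Powell singular function used later in this section, and the size 4 matches its standard four-variable form:

r = GetFindRootProblem[PowellSingular];        (* default size and starting values *)
r4 = GetFindRootProblem[PowellSingular, 4];    (* 4 variables and 4 equations *)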

GetFindMinimumProblem and GetFindRootProblem both take options, which are passed on to the other commands used with the problems. They also accept the option Variables->vars, which specifies the variables to use for the problems.

option name    default value
Variables    X[#]&    a function that is applied to the integers 1, …, n to generate the variables for a problem with n variables, or a list of length n containing the variables

Specifying variable names.
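For instance, a sketch of overriding the default variable generator (the names y and {u, v} are arbitrary choices; Beale is a two-variable problem, so an explicit list of two variables fits it):

GetFindMinimumProblem[Beale, Variables -> (y[#] &)]   (* variables y[1], y[2] instead of the default *)
GetFindMinimumProblem[Beale, Variables -> {u, v}]     (* explicit list of variables *)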

This loads the package.
In[1]:=
This gets the Beale problem in a data structure.
In[2]:=
Out[2]=
This gets the Powell singular function problem in a data structure.
In[3]:=
Out[3]=

Once you have a FindMinimumProblem or FindRootProblem object, in addition to simply solving the problem, there are various tests that you can run.

ProblemSolve[p,opts]    solve the problem in p, giving the same output as FindMinimum or FindRoot
ProblemStatistics[p,opts]    solve the problem, giving a list containing sol, the output of ProblemSolve[p], and evals, a list of rules indicating the number of steps and evaluations used
ProblemTime[p,opts]    solve the problem, giving a list containing sol, the output of ProblemSolve[p], and the time taken to solve the problem; if the time is less than a second, the problem is solved multiple times to get an average timing
ProblemTest[p,opts]    solve the problem, giving a list of rules including the step and evaluation statistics and time from ProblemStatistics[p] and ProblemTime[p], along with rules indicating the accuracy and precision of the solution as compared with a reference solution
FindMinimumPlot[p,opts]    plot the steps and evaluation points for solving the FindMinimumProblem p
FindRootPlot[p,opts]    plot the steps and evaluation points for solving the FindRootProblem p

Operations with FindMinimumProblem and FindRootProblem data objects.
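A sketch of these operations applied to a problem object; p is assumed to be a FindMinimumProblem such as the Beale problem obtained above:

p = GetFindMinimumProblem[Beale];
ProblemSolve[p]          (* same form of output as FindMinimum *)
ProblemStatistics[p]     (* solution together with step and evaluation counts *)
ProblemTime[p]           (* solution together with the (possibly averaged) timing *)
ProblemTest[p]           (* rules for statistics, timing, accuracy, and precision *)
FindMinimumPlot[p]       (* steps and evaluation points of the search *)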

Any of the commands shown above can take options that are passed on directly to FindMinimum or FindRoot, overriding any options for these functions that may have been specified when the problem was set up.
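For example, a Method option given directly to one of these commands takes precedence over whatever was stored with the problem; a minimal sketch (the "Newton" setting is just an illustration):

ProblemSolve[p, Method -> "Newton"]   (* overrides any Method specified when p was set up *)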

This uses FindRoot to solve the Powell singular function problem and gives the root.
In[4]:=
Out[4]=
This does the same as the previous example, but includes statistics on steps and evaluations required.
In[5]:=
Out[5]=
This uses FindMinimum to solve the Beale problem and averages the timing over several trials to get the average time it takes to solve the problem.
In[6]:=
Out[6]=
This uses FindMinimum to solve the Beale problem, compares the result with a reference solution, and gives a list of rules indicating the results of the test.
In[7]:=
Out[7]=

ProblemTest gives a way to easily compare two different methods for the same problem.
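A sketch of such a comparison, running ProblemTest on the same problem with two different Method settings (both method names are standard FindMinimum methods used here as an illustration):

p = GetFindMinimumProblem[Beale];
{ProblemTest[p, Method -> "LevenbergMarquardt"],
 ProblemTest[p, Method -> "Newton"]}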

This uses FindMinimum to solve the Beale problem using Newton's method, compares the result with a reference solution, and gives a list of rules indicating the results of the test.
In[8]:=
Out[8]=

Most of the rules returned by these functions are self-explanatory, but a few require some description. Here is a table clarifying those rules.

"FunctionAccuracy"the accuracy of the function value -Log[10,error in f]
"FunctionPrecision"the precision of the function value -Log[10,relative error in f]
"SpatialAccuracy"the accuracy in the position of the minimizer or root -Log[10,error in x]
"SpatialPrecision"the precision in the position of the minimizer or root -Log[10,relative error in x]
"Messages"a list of messages issued during the solution of the problem

A very useful comparison is to see how a variety of methods perform on a particular problem. This is easy to do by setting up a FindMinimumProblem or FindRootProblem object and mapping a problem test over a list of methods.
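A sketch of this pattern; the method names are standard FindMinimum Method settings, and the rule names are from the table above (the Chebyquad example that follows uses the same idea):

cheb = GetFindMinimumProblem[Chebyquad];
methods = {"Newton", "QuasiNewton", "LevenbergMarquardt", "ConjugateGradient"};
TableForm[
 Map[{"FunctionAccuracy", "SpatialAccuracy"} /. ProblemTest[cheb, Method -> #] &, methods],
 TableHeadings -> {methods, {"FunctionAccuracy", "SpatialAccuracy"}}]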

This gets the Chebyquad problem. The output has been abbreviated to save space.
In[9]:=
Out[9]//Short=
Here is a list of possible methods.
In[10]:=
This makes a table comparing the different methods in terms of accuracy and computation time.
In[11]:=
Out[11]//TableForm=

It is possible to generate tables of how a particular method performs on a variety of problems by mapping over the names in $FindMinimumProblems or $FindRootProblems.
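A sketch of such a sweep, restricted to a few problems so that it runs quickly; the large MaxIterations setting mirrors the example below, and "SpatialAccuracy" is one of the rule names described above:

test[prob_] := {prob,
   "SpatialAccuracy" /. ProblemTest[GetFindMinimumProblem[prob], MaxIterations -> 1000]}
TableForm[test /@ Take[$FindMinimumProblems, 5]]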

This sets up a function that tests a problem with FindMinimum using its default settings except with a large setting for MaxIterations so that the default (LevenbergMarquardt) method can run to convergence.
In[12]:=
This makes a table showing some of the results from testing all the problems in $FindMinimumProblems. It may take several minutes to run.
In[13]:=
Out[13]//TableForm=

The two cases where the spatial accuracy is shown as ∞ are for linear problems, which do not have an isolated minimizer. The one case with quite poor spatial accuracy has multiple minimizers, and the method went to a different minimum than the reference one. Many of these functions have multiple local minima, so be aware that the error may be reported as large only because a method went to a different minimum than the reference one.