Numerical Nonlinear Global Optimization
Introduction
Numerical algorithms for constrained nonlinear optimization can be broadly categorized into gradient-based methods and direct search methods. Gradient-based methods use first derivatives (gradients) or second derivatives (Hessians). Examples are the sequential quadratic programming (SQP) method, the augmented Lagrangian method, and the (nonlinear) interior point method. Direct search methods do not use derivative information. Examples are the Nelder–Mead method, genetic algorithms, differential evolution, and simulated annealing. Direct search methods tend to converge more slowly, but can be more tolerant of noise in the function and constraints.
Typically, algorithms only build up a local model of the problem. Furthermore, many such algorithms insist on a certain decrease of the objective function, or of a merit function that combines the objective and constraints, to ensure convergence of the iterative process. Such algorithms will, if convergent, only find local optima, and are called local optimization algorithms. In the Wolfram Language local optimization problems can be solved using FindMinimum.
Global optimization algorithms, on the other hand, attempt to find the global optimum, typically by allowing decrease as well as increase of the objective/merit function. Such algorithms are usually computationally more expensive. Global optimization problems can be solved exactly using Minimize or numerically using NMinimize.
This solves a nonlinear programming problem using Minimize, which gives an exact solution.
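For instance, a small problem of this kind (the quadratic objective and disk constraint below are illustrative, not the original example) can be solved exactly:

    Minimize[{(x - 1)^2 + (y - 1)^2, x^2 + y^2 <= 1}, {x, y}]
    (* expected: minimum value 3 - 2 Sqrt[2] at x = y = 1/Sqrt[2] *)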
The NMinimize Function
NMinimize and NMaximize implement several algorithms for finding constrained global optima. The methods are flexible enough to cope with functions that are not differentiable or continuous and are not easily trapped by local optima.
Finding a global optimum can be arbitrarily difficult, even without constraints, and so the methods used may fail. It may frequently be useful to optimize the function several times with different starting conditions and take the best of the results.
The constraints to NMinimize and NMaximize may be either a list or a logical combination of equalities, inequalities, and domain specifications. Equalities and inequalities may be nonlinear. Specify a domain for a variable using Element, for example, Element[x,Integers] or x∈Integers. Variables must be either integers or real numbers, and will be assumed to be real numbers unless specified otherwise. Constraints are generally enforced by adding penalties when points leave the feasible region.
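For example, constraints can be given as a logical combination mixing nonlinear inequalities with a domain specification (the objective and constraints here are illustrative):

    NMinimize[{x^2 + (y - 1/2)^2, x + y >= 1 && x^2 + y^2 <= 4 && Element[x, Integers]}, {x, y}]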
In order for NMinimize to work, it needs a rectangular initial region in which to start. This is similar to giving other numerical methods a starting point or starting points. The initial region is specified by giving each variable a finite upper and lower bound. This is done by including a ≤ x ≤ b in the constraints, or {x, a, b} in the variables. If both are given, the bounds in the variables are used for the initial region, and the constraints are just used as constraints. If no initial region is specified for a variable x, the default initial region of -1 ≤ x ≤ 1 is used. Different variables can have initial regions defined in different ways.
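For example, a finite initial region can be supplied either through bound constraints or through the variable specification (both problems below are illustrative):

    NMinimize[{Sin[x] + Cos[2 y], -3 <= x <= 3 && -3 <= y <= 3}, {x, y}]
    NMinimize[{Sin[x] + Cos[2 y], x + y >= 0}, {{x, -3, 3}, {y, -3, 3}}]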
NMinimize and NMaximize have several optimization methods available: Automatic, "DifferentialEvolution", "NelderMead", "RandomSearch", and "SimulatedAnnealing". The optimization method is controlled by the Method option, which either takes the method as a string, or takes a list whose first element is the method as a string and whose remaining elements are method-specific options. All method-specific option left-hand sides should also be given as strings.
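For example, a method can be selected by name alone or together with method-specific options (the objective and settings below are illustrative):

    NMinimize[{100 (y - x^2)^2 + (1 - x)^2, -5 <= x <= 5 && -5 <= y <= 5}, {x, y}, Method -> "SimulatedAnnealing"]
    NMinimize[{100 (y - x^2)^2 + (1 - x)^2, -5 <= x <= 5 && -5 <= y <= 5}, {x, y}, Method -> {"RandomSearch", "SearchPoints" -> 100}]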
With the default method, NMinimize picks which method to use based on the type of problem. If the objective function and constraints are linear, LinearOptimization is used. If there are integer variables, or if the head of the objective function is not a numeric function, differential evolution is used. For everything else, it uses Nelder–Mead, but if Nelder–Mead does poorly, it switches to differential evolution.
Because the methods used by NMinimize may not improve every iteration, convergence is only checked after several iterations have occurred.
Numerical Algorithms for Constrained Global Optimization
Nelder–Mead
The Nelder–Mead method is a direct search method. For a function of n variables, the algorithm maintains a set of n+1 points forming the vertices of a polytope in n-dimensional space. This method is often termed the "simplex" method, which should not be confused with the well-known simplex method for linear programming.
At each iteration, the n+1 points form a polytope. The points are ordered so that f(x_1) ≤ f(x_2) ≤ ⋯ ≤ f(x_(n+1)). A new point is then generated to replace the worst point x_(n+1).
Let c be the centroid of the polytope consisting of the best n points, c = (x_1 + x_2 + ⋯ + x_n)/n. A trial point x_t is generated by reflecting the worst point through the centroid, x_t = c + α (c - x_(n+1)), where α > 0 is a parameter.
If the new point x_t is neither a new worst point nor a new best point, f(x_1) ≤ f(x_t) ≤ f(x_n), x_t replaces x_(n+1).
If the new point is better than the best point, f(x_t) < f(x_1), the reflection is very successful and can be carried out further to x_e = c + β (x_t - c), where β > 1 is a parameter used to expand the polytope. If the expansion is successful, f(x_e) < f(x_t), x_e replaces x_(n+1); otherwise the expansion failed, and x_t replaces x_(n+1).
If the new point x_t is worse than the second worst point, f(x_t) ≥ f(x_n), the polytope is assumed to be too large and needs to be contracted. A new trial point is defined as

x_c = c + γ (x_(n+1) - c) if f(x_t) ≥ f(x_(n+1)), or x_c = c + γ (x_t - c) if f(x_t) < f(x_(n+1)),

where 0 < γ < 1 is a parameter. If f(x_c) < Min(f(x_(n+1)), f(x_t)), the contraction is successful, and x_c replaces x_(n+1). Otherwise a further shrinking of the polytope toward the best point x_1 is carried out.
The process is assumed to have converged if the difference between the best function values in the new and old polytope, as well as the distance between the new best point and the old best point, are less than the tolerances provided by AccuracyGoal and PrecisionGoal.
Strictly speaking, Nelder–Mead is not a true global optimization algorithm; however, in practice it tends to work reasonably well for problems that do not have many local minima.
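A typical call selecting this method, with a few of the method options listed below set explicitly (the objective and settings are illustrative), looks like this:

    NMinimize[{(x^2 + y - 11)^2 + (x + y^2 - 7)^2, -5 <= x <= 5 && -5 <= y <= 5}, {x, y},
     Method -> {"NelderMead", "ReflectRatio" -> 2, "ShrinkRatio" -> 0.95, "ContractRatio" -> 0.95}]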
option name | default value | description |
"ContractRatio" | 0.5 | ratio used for contraction |
"ExpandRatio" | 2.0 | ratio used for expansion |
"InitialPoints" | Automatic | set of initial points |
"PenaltyFunction" | Automatic | function applied to constraints to penalize invalid points |
"PostProcess" | Automatic | whether to post-process using local search methods |
"RandomSeed" | 0 | starting value for the random number generator |
"ReflectRatio" | 1.0 | ratio used for reflection |
"ShrinkRatio" | 0.5 | ratio used for shrinking |
"Tolerance" | 0.001 | tolerance for accepting constraint violations |
Differential Evolution
Differential evolution is a simple stochastic function minimizer.
The algorithm maintains a population of m points, {x_1, x_2, …, x_m}, where typically m ≫ n, with n being the number of variables.
During each iteration of the algorithm, a new population of m points is generated. The jth new point is generated by picking three random points, x_u, x_v, and x_w, from the old population, and forming the mutated point x_s = x_w + s (x_u - x_v), where s is a real scaling factor. Then a new point x_new is constructed from x_j and x_s by taking the ith coordinate from x_s with probability ρ and otherwise taking the coordinate from x_j. If f(x_new) < f(x_j), then x_new replaces x_j in the population. The probability ρ is controlled by the "CrossProbability" option.
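As a sketch of this update rule, one generation of differential evolution for an unconstrained objective can be written as follows (a standalone toy implementation under the assumptions above, not the internal code used by NMinimize; the helper name deStep is made up for illustration):

    (* pop: list of points (vectors), f: objective taking a vector,
       s: scaling factor, rho: crossover probability *)
    deStep[pop_, f_, s_, rho_] := Module[{m = Length[pop]},
      Table[
       Module[{u, v, w, xs, xnew},
        {u, v, w} = RandomSample[pop, 3];   (* three random points from the old population *)
        xs = w + s (u - v);                 (* mutated point x_s = x_w + s (x_u - x_v) *)
        (* crossover: take each coordinate from x_s with probability rho, else from x_j *)
        xnew = MapThread[If[RandomReal[] < rho, #1, #2] &, {xs, pop[[j]]}];
        If[f[xnew] < f[pop[[j]]], xnew, pop[[j]]]   (* keep the better of x_new and x_j *)
        ],
       {j, m}]]

    (* usage: evolve a random population for 200 generations and report the best point found *)
    f[p_] := (p[[1]]^2 + p[[2]] - 11)^2 + (p[[1]] + p[[2]]^2 - 7)^2;
    First[SortBy[Nest[deStep[#, f, 0.6, 0.5] &, RandomReal[{-5, 5}, {20, 2}], 200], f]]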
The process is assumed to have converged if the difference between the best function values in the new and old populations, as well as the distance between the new best point and the old best point, are less than the tolerances provided by AccuracyGoal and PrecisionGoal.
The differential evolution method is computationally expensive, but is relatively robust and tends to work well for problems with many local minima.
option name | default value | description |
"CrossProbability" | 0.5 | probability that a gene is taken from xi |
"InitialPoints" | Automatic | set of initial points |
"PenaltyFunction" | Automatic | function applied to constraints to penalize invalid points |
"PostProcess" | Automatic | whether to post-process using local search methods |
"RandomSeed" | 0 | starting value for the random number generator |
"ScalingFactor" | 0.6 | scale applied to the difference vector in creating a mate |
"SearchPoints" | Automatic | size of the population used for evolution |
"Tolerance" | 0.001 | tolerance for accepting constraint violations |
DifferentialEvolution specific options.
Simulated Annealing
Simulated annealing is a simple stochastic function minimizer. It is motivated from the physical process of annealing, where a metal object is heated to a high temperature and allowed to cool slowly. The process allows the atomic structure of the metal to settle to a lower energy state, thus becoming a tougher metal. Using optimization terminology, annealing allows the structure to escape from a local minimum, and to explore and settle on a better, hopefully global, minimum.
At each iteration, a new point, x_new, is generated in the neighborhood of the current point, x. The radius of the neighborhood decreases with each iteration. The best point found so far, x_best, is also tracked.
If f(x_new) ≤ f(x_best), x_new replaces x_best and x. Otherwise, x_new replaces x with probability e^(b(i, Δf, f_0)). Here b is the function defined by the "BoltzmannExponent" option, i is the current iteration, Δf is the change in the objective function value, and f_0 is the value of the objective function from the previous iteration. The default function for b is -Δf log(i+1)/10.
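For example, the acceptance probability can be reshaped by supplying a custom exponent function of i, Δf, and f_0 through the "BoltzmannExponent" method option (the exponent function and objective below are illustrative):

    NMinimize[{x^2 + y^2 + 10 (2 - Cos[2 Pi x] - Cos[2 Pi y]), -5 <= x <= 5 && -5 <= y <= 5}, {x, y},
     Method -> {"SimulatedAnnealing", "BoltzmannExponent" -> Function[{i, df, f0}, -df/Exp[i/10]]}]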
Like the RandomSearch method, SimulatedAnnealing uses multiple starting points, and finds an optimum starting from each of them.
The default number of starting points, given by the option "SearchPoints", is Min[2 d, 50], where d is the number of variables.
For each starting point, this is repeated until the maximum number of iterations is reached, the method converges to a point, or the method stays at the same point consecutively for the number of iterations given by LevelIterations.
option name | default value | description |
"BoltzmannExponent" | Automatic | exponent of the probability function |
"InitialPoints" | Automatic | set of initial points |
"LevelIterations" | 50 | maximum number of iterations to stay at a given point |
"PenaltyFunction" | Automatic | function applied to constraints to penalize invalid points |
"PerturbationScale" | 1.0 | scale for the random jump |
"PostProcess" | Automatic | whether to post-process using local search methods |
"RandomSeed" | 0 | starting value for the random number generator |
"SearchPoints" | Automatic | number of initial points |
"Tolerance" | 0.001 | tolerance for accepting constraint violations |
SimulatedAnnealing specific options.
Random Search
The random search algorithm works by generating a population of random starting points and using a local optimization method from each of the starting points to converge to a local minimum. The best local minimum found is returned as the solution.
The possible local search methods are Automatic and "InteriorPoint". The default method is Automatic, which uses FindMinimum with unconstrained methods applied to a system with penalty terms added for the constraints. When Method is set to "InteriorPoint", a nonlinear interior-point method is used.
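For example, the local search method and the number of starting points can be set through the method-specific options (the objective and settings below are illustrative):

    NMinimize[{x^2 - 4 x + y^2 - y - x y, x + y <= 10 && x >= 0 && y >= 0}, {x, y},
     Method -> {"RandomSearch", "Method" -> "InteriorPoint", "SearchPoints" -> 50}]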
The default number of starting points, given by the option "SearchPoints", is Min[10 d, 100], where d is the number of variables.
Convergence for RandomSearch is determined by convergence of the local method for each starting point.
RandomSearch is fast, but does not scale very well with the dimension of the search space. It also suffers from many of the same limitations as FindMinimum. It is not well suited for discrete problems and others where derivatives or secants give little useful information about the problem.
option name | default value | description |
"InitialPoints" | Automatic | set of initial points |
"Method" | Automatic | which method to use for minimization |
"PenaltyFunction" | Automatic | function applied to constraints to penalize invalid points |
"PostProcess" | Automatic | whether to post-process using local search methods |
"RandomSeed" | 0 | starting value for the random number generator |
"SearchPoints" | Automatic | number of points to use for starting local searches |
"Tolerance" | 0.001 | tolerance for accepting constraint violations |