Exact Global Optimization
Introduction
Global optimization problems can be solved exactly using Minimize, Maximize, MinValue, MaxValue, ArgMin and ArgMax.
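For example (illustrative inputs; the expected outputs are shown):

In[1]:= Minimize[{x^2 + y^2, x + y >= 1}, {x, y}]
Out[1]= {1/2, {x -> 1/2, y -> 1/2}}

In[2]:= MaxValue[{x + y, x^2 + y^2 <= 1}, {x, y}]
Out[2]= Sqrt[2]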
Algorithms
Depending on the type of problem, several different algorithms can be used.
The most general method is based on the cylindrical algebraic decomposition (CAD) algorithm. It applies when the objective function and the constraints are real algebraic functions. The method can always compute global extrema (or extremal values, if the extrema are not attained). If parameters are present, the extrema can be computed as piecewise-algebraic functions of the parameters. A downside of the method is its high, doubly exponential complexity in the number of variables. In certain special cases not involving parameters, faster methods can be used.
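For instance, minimizing a quadratic involving a parameter p over an interval gives a piecewise-algebraic function of p (an illustrative input; the output is described rather than reproduced verbatim):

In[1]:= Minimize[{x^2 + p x, -1 <= x <= 1}, x]

The expected result is a Piecewise expression equivalent to the minimum value -p^2/4 attained at x = -p/2 for -2 <= p <= 2, the value 1 + p attained at x = 1 for p < -2, and the value 1 - p attained at x = -1 for p > 2.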
When the objective function and all constraints are linear, global extrema can be computed exactly using the simplex algorithm. This approach can also be used for linear-fractional objective functions.
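For example, a linear-fractional objective over a polytope (an illustrative problem; expected output shown):

In[1]:= Maximize[{(x + 2 y)/(x + y + 1), x >= 0 && y >= 0 && x + y <= 3}, {x, y}]
Out[1]= {3/2, {x -> 0, y -> 3}}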
For univariate problems, equation and inequality solving methods are used to find the constraint solution set and discontinuity points and zeros of the derivative of the objective function within the set.
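For example (an illustrative problem; expected output shown), the zeros of the derivative 3 x^2 - 3, namely x = -1 and x = 1, are compared with the endpoints of the constraint interval:

In[1]:= Minimize[{x^3 - 3 x, -3/2 <= x <= 2}, x]
Out[1]= {-2, {x -> 1}}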
Another approach to finding global extrema is to find all the local extrema, using the Lagrange multiplier or Karush–Kuhn–Tucker (KKT) conditions, and pick the smallest (or greatest). However, for a fully automatic method, there are additional complications. In addition to solving the necessary conditions for local extrema, it needs to check smoothness of the objective function and smoothness and nondegeneracy of the constraints. It also needs to check for extrema at the boundary of the set defined by the constraints and at infinity, if the set is unbounded. This in general requires exact solving of systems of equations and inequalities over the reals, which may lead to CAD computations that are harder than those in the CAD-based optimization algorithm. Currently the Wolfram Language uses Lagrange multipliers only for equational constraints within a bounded box or for a single inequality constraint with a bounded solution set. The method also requires that the number of stationary points and the number of singular points of the constraints be finite. An advantage of this method over the CAD-based algorithm is that it can solve some transcendental problems, as long as they lead to systems of equations that the Wolfram Language can solve.
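A transcendental problem of this kind, with an equational constraint restricted to a bounded box (an illustrative example; the expected result is shown):

In[1]:= Minimize[{Exp[x] + Exp[y], x + y == 1 && -10 <= x <= 10 && -10 <= y <= 10}, {x, y}]
Out[1]= {2 Sqrt[E], {x -> 1/2, y -> 1/2}}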
Optimization by Cylindrical Algebraic Decomposition
Examples
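As an illustrative example (not from the original set of examples), here is a problem whose constraint boundary has a cusp at the solution, so Lagrange multiplier-type conditions degenerate there, while the CAD-based method still finds the minimum (expected output shown):

In[1]:= Minimize[{x^2 + y^2, (x - 1)^3 >= y^2}, {x, y}]
Out[1]= {1, {x -> 1, y -> 0}}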
Customized CAD Algorithm for Optimization
The cylindrical algebraic decomposition (CAD) algorithm is available in the Wolfram Language directly as CylindricalDecomposition. The algorithm is described in more detail in "Real Polynomial Systems". Setting the CAD algorithm options described in "Real Polynomial Systems", in particular the CADConstruction option, may affect the performance of optimization functions. The following describes how to customize the CAD algorithm to solve the global optimization problem.
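For reference, a direct call to CylindricalDecomposition (expected output shown):

In[1]:= CylindricalDecomposition[x^2 + y^2 < 1, {x, y}]
Out[1]= -1 < x < 1 && -Sqrt[1 - x^2] < y < Sqrt[1 - x^2]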
Reduction to Minimizing a Coordinate Function
Suppose it is required to minimize an algebraic function f(x, p) over the solution set of algebraic constraints cons(x, p), where x is a vector of variables and p is a vector of parameters. Let y be a new variable. The minimization of f(x, p) over the constraints cons(x, p) is equivalent to the minimization of the coordinate function y over the semialgebraic set given by cons(x, p) && y == f(x, p).
If f happens to be a monotonic function of one variable x_k, a new variable is not needed, as x_k can be minimized (or maximized, if f is decreasing) instead.
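As a hand-built illustration of this reduction (not part of the automatic algorithm; the output is shown in an equivalent form), to minimize x + y over the unit disk, introduce a new variable t and compute a decomposition that is cylindrical with respect to t first:

In[1]:= CylindricalDecomposition[t == x + y && x^2 + y^2 <= 1, {t, x, y}]

The expected result is equivalent to -Sqrt[2] <= t <= Sqrt[2] && (t - Sqrt[2 - t^2])/2 <= x <= (t + Sqrt[2 - t^2])/2 && y == t - x, and the smallest value of t, namely -Sqrt[2], is the minimum of x + y over the disk.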
The Projection Phase of CAD
The variables are projected with x first, then the new variable y, and then the parameters p.
If algebraic functions are present, they are replaced with new variables; equations and inequalities satisfied by the new variables are added. The variables replacing algebraic functions are projected first. They also require special handling in the lifting phase of the algorithm.
Projection operator improvements by Hong, McCallum, and Brown can be used here, including the use of equational constraints. Note that if a new variable needs to be introduced, there is at least one equational constraint, namely y == f(x, p).
The Lifting Phase of CAD
The parameters are lifted first, constructing the description of the cells in terms of equations and inequalities involving algebraic functions. If there are constraints that depend only on parameters and you can determine that they are identically false over a parameter cell, you do not need to lift this cell further.
When lifting the minimization variable y, you start with the smallest values of y and proceed (lifting the remaining variables in a depth-first manner) until you find the first cell for which the constraints are satisfied. If this cell corresponds to a root y_0 of a projection polynomial in y, then y_0 is the minimum value of y, and the coordinates corresponding to x of any point in the cell give a point at which the minimum is attained. If parameters are present, you get a parametric description of a point in the cell in terms of roots of polynomials bounding the cell. If there are no parameters, you can simply give the sample point used by the CAD algorithm. If the first cell satisfying the constraints corresponds to an interval (y_1, y_2), where y_1 is a root of a projection polynomial in y, then y_1 is the infimum of the values of y, and the infimum is not attained. Finally, if the first cell satisfying the constraints corresponds to an interval (-∞, y_1), then the objective is unbounded from below.
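The last two cases correspond to an unattained infimum and to an objective unbounded from below; for example (illustrative inputs; expected outputs shown):

In[1]:= MinValue[{1/x, x > 1}, x]
Out[1]= 0

In[2]:= MinValue[{x^3, x < 1}, x]
Out[2]= -Infinity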
Linear Optimization
When the objective function and all constraints are linear, global extrema can be computed exactly using the simplex algorithm.
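For example (an illustrative problem; expected output shown):

In[1]:= Minimize[{2 x + 3 y, x + y >= 1 && x >= 0 && y >= 0}, {x, y}]
Out[1]= {2, {x -> 1, y -> 0}}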
Univariate Optimization
For univariate problems, equation and inequality solving methods are used to find the constraint solution set and discontinuity points and zeros of the derivative of the objective function within the set.
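For example, a univariate problem with a transcendental objective (an illustrative example; expected output shown, up to equivalent rearrangement of the value). The derivative Cos[x] - 1/2 has a single zero, x = Pi/3, in the constraint interval:

In[1]:= Maximize[{Sin[x] - x/2, 0 <= x <= Pi}, x]
Out[1]= {Sqrt[3]/2 - Pi/6, {x -> Pi/3}}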
Optimization by Finding Stationary and Singular Points
Here is a set of 3-dimensional examples with the same constraints. Depending on the objective function, the maximum is attained, respectively, at a stationary point of the objective function on the solution set of the constraints, at a stationary point of the restriction of the objective function to the boundary of the solution set of the constraints, or at the boundary of the boundary of the solution set of the constraints.
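The following illustrative examples (not the original documentation's) use the constraints x^2 + y^2 + z^2 <= 1 && z >= 0; the maximum is attained, respectively, in the interior, on the spherical part of the boundary, and on the edge circle where the two boundary pieces meet (expected outputs shown):

In[1]:= Maximize[{1 - x^2 - y^2 - (z - 1/2)^2, x^2 + y^2 + z^2 <= 1 && z >= 0}, {x, y, z}]
Out[1]= {1, {x -> 0, y -> 0, z -> 1/2}}

In[2]:= Maximize[{z, x^2 + y^2 + z^2 <= 1 && z >= 0}, {x, y, z}]
Out[2]= {1, {x -> 0, y -> 0, z -> 1}}

In[3]:= Maximize[{x, x^2 + y^2 + z^2 <= 1 && z >= 0}, {x, y, z}]
Out[3]= {1, {x -> 1, y -> 0, z -> 0}}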
Optimization over the Integers
Integer Linear Optimization
An integer linear optimization problem is an optimization problem in which the objective function and the constraints are linear, the solution set of the constraints is bounded, and the variables range over the integers.
To solve an integer linear optimization problem the Wolfram Language first solves the equational constraints, reducing the problem to one containing inequality constraints only. Then it uses lattice reduction techniques to put the inequality system in a simpler form. Finally, it solves the simplified optimization problem using a branch-and-bound method.
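For example (an illustrative problem; expected output shown), the real relaxation of the following problem has maximum 75/4, attained at the non-integer point {x -> 9/4, y -> 19/4}, while the integer maximum is 18:

In[1]:= Maximize[{2 x + 3 y, x + y <= 7 && 3 x - y >= 2 && x >= 0 && y >= 0}, {x, y}, Integers]
Out[1]= {18, {x -> 3, y -> 4}}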
Optimization over the Reals Combined with Integer Solution Finding
Suppose a function f needs to be minimized over the integer solution set of constraints cons. Let min be the minimum of f over the real solution set of cons. If there exists an integer point satisfying cons && f == min, then min is the minimum of f over the integer solution set of cons. Otherwise you try to find an integer solution of cons for which f is equal to the next candidate optimum value, and so on. This heuristic works if you can solve the real optimization problem and all the integer solution finding problems, and you can find an integer solution within a predefined number of attempts. (By default, the Wolfram Language tries 10 candidate optimum values. This can be changed using the IntegerOptimumCandidates system option.)
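For instance (illustrative problems; expected outputs shown), in the first problem the real minimum 5 is attained at the integer point {x -> 1, y -> 2}, so it is also the integer minimum; in the second, the real minimum 16/5 is attained only at the non-integer point {x -> 4/5, y -> 8/5}, so further candidate values must be examined, and the integer minimum is 4:

In[1]:= Minimize[{x^2 + y^2, x + 2 y >= 5}, {x, y}, Integers]
Out[1]= {5, {x -> 1, y -> 2}}

In[2]:= Minimize[{x^2 + y^2, x + 2 y >= 4}, {x, y}, Integers]
Out[2]= {4, {x -> 0, y -> 2}}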
Optimization over Regions
Regions can be used to specify constraints. The method used for solving an optimization problem over a region depends on what type of region description is easier to obtain.
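For example (an illustrative problem; expected output shown):

In[1]:= Minimize[{x + y, Element[{x, y}, Disk[]]}, {x, y}]
Out[1]= {-Sqrt[2], {x -> -1/Sqrt[2], y -> -1/Sqrt[2]}}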