# Exact Global Optimization

## Introduction

Global optimization problems can be solved exactly using Minimize and Maximize.

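For example (an illustrative input; the result is returned as {value, rules}, giving the extremum together with a point where it is attained):

```wolfram
(* Exact global minimum of a univariate quadratic *)
Minimize[x^2 - 2 x + 3, x]
(* -> {2, {x -> 1}} *)

(* An exact constrained maximum over the reals *)
Maximize[{x + 2 y, x^2 + y^2 <= 1}, {x, y}]
```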

## Algorithms

Depending on the type of problem, several different algorithms can be used.

The most general method is based on the cylindrical algebraic decomposition (CAD) algorithm. It applies when the objective function and the constraints are real algebraic functions. The method can always compute global extrema (or extremal values, if the extrema are not attained). If parameters are present, the extrema can be computed as piecewise-algebraic functions of the parameters. A downside of the method is its high, doubly exponential complexity in the number of variables. In certain special cases not involving parameters, faster methods can be used.
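When parameters are present, the CAD-based method returns the extremum as an exact function of the parameters; for instance (an illustrative input):

```wolfram
(* The minimum value 1 - b^2/4 is returned as a function of the parameter b,
   attained at x == -b/2 *)
Minimize[x^2 + b x + 1, x]
```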

When the objective function and all constraints are linear with rational number coefficients, global extrema can be computed exactly using the simplex algorithm.

For univariate problems, equation and inequality solving methods are used to find the constraint solution set and discontinuity points and zeros of the derivative of the objective function within the set.

Another approach to finding global extrema is to find all the local extrema, using the Lagrange multipliers or the Karush–Kuhn–Tucker (KKT) conditions, and pick the smallest (or the greatest). However, for a fully automatic method, there are additional complications. In addition to solving the necessary conditions for local extrema, it needs to check smoothness of the objective function and smoothness and nondegeneracy of the constraints. It also needs to check for extrema at the boundary of the set defined by the constraints and at infinity, if the set is unbounded. This in general requires exact solving of systems of equations and inequalities over the reals, which may lead to CAD computations that are harder than in the optimization by CAD algorithm. Currently the Wolfram Language uses Lagrange multipliers only for equational constraints within a bounded box. The method also requires that the number of stationary points and the number of singular points of the constraints be finite. An advantage of this method over the CAD-based algorithm is that it can solve some transcendental problems, as long as they lead to systems of equations that the Wolfram Language can solve.
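The Lagrange multiplier step can be sketched directly in the Wolfram Language (an illustrative computation, not the internal implementation): to maximize f = x y on the circle x^2 + y^2 == 1, solve the stationarity conditions and take the largest objective value.

```wolfram
f = x y; g = x^2 + y^2 - 1;
(* Stationarity: grad f == λ grad g, together with the constraint g == 0 *)
sols = Solve[{D[f, x] == λ D[g, x], D[f, y] == λ D[g, y], g == 0}, {x, y, λ}];
(* Evaluate the objective at each stationary point and take the maximum *)
Max[f /. sols]
(* -> 1/2 *)
```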

## Optimization by Cylindrical Algebraic Decomposition

### Examples

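Representative examples of the kinds of problems handled by the CAD-based method (illustrative inputs, not the original notebook cells):

```wolfram
(* An algebraic (root-containing) objective over a semialgebraic set *)
Minimize[{Sqrt[x] + Sqrt[y], x + y == 1 && x >= 0 && y >= 0}, {x, y}]

(* An infimum that is not attained: the extremal value is still computed *)
Minimize[{x^2, x > 1}, x]
```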

### Customized CAD Algorithm for Optimization

The cylindrical algebraic decomposition (CAD) algorithm is available in the Wolfram Language directly as CylindricalDecomposition. The algorithm is described in more detail in "Real Polynomial Systems". The following describes how to customize the CAD algorithm to solve the global optimization problem.
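For instance, CylindricalDecomposition rewrites the open unit disk as a cylindrical cell description:

```wolfram
CylindricalDecomposition[x^2 + y^2 < 1, {x, y}]
(* -> -1 < x < 1 && -Sqrt[1 - x^2] < y < Sqrt[1 - x^2] *)
```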

### Reduction to Minimizing a Coordinate Function

Suppose it is required to minimize an algebraic function f(x) over the solution set of algebraic constraints cons(x, y), where x is a vector of variables and y is a vector of parameters. Let z be a new variable. The minimization of f over the constraints is equivalent to the minimization of the coordinate function z over the semialgebraic set given by cons(x, y) ∧ z == f(x).
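This reduction can be observed directly with CylindricalDecomposition (an illustrative example): putting the new variable z first in the variable order makes the minimum readable from the cell description of z.

```wolfram
(* Minimize f = x^2 + y^2 on the line x + y == 1 via the coordinate function z *)
CylindricalDecomposition[z == x^2 + y^2 && x + y == 1, {z, x, y}]
(* The cells in z start at z == 1/2, the minimum value of f *)
```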

If f happens to be a monotonic function of one of the variables, say x1, a new variable is not needed, as x1 can be minimized instead.

### The Projection Phase of CAD

The variables x are projected first, then the new variable z, and then the parameters y.

If algebraic functions are present, they are replaced with new variables; equations and inequalities satisfied by the new variables are added. The variables replacing algebraic functions are projected first. They also require special handling in the lifting phase of the algorithm.

Projection operator improvements by Hong, McCallum, and Brown can be used here, including the use of equational constraints. Note that if a new variable z needs to be introduced, there is at least one equational constraint, namely z == f(x).

### The Lifting Phase of CAD

The parameters are lifted first, constructing a description of the cells in terms of equations and inequalities in algebraic functions. If there are constraints that depend only on the parameters and you can determine that they are identically false over a parameter cell, you do not need to lift this cell further.

When lifting the minimization variable z, you start with the smallest values of z and proceed (lifting the remaining variables in a depth-first manner) until you find the first cell for which the constraints are satisfied. If this cell corresponds to a root z0 of a projection polynomial in z, then z0 is the minimum value of z, and the coordinates corresponding to x of any point in the cell give a point at which the minimum is attained. If parameters are present, you get a parametric description of a point in the cell in terms of roots of polynomials bounding the cell. If there are no parameters, you can simply give the sample point used by the CAD algorithm. If the first cell satisfying the constraints corresponds to an interval (z0, z1), where z0 is a root of a projection polynomial in z, then z0 is the infimum of values of z, and the infimum value is not attained. Finally, if the first cell satisfying the constraints corresponds to an interval unbounded from below, the objective function is unbounded from below.
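The cell types described above can be observed in a small example (illustrative): for f = x^2 on the set x > 1, the first cell in z is an open interval, so the infimum 1 is not attained.

```wolfram
(* The decomposition is equivalent to z > 1 && x == Sqrt[z]:
   the z-cells form the open interval (1, ∞), so 1 is an unattained infimum *)
CylindricalDecomposition[z == x^2 && x > 1, {z, x}]
```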

### Strict Inequality Constraints

If there are no parameters, all constraints are strict inequalities, and you only need the extremum value, then a significantly simpler version of the algorithm can be used. (You can safely make inequality constraints strict if you know that the infimum of the objective function over S equals its infimum over the interior of S, where S is the solution set of the constraints.) In this case many lower-dimensional cells can be disregarded; hence, the projection may only consist of the leading coefficients, the resultants, and the discriminants. In the lifting phase, only full-dimensional cells need be constructed; hence, there is no need for algebraic number computations.

| Function | Description |
|---|---|
| ``Experimental`Infimum[{f,cons},{x,y,…}]`` | find the infimum of the values of f on the set of points satisfying the constraints cons |
| ``Experimental`Supremum[{f,cons},{x,y,…}]`` | find the supremum of the values of f on the set of points satisfying the constraints cons |

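For example (an illustrative input), the infimum of x^2 on the open set x > 1 is 1, even though it is not attained:

```wolfram
Experimental`Infimum[{x^2, x > 1}, x]
(* -> 1 *)
```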

## Linear Optimization

When the objective function and all constraints are linear, global extrema can be computed exactly using the simplex algorithm.

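For example (an illustrative linear program), the simplex algorithm locates the optimal vertex exactly:

```wolfram
Maximize[{x + 2 y, x + y <= 10 && x >= 0 && y >= 0}, {x, y}]
(* -> {20, {x -> 0, y -> 10}} *)
```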

## Univariate Optimization

For univariate problems, equation and inequality solving methods are used to find the constraint solution set and discontinuity points and zeros of the derivative of the objective function within the set.

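For example (an illustrative input), the zeros of the derivative within the constraint interval determine the minimum:

```wolfram
(* The derivative 4 x^3 - 4 x vanishes at x == -1, 0, 1;
   the global minimum value on the interval is -1, attained at x == ±1 *)
Minimize[{x^4 - 2 x^2, -2 <= x <= 2}, x]
```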

## Optimization by Finding Stationary and Singular Points


Here is a set of three-dimensional examples with the same constraints. Depending on the objective function, the maximum is attained at a stationary point of the objective function on the solution set of the constraints, at a stationary point of the restriction of the objective function to the boundary of the solution set, or on the boundary of the boundary of the solution set of the constraints.

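The three cases can be reproduced with a simple constraint set (illustrative examples using the unit cube, not the original notebook cells):

```wolfram
cons = -1 <= x <= 1 && -1 <= y <= 1 && -1 <= z <= 1;

(* Maximum at an interior stationary point of the objective: value 1 at the origin *)
Maximize[{1 - x^2 - y^2 - z^2, cons}, {x, y, z}]

(* Maximum at a stationary point of the restriction to a boundary face:
   value 1 at x == 1, y == z == 0 *)
Maximize[{x - y^2 - z^2, cons}, {x, y, z}]

(* Maximum on the boundary of the boundary (a vertex of the cube):
   value 3 at x == y == z == 1 *)
Maximize[{x + y + z, cons}, {x, y, z}]
```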

## Optimization over the Integers

### Integer Linear Programming

An integer linear programming problem is an optimization problem in which the objective function is linear, the constraints are linear and define a bounded set, and the variables range over the integers.

To solve an integer linear programming problem the Wolfram Language first solves the equational constraints, reducing the problem to one containing inequality constraints only. Then it uses lattice reduction techniques to put the inequality system in a simpler form. Finally, it solves the simplified optimization problem using a branch-and-bound method.

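For example (an illustrative input), the real relaxation has maximum 7/2 at x == 7/2, while over the integers the maximum drops to 3:

```wolfram
Maximize[{x + y, 2 x + 3 y <= 7 && x >= 0 && y >= 0}, {x, y}, Integers]
(* -> maximum value 3 *)
```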

### Optimization over the Reals Combined with Integer Solution Finding

Suppose a function f needs to be minimized over the integer solution set of constraints cons. Let r be the minimum of f over the real solution set of cons. If there exists an integer point satisfying cons ∧ f == r, then r is the minimum of f over the integer solution set of cons. Otherwise you try to find an integer solution of cons with the next candidate optimum value of f, and so on. This heuristic works if you can solve the real optimization problem and all the integer solution finding problems, and you can find an integer solution within a predefined number of attempts. (By default the Wolfram Language tries 10 candidate optimum values. This can be changed using a system option.)

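For example (an illustrative input): the real minimum of x^2 + y^2 subject to x + y >= 5/2 is 25/8 at x == y == 5/4, which no integer point achieves, so further candidate optimum values are tried:

```wolfram
Minimize[{x^2 + y^2, x + y >= 5/2}, {x, y}, Integers]
(* -> minimum value 5, e.g. at x == 1, y == 2 *)
```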