How to | Do Constrained Nonlinear Optimization

An important subset of optimization problems is constrained nonlinear optimization, where the function is not linear and the parameter values are constrained to certain regions. The Wolfram Language is capable of solving these as well as a variety of other optimization problems.

A simple optimization problem is to find the largest value of y such that the point {x,y} is within 1 unit of the origin. The first argument of Maximize has the function and the constraint in a list; the second argument lists the variables:
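Here is a minimal sketch of such a call (the objective y, the constraint x^2 + y^2 <= 1, and the name sol are this example's choices; sol is reused below):

    sol = Maximize[{y, x^2 + y^2 <= 1}, {x, y}]
    (* {1, {x -> 0, y -> 1}} *)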

This output is a list whose first element is the maximum value found; the second element sol[[2]] is a list of rules for the values of the independent variables that give that maximum. The notation sol[[2]] is the short form for Part[sol,2]:
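Continuing the example above:

    sol[[2]]
    (* {x -> 0, y -> 1} *)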

You can check to see if this solution meets the constraint by substituting the solution into the original problem. To do this, use /.sol[[2]] following the original problem. The /. symbol is the short form of ReplaceAll:
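A sketch of the check, reusing sol from above:

    {y, x^2 + y^2 <= 1} /. sol[[2]]
    (* {1, True} *)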

Many optimization problems involve finding the smallest value of some function:
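As an illustration, here is a sketch with a quartic chosen (purely as an example) because its exact minimum is not a simple closed-form number:

    min = Minimize[x^4 - 3 x^2 + x, x]
    (* exact result, typically expressed with Root objects *)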

You can see this result as a decimal approximation with N:
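Continuing with min from above:

    N[min]
    (* approximately {-3.514, {x -> -1.300}} *)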

If your goal is numerical results, it is more efficient to use the numerical versions NMinimize and NMaximize from the start. Note that NMinimize and NMaximize use numeric algorithms and may give results that are not global optima:
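A sketch with the same illustrative quartic as above:

    NMinimize[x^4 - 3 x^2 + x, x]
    (* approximately {-3.514, {x -> -1.300}} *)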

Maximize and Minimize symbolically analyze the expression to optimize, giving proven global optima. If you have an expression that cannot be analyzed by symbolic techniques, NMaximize and NMinimize will be more useful and efficient.

Here is a simple expression that Maximize and Minimize cannot work with because NIntegrate does not evaluate if the parameter is not a number:
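One such expression (this particular integrand is an assumption, chosen only for illustration):

    NIntegrate[(Sin[t] - a t)^2, {t, 0, Pi}]
    (* returns unevaluated, with a message, while a is symbolic *)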

Here is a function for this expression that you can use with NMinimize. ?NumericQ on the argument protects the function from evaluating NIntegrate when the argument is not a number. The ? is the short form for PatternTest:
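A sketch of such a definition, reusing the illustrative integrand above (the name f is arbitrary):

    f[a_?NumericQ] := NIntegrate[(Sin[t] - a t)^2, {t, 0, Pi}]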

You can use NMinimize and NMaximize to perform this optimization numerically. The second argument gives two starting values for finding slopes:
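A sketch of the call; the two starting values 0 and 1 for a are arbitrary choices:

    NMinimize[f[a], {{a, 0, 1}}]
    (* approximately {0.616, {a -> 0.304}} *)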

NMinimize, NMaximize, Minimize, and Maximize seek global minima or maxima. The sometimes faster functions FindMinimum and FindMaximum seek local minima or maxima:
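For contrast, a sketch using the illustrative quartic from above: started near x = 1, FindMinimum settles on the nearby local minimum rather than the global one found by NMinimize:

    FindMinimum[x^4 - 3 x^2 + x, {x, 1}]
    (* approximately {-1.070, {x -> 1.131}} *)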

A common optimization problem is to find parameters that minimize the error between a curve and data. This example calls FindFit to find a quadratic fit to these data points:
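A hypothetical dataset, generated for illustration from a quadratic with noise; the intercept is deliberately negative so that the positivity constraint added below actually matters:

    data = Table[{x, -0.5 + 2 x + 0.5 x^2 + RandomReal[{-0.3, 0.3}]}, {x, 0, 5, 0.5}];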

The arguments to FindFit are the data, the expression, its list of parameters, and the variable. This solves the problem:
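A sketch of the call for the data above, with a + b x + c x^2 as the quadratic model:

    fit = FindFit[data, a + b x + c x^2, {a, b, c}, x]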

You can add constraints. For example, suppose that all the parameters have to be positive; put those constraints in a list with the expression:
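A sketch with positivity constraints on all three parameters:

    fit2 = FindFit[data, {a + b x + c x^2, a > 0 && b > 0 && c > 0}, {a, b, c}, x]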

This compares the two fits. They are very close except when x is small:
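A sketch of the comparison, plotting both fitted curves over the data range:

    model = a + b x + c x^2;
    Plot[Evaluate[{model /. fit, model /. fit2}], {x, 0, 5}]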