How to | Do Constrained Nonlinear Optimization
An important subset of optimization problems is constrained nonlinear optimization, where the function is not linear and the parameter values are constrained to certain regions. The Wolfram Language is capable of solving these as well as a variety of other optimization problems.
A simple optimization problem is to find the largest value of x + y such that the point {x, y} is within 1 unit of the origin. The first argument of Maximize has the function and the constraint in a list; the second argument lists the variables:
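A minimal sketch of the call described above, with the objective x + y and the unit-disk constraint x^2 + y^2 <= 1 written out:

```wolfram
(* maximize x + y over the closed unit disk *)
sol = Maximize[{x + y, x^2 + y^2 <= 1}, {x, y}]
(* the maximum is Sqrt[2], attained at x = y = 1/Sqrt[2] *)
```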
This output is a list whose first element is the maximum value found; the second element, sol[[2]], is a list of rules for the values of the independent variables that give that maximum. The notation sol[[2]] is the short form for Part[sol,2]:
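A sketch of extracting that second part, repeating the Maximize call from above so the fragment is self-contained:

```wolfram
sol = Maximize[{x + y, x^2 + y^2 <= 1}, {x, y}];
sol[[2]]  (* equivalent to Part[sol, 2]: the list of rules {x -> ..., y -> ...} *)
```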
You can check that this solution meets the constraint by substituting it into the original constraint. To do this, apply /. sol[[2]] to the original constraint. The /. symbol is the short form of ReplaceAll:
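For example, substituting the solution rules into the unit-disk constraint used above:

```wolfram
sol = Maximize[{x + y, x^2 + y^2 <= 1}, {x, y}];
x^2 + y^2 <= 1 /. sol[[2]]
(* True: the maximizing point satisfies the constraint *)
```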
You can see this result as a decimal approximation with N:
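Applying N to the symbolic result from the problem above gives the approximate values:

```wolfram
sol = Maximize[{x + y, x^2 + y^2 <= 1}, {x, y}];
N[sol]
(* roughly {1.41421, {x -> 0.707107, y -> 0.707107}} *)
```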
If your goal is numerical results, it is more efficient to use the numerical versions NMinimize and NMaximize from the start. Note that NMinimize and NMaximize use numeric algorithms and may give results that are not global optima:
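The same problem posed directly to the numerical optimizer, a sketch using the example from above:

```wolfram
NMaximize[{x + y, x^2 + y^2 <= 1}, {x, y}]
(* a numeric result, approximately {1.41421, {x -> 0.707107, y -> 0.707107}} *)
```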
Maximize and Minimize symbolically analyze the expression to optimize, giving proven global optima. If you have an expression that cannot be analyzed by symbolic techniques, NMaximize and NMinimize will be more useful and efficient.
Here is a function for this expression that you can use with NMinimize. The pattern test _?NumericQ on the argument protects the function from evaluating NIntegrate when the argument is not a number. The ? is the short form for PatternTest:
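The expression being minimized is not reproduced here, so the integrand below is an illustrative stand-in; the sketch shows the protective-pattern technique itself:

```wolfram
(* hypothetical integrand: Sin[a x]/x stands in for the original expression *)
f[a_?NumericQ] := NIntegrate[Sin[a x]/x, {x, 1, 2}]

(* f[a] stays unevaluated for symbolic a, so NMinimize can safely probe it numerically *)
NMinimize[{f[a], 0 <= a <= 10}, a]
```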
A common optimization problem is to find parameters that minimize the error between a curve and data. This example calls FindFit to find a quadratic fit to these data points:
The arguments to FindFit are the data, the expression, its list of parameters, and the variable. This solves the problem:
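A sketch of the call; the data points here are illustrative, since the original data set is not shown:

```wolfram
(* hypothetical data; replace with the actual points to fit *)
data = {{1, 2.1}, {2, 4.3}, {3, 8.2}, {4, 13.9}, {5, 21.5}};
FindFit[data, a x^2 + b x + c, {a, b, c}, x]
(* returns best-fit parameter rules {a -> ..., b -> ..., c -> ...} *)
```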