NMinimize
NMinimize[f,x]
minimizes f numerically with respect to x.
NMinimize[f,{x,y,…}]
minimizes f numerically with respect to x, y, ….
NMinimize[{f,cons},{x,y,…}]
minimizes f numerically subject to the constraints cons.
NMinimize[…,x∈rdom]
constrains x to be in the region or domain rdom.
Details and Options
 NMinimize always attempts to find a global minimum of f subject to the constraints given.
 NMinimize is typically used to find the smallest possible values given constraints. In different areas, this may be called the best strategy, best fit, best configuration and so on.
 NMinimize returns a list of the form {fmin, {x->xmin, y->ymin, …}}.
 If f and cons are linear or convex, the result given by NMinimize will be the global minimum, over both real and integer values; otherwise, the result may sometimes only be a local minimum.
 If NMinimize determines that the constraints cannot be satisfied, it returns {Infinity, {x->Indeterminate, …}}.
 NMinimize supports a modeling language where the objective function f and constraints cons are given in terms of expressions depending on scalar or vector variables. f and cons are typically parsed into very efficient forms, but as long as f and the terms in cons give numerical values for numerical values of the variables, NMinimize can often find a solution.
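A minimal sketch of this call pattern on a simple convex problem (so the global minimum is found):

```wolfram
(* minimize x^2 + y^2 subject to x + y >= 1 *)
NMinimize[{x^2 + y^2, x + y >= 1}, {x, y}]
(* {0.5, {x -> 0.5, y -> 0.5}} *)
```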
 The constraints cons can be any logical combination of:

lhs==rhs  equations
lhs>rhs, lhs≥rhs, lhs<rhs, lhs≤rhs  inequalities (LessEqual, …)
lhs≻rhs, lhs⪰rhs, lhs≺rhs, lhs⪯rhs  vector inequalities (VectorLessEqual, …)
{x,y,…}∈rdom  region or domain specification
 NMinimize[{f,cons},x∈rdom] is effectively equivalent to NMinimize[{f,cons&&x∈rdom},x].
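For instance, a region membership constraint can replace explicit inequalities; this sketch minimizes a linear function over the unit disk, where the solution lies on the boundary:

```wolfram
(* the unconstrained minimum lies outside the disk, so the *)
(* minimizer sits on the boundary at (-1/Sqrt[2], -1/Sqrt[2]) *)
NMinimize[{x + y, {x, y} ∈ Disk[]}, {x, y}]
(* {-1.41421, {x -> -0.707107, y -> -0.707107}} *)
```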
 For x∈rdom, the different coordinates can be referred to using Indexed[x,i].
 Possible domains rdom include:

Reals  real scalar variable
Integers  integer scalar variable
Vectors[n,dom]  vector variable in dom^n
Matrices[{m,n},dom]  matrix variable in dom^(m×n)
ℛ  vector variable restricted to the geometric region ℛ
 By default, all variables are assumed to be real.
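As a sketch of a vector variable, the following minimizes a quadratic over length-3 real vectors subject to a linear constraint; the minimizer is the uniform vector (1/3, 1/3, 1/3):

```wolfram
(* x is a single vector variable; x.x and Total[x] operate on its components *)
NMinimize[{x.x, Total[x] == 1}, x ∈ Vectors[3, Reals]]
```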
 The following options can be given:

AccuracyGoal  Automatic  number of digits of final accuracy sought
EvaluationMonitor  None  expression to evaluate whenever f is evaluated
MaxIterations  Automatic  maximum number of iterations to use
Method  Automatic  method to use
PrecisionGoal  Automatic  number of digits of final precision sought
StepMonitor  None  expression to evaluate whenever a step is taken
WorkingPrecision  MachinePrecision  the precision used in internal computations
 The settings for AccuracyGoal and PrecisionGoal specify the number of digits to seek in both the position of the minimum and the value of the function at the minimum.
 NMinimize continues until either of the goals specified by AccuracyGoal or PrecisionGoal is achieved.
 The methods for NMinimize fall into two classes. The first class consists of guaranteed methods, which exploit properties of the problem so that, when the method converges, the minimum found is guaranteed to be global. The second class consists of heuristic methods, which may run multiple local searches, commonly with some stochastic adjustment, to home in on a global minimum. These methods often find the global minimum, but are not guaranteed to do so.
 Methods that are guaranteed to give a global minimum when they converge to a solution include:

"Convex"  use only convex methods
"MOSEK"  use the commercial MOSEK library for convex problems
"Gurobi"  use the commercial Gurobi library for convex problems
"Xpress"  use the commercial Xpress library for convex problems
 Heuristic methods include:

"NelderMead"  simplex method of Nelder and Mead
"DifferentialEvolution"  use differential evolution
"SimulatedAnnealing"  use simulated annealing
"RandomSearch"  use the best local minimum found from multiple random starting points
"Couenne"  use the Couenne library for nonconvex mixed-integer nonlinear problems
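A heuristic method can be selected explicitly via the Method option; a sketch with an oscillatory objective, where a population-based stochastic search is useful:

```wolfram
(* multiple local minima on [-3, 3]; differential evolution searches globally *)
NMinimize[{Sin[x] + Sin[10 x/3], -3 <= x <= 3}, x,
 Method -> "DifferentialEvolution"]
```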
Examples
Basic Examples (3)
Scope (40)
Basic Uses (12)
Minimize subject to constraints:
Several linear inequality constraints can be expressed with VectorGreaterEqual:
Use v>= or \[VectorGreaterEqual] to enter the vector inequality sign:
An equivalent form using scalar inequalities:
The inequality may not be the same as due to possible threading in :
To avoid unintended threading in , use Inactive[Plus]:
Use constant parameter equations to avoid unintended threading in :
VectorGreaterEqual represents a conic inequality with respect to the "NonNegativeCone":
To explicitly specify the dimension of the cone, use {"NonNegativeCone",n}:
Minimize subject to the constraint:
Specify the constraint using a conic inequality with "NormCone":
Minimize the function subject to the constraint :
Use Indexed to access components of a vector variable, e.g. :
Use Vectors[n,dom] to specify the dimension and domain of a vector variable when it is ambiguous:
Specify nonnegative constraints using NonNegativeReals ():
An equivalent form using vector inequality :
Specify nonpositive constraints using NonPositiveReals ():
An equivalent form using vector inequalities:
Or constraints can be specified:
Domain Constraints (4)
Specify integer domain constraints using Integers:
Specify integer domain constraints on vector variables using Vectors[n,Integers]:
Specify nonnegative integer domain constraints using NonNegativeIntegers ():
Specify nonpositive integer domain constraints using NonPositiveIntegers ():
Region Constraints (5)
Find the minimum distance between two regions:
Find the minimum such that the triangle and ellipse still intersect:
Find the disk of minimum radius that contains the given three points:
Using Circumsphere gives the same result directly:
Linear Problems (5)
With linear objectives and constraints, when a minimum is found it is global:
The constraints can be equality and inequality constraints:
Use Equal to express several equality constraints at once:
An equivalent form using several scalar equalities:
Use VectorLessEqual to express several LessEqual inequality constraints at once:
Use v<= to enter the vector inequality in a compact form:
An equivalent form using scalar inequalities:
Use Interval to specify bounds on a variable:
Convex Problems (7)
Use "NonNegativeCone" to specify linear functions of the form :
Use v>= to enter the vector inequality in a compact form:
Minimize a convex quadratic function subject to linear constraints:
Plot the region and minimizing point:
Minimize a convex quadratic function subject to a set of convex quadratic constraints:
Plot the region and the minimizing point:
Find the minimum distance between two convex regions:
Minimize such that is positive semidefinite:
Show the minimizer on a plot of the objective function:
Minimize the convex objective function such that is positive semidefinite and :
Plot the region and the minimizing point:
Minimize a convex objective function over a 4norm unit disk:
Transformable to Convex (4)
Minimize the perimeter of a rectangle such that the area is 1 and the height is at most half the width:
This problem is log-convex and is solved by making a transformation {h -> Exp[ ], w -> Exp[ ]} and taking logarithms to get the convex problem:
Minimize the quasiconvex function subject to inequality and norm constraints. The objective is quasiconvex because it is a product of a nonnegative function and a nonpositive function over the domain:
Quasiconvex problems can be solved as a parametric convex optimization problem for the parameter :
Plot the objective as a function of the level set :
For a level-set value in the interval , the smallest objective is found:
The problem becomes infeasible when the level-set value is increased:
Minimize subject to the constraint . The objective is not convex but can be represented as a difference of convex functions, where and are convex:
Plot the region and the minimizing point:
Minimize subject to the constraints . The constraint is not convex but can be represented as a difference of convex constraints, where and are convex functions:
Options (6)
AccuracyGoal & PrecisionGoal (2)
This enforces the convergence criteria and :
This enforces the convergence criteria and , which are not achievable with the default machine-precision computation:
Setting a high WorkingPrecision makes the process convergent:
EvaluationMonitor (1)
StepMonitor (1)
Steps taken by NMinimize in finding the minimum of the classic Rosenbrock function:
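One way to capture the visited points is with Sow and Reap (note that StepMonitor takes a delayed rule); a sketch for the Rosenbrock function, whose global minimum is 0 at {1, 1}:

```wolfram
(* collect the point at each step of the search *)
{result, steps} = Reap[
   NMinimize[(1 - x)^2 + 100 (y - x^2)^2, {x, y},
    StepMonitor :> Sow[{x, y}]]];
ListLinePlot[First[steps], Mesh -> All]  (* path taken by the search *)
```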
WorkingPrecision (1)
With the working precision set to , by default AccuracyGoal and PrecisionGoal are set to :
Applications (14)
Geometry Problems (4)
Find the minimum distance between two disks of radius 1 centered at and . Let be a point on disk 1. Let be a point on disk 2. The objective is to minimize subject to constraints :
Visualize the positions of the two points:
Find the radius and center of a minimal enclosing ball that encompasses a given region:
Minimize the radius subject to the constraints :
The minimal enclosing ball can be found efficiently using BoundingRegion:
Find the smallest ellipsoid parametrized as that encompasses a set of points in 3D by minimizing the volume:
For each point , the constraint must be satisfied:
Minimizing the volume is equivalent to minimizing :
Convert the parametrized ellipse into the explicit form :
A bounding ellipsoid, not necessarily minimum volume, can also be found using BoundingRegion:
Find the smallest square that can contain circles of a given radius such that the circles do not overlap. Specify the number of circles and the radius of each circle:
If is the center of circle , then the objective is to minimize . The objective can be transformed so as to minimize and :
Data-Fitting Problems (3)
Minimize subject to the constraints for a given matrix a and vector b:
Fit a cubic curve to discrete data such that the first and last points of the data lie on the curve:
Construct the matrix using DesignMatrix:
Define the constraint so that the first and last points must lie on the curve:
Find the coefficients by minimizing :
Find a fit to nonlinear discrete data that is less sensitive to outliers by minimizing :
Fit the data using the bases . The interpolating function will be :
Compare the interpolating function with the reference function:
Classification Problems (3)
Find a line that separates two groups of points and :
For separation, set 1 must satisfy and set 2 must satisfy :
The objective is to minimize , which gives twice the thickness between and :
Find a quadratic polynomial that separates two groups of 3D points and :
Construct the quadratic polynomial data matrices for the two sets using DesignMatrix:
For separation, set 1 must satisfy and set 2 must satisfy :
Find the separating polynomial by minimizing :
The polynomial separating the two groups of points is:
Plot the polynomial separating the two datasets:
Separate a given set of points into different groups. This is done by finding the centers for each group by minimizing , where is a given local kernel and is a given penalty parameter:
The kernel is a nearest neighbor () function such that , else . For this problem, nearest neighbors are selected:
For each data point, there exists a corresponding center. Data belonging to the same group will have the same center value:
Image Processing (1)
Facility Location Problems (1)
Find the positions of various cell towers and the range needed to serve clients located at :
Each cell tower consumes power proportional to its range, which is given by . The objective is to minimize the power consumption:
Let be a decision variable indicating whether client is covered by cell tower :
Each cell tower must be located such that its range covers some of the clients:
Each cell tower can cover multiple clients:
Each cell tower has a minimum and maximum coverage:
Find the cell tower positions and their ranges:
Extract cell tower position and range:
Visualize the positions and ranges of the towers with respect to client locations:
Portfolio Optimization (1)
Find the distribution of capital to invest in six stocks to maximize return while minimizing risk:
The return is given by , where is a vector of expected return value of each individual stock:
The risk is given by ; is a risk-aversion parameter and :
The objective is to maximize the return while minimizing the risk for a specified risk-aversion parameter:
The effect on market prices of stocks due to the buying and selling of stocks is modeled by :
The weights must all be greater than 0 and the weights plus market impact costs must add to 1:
Compute the returns and corresponding risk for a range of risk-aversion parameters:
The optimal over a range of gives an upper-bound envelope on the trade-off between return and risk:
Compute the weights for a specified number of risk-aversion parameters:
By accounting for the market costs, a diversified portfolio can be obtained for low risk aversion, but when the risk aversion is high, the market impact cost dominates, due to purchasing a less diversified stock:
Trajectory Optimization (1)
Find a path through circular obstacles such that the distance between the start and end points is minimized:
The path is discretized into different points with distance of between points, where is the trajectory length being minimized:
The points cannot be inside the circular objects:
The start and end points are known:
Properties & Relations (9)
NMinimize gives the minimum value and rules for the minimizing values of the variables:
NArgMin gives a list of the minimizing values:
NMinValue gives only the minimum value:
Maximizing a function f is equivalent to minimizing -f:
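Sketch: the maximum of 4 - x^2 can be obtained by minimizing its negation and negating the result:

```wolfram
{min, rules} = NMinimize[-(4 - x^2), x];
-min  (* the maximum of 4 - x^2, which is 4 *)
```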
For convex problems, ConvexOptimization may be used to obtain additional solution properties:
For convex problems with parameters, using ParametricConvexOptimization gives a ParametricFunction:
The ParametricFunction may be evaluated for values of the parameter:
Define a function for the parametric problem using NMinimize:
Compare the speed of the two approaches:
Derivatives of the ParametricFunction can also be computed:
For convex problems with parametric constraints, RobustConvexOptimization finds an optimum that works for all possible values of the parameters:
NMinimize may find a smaller value for particular values of the parameters:
This minimizer does not satisfy the constraints for all allowed values of α and β:
The minimum value found for particular values of the parameters is less than or equal to the robust minimum:
NMinimize aims to find a global minimum, while FindMinimum attempts to find a local minimum:
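A sketch contrasting the two on an objective with many local minima (a 1D Rastrigin-style function; the exact local value found depends on the starting point):

```wolfram
f[x_] := x^2 - 10 Cos[2 Pi x];  (* many local minima; global minimum -10 at x = 0 *)
FindMinimum[f[x], {x, 2}]       (* converges to a local minimum near x = 2 *)
NMinimize[f[x], x]              (* aims for the global minimum at x = 0 *)
```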
Minimize finds a global minimum and can work in infinite precision:
FindFit can use NMinimize to find the global optimal fit. This sets up a model:
Create a function from the model and parameters, and generate sample points:
By default, FindFit only finds the local optimal fit:
Using the NMinimize method finds the global optimal fit:
Use RegionDistance and RegionNearest to compute the distance and the nearest point:
Both can be computed using NMinimize:
Use RegionBounds to compute the bounding box:
Possible Issues (3)
For nonlinear functions, NMinimize may sometimes find only a local minimum:
Specifying a starting interval can help in achieving a better local minimum:
NMinimize finds a local minimum of a two-dimensional function on a disk:
Specifying a starting interval helps in achieving the global minimum:
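The starting interval is given as part of the variable specification, {x, xmin, xmax}; it seeds the search but is not a hard constraint. A sketch:

```wolfram
(* the search is started in [-5, 0], where the global minimum lies *)
NMinimize[Sin[x] + x^2/20, {{x, -5, 0}}]
```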
Define a function that does numerical integration for a given parameter:
Compute with a parameter value of 2:
Applying the function to a symbolic parameter generates a message from NIntegrate:
This can also lead to warnings when the function is used with other numerical functions like NMinimize:
Define a function that only evaluates when its argument is a numerical value to avoid these messages:
Compute with a numerical value:
The function does not evaluate when its argument is nonnumerical:
The function can now be used with other numerical functions such as NMinimize:
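A sketch of this pattern: restricting the definition with ?NumericQ prevents symbolic evaluation inside NIntegrate, so NMinimize can call the function safely. The integrand and bounds here are purely illustrative:

```wolfram
(* g evaluates only for explicit numeric arguments *)
g[a_?NumericQ] := NIntegrate[Sin[a x]/(1 + x^2), {x, 0, 1}];
NMinimize[{g[a], -2 <= a <= 2}, a]
```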
Text
Wolfram Research (2003), NMinimize, Wolfram Language function, https://reference.wolfram.com/language/ref/NMinimize.html (updated 2022).
CMS
Wolfram Language. 2003. "NMinimize." Wolfram Language & System Documentation Center. Wolfram Research. Last Modified 2022. https://reference.wolfram.com/language/ref/NMinimize.html.
APA
Wolfram Language. (2003). NMinimize. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/NMinimize.html