NMinimize[f, x]
minimizes f numerically with respect to x.


NMinimize[f, {x, y, …}]
minimizes f numerically with respect to x, y, ….


NMinimize[{f, cons}, {x, y, …}]
minimizes f numerically subject to the constraints cons.


NMinimize[…, x∈rdom]
constrains x to be in the region or domain rdom.

Details and Options

  • NMinimize always attempts to find a global minimum of f subject to the constraints given.
  • NMinimize is typically used to find the smallest possible values given constraints. In different areas, this may be called the best strategy, best fit, best configuration and so on.
  • NMinimize returns a list of the form {fmin, {x->xmin, y->ymin, …}}.
  • If f and cons are linear or convex, the result given by NMinimize will be the global minimum, over both real and integer values; otherwise, the result may sometimes only be a local minimum.
  • If NMinimize determines that the constraints cannot be satisfied, it returns {Infinity, {x->Indeterminate, …}}.
  • NMinimize supports a modeling language where the objective function f and constraints cons are given in terms of expressions depending on scalar or vector variables. f and cons are typically parsed into very efficient forms, but as long as f and the terms in cons give numerical values for numerical values of the variables, NMinimize can often find a solution.
  • The constraints cons can be any logical combination of:
  • lhs==rhs    equations
    lhs>rhs, lhs≥rhs, lhs<rhs, lhs≤rhs    inequalities (LessEqual, …)
    lhs≺rhs, lhs≼rhs, lhs≻rhs, lhs≽rhs    vector inequalities (VectorLessEqual, …)
    {x,y,…}∈rdom    region or domain specification
  • NMinimize[{f,cons},x∈rdom] is effectively equivalent to NMinimize[{f,cons&&x∈rdom},x].
  • For x∈rdom, the different coordinates can be referred to using Indexed[x,i].
  • Possible domains rdom include:
  • Reals    real scalar variable
    Integers    integer scalar variable
    Vectors[n,dom]    vector variable in dom^n
    Matrices[{m,n},dom]    matrix variable in dom^(m×n)
    reg    vector variable restricted to the geometric region reg
  • By default, all variables are assumed to be real.
  • The following options can be given:
  • AccuracyGoal    Automatic    number of digits of final accuracy sought
    EvaluationMonitor    None    expression to evaluate whenever f is evaluated
    MaxIterations    Automatic    maximum number of iterations to use
    Method    Automatic    method to use
    PrecisionGoal    Automatic    number of digits of final precision sought
    StepMonitor    None    expression to evaluate whenever a step is taken
    WorkingPrecision    MachinePrecision    the precision used in internal computations
  • The settings for AccuracyGoal and PrecisionGoal specify the number of digits to seek in both the position of the minimum and the value of the function at the minimum.
  • NMinimize continues until either of the goals specified by AccuracyGoal or PrecisionGoal is achieved.
  • The methods for NMinimize fall into two classes. The first class of guaranteed methods uses properties of the problem so that, when the method converges, the minimum found is guaranteed to be global. The second class of heuristic methods may use multiple local searches, commonly adjusted by some stochasticity, to home in on a global minimum. These methods often find the global minimum, but are not guaranteed to do so.
  • Methods that are guaranteed to give a global minimum when they converge to a solution include:
  • "Convex"    use only convex methods
    "MOSEK"    use the commercial MOSEK library for convex problems
    "Gurobi"    use the commercial Gurobi library for convex problems
  • Heuristic methods include:
  • "NelderMead"    simplex method of Nelder and Mead
    "DifferentialEvolution"    use differential evolution
    "SimulatedAnnealing"    use simulated annealing
    "RandomSearch"    use the best local minimum found from multiple random starting points
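As a minimal sketch of the calling pattern described above (a hypothetical problem, using the default method selection): the objective and constraint below are convex, so the result is the global minimum.

```wolfram
(* Minimize x^2 + y^2 subject to x + y >= 1; convex, so the minimum is global *)
NMinimize[{x^2 + y^2, x + y >= 1}, {x, y}]
(* {0.5, {x -> 0.5, y -> 0.5}} *)
```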



Basic Examples  (3)

Find the global minimum of an unconstrained problem:

Extract the minimizing argument:

Find the global minimum of problems with constraints:

Minimize a function over a geometric region:

Plot it:
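The steps above can be sketched with hypothetical inputs (not the original notebook cells):

```wolfram
(* Global minimum of an unconstrained problem *)
{min, rules} = NMinimize[x^2 + 3 x + 1, x]

(* Extract the minimizing argument *)
x /. rules

(* Global minimum of a problem with constraints *)
NMinimize[{x + y, x^2 + y^2 <= 1}, {x, y}]

(* Minimize a function over a geometric region *)
NMinimize[{x + y, {x, y} \[Element] Disk[]}, {x, y}]
```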

Scope  (38)

Basic Uses  (12)

Minimize subject to constraints :

Several linear inequality constraints can be expressed with VectorGreaterEqual:

Use v>= or \[VectorGreaterEqual] to enter the vector inequality sign ≽:

An equivalent form using scalar inequalities:
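For example (a hypothetical system), the vector inequality and the scalar form give the same result:

```wolfram
(* {x, y} >= {1, 2} componentwise *)
NMinimize[{x + y, {x, y} \[VectorGreaterEqual] {1, 2}}, {x, y}]

(* Equivalent scalar inequalities *)
NMinimize[{x + y, x >= 1 && y >= 2}, {x, y}]
(* both give {3., {x -> 1., y -> 2.}} *)
```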

Use a vector variable :

The inequality may not be the same as due to possible threading in :

To avoid unintended threading in , use Inactive[Plus]:

Use constant parameter equations to avoid unintended threading in :

VectorGreaterEqual represents a conic inequality with respect to the "NonNegativeCone":

To explicitly specify the dimension of the cone, use {"NonNegativeCone",n}:

Find the solution:

Minimize subject to the constraint :

Specify the constraint using a conic inequality with "NormCone":

Find the solution:

Minimize the function subject to the constraint :

Use Indexed to access components of a vector variable, e.g. Indexed[x, 1]:

Use Vectors[n,dom] to specify the dimension and domain of a vector variable when it is ambiguous:
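A sketch combining Indexed components with an explicit Vectors domain (hypothetical example):

```wolfram
(* x is a length-2 real vector; refer to its components with Indexed *)
NMinimize[{Indexed[x, 1]^2 + Indexed[x, 2]^2,
  Indexed[x, 1] + Indexed[x, 2] >= 2 && x \[Element] Vectors[2, Reals]}, x]
(* minimum 2. at x -> {1., 1.} *)
```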

Specify non-negative constraints using NonNegativeReals:

An equivalent form using vector inequality :

Specify non-positive constraints using NonPositiveReals:

An equivalent form using vector inequalities:

Or constraints can be specified:

Domain Constraints  (4)

Specify integer domain constraints using Integers:

Specify integer domain constraints on vector variables using Vectors[n,Integers]:

Specify non-negative integer domain constraints using NonNegativeIntegers:

Specify non-positive integer domain constraints using NonPositiveIntegers:
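Hypothetical examples of the integer domain specifications above:

```wolfram
(* Integer scalar: minimum of (x - 5/2)^2 over the integers is 1/4 *)
NMinimize[{(x - 5/2)^2, x \[Element] Integers}, x]

(* Integer vector variable *)
NMinimize[{Total[(x - {1.2, 3.7})^2], x \[Element] Vectors[2, Integers]}, x]
```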

Region Constraints  (5)

Minimize over a region:

Plot it:

Find the minimum distance between two regions:

Plot it:
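As a sketch of region constraints (hypothetical regions): the minimum distance between two unit disks whose centers are 3 apart is 1.

```wolfram
NMinimize[{EuclideanDistance[{x1, y1}, {x2, y2}],
  {x1, y1} \[Element] Disk[{0, 0}, 1] && {x2, y2} \[Element] Disk[{3, 0}, 1]},
 {x1, y1, x2, y2}]
(* distance 1., attained at {1, 0} and {2, 0} *)
```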

Find the minimum such that the triangle and ellipse still intersect:

Plot it:

Find the disk of minimum radius that contains the given three points:

Plot it:

Using Circumsphere gives the same result directly:

Use a region such as Ball[] to specify that x is a vector with ‖x‖≤1:

Linear Problems  (5)

With linear objectives and constraints, when a minimum is found it is global:

The constraints can be equality and inequality constraints:

Use Equal to express several equality constraints at once:

An equivalent form using several scalar equalities:
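For instance (a hypothetical system), a chained Equal expresses two equalities at once:

```wolfram
(* x + y == 2 and 2 x - y == 2 in one Equal *)
NMinimize[{x^2 + y^2, x + y == 2 == 2 x - y}, {x, y}]

(* Equivalent scalar form *)
NMinimize[{x^2 + y^2, x + y == 2 && 2 x - y == 2}, {x, y}]
```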

Use VectorLessEqual to express several LessEqual inequality constraints at once:

Use v<= to enter the vector inequality in a compact form:

An equivalent form using scalar inequalities:

Use Interval to specify bounds on a variable:
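A sketch of Interval bounds with a hypothetical objective; on [-3, 3] the cubic below has its minimum -16 at x = 2:

```wolfram
NMinimize[{x^3 - 12 x, x \[Element] Interval[{-3, 3}]}, x]
```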

Convex Problems  (7)

Use "NonNegativeCone" to specify linear functions of the form :

Use v>= to enter the vector inequality in a compact form:

Minimize a convex quadratic function subject to linear constraints:

Plot the region and minimizing point:

Minimize a convex quadratic function subject to a set of convex quadratic constraints:

Plot the region and the minimizing point:

Find the minimum distance between two convex regions:

Plot it:

Minimize such that is positive semidefinite:

Show the minimizer on a plot of the objective function:

Minimize the convex objective function such that is positive semidefinite and :

Plot the region and the minimizing point:

Minimize a convex objective function over a 4-norm unit disk:

Plot the region and the minimizing point:
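A hypothetical instance of the last pattern, a linear objective over the 4-norm unit disk:

```wolfram
NMinimize[{x + y, Norm[{x, y}, 4] <= 1}, {x, y}]
```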

Transformable to Convex  (2)

Minimize the perimeter of a rectangle such that the area is 1 and the height is at most half the width:

This problem is log-convex and is solved by making the transformation {h -> Exp[h], w -> Exp[w]} and taking logarithms to get a convex problem:

Minimize the quasi-convex function subject to inequality and norm constraints. The objective is quasi-convex because it is a product of a non-negative function and a non-positive function over the domain:

Quasi-convex problems can be solved as a parametric convex optimization problem for the parameter :

Plot the objective as a function of the level-set :

For a level-set value within the interval, the smallest objective is found:

The problem becomes infeasible when the level-set value is increased:

General Problems  (3)

Minimize a linear objective subject to nonlinear constraints:

Plot it:

Minimize a nonlinear objective subject to linear constraints:

Plot the objective and the minimizing point:

Minimize a nonlinear objective subject to nonlinear constraints:

Plot it:

Options  (6)

AccuracyGoal & PrecisionGoal  (2)

This enforces the specified convergence criteria:

This enforces convergence criteria that are not achievable with the default machine-precision computation:

Setting a high WorkingPrecision makes the process convergent:

EvaluationMonitor  (1)

Record all the points evaluated during the solution process of a function with a ring of minima:

Plot all the visited points that are close in objective function value to the final solution:

Method  (1)

Specifying a non-default method could give a better solution:

StepMonitor  (1)

Steps taken by NMinimize in finding the minimum of the classic Rosenbrock function:
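A sketch of StepMonitor with the classic Rosenbrock function (the recorded points vary by run):

```wolfram
steps = {};
NMinimize[(1 - x)^2 + 100 (y - x^2)^2, {x, y},
 StepMonitor :> AppendTo[steps, {x, y}]]
(* minimum 0 at {x -> 1, y -> 1}; steps now holds the visited points *)
Length[steps]
```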

WorkingPrecision  (1)

When WorkingPrecision is set, by default AccuracyGoal and PrecisionGoal are set to half of the working precision:
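For example (assuming the default goal settings), raising the working precision tightens both goals:

```wolfram
(* With WorkingPrecision -> 20, AccuracyGoal and PrecisionGoal default to 10 *)
NMinimize[x^2 - 2 x, x, WorkingPrecision -> 20]
(* -1 at x = 1, computed with 20-digit precision *)
```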

Applications  (12)

Geometry Problems  (3)

Find the minimum distance between two disks of radius 1 centered at and . Let be a point on disk 1. Let be a point on disk 2. The objective is to minimize subject to constraints :

Visualize the positions of the two points:

Find the radius and center of a minimal enclosing ball that encompasses a given region:

Minimize the radius subject to the constraints :

Visualize the enclosing ball:

The minimal enclosing ball can be found efficiently using BoundingRegion:

Find the smallest ellipsoid parametrized as {x : ‖a.x+b‖ ≤ 1} that encompasses a set of points in 3D by minimizing the volume:

For each point p_i, the constraint ‖a.p_i+b‖ ≤ 1, i=1,2,…,n must be satisfied:

Minimizing the volume is equivalent to minimizing a log-determinant objective:

Convert the parametrized ellipse into the explicit form :

A bounding ellipsoid, not necessarily minimum volume, can also be found using BoundingRegion:

Data-Fitting Problems  (3)

Minimize subject to the constraints for a given matrix a and vector b:

Fit a cubic curve to discrete data such that the first and last points of the data lie on the curve:

Construct the matrix using DesignMatrix:

Define the constraint so that the first and last points must lie on the curve:

Find the coefficients by minimizing :

Compare fit with data:

Find a fit less sensitive to outliers to nonlinear discrete data by minimizing :

Fit the data using the bases . The interpolating function will be :

Find the solution:

Visualize the fit:

Compare the interpolating function with the reference function:

Classification Problems  (3)

Find a line that separates two groups of points and :

For separation, set 1 must satisfy and set 2 must satisfy :

The objective is to minimize , which gives twice the thickness between and :

The separating line is:

Find a quadratic polynomial that separates two groups of 3D points and :

Construct the quadratic polynomial data matrices for the two sets using DesignMatrix:

For separation, set 1 must satisfy and set 2 must satisfy :

Find the separating polynomial by minimizing :

The polynomial separating the two groups of points is:

Plot the polynomial separating the two datasets:

Separate a given set of points into different groups. This is done by finding the centers for each group by minimizing , where is a given local kernel and is a given penalty parameter:

The kernel is a k-nearest-neighbor function that is 1 for neighboring points and 0 otherwise. For this problem, a fixed number of nearest neighbors is selected:

The objective is:

Find the group centers:

For each data point, there exists a corresponding center. Data belonging to the same group will have the same center value:

Extract and plot the grouped points:

Image Processing  (1)

Recover a corrupted image by finding an image that is closest under the total variation norm:

Create a corrupted image by randomly deleting 40% of the data points:

The objective is to minimize ∑_{i=1}^{n-1} ∑_{j=1}^{m-1} √(|u_{i+1,j} − u_{i,j}|² + |u_{i,j+1} − u_{i,j}|²), where u is the image data:

Assume that any nonzero data points u_{i,j} are uncorrupted. For these positions, set u_{i,j} = u_{i,j}^orig:

Find the solution and show the restored image:

Facility Location Problems  (1)

Find the positions of various cell towers and the range needed to serve clients located at :

Each cell tower consumes power proportional to its range. The objective is to minimize the power consumption:

Let a decision variable indicate whether a given client is covered by a given cell tower:

Each cell tower must be located such that its range covers some of the clients:

Each cell tower can cover multiple clients:

Each cell tower has a minimum and maximum coverage:

Collect all the variables:

Find the cell tower positions and their ranges:

Extract cell tower position and range:

Visualize the positions and ranges of the towers with respect to client locations:

Portfolio Optimization  (1)

Find the distribution of capital to invest in six stocks to maximize return while minimizing risk:

The return is given by , where is a vector of expected return value of each individual stock:

The risk is given by ; is a risk-aversion parameter and :

The objective is to maximize return while minimizing risk for a specified risk-aversion parameter:

The effect on market prices of stocks due to the buying and selling of stocks is modeled by :

The weights must all be greater than 0 and the weights plus market impact costs must add to 1:

Compute the returns and corresponding risk for a range of risk-aversion parameters:

The optimal over a range of gives an upper-bound envelope on the tradeoff between return and risk:

Compute the weights for a specified number of risk-aversion parameters:

By accounting for the market costs, a diversified portfolio can be obtained for low risk aversion, but when the risk aversion is high, the market impact cost dominates, due to purchasing a less diversified stock:

Properties & Relations  (9)

NMinimize gives the minimum value and rules for the minimizing values of the variables:

NArgMin gives a list of the minimizing values:

NMinValue gives only the minimum value:

Maximizing a function f is equivalent to minimizing -f:
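A hypothetical illustration of the equivalence:

```wolfram
{max, argmax} = NMaximize[{1 - x^2, -2 <= x <= 2}, x]
{min, argmin} = NMinimize[{-(1 - x^2), -2 <= x <= 2}, x]
(* max and -min agree: both are 1, attained at x = 0 *)
```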

For convex problems, ConvexOptimization may be used to obtain additional solution properties:

Get the dual solution:

For convex problems with parameters, using ParametricConvexOptimization gives a ParametricFunction:

The ParametricFunction may be evaluated for values of the parameter:

Define a function for the parametric problem using NMinimize:

Compare the speed of the two approaches:

Derivatives of the ParametricFunction can also be computed:

For convex problems with parametric constraints, RobustConvexOptimization finds an optimum that works for all possible values of the parameters:

NMinimize may find a smaller value for particular values of the parameters:

This minimizer does not satisfy the constraints for all allowed values of α and β:

The minimum value found for particular values of the parameters is less than or equal to the robust minimum:

NMinimize aims to find a global minimum, while FindMinimum attempts to find a local minimum:
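A hypothetical multimodal objective showing the difference:

```wolfram
f[x_] := Sin[x] + x^2/10;
NMinimize[f[x], x]          (* the global minimum, near x ~ -1.3 *)
FindMinimum[f[x], {x, 4}]   (* a local minimum near the starting point *)
```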

Minimize finds a global minimum and can work in infinite precision:

FindFit can use NMinimize to find the global optimal fit. This sets up a model:

Create a function from the model and parameters, and generate sample points:

By default, FindFit only finds the local optimal fit:

Using the NMinimize method finds the global optimal fit:

Use RegionDistance and RegionNearest to compute the distance and the nearest point:

Both can be computed using NMinimize:

Use RegionBounds to compute the bounding box:

Use NMaximize and NMinimize to compute the same bounds:

Possible Issues  (3)

For nonlinear functions, NMinimize may sometimes find only a local minimum:

Specifying a starting interval can help in achieving a better local minimum:

NMinimize finds a local minimum of a two-dimensional function on a disk:

Specifying a starting interval helps in achieving the global minimum:

Define a function that does numerical integration for a given parameter:

Compute with a parameter value of 2:

Applying the function to a symbolic parameter generates a message from NIntegrate:

This can also lead to warnings when the function is used with other numerical functions like NMinimize:

Define a function that only evaluates when its argument is a numerical value to avoid these messages:

Compute with a numerical value:

The function does not evaluate when its argument is non-numerical:

The function can now be used with other numerical functions such as NMinimize:
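A minimal sketch of the pattern, assuming a hypothetical integrand:

```wolfram
(* Evaluates only for explicit numeric a, so NMinimize's symbolic
   preprocessing does not trigger NIntegrate messages *)
g[a_?NumericQ] := NIntegrate[(x - a)^2, {x, 0, 1}]
NMinimize[g[a], a]
(* 1/12 ~ 0.0833 at a -> 0.5 *)
```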

Wolfram Research (2003), NMinimize, Wolfram Language function, https://reference.wolfram.com/language/ref/NMinimize.html (updated 2021).

