NArgMin
NArgMin[f,x]
gives a position xmin at which f is numerically globally minimized.
NArgMin[f,{x,y,…}]
gives a position {xmin,ymin,…} at which f is numerically globally minimized.
NArgMin[{f,cons},{x,y,…}]
gives a position at which f is numerically globally minimized subject to the constraints cons.
NArgMin[…,x∈reg]
constrains x to be in the region reg.
Details and Options
- NArgMin is also known as global optimization (GO).
- NArgMin always attempts to find a global minimum of f subject to the constraints given.
- NArgMin is typically used to find the smallest possible values given constraints. In different areas, this may be called the best strategy, best fit, best configuration and so on.
- NArgMin returns a list of the form {xmin,ymin,…}.
- NArgMin[…,{x,y,…}] is effectively equivalent to {x,y,…}/.Last[NMinimize[…,{x,y,…},…]].
- If f and cons are linear or convex, the result given by NArgMin will be the global minimum, over both real and integer values; otherwise, the result may sometimes only be a local minimum.
- If NArgMin determines that the constraints cannot be satisfied, it returns {Indeterminate,…}.
- NArgMin supports a modeling language where the objective function f and constraints cons are given in terms of expressions depending on scalar or vector variables. f and cons are typically parsed into very efficient forms, but as long as f and the terms in cons give numerical values for numerical values of the variables, NArgMin can often find a solution.
- The constraints cons can be any logical combination of:
    lhs==rhs   equations
    lhs>rhs, lhs≥rhs, lhs<rhs, lhs≤rhs   inequalities (LessEqual, …)
    lhs\[VectorGreaterEqual]rhs, lhs\[VectorGreater]rhs, lhs\[VectorLessEqual]rhs, lhs\[VectorLess]rhs   vector inequalities (VectorLessEqual, …)
    {x,y,…}∈rdom   region or domain specification
- NArgMin[{f,cons},x∈rdom] is effectively equivalent to NArgMin[{f,cons&&x∈rdom},x].
- For x∈rdom, the different coordinates can be referred to using Indexed[x,i].
- Possible domains rdom include:
    Reals   real scalar variable
    Integers   integer scalar variable
    Vectors[n,dom]   vector variable in dom^n
    Matrices[{m,n},dom]   matrix variable in dom^(m×n)
    ℛ   vector variable restricted to the geometric region ℛ
- By default, all variables are assumed to be real.
- The following options can be given:
    AccuracyGoal   Automatic   number of digits of final accuracy sought
    EvaluationMonitor   None   expression to evaluate whenever f is evaluated
    MaxIterations   Automatic   maximum number of iterations to use
    Method   Automatic   method to use
    PrecisionGoal   Automatic   number of digits of final precision sought
    StepMonitor   None   expression to evaluate whenever a step is taken
    WorkingPrecision   MachinePrecision   the precision used in internal computations
- The settings for AccuracyGoal and PrecisionGoal specify the number of digits to seek in both the value of the position of the minimum and the value of the function at the minimum.
- NArgMin continues until either of the goals specified by AccuracyGoal or PrecisionGoal is achieved.
- The methods for NArgMin fall into two classes. The first class of guaranteed methods uses properties of the problem so that, when the method converges, the minimum found is guaranteed to be global. The second class of heuristic methods may use multiple local searches, often adjusted by some stochasticity, to home in on a global minimum. These methods often find the global minimum, but are not guaranteed to do so.
- Methods that are guaranteed to give a global minimum when they converge to a solution include:
    "Convex"   use only convex methods
    "MOSEK"   use the commercial MOSEK library for convex problems
    "Gurobi"   use the commercial Gurobi library for convex problems
    "Xpress"   use the commercial Xpress library for convex problems
- Heuristic methods include:
    "NelderMead"   simplex method of Nelder and Mead
    "DifferentialEvolution"   use differential evolution
    "SimulatedAnnealing"   use simulated annealing
    "RandomSearch"   use the best local minimum found from multiple random starting points
    "Couenne"   use the Couenne library for non-convex mixed-integer nonlinear problems
Examples
Basic Examples (4)
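A minimal illustrative input, assuming a simple quadratic objective in two variables:
In[1]:= NArgMin[(x - 3)^2 + (y + 1)^2, {x, y}]
Out[1]= {3., -1.}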
Scope (40)
Basic Uses (12)
Find values of that minimize subject to constraints :
Several linear inequality constraints can be expressed with VectorGreaterEqual:
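For example, with an assumed linear objective and illustrative bounds, {x,y}\[VectorGreaterEqual]{1,2} states x>=1 and y>=2 at once:
In[1]:= NArgMin[{x + y, {x, y} \[VectorGreaterEqual] {1, 2}}, {x, y}]
Out[1]= {1., 2.}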
Use v>= or \[VectorGreaterEqual] to enter the vector inequality sign :
An equivalent form using scalar inequalities:
The inequality may not be the same as due to possible threading in :
To avoid unintended threading in , use Inactive[Plus]:
Use constant parameter equations to avoid unintended threading in :
VectorGreaterEqual represents a conic inequality with respect to the "NonNegativeCone":
To explicitly specify the dimension of the cone, use {"NonNegativeCone",n}:
Find a point that minimizes subject to the constraint :
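As an illustrative sketch, assuming a linear objective and a unit-norm constraint:
In[1]:= NArgMin[{x + y, Norm[{x, y}] <= 1}, {x, y}]
Out[1]= {-0.707107, -0.707107}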
Specify the constraint using a conic inequality with "NormCone":
Find a point that minimizes the function subject to the constraint :
Use Indexed to access components of a vector variable, e.g. :
Use Vectors[n,dom] to specify the dimension and domain of a vector variable when it is ambiguous:
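A sketch assuming a length-3 real vector variable with a norm bound:
In[1]:= NArgMin[{Total[x], Norm[x] <= 1}, x \[Element] Vectors[3, Reals]]
Out[1]= {-0.57735, -0.57735, -0.57735}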
Specify non-negative constraints using NonNegativeReals ():
An equivalent form using vector inequality :
Specify non-positive constraints using NonPositiveReals ():
An equivalent form using vector inequalities:
Or constraints can be specified:
Domain Constraints (4)
Specify integer domain constraints using Integers:
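For instance, with an assumed quadratic objective:
In[1]:= NArgMin[{(x - 2.7)^2 + (y - 1.2)^2, x \[Element] Integers && y \[Element] Integers}, {x, y}]
Out[1]= {3, 1}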
Specify integer domain constraints on vector variables using Vectors[n,Integers]:
Specify non-negative integer domain constraints using NonNegativeIntegers ():
Specify non-positive integer domain constraints using NonPositiveIntegers ():
Region Constraints (5)
Find values of that minimize over a region:
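For example, assuming a linear objective over the unit disk:
In[1]:= NArgMin[x + y, {x, y} \[Element] Disk[]]
Out[1]= {-0.707107, -0.707107}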
Find points and that minimize the distance between two regions:
Find the minimum such that the triangle and ellipse still intersect:
Find the disk of minimum radius that contains the given three points:
Using Circumsphere gives the same result directly:
Linear Problems (5)
With linear objectives and constraints, when a minimum is found it is global:
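An illustrative linear program with assumed data:
In[1]:= NArgMin[{2 x + 3 y, x + y >= 4 && x >= 0 && y >= 0}, {x, y}]
Out[1]= {4., 0.}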
The constraints can be equality and inequality constraints:
Use Equal to express several equality constraints at once:
An equivalent form using several scalar equalities:
Use VectorLessEqual to express several LessEqual inequality constraints at once:
Use v<= to enter the vector inequality in a compact form:
An equivalent form using scalar inequalities:
Use Interval to specify bounds on a variable:
Convex Problems (7)
Use "NonNegativeCone" to specify linear functions of the form :
Use v>= to enter the vector inequality in a compact form:
Find a point that minimizes a convex quadratic function subject to linear constraints:
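For instance, with assumed data:
In[1]:= NArgMin[{x^2 + y^2, x + y >= 2 && x - y <= 1}, {x, y}]
Out[1]= {1., 1.}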
Plot the region and minimizing point:
Find a point that minimizes a convex quadratic function subject to a set of convex quadratic constraints:
Plot the region and the minimizing point:
Find points and that minimize the distance between two convex regions:
Find a point that minimizes such that is positive semidefinite:
Show the minimizer on a plot of the objective function:
Find a point that minimizes the convex objective function such that is positive semidefinite and :
Plot the region and the minimizing point:
Find a point that minimizes a convex objective function over a 4-norm unit disk:
Transformable to Convex (4)
Find values of height and weight that minimize the perimeter of a rectangle with area 1 and :
This problem is log-convex and is solved by making the transformation {h→Exp[…], w→Exp[…]} and taking logarithms to get the convex problem:
Find a point that minimizes the quasi-convex function subject to inequality and norm constraints. The objective is quasi-convex because it is a product of a non-negative function and a non-positive function over the domain:
Quasi-convex problems can be solved as parametric convex optimization problems for the parameter :
Plot the objective as a function of the level-set :
For a level-set value within the interval , the smallest objective is found:
The problem becomes infeasible when the level-set value is increased:
Minimize subject to the constraint . The objective is not convex but can be represented by a difference of convex function where and are convex functions:
Plot the region and the minimizing point:
Minimize subject to the constraints . The constraint is not convex but can be represented by a difference of convex constraint where are convex functions:
General Problems (3)
Options (7)
AccuracyGoal & PrecisionGoal (2)
This enforces convergence criteria and :
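A sketch of how the goals are passed as options (objective and goal values assumed for illustration):
In[1]:= NArgMin[(x - 1)^2 + (y - 2)^2, {x, y}, AccuracyGoal -> 10, PrecisionGoal -> 10]
Out[1]= {1., 2.}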
This enforces convergence criteria and , which is not achievable with the default machine-precision computation:
Setting a high WorkingPrecision makes the process convergent:
EvaluationMonitor (1)
Method (2)
Some methods may give suboptimal results for certain problems:
The automatically chosen method gives the optimal solution for this problem:
The automatic method choice for this non-convex problem is method "Couenne":
Plot the objective function along with the global minimum point:
Use method "NelderMead" for problems with many variables when speed is essential:
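For example, with an assumed separable quadratic objective:
In[1]:= NArgMin[(x - 1)^2 + (y - 2)^2 + (z + 1)^2, {x, y, z}, Method -> "NelderMead"]
Out[1]= {1., 2., -1.}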
StepMonitor (1)
Steps taken by NArgMin in finding the minimum of the classic Rosenbrock function:
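One way to collect and display the steps is with Sow, Reap and StepMonitor, for example:
In[1]:= {res, steps} = Reap[NArgMin[(1 - x)^2 + 100 (y - x^2)^2, {x, y}, StepMonitor :> Sow[{x, y}]]];
In[2]:= ListLinePlot[First[steps], Mesh -> All]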
WorkingPrecision (1)
With the working precision set to , by default AccuracyGoal and PrecisionGoal are set to :
Applications (14)
Geometry Problems (4)
Find the closest points located on two different disks of radius 1 centered at and . Let be a point on disk 1. Let be a point on disk 2. The objective is to minimize subject to constraints :
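A sketch assuming, for illustration, unit disks centered at {0,0} and {3,3}:
In[1]:= NArgMin[{EuclideanDistance[{x1, y1}, {x2, y2}],
    {x1, y1} \[Element] Disk[{0, 0}] && {x2, y2} \[Element] Disk[{3, 3}]}, {x1, y1, x2, y2}]
Out[1]= {0.707107, 0.707107, 2.29289, 2.29289}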
Visualize the positions of the two points:
Find the radius and center of a minimal enclosing ball that encompasses a given region:
Minimize the radius subject to the constraints :
The minimal enclosing ball can be found efficiently using BoundingRegion:
Find the values of and that minimize the volume of the ellipsoid parametrized as that encompasses a set of points in 3D:
For each point , the constraint must be satisfied:
Minimizing the volume is equivalent to minimizing :
Convert the parametrized ellipse into the explicit form :
A bounding ellipsoid, not necessarily minimum volume, can also be found using BoundingRegion:
Find the smallest square that can contain circles of given radii for and that do not overlap. Specify the number of circles and the radius of each circle:
If is the center of circle , then the objective is to minimize . The objective can be transformed so as to minimize and :
The circles are contained in the square . Find the bounding box and the centers:
Data-Fitting Problems (3)
Find a point that minimizes subject to the constraints for a given matrix a and vector b:
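As an illustrative sketch, assuming a least-squares objective Norm[a.{x,y}-b] with nonnegativity constraints and small example data:
In[1]:= a = {{1, 2}, {3, 4}, {5, 6}}; b = {5, 11, 17};
In[2]:= NArgMin[{Norm[a . {x, y} - b], x >= 0 && y >= 0}, {x, y}]
Out[2]= {1., 2.}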
Fit a cubic curve to discrete data such that the first and last points of the data lie on the curve:
Construct the matrix using DesignMatrix:
Define the constraint so that the first and last points must lie on the curve:
Find the coefficients by minimizing :
Find a fit less sensitive to outliers to nonlinear discrete data by finding the value of that minimizes :
Fit the data using the bases . The interpolating function will be :
Compare the interpolating function with the reference function:
Classification Problems (3)
Find a line that separates two groups of points and :
For separation, set 1 must satisfy and set 2 must satisfy :
The objective is to minimize , which gives twice the thickness between and :
Find a quadratic polynomial that separates two groups of 3D points and :
Construct the quadratic polynomial data matrices for the two sets using DesignMatrix:
For separation, set 1 must satisfy and set 2 must satisfy :
Find the separating polynomial by minimizing :
The polynomial separating the two groups of points is:
Plot the polynomial separating the two datasets:
Separate a given set of points into different groups. This is done by finding the centers for each group by minimizing , where is a given local kernel and is a given penalty parameter:
The kernel is a -nearest neighbor () function such that , else . For this problem, nearest neighbors are selected:
For each data point, there exists a corresponding center. Data belonging to the same group will have the same center value:
Image Processing (1)
Facility Location Problems (1)
Find the positions of various cell towers and the range needed to serve clients located at :
Each cell tower consumes power proportional to its range, which is given by . The objective is to minimize the power consumption:
Let be a decision variable indicating that if client is covered by cell tower :
Each cell tower must be located such that its range covers some of the clients:
Each cell tower can cover multiple clients:
Each cell tower has a minimum and maximum coverage:
Find the cell tower positions and their ranges:
Extract cell tower position and range:
Visualize the positions and ranges of the towers with respect to client locations:
Portfolio Optimization (1)
Find the distribution of capital to invest in six stocks to maximize return while minimizing risk:
The return is given by , where is a vector of the expected return value of each individual stock:
The risk is given by ; is a risk-aversion parameter and :
The objective is to maximize return while minimizing risk for a specified risk-aversion parameter:
The effect on market prices of stocks due to the buying and selling of stocks is modeled by :
The weights must all be greater than 0 and the weights plus market impact costs must add to 1:
Compute the returns and corresponding risk for a range of risk-aversion parameters:
The optimal over a range of gives an upper-bound envelope on the tradeoff between return and risk:
Compute the weights for a specified number of risk-aversion parameters:
By accounting for the market costs, a diversified portfolio can be obtained for low risk aversion, but when the risk aversion is high, the market impact cost dominates, due to purchasing a less diversified stock:
Trajectory Optimization (1)
Find a path through circular obstacles such that the distance between the start and end points is minimized:
The path is discretized into different points. The distance between these points must be less than where is the length to be minimized:
The points cannot be inside the circular objects:
The start and end points are known:
Properties & Relations (6)
NMinimize gives the minimum value and rules for the minimizing values of the variables:
NArgMin gives a list of the minimizing values:
NMinValue gives only the minimum value:
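For an assumed illustrative problem, the three functions return, respectively, the minimum value with rules, the list of minimizing values, and the minimum value alone:
In[1]:= NMinimize[{x^2 + y^2, x + y >= 1}, {x, y}]
Out[1]= {0.5, {x -> 0.5, y -> 0.5}}
In[2]:= NArgMin[{x^2 + y^2, x + y >= 1}, {x, y}]
Out[2]= {0.5, 0.5}
In[3]:= NMinValue[{x^2 + y^2, x + y >= 1}, {x, y}]
Out[3]= 0.5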
Maximizing a function f is equivalent to minimizing -f:
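For instance, with an assumed concave objective:
In[1]:= NArgMax[-(x - 1)^2 - (y - 2)^2, {x, y}]
Out[1]= {1., 2.}
In[2]:= NArgMin[(x - 1)^2 + (y - 2)^2, {x, y}]
Out[2]= {1., 2.}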
For convex problems, ConvexOptimization may be used to obtain additional solution properties:
For convex problems with parameters, using ParametricConvexOptimization gives a ParametricFunction:
The ParametricFunction may be evaluated for values of the parameter:
Define a function for the parametric problem using NArgMin:
Compare the speeds of the two approaches:
Derivatives of the ParametricFunction can also be computed:
For convex problems with parametric constraints, RobustConvexOptimization finds an optimum that works for all possible values of the parameters:
NArgMin may find a minimizer that gives a smaller minimum value for particular values of the parameters:
This minimizer does not satisfy the constraints for all allowed values of and :
The minimum value found for particular values of the parameters is less than or equal to the robust minimum:
Use RegionNearest to compute a nearest point in the given region:
It can be computed using NArgMin:
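For example, with the unit disk and the point {2,1} as illustrative choices:
In[1]:= RegionNearest[Disk[], {2, 1}]
Out[1]= {0.894427, 0.447214}
In[2]:= NArgMin[EuclideanDistance[{x, y}, {2, 1}], {x, y} \[Element] Disk[]]
Out[2]= {0.894427, 0.447214}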
Text
Wolfram Research (2008), NArgMin, Wolfram Language function, https://reference.wolfram.com/language/ref/NArgMin.html (updated 2024).
CMS
Wolfram Language. 2008. "NArgMin." Wolfram Language & System Documentation Center. Wolfram Research. Last Modified 2024. https://reference.wolfram.com/language/ref/NArgMin.html.
APA
Wolfram Language. (2008). NArgMin. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/NArgMin.html