FindMinimum

FindMinimum[f,x]

searches for a local minimum in f, starting from an automatically selected point.

FindMinimum[f,{x,x0}]

searches for a local minimum in f, starting from the point x=x0.

FindMinimum[f,{{x,x0},{y,y0},…}]

searches for a local minimum in a function of several variables.

FindMinimum[{f,cons},{{x,x0},{y,y0},…}]

searches for a local minimum subject to the constraints cons.

FindMinimum[{f,cons},{x,y,…}]

starts from a point within the region defined by the constraints.
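As a minimal illustration of the second form, here is a search from an explicit starting point (the function is invented for the example):

```wolfram
(* start from x = 0; the minimum of (x - 1)^2 + 2 is at x = 1 *)
FindMinimum[(x - 1)^2 + 2, {x, 0}]
(* {2., {x -> 1.}} *)
```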

Details and Options

  • FindMinimum returns a list of the form {fmin,{x->xmin}}, where fmin is the minimum value of f found, and xmin is the value of x for which it is found.
  • If the starting point for a variable is given as a list, the values of the variable are taken to be lists with the same dimensions.
  • The constraints cons can be any logical combination of:
  • lhs==rhs	equations
    lhs>rhs or lhs>=rhs	inequalities
    {x,y,…}∈reg	region specification
  • FindMinimum first localizes the values of all variables, then evaluates f with the variables being symbolic, and then repeatedly evaluates the result numerically.
  • FindMinimum has attribute HoldAll, and effectively uses Block to localize variables.
  • FindMinimum[f,{x,x0,x1}] searches for a local minimum in f using x0 and x1 as the first two values of x, avoiding the use of derivatives.
  • FindMinimum[f,{x,x0,xmin,xmax}] searches for a local minimum, stopping the search if x ever gets outside the range xmin to xmax.
  • Except when f and cons are both linear, the results found by FindMinimum may correspond only to local, but not global, minima.
  • By default, all variables are assumed to be real.
  • For linear f and cons, x∈Integers can be used to specify that a variable can take on only integer values.
  • The following options can be given:
  • AccuracyGoal	Automatic	the accuracy sought
    EvaluationMonitor	None	expression to evaluate whenever f is evaluated
    Gradient	Automatic	the list of gradient components for f
    MaxIterations	Automatic	maximum number of iterations to use
    Method	Automatic	method to use
    PrecisionGoal	Automatic	the precision sought
    StepMonitor	None	expression to evaluate whenever a step is taken
    WorkingPrecision	MachinePrecision	the precision used in internal computations
  • The settings for AccuracyGoal and PrecisionGoal specify the number of digits to seek in both the position of the minimum and the value of the function at the minimum.
  • FindMinimum continues until either of the goals specified by AccuracyGoal or PrecisionGoal is achieved.
  • Possible settings for Method include "ConjugateGradient", "PrincipalAxis", "LevenbergMarquardt", "Newton", "QuasiNewton", "InteriorPoint", and "LinearProgramming", with the default being Automatic.
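For example, a method and convergence goals can be requested explicitly (the function and settings here are illustrative, not from this page):

```wolfram
(* use Newton's method with tightened convergence goals *)
FindMinimum[x^4 - 3 x^2 + x, {x, 1},
  Method -> "Newton", AccuracyGoal -> 10, PrecisionGoal -> 10]
```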

Examples


Basic Examples  (4)

Find a local minimum, starting the search at :

Extract the value of x at the local minimum:

Find a local minimum, starting at , subject to constraints :

Find the minimum of a linear function, subject to linear and integer constraints:
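A sketch of such a problem, with invented coefficients; Element[x | y, Integers] marks both variables as integer-valued:

```wolfram
(* minimize 2x + 3y over x + 2y >= 5 with nonnegative integer variables *)
FindMinimum[{2 x + 3 y,
   x + 2 y >= 5 && x >= 0 && y >= 0 && Element[x | y, Integers]},
  {x, y}]
(* the optimum is 8, attained at x -> 1, y -> 2 *)
```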

Find a minimum of a function over a geometric region:

Plot it:

Scope  (12)

With different starting points, you may get different local minima:

Local minimum of a two-variable function starting from x=2, y=2:

Local minimum constrained within a disk:
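A sketch of a disk-constrained search (the objective is invented for the example):

```wolfram
(* minimize the squared distance to (2, 0) over the unit disk;
   the constrained minimum lies on the boundary at (1, 0) *)
FindMinimum[{(x - 2)^2 + y^2, x^2 + y^2 <= 1}, {{x, 0}, {y, 0}}]
```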

Starting point does not have to be provided:

For linear objective and constraints, integer constraints can be imposed:

Or constraints can be specified:

Find a minimum in a region:

Plot it:

Find the minimum distance between two regions:

Plot it:

Find the minimum such that the triangle and ellipse still intersect:

Plot it:

Find the disk of minimum radius that contains the given three points:
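One way to set this up (the three points are invented; Norm gives the distance from the candidate center {x, y} to each point):

```wolfram
pts = {{0, 0}, {2, 0}, {1, 2}};
(* minimize the radius r subject to every point lying within r of {x, y} *)
FindMinimum[{r, And @@ (Norm[{x, y} - #] <= r & /@ pts)},
  {{x, 1}, {y, 1}, {r, 2}}]
```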

Plot it:

Using Circumsphere gives the same result directly:

Use to specify that is a vector in :

Find the minimum distance between two regions:

Plot it:

Options  (7)

AccuracyGoal & PrecisionGoal  (2)

This enforces convergence criteria and :

This enforces convergence criteria and :

Setting a high WorkingPrecision makes the process convergent:

EvaluationMonitor  (1)

Plot convergence to the local minimum:

Gradient  (1)

Use a given gradient; the Hessian is computed automatically:
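A sketch of the Gradient option on an invented function:

```wolfram
(* supply the gradient {2x, 4y} of x^2 + 2y^2 explicitly *)
FindMinimum[x^2 + 2 y^2, {{x, 1}, {y, 1}}, Gradient -> {2 x, 4 y}]
(* converges to the minimum at the origin *)
```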

Supply both gradient and Hessian:

Method  (1)

In this case, the default derivative-based methods have difficulties:

Direct search methods that do not require derivatives can be helpful in these cases:

NMinimize also uses a range of direct search methods:

StepMonitor  (1)

Steps taken by FindMinimum in finding the minimum of a function:
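The usual pattern (illustrative function) uses RuleDelayed (:>) so the monitor sees the current value of x at each step:

```wolfram
steps = {};
FindMinimum[Cos[x] + Sin[x^2], {x, 1},
  StepMonitor :> AppendTo[steps, x]];
ListPlot[steps]  (* the successive values of x *)
```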

WorkingPrecision  (1)

Set the working precision to ; by default AccuracyGoal and PrecisionGoal are set to :

Applications  (1)

Annual returns (R) of long bonds from S&P 500 from 1973 to 1994:

Compute the mean and covariance from the returns:

Minimize the volatility subject to at least 10% return:
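Schematically, with invented toy data standing in for the computed mean mu and covariance sigma:

```wolfram
mu = {0.06, 0.10, 0.14};               (* invented mean returns *)
sigma = {{0.010, 0.002, 0.000},
         {0.002, 0.040, 0.010},
         {0.000, 0.010, 0.090}};       (* invented covariance matrix *)
(* minimize portfolio variance subject to at least a 10% expected return *)
FindMinimum[{{w1, w2, w3}.sigma.{w1, w2, w3},
   mu.{w1, w2, w3} >= 0.1 && w1 + w2 + w3 == 1 &&
     w1 >= 0 && w2 >= 0 && w3 >= 0},
  {w1, w2, w3}]
```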

Properties & Relations  (2)

FindMinimum tries to find a local minimum; NMinimize attempts to find a global minimum:

Minimize finds a global minimum and can work in infinite precision:

FindMinimum gives both the value of the minimum and the minimizer point:

FindArgMin gives the location of the minimum:

FindMinValue gives the value at the minimum:
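On one invented problem, the three functions relate as follows:

```wolfram
FindMinimum[(x - 3)^2 + 1, {x, 0}]    (* {1., {x -> 3.}} *)
FindArgMin[(x - 3)^2 + 1, {x, 0}]     (* {3.} *)
FindMinValue[(x - 3)^2 + 1, {x, 0}]   (* 1. *)
```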

Possible Issues  (6)

With machine-precision arithmetic, even functions with smooth minima may seem bumpy:

Going beyond machine precision often avoids such problems:
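For instance (illustrative), an exact objective permits a working precision above machine precision:

```wolfram
(* exact input allows WorkingPrecision -> 30;
   useful near flat minima where machine arithmetic looks noisy *)
FindMinimum[(x - 1)^4, {x, 2}, WorkingPrecision -> 30]
```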

If the constraint region is empty, the algorithm will not converge:

If the minimum value is not finite, the algorithm will not converge:

The integer linear programming algorithm is only available for machine-number problems:

Sometimes providing a suitable starting point can help the algorithm to converge:

It can be time-consuming to compute functions symbolically:

Restricting the function definition prevents symbolic evaluation:
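The standard idiom restricts the argument pattern to numeric values (the integrand here is invented):

```wolfram
(* f only evaluates when x is an explicit number, so FindMinimum
   never attempts a symbolic NIntegrate *)
f[x_?NumericQ] := NIntegrate[Sin[t^2], {t, 0, x}];
FindMinimum[f[x], {x, 2}]
```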

Introduced in 1988 (1.0) | Updated in 2000 (4.1) · 2003 (5.0) · 2007 (6.0) · 2014 (10.0)