Wolfram Research, Inc.

3.9.9 Linear Programming

FindMinimum can find local minima for arbitrary functions. In solving optimization problems, it is however often important to be able to find global maxima and minima.

Linear programming provides a way to do this for linear functions. In general, linear programming allows you to find the global minimum or maximum of any linear function subject to a set of constraints defined by linear inequalities.

For a function of n variables, the constraints effectively define a region in n-dimensional space. Each linear inequality gives a hyperplane in n-dimensional space which forms one of the sides of the region.
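To make this geometric picture concrete, here is a hedged Python sketch (the function name is invented for this example; it is not Mathematica code) that tests whether a point lies in the region carved out by a list of linear inequalities, with all variables taken to be non-negative as in Mathematica's convention:

```python
def in_region(point, constraints):
    """Return True if point satisfies every constraint.

    Each constraint is a pair (coeffs, bound) encoding the linear
    inequality coeffs . point <= bound.  All variables are also
    required to be non-negative, matching the convention used by
    ConstrainedMax and ConstrainedMin.
    """
    if any(v < 0 for v in point):
        return False
    return all(sum(c * v for c, v in zip(coeffs, point)) <= bound
               for coeffs, bound in constraints)

# The region from the first example below: x <= 1, y <= 2 (with x, y >= 0).
region = [((1, 0), 1), ((0, 1), 2)]
print(in_region((0.5, 1.5), region))   # True: inside the rectangle
print(in_region((1.5, 0.5), region))   # False: violates x <= 1
```

Each `(coeffs, bound)` pair plays the role of one hyperplane side of the region; a point is in the region exactly when it is on the allowed side of every hyperplane.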

ConstrainedMax[f, {inequalities}, {x, y, ... }]
    find the global maximum of f in the region specified by inequalities
ConstrainedMin[f, {inequalities}, {x, y, ... }]
    find the global minimum of f in the region specified by inequalities

Solving linear optimization problems.

The functions ConstrainedMax and ConstrainedMin allow you to specify an "objective function" to maximize or minimize, together with a set of linear constraints on variables. Mathematica assumes in all cases that the variables are constrained to have non-negative values.

The maximum value of x + y in this case is attained when x -> 1 and y -> 2.

In[1]:= ConstrainedMax[x + y, {x < 1, y < 2}, {x, y}]

Out[1]= {3, {x -> 1, y -> 2}}

Mathematica assumes that x and y are constrained to be non-negative.

In[2]:= ConstrainedMin[x + y, {x < 1, y < 2}, {x, y}]

Out[2]= {0, {x -> 0, y -> 0}}

Here is a slightly more complicated linear programming problem.

In[3]:= ConstrainedMax[17x - 20y + 18z,
          {x - y + z < 10, x < 5, x + z > 20}, {x, y, z}]

Out[3]= {160, {x -> 0, y -> 10, z -> 20}}

You can give both exact and approximate numbers as coefficients in linear programming problems. When the numbers you give are exact, Mathematica will always generate results in terms of exact rational numbers.

If the coefficients in your input are exact, Mathematica will give exact rational numbers in the output.

In[4]:= ConstrainedMin[x + 3 y + 7 z,
          {x - 3 y < 7, 2 x + 3 z >= 5, x + y + z < 10}, {x, y, z}]

Out[4]= {5/2, {x -> 5/2, y -> 0, z -> 0}}
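To see where such exact rationals come from: the optimum of a linear program sits at a vertex where constraint boundaries meet, and when the coefficients are exact that vertex can be computed in exact rational arithmetic. Here is a hedged Python sketch (the helper name is invented), using the standard-library fractions module to solve the 2x2 system that pins down a vertex, for example the point where the lines x + y == 10 and x - y == 2 meet:

```python
from fractions import Fraction

def vertex_exact(a, b):
    """Solve the 2x2 linear system a . x == b exactly, using rationals.

    a is a 2x2 coefficient matrix and b the right-hand side; the
    solution is the vertex where the two constraint lines intersect.
    With exact (integer or rational) input, the result is an exact
    rational with no floating-point rounding, analogous to what
    Mathematica does with exact coefficients.
    """
    (a11, a12), (a21, a22) = [[Fraction(v) for v in row] for row in a]
    b1, b2 = Fraction(b[0]), Fraction(b[1])
    det = a11 * a22 - a12 * a21          # assumed nonzero: lines not parallel
    return ((b1 * a22 - b2 * a12) / det,
            (a11 * b2 - a21 * b1) / det)

# Vertex of x + y == 10 and x - y == 2:
print(vertex_exact([[1, 1], [1, -1]], [10, 2]))
# (Fraction(6, 1), Fraction(4, 1))
```

Because every intermediate quantity here is a `Fraction`, a result like 5/2 stays 5/2 rather than becoming the approximate number 2.5.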

Mathematica allows you to use both strict inequalities of the form a < b and non-strict inequalities of the form a <= b in specifying linear programming problems. If you work with approximate numbers, then these types of inequalities cannot be distinguished. However, if you work with exact numbers, then these types of inequalities could in principle yield different results.

It turns out that the solution to any linear programming problem always corresponds to a point which lies on the boundary of the region defined for that problem. With non-strict inequalities, the solution point actually lies on the boundary. With strict inequalities, however, the solution point in principle lies infinitesimally within the boundary. Mathematica ignores this issue when giving exact results for linear programming problems. Thus Mathematica may yield a result like x -> 1 when in fact x must in principle be infinitesimally smaller than 1.

As a result of this phenomenon, exact results you get from functions like ConstrainedMax may have the property that they do not satisfy strict inequalities that appear in the constraints you specify. The results would however satisfy non-strict versions of the same inequalities.

The result x -> 1 satisfies x <= 1 but not x < 1.

In[5]:= ConstrainedMax[x, {x < 1}, {x}]

Out[5]= {1, {x -> 1}}

ConstrainedMax and ConstrainedMin allow you to give an arbitrary list of inequalities, as well as equations, to specify constraints between variables. In some cases, these constraints may turn out to be inconsistent. In such cases, ConstrainedMax and ConstrainedMin return unevaluated.

The constraints given here are inconsistent.

In[6]:= ConstrainedMax[x, {x < 1, x > 2}, {x}]

Out[6]= ConstrainedMax[x, {x < 1, x > 2}, {x}]

In most practical linear optimization problems, the region corresponding to the constraints you specify is finite in all directions. It is however possible to give inequalities which specify infinite regions. In such cases, there may be no bound on the objective function, and Mathematica will return infinite or indeterminate results.

The region specified in this case is unbounded.

In[7]:= ConstrainedMax[x, {x > 1}, {x}]

Out[7]= {Infinity, {x -> Indeterminate}}

In ConstrainedMax and ConstrainedMin the objective functions and constraints are specified in symbolic form. Sometimes, however, it is more convenient to handle linear programming problems simply by setting up vectors and matrices which represent the appropriate coefficients of the linear functions that appear. You can do this using LinearProgramming.

LinearProgramming[c, m, b]
    find the vector x which minimizes c . x subject to the constraints m . x >= b and x >= 0

Linear programming in matrix form.

Here is a linear programming problem given in symbolic form.

In[8]:= ConstrainedMin[2x - 3y,
          {x + y < 10, x - y > 2, x > 1}, {x, y}]

Out[8]= {0, {x -> 6, y -> 4}}

Here is the same problem given in matrix form.

In[9]:= LinearProgramming[{2, -3},
          {{-1, -1}, {1, -1}, {1, 0}}, {-10, 2, 1}]

Out[9]= {6, 4}
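The matrix form translates directly to other systems. As a rough illustration (not Mathematica code; the function name is invented), the following Python sketch minimizes c . x subject to m . x >= b and x >= 0 for two variables by brute-force enumeration of the vertices where pairs of constraint boundaries intersect. This exploits the fact noted above that the optimum always lies at a vertex of the feasible region:

```python
from itertools import combinations

def linear_programming(c, m, b):
    """Minimize c . x subject to m . x >= b and x >= 0 (two variables),
    by enumerating the vertices of the feasible polygon."""
    # Treat non-negativity as two extra constraint rows: x >= 0, y >= 0.
    rows = [list(r) for r in m] + [[1.0, 0.0], [0.0, 1.0]]
    rhs = list(b) + [0.0, 0.0]
    best = None
    for i, j in combinations(range(len(rows)), 2):
        (a1, a2), (b1, b2) = rows[i], rows[j]
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:              # parallel boundaries: no vertex
            continue
        # Intersection of the two boundary lines (Cramer's rule).
        x = (rhs[i] * b2 - rhs[j] * a2) / det
        y = (a1 * rhs[j] - b1 * rhs[i]) / det
        # Keep the vertex only if it satisfies every constraint.
        if all(r[0] * x + r[1] * y >= rr - 1e-9 for r, rr in zip(rows, rhs)):
            value = c[0] * x + c[1] * y
            if best is None or value < best[0]:
                best = (value, [x, y])
    return best                           # None if the region is empty

# The problem from In[9]: minimize 2x - 3y with x + y <= 10, x - y >= 2, x >= 1.
print(linear_programming([2, -3], [[-1, -1], [1, -1], [1, 0]], [-10, 2, 1]))
# (0.0, [6.0, 4.0]): minimum 0 at x -> 6, y -> 4
```

Vertex enumeration grows combinatorially with the number of constraints, so production solvers use the simplex method or interior-point methods instead; the sketch is only meant to show what the {c, m, b} data in LinearProgramming encodes.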