Linear Programming

Introduction

Linear programming problems are optimization problems where the objective function and constraints are all linear.

The Wolfram Language has a collection of algorithms for solving linear optimization problems with real variables, accessed via LinearProgramming, FindMinimum, FindMaximum, NMinimize, NMaximize, Minimize, and Maximize. LinearProgramming gives direct access to linear programming algorithms, provides the most flexibility for specifying the methods used, and is the most efficient for large-scale problems. FindMinimum, FindMaximum, NMinimize, NMaximize, Minimize, and Maximize are convenient for solving linear programming problems in equation and inequality form.

This solves a linear programming problem using Minimize.
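For example, with a small problem chosen here purely for illustration, minimize 2x + 3y subject to x + y >= 3 and x, y >= 0:

    Minimize[{2 x + 3 y, x + y >= 3, x >= 0, y >= 0}, {x, y}]

This gives {6, {x -> 3, y -> 0}}.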
This solves the same problem using NMinimize. NMinimize returns a machine-number solution.
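With the same illustrative problem:

    NMinimize[{2 x + 3 y, x + y >= 3, x >= 0, y >= 0}, {x, y}]

This gives approximately {6., {x -> 3., y -> 0.}}.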
This solves the same problem using LinearProgramming.
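In the matrix and vector form used by LinearProgramming (minimize c.x subject to m.x >= b and x >= 0), the same problem reads:

    LinearProgramming[{2, 3}, {{1, 1}}, {3}]

This returns the minimizer {3, 0}.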

The LinearProgramming Function

LinearProgramming is the main function for linear programming with the most flexibility for specifying the methods used, and is the most efficient for large-scale problems.

The following options are accepted.

option name     default value
Method          Automatic       method used to solve the linear optimization problem
Tolerance       Automatic       convergence tolerance

Options for LinearProgramming.

The Method option specifies the algorithm used to solve the linear programming problem. Possible values are Automatic, "Simplex", "RevisedSimplex", and "InteriorPoint". The default is Automatic, which automatically chooses from the other methods based on the problem size and precision.

The Tolerance option specifies the convergence tolerance.

Examples

Difference between Interior Point and Simplex and/or Revised Simplex

The simplex and revised simplex algorithms solve a linear programming problem by moving along the edges of the polytope defined by the constraints, from vertex to vertex with successively smaller values of the objective function, until the minimum is reached. The Wolfram Language's implementation of these algorithms uses dense linear algebra. A unique feature of the implementation is that it is possible to solve exact/extended precision problems. Therefore these methods are suitable for small-sized problems for which non-machine-number results are needed, or for which a solution at a vertex is desirable.

Interior point algorithms for linear programming, loosely speaking, iterate from the interior of the polytope defined by the constraints. They get closer to the solution very quickly, but unlike the simplex/revised simplex algorithms, do not find the solution exactly. The Wolfram Language's implementation of an interior point algorithm uses machine-precision sparse linear algebra. Therefore for large-scale machine-precision linear programming problems, the interior point method is more efficient and should be used.

This solves a linear programming problem that has multiple solutions (any point that lies on the line segment between two optimal vertices is a solution); the interior point algorithm gives a solution that lies in the middle of the solution set.
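As an illustration (this particular problem is chosen only because its solution set is easy to see), minimizing x + y subject to x + y >= 1 with x, y >= 0 is optimal at every point of the segment between {1, 0} and {0, 1}:

    LinearProgramming[{1., 1.}, {{1., 1.}}, {1.}, Method -> "InteriorPoint"]

The result is a point near the middle of that segment, approximately {0.5, 0.5}.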
Using "Simplex" or "RevisedSimplex", a solution at the boundary of the solution set is given.
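With the same illustrative problem:

    LinearProgramming[{1., 1.}, {{1., 1.}}, {1.}, Method -> "Simplex"]

This returns one of the two endpoints, {1., 0.} or {0., 1.}.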
This shows that the interior point method is much faster for the following random sparse linear programming problem with 200 variables, and gives a similar optimal value.
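A construction along the following lines (the particular random problem is purely illustrative; the unit diagonal keeps it feasible, and the nonnegative objective keeps it bounded) can be used for the comparison:

    SeedRandom[1];
    n = 200;
    c = RandomReal[{0, 1}, n];
    (* sparse constraint matrix: unit diagonal plus a few small random entries *)
    m = SparseArray[{Band[{1, 1}] -> 1.}, {n, n}] +
        SparseArray[RandomInteger[{1, n}, {400, 2}] -> RandomReal[{0, 0.01}, 400], {n, n}];
    b = RandomReal[{0, 1}, n];
    First[AbsoluteTiming[ip = LinearProgramming[c, m, b, Method -> "InteriorPoint"];]]
    First[AbsoluteTiming[sx = LinearProgramming[c, m, b, Method -> "Simplex"];]]
    {c . ip, c . sx}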

Finding Dual Variables

Given the general linear programming problem

    minimize c.x  subject to  A.x >= b, x >= 0,    (P)

its dual is

    maximize b.y  subject to  Transpose[A].y <= c, y >= 0.    (D)

For some applications it is useful to know the solutions of both.

The relationship between the solutions of the primal and dual problems is given by the following table.

if the primal is        then the dual problem is
feasible                feasible
unbounded               infeasible
infeasible              unbounded or infeasible

When both problems are feasible, then the optimal values of (P) and (D) are the same, and the following complementary conditions hold for the primal solution x and dual solution y:

    y.(A.x - b) == 0  and  x.(c - Transpose[A].y) == 0.

The dual solution can be obtained by setting up and solving the dual problem explicitly.

This solves the primal problem as well as its dual.
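Continuing with the small illustrative problem from the introduction, minimize 2x + 3y subject to x + y >= 3 and x, y >= 0, whose dual is maximize 3w subject to w <= 2, w <= 3, w >= 0:

    c = {2, 3}; a = {{1, 1}}; b = {3};
    xsol = LinearProgramming[c, a, b]
    (* the dual, written as a minimization with <= constraints *)
    ysol = LinearProgramming[-b, Transpose[a], Thread[{c, -1}]]

This gives xsol = {3, 0} and ysol = {2}.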
This confirms that the primal and dual give the same objective value.
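Here the two objective values agree:

    {c . xsol, b . ysol}

Both are 6.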
In this example the dual of the constraint x + y >= 3 is 2, which means that for one unit of increase in the right-hand side of the constraint, there will be two units of increase in the objective. This can be confirmed by perturbing the right-hand side of the constraint by 0.001.
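Perturbing the right-hand side of the constraint:

    LinearProgramming[c, a, {3.001}]

This gives {3.001, 0.}.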
Indeed, the objective value increases by twice that amount.
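The change in the objective is

    c . {3.001, 0.} - c . xsol

which is about 0.002, twice the perturbation of 0.001.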

Dealing with Infeasibility and Unboundedness in the Interior Point Method

The primal-dual interior point method is designed for feasible problems; for infeasible or unbounded problems it will diverge, and with the current implementation it is difficult to find out whether the divergence is due to infeasibility or to unboundedness.

A heuristic catches infeasible/unbounded problems and issues a suitable message.
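For example, a problem whose constraints require both x + y >= 3 and x + y <= 2 is infeasible (this particular problem is only an illustration):

    LinearProgramming[{1., 1.}, {{1., 1.}, {-1., -1.}}, {3., -2.}, Method -> "InteriorPoint"]

LinearProgramming issues a message reporting that no solution could be found.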
Sometimes the heuristic cannot tell with certainty if a problem is infeasible or unbounded.
Using the Simplex method as suggested by the message shows that the problem is unbounded.

The Method Options of "InteriorPoint"

The "InteriorPoint" method has a method option that decides if dense columns are to be treated separately. Dense columns are columns of the constraint matrix that have many nonzero elements. By default, this method option has the value Automatic, and dense columns are treated separately.

Large problems that contain dense columns typically benefit from dense column treatment.

Importing Large Datasets and Solving Large-Scale Problems

A commonly used format for documenting linear programming problems is the Mathematical Programming System (MPS) format. This is a text format consisting of a number of sections.

Importing MPS Formatted Files in Equation Form

The Wolfram Language is able to import MPS formatted files. By default, Import of MPS data returns a linear programming problem in equation form, which can then be solved using Minimize or NMinimize.

This solves the linear programming problem specified by an MPS file.

Large-Scale Problems: Importing in Matrix and Vector Form

For large-scale problems, it is more efficient to import the MPS data file and represent the linear programming problem using matrices and vectors, and then solve it using LinearProgramming.

This shows the elements that can be imported for MPS formatted data.
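The elements can be listed with the "Elements" specification; the file name here is only a placeholder for a local MPS file:

    Import["problem.mps", {"MPS", "Elements"}]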
This imports the problem "ganges", with 1309 constraints and 1681 variables, in a form suitable for LinearProgramming.
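A sketch of this step, assuming a local copy of the file and that the "LinearProgrammingData" element returns the problem as a list of arguments in the form expected by LinearProgramming:

    data = Import["ganges.mps", {"MPS", "LinearProgrammingData"}];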
This solves the problem and finds the optimal value.
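Applying LinearProgramming to the imported arguments; it is assumed here that the first element of data is the objective vector:

    sol = LinearProgramming @@ data;
    First[data] . sol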
The "ConstraintMatrix" specification can be used to get the sparse constraint matrix only.
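For example:

    Import["ganges.mps", {"MPS", "ConstraintMatrix"}]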

Free Formatted MPS Files

Standard MPS formatted files use a fixed format, where each field occupies a strictly fixed character position. However some modeling systems output MPS files with a free format, where fields are positioned freely. For such files, the option "FreeFormat"->True can be specified for Import.

This string describes an MPS file in free format.
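A small free-format description (this particular model, minimize 2 x1 + 3 x2 subject to x1 + x2 >= 3, is only an illustration):

    mpsString = "NAME TEST\nROWS\n N obj\n G c1\nCOLUMNS\n x1 obj 2.0 c1 1.0\n x2 obj 3.0 c1 1.0\nRHS\n rhs c1 3.0\nENDATA";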
This gets a temporary file name, and exports the string to the file.
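For example:

    file = FileNameJoin[{$TemporaryDirectory, "freeformat.mps"}];
    Export[file, mpsString, "Text"];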
This imports the file, using the "FreeFormat"->True option.
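For example:

    Import[file, "MPS", "FreeFormat" -> True]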

Linear Programming Test Problems

Through the ExampleData function, all NetLib linear programming test problems can be accessed.

This finds all problems in the Netlib set.
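Assuming the collection name "LinearProgramming" for the Netlib set in ExampleData:

    ExampleData["LinearProgramming"]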
This imports one of the problems and solves it.
This shows other properties that can be imported for the problem.
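For example, assuming "afiro", one of the small Netlib problems, is in the collection:

    ExampleData[{"LinearProgramming", "afiro"}, "Properties"]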
This imports the problem in equation form.

Application Examples of Linear Programming

L1-Norm Minimization

It is possible to solve an L1-norm minimization problem

    minimize Norm[A.x - b, 1]

by turning it into the linear programming problem

    minimize Total[t]  subject to  -t <= A.x - b <= t,

in which the auxiliary variables t bound the absolute values of the residuals.
This defines a function for solving an L1-norm minimization problem.
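A sketch of such a function (the name L1Solve and the construction are illustrative): it minimizes Total[t] over the joined variable vector Join[x, t], with the constraints m.x - t <= b and m.x + t >= b.

    L1Solve[m_?MatrixQ, b_?VectorQ] :=
     Module[{nr, nc, c, cons, rhs, bounds, sol},
      {nr, nc} = Dimensions[m];
      (* objective: zero weight on x, unit weight on the residual bounds t *)
      c = Join[ConstantArray[0, nc], ConstantArray[1, nr]];
      (* [m, -I].{x, t} <= b  and  [m, I].{x, t} >= b *)
      cons = ArrayFlatten[{{m, -IdentityMatrix[nr]}, {m, IdentityMatrix[nr]}}];
      rhs = Join[Thread[{b, -1}], Thread[{b, 1}]];
      (* all variables are free; the constraints force t >= 0 at the optimum *)
      bounds = ConstantArray[{-Infinity, Infinity}, nc + nr];
      sol = LinearProgramming[c, cons, rhs, bounds];
      Take[sol, nc]]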
The following is an overdetermined linear system.
A simple application of LinearSolve would fail.
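For instance, with a small system chosen only for illustration:

    m = {{1, 2}, {3, 4}, {5, 6}, {7, 8}};
    b = {1, 2, 3, 5};
    LinearSolve[m, b]

LinearSolve issues a message that the equations have no exact solution.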
This finds the L1-norm minimization solution.
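Using the function sketched above:

    xl1 = L1Solve[m, b]
    Norm[m . xl1 - b, 1]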
The least squares solution can be found using PseudoInverse. It gives a larger 1-norm of the residual, but a smaller 2-norm.
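For comparison:

    xls = PseudoInverse[m] . b;
    N[{Norm[m . xls - b, 1], Norm[m . xl1 - b, 1]}]
    N[{Norm[m . xls - b, 2], Norm[m . xl1 - b, 2]}]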

Design of an Optimal Anchor

The example is adapted from [2]. The aim is to design an anchor that uses as little material as possible to support a load.

This problem can be modeled by discretizing it and simulating it using nodes and links. The modeling process is as follows: a grid of 7×10 nodes is generated, and each node is connected by a link to every other node within a Manhattan distance of three. Three of the nodes are assumed to be fixed to the wall, while on all other nodes the compression and tension forces must balance.

Each link represents a rigid rod that has a thickness, with its weight proportional to the force on it and its length. The aim is to minimize the total material used, which is the sum over all links of the length of the link times the magnitude of the force it carries.

Hence mathematically this is a linearly constrained minimization problem, with objective function a sum of absolute values of linear functions.

The absolute values in the objective function can be removed by splitting each force into compression and tension components, each non-negative. Thus assume L is the set of links, N the set of nodes, l[i,j] the length of the link between nodes i and j, and c[i,j] and t[i,j] the compression and tension forces on that link; then the above model can be converted to the linear programming problem

    minimize the sum of l[i,j] (c[i,j] + t[i,j]) over all links (i,j) in L, subject to force balance at every node not fixed to the wall, with c[i,j] >= 0 and t[i,j] >= 0.

The following sets up the model, solves it, and plots the result; it is based on an AMPL model [2].
This solves the problem by placing 30 nodes in the horizontal and vertical directions.
If, however, the anchor is fixed not to the wall but to a few points in space, the results resemble the shape of leaves; perhaps the structure of leaves has been optimized through evolution.

Algorithms for Linear Programming

Simplex and Revised Simplex Algorithms

The simplex and revised simplex algorithms solve linear programming problems by constructing a feasible solution at a vertex of the polytope defined by the constraints, and then moving along the edges of the polytope to vertices with successively smaller values of the objective function until the minimum is reached.

Although the sparse implementation of the simplex and revised simplex algorithms is quite efficient in practice, and is guaranteed to find the global optimum, the algorithms have poor worst-case behavior: it is possible to construct a linear programming problem for which the simplex or revised simplex method takes a number of steps exponential in the problem size.

The Wolfram Language implements simplex and revised simplex algorithms using dense linear algebra. The unique feature of this implementation is that it is possible to solve exact/extended precision problems. Therefore these methods are more suitable for small-sized problems for which non-machine-number results are needed.

This sets up a random linear programming problem with 20 constraints and 200 variables.
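For instance (the random data here are purely illustrative; integer input keeps the arithmetic exact):

    SeedRandom[2];
    c = RandomInteger[{1, 10}, 200];
    m = RandomInteger[{1, 10}, {20, 200}];
    b = RandomInteger[{1, 10}, 20];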
This solves the problem. Typically, for a linear programming problem with many more variables than constraints, the revised simplex algorithm is faster. On the other hand, if there are many more constraints than variables, the simplex algorithm is faster.
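For example:

    Timing[solRevised = LinearProgramming[c, m, b, Method -> "RevisedSimplex"];]
    Timing[solSimplex = LinearProgramming[c, m, b, Method -> "Simplex"];]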
If only machine-number results are desired, then the problem should be converted to machine numbers, and the interior point algorithm should be used.
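For example:

    Timing[solIP = LinearProgramming[N[c], N[m], N[b], Method -> "InteriorPoint"];]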

Interior Point Algorithm

Although the simplex and revised simplex algorithms can be quite efficient on average, they have poor worst-case behavior. It is possible to construct a linear programming problem for which the simplex or revised simplex methods take a number of steps exponential in the problem size. The interior point algorithm, however, has been proven to converge in a number of steps that is polynomial in the problem size.

Furthermore, the Wolfram Language simplex and revised simplex implementations use dense linear algebra, while its interior point implementation uses machine-number sparse linear algebra. Therefore for large-scale, machine-number linear programming problems, the interior point method is more efficient and should be used.

Interior Point Formulation

Consider the standardized linear programming problem

    minimize c.x  subject to  A.x == b, x >= 0,

where c and x are vectors of length n, b is a vector of length m, and A is an m x n matrix. This problem can be solved using a barrier function formulation to deal with the positivity constraints,

    minimize c.x - mu (Log[x1] + Log[x2] + ... + Log[xn])  subject to  A.x == b,

where mu > 0 is the barrier parameter.

The first-order necessary conditions for the above problem give

    A.x == b,
    Transpose[A].y + z == c,
    x[i] z[i] == mu,  i = 1, ..., n,

where y contains the Lagrange multipliers of the equality constraints and z the dual slack variables. Let X and Z denote the diagonal matrices made of the vectors x and z, and let e = (1, 1, ..., 1); the last condition can then be written as X.Z.e == mu e.

This is a set of linear/nonlinear equations with the constraints x >= 0 and z >= 0. It can be solved using Newton's method,

    A.dx == b - A.x,
    Transpose[A].dy + dz == c - Transpose[A].y - z,
    Z.dx + X.dz == mu e - X.Z.e,

with the update

    x -> x + alpha dx,  y -> y + alpha dy,  z -> z + alpha dz,

where the step length alpha is chosen so that x and z remain positive.

One way to solve this linear system is to use Gaussian elimination to simplify the matrix into block triangular form.

To solve this block triangular matrix, the so-called normal system can be solved, with the matrix in this normal system

    A.Inverse[Z].X.Transpose[A].

This matrix is positive definite, but becomes very ill-conditioned as the solution is approached. Thus numerical techniques are used to stabilize the solution process, and typically the interior point method can only be expected to solve the problem to a limited tolerance, as explained in "Convergence Tolerance". The Wolfram Language uses Mehrotra's predictor-corrector scheme [1].

Convergence Tolerance

General linear programming problems are first converted to the standard form

    minimize c.x  subject to  A.x == b, x >= 0,

with the corresponding dual

    maximize b.y  subject to  Transpose[A].y + z == c, z >= 0.

The convergence criterion for the interior point algorithm is that the relative primal infeasibility, the relative dual infeasibility, and the relative gap between the primal objective c.x and the dual objective b.y all fall below the convergence tolerance, which is controlled by the Tolerance option (Automatic by default).

References

[1] Mehrotra, S. "On the Implementation of a Primal-Dual Interior Point Method." SIAM Journal on Optimization 2 (1992): 575–601.

[2] Vanderbei, R. J. Linear Programming: Foundations and Extensions. Springer-Verlag, 2001.