# Linear Programming

Introduction | The LinearProgramming Function | Examples | Importing Large Datasets and Solving Large-Scale Problems | Application Examples of Linear Programming | Algorithms for Linear Programming | References

## Introduction

Linear programming problems are optimization problems where the objective function and constraints are all linear.

*Mathematica* has a collection of algorithms for solving linear optimization problems with real variables, accessed via LinearProgramming, FindMinimum, FindMaximum, NMinimize, NMaximize, Minimize, and Maximize. LinearProgramming gives direct access to linear programming algorithms, provides the most flexibility for specifying the methods used, and is the most efficient for large-scale problems. FindMinimum, FindMaximum, NMinimize, NMaximize, Minimize, and Maximize are convenient for solving linear programming problems in equation and inequality form.

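The original input/output cells were not preserved here; the following is a small sketch of the two styles just described, using illustrative numbers. Minimize works on the equation form, while LinearProgramming[c, m, b] minimizes c.x subject to m.x >= b and x >= 0.

```mathematica
(* equation form: minimize 2x + 3y subject to linear inequality constraints *)
Minimize[{2 x + 3 y, x + 2 y >= 3 && x + y >= 2 && x >= 0 && y >= 0}, {x, y}]

(* the same problem in matrix form *)
LinearProgramming[{2, 3}, {{1, 2}, {1, 1}}, {3, 2}]
```

Both formulations give the minimum value 5, attained at x = 1, y = 1.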

## The LinearProgramming Function

LinearProgramming is the main function for linear programming with the most flexibility for specifying the methods used, and is the most efficient for large-scale problems.

The following options are accepted.

| option name | default value | |
| --- | --- | --- |
| Method | Automatic | method used to solve the linear optimization problem |
| Tolerance | Automatic | convergence tolerance |

Options for LinearProgramming.

The Method option specifies the algorithm used to solve the linear programming problem. Possible values are Automatic, "Simplex", "RevisedSimplex", and "InteriorPoint". The default is Automatic, which automatically chooses from the other methods based on the problem size and precision.

The Tolerance option specifies the convergence tolerance.
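A sketch of choosing a method explicitly (the numbers are illustrative; the interior point method works in machine arithmetic, so its input is given in reals):

```mathematica
LinearProgramming[{2, 3}, {{1, 2}, {1, 1}}, {3, 2}, Method -> "Simplex"]
LinearProgramming[{2., 3.}, {{1., 2.}, {1., 1.}}, {3., 2.},
  Method -> "InteriorPoint"]
```

The first call returns the exact vertex {1, 1}; the second returns a machine-number approximation of the same point.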

## Examples

### Difference between the Interior Point and Simplex/Revised Simplex Methods

The simplex and revised simplex algorithms solve a linear programming problem by moving along the edges of the polytope defined by the constraints, from vertex to vertex with successively smaller values of the objective function, until the minimum is reached. *Mathematica*'s implementation of these algorithms uses dense linear algebra. A unique feature of the implementation is that it is possible to solve exact/extended precision problems. Therefore these methods are suitable for small-sized problems for which non-machine-number results are needed, or for which a solution on a vertex is desirable.

Interior point algorithms for linear programming, loosely speaking, iterate from the interior of the polytope defined by the constraints. They get closer to the solution very quickly, but unlike the simplex/revised simplex algorithms, do not find the solution exactly. *Mathematica*'s implementation of an interior point algorithm uses machine-precision sparse linear algebra. Therefore for large-scale machine-precision linear programming problems, the interior point method is more efficient and should be used.

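The original cells are lost; the following sketch shows one visible consequence of the difference on a problem whose optimal solutions form a whole edge (every point with x + y == 1, x, y >= 0 is optimal): a simplex method returns a vertex of the optimal face, while the interior point method converges to a point in its relative interior.

```mathematica
(* simplex lands on a vertex of the optimal edge, e.g. {0, 1} or {1, 0} *)
LinearProgramming[{1, 1}, {{1, 1}}, {1}, Method -> "Simplex"]

(* interior point converges to a point inside the optimal edge,
   typically near {0.5, 0.5} *)
LinearProgramming[{1., 1.}, {{1., 1.}}, {1.}, Method -> "InteriorPoint"]
```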

### Finding Dual Variables

Given the general linear programming problem

$$ \text{(P)}\qquad \min_x\; c^T x \quad \text{subject to}\quad A x \ge b,\; x \ge 0, $$

its dual problem is

$$ \text{(D)}\qquad \max_y\; b^T y \quad \text{subject to}\quad A^T y \le c,\; y \ge 0. $$

For some applications it is useful to know the solutions of both (P) and (D).

The relationship between the solutions of the primal and dual problems is given by the following table.

| if the primal (P) is | then the dual (D) is |
| --- | --- |
| feasible | feasible |
| unbounded | infeasible |
| infeasible | unbounded or infeasible |

When both problems are feasible, the optimal values of (P) and (D) are the same, and the following complementary conditions hold for the primal solution $x$ and dual solution $y$:

$$ y^T (A x - b) = 0, \qquad x^T \left( c - A^T y \right) = 0. $$

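Since the dual of the default form (minimize c.x subject to m.x >= b, x >= 0) is itself a linear program, it can be solved with LinearProgramming after negating to fit the default "minimize, >=" convention. A sketch with illustrative data:

```mathematica
c = {2, 3}; m = {{1, 2}, {1, 1}}; b = {3, 2};

x = LinearProgramming[c, m, b];                (* primal solution *)
y = LinearProgramming[-b, -Transpose[m], -c];  (* dual solution *)

{c.x, b.y}  (* equal when both problems are feasible (strong duality) *)
```

Here both optimal values are 5, as strong duality requires.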

### Dealing with Infeasibility and Unboundedness in the Interior Point Method

The primal-dual interior point method is designed for feasible problems; for infeasible or unbounded problems it will diverge, and with the current implementation it is difficult to determine whether the divergence is due to infeasibility or to unboundedness.

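A sketch of an infeasible problem (the constraints x + y >= 2 and x + y <= 1 cannot both hold). With the simplex methods, infeasibility is detected exactly and a message is issued; with the interior point method, the iteration diverges instead.

```mathematica
(* x + y >= 2 together with -(x + y) >= -1, i.e. x + y <= 1: infeasible *)
LinearProgramming[{1, 1}, {{1, 1}, {-1, -1}}, {2, -1}, Method -> "Simplex"]
LinearProgramming[{1., 1.}, {{1., 1.}, {-1., -1.}}, {2., -1.},
  Method -> "InteriorPoint"]
```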

### The Method Options of "InteriorPoint"

One of the method options of "InteriorPoint" decides whether dense columns are to be treated separately. Dense columns are columns of the constraint matrix that have many nonzero elements. By default, this method option has the value Automatic, and dense columns are treated separately.


## Importing Large Datasets and Solving Large-Scale Problems

A commonly used format for documenting linear programming problems is the Mathematical Programming System (MPS) format. This is a text format consisting of a number of sections.

### Importing MPS Formatted Files in Equation Form

*Mathematica* is able to import MPS formatted files. By default, Import of MPS data returns a linear programming problem in equation form, which can then be solved using Minimize or NMinimize.


### Large-Scale Problems: Importing in Matrix and Vector Form

For large-scale problems, it is more efficient to import the MPS data file and represent the linear programming problem in matrix and vector form, then solve it using LinearProgramming.

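The original cells are lost; the following sketch assumes a hypothetical file model.mps and uses the "LinearProgrammingData" Import element (that element name is an assumption here) to obtain the problem as matrices and vectors that can be spliced directly into LinearProgramming.

```mathematica
(* "model.mps" is a hypothetical file name *)
data = Import["model.mps", "LinearProgrammingData"];
x = LinearProgramming @@ data;
First[data].x  (* optimal objective value *)
```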

### Free Formatted MPS Files

Standard MPS formatted files use a fixed format, where each field occupies a strictly fixed character position. However, some modeling systems output MPS files with a free format, where fields are positioned freely. For such files, the option "FreeFormat"->True can be specified for Import.

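A minimal sketch (the file name is hypothetical, and "LinearProgrammingData" as the element name is an assumption):

```mathematica
data = Import["model.mps", "LinearProgrammingData", "FreeFormat" -> True];
LinearProgramming @@ data
```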

### Linear Programming Test Problems

All NetLib linear programming test problems can be accessed through the ExampleData function.

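A sketch of browsing the collection ("afiro" is one of the small classic NetLib problems; the exact set of properties available through ExampleData may differ):

```mathematica
ExampleData["LinearProgramming"]             (* list of available test problems *)
ExampleData[{"LinearProgramming", "afiro"}]  (* data for one problem *)
```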

## Application Examples of Linear Programming

### L1-Norm Minimization

It is possible to solve an $\ell_1$-norm minimization problem

$$ \min_x\; \|A x - b\|_1 $$

by turning it into the linear programming problem

$$ \min_{x,\,t}\; \sum_i t_i \quad \text{subject to}\quad -t \le A x - b \le t,\; t \ge 0. $$

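The formulation above can be set up directly for LinearProgramming; the data below is illustrative. The unknowns are the pair {x, t}, the x variables are unrestricted in sign (so explicit bounds are needed), and the two-sided constraint is split into two "<=" rows.

```mathematica
a = {{1, 2}, {3, 4}, {5, 6}}; b = {1, 2, 2};
{m, n} = Dimensions[a];

(* minimize Total[t]; the x variables have zero cost *)
cost = Join[ConstantArray[0, n], ConstantArray[1, m]];

(* a.x - t <= b  and  -a.x - t <= -b *)
cons = ArrayFlatten[{{a, -IdentityMatrix[m]}, {-a, -IdentityMatrix[m]}}];
rhs = Join[Thread[{b, -1}], Thread[{-b, -1}]];  (* -1 marks "<=" rows *)

(* x unrestricted, t nonnegative *)
bounds = Join[ConstantArray[{-Infinity, Infinity}, n],
   ConstantArray[{0, Infinity}, m]];

sol = LinearProgramming[cost, cons, rhs, bounds];
Take[sol, n]  (* the minimizer x *)
```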

### Design of an Optimal Anchor

This example is adapted from [2]. The aim is to design an anchor that uses as little material as possible to support a load.

This problem can be modeled by discretizing the structure and simulating it with nodes and links. The modeling process is illustrated in the following figure. Here a grid of 7×10 nodes is generated. Each node is then connected by a link to every other node within a Manhattan distance of three. The three red nodes are assumed to be fixed to the wall, while at every other node the compression and tension forces must balance.

Each link represents a rigid rod of some thickness, with its weight proportional to the force it carries times its length. The aim is to minimize the total material used, which is

$$ \sum_{(i,j)} l_{ij}\,|f_{ij}|, $$

where $l_{ij}$ is the length of the link between nodes $i$ and $j$ and $f_{ij}$ is the force on it.

Hence mathematically this is a linearly constrained minimization problem whose objective function is a sum of absolute values of linear functions.

The absolute values in the objective function can be eliminated by splitting each force into a combination of compression and tension forces, each non-negative. Thus let $\mathcal{L}$ be the set of links, $\mathcal{N}$ the set of nodes, $l_{ij}$ the length of the link between nodes $i$ and $j$, and $c_{ij}$ and $t_{ij}$ the compression and tension forces on that link; then the above model can be converted to the linear programming problem

$$ \min \sum_{(i,j)\in\mathcal{L}} l_{ij}\,(c_{ij} + t_{ij}) \quad \text{subject to force balance at each free node},\quad c_{ij},\, t_{ij} \ge 0. $$


## Algorithms for Linear Programming

### Simplex and Revised Simplex Algorithms

The simplex and revised simplex algorithms solve linear programming problems by constructing a feasible solution at a vertex of the polytope defined by the constraints, and then moving along the edges of the polytope to vertices with successively smaller values of the objective function until the minimum is reached.

Although the simplex and revised simplex algorithms are quite efficient in practice, and are guaranteed to find the global optimum, they have poor worst-case behavior: it is possible to construct a linear programming problem for which the simplex or revised simplex method takes a number of steps exponential in the problem size.

*Mathematica* implements the simplex and revised simplex algorithms using dense linear algebra. A unique feature of this implementation is that it is possible to solve exact/extended precision problems. Therefore these methods are more suitable for small-sized problems for which non-machine-number results are needed.

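A sketch of the exact-arithmetic capability: with rational input, the simplex methods return an exact rational solution rather than a machine-number approximation.

```mathematica
(* minimize x + y subject to 2x + y >= 3, x + 3y >= 7/2, x, y >= 0 *)
LinearProgramming[{1, 1}, {{2, 1}, {1, 3}}, {3, 7/2}, Method -> "RevisedSimplex"]
```

The result is the exact vertex {11/10, 4/5}.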

### Interior Point Algorithm

Although the simplex and revised simplex algorithms can be quite efficient on average, they have a poor worst-case behavior. It is possible to construct a linear programming problem for which the simplex or revised simplex methods take a number of steps exponential in the problem size. The interior point algorithm, however, has been proven to converge in a number of steps that are polynomial in the problem size.

Furthermore, the *Mathematica* simplex and revised simplex implementations use dense linear algebra, while its interior point implementation uses machine-number sparse linear algebra. Therefore for large-scale, machine-number linear programming problems, the interior point method is more efficient and should be used.

#### Interior Point Formulation

Consider the standardized linear programming problem

$$ \min_x\; c^T x \quad \text{subject to}\quad A x = b,\; x \ge 0, $$

where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, and $c \in \mathbb{R}^n$. This problem can be solved using a logarithmic barrier formulation to deal with the positivity constraints:

$$ \min_x\; c^T x - \mu \sum_{i=1}^{n} \ln x_i \quad \text{subject to}\quad A x = b,\qquad \mu > 0. $$

The first-order necessary conditions for the above problem give

$$ A^T y + z = c, \qquad A x = b, \qquad x_i z_i = \mu,\; i = 1, \dots, n. $$

Let $X$ denote the diagonal matrix made of the vector $x$, $Z$ the diagonal matrix made of $z$, and $e = (1, 1, \dots, 1)^T$; then the last condition can be written as $X Z e = \mu e$.

This is a set of linear/nonlinear equations with the constraints $x \ge 0$, $z \ge 0$. It can be solved using Newton's method:

$$ \begin{pmatrix} 0 & A^T & I \\ A & 0 & 0 \\ Z & 0 & X \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta y \\ \Delta z \end{pmatrix} = \begin{pmatrix} c - A^T y - z \\ b - A x \\ \mu e - X Z e \end{pmatrix}. $$

One way to solve this linear system is to use Gaussian elimination to simplify the matrix into block triangular form: eliminating $\Delta z$ from the third block row gives $\Delta z = X^{-1}(\mu e - X Z e - Z\, \Delta x)$, leaving a reduced system in $\Delta x$ and $\Delta y$.

To solve this block triangular system, the so-called normal equations can be formed, with coefficient matrix

$$ M = A\, X Z^{-1} A^T. $$

This matrix is positive definite, but becomes very ill-conditioned as the solution is approached. Thus numerical techniques are used to stabilize the solution process, and typically the interior point method can only be expected to solve the problem to a tolerance of about $10^{-8}$, with the tolerance explained in "Convergence Tolerance". *Mathematica* uses Mehrotra's predictor-corrector scheme [1].

#### Convergence Tolerance

General linear programming problems are first converted to the standard form

$$ \min_x\; c^T x \quad \text{subject to}\quad A x = b,\; x \ge 0. $$

The convergence criterion for the interior point algorithm is

$$ \frac{\|A x - b\|}{1 + \|b\|} + \frac{\|A^T y + z - c\|}{1 + \|c\|} + \frac{|c^T x - b^T y|}{1 + |b^T y|} \le \epsilon, $$

with the tolerance $\epsilon$ set, by default, to $10^{-8}$ for machine-number computation.

## References

[1] Mehrotra, S. "On the Implementation of a Primal-Dual Interior Point Method." *SIAM Journal on Optimization* 2 (1992): 575-601.

[2] Vanderbei, R. J. *Linear Programming: Foundations and Extensions*. Springer-Verlag, 2001.