---
title: "QuadraticOptimization"
language: "en"
type: "Symbol"
summary: "QuadraticOptimization[f, cons, vars] finds values of variables vars that minimize the quadratic objective f subject to linear constraints cons. QuadraticOptimization[{q, c}, {a, b}] finds a vector x that minimizes the quadratic objective 1/2 x . q . x + c . x subject to the linear inequality constraints a . x + b \\[SucceedsEqual] 0. QuadraticOptimization[{q, c}, {a, b}, {aeq, beq}] includes the linear equality constraints aeq . x + beq == 0. QuadraticOptimization[{q, c}, ..., {dom1, dom2, ...}] takes xi to be in the domain domi, where domi is Integers or Reals. QuadraticOptimization[...,  prop] specifies what solution property  prop should be returned."
keywords: 
- quadratic programming
- quadratic optimization
- QP
- QO
- linearly constrained quadratic optimization
- linearly constrained quadratic programming
- convex optimization
- convex programming
- fast optimization
canonical_url: "https://reference.wolfram.com/language/ref/QuadraticOptimization.html"
source: "Wolfram Language Documentation"
related_guides: 
  - 
    title: "Convex Optimization"
    link: "https://reference.wolfram.com/language/guide/ConvexOptimization.en.md"
  - 
    title: "Matrix-Based Minimization"
    link: "https://reference.wolfram.com/language/guide/MatrixBasedMinimization.en.md"
  - 
    title: "Optimization"
    link: "https://reference.wolfram.com/language/guide/Optimization.en.md"
  - 
    title: "Symbolic Vectors, Matrices and Arrays"
    link: "https://reference.wolfram.com/language/guide/SymbolicArrays.en.md"
related_functions: 
  - 
    title: "LinearOptimization"
    link: "https://reference.wolfram.com/language/ref/LinearOptimization.en.md"
  - 
    title: "SecondOrderConeOptimization"
    link: "https://reference.wolfram.com/language/ref/SecondOrderConeOptimization.en.md"
  - 
    title: "SemidefiniteOptimization"
    link: "https://reference.wolfram.com/language/ref/SemidefiniteOptimization.en.md"
  - 
    title: "FindMinimum"
    link: "https://reference.wolfram.com/language/ref/FindMinimum.en.md"
  - 
    title: "NMinimize"
    link: "https://reference.wolfram.com/language/ref/NMinimize.en.md"
  - 
    title: "Minimize"
    link: "https://reference.wolfram.com/language/ref/Minimize.en.md"
related_tutorials: 
  - 
    title: "Optimization Method Framework"
    link: "https://reference.wolfram.com/language/OptimizationMethodFramework/tutorial/OptimizationMethodFramework.en.md"
---
# QuadraticOptimization

QuadraticOptimization[f, cons, vars] finds values of variables vars that minimize the quadratic objective f subject to linear constraints cons.

QuadraticOptimization[{q, c}, {a, b}] finds a vector $x$ that minimizes the quadratic objective $1/2x.q.x+c.x$ subject to the linear inequality constraints $a.x+b\succeq 0$. 

QuadraticOptimization[{q, c}, {a, b}, {aeq, beq}] includes the linear equality constraints $a_{\text{\textit{eq}}}.x+b_{\text{\textit{eq}}}=0$.

QuadraticOptimization[{q, c}, …, {dom1, dom2, …}] takes $x_i$ to be in the domain domi, where domi is Integers or Reals.

QuadraticOptimization[…, "prop"] specifies what solution property "prop" should be returned.

## Details and Options

* Quadratic optimization is also known as quadratic programming (QP), mixed-integer quadratic programming (MIQP) or linearly constrained quadratic optimization.

* Quadratic optimization is typically used in problems such as parameter fitting, portfolio optimization and geometric distance problems.

* Quadratic optimization is a convex optimization problem that can be solved globally and efficiently with real, integer or complex variables.

* Quadratic optimization finds $x$ that solves the primal problem: »

|     |     |
| --- | --- |
| minimize | $\frac{1}{2}x.q.x+c.x$ |
| subject to constraints | $a.x+b\unicode{f435}0,\\ a_{eq}.x+b_{eq}=0$ |
| where | $q\in \mathcal{S}_+^n,c\in \mathbb{R}^n,a\in \mathbb{R}^{m\times n},b\in \mathbb{R}^m,a_{\text{\textit{eq}}}\in \mathbb{R}^{k\times n},b_{\text{\textit{eq}}}\in \mathbb{R}^k$ |

* The space $\mathcal{S}_+^n$ consists of symmetric positive semidefinite matrices.

* Mixed-integer quadratic optimization finds $x\in \mathbb{R}^{n_r}$ and $y\in \mathbb{Z}^{n_i}$ that solve the problem:

|     |     |
| --- | --- |
| minimize | $\frac{1}{2}x.q_{rr}.x+x.q_{ri}.y+\frac{1}{2}x.q_{ii}.y+c_r.x+c_i.y$ |
| subject to constraints | $a_r.x+a_i.y+b\unicode{f435}0,\\ a_{eq_r}x+a_{eq_i}.y+b_{eq}=0$ |
| where | $\left( \begin{array}{cc}  q_{rr} & q_{ri} \\  q_{ri}{}^T & q_{ii} \\ \end{array}  ... es n_r},a_{eq_i}\in \mathbb{R}^{k\times n_i}b_{\text{\textit{eq}}}\in \mathbb{R}^k$ |

* When the objective function is real valued, ``QuadraticOptimization`` solves problems with $x\in \mathbb{C}^n$ by internally converting to real variables $x= x_r+i x_i$, where $x_r\in \mathbb{R}^n$ and $x_i\in \mathbb{R}^n$.

[image]

* The variable specification ``vars`` should be a list with elements giving variables in one of the following forms:

|     |     |
| --- | --- |
| v   | variable with name $v$ and dimensions inferred |
| v∈Reals | real scalar variable |
| v∈Integers | integer scalar variable |
| v∈Complexes | complex scalar variable |
| v∈ℛ | vector variable restricted to the geometric region $\mathcal{R}$ |
| v∈Vectors[n, dom] | vector variable in $\mathbb{R}^n$ or $\mathbb{Z}^n$ |
| v∈Matrices[{m, n}, dom] | matrix variable in $\mathbb{R}^{m n}$ or $\mathbb{Z}^{m n}$ |

* The constraints ``cons`` can be specified by:

|                    |                                                           |                                 |
| ------------------ | --------------------------------------------------------- | ------------------------------- |
| LessEqual          | $a.x+b\leq 0$               | scalar inequality               |
| GreaterEqual       | $a.x+b\geq 0$               | scalar inequality               |
| VectorLessEqual    | $a.x+b\unicode{f437}0$      | vector inequality               |
| VectorGreaterEqual | $a.x+b\unicode{f435}0$      | vector inequality               |
| Equal              | $a.x+b=0$                   | scalar or vector equality       |
| Element            | $x \in \text{\textit{dom}}$ | convex domain or region element |

* With ``QuadraticOptimization[f, cons, vars]``, parameter equations of the form ``par == val``, where ``par`` is not in ``vars`` and ``val`` is numerical or an array with numerical values, may be included in the constraints to define parameters used in ``f`` or ``cons``.

* The objective function may be specified in the following ways:

|     |     |
| --- | --- |
| q   | $\frac{1}{2}x.q.x$ |
| {q, c} | $\frac{1}{2}x.q.x+c.x$ |
| $$\left\{\left\{\text{\textit{$q$}}_{\text{\textit{$f$}}}\right\},\text{\textit{$c$}}\right\}$$ | $\frac{1}{2}\left(q_f.x\right).\left(q_f.x\right)+c.x=\frac{1}{2}\left(x.q_f{}^T\right).\left(q_f.x\right)+c.x$ |
| $$\left\{\left\{\text{\textit{$q$}}_{\text{\textit{$f$}}},\text{\textit{$c$}}_{\text{\textit{$f$}}}\right\},\text{\textit{$c$}}\right\}$$ | $\frac{1}{2}\left(c_f+q_f.x\right).\left(c_f+q_f.x\right)+c.x=\frac{1}{2}\left(x.q_f{}^T+c_f\right).\left(q_f.x+c_f\right)+c.x$ |

* In the factored form, $q_f\in \mathbb{R}^{p\times n}$ and $c_f\in \mathbb{R}^p$.

* The primal minimization problem has a related maximization problem that is the Lagrangian dual problem. The dual maximum value is always less than or equal to the primal minimum value, so it provides a lower bound. The dual maximizer provides information about the primal problem, including sensitivity of the minimum value to changes in the constraints.

* The Lagrangian dual problem for quadratic optimization with objective $1/2x.q.x+c.x$ is given by: »

|     |     |
| --- | --- |
| maximize | $-1/2\zeta .q.\zeta -b.\lambda -b_{\text{\textit{eq}}}.\nu$ |
| subject to constraints | $\begin{array}{c}  q.\zeta ==c-a\mathsf{T}.\lambda -a_{\text{\textit{eq}}}\mathsf{T} .\nu , \\  \lambda \succeq 0 \\ \end{array}$ |
| where | $\zeta \in \mathbb{R}^n,b,\lambda \in \mathbb{R}^m,b_{\text{\textit{eq}}},\nu \in  ... n, \\ a\in \mathbb{R}^{m\times n},a_{\text{\textit{eq}}}\in \mathbb{R}^{k\times n}$ |

* With a factored quadratic objective $1/2\left(x.q_f{}^T+c_f\right).\left(q_f.x+c_f\right)+c.x$, the dual problem may also be expressed as:

|     |     |
| --- | --- |
| maximize | $-1/2\zeta _f.\zeta _f+1/2c_f.c_f-b.\lambda -b_{\text{\textit{eq}}}.\nu$ |
| subject to constraints | $\begin{array}{c}  \zeta _f.q_f==c+c_f.q_f-a\mathsf{T}.\lambda -a_{\text{\textit{eq}}}\mathsf{T} .\nu , \\  \lambda \succeq 0 \\ \end{array}$ |
| where | $\zeta _f,c_f\in \mathbb{R}^p,b,\lambda \in \mathbb{R}^m,b_{\text{\textit{eq}}},\n ... {R}^{m\times n},a_{\text{\textit{eq}}}\in \mathbb{R}^{k\times n},c\in \mathbb{R}^n$ |

* The relationship between the factored dual vector $\zeta _f$ and the unfactored dual vector $\zeta$ is $\zeta _f=q_f.\zeta -c_f$.

* For quadratic optimization, strong duality holds if $q$ is positive semidefinite. This means that if there is a solution to the primal minimization problem, then there is a solution to the dual maximization problem, and the dual maximum value is equal to the primal minimum value.

* The possible solution properties ``"prop"`` include:

"PrimalMinimizer"	{\!\(
\*SubsuperscriptBox[\(v\), \(1\), \(\*\)], \[Ellipsis]\)}	a list of variable values that minimizes the objective function
      	"PrimalMinimizerRules"	{Subscript[v, 1]->Subsuperscript[v, 1, \*],\[Ellipsis]}	values for the variables vars={Subscript[v, 1],\[Ellipsis]} that minimizes f
      	"PrimalMinimizerVector"	x^\*	the vector that minimizes 1/2x.q.x+c.x 
      	"PrimalMinimumValue"	1/2x^\*.q.x^\*+c.x^\*	the minimum value f^\* 
      	"DualMaximizer"	{\[Zeta]^\*,\[Lambda]^\*,\[Nu]^\*}	vectors that maximize  -1/2\[Zeta].q.\[Zeta]-b.\[Lambda]-Subscript[b, eq].\[Nu]
      	"DualMaximumValue"	-1/2\[Zeta]^\*.q.\[Zeta]^\*-b.\[Lambda]^\*-Subscript[b, eq].\[Nu]^\*	the dual maximum value
      	"DualityGap"	1/2\[Zeta]^\*.q.\[Zeta]^\*-b.\[Lambda]^\*- Subscript[b, eq].\[Nu]^\*-c.x^\*	the difference between the dual and primal optimal values (0 because of strong duality)
      	"Slack"	{a.x^\*+b,Subscript[a, eq].x^\*+Subscript[b, eq]}	the constraint slack vector
      	"ConstraintSensitivity"	{\[PartialD]f^\*/\[PartialD]b,\[PartialD]f^\*/\[PartialD]Subscript[b, eq]}	sensitivity of f^\* to constraint perturbations
      	"ObjectiveMatrix"	q	the quadratic objective matrix
      	"ObjectiveVector"	c	the linear objective vector
      	"FactoredObjectiveMatrix"	Subscript[q, f]	the matrix in the factored objective form
      	"FactoredObjectiveVector"	Subscript[c, f]	the vector in the factored objective form
      	"LinearInequalityConstraints"	{a,b}	the linear inequality constraint matrix and vector
      	"LinearEqualityConstraints"	{Subscript[a, eq],Subscript[b, eq]}	the linear equality constraint matrix and vector
      	{"Subscript[prop, 1]","Subscript[prop, 2]",\[Ellipsis]}	 	several solution properties

* The dual maximizer component $\zeta$ is a function of $\lambda$ and $\nu$, given by $\zeta (\lambda ,\nu )=q^{-1}\left(c-a\mathsf{T}.\lambda -a_{\text{\textit{eq}}}\mathsf{T} .\nu \right)$.

* The following options may be given:

|                   |                   |                                               |
| ----------------- | ----------------- | --------------------------------------------- |
| MaxIterations     | Automatic         | maximum number of iterations to use           |
| Method            | Automatic         | the method to use                             |
| PerformanceGoal   | \$PerformanceGoal | aspects of performance to try to optimize     |
| Tolerance         | Automatic         | the tolerance to use for internal comparisons |
| WorkingPrecision  | MachinePrecision  | precision to use in internal computations     |

* The option ``Method -> method`` may be used to specify the method to use. Available methods include:

|                           |                                                            |
| ------------------------- | ---------------------------------------------------------- |
| Automatic                 | choose the method automatically                            |
| "COIN"                    | COIN quadratic programming solver                          |
| "SCS"                     | SCS splitting conic solver                                 |
| "OSQP"                    | OSQP operator splitting solver for quadratic problems      |
| "CSDP"                    | CSDP semidefinite optimization solver                      |
| "DSDP"                    | DSDP semidefinite optimization solver                      |
| "PolyhedralApproximation" | objective epigraph is approximated using polyhedra         |
| "MOSEK"                   | commercial MOSEK convex optimization solver                |
| "Gurobi"                  | commercial Gurobi linear and quadratic optimization solver |
| "Xpress"                  | commercial Xpress linear and quadratic optimization solver |

---

## Examples (85)

### Basic Examples (3)

Minimize $2 x^2+20y^2+6x y+5 x$ subject to the constraint $-x+y\geq 2$ :

```wl
In[1]:=
obj = 2x^2 + 20y^2 + 6x y + 5x;
res = QuadraticOptimization[obj, -x + y ≥ 2, {x, y}]

Out[1]= {x -> -1.73214, y -> 0.267857}
```

The optimal point lies in a region defined by the constraints and where $2 x^2+20y^2+6x y+5 x$ is smallest:

```wl
In[2]:= Show[Plot3D[obj, {x, -2, 0}, {y, 0, 1}, ...], Graphics3D[{Blue, PointSize[0.05], Point[{x, y, obj} /. res]}]]

Out[2]= [image]
```

---

Minimize $x^2+y^2$ subject to the equality constraint $x+y=2$ and the inequality constraints $1\leq y\leq 2$ :

```wl
In[1]:= res = QuadraticOptimization[x^2 + y^2, {x + y == 2, 1 ≤ y ≤ 2}, {x, y}]

Out[1]= {x -> 1., y -> 1.}
```

Define objective as $1/2x.q.x+c.x$ and constraints as $a.x+b\unicode{f435}0$ and $a_{\text{\textit{eq}}}.x+b_{\text{\textit{eq}}}\text{==}0$ :

```wl
In[2]:=
{q, c} = {{{2, 0}, {0, 2}}, {0, 0}};
{a, b} = {{{0, 1}, {0, -1}}, {-1, 2}};
{aeq, beq} = {{{1, 1}}, {-2}};
```

Solve using matrix-vector inputs:

```wl
In[3]:= QuadraticOptimization[{q, c}, {a, b}, {aeq, beq}]

Out[3]= {1., 1.}
```

The optimal point lies where a level curve of $x^2+y^2$ is tangent to the equality constraint:

```wl
In[4]:= Show[RegionPlot[Evaluate[x^2 + y^2 ≤ (x^2 + y^2 /. res)], ...], ContourPlot[x + y == 2, {x, 0.5, 1.5}, {y, 0.5, 1.5}, ContourStyle -> Black]]

Out[4]= [image]
```

---

Minimize $2 x^2+20y^2+6x y+5 x$ subject to the constraint $-x+y\geq 2, x\in \mathbb{Z},y\in \mathbb{R}$ :

```wl
In[1]:=
obj = 2x^2 + 20y^2 + 6x y + 5x;
res = QuadraticOptimization[obj, -x + y ≥ 2, {x∈Integers, y∈Reals}]

Out[1]= {x -> -2, y -> 0.3}
```

Use the equivalent matrix-vector representation:

```wl
In[2]:=
{q, c} = {{{4, 6}, {6, 40}}, {5, 0}};
{a, b} = {{{-1, 1}}, {-2}};
QuadraticOptimization[{q, c}, {a, b}, {Integers, Reals}]

Out[2]= {-2, 0.3}
```

### Scope (26)

#### Basic Uses (8)

Minimize $x^2+y^2$ subject to the constraints $x+y\geq 1$ :

```wl
In[1]:= QuadraticOptimization[x^2 + y^2, x + y ≥ 1, {x, y}]

Out[1]= {x -> 0.5, y -> 0.5}
```

Get the minimizing value using solution property ``"PrimalMinimumValue"`` :

```wl
In[2]:= QuadraticOptimization[x^2 + y^2, x + y ≥ 1, {x, y}, "PrimalMinimumValue"]

Out[2]= 0.5
```

---

Minimize $2 x^2+20y^2+6x y+5 x$ subject to the constraint $-x+y\geq  2,y\geq 0$ :

```wl
In[1]:= QuadraticOptimization[2x^2 + 20y^2 + 6x y + 5x, {-x + y ≥ 2, y ≥ 0}, {x, y}]

Out[1]= {x -> -1.73214, y -> 0.267857}
```

Get the minimizing value and minimizing vector using solution property:

```wl
In[2]:= QuadraticOptimization[2x^2 + 20y^2 + 6x y + 5x, {-x + y ≥ 2, y ≥ 0}, {x, y}, {"PrimalMinimumValue", "PrimalMinimizer"}]

Out[2]= {-4.00893, {-1.73214, 0.267857}}
```

---

Minimize $x^2+y^2$ subject to the constraint $x+y\leq  2,1\leq y\leq 2$ :

```wl
In[1]:= QuadraticOptimization[x^2 + y^2, {x + y ≤ 2, 1 ≤ y ≤ 2}, {x, y}]

Out[1]= {x -> -6.525663468674981`*^-9, y -> 1.}
```

Define the objective as $1/2x.q.x+c.x$ and constraints as $a.x+b\unicode{f435}0$ :

```wl
In[2]:=
{q, c} = {{{2, 0}, {0, 2}}, {0, 0}};
{a, b} = {{{-1, -1}, {0, 1}, {0, -1}}, {2, -1, 2}};
```

Solve using matrix-vector inputs:

```wl
In[3]:= QuadraticOptimization[{q, c}, {a, b}]

Out[3]= {-6.525663468674981`*^-9, 1.}
```

---

Minimize $x^2+y^2$ subject to the equality constraint $x+y=2$ and the inequality constraint $1\leq y\leq 2$ :

```wl
In[1]:= QuadraticOptimization[x^2 + y^2, {x + y == 2, 1 ≤ y ≤ 2}, {x, y}]

Out[1]= {x -> 1., y -> 1.}
```

Define objective as $1/2x.q.x+c.x$ and constraints as $a.x+b\unicode{f435}0$ and $a_{\text{\textit{eq}}}.x+b_{\text{\textit{eq}}}\text{==}0$ :

```wl
In[2]:=
{q, c} = {{{2, 0}, {0, 2}}, {0, 0}};
{a, b} = {{{0, 1}, {0, -1}}, {-1, 2}};
{Subscript[a, eq], Subscript[b, eq]} = {{{1, 1}}, {-2}};
```

Solve using matrix-vector inputs:

```wl
In[3]:= QuadraticOptimization[{q, c}, {a, b}, {Subscript[a, eq], Subscript[b, eq]}]

Out[3]= {1., 1.}
```

---

Minimize $2x^2+5y^2+4x y$ subject to the constraints $x+y\geq 1, x-2y\leq 0$ :

```wl
In[1]:= QuadraticOptimization[2x^2 + 5y^2 + 4x y, {x + y ≥ 1, x - 2y ≤ 0}, {x, y}]

Out[1]= {x -> 0.666667, y -> 0.333333}
```

Define the objective as $1/2\left(q_f.x\right).\left(q_f.x\right)+c.x$ and constraints as $a.x+b\unicode{f435}0$ :

```wl
In[2]:=
{qf, c} = {Sqrt[2] * {{1, 0}, {0, 1}, {1, 2}}, {0, 0}};
{a, b} = {{{1, 1}, {-1, 2}}, {-1, 0}};
```

Solve using matrix-vector inputs:

```wl
In[3]:= QuadraticOptimization[{{qf}, c}, {a, b}]

Out[3]= {0.666667, 0.333333}
```

---

Minimize $2x^2+5y^2+4x y$ subject to the constraints $x+y\geq 1, x-2y\leq 0,0\leq x\leq 1,-1\leq y\leq 1$ :

```wl
In[1]:= QuadraticOptimization[2x^2 + 5y^2 + 4x y, {x + y ≥ 1, x - 2y ≤ 0, 0 ≤ x ≤ 1, -1 ≤ y ≤ 1}, {x, y}]

Out[1]= {x -> 0.666667, y -> 0.333333}
```

Specify constraints $0\leq x\leq 1,-1\leq y\leq 1$ using ``VectorGreaterEqual`` (\[VectorGreaterEqual]) and ``VectorLessEqual`` (\[VectorLessEqual]):

```wl
In[2]:= QuadraticOptimization[2x^2 + 5y^2 + 4x y, {x + y ≥ 1, x - 2y ≤ 0, {0, -1}\[VectorLessEqual]{x, y}\[VectorLessEqual]{1, 1}}, {x, y}]

Out[2]= {x -> 0.666667, y -> 0.333333}
```

---

Minimize $x.q.x$ subject to $a.x+b\unicode{f435}0$. Use a vector variable and constant parameter equations:

```wl
In[1]:= peqns = {q == {{2, 2}, {2, 5}}, a == {{1, 1}, {-1, 2}, {1, 0}, {0, 1}}, b == {-1, 0, 0, 0}};

In[2]:= QuadraticOptimization[x.q.x, {peqns, a.x + b\[VectorGreaterEqual]0}, x]

Out[2]= {x -> {0.666667, 0.333333}}
```

---

Minimize $x.q.x$ subject to $a.x+b\unicode{f435}0,x\unicode{f435}0$. Use ``NonNegativeReals`` to specify the constraint $x\unicode{f435}0$ :

```wl
In[1]:= peqns = {q == {{2, 2}, {2, 5}}, a == {{1, 1}, {-1, 2}}, b == {-1, 0}};

In[2]:= QuadraticOptimization[x.q.x, {peqns, a.x + b\[VectorGreaterEqual]0}, x∈Vectors[2, NonNegativeReals]]

Out[2]= {x -> {0.666667, 0.333333}}
```

#### Integer Variables (4)

Specify integer domain constraints using ``Integers``:

```wl
In[1]:= QuadraticOptimization[2x^2 + 5y^2 + 4x y, {x + y ≥ 1, x - 2y ≤ 0, 0 ≤ x ≤ 1, -1 ≤ y ≤ 1}, {x∈Integers, y∈Integers}]

Out[1]= {x -> 0, y -> 1}
```

---

Specify integer domain constraints on vector variables using ``Vectors[n, Integers]`` :

```wl
In[1]:= peqns = {q == {{2, 2}, {2, 5}}, a == {{1, 1}, {-1, 2}}, b == {-1, 0}};

In[2]:= QuadraticOptimization[x.q.x, {peqns, a.x + b\[VectorGreaterEqual]0}, x∈Vectors[2, Integers]]

Out[2]= {x -> {0, 1}}
```

---

Specify non-negative integer domain constraints using ``NonNegativeIntegers`` (NonNegativeIntegers):

```wl
In[1]:= peqns = {q == {{2, 2}, {2, 5}}, a == {{1, 1}, {-1, 2}}, b == {-1, 0}};

In[2]:= QuadraticOptimization[x.q.x, {peqns, a.x + b\[VectorGreaterEqual]0}, x∈Vectors[2, NonNegativeIntegers]]

Out[2]= {x -> {0, 1}}
```

---

Specify non-positive integer constraints using ``NonPositiveIntegers`` (NonPositiveIntegers):

```wl
In[1]:= peqns = {q == {{2, 4}, {4, 8}}, a == {{1, 1}, {-1, 2}}, b == {5, 0}};

In[2]:= QuadraticOptimization[x.q.x, {peqns, a.x + b\[VectorGreaterEqual]0}, x∈Vectors[2, NonPositiveIntegers]]

Out[2]= {x -> {0, 0}}
```

#### Complex Variables (3)

Specify complex variables using ``Complexes``:

```wl
In[1]:= QuadraticOptimization[Inactive[Norm][{x, y}] ^ 2, {x\[VectorGreaterEqual]I, y\[VectorGreaterEqual]1}, {x∈Complexes, y∈Complexes}]

Out[1]= {x -> 0.0000760359  + 1. I, y -> 1.  + 0.0000760359 I}
```

---

Use a Hermitian matrix $q$ in the objective $(1/2)x.q.x + c.x$ with real-valued variables:

```wl
In[1]:=
{q, c} = {{{2, 1 + I}, {1 - I, 1}}, {2, 3}};
QuadraticOptimization[(1 / 2) * x.q.x + c.x, {}, x∈Vectors[2, Reals]]

Out[1]= {x -> {1., -4.}}
```

---

Use a Hermitian matrix $q$ in the objective $(1/2)\underline{\text{Inactive}}\left[\underline{\text{Dot}}\right]\left[\underline{\text{Conjugate}}[x],q,x\right]$ and complex variables:

```wl
In[1]:=
q = {{1, I}, {-I, 2}};
QuadraticOptimization[(1 / 2) * Inactive[Dot][Conjugate[x], q, x] , {{1, I}.x\[VectorGreaterEqual]1 + I}, x∈Vectors[2, Complexes]]

Out[1]= {x -> {1.  + 1. I, -2.51600326112502`*^-15 + 2.5160024471798256`*^-15 I}}
```

#### Primal Model Properties (4)

Minimize $(x-1)^2+(y-5/2)^2-(3/2)^2$ subject to the constraint $x+2y\leq 4$ :

```wl
In[1]:= QuadraticOptimization[(x - 1)^2 + (y - 5 / 2)^2 - (3 / 2)^2, x + 2y ≤ 4, {x, y}]

Out[1]= {x -> 0.6, y -> 1.7}
```

Get vector output using ``"PrimalMinimizer"`` :

```wl
In[2]:= QuadraticOptimization[(x - 1)^2 + (y - 5 / 2)^2 - (3 / 2)^2, x + 2y ≤ 4, {x, y}, "PrimalMinimizer"]

Out[2]= {0.6, 1.7}
```

Get the rule-based result using ``"PrimalMinimizerRules"`` :

```wl
In[3]:= QuadraticOptimization[(x - 1)^2 + (y - 5 / 2)^2 - (3 / 2)^2, x + 2y ≤ 4, {x, y}, "PrimalMinimizerRules"]

Out[3]= {x -> 0.6, y -> 1.7}
```

---

Get the minimizing value of the optimization using ``"PrimalMinimumValue"`` :

```wl
In[1]:= QuadraticOptimization[(x - 1)^2 + (y - 5 / 2)^2 - (3 / 2)^2, x + 2y ≤ 4, {x, y}, "PrimalMinimumValue"]

Out[1]= -1.45
```

---

Obtain the primal minimum value using symbolic inputs:

```wl
In[1]:=
obj = (x - 1)^2 + (y - 5 / 2)^2 - (3 / 2)^2;
cons = x + 2y ≤ 4;

In[2]:= Subscript[p, sym] = QuadraticOptimization[obj, cons, {x, y}, "PrimalMinimumValue"]

Out[2]= -1.45
```

Extract the matrix-vector inputs of the optimization problem:

```wl
In[3]:= {q, c, {a, b}} = QuadraticOptimization[obj, cons, {x, y}, {"ObjectiveMatrix", "ObjectiveVector", "LinearInequalityConstraints"}];
```

Solve using the matrix-vector form:

```wl
In[4]:= Subscript[p, m] = QuadraticOptimization[{q, c}, {a, b}, "PrimalMinimumValue"]

Out[4]= -6.45
```

Recover the symbolic primal value by adding the objective constant:

```wl
In[5]:= const = QuadraticOptimization[obj, cons, {x, y}, "ObjectiveConstant"]

Out[5]= 5

In[6]:= Subscript[p, sym] == Subscript[p, m] + const

Out[6]= True
```

---

Find the slack for inequalities and equalities associated with a minimization problem:

```wl
In[1]:=
obj = 2x^2 + 20y^2 + 6x y + 5x;
cons = {-x + y ≥ 2, x + y == 1, y ≥ 1};

In[2]:= {Subscript[s, ineq], Subscript[s, eq]} = QuadraticOptimization[obj, cons, {x, y}, "Slack"]

Out[2]= {{1.1285994361287521`*^-10, 0.5}, {2.220446049250313`*^-16}}
```

Get the minimum value and the constraint matrices:

```wl
In[3]:= {x^ * , {a, b}, {Subscript[a, eq], Subscript[b, eq]}} = QuadraticOptimization[obj, cons, {x, y}, {...}];
```

The slack for inequality constraints $a.x^*+b\unicode{f435}0$ is a vector $s_{\text{ineq}}$ such that $a.x^*+b-s_{\text{ineq}}==0$ :

```wl
In[4]:= Norm[a.x^ *  + b - Subscript[s, ineq]] == 0

Out[4]= True
```

The slack for the equality constraints $a.x^*+b==0$ is a vector $s_{\text{\textit{eq}}}$ such that $a.x^*+b-s_{\text{\textit{eq}}}==0$ :

```wl
In[5]:= Norm[Subscript[a, eq].x^ *  + Subscript[b, eq] - Subscript[s, eq]] == 0

Out[5]= True
```

The equality slack $s_{\text{eq}}$ is typically a zero vector:

```wl
In[6]:= Subscript[s, eq]//Chop

Out[6]= {0}
```

#### Dual Model Properties (3)

Minimize $1/2x.q.x+c.x$ subject to $a.x+b\unicode{f435}0$ and $a_{\text{\textit{eq}}}.x+b_{\text{\textit{eq}}}==0$ :

```wl
In[1]:=
{q, c} = {{{2, 0}, {0, 2}}, {1, 1}};
{a, b} = {{{-1, 1}, {0, 1}}, {-2, -1}};
{Subscript[a, eq], Subscript[b, eq]} = {{{1, 1}}, {-1}};
xv = Table[Indexed[x, i], {i, 2}];

In[2]:= psol = QuadraticOptimization[(xv.q.xv) / 2 + c.xv, {a.xv + b\[VectorGreaterEqual]0, Subscript[a, eq].xv + Subscript[b, eq] == 0}, xv]

Out[2]= {Indexed[x, {1}] -> -0.5, Indexed[x, {2}] -> 1.5}
```

The dual problem is to maximize $-1/2\zeta .q.\zeta -b.\lambda -b_{\text{\textit{eq}}}.\nu$ subject to $q.\zeta ==c-a^T.\lambda -a_{\text{\textit{eq}}}{}^T .\nu ,\lambda \unicode{f435}0$ :

```wl
In[3]:= {ζv, λv, νv} = {Table[Indexed[ζ, i], {i, 2}], Table[Indexed[λ, i], {i, 2}], Table[Indexed[ν, i], {i, 1}]};

In[4]:= dsol = QuadraticOptimization[(ζv.q.ζv) / 2 + b.λv + Subscript[b, eq].νv, {q.ζv + a\[Transpose].λv + Subscript[a, eq]\[Transpose].νv == c, λv\[VectorGreaterEqual]0}, Join[ζv, λv, νv]]

Out[4]= {Indexed[ζ, {1}] -> 0.5, Indexed[ζ, {2}] -> -1.5, Indexed[λ, {1}] -> 2., Indexed[λ, {2}] -> 5.7994978724919725`*^-9, Indexed[ν, {1}] -> 2.}
```

The primal minimum value and the dual maximum value coincide because of strong duality:

```wl
In[5]:= {(xv.q.xv) / 2 + c.xv  /. psol, -((ζv.q.ζv) / 2 + b.λv + Subscript[b, eq].νv) /. dsol}

Out[5]= {3.5, 3.5}
```

So it has a duality gap of zero. In general, $1/2x.q.x+c.x\leq -1/2\zeta .q.\zeta -b.\lambda -b_{\text{\textit{eq}}}.\nu$ at optimal points:

```wl
In[6]:= QuadraticOptimization[(xv.q.xv) / 2 + c.xv, {a.xv + b\[VectorGreaterEqual]0, Subscript[a, eq].xv + Subscript[b, eq] == 0}, xv, "DualityGap"]

Out[6]= 4.164792954952645`*^-9
```

---

Construct the dual problem using constraint matrices extracted from the primal problem:

```wl
In[1]:=
obj = x ^ 2 + y ^ 2 + x + y;
constraints = {-x + y ≥ 2, y ≥ 1, x + y == 1};

In[2]:= QuadraticOptimization[obj, constraints, {x, y}]

Out[2]= {x -> -0.5, y -> 1.5}
```

Extract the objective and constraint matrices and vectors:

```wl
In[3]:=
{q, c, {a, b}, {Subscript[a, eq], Subscript[b, eq]}} = QuadraticOptimization[obj, constraints, {x, y}, 
	 {"ObjectiveMatrix", "ObjectiveVector", "LinearInequalityConstraints", "LinearEqualityConstraints"}];
```

The dual problem is to maximize $-1/2\zeta .q.\zeta -b.\lambda -b_{\text{\textit{eq}}}.\nu$ subject to $q.\zeta ==c-a^T.\lambda -a_{\text{\textit{eq}}}{}^T .\nu ,\lambda \unicode{f435}0$ :

```wl
In[4]:= QuadraticOptimization[((ζ.q.ζ) / 2 + b.λ + Subscript[b, eq].ν), {q.ζ + a\[Transpose].λ + Subscript[a, eq]\[Transpose].ν == c, λ\[VectorGreaterEqual]0}, {ζ, λ, ν}]

Out[4]= {ζ -> {0.5, -1.5}, λ -> {2., 5.7994978724919725`*^-9}, ν -> {2.}}
```

---

Get the dual maximum value directly using solution properties:

```wl
In[1]:= QuadraticOptimization[x ^ 2 + y ^ 2 + x + y, {-x + y ≥ 2, y ≥ 1, x + y == 1}, {x, y}, "DualMaximumValue"]

Out[1]= 3.5
```

Get the dual maximizer directly using solution properties:

```wl
In[2]:= QuadraticOptimization[x ^ 2 + y ^ 2 + x + y, {-x + y ≥ 2, y ≥ 1, x + y == 1}, {x, y}, "DualMaximizer"]

Out[2]= {{0.5, -1.5}, {2., 4.16679224457539`*^-9}, {2.}}
```

#### Sensitivity Properties (4)

Use ``"ConstraintSensitivity"`` to find the change in optimal value due to constraint relaxations:

```wl
In[1]:=
{q, c} = {{{2, 0}, {0, 2}}, {0, 0}};
{a, b} = {{{-1, -1}, {-1, 2}}, {-1, 0}};
p^ *  = QuadraticOptimization[{q, c}, {a, b}, "PrimalMinimumValue"];
```

The first vector is inequality sensitivity and the second is equality sensitivity:

```wl
In[2]:= {Subscript[s, λ], Subscript[s, ν]} = QuadraticOptimization[{q, c}, {a, b}, "ConstraintSensitivity"]

Out[2]= {{-1.11111, -0.222222}, {}}
```

Consider new constraints $a.x+b+\text{$\delta $b}\unicode{f435}0$ where $\text{$\delta $b}$ is the relaxation:

```wl
In[3]:= δb = {0.01, 0.01};
```

The approximate new optimal value is given by:

```wl
In[4]:= p^ *  + δb.Subscript[s, λ]

Out[4]= 0.542222
```

Compare to directly solving the relaxed problem:

```wl
In[5]:= QuadraticOptimization[{q, c}, {a, b + δb}, "PrimalMinimumValue"]

Out[5]= 0.542322
```

---

Each sensitivity is associated with an inequality or equality constraint:

```wl
In[1]:= {Subscript[s, λ], Subscript[s, ν]} = QuadraticOptimization[x^2 + y^2 + z^2 - 2x - 6y + z, {x + 2y ≥ 4, x ≤ 3, x + y + z == 1, x - y == 2}, {x, y, z}, "ConstraintSensitivity"]

Out[1]= {{0., -2.}, {3.66667, -5.}}
```

Extract the constraints:

```wl
In[2]:= {{a, b}, {Subscript[a, eq], Subscript[b, eq]}} = QuadraticOptimization[x^2 + y^2 + z^2 - 2x - 6y + z, {x + 2y ≥ 4, x ≤ 3, x + y + z == 1, x - y == 2}, {x, y, z}, {"LinearInequalityConstraints", "LinearEqualityConstraints"}];
```

The inequality constraints and their associated sensitivity:

```wl
In[3]:= Table[{a[[i]].{x, y, z} + b[[i]] ≥ 0, Subscript[s, λ][[i]]}, {i, 2}]

Out[3]= {{3.  - 1. x ≥ 0, 0.}, {-4. + 1. x + 2. y ≥ 0, -2.}}
```

The equality constraints and their associated sensitivity:

```wl
In[4]:= Table[{Subscript[a, eq][[i]].{x, y, z} + Subscript[b, eq][[i]] == 0, Subscript[s, ν][[i]]}, {i, 2}]

Out[4]= {{-1. + 1. x + 1. y + 1. z == 0, 3.66667}, {-2. + 1. x - 1. y == 0, -5.}}
```

---

The change in optimal value due to constraint relaxation is proportional to the sensitivity value:

```wl
In[1]:=
q = {{2, 0, 0}, {0, 2, 0}, {0, 0, 2}};
c = {-2, -6, 1};
{a, b} = {{{1, 2, 0}, {-1, 0, 0}}, {-4, 3}};
{Subscript[a, eq], Subscript[b, eq]} = {{{1, 1, 1}, {1, -1, 0}}, {-1, -2}};
```

Compute the minimal value and constraint sensitivity:

```wl
In[2]:= {p^ * , {Subscript[s, λ], Subscript[s, ν]}} = QuadraticOptimization[{q, c}, {a, b}, {Subscript[a, eq], Subscript[b, eq]}, {"PrimalMinimumValue", "ConstraintSensitivity"}]

Out[2]= {1.33333, {{-2., 0.}, {3.66667, -5.}}}
```

A zero sensitivity will not change the optimal value if the constraint is relaxed:

```wl
In[3]:= Subscript[s, λ][[2]]

Out[3]= 0.

In[4]:= Chop[p^ *  - QuadraticOptimization[{q, c}, {a, b + {0, 0.01}}, {Subscript[a, eq], Subscript[b, eq]}, "PrimalMinimumValue"]] == 0

Out[4]= True
```

A negative sensitivity will decrease the optimal value:

```wl
In[5]:= Subscript[s, λ][[1]]

Out[5]= -2.

In[6]:= p^ *  > QuadraticOptimization[{q, c}, {a, b + {0.1, 0}}, {Subscript[a, eq], Subscript[b, eq]}, "PrimalMinimumValue"]

Out[6]= True
```

A positive sensitivity will increase the optimal value:

```wl
In[7]:= Subscript[s, ν][[1]]

Out[7]= 3.66667

In[8]:= p^ *  < QuadraticOptimization[{q, c}, {a, b}, {Subscript[a, eq], Subscript[b, eq] + {0.1, 0}}, "PrimalMinimumValue"]

Out[8]= True
```

---

The ``"ConstraintSensitivity"`` is related to the dual maximizer of the problem:

```wl
In[1]:= {{Subscript[s, λ], Subscript[s, ν]}, {ζ, λ^ * , ν^ * }} = QuadraticOptimization[x^2 + y^2 + z^2 - 2x - 6y + z, {x + 2y ≥ 4, x ≤ 3, x + y + z == 1, x - y == 2}, {x, y, z}, {"ConstraintSensitivity", "DualMaximizer"}]

Out[1]= {{{-2., 0.}, {-5., 3.66667}}, {{-2.66667, -0.666667, 2.33333}, {2., 0.}, {5., -3.66667}}}
```

The inequality sensitivity $s_{\lambda }=\frac{\partial p^*}{\partial b}$ satisfies $s_{\lambda }=-\lambda ^*$, where $\lambda ^*$ is the dual inequality maximizer:

```wl
In[2]:= Subscript[s, λ] == -λ^ *

Out[2]= True
```

The equality sensitivity $s_{\nu }=\frac{\partial p^*}{\partial b_{\text{\textit{eq}}}}$ satisfies $s_{\nu }=-\nu ^*$, where $\nu ^*$ is the dual equality maximizer:

```wl
In[3]:= Subscript[s, ν] == -ν^ *

Out[3]= True
```

### Options (12)

#### Method (5)

The method ``"COIN"`` uses the COIN library:

```wl
In[1]:=
obj = 2x^2 + 20y^2 + 6x y + 5x;
cons = {-x + y ≥ 2, y ≥ 1};

In[2]:= QuadraticOptimization[obj, cons, {x, y}, Method -> "COIN"]

Out[2]= {x -> -2.75, y -> 1.}
```

``"CSDP"`` and ``"DSDP"`` reduce to semidefinite optimization:

```wl
In[3]:= QuadraticOptimization[obj, cons, {x, y}, Method -> "CSDP"]

Out[3]= {x -> -2.75, y -> 1.}

In[4]:= QuadraticOptimization[obj, cons, {x, y}, Method -> "DSDP"]

Out[4]= {x -> -2.75, y -> 1.}
```

``"SCS"`` reduces to conic optimization:

```wl
In[5]:= QuadraticOptimization[obj, cons, {x, y}, Method -> "SCS"]

Out[5]= {x -> -2.74925, y -> 1.00002}
```

``"PolyhedralApproximation"`` approximates the objective using linear constraints:

```wl
In[6]:= QuadraticOptimization[obj, cons, {x, y}, Method -> "PolyhedralApproximation"]

Out[6]= {x -> -2.75001, y -> 1.}
```

---

For least-squares-type quadratic problems, ``"CSDP"`` and ``"DSDP"`` will be slower than ``"COIN"`` or ``"SCS"``:

```wl
In[1]:=
SeedRandom[123];
mat = RandomReal[{-1, 1}, {400, 10}];
rhs = RandomReal[{-1, 1}, 400];
```

Solve the least-squares problem using method ``"COIN"`` :

```wl
In[2]:= AbsoluteTiming[Subscript[x, COIN] = QuadraticOptimization[{{mat, -rhs}, {}}, {}, Method -> "COIN"];]

Out[2]= {0.00137, Null}
```

Solve the problem using method ``"CSDP"`` :

```wl
In[3]:= AbsoluteTiming[Subscript[x, CSDP] = QuadraticOptimization[{{mat, -rhs}, {}}, {}, Method -> "CSDP"];]

Out[3]= {1.11425, Null}

In[4]:= Norm[Subscript[x, COIN] - Subscript[x, CSDP]]

Out[4]= 6.216681323423071`*^-10
```

---

For least-squares-type quadratic problems, ``"CSDP", "DSDP"`` and ``"PolyhedralApproximation"`` will be slower than ``"COIN"`` or ``"SCS"``:

```wl
In[1]:=
SeedRandom[123];
mat = RandomReal[{-1, 1}, {40, 10}];
rhs = RandomReal[{-1, 1}, 40];
Subscript[x, Ref] = LeastSquares[mat, rhs];
```

Solve the least-squares problem using method ``"COIN"`` :

```wl
In[2]:= AbsoluteTiming[Subscript[x, COIN] = QuadraticOptimization[{{mat, -rhs}, {}}, {}, Method -> "COIN"];]

Out[2]= {0.001197, Null}

In[3]:= Norm[Subscript[x, COIN] - Subscript[x, Ref]]

Out[3]= 1.2030472646460224`*^-10
```

Solve the problem using method ``"CSDP"`` :

```wl
In[4]:= AbsoluteTiming[Subscript[x, CSDP] = QuadraticOptimization[{{mat, -rhs}, {}}, {}, Method -> "CSDP"];]

Out[4]= {0.012986, Null}

In[5]:= Norm[Subscript[x, CSDP] - Subscript[x, Ref]]

Out[5]= 1.993434340561508`*^-15
```

Solve the problem using method ``"PolyhedralApproximation"`` :

```wl
In[6]:= AbsoluteTiming[Subscript[x, PA] = QuadraticOptimization[{{mat, -rhs}, {}}, {}, Method -> "PolyhedralApproximation"];]

Out[6]= {0.867309, Null}

In[7]:= Norm[Subscript[x, PA] - Subscript[x, Ref]]

Out[7]= 6.61697961030752`*^-6
```

---

Different methods may give different results for problems with more than one optimal solution:

```wl
In[1]:= QuadraticOptimization[x ^ 2, {0 ≤ x ≤ 1}, {x, y}, Method -> "SCS"]

Out[1]= {x -> 0.000209055, y -> 0.}

In[2]:= QuadraticOptimization[x ^ 2, {0 ≤ x ≤ 1}, {x, y}, Method -> "COIN"]

Out[2]= {x -> 0., y -> 10.}

In[3]:= QuadraticOptimization[x ^ 2, {0 ≤ x ≤ 1}, {x, y}, Method -> "DSDP"]

Out[3]= {x -> 0.0000122361, y -> 0.}
```

---

Minimizing a concave function can be done only using ``Method -> "COIN"`` :

```wl
In[1]:= QuadraticOptimization[-(x ^ 2), 0 ≤ x ≤ 1, x, Method -> "COIN"]

Out[1]= {x -> 1.}
```

Other methods cannot be used because they require the factorization of the objective matrix:

```wl
In[2]:= QuadraticOptimization[-(x ^ 2), 0 ≤ x ≤ 1, x, Method -> "CSDP"]
```

QuadraticOptimization::qspsd: Unable to factorize the quadratic function matrix because it was not a positive semi-definite matrix.

```wl
Out[2]= QuadraticOptimization[-x^2, 0 ≤ x ≤ 1, x, Method -> "CSDP"]
```

#### PerformanceGoal (1)

Get more accurate results at the cost of higher computation time with ``"Quality"`` setting:

```wl
In[1]:=
SeedRandom[123];
mat = RandomReal[{-1, 1}, {400, 10}];
rhs  = mat.RandomReal[{-1, 1}, 10];
{a, b} = {{UnitVector[10, 1]}, {0.1}};

In[2]:= AbsoluteTiming[pQuality = QuadraticOptimization[{{mat, -rhs}, {}}, {a, b}, "PrimalMinimumValue", PerformanceGoal -> "Quality"];]

Out[2]= {1.0192, Null}
```

Use ``"Speed"`` to get results quicker but at the cost of quality:

```wl
In[3]:= AbsoluteTiming[pSpeed = QuadraticOptimization[{{mat, -rhs}, {}}, {a, b}, "PrimalMinimumValue", PerformanceGoal -> "Speed"];]

Out[3]= {0.008782, Null}

In[4]:= {pQuality, pSpeed}

Out[4]= {45.9336, 45.9525}
```

#### Tolerance (2)

A smaller ``Tolerance`` setting gives a more precise result:

```wl
In[1]:=
SeedRandom[123];
mat = RandomReal[{-1, 1}, {800, 10}];
rhs  = mat.RandomReal[{-1, 1}, 10];
Subscript[p, exact] = 0;
```

Find the error between computed and exact minimum value using different ``Tolerance`` settings:

```wl
In[2]:=
err = Table[{tol, Abs[QuadraticOptimization[{{mat, -rhs}, {}}, {}, 
	"PrimalMinimumValue", Tolerance -> tol, Method -> "SCS"] - Subscript[p, exact]]}, {tol, 10. ^ -Range[7]}]

Out[2]= {{0.1, 0.0953189}, {0.01, 0.00813393}, {0.001, 0.000871899}, {0.0001, 0.0000934355}, {0.00001, 8.008435964719053`*^-6}, {1.`*^-6, 8.581802696104121`*^-7}, {1.`*^-7, 9.196221713195629`*^-8}}
```

Visualize the change in minimum value error with respect to tolerance:

```wl
In[3]:= ListLogLogPlot[err, PlotRange -> All, Joined -> True, PlotMarkers -> Automatic]

Out[3]= [image]
```

---

A smaller ``Tolerance`` setting gives a more precise answer, but typically takes longer to compute:

```wl
In[1]:=
SeedRandom[123];
mat = RandomReal[{-1, 1}, {500, 10}];
rhs = mat.RandomReal[{-1, 1}, 10];
```

A smaller tolerance takes longer:

```wl
In[2]:= AbsoluteTiming[ptight = QuadraticOptimization[{{mat, -rhs}, {}}, {}, "PrimalMinimumValue", Method -> "CSDP", Tolerance -> 10 ^ -6];]

Out[2]= {1.23687, Null}

In[3]:= AbsoluteTiming[ploose = QuadraticOptimization[{{mat, -rhs}, {}}, {}, "PrimalMinimumValue", Method -> "CSDP", Tolerance -> 10 ^ -3];]

Out[3]= {0.638126, Null}
```

The tighter tolerance gives a more precise answer:

```wl
In[4]:= {ptight, ploose}

Out[4]= {-3.367365278391886`*^-10, 0.0000270762}
```

#### WorkingPrecision (4)

``MachinePrecision`` is the default for the ``WorkingPrecision`` option in ``QuadraticOptimization`` :

```wl
In[1]:= QuadraticOptimization[x^2 + y^2 - 2x - 3y, {x + 2 y ≤ 3}, {x, y}]

Out[1]= {x -> 0.8, y -> 1.1}

In[2]:= QuadraticOptimization[x^2 + y^2 - 2x - 3y, {x + 2 y ≤ 3}, {x, y}, WorkingPrecision -> MachinePrecision]

Out[2]= {x -> 0.8, y -> 1.1}
```

---

With ``WorkingPrecision -> Automatic``, ``QuadraticOptimization`` infers the precision to use from input:

```wl
In[1]:= QuadraticOptimization[x^2 + y^2 - 2x - 3y, {x + 2 y ≤ 3}, {x, y}, WorkingPrecision -> Automatic]

Out[1]= {x -> (4/5), y -> (11/10)}
```

---

``QuadraticOptimization`` can compute results using arbitrary-precision numbers:

```wl
In[1]:= QuadraticOptimization[x^2 + y^2 - 2x - 3y, {x + 2 y ≤ 3}, {x, y}, WorkingPrecision -> 20]

Out[1]= {x -> 0.80000000000000000000, y -> 1.1000000000000000000}
```

If the specified precision is less than the precision of the input arguments, a message is issued:

```wl
In[2]:= QuadraticOptimization[x^2 + y^2 - 2x - 3y, {x + 2 y ≤ N[3, 20]}, {x, y}, WorkingPrecision -> 30]
```

QuadraticOptimization::precw: The precision of the argument function (20.\`) is less than WorkingPrecision (30.\`).

QuadraticOptimization::precw: The precision of the argument function ({SparseArray[Automatic,{1,2},0,{1,{{0,2},{{1},{2}}},{-1.0000000000000000000,-2.0000000000000000000}}],{3.0000000000000000000}}) is less than WorkingPrecision (30.\`).

```wl
Out[2]= {x -> 0.80000000000000000000, y -> 1.1000000000000000000}
```

---

If a high-precision result cannot be computed, a message is issued and a ``MachinePrecision`` result is returned:

```wl
In[1]:= QuadraticOptimization[(1 + x + 2 y) (1 + x + y), {x + y ≤ -2, x + y ≤ 0}, {x, y}, WorkingPrecision -> Infinity]
```

QuadraticOptimization::maxit: Maximum number of iterations 1000000 reached without convergence.

QuadraticOptimization::qpwpsol: The extended precision result for the quadratic problem could not be computed. QuadraticOptimization will return the result in MachinePrecision.

```wl
Out[1]= {x -> -16., y -> 10.}
```

### Applications (29)

#### Basic Modeling Transformations (7)

Maximize $-\left(x^2+y^2+1\right)$ subject to $x+y\leq 1$. Solve the maximization problem by negating the objective function:

```wl
In[1]:= QuadraticOptimization[(x^2 + y^2 + 1), x + y ≤ 1, {x, y}]

Out[1]= {x -> -6.928677119653587`*^-9, y -> -6.928677119653587`*^-9}
```

Negate the primal minimum value to get the corresponding maximum value:

```wl
In[2]:= -QuadraticOptimization[(x^2 + y^2 + 1), x + y ≤ 1, {x, y}, "PrimalMinimumValue"]

Out[2]= -1.
```

---

Minimize $f(x)=\left\|q_f.x-c_f\right\|$ by converting the objective function into $\left\|q_f.x-c_f\|^2\right.$ :

```wl
In[1]:= {qf, cf} = IconizedObject[«Block»];
```

Construct the objective function by expanding $g(x)=\left\|q_f.x-c_f\|^2\right.=\left(q_f.x-c_f\right){}^T.\left(q_f.x-c_f\right)$ :

```wl
In[2]:=
q = Transpose[qf].qf;
c = -2cf.qf;
```

Since ``QuadraticOptimization`` minimizes $1/2 x.q.x+c.x$, the matrix is multiplied by 2:

```wl
In[3]:= {Subscript[x, min], Subscript[g, min]} = QuadraticOptimization[{2q, c}, {}, {"PrimalMinimizer", "PrimalMinimumValue"}]

Out[3]= {{3.44555, 0.591979, 4.38725, -4.09043, -2.27081}, -129.405}
```

The minimal value of the original function is recovered as $f_{\min }=\surd \left(g_{\min }+c_f.c_f\right)$ :

```wl
In[4]:= {Norm[qf.Subscript[x, min] - cf], Sqrt[Subscript[g, min] + cf.cf]}

Out[4]= {15.9873, 15.9873}
```

``QuadraticOptimization`` directly performs this transformation. Construct the objective function using ``Inactive`` to avoid threading:

```wl
In[5]:= QuadraticOptimization[Norm[Inactive[Plus][qf.x, -cf]], {}, x, {"PrimalMinimizer", "PrimalMinimumValue"}]

Out[5]= {{{3.44555, 0.591979, 4.38725, -4.09043, -2.27081}}, 15.9873}
```

---

Minimize $f(x)=|x+1|$ subject to the constraints $x+y\leq 3,x\geq 1,y\geq 1$. Transform the objective function to $g(x)=(x+1)^2$ and solve the problem:

```wl
In[1]:= QuadraticOptimization[(x + 1) ^ 2, {x + y ≤ 3, x ≥ 1, y ≥ 1}, {x, y}]

Out[1]= {x -> 1., y -> 1.}
```

Recover the original minimum value using transformation $f_{\min }=\surd g_{\min }$ :

```wl
In[2]:= minval = QuadraticOptimization[(x + 1) ^ 2, {x + y ≤ 3, x ≥ 1, y ≥ 1}, {x, y}, "PrimalMinimumValue"]

Out[2]= 4.

In[3]:= Sqrt[minval]

Out[3]= 2.
```

---

Minimize $(x+y)^2$ subject to the constraints $|2x+y|\leq 3,x\geq 1,y\geq 1$. The constraint can be transformed to $-3\leq 2x+y\leq 3,x\geq 1,y\geq 1$ :

```wl
In[1]:= QuadraticOptimization[(x + y) ^ 2, {-3 ≤ 2x + y ≤ 3, x ≥ 1, y ≥ 1}, {x, y}]

Out[1]= {x -> 1., y -> 1.}
```

---

Minimize $(x+y)^2$ subject to the constraints $|2x+y|\geq 1,x\leq 1,y\geq 1$. The constraint $|2x+y|\geq 1$ can be interpreted as $2x+y\geq 1 \| 2x+y\leq -1$. Solve the problem with each constraint :

```wl
In[1]:= res1 = QuadraticOptimization[(x + y) ^ 2, {2x + y ≥ 1, x ≤ 1, y ≥ 1}, {x, y}, {"PrimalMinimumValue", "PrimalMinimizer"}]

Out[1]= {1., {-1.368604634920793`*^-16, 1.}}

In[2]:= res2 = QuadraticOptimization[(x + y) ^ 2, {2x + y ≤ -1, x ≤ 1, y ≥ 1}, {x, y}, {"PrimalMinimumValue", "PrimalMinimizer"}]

Out[2]= {0., {-347.508, 347.508}}
```

The optimal solution is the minimum of the two solutions:

```wl
In[3]:= First@MinimalBy[{res1, res2}, First]

Out[3]= {0., {-347.508, 347.508}}
```

---

Minimize $f(x.q.x+c.x)$ subject to $a.x\preceq b$, where $f$ is a non-decreasing function, by instead minimizing $x.q.x+c.x$. The primal minimizer $x^*$ will remain the same for both problems. Consider minimizing $f(x,y)=e^{(x+y)^2}$ subject to $2x-y\leq 1,x\geq  0,y\geq 0$ :

```wl
In[1]:= f = Function[{s}, Exp[s]];

In[2]:= {minval, Subscript[X, min]} = QuadraticOptimization[(x + y) ^ 2, {2x - y ≥ 1, x ≥ 0, y ≥ 0}, {x, y}, {"PrimalMinimumValue", "PrimalMinimizerRules"}]

Out[2]= {0.25, {x -> 0.5, y -> 1.6285098811527718`*^-9}}
```

The true minimum value can be obtained by applying the function $f$ to the primal minimum value:

```wl
In[3]:= f[minval]

Out[3]= 1.28403
```

---

Minimize $f(z)=\surd z, z=3+x+(x-y)^2, z\geq 0$, subject to $2x+y\geq 1,-2\leq y\leq 2$ :

```wl
In[1]:= f = Function[{s}, Sqrt[s]];

In[2]:= {minval, Subscript[X, min]} = QuadraticOptimization[3 + x + (x - y)^2, {2x + y ≥ 1, -2 ≤ y ≤ 2}, {x, y}, {"PrimalMinimumValue", "PrimalMinimizerRules"}]

Out[2]= {3.30556, {x -> 0.277778, y -> 0.444444}}
```

Since $z\geq 0$, the solution is the true solution only if the primal minimum value is greater than 0. The true minimum value can be obtained by applying the function $f$ to the primal minimum value:

```wl
In[3]:= f[minval]

Out[3]= 1.81812
```

#### Data-Fitting Problems (7)

Find a linear fit to discrete data by minimizing $\left\|q_f.x-c_f\|^2\right.$ :

```wl
In[1]:=
data = IconizedObject[«Block»];
dataplot = ListPlot[data, PlotRange -> All]

Out[1]= [image]
```

Construct the factored quadratic matrix using ``DesignMatrix`` :

```wl
In[2]:=
qf = DesignMatrix[data, {1, x}, x];
cf = -data[[All, 2]];
```

Find the coefficients of the line:

```wl
In[3]:= res = QuadraticOptimization[{{qf, cf}, 0}, {}]

Out[3]= {-0.0200885, 1.03635}
```

Compare fit with data:

```wl
In[4]:= Show[dataplot, Plot[Evaluate[res.{1, x}], {x, 0, 1}]]

Out[4]= [image]
```

---

Find a quadratic fit to discrete data by minimizing $\left\|q_f.x-c_f\|^2\right.$ :

```wl
In[1]:=
data = IconizedObject[«Block»];
dataplot = ListPlot[data, PlotRange -> All]

Out[1]= [image]
```

Construct the factored quadratic matrix using ``DesignMatrix`` :

```wl
In[2]:=
qf = DesignMatrix[data, {1, x, x^2}, x];
cf = -data[[All, 2]];
```

Find the coefficients of the quadratic curve:

```wl
In[3]:= res = QuadraticOptimization[{{qf, cf}, 0}, {}]

Out[3]= {0.0860607, 0.423949, 0.600391}
```

Compare fit with data:

```wl
In[4]:= Show[dataplot, Plot[Evaluate[res.{1, x, x^2}], {x, 0, 1}]]

Out[4]= [image]
```

---

Fit a quadratic curve discrete data such that the first and last points of the data lie on the curve:

```wl
In[1]:=
data = IconizedObject[«Block»];
dataplot = ListPlot[data, PlotRange -> All]

Out[1]= [image]
```

Construct the factored quadratic matrix using ``DesignMatrix`` :

```wl
In[2]:=
qf = DesignMatrix[data, {1, x, x^2}, x];
cf = -data[[All, 2]];
```

The two equality constraints are:

```wl
In[3]:=
aeq = qf[[{1, -1}]];
beq = cf[[{1, -1}]];
```

Find the coefficients of the line:

```wl
In[4]:= res = QuadraticOptimization[{{qf, cf}, 0}, {}, {aeq, beq}]

Out[4]= {0.0817806, 0.603236, 0.317454}
```

Compare fit with data:

```wl
In[5]:= Show[Plot[Evaluate[res.{1, x, x^2}], {x, 0, 1}, Epilog -> {Red, PointSize[0.03], Point[data[[{1, -1}]]]}], dataplot]

Out[5]= [image]
```

---

Find an interpolating function to noisy data using bases $\phi _i(x)=\left(1+\left(x-c_i\right){}^2\right){}^{1/2}$ :

```wl
In[1]:=
data = IconizedObject[«Block»];
dataPlot = ListPlot[data, PlotRange -> All]

Out[1]= [image]
```

The interpolating function will be $s(x)=\sum _{i=1}\lambda _i\phi _i(x)$ :

```wl
In[2]:=
bases = Sqrt[1 + (x - #) ^ 2]& /@ Subdivide[-3, 3, 10];
qf = DesignMatrix[data, bases, x, IncludeConstantBasis -> False];
cf = -data[[All, 2]];
```

Find the coefficients of the interpolating function:

```wl
In[3]:= λ = QuadraticOptimization[{{qf, cf}, 0}, {}]

Out[3]= {32.5068, -85.7855, 91.8512, -65.8306, 47.3478, -34.9903, 36.2891, -65.971, 87.9523, -66.6243, 21.4029}
```

Compare the fit to the data:

```wl
In[4]:= Show[dataPlot, Plot[Evaluate[bases.λ], {x, -3, 3}]]

Out[4]= [image]
```

---

Minimize $\left\|q_f.x-c_f\right\|$ subject to the constraints $x_i\geq 1, i=1,2,\ldots ,m$ :

```wl
In[1]:= {qf, cf} = IconizedObject[«Block»];

In[2]:= QuadraticOptimization[Norm[Inactive[Plus][qf.x, -cf]], VectorGreaterEqual[{x, 1}], x]

Out[2]= {x -> {11.1577, 1., 7.11024, 4.79331, 1.}}
```

Compare with the unconstrained minimum of $\left\|q_f.x-c_f\right\|$ :

```wl
In[3]:= LeastSquares[qf, cf]

Out[3]= {9.67646, -0.820078, 8.99846, 7.71208, -10.4558}
```

---

Cardinality constrained least squares: minimize $\|a.x-b\|$ such that $x$ has at most $k$ nonzero elements:

```wl
In[1]:=
m  = 33; n = 7;k = 4;
SeedRandom[n * m * k];
a = RandomReal[{-1, 1}, {m, n}];
b = RandomReal[{-1, 1}, m];
```

Let $z\in \mathbb{Z}^n$ be a decision vector such that if $z_i=1$, then $x_i$ is nonzero. The decision constraints are:

```wl
In[2]:= decisionConstraints = {Total[z] ≤ k, 0\[VectorLessEqual]z\[VectorLessEqual]1, z∈Vectors[n, Integers]};
```

To model constraint $x_i=0$ when $z_i=0$, chose a large constant $M$ such that $-M z_i\leq x_i\leq M z_i$ :

```wl
In[3]:= coeffConstraint = -100 z\[VectorLessEqual]x\[VectorLessEqual]100z;
```

Solve the cardinality constrained least-squares problem:

```wl
In[4]:= QuadraticOptimization[Norm[Inactive[Plus][a.x, -b]], {decisionConstraints, coeffConstraint}, {x, z}]

Out[4]= {x -> {8.25845212235261`*^-8, 0.0453214, -0.315389, 0.263853, 1.263668460400405`*^-7, 0.103531, -1.0009045263920472`*^-7}, z -> {0, 1, 1, 1, 0, 1, 0}}
```

The subset selection can also be done more efficiently with ``Fit`` using $L_1$ regularization. First, find the range of regularization parameters that uses at most $k$ basis functions:

```wl
In[5]:=
nterms = Table[{λ, Length[Fit[{a, b}, 
	FitRegularization -> {"L1", λ}]["NonzeroPositions"]]}, {λ, 0.1, 2, 0.02}];
λk = Mean[MinMax[Cases[nterms, {x_, k} -> x]]]

Out[5]= 0.47
```

Find the nonzero terms in the $L_1$ regularized fit:

```wl
In[6]:= fitTerms = Flatten[Fit[{a, b}, FitRegularization -> {"L1", λk}]["NonzeroPositions"]]

Out[6]= {2, 3, 4, 6}
```

Find the fit with just these basis terms:

```wl
In[7]:= Fit[{a[[All, fitTerms]], b}]

Out[7]= {0.0453215, -0.315389, 0.263853, 0.103531}
```

---

Find the best subset of functions from a candidate set of functions to approximate given data:

```wl
In[1]:=
refFun[x_] := Exp[-(5 * (x - 1 / 2)) ^ 2];
data = Table[{x, refFun[x]}, {x, 0, 1, 0.02}];
```

The approximating function will be $s(x)=\sum _{i=1}\lambda _i\phi _i(x)$ :

```wl
In[2]:=
basis = Join[Cos[Range[10] * Pi * x], Sin[Range[10] * Pi * x]];
qf = DesignMatrix[data, basis, x, IncludeConstantBasis -> False];
cf = data[[All, 2]];
```

A maximum of 5 basis functions is to be used in the final approximation:

```wl
In[3]:= functionConstraint = {Total[z] ≤ 5, 0\[VectorLessEqual]z\[VectorLessEqual]1};
```

The coefficients associated with functions that are not chosen must be zero:

```wl
In[4]:= coeffConstraint = {-10 * z\[VectorLessEqual]x\[VectorLessEqual]z * 10};
```

Find the best subset of functions:

```wl
In[5]:=
res = QuadraticOptimization[Inactive[Norm][Inactive[Plus][qf.x, -cf]] ^ 2, {functionConstraint, coeffConstraint}, {x, 
	z∈Vectors[Last[Dimensions[qf]], Integers]}]

Out[5]= {x -> {-1.8488193989732983`*^-23, 5.78089804933623`*^-11, -2.531818841084454`*^-22, -0.00722855, 1.008698572054536`*^-22, -3.926198119794515`*^-10, -3.3954128395492023`*^-22, 0.0045959, 3.834579847199447`*^-23, 1.3076586696425836`*^-9, 0.641859, 1. ... 4, -0.295197, -2.4737344328962667`*^-22, 0.0661045, 2.0073715565573325`*^-23, -1.0666950340644765`*^-9, -3.842060054460686`*^-23, -1.285614971343243`*^-9, 2.214733479157897`*^-22}, z -> {0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0}}
```

Compare the resulting approximating with the given data:

```wl
In[6]:=
{coeffs, chosen} = {x, z} /. res;
fun = Total[Pick[basis * coeffs, chosen, 1]];
Plot[fun, {x, 0, 1}, Epilog -> Point[data], PlotRange -> All]

Out[6]= [image]
```

#### Classification Problems (2)

Find a plane $a.x+b$ that separates two groups of 3D points $P=\left\{p_1,p_2,\ldots ,p_n\right\}$ and $Q=\left\{q_1,q_2,\ldots ,q_m\right\}$ :

```wl
In[1]:= {set1, set2} = IconizedObject[«Block»];

In[2]:= dataplot = ListPointPlot3D[{set1, set2}, Sequence[...]]

Out[2]= [image]
```

For separation, set 1 must satisfy $a.p_i+b\geq 1$ and set 2 must satisfy $a.q_j+b\leq -1$. Find the hyperplane by minimizing $\left\|a\|^2\right.$ :

```wl
In[3]:= res = QuadraticOptimization[Norm[a]^2, {set1.a + b\[VectorGreaterEqual]1, set2.a + b\[VectorLessEqual]-1}, {a, b}, Method -> "COIN"]

Out[3]= {a -> {-2.42757, -3.14341, -4.27419}, b -> 0.0849696}
```

The distance between the planes $a.x+b=1$ and $a.x+b==-1$ is $2/\|a\|$ :

```wl
In[4]:= (2 / Norm[a]) /. res

Out[4]= 0.342782
```

The plane separating the two groups of points is:

```wl
In[5]:= plane = (a.{x, y, z} + b) /. res

Out[5]= 0.0849696  - 2.42757 x - 3.14341 y - 4.27419 z
```

Plot the plane separating the two datasets:

```wl
In[6]:= Show[dataplot, ContourPlot3D[plane == 0, {x, -1, 1}, {y, -1, 1}, {z, -1, 1}, Sequence[...]]]

Out[6]= [image]
```

---

Find a quadratic polynomial that separates two groups of 3D points $P=\left\{p_1,p_2,\ldots ,p_n\right\}$ and $Q=\left\{q_1,q_2,\ldots ,q_m\right\}$ :

```wl
In[1]:=
{set1, set2} = IconizedObject[«Block»];
dataplot = ListPointPlot3D[{set1, set2}, Sequence[...]]

Out[1]= [image]
```

Construct the quadratic polynomial data matrices for the two sets using ``DesignMatrix`` :

```wl
In[2]:= {data1, data2} = DesignMatrix[#, {x, x^2, y, y^2, x y}, {x, y}]& /@ {set1, set2};
```

For separation, set 1 must satisfy $a.p_i+b\geq 1$ and set 2 must satisfy $a.q_j+b\leq -1$. Find the separating surface by minimizing $\left\|a\|^2\right.$ :

```wl
In[3]:= res = QuadraticOptimization[Norm[a]^2, {data1.a + b\[VectorGreaterEqual]1, data2.a + b\[VectorLessEqual]-1}, {a, b}, Method -> "COIN"]

Out[3]= {a -> {-1.7237947563604376`*^-8, -0.67201, 30.7757, 18.7288, 2.28012, -1.40668}, b -> 2.06385}
```

The polynomial separating the two groups of points is:

```wl
In[4]:= poly = (a.{1, x, x^2, y, y^2, x y} + b) /. res

Out[4]= 2.06385  - 0.67201 x + 30.7757 x^2 + 18.7288 y - 1.40668 x y + 2.28012 y^2
```

Plot the polynomial separating the two datasets:

```wl
In[5]:= Show[dataplot, ContourPlot3D[poly == 0, {x, -1.5, 1.5}, {y, -1.5, 1.5}, {z, -1, 1}, Sequence[...]]]

Out[5]= [image]
```

#### Geometric Problems (3)

Find a point $\left(x_1,x_2,x_3\right)$ closest to the point $p_0=(1/2,-1,1)$ that lies on the planes $x_3=0$ and $x_1-2x_2+3x_3=1$:

```wl
In[1]:= p0 = {1 / 2, -1, 1};
```

Find the point closest to $p_0$ by minimizing $\left\|p_0-x\|^2\right.$. Use ``Inactive`` ``Plus`` when constructing the objective:

```wl
In[2]:= res = QuadraticOptimization[Norm[-p0  + x]^2, {{0, 0, 1}, {1, -2, 3}}.x == {0, 1}, x]

Out[2]= {x -> {0.2, -0.4, 0.}}

In[3]:= Show[ContourPlot3D[{Subscript[x, 3] == 0, Subscript[x, 1] - 2Subscript[x, 2] + 3Subscript[x, 3] == 1}, {Subscript[x, 1], -1, 1}, {Subscript[x, 2], -1, 1}, {Subscript[x, 3], -1, 1}, Sequence[...]], IconizedObject[«Graphics3D»]]

Out[3]= [image]
```

---

Find the distance between two convex polyhedra:

```wl
In[1]:= {region1, region2} = IconizedObject[«Block»];

In[2]:= {dist, minPts} = QuadraticOptimization[Norm[x1 - x2] ^ 2, {x1∈region1, x2∈region2}, {x1, x2}, {"PrimalMinimumValue", "PrimalMinimizerRules"}]

Out[2]= {0.745362, {x1 -> {0.766127, 0.683241, 0.789814}, x2 -> {1.41985, 1.11739, 1.14971}}}
```

Show the nearest points with the line connecting them:

```wl
In[3]:= Show[region1, region2, Graphics3D[{Line[{x1, x2}], Blue, PointSize[0.025], Point[{x1, x2}]} /. minPts]]

Out[3]= [image]
```

---

Find the radius $r$ and center $c_0$ of a minimal enclosing ball that encompasses a given region:

```wl
In[1]:=
region = [image];
pts = MeshCoordinates[region];
```

The original minimization problem is to minimize $r$ subject to $\left\|p_i-c_0\|^2\leq r^2\right., i=1,2,\ldots ,n$. The dual of this problem is to maximize $-\sum _i\left(w_ip_i\right){}^2+\sum _iw_i\sum _{j=1}p_i\left(x_j\right){}^2$ subject to $\sum _{i-1}w_i=1,w_i\geq 0$:

```wl
In[2]:=
obj = -Norm[w.pts]^2 + Total[pts^2, {2}].w;
constraints = {w\[VectorGreaterEqual]0, Total[w] == 1};
```

Solve the dual maximization problem:

```wl
In[3]:=
{weights, minval} = QuadraticOptimization[-obj, constraints, w∈Vectors[Length[pts]], {"PrimalMinimizer", "PrimalMinimumValue"}];
Short[weights, 3]

Out[3]//Short= {{7.412782159076121`*^-11, 4.1512677378373617`*^-10, 3.271975080995626`*^-10, 3.5343603155713387`*^-11, 7.41673719309808`*^-11, «443», 2.201058351638336`*^-11, 1.7421431943211132`*^-11, 1.5924859716062005`*^-11, 0.}}
```

The center of the minimal enclosing ball is $c_0=\sum _{i=1}w_ip_i$ :

```wl
In[4]:= center = weights.pts

Out[4]= {{-0.0047416, 1.0384806387958846`*^-9, 0.31701}}
```

The radius of the minimal enclosing ball is ``Sqrt`` of the maximum value:

```wl
In[5]:= radius = Sqrt[-minval]

Out[5]= 1.20794
```

Visualize the enclosing ball:

```wl
In[6]:= Show[Graphics3D[{Opacity[0.2], Red, Ball[center, radius]}], region]

Out[6]= [image]
```

The minimal enclosing ball can be found efficiently using ``BoundingRegion`` :

```wl
In[7]:= BoundingRegion[pts, "MinBall"]

Out[7]= Ball[{-0.0047416, 4.926614671774132`*^-16, 0.31701}, 1.20794]
```

#### Investment Problems (3)

Find the number of stocks to buy from four stocks, such that a minimum \$1000 dividend is received and risk is minimized. The expected return value and the covariance matrix $R$ associated with the stocks are:

```wl
In[1]:= {returnValue, R} = IconizedObject[«Block»];
```

The unit price for the four stocks is \$1. Each stock can be allocated a maximum of \$2500:

```wl
In[2]:= capitalConstraint = x\[VectorLessEqual]2500;
```

The investment must yield a minimum of \$1000:

```wl
In[3]:= returnConstraint = returnValue.x\[VectorGreaterEqual]1000;
```

A negative stock cannot be bought:

```wl
In[4]:= minStock = x\[VectorGreaterEqual]0;
```

The total amount to spend on each stock is found by minimizing the risk given by $x.R.x$ :

```wl
In[5]:=
res = QuadraticOptimization[x.R.x, {capitalConstraint, 
	returnConstraint, minStock}, x∈Vectors[4, Integers]]

Out[5]= {x -> {2498, 0, 1030, 2402}}
```

The total investment to get a minimum of \$1000 is:

```wl
In[6]:= Total[x /. res]

Out[6]= 5930
```

---

Find the number of stocks to buy from four stocks, with an option to short-sell such that a minimum dividend of \$1000 is received and the overall risk is minimized:

```wl
In[1]:= {returnValue, R} = IconizedObject[«Block»];
```

The capital constraint and return on investment constraints are:

```wl
In[2]:= constraints = {x\[VectorLessEqual]2500, returnValue.x\[VectorGreaterEqual]1000};
```

A short-sale option allows the stock to be sold. The optimal number of stocks to buy/short-sell is found by minimizing the risk given by the objective $x.R.x$ :

```wl
In[3]:= res = QuadraticOptimization[x.R.x, constraints, x∈Vectors[4, Integers]]

Out[3]= {x -> {1743, -1532, 691, 1676}}
```

The second stock can be short-sold. The total investment to get a minimum of \$1000 due to the short-selling is:

```wl
In[4]:= Total[x /. res]

Out[4]= 2578
```

Without short-selling, the initial investment will be significantly greater:

```wl
In[5]:= Total[x /. QuadraticOptimization[x.R.x, {constraints, x\[VectorGreaterEqual]0}, x∈Vectors[4, Integers]]]

Out[5]= 5930
```

---

Find the best combination of six stocks to invest in out of a possible 20 candidate stocks, so as to maximize return while minimizing risk:

```wl
In[1]:= {μ, Σ} = Block[...];
```

Let $w_i$ be the percentage of total investment made in stock $i$. The return is given by $\mu .w$, where $\mu$ is a vector of the expected return value of each individual stock:

```wl
In[2]:= return = μ.w;
```

Let $x\in \mathbb{Z}^{20}$ be a decision vector such that if $x_i=1$, then that stock is bought. Six stocks have to be chosen:

```wl
In[3]:= decisionConstraints = {Total[x] == 6, 0\[VectorLessEqual]x\[VectorLessEqual]1};
```

The percentage of investment $w$ must be greater than 0 and must add to 1:

```wl
In[4]:= weightConstraints = {0\[VectorLessEqual]w\[VectorLessEqual]x, Total[w] == 1};
```

Find the optimal combination of stocks that minimizes the risk given by $w.\Sigma .w$ and maximizes return:

```wl
In[5]:=
res = QuadraticOptimization[w.Σ.w - return, {decisionConstraints, 
	weightConstraints}, {w, x∈Vectors[20, Integers]}, Tolerance -> 10 ^ -12];
```

The optimal combination of stocks is:

```wl
In[6]:= stockPicks = Flatten[Position[x /. res, 1]]

Out[6]= {1, 8, 10, 12, 13, 20}
```

The percentages of investment to put into the respective stocks are:

```wl
In[7]:= (w /. res)[[stockPicks]]

Out[7]= {0.068503, 0.107209, 0.364076, 0.193083, 0.109169, 0.157966}
```

#### Portfolio Optimization (1)

Find the distribution of capital to invest in six stocks to maximize return while minimizing risk:

```wl
In[1]:= {μ, Σ} = Block[...];
```

Let $w_i$ be the percentage of total investment made in stock $i$. The return is given by $\mu .w$, where $\mu$ is a vector of expected return value of each individual stock:

```wl
In[2]:= return = μ.w;
```

The risk is given by $\alpha  (w.\Sigma .w)$, and $\alpha$ is a risk-aversion parameter:

```wl
In[3]:= risk = w.Σ.w;
```

The objective is to maximize return while minimizing risk for a specified risk-aversion parameter $\alpha$ :

```wl
In[4]:= objective[α_ ? NumericQ] := -(return - α risk);
```

The percentage of investment $w$ must be greater than 0 and must add to 1:

```wl
In[5]:= weightConstraints = {w\[VectorGreaterEqual]0, Total[w] == 1};
```

Compute the returns and corresponding risk for a range of risk-aversion parameters:

```wl
In[6]:=
frontier = Table[{risk, return} /. QuadraticOptimization[
	objective[α], weightConstraints, w], {α, Subdivide[0.0001, 3, 100]}];
```

The optimal $w$ over a range of $\alpha$ gives an upper-bound envelope on the tradeoff between return and risk:

```wl
In[7]:=
randomRiskReturns = Table[{risk, return}, {w, Map[...]}];
Show[ListPlot[frontier, ...], ListPlot[randomRiskReturns, Rule[...]]]

Out[7]= [image]
```

Compute the percentage of investment $w$ for a specified number of risk-aversion parameters:

```wl
In[8]:=
riskAversionParameter = N@{1 / 100, 1 / 2, 1, 10, 100, 1000};
res = Table[{α, risk, w} /. QuadraticOptimization[
	objective[α], weightConstraints, w], {α, riskAversionParameter}];
```

Increasing the risk-aversion parameter $\alpha$ leads to stock diversification to reduce the risk:

```wl
In[9]:= Legended[BarChart[res[[All, -1]], ...], SwatchLegend[...]]

Out[9]= [image]
```

Increasing the risk-aversion parameter $\alpha$ leads to a reduced expected return on investment:

```wl
In[10]:= ListLogLogPlot[Transpose[{res[[All, 1]], res[[All, -1]].μ}], ...]

Out[10]= [image]
```

#### Trajectory Optimization Problems (2)

Minimize $\int _0{}^1 u^2(\tau ) \text{d$\tau $}$ subject to $x''=u(t),x(0)=0, x(1)=1,x'(0)=x'(1)=0$. The minimizing function integral can be approximated using the trapezoidal rule. The discretized objective function will be $u.W.u$ :

```wl
In[1]:=
{ti, wts} = Most[IconizedObject[«NIntegrate`TrapezoidalRuleData»]];
W = DiagonalMatrix[SparseArray[wts]];
obj = u.W.u;
```

The constraint $x''=u$ can be discretized using finite differences:

```wl
In[2]:=
mat = IconizedObject[«NDSolve`FiniteDifferenceDerivative[2, ti, DifferenceOrder -> 2]»];
ode = mat.x == u;
```

The constraints $x(0)=0, x(1)=1$ can be represented using the ``Indexed`` function:

```wl
In[3]:= bc1 = {Indexed[x, 1] == 0, Indexed[x, Length[ti]] == 1};
```

The constraints $x'(0)=x'(1)=0$ can be discretized using finite differences, and only the first and last rows are used:

```wl
In[4]:=
mat = IconizedObject[«NDSolve`FiniteDifferenceDerivative[1, ti, DifferenceOrder -> 2]»];
bc2 = mat[[{1, -1}]].x == 0;
```

Solve the discretized trajectory problem:

```wl
In[5]:=
res = QuadraticOptimization[obj, {ode, bc1, bc2}, {u, x}];
Short[res, 6]

Out[5]//Short= {u -> {3.38761, 5.35285, 7.3181, 5.47229, 5.32028, 5.16827, 5.01626, 4.86426, 4.71225, 4.56024, 4.40823, «57», -4.40823, -4.56024, -4.71225, -4.86426, -5.01626, -5.16827, -5.32028, -5.47229, -7.3181, -5.35285, -3.38761}, x -> «1»}
```

Convert the discretized result into an ``InterpolatingFunction`` :

```wl
In[6]:= {xsol, usol} = ListInterpolation[#, ti]& /@ ({x, u} /. res);
```

Compare the result with the analytic solution:

```wl
In[7]:= Row[{Plot[{xsol[t], 3t^2 - 2t^3}, {t, 0, 1}, Sequence[...]], Plot[{usol[t], 6 - 12t}, {t, 0, 1}, Sequence[...]]}]

Out[7]= [image][image]
```

---

Find the shortest path between two points while avoiding obstacles. Specify the obstacle:

```wl
In[1]:= SeedRandom[1234];mesh = ConvexHullMesh[RandomReal[{.2, .8}, {10, 2}]];
```

Extract the half-spaces that form the convex obstacle:

```wl
In[2]:= {a, b}  =   LinearOptimization[0, {}, Element[x, mesh], "LinearInequalityConstraints"];
```

Specify the start and end points of the path:

```wl
In[3]:=
start = {0, 0};
end = {1, 1};
```

The path can be discretized using $n$ points. Let $p_i\in \mathbb{R}^2$ represent the position vector:

```wl
In[4]:=
n = 10;
pvars = Table[Element[p[i], Vectors[2, Reals]], {i, 1, n}];
```

The objective is to minimize $\sum _{i=1}^{n-1} \left\|p_{i+1}-p_i\right\|$. Let $t_i=p_{i+1}-p_i$. The objective is transformed to $\sum _{i=1}^{n-1} t_i{}^2$ :

```wl
In[5]:=
tvars = Table[t[i], {i, n - 1}];
transformationConstraint = Table[t[i] == p[i + 1] - p[i], {i, n - 1}];
objective = Sum[t[i].t[i], {i, n - 1}];
```

Specify the end point constraints:

```wl
In[6]:= endPointConstraints = {p[1] == start, p[n] == end};
```

The distance between any two subsequent points should not be too large:

```wl
In[7]:=
maxDist  = 2 Norm[start - end] / n;
separationConstraints  = Table[-maxDist \[VectorLessEqual]t[i]\[VectorLessEqual]maxDist , 
	{i, n - 1}];
```

A point $p_i$ is outside the object if at least one element of $a_k.p_i+b_k,k=1,2,\ldots ,m$ is less than zero. To enforce this constraint, let $z_k\in \mathbb{Z}^n$ be a decision vector and $z_k(i)$ be the $i$$$^{\text{th}}$$ element of $z$ such that $\sum _{i=1}^n z_k(i)\leq m-1$, then $a.p_i+b\unicode{f437}M z_i,i=1,2,\ldots ,n$ and $M$ is large enough such that $M\geq \text{Max}.\left|a.p_i+b\right|$ :

```wl
In[8]:=
m = 10;
zvars = Table[Element[z[i], Vectors[Length[a], Integers]], {i, n}];
binaryConstraints = Table[0 \[VectorLessEqual]z[i] \[VectorLessEqual]1, {i, n}];
avoidanceConstraints = Table[{Inactive[Plus][a.p[i], b]\[VectorLessEqual] m z[i], Total[z[i]] ≤ Length[a] - 1}, {i, n}];
```

Find the minimum distance path around the obstacle:

```wl
In[9]:= res = QuadraticOptimization[objective, {endPointConstraints, transformationConstraint, separationConstraints, binaryConstraints, avoidanceConstraints}, Join[pvars, tvars, zvars]];
```

Extract and display the path:

```wl
In[10]:=
path = Table[p[i], {i, n}] /. res;
Show[mesh, Graphics[{Point[path], Line[path]}]]

Out[10]= [image]
```

To avoid potential crossings at the edges, the region can be inflated and the problem solved again:

```wl
In[11]:=
newMesh = RegionResize[mesh, {Scaled[1.45], Scaled[1.45]}];
{a, b}  =   LinearOptimization[0, {}, Element[x, newMesh], "LinearInequalityConstraints"];
```

Get the new constraints for avoiding the obstacles:

```wl
In[12]:=
avoidanceConstraints = Table[{Inactive[Plus][a.p[i], b]\[VectorLessEqual] m z[i], 
	Total[z[i]] ≤ Length[a] - 1}, {i, n}];
```

Solve the new problem:

```wl
In[13]:= res = QuadraticOptimization[objective, {endPointConstraints, transformationConstraint, separationConstraints, binaryConstraints, avoidanceConstraints}, Join[pvars, tvars, zvars]];
```

Extract and display the new path:

```wl
In[14]:=
path = Table[p[i], {i, n}] /. res;
Show[ mesh, Graphics[{Point[path], Line[path]}]]

Out[14]= [image]
```

#### Optimal Control Problems (2)

Minimize $\int _0{}^{10}(\pmb{x}(10)-\pmb{x}(t))^T(\pmb{x}(10)-\pmb{x}(t))+u(t)^2\text{dt},$ subject to $\pmb{x}'(t)=A.\pmb{x}(t)+u(t)$ and $\pmb{x}(0)=(1,0), \pmb{x}(10)=(0,1)$ :

```wl
In[1]:=
A = (⁠|     |     |
| --- | --- |
| -1  | 0.5 |
| 0.3 | -1  |⁠);
```

The integral can be discretized using the trapezoidal method:

```wl
In[2]:=
{ti, wts} = 10 * Most[IconizedObject[«NIntegrate`TrapezoidalRuleData»]];
W = DiagonalMatrix[SparseArray[wts]];
```

$\int _0{}^{10}(\pmb{x}(10)-\pmb{x}(t))^T(\pmb{x}(10)-\pmb{x}(t))+u(t)^2\text{dt}=\int _0{}^{10}x_1(t){}^2\text{dt}+\int _0{}^{10}\left(1-x_2(t)\right){}^2\text{dt}+\int
_0{}^{10}u(t)^2\text{dt}$. The objective function is:

```wl
In[3]:=
objective = x1.W.x1 + s.W.s + u.W.u;
auxConstraint = s == (1 - x2);
```

The time derivative in $x'(t)=A.x(t)+u(t)$ is discretized using finite differences:

```wl
In[4]:=
dTMat = IconizedObject[«NDSolve`FiniteDifferenceDerivative[1, ti, DifferenceOrder -> 2]»];
ode1 = dTMat.x1 == A[[1]].{x1, x2} + u;
ode2 = dTMat.x2 == A[[2]].{x1, x2} + u;
```

The end-condition constraints $\pmb{x}(0)=(1,0), \pmb{x}(10)=(0,1)$ can be specified using ``Indexed`` :

```wl
In[5]:=
bc1 = {Indexed[x1, 1] == 1, Indexed[x2, 1] == 0};
bc2 = { Indexed[x1, Length[ti]] == 0, Indexed[x2, Length[ti]] == 1};
```

Solve the discretized problem:

```wl
In[6]:= res = QuadraticOptimization[objective, {ode1, ode2, bc1, bc2, auxConstraint}, {u, x1, x2, s}];
```

Convert the discretized result into ``InterpolatingFunction`` :

```wl
In[7]:= {usol, x1sol, x2sol} = Map[ListInterpolation[#, ti]&, {u, x1, x2} /. res];
```

Plot the control variable:

```wl
In[8]:= Plot[usol[t], {t, 0, 10}]

Out[8]= [image]
```

Plot the state variables:

```wl
In[9]:= Plot[{x1sol[t], x2sol[t]}, {t, 0, 10}]

Out[9]= [image]
```

---

Minimize $\int _0{}^{10}(\pmb{x}(10)-\pmb{x}(t))^T(\pmb{x}(10)-\pmb{x}(t))+u(t)^2\text{dt}$ subject to $\pmb{x}'(t)=A.\pmb{x}(t)+u(t)$ and $\pmb{x}(0)=(1,0), \pmb{x}(10)=(1/2,1)$ and $-10\leq u(t)\leq 5$ on the control variable:

```wl
In[1]:=
A = (⁠|     |     |
| --- | --- |
| -1  | 0.5 |
| 0.3 | -1  |⁠);
```

The integral can be discretized using the trapezoidal method:

```wl
In[2]:=
{ti, wts} = 10 * Most[IconizedObject[«NIntegrate`TrapezoidalRuleData»]];
W = DiagonalMatrix[SparseArray[wts]];
```

$\int _0{}^{10}(\pmb{x}(10)-\pmb{x}(t))^T(\pmb{x}(10)-\pmb{x}(t))+u(t)^2\text{dt}=\int _0{}^{10}\left(1/2-x_1(t)\right){}^2\text{dt}+\int _0{}^{10}\left(1-x_2(t)\right){}^2\text{dt}+\int
_0{}^{10}u(t)^2\text{dt}$. The objective function is:

```wl
In[3]:=
objective = g.W.g + s.W.s + u.W.u;
auxConstraint1 = g == (1 / 2 - x1);
auxConstraint2 = s == (1 - x2);
```

The time derivative in $x'(t)=A.x(t)+u(t)$ is discretized using finite differences:

```wl
In[4]:=
dTMat = IconizedObject[«NDSolve`FiniteDifferenceDerivative[1, ti, DifferenceOrder -> 2]»];
ode1 = dTMat.x1 == A[[1]].{x1, x2} + u;
ode2 = dTMat.x2 == A[[2]].{x1, x2} + u;
```

The end-condition constraints $\pmb{x}(0)=(1,0), \pmb{x}(10)=(1/2,1)$ can be specified using ``Indexed`` :

```wl
In[5]:=
bc1 = {Indexed[x1, 1] == 1, Indexed[x2, 1] == 0};
bc2 = { Indexed[x1, Length[ti]] == 1 / 2, Indexed[x2, Length[ti]] == 1};
```

The constraint on the control variable $u(t)$ is:

```wl
In[6]:= controlConstraint = {u\[VectorGreaterEqual]-10, u\[VectorLessEqual]5};
```

Solve the discretized problem:

```wl
In[7]:= res = QuadraticOptimization[objective, {ode1, ode2, bc1, bc2, auxConstraint1, auxConstraint2, controlConstraint}, {u, x1, x2, s, g}];
```

Convert the discretized result into ``InterpolatingFunction`` :

```wl
In[8]:= {usol, x1sol, x2sol} = Map[ListInterpolation[#, ti]&, {u, x1, x2} /. res];
```

The control variable is now restricted between $-10$ and 5:

```wl
In[9]:= Plot[usol[t], {t, 0, 10}, PlotRange -> All]

Out[9]= [image]
```

Plot the state variables:

```wl
In[10]:= Plot[{x1sol[t], x2sol[t]}, {t, 0, 10}]

Out[10]= [image]
```

#### Sequential Quadratic Optimization (2)

Minimize a nonlinear function $f(x)$ subject to nonlinear constraints $g(x)\geq 0$. The minimization can be done by approximating $f(x)$ as $f(x+d)\approx f(x)+(\nabla f).d+(1/2)d.\left(\nabla ^2f(x)\right).d$ and the constraints as $g(x)$ as $g(x+d)=g(x)+(\nabla g).d\geq  0$. This leads to a quadratic minimization problem that can be solved iteratively. Consider a case where $f=1-\underline{\text{Exp}}\left(\left(\underline{\text{Sin}}(x)^2+\underline{\text{Sin}}(y-1/2)^2\right)^2\right)$ and $g=\{x\leq 0.5, y\geq - 0.5, -x+y\leq  1\}$ :

```wl
In[1]:=
f[{x_, y_}] := 1 - Exp[-(Sin[x] ^ 2 + Sin[y - 1 / 2] ^ 2)];
g[{x_, y_}] := {-x + 0.5, y + 0.5, x - y + 1};
```

The gradient and Hessian of the minimizing function are:

```wl
In[2]:=
df[{x_, y_}] = D[f[{x, y}], {{x, y}}];
hf[{x_, y_}] = D[f[{x, y}], {{x, y}, 2}];
```

The gradient of the constraints is:

```wl
In[3]:= dg[{x_, y_}] = D[g[{x, y}], {{x, y}}];
```

The subproblem is to find $d$ by minimizing $\frac{1}{2}d.\left(\nabla ^2f\left(x^*\right)\right).d+(\nabla f).d$ subject to $g_i\left(x^*\right)+\left(\nabla g_i\left(x^*\right)\right). d\geq 0$ :

```wl
In[4]:=
QuadraticSubProblem[xiter_] := Module[{q, c, a, b, res}, 
	{q, c, a, b} = #[xiter]& /@ {hf, df, dg, g};
	res = Quiet[QuadraticOptimization[{q, c}, {a, b}]];
	If[!VectorQ[res, NumericQ], res = LinearOptimization[c, {a, b}];
	];
	res
	];
```

Iterate starting with an initial guess of $x^*=(-1, -0.4)$. The next iterate is $x_{\text{new}}=x^*+s.d$, where $s\in [0,1]$ is the step length to ensure that the constraints are always satisfied:

```wl
In[5]:=
x^ *  = {-1, -0.4};
xp = Last@Reap[Do[
	Sow[x^ * , "IterationPoints"];
	d = QuadraticSubProblem[x^ * ];
	If[Norm[d] ≤ 10 ^ -8, Break[]];
	s = 1;
	While[Min[g[x^ *  + s * d]] ≤ 0, s /= 2];
	x^ *  += s * d;
	, {20}], "IterationPoints"]

Out[5]= {{{-1, -0.4}, {-0.25, 0.55}, {0.0661341, 0.491938}, {-0.000985969, 0.500073}, {3.2055380372666975`*^-9, 0.5}}}
```

Visualize the convergence of the result. The final result is the green point:

```wl
In[6]:= ContourPlot[f[{x, y}], {x, -1.5, 0.5}, {y, -0.5, 1.5}, Sequence[...]]

Out[6]= [image]
```

---

Minimize $f(x,y)=\surd \left(1+x^2+\underline{\text{Cos}}(y/2)^2\right)$ subject to $x+\underline{\text{Sin}}(y)\geq 1,y\leq 2, x y=2$ :

```wl
In[1]:=
f[{x_, y_}] := Sqrt[(1 + x ^ 2 + Cos[y / 2] ^ 2)];
g[{x_, y_}] := {x + Sin[2y] - 1, -y + 2};
gEq[{x_, y_}] := {x y - 2};
```

The gradient and Hessian of the minimizing function are:

```wl
In[2]:=
df[{x_, y_}] = D[f[{x, y}], {{x, y}}];
hf[{x_, y_}] = D[f[{x, y}], {{x, y}, 2}];
```

The gradient of the constraints is:

```wl
In[3]:=
dg[{x_, y_}] = D[g[{x, y}], {{x, y}}];
dgEq[{x_, y_}] = D[gEq[{x, y}], {{x, y}}];
```

The subproblem is to minimize $\frac{1}{2}d.\left(\nabla ^2f\left(x^*\right)\right).d+(\nabla f).d$, subject to $g_i\left(x^*\right)+\left(\nabla g_i\left(x^*\right)\right). d\geq 0$ and $g^{\text{eq}}{}_i\left(x^*\right)+\left(\nabla g^{\text{\textit{eq}}}{}_i\left(x^*\right)\right). d=0$ :

```wl
In[4]:=
QuadraticSubProblem[xiter_] := Module[{q, c, a, b, aeq, beq, res}, 
	{q, c, a, b, aeq, beq} = #[xiter]& /@ {hf, df, dg, g, dgEq, gEq};
	res = Quiet[QuadraticOptimization[{q, c}, {a, b}, {aeq, beq}]];
	If[!VectorQ[res, NumericQ], res = LinearOptimization[c, {a, b}, {aeq, beq}];
	];
	res
	];
```

Iterate starting with the initial guess $x^*=(0.5,0.5)$ :

```wl
In[5]:=
x^ *  = {0.5, 0.5};
xp = Last@Reap[Do[
	Sow[x^ * , "IterationPoints"];
	d = QuadraticSubProblem[x^ * ];
	If[Norm[d] ≤ 10 ^ -8, Break[]];
	s = 1;
	While[Min[g[x^ *  + s * d]] ≤ 0 && s > 10 ^ -9, s /= 2];
	x^ *  += s * d;
	, {30}], "IterationPoints"]

Out[5]= {{{0.5, 0.5}, {1.5, 1.25}, {1.29339, 1.46384}, {1.23977, 1.56577}, {1.19582, 1.66871}, {1.19747, 1.67019}, {1.19747, 1.67019}}}
```

Visualize the convergence of the result. The final result is the green point:

```wl
In[6]:= ContourPlot[f[{x, y}], {x, 0, 2.5}, {y, 0, 2}, ...]

Out[6]= [image]
```

### Properties & Relations (9)

``QuadraticOptimization`` gives the global minimum of the objective function:

```wl
In[1]:= {Subscript[x, min], minval} = QuadraticOptimization[x^2 + y^2 - 2x - 6y, {x + 2y ≥ 4, x ≤ 3}, {x, y}, {"PrimalMinimizer", "PrimalMinimumValue"}]

Out[1]= {{1., 3.}, -10.}
```

Visualize the objective function:

```wl
In[2]:= Show[Plot3D[x^2 + y^2 - 2x - 6y, ...], Graphics3D[{Blue, PointSize[0.05], Point[Append[Subscript[x, min], minval]]}]]

Out[2]= [image]
```

---

The minimizer can be in the interior or at the boundary of the feasible region:

```wl
In[1]:=
Table[{Subscript[x, min], minval} = QuadraticOptimization[x^2 + y^2 + c.{x, y}, ...];
	Show[...], {c, {{-2, -6}, {-10, -6}, {-10, 6}}}]

Out[1]= [image]
```

---

``Minimize`` gives global exact results for quadratic optimization problems:

```wl
In[1]:= Minimize[{2x^2 + 20y^2 + 6x y + 5x, {-x + y ≥ 2, y ≥ 0}}, {x, y}]

Out[1]= {-(449/112), {x -> -(97/56), y -> (15/56)}}
```

---

``NMinimize`` can be used to obtain approximate results using global methods:

```wl
In[1]:= NMinimize[{2x^2 + 20y^2 + 6x y + 5x, {-x + y ≥ 2, y ≥ 0}}, {x, y}]

Out[1]= {-4.00893, {x -> -1.73214, y -> 0.267857}}
```

---

``FindMinimum`` can be used to obtain approximate results using local methods:

```wl
In[1]:= FindMinimum[{2x^2 + 20y^2 + 6x y + 5x, {-x + y ≥ 2, y ≥ 0}}, {x, y}]

Out[1]= {-4.00893, {x -> -1.73214, y -> 0.267857}}
```

---

``LinearOptimization`` is a special case of ``QuadraticOptimization`` :

```wl
In[1]:= QuadraticOptimization[x + 2y, {3x + 4y ≥ 5 && x ≥ 0 && y ≥ 0}, {x, y}]

Out[1]= {x -> 1.66667, y -> 0.}

In[2]:= LinearOptimization[x + 2y, {3x + 4y ≥ 5 && x ≥ 0 && y ≥ 0}, {x, y}]//N

Out[2]= {x -> 1.66667, y -> 0.}
```

In matrix-vector form, the quadratic term is set to 0:

```wl
In[3]:= QuadraticOptimization[{0, {1, 2}}, {{{3, 4}, {1, 0}, {0, 1}}, {-5, 0, 0}}]

Out[3]= {1.66667, 0.}
```

---

``SecondOrderConeOptimization`` is a generalization of ``QuadraticOptimization`` :

```wl
In[1]:=
obj = 2x^2 + 20y^2 + 6x y + 5x;
cons = {-x + y ≥ 2, y ≥ 0};

In[2]:= QuadraticOptimization[obj, cons, {x, y}]

Out[2]= {x -> -1.73214, y -> 0.267857}
```

Use auxiliary variable $t$ and minimize $t$ with additional constraint $2x^2+20y^2+6x y+5x\leq t$ :

```wl
In[3]:= SecondOrderConeOptimization[t, {obj ≤ t, cons}, {x, y, t}]

Out[3]= {x -> -1.73214, y -> 0.267857, t -> -4.00893}
```

---

``SemidefiniteOptimization`` is a generalization of ``QuadraticOptimization`` :

```wl
In[1]:=
obj = 2x^2 + 20y^2 + 6x y + 5x;
cons = {-x + y ≥ 2, y ≥ 0};

In[2]:= QuadraticOptimization[obj, cons, {x, y}]

Out[2]= {x -> -1.73214, y -> 0.267857}
```

Use auxiliary variable $t$ and minimize $t$ with additional constraint $2x^2+20y^2+6x y+5x\leq t$ :

```wl
In[3]:= SemidefiniteOptimization[t, {obj ≤ t, cons}, {x, y, t}]

Out[3]= {x -> -1.73214, y -> 0.267857, t -> -4.00893}
```

---

``ConicOptimization`` is a generalization of ``QuadraticOptimization`` :

```wl
In[1]:=
obj = 2x^2 + 20y^2 + 6x y + 5x;
cons = {-x + y ≥ 2, y ≥ 0};

In[2]:= QuadraticOptimization[obj, cons, {x, y}]

Out[2]= {x -> -1.73214, y -> 0.267857}
```

Use auxiliary variable $t$ and minimize $t$ with additional constraint $2x^2+20y^2+6x y+5x\leq t$ :

```wl
In[3]:= ConicOptimization[t, {obj ≤ t, cons}, {x, y, t}]

Out[3]= {x -> -1.73214, y -> 0.267857, t -> -4.00893}
```

### Possible Issues (6)

Constraints specified using strict inequalities may not be satisfied for certain methods:

```wl
In[1]:= {obj, cons} = {x ^ 2 + x + y, {x - 2y ≤ 3, x > 1, y > 0}};
```

The reason is that ``QuadraticOptimization`` solves $\inf  \left\{\left.x^2+x+y\right|x-2 y\leq 3,x>1,y>0\right\}$:

```wl
In[2]:= cons /. QuadraticOptimization[obj, cons, {x, y}, Method -> "CSDP"]

Out[2]= {True, False, False}
```

---

The minimum value of an empty set or infeasible problem is defined to be $\infty$:

```wl
In[1]:= QuadraticOptimization[x ^ 2, {x ≤ 0, x ≥ 1}, x, "PrimalMinimumValue"]
```

QuadraticOptimization::nsolc: There are no points that satisfy the constraints.

```wl
Out[1]= ∞
```

The minimizer is ``Indeterminate``:

```wl
In[2]:= QuadraticOptimization[x ^ 2, {x ≤ 0, x ≥ 1}, x]
```

QuadraticOptimization::nsolc: There are no points that satisfy the constraints.

```wl
Out[2]= {x -> Indeterminate}
```

---

The minimum value for an unbounded set or unbounded problem is $-\infty$:

```wl
In[1]:= QuadraticOptimization[x ^ 2 + y, 0 ≤ x ≤ 1, {x, y}, "PrimalMinimumValue", Method -> "CSDP"]
```

QuadraticOptimization::ubnd: The problem is unbounded.

```wl
Out[1]= -∞
```

The minimizer is ``Indeterminate``:

```wl
In[2]:= QuadraticOptimization[x ^ 2 + y, 0 ≤ x ≤ 1, {x, y}, Method -> "CSDP"]
```

QuadraticOptimization::ubnd: The problem is unbounded.

```wl
Out[2]= {x -> Indeterminate, y -> Indeterminate}
```

---

Certain solution properties are not available for symbolic input:

```wl
In[1]:=
obj = (x - 1)^2 + (y - 5 / 2)^2;
cons = x + 2y ≤ 4;

In[2]:= QuadraticOptimization[obj, cons, {x, y}, "FactoredObjectiveMatrix"]

Out[2]= Missing["NotAvailable"]

In[3]:= QuadraticOptimization[obj, cons, {x, y}, "FactoredObjectiveVector"]

Out[3]= Missing["NotAvailable"]
```

---

Dual related solution properties for mixed-integer problems may not be available:

```wl
In[1]:= QuadraticOptimization[2x^2 + 20y^2 + 6x y + 5x, -x + y ≥ 2, {x∈Integers, y∈Reals}, {"DualMaximumValue", "DualMaximizer", "DualityGap"}]

Out[1]= {Missing["NotAvailable"], Missing["NotAvailable"], Missing["NotAvailable"]}
```

---

Constraints with complex values need to be specified using vector inequalities:

```wl
In[1]:= QuadraticOptimization[Inactive[Norm][{x}] ^ 2, {x\[VectorGreaterEqual]I}, {x∈Complexes}]

Out[1]= {x -> 0.0000760359  + 1. I}
```

Just using ``GreaterEqual`` will not work:

```wl
In[2]:= QuadraticOptimization[Inactive[Norm][{x}] ^ 2, {x ≥ I}, {x∈Complexes}]
```

GreaterEqual::nord: Invalid comparison with I attempted.

GreaterEqual::nord: Invalid comparison with I attempted.

```wl
Out[2]= QuadraticOptimization[Inactive[Norm][{x}]^2, {x ≥ I}, {x∈ℂ}]
```

## See Also

* [`LinearOptimization`](https://reference.wolfram.com/language/ref/LinearOptimization.en.md)
* [`SecondOrderConeOptimization`](https://reference.wolfram.com/language/ref/SecondOrderConeOptimization.en.md)
* [`SemidefiniteOptimization`](https://reference.wolfram.com/language/ref/SemidefiniteOptimization.en.md)
* [`FindMinimum`](https://reference.wolfram.com/language/ref/FindMinimum.en.md)
* [`NMinimize`](https://reference.wolfram.com/language/ref/NMinimize.en.md)
* [`Minimize`](https://reference.wolfram.com/language/ref/Minimize.en.md)

## Tech Notes

* [Optimization Method Framework](https://reference.wolfram.com/language/OptimizationMethodFramework/tutorial/OptimizationMethodFramework.en.md)

## Related Guides

* [Convex Optimization](https://reference.wolfram.com/language/guide/ConvexOptimization.en.md)
* [Matrix-Based Minimization](https://reference.wolfram.com/language/guide/MatrixBasedMinimization.en.md)
* [`Optimization`](https://reference.wolfram.com/language/guide/Optimization.en.md)
* [Symbolic Vectors, Matrices and Arrays](https://reference.wolfram.com/language/guide/SymbolicArrays.en.md)

## History

* [Introduced in 2019 (12.0)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn120.en.md) \| [Updated in 2020 (12.1)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn121.en.md) ▪ [2020 (12.2)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn122.en.md)