# LQRegulatorGains

LQRegulatorGains[ssm,{q,r}]

gives the optimal state feedback gain matrix for the StateSpaceModel ssm and the quadratic cost function, with state and control weighting matrices q and r.

LQRegulatorGains[ssm,{q,r,p}]

includes the state-control cross-coupling matrix p in the cost function.

LQRegulatorGains[{ssm,finputs},{…}]

specifies finputs as the feedback inputs of ssm.
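
For instance, a minimal sketch of the three call forms, using hypothetical system and weighting matrices:

```wolfram
(* hypothetical two-state, two-input system *)
ssm = StateSpaceModel[{{{-1, 2}, {0, -3}}, {{1, 0}, {0, 1}}}];
q = IdentityMatrix[2]; r = IdentityMatrix[2];

LQRegulatorGains[ssm, {q, r}]                        (* state and control weights only *)
LQRegulatorGains[ssm, {q, r, {{1/2, 0}, {0, 1/2}}}]  (* with a cross-coupling matrix p *)
LQRegulatorGains[{ssm, {1}}, {q, {{1}}}]             (* feedback through the first input only *)
```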

# Details and Options

• The standard state-space model ssm can be given as StateSpaceModel[{a,b,…}], where a and b represent the state and input matrices in either a continuous-time or a discrete-time system:
•  continuous-time system: x'(t) = a.x(t) + b.u(t)
•  discrete-time system: x(k+1) = a.x(k) + b.u(k)
• The descriptor state-space model ssm can be given as StateSpaceModel[{a,b,c,d,e}] in either continuous time or discrete time:
•  continuous-time system: e.x'(t) = a.x(t) + b.u(t), y(t) = c.x(t) + d.u(t)
•  discrete-time system: e.x(k+1) = a.x(k) + b.u(k), y(k) = c.x(k) + d.u(k)
• LQRegulatorGains also accepts nonlinear systems specified by AffineStateSpaceModel and NonlinearStateSpaceModel.
• For nonlinear systems, the operating values of state and input variables are taken into consideration, and the gains are computed based on the approximate Taylor linearization and returned as a vector.
• The argument finputs is a list of integers specifying the positions of the feedback inputs in the input vector u.
• LQRegulatorGains[ssm,{…}] is equivalent to LQRegulatorGains[{ssm,All},{…}].
• The cost function is:
•  continuous-time system: ∫₀^∞ (x(t)ᵀ.q.x(t) + u(t)ᵀ.r.u(t) + 2 x(t)ᵀ.p.u(t)) dt
•  discrete-time system: ∑_(k=0)^∞ (x(k)ᵀ.q.x(k) + u(k)ᵀ.r.u(k) + 2 x(k)ᵀ.p.u(k))
• In LQRegulatorGains[ssm,{q,r}], the cross-coupling matrix p is assumed to be zero.
• The optimal control is given by u(t) = -κ.x(t), where κ is the computed feedback gain matrix (see the sketch at the end of this section).
• For continuous-time systems, the optimal feedback gain is computed as κ = r⁻¹.(b_fᵀ.x_r + pᵀ), where x_r is the solution of the continuous Riccati equation aᵀ.x_r + x_r.a - (x_r.b_f + p).r⁻¹.(b_fᵀ.x_r + pᵀ) + q = 0, and b_f is the submatrix of b associated with the feedback inputs finputs.
• For discrete-time systems, the optimal feedback gain is computed as κ = (b_fᵀ.x_r.b_f + r)⁻¹.(b_fᵀ.x_r.a + pᵀ), where x_r is the solution of the discrete Riccati equation aᵀ.x_r.a - x_r - (aᵀ.x_r.b_f + p).(b_fᵀ.x_r.b_f + r)⁻¹.(b_fᵀ.x_r.a + pᵀ) + q = 0.
• The optimal control is unique and stabilizing if (a, b_f) is stabilizable, (a, q) is detectable, q is positive semidefinite, and r is positive definite.
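
As an illustration of the state feedback law u(t) = -κ.x(t), the sketch below (with hypothetical matrices) applies the computed gains with SystemsModelStateFeedbackConnect and checks that the closed-loop poles are stable:

```wolfram
ssm = StateSpaceModel[{{{0, 1}, {-2, -1}}, {{0}, {1}}}];  (* hypothetical system *)
gains = LQRegulatorGains[ssm, {IdentityMatrix[2], {{1}}}];

(* closed loop under u(t) = -gains.x(t); its poles should have negative real parts *)
clsys = SystemsModelStateFeedbackConnect[ssm, gains];
Eigenvalues[First[Normal[clsys]]]
```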

# Examples


## Basic Examples (5)

Compute the optimum feedback gain matrix for a continuous-time system:
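
A minimal sketch with hypothetical matrices:

```wolfram
sys = StateSpaceModel[{{{-3, 1}, {0, -1}}, {{1}, {1}}}];
LQRegulatorGains[sys, {{{1, 0}, {0, 2}}, {{1}}}]
```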

Calculate the optimal control gains for an unstable system:

Compare the open- and closed-loop poles:

Compute the optimal state-feedback gain matrix for a discrete-time system:
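
For example, a sketch with a hypothetical discrete-time model (note the SamplingPeriod):

```wolfram
dsys = StateSpaceModel[{{{0.6, 0.2}, {0., 1.1}}, {{0.}, {1.}}}, SamplingPeriod -> 1];
LQRegulatorGains[dsys, {IdentityMatrix[2], {{1}}}]
```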

Calculate the feedback gains for controlling a two-input system using the first input:
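
A sketch of the feedback-inputs form, assuming a hypothetical two-input system in which only the first input is used for feedback:

```wolfram
sys2 = StateSpaceModel[{{{-1, 2}, {1, -4}}, {{1, 0}, {0, 1}}}];
LQRegulatorGains[{sys2, {1}}, {IdentityMatrix[2], {{1}}}]
```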

A set of feedback gains for a stabilizable but uncontrollable system:

## Scope (7)

Compute the feedback gains for a continuous-time state-space model:

The feedback gains for a discrete-time system:

Compute the optimal gain matrix when the cost function contains state-control coupling:
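
For instance, a sketch with a hypothetical cross-coupling matrix p (sized states × inputs):

```wolfram
sys = StateSpaceModel[{{{-1, 1}, {0, -2}}, {{1}, {1}}}];
q = {{2, 0}, {0, 1}}; r = {{1}}; p = {{1/2}, {0}};
LQRegulatorGains[sys, {q, r, p}]
```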

The LQR gains for a system in which only the fourth and fifth inputs are feedback inputs:

The gains for a system with the first two inputs as feedback inputs and a cost function with cross-coupling:

Find the optimal gains for a descriptor state-space model:
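
A sketch using a hypothetical descriptor model StateSpaceModel[{a, b, c, d, e}] with an invertible descriptor matrix e:

```wolfram
dsm = StateSpaceModel[{{{-2, 0}, {1, -1}}, {{1}, {0}}, {{1, 0}}, {{0}}, {{1, 0}, {0, 3}}}];
LQRegulatorGains[dsm, {IdentityMatrix[2], {{1}}}]
```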

Compute the gains for a NonlinearStateSpaceModel:
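
For example, a sketch with a hypothetical single-state nonlinear model; the gains are computed from its Taylor linearization about the default (zero) operating point:

```wolfram
nsys = NonlinearStateSpaceModel[{{-x^3 - x + u}, {x}}, {x}, {u}];
LQRegulatorGains[nsys, {{{1}}, {{1}}}]
```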

The closed-loop system:

The state response is stable:

## Applications (3)

Compute a set of state feedback gains that stabilizes an unstable system:

The response of the closed-loop system to a step input:

The open-loop system is unstable:

Compute the Hessian of the cost function's integrand to determine the weighting matrices:

The natural response of the closed-loop system:

Without feedback, the system is highly oscillatory:

Design a regulator for the discrete-time model of a mixing tank system:

The response of the system to an impulse at the second input:

## Properties & Relations (9)

Find the optimal feedback gains for a continuous-time system:

The same solution can be obtained from the solution to the continuous-time Riccati equation:
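
A sketch of this check, assuming a hypothetical system; the gain κ = r⁻¹.(bᵀ.x_r) is recovered from RiccatiSolve:

```wolfram
a = {{-1., 2.}, {0., -3.}}; b = {{1.}, {1.}}; q = IdentityMatrix[2]; r = {{1.}};
sys = StateSpaceModel[{a, b}];

LQRegulatorGains[sys, {q, r}]

(* the same gains from the continuous algebraic Riccati equation solution *)
xr = RiccatiSolve[{a, b}, {q, r}];
Inverse[r] . Transpose[b] . xr
```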

Find the optimal feedback gains for a discrete-time system:

The same solution can be obtained from the solution to the discrete-time Riccati equation:
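
A corresponding sketch for a hypothetical discrete-time system, using DiscreteRiccatiSolve:

```wolfram
ad = {{0.8, 0.3}, {0., 1.1}}; bd = {{0.}, {1.}}; q = IdentityMatrix[2]; r = {{1.}};
dsys = StateSpaceModel[{ad, bd}, SamplingPeriod -> 1];

LQRegulatorGains[dsys, {q, r}]

(* the same gains from the discrete algebraic Riccati equation solution *)
xr = DiscreteRiccatiSolve[{ad, bd}, {q, r}];
Inverse[Transpose[bd] . xr . bd + r] . (Transpose[bd] . xr . ad)
```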

Find the loop gain transfer function for an LQR design:

Its Nyquist plot lies outside the unit circle centered at -1:

Consequently, the gain margin and the phase margin satisfy gm ∈ [1/2, ∞) and pm ≥ 60°:

The gains associated with a state increase as its weight is increased:

The higher penalty on the second state reduces its overshoot:

Find the optimal gains for a stable system as input cost is varied:

The gain converges to zero as the state weight approaches zero, or the input weight approaches infinity:

As the gain decreases, the closed-loop poles move closer to the open-loop ones:

Find the optimal gains for an unstable system as state cost is varied:

The gain converges to a minimum as the state weight approaches zero, or the input weight approaches infinity:

As k decreases, the closed-loop poles move nearer the stable open-loop poles and the mirror images of the unstable ones:

When the state weight approaches infinity, or the input weight approaches zero, the gain becomes unbounded:

As the gains increase, the states are penalized more, and their values become smaller:

The optimal cost-to-go is a Lyapunov function:

The state trajectory projected on the optimal cost surface asymptotically approaches the origin:

The optimal state trajectory for a system with one state:

The co-state trajectory:

The optimal input trajectory:

The optimal cost trajectory:

The optimal cost satisfies the infinite horizon Hamilton-Jacobi-Bellman equation:

The optimal input minimizes the Hamiltonian, thus satisfying ∂H/∂u = 0:

## Possible Issues (4)

The gain computations fail for an unstabilizable system:

The gain computations fail if the control weighting matrix is not positive definite:

Use a positive-definite control weighting matrix:
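
A sketch with a hypothetical single-state system; the first call uses a control weight that is not positive definite, the second a positive-definite one:

```wolfram
sys = StateSpaceModel[{{{-1}}, {{1}}}];
LQRegulatorGains[sys, {{{1}}, {{0}}}]  (* r = {{0}} is not positive definite; the computation fails *)
LQRegulatorGains[sys, {{{1}}, {{1}}}]  (* a positive-definite r yields a valid gain *)
```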

If (a, q) has unobservable modes on the imaginary axis, there is no continuous-time solution:

The zero eigenvalue is unobservable:

If (a, q) has unobservable modes on the unit circle, there is no discrete-time solution:

The eigenvalue 1 is unobservable:

#### Text

Wolfram Research (2010), LQRegulatorGains, Wolfram Language function, https://reference.wolfram.com/language/ref/LQRegulatorGains.html (updated 2014).

#### BibTeX

@misc{reference.wolfram_2020_lqregulatorgains, author="Wolfram Research", title="{LQRegulatorGains}", year="2014", howpublished="\url{https://reference.wolfram.com/language/ref/LQRegulatorGains.html}", note="[Accessed: 24-January-2021]"}

#### BibLaTeX

@online{reference.wolfram_2020_lqregulatorgains, organization={Wolfram Research}, title={LQRegulatorGains}, year={2014}, url={https://reference.wolfram.com/language/ref/LQRegulatorGains.html}, note={[Accessed: 24-January-2021]}}

#### CMS

Wolfram Language. 2010. "LQRegulatorGains." Wolfram Language & System Documentation Center. Wolfram Research. Last Modified 2014. https://reference.wolfram.com/language/ref/LQRegulatorGains.html.

#### APA

Wolfram Language. (2010). LQRegulatorGains. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/LQRegulatorGains.html