2.1 Quick Reference
For each problem, Advanced Numerical Methods provides more than one (in some cases, several) computationally viable numerical algorithms to enable comparison of relevant features among the algorithms with respect to their efficiency, numerical effectiveness, and accuracy. Alternatively, Advanced Numerical Methods can often select a suitable algorithm automatically, given the size of the problem, the precision of the data, and an a priori estimate of the precision of the solution needed for a particular application.
The implemented algorithms are described in the following lists, which are organized to match the contents of this guide. Typically, Advanced Numerical Methods supplies either functions that operate on the familiar control objects (StateSpace and TransferFunction) from Control System Professional, or new options for existing Control System Professional functions.
Note that this section gives only the most direct syntax for invoking each method. The design of Control System Professional, however, makes the corresponding algorithms immediately available to all related functions. For example, the new methods for solving the Riccati equations are listed below as options for the function RiccatiSolve; the same options can also be given to the functions LQRegulatorGains and LQEstimatorGains, which call RiccatiSolve internally and accept all the relevant options.
A comprehensive review of the algorithms implemented in Advanced Numerical Methods can be found in Datta (2003). Patel et al. (1994) provides a useful collection of reprinted papers.
Solutions of the Lyapunov and Sylvester Matrix Equations
The Schur method for the Lyapunov equations is implemented as LyapunovSolve[a, b, SolveMethod → SchurDecomposition] (continuous-time case) and DiscreteLyapunovSolve[a, b, SolveMethod → SchurDecomposition] (discrete-time case).
The Hessenberg-Schur method for the Sylvester equations is implemented as LyapunovSolve[a, b, c, SolveMethod → HessenbergSchurDecomposition] (continuous-time case) and DiscreteLyapunovSolve[a, b, c, SolveMethod → HessenbergSchurDecomposition] (discrete-time case).
The Cholesky factors of the controllability and observability Gramians of a stable system are computed using CholeskyFactorControllabilityGramian[system] and CholeskyFactorObservabilityGramian[system].
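For readers who want to see the equation these functions address, the sketch below (pure Python, illustrative only, all names ours) solves the continuous-time Lyapunov equation A X + X Aᵀ = −Q by vectorizing it into an n² × n² linear system. This is the naive Kronecker-product approach, shown only for clarity; the Schur method used by the package deliberately avoids forming this large system.

```python
# Naive illustration of the continuous-time Lyapunov equation
#   A X + X A^T = -Q
# solved by vectorization: (I (x) A + A (x) I) vec(X) = -vec(Q).
# Pure Python; for small n only.

def solve_linear(M, v):
    """Gaussian elimination with partial pivoting."""
    n = len(v)
    aug = [M[i][:] + [v[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= f * aug[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(aug[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (aug[r][n] - s) / aug[r][r]
    return x

def lyapunov_solve(A, Q):
    """Solve A X + X A^T = -Q for X (A, Q given as n x n nested lists)."""
    n = len(A)
    K = [[0.0] * (n * n) for _ in range(n * n)]
    rhs = [0.0] * (n * n)
    for i in range(n):
        for j in range(n):
            row = i * n + j          # row-major index of entry X[i][j]
            rhs[row] = -Q[i][j]
            for k in range(n):
                K[row][k * n + j] += A[i][k]   # contribution of (A X)_{ij}
                K[row][i * n + k] += A[j][k]   # contribution of (X A^T)_{ij}
    x = solve_linear(K, rhs)
    return [x[i * n:(i + 1) * n] for i in range(n)]
```

For a stable A such as {{-1, 2}, {0, -3}} with Q the identity, the residual A X + X Aᵀ + Q vanishes to machine precision.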
Solutions of the Algebraic Riccati Equations
The Schur method is implemented as RiccatiSolve[a, b, q, r, SolveMethod → SchurDecomposition] (continuous-time case) and DiscreteRiccatiSolve[a, b, q, r, SolveMethod → SchurDecomposition] (discrete-time case).
The Newton method is implemented as RiccatiSolve[a, b, q, r, SolveMethod → Newton, InitialGuess → x0] (continuous-time case) and DiscreteRiccatiSolve[a, b, q, r, SolveMethod → Newton, InitialGuess → x0] (discrete-time case), where x0 is the initial approximation to the solution.
The matrix sign-function method is implemented as RiccatiSolve[a, b, q, r, SolveMethod → MatrixSign] (continuous-time case) and DiscreteRiccatiSolve[a, b, q, r, SolveMethod → MatrixSign] (discrete-time case).
The inverse-free method based on generalized eigenvectors is implemented as RiccatiSolve[a, b, q, r, SolveMethod → GeneralizedEigendecomposition] (continuous-time case) and DiscreteRiccatiSolve[a, b, q, r, SolveMethod → GeneralizedEigendecomposition] (discrete-time case).
The inverse-free method based on the generalized Schur decomposition is implemented as RiccatiSolve[a, b, q, r, SolveMethod → GeneralizedSchurDecomposition] (continuous-time case) and DiscreteRiccatiSolve[a, b, q, r, SolveMethod → GeneralizedSchurDecomposition] (discrete-time case).
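As a minimal illustration of the Newton option, the scalar continuous-time Riccati equation 2ax − (b²/r)x² + q = 0 can be solved by Newton's iteration; the matrix algorithm replaces the scalar division below by the solution of a Lyapunov equation at each step. The function and its x0 argument (playing the role of InitialGuess) are our illustrative sketch, not package code.

```python
# Newton's method for the continuous-time algebraic Riccati equation,
# shown in the scalar case  2 a x - (b^2/r) x^2 + q = 0  for concreteness.

def care_newton_scalar(a, b, q, r, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        f = 2.0 * a * x - (b * b / r) * x * x + q   # Riccati residual
        df = 2.0 * a - 2.0 * (b * b / r) * x        # derivative of the residual
        step = f / df
        x -= step
        if abs(step) < tol:
            break
    return x
```

With a = b = q = r = 1 and a sufficiently large starting value, the iteration converges to the stabilizing root x = 1 + √2 of the scalar quadratic, matching the closed form x = r(a + √(a² + b²q/r))/b².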
Reduction to Controller-Hessenberg and Observer-Hessenberg Forms
Controller-Hessenberg forms are computed by ControllerHessenbergForm[system] and LowerControllerHessenbergForm[system].
Observer-Hessenberg forms are computed by ObserverHessenbergForm[system] and UpperObserverHessenbergForm[system].
Controllability and Observability Tests
Tests of controllability and observability using controller-Hessenberg and observer-Hessenberg forms are performed via Controllable[system, ControllabilityTest → FullRankControllerHessenbergBlocks] and Observable[system, ObservabilityTest → FullRankObserverHessenbergBlocks].
Tests of controllability and observability of a stable system via positive definiteness of the Gramians are performed via Controllable[system, ControllabilityTest → PositiveDiagonalCholeskyFactorControllabilityGramian] and Observable[system, ObservabilityTest → PositiveDiagonalCholeskyFactorObservabilityGramian].
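The package's tests above rely on controller-Hessenberg forms or Gramian Cholesky factors for numerical robustness. As a simpler textbook point of comparison, the sketch below applies the classical Kalman rank test (a single-input pair (A, b) is controllable iff [b, Ab, …, Aⁿ⁻¹b] has rank n) in pure Python; all names are ours and the naive rank computation is for illustration only.

```python
# Classical Kalman rank test for controllability of a single-input system.

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

def rank(cols, tol=1e-10):
    """Numerical rank of a matrix given as a list of columns (Gram-Schmidt)."""
    basis, r = [], 0
    for c in cols:
        c = c[:]
        for u in basis:
            proj = sum(ci * ui for ci, ui in zip(c, u))
            c = [ci - proj * ui for ci, ui in zip(c, u)]
        norm = sum(ci * ci for ci in c) ** 0.5
        if norm > tol:
            basis.append([ci / norm for ci in c])
            r += 1
    return r

def controllable(A, b):
    """True iff the controllability matrix [b, A b, ..., A^{n-1} b] has rank n."""
    n = len(A)
    cols, v = [], b[:]
    for _ in range(n):
        cols.append(v)
        v = matvec(A, v)
    return rank(cols) == n
```

For example, the double integrator ({{0, 1}, {0, 0}}, {0, 1}) is controllable, while a diagonal system whose input enters only one mode is not.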
Pole Assignment
The recursive algorithm is implemented as StateFeedbackGains[system, poles, Method → Recursive].
The explicit QR algorithm is implemented as StateFeedbackGains[system, poles, Method → QRDecomposition].
The Schur method is implemented as StateFeedbackGains[system, poles, Method → SchurDecomposition].
The RQ modification of the recursive single-input algorithm is implemented as StateFeedbackGains[system, poles, Method → RecursiveRQDecomposition].
The implicit single-input RQ algorithm is implemented as StateFeedbackGains[system, poles, Method → ImplicitRQDecomposition].
The projection technique for the partial pole assignment problem is implemented as StateFeedbackGains[system, poles], where the length of the list of poles is smaller than the order of the system; only the supplied ("bad") poles are reassigned, while the remaining ("good") poles of the open-loop system are left in place.
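For intuition about what the pole-assignment functions compute, here is a pure-Python sketch of the classical Ackermann formula for a two-state single-input system: K = [0 1] C⁻¹ φ_d(A), where C is the controllability matrix and φ_d the desired characteristic polynomial. The recursive, QR, and Schur methods above exist precisely because this explicit formula scales poorly numerically; the code and its names are illustrative only.

```python
# Ackermann's formula for a 2-state single-input system.
# Returns K such that A - b K has the two desired eigenvalues (u = -K x).

def ackermann_2x2(A, b, poles):
    p1, p2 = poles
    # Desired characteristic polynomial phi_d(s) = s^2 + c1 s + c0.
    c1, c0 = -(p1 + p2), p1 * p2
    Ab = [A[0][0] * b[0] + A[0][1] * b[1],
          A[1][0] * b[0] + A[1][1] * b[1]]
    C = [[b[0], Ab[0]], [b[1], Ab[1]]]            # controllability matrix [b, A b]
    det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    Cinv = [[ C[1][1] / det, -C[0][1] / det],
            [-C[1][0] / det,  C[0][0] / det]]
    # phi_d(A) = A^2 + c1 A + c0 I
    A2 = [[sum(A[i][k] * A[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    phi = [[A2[i][j] + c1 * A[i][j] + c0 * (1.0 if i == j else 0.0)
            for j in range(2)] for i in range(2)]
    last = [Cinv[1][0], Cinv[1][1]]               # last row of C^{-1}
    return [last[0] * phi[0][j] + last[1] * phi[1][j] for j in range(2)]
```

For the double integrator with desired poles {−1, −2}, the formula gives K = {2, 3}, so that A − bK has characteristic polynomial s² + 3s + 2.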
Feedback Stabilization
Constrained feedback stabilization is computed by StateFeedbackGains[system, region], where the available regions are DampingFactorRegion[…], SettlingTimeRegion[…], DampingRatioRegion[…], and NaturalFrequencyRegion[…] (with the region parameters supplied as arguments), as well as intersections of these regions.
The Lyapunov algorithms for feedback stabilization are implemented as StateFeedbackGains[system, region, Method → LyapunovShift] and StateFeedbackGains[system, region, Method → PartialLyapunovShift].
Design of the Reduced-Order State Estimator (Observer)
The reduced-order state estimator using the pole assignment approach is computed by ReducedOrderEstimator[system, poles].
The reduced-order state estimator via solution of the Sylvester-observer equation using the recursive bidiagonal scheme is computed by ReducedOrderEstimator[system, poles, Method → RecursiveBidiagonal] and ReducedOrderEstimator[system, poles, Method → RecursiveBlockBidiagonal] (the block version of the scheme).
The reduced-order state estimator via solution of the Sylvester-observer equation using the recursive triangular scheme is computed by ReducedOrderEstimator[system, poles, Method → RecursiveTriangular] and ReducedOrderEstimator[system, poles, Method → RecursiveBlockTriangular] (the block version of the scheme).
Model Reduction
The Schur method for model reduction is implemented as DominantSubsystem[system, Method → SchurDecomposition].
The square-root method is implemented as DominantSubsystem[system, Method → SquareRoot].
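Balanced-truncation methods such as the two above retain the states associated with the largest Hankel singular values. As a far simpler stand-in, the sketch below performs modal truncation of an already-diagonal stable system, keeping the modes with the largest DC-gain contribution |cᵢbᵢ/aᵢ|. This is not the package's algorithm; the function and all names are illustrative.

```python
# Modal truncation of a diagonal stable system x' = diag(a) x + b u, y = c x.
# Keeps the `keep` modes contributing most to the DC gain sum_i -c_i b_i / a_i.

def modal_truncate(a_diag, b, c, keep):
    score = [abs(c[i] * b[i] / a_diag[i]) for i in range(len(a_diag))]
    order = sorted(range(len(a_diag)), key=lambda i: -score[i])[:keep]
    order.sort()  # preserve the original state ordering among kept modes
    return ([a_diag[i] for i in order],
            [b[i] for i in order],
            [c[i] for i in order])
```

For a system with a slow mode at −1 and a fast, weakly contributing mode at −100, keeping one state retains the slow mode.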
Model Identification
Identification of a system from its impulse response is performed by ImpulseResponseIdentify[response].
Identification of a system from its frequency response is performed by FrequencyResponseIdentify[response].
Identification of a system directly from input-output data is performed by OutputResponseIdentify[u, y].
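Impulse-response identification constructs a state-space realization from the Markov parameters of the system, in the spirit of Ho-Kalman-type algorithms. As the simplest possible illustration of the idea, the sketch below recovers a first-order discrete-time SISO model from its impulse response h_k = c aᵏ⁻¹ b, where successive ratios of the samples recover the pole and the first sample recovers the product cb. The function name and approach are ours, not the package's.

```python
# Identify a first-order discrete-time SISO model from its impulse response.
# For x[k+1] = a x[k] + b u[k], y[k] = c x[k], the Markov parameters are
# h_k = c a^{k-1} b, so h_{k+1}/h_k = a and h_1 = c b.

def identify_first_order(h):
    """Estimate (a, cb) from impulse-response samples h_1, h_2, ... ."""
    # Average the ratios for a slightly more robust pole estimate.
    ratios = [h[k + 1] / h[k] for k in range(len(h) - 1)]
    a = sum(ratios) / len(ratios)
    return a, h[0]
```

Feeding in the impulse response of a system with pole 0.5 and cb = 2 returns those parameters exactly (for noise-free data).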
Miscellaneous Matrix Decompositions and Functions
The generalized Schur decomposition is computed by GeneralizedSchurDecomposition[a, b].
The ordered Schur and generalized Schur decompositions are computed by SchurDecompositionOrdered[a, b, pred] and GeneralizedSchurDecompositionOrdered[{{a, b}, {q, z}}, pred]. SchurDecompositionOrdered[a, b, pred, RealBlockForm → True] gives the ordered Schur decomposition in which complex eigenvalues of a real input matrix appear as 2 × 2 real blocks (this is the new default form for SchurDecompositionOrdered).
The generalized eigenvalues and eigenvectors via the generalized Schur decomposition are computed by GeneralizedEigenvalues[a, b], GeneralizedEigenvectors[a, b], and GeneralizedEigensystem[a, b].
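GeneralizedEigenvalues[a, b] returns the roots of det(A − λB) = 0, computed internally via the generalized Schur (QZ) decomposition. For a 2 × 2 pencil those roots can be read off a quadratic, which the illustrative pure-Python sketch below uses (assuming B nonsingular; the function name is ours).

```python
# Generalized eigenvalues of a 2 x 2 pencil (A, B): roots of det(A - lam*B) = 0.
# Expanding the determinant gives the quadratic
#   det(B) lam^2 - (a00 b11 + a11 b00 - a01 b10 - a10 b01) lam + det(A) = 0.

import cmath

def generalized_eigenvalues_2x2(A, B):
    p2 = B[0][0] * B[1][1] - B[0][1] * B[1][0]          # det(B)
    p0 = A[0][0] * A[1][1] - A[0][1] * A[1][0]          # det(A)
    p1 = -(A[0][0] * B[1][1] + A[1][1] * B[0][0]
           - A[0][1] * B[1][0] - A[1][0] * B[0][1])
    disc = cmath.sqrt(p1 * p1 - 4.0 * p2 * p0)          # allow complex pairs
    return ((-p1 + disc) / (2.0 * p2), (-p1 - disc) / (2.0 * p2))
```

With B the identity this reduces to the ordinary eigenvalue problem, so the pencil ({{2, 0}, {0, 3}}, I) yields {2, 3}.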
