2.1 Quick Reference
For each problem, Advanced Numerical Methods provides more than one (in some cases, several) computationally viable numerical algorithm, enabling comparison of the algorithms with respect to their efficiency, numerical effectiveness, and accuracy. Alternatively, Advanced Numerical Methods can often select a suitable algorithm automatically, given the size of the problem, the precision of the data, and an a priori estimate of the precision required of the solution for a particular application.
The implemented algorithms are described in the following lists, which are organized to match the contents of this guide. Typically, Advanced Numerical Methods supplies either functions that operate on the familiar control objects (StateSpace and TransferFunction) from Control System Professional or options for other Control System Professional functions.
Note that this section gives only the most direct syntax for invoking each method. The design of Control System Professional, however, makes the corresponding algorithms immediately available to all related functions. For example, the new methods for solving the Riccati equations are listed below as options for the function RiccatiSolve; the same options can also be given to LQRegulatorGains and LQEstimatorGains, which call RiccatiSolve internally and accept all the relevant options.
A comprehensive review of the algorithms implemented in Advanced Numerical Methods can be found in Datta (2003). Patel et al. (1994) provides a useful collection of reprinted papers.
Solutions of the Lyapunov and Sylvester Matrix Equations
The Schur method for the Lyapunov equations is implemented as LyapunovSolve[a, b, SolveMethod → SchurDecomposition] (continuous-time case) and DiscreteLyapunovSolve[a, b, SolveMethod → SchurDecomposition] (discrete-time case).
The Hessenberg-Schur method for the Sylvester equations is implemented as LyapunovSolve[a, b, c, SolveMethod → HessenbergSchurDecomposition] (continuous-time case) and DiscreteLyapunovSolve[a, b, c, SolveMethod → HessenbergSchurDecomposition] (discrete-time case).
The Cholesky factors of the controllability and observability Gramians of a stable system are computed using CholeskyFactorControllabilityGramian[system] and CholeskyFactorObservabilityGramian[system].
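To illustrate the underlying algorithms (not the package's own Wolfram Language code), here is a minimal Python sketch using SciPy, whose Lyapunov and Sylvester solvers are documented to use the Bartels-Stewart Schur approach. The matrices are arbitrary small examples.

```python
import numpy as np
from scipy import linalg

# Continuous-time Lyapunov equation  A X + X A^T = Q,
# solved by a Schur-based (Bartels-Stewart) method.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
Q = -np.eye(2)
X = linalg.solve_continuous_lyapunov(A, Q)

# Sylvester equation  A Y + Y B = C.
B = np.array([[-1.0, 0.0],
              [2.0, -4.0]])
C = np.ones((2, 2))
Y = linalg.solve_sylvester(A, B, C)

lyap_residual = np.linalg.norm(A @ X + X @ A.T - Q)
sylv_residual = np.linalg.norm(A @ Y + Y @ B - C)
```

For discrete-time equations, SciPy provides the analogous solve_discrete_lyapunov.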
Solutions of the Algebraic Riccati Equations
The Schur method is implemented as RiccatiSolve[a, b, q, r, SolveMethod → SchurDecomposition] (continuous-time case) and DiscreteRiccatiSolve[a, b, q, r, SolveMethod → SchurDecomposition] (discrete-time case).
The Newton method is implemented as RiccatiSolve[a, b, q, r, SolveMethod → Newton, InitialGuess → x0] (continuous-time case) and DiscreteRiccatiSolve[a, b, q, r, SolveMethod → Newton, InitialGuess → x0] (discrete-time case), where x0 is the initial guess for the solution matrix.
The matrix sign-function method is implemented as RiccatiSolve[a, b, q, r, SolveMethod → MatrixSign] (continuous-time case) and DiscreteRiccatiSolve[a, b, q, r, SolveMethod → MatrixSign] (discrete-time case).
The inverse-free method based on generalized eigenvectors is implemented as RiccatiSolve[a, b, q, r, SolveMethod → GeneralizedEigendecomposition] (continuous-time case) and DiscreteRiccatiSolve[a, b, q, r, SolveMethod → GeneralizedEigendecomposition] (discrete-time case).
The inverse-free method based on generalized Schur decomposition is implemented as RiccatiSolve[a, b, q, r, SolveMethod → GeneralizedSchurDecomposition] (continuous-time case) and DiscreteRiccatiSolve[a, b, q, r, SolveMethod → GeneralizedSchurDecomposition] (discrete-time case).
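As a Python illustration of two of these approaches (a sketch, not the package's implementation): SciPy's solve_continuous_are gives a direct Schur-based solution, and Newton's method for the continuous-time equation is the classical Kleinman iteration, which solves one Lyapunov equation per step starting from a stabilizing initial guess.

```python
import numpy as np
from scipy import linalg

# Continuous-time algebraic Riccati equation
#   A^T X + X A - X S X + Q = 0,   S = B R^{-1} B^T.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.eye(1)
S = B @ np.linalg.inv(R) @ B.T

# Direct (Schur-based) solution.
X = linalg.solve_continuous_are(A, B, Q, R)

# Newton's method (Kleinman iteration): each step solves the Lyapunov
# equation  Ak^T X_{k+1} + X_{k+1} Ak = -(Q + X_k S X_k).
Xk = np.array([[1.0, 1.0],
               [1.0, 1.0]])   # A - S @ Xk is stable for this guess
for _ in range(10):
    Ak = A - S @ Xk
    Xk = linalg.solve_continuous_lyapunov(Ak.T, -(Q + Xk @ S @ Xk))

newton_error = np.linalg.norm(Xk - X)
```

The iteration converges quadratically once a stabilizing initial guess is available, which is why the InitialGuess option matters for the Newton method above.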
Reduction to Controller-Hessenberg and Observer-Hessenberg Forms
Controller-Hessenberg forms are computed by ControllerHessenbergForm[system] and LowerControllerHessenbergForm[system].
Observer-Hessenberg forms are computed by ObserverHessenbergForm[system] and UpperObserverHessenbergForm[system].
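For the single-input case, the controller-Hessenberg reduction can be sketched as an Arnoldi-style orthogonal reduction: build U column by column from the Krylov sequence of (A, b), so that UᵀAU is upper Hessenberg and Uᵀb is a multiple of e₁. This is an illustration of the idea only; the function name below is hypothetical, not the package's code.

```python
import numpy as np

def controller_hessenberg(A, b):
    """Arnoldi-style reduction of a controllable single-input pair (A, b):
    returns orthogonal U and H = U^T A U upper Hessenberg with
    U^T b a multiple of e1.  Breaks down (zero division) if (A, b)
    is not controllable."""
    n = A.shape[0]
    U = np.zeros((n, n))
    H = np.zeros((n, n))
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(n):
        w = A @ U[:, j]
        for i in range(j + 1):          # orthogonalize against U[:, :j+1]
            H[i, j] = U[:, i] @ w
            w = w - H[i, j] * U[:, i]
        if j + 1 < n:
            H[j + 1, j] = np.linalg.norm(w)
            U[:, j + 1] = w / H[j + 1, j]
    return U, H

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 2.0, 3.0]])
b = np.array([1.0, 0.0, 0.0])
U, H = controller_hessenberg(A, b)
```

The observer-Hessenberg form is the dual construction applied to (Aᵀ, cᵀ).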
Controllability and Observability Tests
Tests of controllability and observability using controller-Hessenberg and observer-Hessenberg forms are performed via Controllable[system, ControllabilityTest → FullRankControllerHessenbergBlocks] and Observable[system, ObservabilityTest → FullRankObserverHessenbergBlocks].
Tests of controllability and observability of a stable system via positive definiteness of Gramians are performed via Controllable[system, ControllabilityTest → PositiveDiagonalCholeskyFactorControllabilityGramian] and Observable[system, ObservabilityTest → PositiveDiagonalCholeskyFactorObservabilityGramian].
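A Python sketch of the Gramian-based test (for illustration; production code such as the package's would compute the Cholesky factor directly, e.g. by Hammarling's method, without forming the Gramian):

```python
import numpy as np
from scipy import linalg

# A stable system is controllable iff its controllability Gramian Wc,
# the solution of  A Wc + Wc A^T + B B^T = 0,  is positive definite,
# i.e. iff a Cholesky factor with strictly positive diagonal exists.
A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
Wc = linalg.solve_continuous_lyapunov(A, -B @ B.T)
L = np.linalg.cholesky(Wc)   # raises LinAlgError if Wc is not pos. def.
controllable = bool(np.all(np.diag(L) > 0))
```

The observability test is the dual computation with (Aᵀ, Cᵀ).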
Pole Assignment
The recursive algorithm is implemented as StateFeedbackGains[system, poles, Method → Recursive].
The explicit QR algorithm is implemented as StateFeedbackGains[system, poles, Method → QRDecomposition].
The Schur method is implemented as StateFeedbackGains[system, poles, Method → SchurDecomposition].
The RQ modification of the recursive single-input algorithm is implemented as StateFeedbackGains[system, poles, Method → RecursiveRQDecomposition].
The implicit single-input RQ algorithm is implemented as StateFeedbackGains[system, poles, Method → ImplicitRQDecomposition].
The projection technique for the partial pole assignment problem is implemented as StateFeedbackGains[system, {badpoles, goodpoles}] and StateFeedbackGains[system, {p1, …, pk}], where the length k of the list of poles is smaller than the order of the system.
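For reference, the same pole assignment problem can be solved in Python with scipy.signal.place_poles; note that SciPy implements different algorithms (Kautsky-Nichols-Van Dooren and Tits-Yang) from the methods listed above, but the problem is identical: find K such that the eigenvalues of A − BK are the desired poles.

```python
import numpy as np
from scipy import signal

# Double integrator; assign closed-loop poles at -1 and -2.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = signal.place_poles(A, B, np.array([-1.0, -2.0])).gain_matrix
assigned = np.sort(np.linalg.eigvals(A - B @ K).real)
```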
Feedback Stabilization
Constrained feedback stabilization is computed by StateFeedbackGains[system, region], where the available regions are DampingFactorRegion[…], SettlingTimeRegion[…, …], DampingRatioRegion[…], and NaturalFrequencyRegion[…], and their intersections.
The Lyapunov algorithms for the feedback stabilization are implemented as StateFeedbackGains[system, region, Method → LyapunovShift] and StateFeedbackGains[system, region, Method → PartialLyapunovShift].
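A Python sketch of the Lyapunov-shift idea (Bass's algorithm, in the spirit of the LyapunovShift method as described in Datta 2003; this is an illustration, not the package's implementation):

```python
import numpy as np
from scipy import linalg

def lyapunov_shift_gains(A, B, beta=None):
    """Bass's Lyapunov-shift stabilization: pick beta larger than the
    spectral abscissa of A, solve
        -(A + beta I) Z + Z (-(A + beta I))^T = -2 B B^T,
    and return K = B^T Z^{-1}.  Then
        (A - B K) Z + Z (A - B K)^T = -2 beta Z,
    so A - B K is stable whenever Z > 0 (controllable pair)."""
    n = A.shape[0]
    if beta is None:
        beta = np.linalg.norm(A, 1) + 1.0   # any bound above max Re(eig(A))
    As = -(A + beta * np.eye(n))            # stable shifted matrix
    Z = linalg.solve_continuous_lyapunov(As, -2.0 * B @ B.T)
    return B.T @ np.linalg.inv(Z)

A = np.array([[1.0, 0.0],
              [0.0, 2.0]])   # unstable open loop
B = np.array([[1.0],
              [1.0]])
K = lyapunov_shift_gains(A, B)
spectral_abscissa = np.max(np.linalg.eigvals(A - B @ K).real)
```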
Design of the Reduced-Order State Estimator (Observer)
The reduced-order state estimator using the pole assignment approach is computed by ReducedOrderEstimator[system, poles].
The reduced-order state estimator via solution of the Sylvester-observer equation using the recursive bidiagonal scheme is computed by ReducedOrderEstimator[system, poles, Method → RecursiveBidiagonal] and ReducedOrderEstimator[system, poles, Method → RecursiveBlockBidiagonal] (the block version of the scheme).
The reduced-order state estimator via solution of the Sylvester-observer equation using the recursive triangular scheme is computed by ReducedOrderEstimator[system, poles, Method → RecursiveTriangular] and ReducedOrderEstimator[system, poles, Method → RecursiveBlockTriangular] (the block version of the scheme).
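The construction behind the Sylvester-observer approach can be sketched in Python (a generic sketch using SciPy's solve_sylvester, not the recursive bidiagonal/triangular schemes themselves): choose a stable F carrying the desired observer poles and a G, solve the Sylvester-observer equation XA − FX = GC, and recover the state estimate from [C; X]⁻¹[y; z].

```python
import numpy as np
from scipy import linalg

# Third-order single-output system; build an (n - p) = 2-dimensional
# observer from the Sylvester-observer equation  X A - F X = G C.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
C = np.array([[1.0, 0.0, 0.0]])

F = np.diag([-4.0, -5.0])        # desired observer poles
G = np.array([[1.0],
              [1.0]])
# Rewrite X A - F X = G C  as  (-F) X + X A = G C  for solve_sylvester.
X = linalg.solve_sylvester(-F, A, G @ C)

# The observer runs  z' = F z + G y + (X B) u;  the estimate is
# [C; X]^{-1} [y; z], so the stacked matrix must be nonsingular.
stacked = np.vstack([C, X])
```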
Model Reduction
The Schur method for model reduction is implemented as DominantSubsystem[system, Method → SchurDecomposition].
The square-root method is implemented as DominantSubsystem[system, Method → SquareRoot].
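A Python sketch of the classical square-root method (balanced truncation via Cholesky factors of the Gramians; an illustration of the algorithm, not the package's code):

```python
import numpy as np
from scipy import linalg

def square_root_reduce(A, B, C, r):
    """Square-root balanced truncation: balance the Cholesky factors of
    the controllability and observability Gramians and keep the r
    dominant Hankel directions.  Assumes a stable, minimal system."""
    Wc = linalg.solve_continuous_lyapunov(A, -B @ B.T)
    Wo = linalg.solve_continuous_lyapunov(A.T, -C.T @ C)
    Lc = np.linalg.cholesky(Wc)               # Wc = Lc Lc^T
    Lo = np.linalg.cholesky(Wo)               # Wo = Lo Lo^T
    U, s, Vt = np.linalg.svd(Lo.T @ Lc)       # s: Hankel singular values
    Sr = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ Sr                    # right projection
    Ti = Sr @ U[:, :r].T @ Lo.T               # left projection, Ti @ T = I
    return Ti @ A @ T, Ti @ B, C @ T, s

A = np.diag([-1.0, -2.0, -3.0])
B = np.array([[1.0], [1.0], [1.0]])
C = np.array([[1.0, 1.0, 1.0]])
Ar, Br, Cr, hsv = square_root_reduce(A, B, C, 2)
```

In the reduced coordinates both Gramians equal diag of the retained Hankel singular values, which is the defining property of a balanced realization.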
Model Identification
Identification of a system from its impulse response is performed by ImpulseResponseIdentify[response].
Identification of a system from its frequency response is performed by FrequencyResponseIdentify[response].
Identification of a system directly from input-output data is performed by OutputResponseIdentify[u, y].
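The idea behind impulse-response identification can be sketched in Python via the classical Ho-Kalman/eigensystem-realization construction (an illustration under the stated assumptions; the function name is hypothetical and this is not the package's algorithm verbatim):

```python
import numpy as np

def impulse_response_identify(h, n):
    """Ho-Kalman/ERA sketch: recover an order-n discrete-time realization
    (A, B, C) from scalar Markov parameters h[k] = C A^k B.
    Assumes the true order is n and len(h) >= 2 n."""
    H0 = np.array([[h[i + j] for j in range(n)] for i in range(n)])
    H1 = np.array([[h[i + j + 1] for j in range(n)] for i in range(n)])
    U, s, Vt = np.linalg.svd(H0)
    Ob = U[:, :n] * np.sqrt(s[:n])               # observability factor
    Ct = np.sqrt(s[:n])[:, None] * Vt[:n]        # controllability factor
    A = np.linalg.pinv(Ob) @ H1 @ np.linalg.pinv(Ct)
    return A, Ct[:, :1], Ob[:1, :]

# Generate Markov parameters from a known system and re-identify it.
A0 = np.array([[0.5, 0.1],
               [0.0, 0.3]])
B0 = np.array([[1.0],
               [0.5]])
C0 = np.array([[1.0, 1.0]])
h = [(C0 @ np.linalg.matrix_power(A0, k) @ B0).item() for k in range(8)]
Ai, Bi, Ci = impulse_response_identify(h, 2)
hi = [(Ci @ np.linalg.matrix_power(Ai, k) @ Bi).item() for k in range(8)]
```

The identified realization agrees with the original only up to a similarity transformation, so the comparison is made on the Markov parameters rather than on the matrices themselves.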
Miscellaneous Matrix Decompositions and Functions
The generalized Schur decomposition is computed by GeneralizedSchurDecomposition[a, b].
The ordered Schur and generalized Schur decompositions are computed by SchurDecompositionOrdered[a, b, pred] and GeneralizedSchurDecompositionOrdered[{{a, b}, {q, z}}, pred]. SchurDecompositionOrdered[a, b, pred, RealBlockForm → True] gives the ordered Schur decomposition in which complex eigenvalues of a real input matrix appear as 2 × 2 real blocks (the new default form for SchurDecompositionOrdered).
The generalized eigenvalues and eigenvectors via the generalized Schur decomposition are computed by GeneralizedEigenvalues[a, b], GeneralizedEigenvectors[a, b], and GeneralizedEigensystem[a, b].
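For reference, the analogous computation in Python uses SciPy's QZ routine (an illustration with an arbitrary example pencil):

```python
import numpy as np
from scipy import linalg

# Generalized Schur (QZ) decomposition of the pencil (A, B):
#   A = Q @ AA @ Z^T,  B = Q @ BB @ Z^T,
# with AA quasi-upper-triangular and BB upper triangular.  For a pencil
# with real generalized eigenvalues they are the diagonal ratios.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[1.0, 0.0],
              [0.0, 2.0]])
AA, BB, Q, Z = linalg.qz(A, B, output='real')
gen_eigs = np.sort(np.diag(AA) / np.diag(BB))
```

SciPy's ordqz plays the role of the ordered variant, reordering the decomposition so that eigenvalues satisfying a predicate (e.g. left half plane) appear first.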