
Communications and Control Engineering

Ai-Guo Wu

Ying Zhang

Complex Conjugate Matrix Equations for Systems and Control


Communications and Control Engineering

Series editors

Alberto Isidori, Roma, Italy

Jan H. van Schuppen, Amsterdam, The Netherlands

Eduardo D. Sontag, Piscataway, USA

Miroslav Krstic, La Jolla, USA


Ai-Guo Wu • Ying Zhang

Complex Conjugate Matrix Equations for Systems and Control


Harbin Institute of Technology, Shenzhen

University Town of Shenzhen

Communications and Control Engineering

DOI 10.1007/978-981-10-0637-1

Library of Congress Control Number: 2016942040

Mathematics Subject Classification (2010): 15A06, 11Cxx

© Springer Science+Business Media Singapore 2017

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made.

Printed on acid-free paper

This Springer imprint is published by Springer Nature

The registered company is Springer Science+Business Media Singapore Pte Ltd.


To our supervisor, Prof. Guang-Ren Duan

To Hong-Mei, and Yi-Tian (Ai-Guo Wu)

To Rui, and Qi-Yu (Ying Zhang)

Preface

Theory of matrix equations is an important branch of mathematics, and has broad applications in many engineering fields, such as control theory, information theory, and signal processing. Specifically, algebraic Lyapunov matrix equations play vital roles in stability analysis for linear systems, and coupled Lyapunov matrix equations appear in the analysis of Markovian jump linear systems; algebraic Riccati equations are encountered in optimal control. Due to these reasons, matrix equations are extensively investigated by many scholars from various fields, and the content on matrix equations has become very rich. Matrix equations are often covered in some books on linear algebra, matrix analysis, and numerical analysis. We list several books here, for example, Topics in Matrix Analysis by R.A. Horn and C.R. Johnson [143], The Theory of Matrices by P. Lancaster and M. Tismenetsky [172], and Matrix Analysis and Applied Linear Algebra by C.D. Meyer [187]. In addition, there are some books on special matrix equations, for example, Lyapunov Matrix Equations in System Stability and Control by Z. Gajic [128], Matrix Riccati Equations in Control and Systems Theory by H. Abou-Kandil [2], and Generalized Sylvester Equations: Unified Parametric Solutions by Guang-Ren Duan [90]. It should be pointed out that all the matrix equations investigated in the aforementioned books are in the real domain. By now, it seems that there is no book on complex matrix equations with the conjugate of unknown matrices. For convenience, this class of equations is called complex conjugate matrix equations.

The first author of this book and his collaborators began to consider complex matrix equations with the conjugate of unknown matrices in 2005, inspired by the work [155] of Jiang published in Linear Algebra and Its Applications. Since then, he and his collaborators have published many papers on complex conjugate matrix equations. Recently, the second author of this book joined this field, and has obtained some interesting results. In addition, some complex conjugate matrix equations have found applications in the analysis and design of antilinear systems. This book aims to provide a relatively systematic introduction to complex conjugate matrix equations and their applications in discrete-time antilinear systems.


The book has 12 chapters. In Chap. 1, first a survey is given on linear matrix equations, and then recent developments on complex conjugate matrix equations are summarized. Some mathematical preliminaries to be used in this book are collected in Chap. 2. Besides these two chapters, the rest of this book is partitioned into three parts. The first part contains Chaps. 3–5, and focuses on the iterative solutions of several types of complex conjugate matrix equations. The second part consists of Chaps. 6–10, and focuses on explicit closed-form solutions of some complex conjugate matrix equations. In the third part, including Chaps. 11 and 12, several applications of complex conjugate matrix equations are considered. In Chap. 11, stability analysis of discrete-time antilinear systems is investigated, and some stability criteria are given in terms of anti-Lyapunov matrix equations, which are special complex conjugate matrix equations. In Chap. 12, some feedback design problems are solved for discrete-time antilinear systems by using several types of complex conjugate matrix equations. Except part of Chap. 2 and Subsection 6.1.1, the other materials of this book are based on our own research work, including some unpublished results.

The intended audience of this monograph includes students and researchers in areas of control theory, linear algebra, communication, numerical analysis, and so on. An appropriate background for this monograph would be a first course on linear algebra and linear systems theory.

Since the 1980s, many researchers have devoted much effort to complex conjugate matrix equations, and much contribution has been made to this area. Owing to space limitations and the organization of the book, many of their published results are not included or even not cited. We extend our apologies to these researchers.

It is under the supervision of our Ph.D. advisor, Prof. Guang-Ren Duan at Harbin Institute of Technology (HIT), that we entered the field of matrix equations with their applications in control systems design. Moreover, Prof. Duan has also made much contribution to the investigation of complex conjugate matrix equations, and has coauthored many papers with the first author. Some results in these papers have been included in this book. Therefore, at the beginning of preparing the manuscript, we intended to list Prof. Duan as the first author of this book due to his contribution to complex conjugate matrix equations. However, he thought that he did not make a contribution to the writing of this book, and thus should not be an author of this book. Here, we wish to express our sincere gratitude and appreciation to Prof. Duan for his magnanimity and selflessness. We also would like to express our profound gratitude to Prof. Duan for his careful guidance, wholehearted support, insightful comments, and great contribution.

We also would like to give appreciation to our colleague, Prof. Bin Zhou of HIT, for his help. The first author has coauthored some papers included in this book with Prof. Gang Feng when he visited City University of Hong Kong as a Research Fellow. The first author would like to express his sincere gratitude to Prof. Feng for his help and contribution. Dr. Yan-Ming Fu, Dr. Ming-Zhe Hou, Mr. Yang-Yang Qian, and Dr. Ling-Ling Lv have also coauthored with the first author a few papers included in this book. The first author would extend his great thanks to all of them for their contribution.


Great thanks also go to Mr. Yang-Yang Qian and Mr. Ming-Fang Chang, Ph.D. students of the first author, who have helped us in typing a few sections of the manuscripts. In addition, Mr. Fang-Zhou Fu, Miss Dan Guo, Miss Xiao-Yan He, Mr. Zhen-Peng Zeng, and Mr. Tian-Long Qin, Master students of the first author, and Mr. Yang-Yang Qian and Mr. Ming-Fang Chang have provided tremendous help in finding errors and typos in the manuscripts. Their help has significantly improved the quality of the manuscripts, and is much appreciated.

The first and second authors would like to thank his wife Ms. Hong-Mei Wang and her husband Dr. Rui Zhang, respectively, for their constant support in every aspect. Part of the book was written when the first author visited the University of Western Australia (UWA) from July 2013 to July 2014. The first author would like to thank Prof. Victor Sreeram at UWA for his help and invaluable suggestions.

We would like to gratefully acknowledge the financial support kindly provided by the National Natural Science Foundation of China under Grant Nos. 60974044 and 61273094, by the Program for New Century Excellent Talents in University under Grant No. NCET-11-0808, by the Foundation for the Author of National Excellent Doctoral Dissertation of China under Grant No. 201342, by the Specialized Research Fund for the Doctoral Program of Higher Education under Grant Nos. 20132302110053 and 20122302120069, by the Foundation for Creative Research Groups of the National Natural Science Foundation of China under Grant Nos. 61021002 and 61333003, by the National Program on Key Basic Research Project (973 Program) under Grant No. 2012CB821205, by the Project for Distinguished Young Scholars of the Basic Research Plan in Shenzhen City under Contract No. JCJ201110001, and by the Key Laboratory of Electronics Engineering, College of Heilongjiang Province (Heilongjiang University).

Lastly, we thank in advance all the readers for choosing to read this book. It is much appreciated if readers could provide, via email agwu@163.com, feedback about any problems found.

Ying Zhang

Contents

1 Introduction
1.1 Linear Equations
1.2 Univariate Linear Matrix Equations
1.2.1 Lyapunov Matrix Equations
1.2.2 Kalman-Yakubovich and Normal Sylvester Matrix Equations
1.2.3 Other Matrix Equations
1.3 Multivariate Linear Matrix Equations
1.3.1 Roth Matrix Equations
1.3.2 First-Order Generalized Sylvester Matrix Equations
1.3.3 Second-Order Generalized Sylvester Matrix Equations
1.3.4 High-Order Generalized Sylvester Matrix Equations
1.3.5 Linear Matrix Equations with More Than Two Unknowns
1.4 Coupled Linear Matrix Equations
1.5 Complex Conjugate Matrix Equations
1.6 Overview of This Monograph

2 Mathematical Preliminaries
2.1 Kronecker Products
2.2 Leverrier Algorithms
2.3 Generalized Leverrier Algorithms
2.4 Singular Value Decompositions
2.5 Vector Norms and Operator Norms
2.5.1 Vector Norms
2.5.2 Operator Norms
2.6 A Real Representation of a Complex Matrix
2.6.1 Basic Properties
2.6.2 Proof of Theorem 2.7
2.7 Consimilarity
2.8 Real Linear Spaces and Real Linear Mappings
2.8.1 Real Linear Spaces
2.8.2 Real Linear Mappings
2.9 Real Inner Product Spaces
2.10 Optimization in Complex Domain
2.11 Notes and References

Part I Iterative Solutions

3 Smith-Type Iterative Approaches
3.1 Infinite Series Form of the Unique Solution
3.2 Smith Iterations
3.3 Smith (l) Iterations
3.4 Smith Accelerative Iterations
3.5 An Illustrative Example
3.6 Notes and References

4 Hierarchical-Update-Based Iterative Approaches
4.1 Extended Con-Sylvester Matrix Equations
4.1.1 The Matrix Equation AXB + CX̄D = F
4.1.2 A General Case
4.1.3 Numerical Examples
4.2 Coupled Con-Sylvester Matrix Equations
4.2.1 Iterative Algorithms
4.2.2 Convergence Analysis
4.2.3 A More General Case
4.2.4 A Numerical Example
4.3 Complex Conjugate Matrix Equations with Transpose of Unknowns
4.3.1 Convergence Analysis
4.3.2 A Numerical Example
4.4 Notes and References

5 Finite Iterative Approaches
5.1 Generalized Con-Sylvester Matrix Equations
5.1.1 Main Results
5.1.2 Some Special Cases
5.1.3 Numerical Examples
5.2 Extended Con-Sylvester Matrix Equations
5.2.1 The Matrix Equation AXB + CX̄D = F
5.2.2 A General Case
5.2.3 Numerical Examples
5.3 Coupled Con-Sylvester Matrix Equations
5.3.1 Iterative Algorithms
5.3.2 Convergence Analysis
5.3.3 A More General Case
5.3.4 Numerical Examples
5.3.5 Proofs of Lemmas 5.15 and 5.16
5.4 Notes and References

Part II Explicit Solutions

6 Real-Representation-Based Approaches
6.1 Normal Con-Sylvester Matrix Equations
6.1.1 Solvability Conditions
6.1.2 Uniqueness Conditions
6.1.3 Solutions
6.2 Con-Kalman-Yakubovich Matrix Equations
6.2.1 Solvability Conditions
6.2.2 Solutions
6.3 Con-Sylvester Matrix Equations
6.4 Con-Yakubovich Matrix Equations
6.5 Extended Con-Sylvester Matrix Equations
6.6 Generalized Con-Sylvester Matrix Equations
6.7 Notes and References

7 Polynomial-Matrix-Based Approaches
7.1 Homogeneous Con-Sylvester Matrix Equations
7.2 Nonhomogeneous Con-Sylvester Matrix Equations
7.2.1 The First Approach
7.2.2 The Second Approach
7.3 Con-Yakubovich Matrix Equations
7.3.1 The First Approach
7.3.2 The Second Approach
7.4 Extended Con-Sylvester Matrix Equations
7.4.1 Basic Solutions
7.4.2 Equivalent Forms
7.4.3 Further Discussion
7.4.4 Illustrative Examples
7.5 Generalized Con-Sylvester Matrix Equations
7.5.1 Basic Solutions
7.5.2 Equivalent Forms
7.5.3 Special Solutions
7.5.4 An Illustrative Example
7.6 Notes and References


8 Unilateral-Equation-Based Approaches
8.1 Con-Sylvester Matrix Equations
8.2 Con-Yakubovich Matrix Equations
8.3 Nonhomogeneous Con-Sylvester Matrix Equations
8.4 Notes and References

9 Conjugate Products
9.1 Complex Polynomial Ring (C[s], +, ~)
9.2 Division with Remainder in (C[s], +, ~)
9.3 Greatest Common Divisors in (C[s], +, ~)
9.4 Coprimeness in (C[s], +, ~)
9.5 Conjugate Products of Polynomial Matrices
9.6 Unimodular Matrices and Smith Normal Form
9.7 Greatest Common Divisors
9.8 Coprimeness of Polynomial Matrices
9.9 Conequivalence and Consimilarity
9.10 An Example
9.11 Notes and References

10 Con-Sylvester-Sum-Based Approaches
10.1 Con-Sylvester Sum
10.2 Con-Sylvester-Polynomial Matrix Equations
10.2.1 Homogeneous Case
10.2.2 Nonhomogeneous Case
10.3 An Illustrative Example
10.4 Notes and References

Part III Applications in Systems and Control

11 Stability for Antilinear Systems
11.1 Stability for Discrete-Time Antilinear Systems
11.2 Stochastic Stability for Markovian Antilinear Systems
11.3 Solutions to Coupled Anti-Lyapunov Equations
11.3.1 Explicit Iterative Algorithms
11.3.2 Implicit Iterative Algorithms
11.3.3 An Illustrative Example
11.4 Notes and References
11.4.1 Summary
11.4.2 A Brief Overview

12 Feedback Design for Antilinear Systems
12.1 Generalized Eigenstructure Assignment
12.2 Model Reference Tracking Control
12.2.1 Tracking Conditions
12.2.2 Solution to the Feedback Stabilizing Gain
12.2.3 Solution to the Feedforward Compensation Gain
12.2.4 An Example
12.3 Finite Horizon Quadratic Regulation
12.4 Infinite Horizon Quadratic Regulation
12.5 Notes and References
12.5.1 Summary
12.5.2 A Brief Overview

References

Index


Notation Related to Subspaces

Z   Set of all integers
R^n   Set of all real vectors of dimension n
C^n   Set of all complex vectors of dimension n
R^{m×n}   Set of all real matrices of dimension m × n
C^{m×n}   Set of all complex matrices of dimension m × n
R^{m×n}[s]   Set of all polynomial matrices of dimension m × n with real coefficients
C^{m×n}[s]   Set of all polynomial matrices of dimension m × n with complex coefficients
Image   The image of a mapping
rdim   The real dimension of a real linear space

Notation Related to Vectors and Matrices

A^H   Transposed complex conjugate of matrix A
Re(A)   Real part of matrix A
Im(A)   Imaginary part of matrix A
det(A)   Determinant of matrix A
adj(A)   Adjoint of matrix A
tr(A)   Trace of matrix A
rank(A)   Rank of matrix A
vec(A)   Vectorization of matrix A
⊗   Kronecker product of two matrices
ρ(A)   Spectral radius of matrix A
λ(A)   Set of the eigenvalues of matrix A
λ_min(A)   The minimal eigenvalue of matrix A
λ_max(A)   The maximal eigenvalue of matrix A
σ_max(A)   The maximal singular value of matrix A
‖A‖_2   2-norm of matrix A
‖A‖   Frobenius norm of matrix A
A^{→k}   The k-th right alternating power of matrix A
A^{←k}   The k-th left alternating power of matrix A
λ(E, A)   Set of the finite eigenvalues of the matrix pair (E, A)

Other Notation

I[m, n]   The set of integers from m to n
min   The minimum value in a set
~   Conjugate product of two polynomial matrices

1 Introduction

The theory of matrix equations is an active research topic in matrix algebra, and has been extensively investigated by many researchers. Different matrix equations have wide applications in various areas, such as communication, signal processing, and control theory. Specifically, Lyapunov matrix equations are often encountered in stability analysis of linear systems [160]; the homogeneous continuous-time Lyapunov equation in block companion matrices plays a vital role in the investigation of factorizations of Hermitian block Hankel matrices [228]; generalized Sylvester matrix equations are often encountered in eigenstructure assignment of linear systems [90].

As to a matrix equation, three basic problems need to be considered: the solvability conditions, solving approaches, and expressions of the solutions. For real matrix equations, a considerable number of results have been obtained for these problems. In addition, some other problems have also been considered for some special matrix equations. For example, geometric properties of continuous-time Lyapunov matrix equations were investigated in [286]; bounds of the solution were studied for discrete-time algebraic Lyapunov equations in [173, 227] and for continuous-time Lyapunov equations in [173]. However, there are only a few results on complex matrix equations with the conjugate of unknown matrices reported in the literature. For convenience, this type of matrix equation is called the complex conjugate matrix equation. Recently, complex conjugate matrix equations have found some applications in discrete-time antilinear systems. In this book, some recent results are summarized for several kinds of complex conjugate matrix equations and their applications in analysis and feedback design of antilinear systems. In this chapter, the main aim is to first provide a survey on real linear matrix equations, and then give recent progress on complex conjugate matrix equations. The recent progress on antilinear systems and related problems will be given in the "Notes and References" parts of Chaps. 11 and 12. At the end of this chapter, an overview of this monograph is presented.

Symbols used in this chapter are now introduced. It should be pointed out that these symbols are also adopted throughout this book. For two integers m ≤ n, the notation I[m, n] denotes the set {m, m + 1, ..., n}. For a square matrix A, we use det A, ρ(A), λ(A), λ_min(A), and λ_max(A) to denote the determinant, the spectral radius, the set of eigenvalues, and the minimal and maximal eigenvalues of A, respectively. The notations Ā, A^T, and A^H denote the conjugate, transpose, and conjugate transpose of the matrix A, respectively. Re(A) and Im(A) denote the real part and imaginary part of the matrix A, respectively. In addition, diag^n_{i=1} A_i is used to denote the block diagonal matrix whose elements in the main block-diagonal are A_i, i ∈ I[1, n]. The symbol "⊗" is used to denote the Kronecker product of two matrices.

1.1 Linear Equations

The most common linear equation may be the following real equation:
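Ax = b,  (1.1)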

where A ∈ R^{m×n} and b ∈ R^m are known, and x ∈ R^n is the vector to be determined. If A is a square matrix, it is well-known that the linear equation (1.1) has a unique solution if and only if the matrix A is invertible, and in this case, the unique solution can be given by x = A^{-1}b. In addition, this unique solution can also be given by

x_i = det A_i / det A,  i ∈ I[1, n],

where x_i is the i-th element of the vector x, and A_i is the matrix formed by replacing the i-th column of A with the column vector b. This is the celebrated Cramer's rule.
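As a quick numerical illustration (not from the original text), the following self-contained Python sketch solves a small instance of (1.1) both by Cramer's rule and by a library solver; the matrix and right-hand side are arbitrary example data.

```python
import numpy as np

# An arbitrary invertible 3x3 system Ax = b, used only for illustration.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

# Cramer's rule: x_i = det(A_i) / det(A), where A_i replaces column i by b.
det_A = np.linalg.det(A)
x_cramer = np.empty(3)
for i in range(3):
    A_i = A.copy()
    A_i[:, i] = b
    x_cramer[i] = np.linalg.det(A_i) / det_A

# Reference solution from a numerically reliable solver.
x_solve = np.linalg.solve(A, b)
print(np.allclose(x_cramer, x_solve))  # True
```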

For the general case, it is well-known that the matrix equation (1.1) has a solution if and only if

rank [A  b] = rank A.

In addition, the solvability of the general equation (1.1) can be characterized in terms of generalized inverses, and the general expression of all the solutions to the equation (1.1) can also be given in terms of generalized inverses.

Definition 1.1 ([206, 208]) Given a matrix A ∈ R^{m×n}, if a matrix X ∈ R^{n×m} satisfies AXA = A, then X is called a generalized inverse of the matrix A.

The generalized inverse may not be unique. An arbitrary generalized inverse of the matrix A is denoted by A^−.

Theorem 1.1 ([208, 297]) Given a matrix A ∈ R^{m×n}, let A^− be an arbitrary generalized inverse of A. Then, the vector equation (1.1) has a solution if and only if
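AA^−b = b; in the standard form of this classical result, all the solutions are then given by x = A^−b + (I − A^−A)z, where z ∈ R^n is an arbitrary vector.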


The analytical solutions of the equation (1.1) given by inverses or generalized inverses have neat expressions, and play important roles in theoretical analysis. However, it has been recognized that the operation of matrix inversion is not numerically reliable. Therefore, many numerical methods are applied in practice to solve linear vector equations. These methods can be classified into two types. One is the transformation approach, in which the matrix A needs to be transformed into some special canonical forms, and the other is the iterative approach, which generates a sequence of vectors that approaches the exact solution. An iterative process may be stopped as soon as an approximate solution is sufficiently accurate in practice.

For the equation (1.1) with m = n, the celebrated iterative methods include Jacobi

iteration and Gauss-Seidel iteration Let

The Gauss-Seidel and Jacobi iterative methods require that the vector equation (1.1)

has a unique solution, and all the entries in the main diagonal of A are nonzero, that

is, a ii = 0, i ∈ I[1, n] It is assumed that the initial values x i (0) of x i , i ∈ I[1, n],

are given Then, the Jacobi iterative method obtains the unique solution of (1.1) bythe following iteration [132]:
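In the standard componentwise form, this iteration reads

x_i(k + 1) = (1/a_ii) (b_i − Σ_{j≠i} a_ij x_j(k)),  i ∈ I[1, n],

or, in matrix form, x(k + 1) = D^{-1}[b − (L + U) x(k)].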


In 1950, David M. Young, Jr. and H. Frankel proposed a variant of the Gauss-Seidel iterative method for solving the equation (1.1) with m = n [156]. This is the so-called successive over-relaxation (SOR) method, by which the elements x_i, i ∈ I[1, n], of x can be computed sequentially by forward substitution:


x(k + 1) = (D + ωL)^{-1}[ωb − (ωU + (ω − 1)D) x(k)].

The choice of the relaxation factor ω is not necessarily easy, and depends on the properties of A. It has been proven that if A is symmetric and positive definite, the SOR method is convergent with 0 < ω < 2.
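The following minimal Python sketch (an illustration, not from the original text) implements the componentwise SOR sweep for a symmetric positive definite system; the stopping tolerance and test matrix are arbitrary choices.

```python
import numpy as np

def sor(A, b, omega=1.5, x0=None, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation for Ax = b (A square, nonzero diagonal)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Forward substitution: entries x[:i] are already updated.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Symmetric positive definite test system, so 0 < omega < 2 guarantees convergence.
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
print(np.allclose(sor(A, b), np.linalg.solve(A, b)))  # True
```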

If A is symmetric and positive definite, the equation (1.1) can be solved by the conjugate gradient method proposed by Hestenes and Stiefel. This method is given in the following theorem.

Theorem 1.2 Given a symmetric and positive definite matrix A ∈ R^{n×n}, the solution of the equation (1.1) can be obtained by the following iteration:
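The recursion referred to here is the classical conjugate gradient iteration. As an illustration (not part of the original theorem statement), a minimal Python sketch of its standard form, with an arbitrary test system, is:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-12):
    """Classical Hestenes-Stiefel CG for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # A-conjugate update of the direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(np.allclose(conjugate_gradient(A, b), np.linalg.solve(A, b)))  # True
```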

1.2 Univariate Linear Matrix Equations

In this section, a simple survey is provided for linear matrix equations with only one unknown matrix variable. Let us start with the Lyapunov matrix equations.

1.2.1 Lyapunov Matrix Equations

The most celebrated univariate matrix equations may be the continuous-time and discrete-time Lyapunov matrix equations, which play vital roles in stability analysis [75, 160] and in controllability and observability analysis of linear systems [3]. The continuous-time and discrete-time Lyapunov matrix equations are respectively in the forms
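(with the sign convention matching the integral and series solutions recalled below)

A^T X + XA = −Q,  (1.3)

A^T XA − X = −Q,  (1.4)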

where A ∈ R^{n×n} and the positive semidefinite matrix Q ∈ R^{n×n} are known, and X is the matrix to be determined. In [103, 104], the robust stability analysis was investigated for linear continuous-time and discrete-time systems, respectively, and the admissible perturbation bounds of the system matrices were given in terms of the solutions of the corresponding Lyapunov matrix equations.


In [145], the continuous-time Lyapunov matrix equation was used to analyze the weighted logarithmic norm of matrices, while in [106], this equation was employed to investigate the so-called generalized positive definite matrix. In [222], the inverse solution of the discrete-time Lyapunov equation was applied to generate q-Markov covers for single-input single-output discrete-time systems. In [317], a relationship between the weighted norm of a matrix and the corresponding discrete-time Lyapunov matrix equation was first established, and then an iterative algorithm was presented to obtain the spectral radius of a matrix by the solutions of a sequence of discrete-time Lyapunov matrix equations.

For the solutions of Lyapunov matrix equations with special forms, many results have been reported in the literature. When A is in the Schwarz form and Q is in a special diagonal form, the solution of the continuous-time Lyapunov matrix equation (1.3) was explicitly given in [12]. When A^T is in a companion form, closed-form solutions have been given in terms of the Routh array. In [19], solutions for the above two Lyapunov matrix equations, which are particularly suitable for symbolic implementation, were proposed for the case where the matrix A is in a companion form. In [24], the following special discrete-time Lyapunov matrix equation was considered:

X − FXF^T = GQG^T,

where the matrix pair (F, G) is in a controllable canonical form. It was shown in [24] that the solution to this equation is the inverse of a Schur-Cohn matrix associated with the characteristic polynomial of F.

When A is Hurwitz stable, the unique solution to the continuous-time Lyapunov matrix equation (1.3) can be given in the following integral form [28]:

X = ∫₀^∞ e^{A^T t} Q e^{At} dt.

Further, let Q = BB^T with B ∈ R^{n×r}, and let the matrix exponential function e^{At} be expressed as a finite sum of the powers of A; by the Cayley-Hamilton theorem, one can write e^{At} = Σ_{i=0}^{n−1} α_i(t) A^i for some scalar functions α_i(t).

When A is Schur stable, the following theorem summarizes some important properties of the discrete-time Lyapunov matrix equation (1.4).

Theorem 1.3 ([212]) If A is Schur stable, then the solution of the discrete-time Lyapunov matrix equation (1.4) exists for any matrix Q, and is given as
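X = Σ_{k=0}^{∞} (A^T)^k Q A^k,

which is the standard convergent series form for the Schur stable case.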

Hammarling's method computes a Cholesky factor of X directly. The basic idea is to apply the triangular structure to solve the equation iteratively. By constructing a new rank-1 updating scheme, an improved Hammarling method was proposed in [220] to accommodate a more general case of Lyapunov matrix equations. In [284], by using a dimension-reduction method, an algorithm was proposed to solve the continuous-time Lyapunov matrix equation (1.3) in controllable canonical forms. In [18], the presented Smith iteration for the discrete-time Lyapunov matrix equation (1.4) was in the form of

X(k + 1) = A^T X(k) A + Q  with  X(0) = Q.
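As an illustration (not from the original text), the Smith iteration above can be coded in a few lines; for a Schur stable A it converges to the solution of (1.4), here checked against a direct Kronecker-product solve.

```python
import numpy as np

def smith_discrete_lyapunov(A, Q, tol=1e-12, max_iter=10_000):
    """Smith iteration X(k+1) = A^T X(k) A + Q for Schur stable A."""
    X = Q.copy()
    for _ in range(max_iter):
        X_next = A.T @ X @ A + Q
        if np.linalg.norm(X_next - X, ord='fro') < tol:
            return X_next
        X = X_next
    return X

# Schur stable test matrix (spectral radius < 1) and positive semidefinite Q.
A = np.array([[0.5, 0.2], [0.0, 0.3]])
Q = np.eye(2)
X = smith_discrete_lyapunov(A, Q)

# Independent check: solve (I - A^T ⊗ A^T) vec(X) = vec(Q).
n = A.shape[0]
vecX = np.linalg.solve(np.eye(n * n) - np.kron(A.T, A.T), Q.reshape(-1))
print(np.allclose(X.reshape(-1), vecX))  # True
```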

Besides the solutions to Lyapunov matrix equations, the bounds of the solutions have also been extensively investigated. In [191], the following result was given on the eigenvalue bounds of the discrete-time Lyapunov matrix equation

X = AXA^T + BB^T,  (1.7)

where A ∈ R^{n×n} and B ∈ R^{n×r} are known matrices, and X is the matrix to be determined.

Theorem 1.4 Given matrices A ∈ R^{n×n} and B ∈ R^{n×r}, for the solution X to the discrete-time Lyapunov matrix equation (1.7) there holds

λ_min(P) C_tr(A, B) C_tr^T(A, B) ≤ X ≤ λ_max(P) C_tr(A, B) C_tr^T(A, B),

where P is the solution to the Lyapunov matrix equation

In [116], lower bounds were established for the minimal and maximal eigenvalues of the solution to the discrete-time Lyapunov equation (1.7).

Recently, parametric Lyapunov matrix equations have been extensively investigated. In [307, 315], some properties of the continuous-time parametric Lyapunov matrix equations were given. In [307], the solution of the parametric Lyapunov equation was applied to semiglobal stabilization of continuous-time linear systems subject to actuator saturation, while in [315] the solution was used to design a state feedback stabilizing law for linear systems with input delay. The discrete-time parametric Lyapunov matrix equations were investigated in [313, 314], and some elegant properties were established.


1.2.2 Kalman-Yakubovich and Normal Sylvester Matrix Equations
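The normal Sylvester matrix equation and the Kalman-Yakubovich matrix equation are, in the forms consistent with the Smith iteration and the transformation discussed at the end of this subsection,

AX − XB = C,  (1.8)

X = AXB + C.  (1.9)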

In the matrix equations (1.8) and (1.9), A ∈ R^{n×n}, B ∈ R^{p×p}, and C ∈ R^{n×p} are the known matrices, and X ∈ R^{n×p} is the matrix to be determined. On the solvability of the normal Sylvester matrix equation (1.8), there exists the following result, which has been well-known as Roth's removal rule.

Theorem 1.5 ([210]) Given matrices A ∈ R^{n×n}, B ∈ R^{p×p}, and C ∈ R^{n×p}, the normal Sylvester matrix equation (1.8) has a solution if and only if the following two partitioned matrices are similar:
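[ A  C ]        [ A  0 ]
[ 0  B ]  and   [ 0  B ].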

The result in the preceding theorem was generalized to the Kalman-Yakubovich matrix equation (1.9) in [238]. This is the following theorem.

Theorem 1.6 Given matrices A ∈ R^{n×n}, B ∈ R^{p×p}, and C ∈ R^{n×p}, the Kalman-Yakubovich matrix equation (1.9) has a solution if and only if there exist nonsingular real matrices S and R such that

The classical Bartels-Stewart algorithm transforms the equation into a triangular system which can be solved efficiently by forward or backward substitutions. To save computation time, the Bartels-Stewart method was extended in [167] to treat the adjoint equations. In [131, 140], the backward stability analysis and backward error analysis of the Bartels-Stewart algorithm were given. In [220], three columnwise direct solver schemes were proposed by modifying the Bartels-Stewart algorithm. In [133], the so-called Hessenberg-Schur algorithm was proposed for the normal Sylvester matrix equation (1.8). This algorithm requires the transformation of the larger of the two matrices A and B, say A, into the upper Hessenberg form, and of the other, B, into the real Schur form. Like the Bartels-Stewart and Hammarling algorithms, the Hessenberg-Schur method is also an example of the transformation method. Differently from the Bartels-Stewart algorithm, the Hessenberg-Schur algorithm only requires the matrix A to be reduced to Hessenberg form. In [205], a numerical algorithm was proposed to solve the normal Sylvester matrix equation (1.8) by orthogonal reduction of the matrix B to a block upper Hessenberg form. In [17], a factored ADI (alternating direction implicit) method was presented. In [139], projection methods that use the extended block Arnoldi process were proposed to solve the low-rank normal Sylvester matrix equations.

For explicit solutions to the normal Sylvester matrix equation (1.8), perhaps the earliest result may be the one in the form of a finite double matrix series for the case of both A and B being in Jordan forms in [183]. In [157], two explicit solutions were established in terms of principal idempotents and nilpotents of the coefficient matrices. In [37], explicit general solutions were given by using eigenprojections. In [171], an infinite series representation of the unique solution was established by converting the normal Sylvester matrix equation (1.8) into a Kalman-Yakubovich matrix equation in the form of (1.9). When B is in a Jordan form, a finite iterative approach was proposed in [92] to solve the equation (1.8). When A is a normalized lower Hessenberg matrix, the normal Sylvester matrix equation (1.8) was investigated in [46]. It was pointed out in [46] that the solution is uniquely determined by its first row. When the right-hand side of the normal Sylvester equation (1.8) is a matrix with rank 1, a simple iterative method was proposed in [233] based on an extension of the Astrom-Jury-Agniel algorithm.

When the coefficient matrices are not in any canonical forms, explicit solutions in the form of X = MN^{-1} were established for the normal Sylvester matrix equation (1.8) in the literature, for example, [152, 182, 194, 258]. In [152, 182, 258], the matrix N is the value of the eigenpolynomial of A at B, while in [194] it is the value of an annihilating polynomial of A at B. In [152], the solution was obtained by applying the Cayley-Hamilton theorem, and M was expressed as the sum of a group of matrices which can be iteratively derived in terms of the coefficient matrices. In [182], the solution was derived based on a Bezout identity related to the mutual primeness of two polynomials, and M was given in terms of the coefficient matrices of adj(sI − A). In [258], the solution was established with the help of Kronecker maps, and M was represented by the controllability matrix and observability matrix. In [194], the solution was constructed based on the similarity of two partitioned matrices, and M was provided by a finite double series form associated with the coefficient matrices. In addition, by applying spectral theory, in [171] the unique solution was


expressed by a contour integration on resolvents of A and B. By applying the Faddeev iterative sequence, a finite double series solution was also derived in [137]. In [47], a closed-form finite series representation of the unique solution was developed. In this solution, some coefficients are closely related to the companion matrices of the characteristic polynomials of matrices A and B. The result in [47] is very elegant, and thus it is provided in the following theorem. Before proceeding, recall that with a polynomial one can associate its companion matrix and the corresponding upper Hankel matrix.

Theorem 1.7 ([47]) Given matrices A ∈ R^{n×n}, B ∈ R^{p×p}, and C = C_A C_B^T ∈ R^{n×p} with C_A ∈ R^{n×q}, let the normal Sylvester matrix equation (1.8) have a unique solution, and let α(s) and β(s) with respective degrees μ and ν be coprime monic polynomials such that


The solution was given by a Toeplitz matrix when A and B were in companion forms. In [137], a finite iterative method for solving the Kalman-Yakubovich matrix equation was given based on the Faddeev sequence. The solution can be quickly obtained if a solution is known for a Kalman-Yakubovich matrix equation with a right-hand side of rank 1. In [305], the unique solutions to the Kalman-Yakubovich and Stein equations were given in terms of controllability matrices, observability matrices and Hankel matrices. In addition, explicit solutions in the form of X = MN^{-1} to the Kalman-Yakubovich matrix equation were established in the literature, for example, [155, 280]. In [155], the solution was obtained by applying a linear operator approach, and M was expressed as a finite double sum in terms of the coefficients of the characteristic polynomial of the matrix A. In [280], the solution was established with the aid of Kronecker maps, and M was expressed in terms of the controllability and observability matrices.

On the numerical approaches for solving the Kalman-Yakubovich matrix equation, the typical methods are the Smith-type iterations. In [18], the presented Smith iteration for the Kalman-Yakubovich matrix equation (1.9) was in the form of X(k + 1) = AX(k)B + C with X(0) = C. A quadratically convergent version of this iteration was given in [218]. This iteration can be written as

X(k + 1) = X(k) + A(k) X(k) B(k),
A(k + 1) = A²(k),
B(k + 1) = B²(k),


with initial values X(0) = C, A(0) = A, and B(0) = B. Obviously, the preceding iteration only works for square A and B. Moreover, the Smith (l) iteration was proposed in [218]. It was shown in [200] that a moderate increase in the number of shifts l can accelerate the convergence significantly. However, it was also observed that the speed of convergence was hardly improved by a further increase of l [134, 200]. To improve the speed of convergence, one can adopt the so-called Smith accelerative iteration [217]. In addition, a new Smith-type iteration named the r-Smith iteration was proposed in [310].
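As an illustration (not from the original text), the quadratically convergent squared Smith iteration above can be sketched as follows for X = AXB + C with ρ(A)ρ(B) < 1; the test matrices are arbitrary.

```python
import numpy as np

def squared_smith(A, B, C, iters=30):
    """Squared Smith iteration for X = A X B + C (quadratic convergence)."""
    X, Ak, Bk = C.copy(), A.copy(), B.copy()
    for _ in range(iters):
        X = X + Ak @ X @ Bk  # accumulates partial sums of sum_i A^i C B^i
        Ak = Ak @ Ak         # A(k+1) = A(k)^2
        Bk = Bk @ Bk         # B(k+1) = B(k)^2
    return X

A = np.array([[0.4, 0.1], [0.0, 0.2]])
B = np.array([[0.3, 0.0], [0.1, 0.5]])
C = np.eye(2)
X = squared_smith(A, B, C)
print(np.allclose(X, A @ X @ B + C))  # True
```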

In fact, the normal Sylvester matrix equation (1.8) can be transformed into a Kalman-Yakubovich matrix equation [171]. For a nonzero real constant a, it follows from (1.8) that

(aI + A) X (aI − B) − (aI − A) X (aI + B) = 2aC.

If a is chosen so that (aI − A)^{-1} and (aI + B)^{-1} exist, then pre- and post-multiplying by these matrices, respectively, gives

(aI − A)^{-1}(aI + A) X (aI − B)(aI + B)^{-1} − X = 2a (aI − A)^{-1} C (aI + B)^{-1}.

Denote

U = (aI − A)^{-1}(aI + A),  V = (aI − B)(aI + B)^{-1}.

It is easily seen that
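the equation above becomes

X = UXV − 2a (aI − A)^{-1} C (aI + B)^{-1},

which is exactly a Kalman-Yakubovich matrix equation of the form (1.9) in the transformed coefficients.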

Thus, some numerical approaches for Kalman-Yakubovich matrix equations can be applied to normal Sylvester matrix equations. A more general transformation approach from normal Sylvester matrix equations to Kalman-Yakubovich matrix equations was presented in [188].

1.2.3 Other Matrix Equations

For linear matrix equations with only one unknown matrix, there are some other types besides those mentioned in the previous subsections. The simplest one may be the following bilateral linear matrix equation:
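AXB = C,  (1.11)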

where A ∈ R^{m×k}, B ∈ R^{l×n}, and C ∈ R^{m×n} are known matrices, and X ∈ R^{k×l} is the unknown matrix. It is well-known that the bilateral matrix equation (1.11) has a solution if and only if
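AA^−CB^−B = C; in this case, in the standard generalized-inverse parametrization, all the solutions can be expressed as

X = A^−CB^− + (I − A^−A)V + W(I − BB^−),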

where V and W are free parametric matrices. Many researchers have given some results on properties of the solution to the matrix equation (1.11). It was shown in [226] that

max_{AXB=C} rank X = min{k, l, k + l + rank C − rank A − rank B},

min_{AXB=C} rank X = rank C.

In addition, in [59, 171, 239] the matrix equation (1.12), where X is the matrix to be determined, was considered. This equation is a general form of the equations (1.3), (1.4), (1.8), (1.9), and (1.11). In [171], the solution was explicitly given by a double integration. In [59], the uniqueness condition of the solution to (1.12) was established in terms of bivariate polynomials. In [239], the unique solution was expressed by a double sum, and the rank of the solution was also estimated.

In [146, 147, 239], the following matrix equation was investigated:

Σ_{i=0}^{ω} A^i X B_i = C,  (1.13)

where A ∈ R^{n×n}, B_i ∈ R^{m×q}, i ∈ I[0, ω], and C ∈ R^{n×q} are known matrices. This equation includes the continuous-time and discrete-time Lyapunov matrix equations (1.3) and (1.4), the normal Sylvester matrix equation (1.8), the Kalman-Yakubovich matrix equation (1.9), and the matrix equations (1.11) and (1.12) as special cases.


Similar to the result in [210] on the normal Sylvester matrix equation (1.8), the following interesting conclusion was obtained in [146].

Theorem 1.8 ([146]) Given matrices A ∈ R^{n×n}, B_i ∈ R^{m×q}, i ∈ I[0, ω], and C ∈ R^{n×q},

In the mathematical literature, the following matrix equation was also extensively investigated:
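A common general form, consistent with the resolvents (sC − A)^{-1} and (sB − D)^{-1} used in [138] below, is

AXB − CXD = E,  (1.15)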

where X is the matrix to be determined, and A, B, C, and D are square matrices with appropriate dimensions. This equation is a third general form of the equations (1.3), (1.4), (1.8), and (1.9). In [129, 190], a matrix transformation approach was established to solve the matrix equation (1.15). In this method, the QZ algorithm was employed to structure the equation in such a way that it can be solved columnwise by a block substitution technique. In [41], the existence condition of the unique solution of the matrix equation (1.15) was given, and a numerical algorithm was proposed to solve it. In [138], the unique solution of the matrix equation (1.15) was given in an explicit form by using the coefficients of the Laurent expansions of (sC − A)^{-1} and (sB − D)^{-1}. In [58], for the matrix equation (1.15) a gradient-based and a least-squares-based iterative algorithm were established for the solution by applying the hierarchical identification principle in [53, 54], respectively. In [52, 58], the hierarchical identification principle was also used to solve the general matrix equation

Σ_{i=1}^{N} A_i X B_i = C,

where X is the unknown matrix.

Besides, some other matrix equations were also investigated. In [298], necessary and sufficient conditions were given for the existence of at least a full-column-rank solution to the matrix equation AX = EXJ. Recently, some mixed-type Lyapunov matrix equations have been intensively investigated. In [289], the mixed-type Lyapunov matrix equation (1.16) with respect to X was investigated, and some sufficient solvability conditions were derived for this matrix equation in terms of inequalities. In [114], a new solvability condition was proposed for the equation (1.16) by using Bhaskar and Lakshmikantham's fixed point theorem, and also an iterative algorithm was constructed to solve this equation. In the analysis of Itô stochastic systems, the following mixed-type Lyapunov matrix equation appears [16, 36]:
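Consistent with the operators L1 and L2 defined below, this equation is of the standard stochastic-Lyapunov form

AX + XA^T + BXB^T + Q = 0  (1.17)

for a given matrix Q.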

In [16], a solvability condition for this equation was given by the spectral radius of a linear operator related to the two operators L1(X) = AX + XA^T and L2(X) = BXB^T. In [114], a new solvability condition and an iterative algorithm were proposed for the matrix equation (1.17).

1.3 Multivariate Linear Matrix Equations

We first provide a survey of linear matrix equations with two unknown matrices in the first two subsections. In the third subsection of this section, a survey will be given on the matrix equations with more than two unknown matrices.

1.3.1 Roth Matrix Equations

The following matrix equation has been considered in [210] by Roth:
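AX − YB = C,  (1.18)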


where A ∈ R^{m×p}, B ∈ R^{k×n}, and C ∈ R^{m×n} are known matrices, and X ∈ R^{p×n} and Y ∈ R^{m×k} are the matrices to be determined. For convenience, the matrix equation (1.18) will be called the Roth matrix equation in this monograph. It was shown in [210] that the Roth matrix equation (1.18) has a solution if and only if the following two partitioned matrices are equivalent:

[ A  C ]        [ A  0 ]
[ 0  B ]  and   [ 0  B ].

An alternative proof of this result was given in [126]. In [40], necessary and sufficient conditions for the solvability of this equation were given by using the singular value decomposition (SVD). The Roth matrix equation was further investigated in [8], and the existence conditions and general solutions were expressed in terms of generalized inverses. In addition, the extremal ranks of the solutions to the Roth matrix equation (1.18) were given in [180]. In [4], the Roth matrix equation (1.18) was revisited based on the rank condition of a partitioned matrix. In the following theorem, some typical results on the Roth matrix equation (1.18) are summarized.

Theorem 1.9 ([8, 180]) Given matrices A ∈ R^{m×p}, B ∈ R^{k×n}, and C ∈ R^{m×n}, the Roth matrix equation (1.18) has a solution if and only if a corresponding condition in terms of generalized inverses holds; in that case, the general solutions are parametrized by two arbitrary matrices W ∈ R^{p×n} and Z ∈ R^{m×k}. Further, the maximal and minimal ranks of a pair of solutions to (1.18) are characterized in [180].

A more general form, the matrix equation AXB + CYD = E (1.19), was also investigated. For convenience, this equation is called the generalized Roth matrix equation. A necessary and sufficient condition for its solvability and a representation of its general solution were established in [9, 149] in terms of generalized inverses. It was shown in [196, 236], by using the canonical correlation decomposition of matrix pairs, that the matrix equation (1.19) has a solution if and only if certain rank conditions hold. Besides, necessary and sufficient conditions for the solvability of (1.19) were given in [40] by using generalized singular value decompositions. In [199], a finite iterative method was proposed for solving the generalized Roth matrix equation (1.19). When the matrix equation is solvable, then, for any initial matrix pair, a solution pair can be obtained within finitely many iteration steps in the absence of round-off errors.

1.3.2 First-Order Generalized Sylvester Matrix Equations

In controller design of linear systems, the following Sylvester matrix equation is often encountered:
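AX + BY = XF,  (1.20)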

where A ∈ R^{n×n}, B ∈ R^{n×r}, and F ∈ R^{p×p} are known matrices, and X ∈ R^{n×p} and Y ∈ R^{r×p} are the matrices to be determined. This matrix equation plays a very vital role in eigenstructure assignment [130, 169], pole assignment [214], and so on. Its dual form is the following so-called Sylvester-observer matrix equation:
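XA + YC = FX,  (1.21)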

where A ∈ R^{n×n}, C ∈ R^{m×n}, and F ∈ R^{p×p} are known matrices, and X ∈ R^{p×n} and Y ∈ R^{p×m} are the matrices to be determined. It is well-known that the existence of a Luenberger observer for linear systems can be characterized based on this equation [60]. A more general form of (1.20) is


AX + BY = EXF.  (1.22)

This generalized Sylvester matrix equation appears in the field of descriptor linear systems [81], and can be used to solve the problems of eigenstructure assignment [64] and output regulation [211] for descriptor linear systems. An important variation of (1.22), called the generalized Sylvester-observer equation, is
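XA + YC = FXE,  (1.23)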

where X and Y need to be determined; it arises in observer design [32, 254] and fault detection [127] of descriptor linear systems. A more general form of the matrix equation (1.22) is the equation (1.24), where M ∈ R^{n×n}, F ∈ R^{p×p}, and T ∈ R^{n×r} are known matrices. Obviously, the matrix equations (1.20)–(1.25) are homogeneous. In fact, their nonhomogeneous counterparts have also been investigated, for example, the so-called regulator equation.

1.3.2.1 Solution Approaches

There have been many numerical algorithms for solving these matrix equations. In [60], an orthogonal-transformation-based algorithm was proposed to solve the Sylvester-observer matrix equation (1.21). In this algorithm, the matrix pair (A, C)

is first transformed via a unitary state-space transformation into a staircase form. With such a transformation, the solution of the Sylvester-observer matrix equation can be obtained from a reduced-dimensional matrix equation in Schur form. The advantage of this approach is that one can use more degrees of freedom in the equation to find a solution matrix with some desired robustness properties, such as the minimum norm. In [45], a computational method for solving the matrix equation (1.21) was proposed when A is large and sparse. This method uses Arnoldi's reduction in the initial process, and allows an arbitrary choice of distinct eigenvalues of the matrix F. The numerical aspects of the method in [45] were discussed in [30], and a strategy was presented for choosing the eigenvalues of F. In [22], in view of the design requirement, the generalized Sylvester matrix equation (1.20) was first changed into a normal Sylvester matrix equation by choosing F as a block upper Hessenberg matrix and fixing Y to a special matrix, and then a parallel algorithm was given by reducing the matrix A to lower Hessenberg form. In [44], an algorithm was proposed to construct an orthogonal solution of the Sylvester-observer matrix equation (1.21) by generalizing the classical Arnoldi method. In [31], a block algorithm was proposed to compute a full-rank solution of (1.21). This algorithm does not require the reduction of the matrix A.

On the numerical solutions of (1.22) and (1.23), only a few results have been reported in the literature. In [32], a singular value decomposition (SVD) based block algorithm was proposed for solving the generalized Sylvester-observer matrix equation (1.23). In this algorithm, the matrix F needs to be chosen in a special block form, and the matrices E, A, and C are not necessarily reduced to any canonical forms. In [33], a new algorithm was proposed to numerically solve the generalized Sylvester-observer matrix equation (1.23). This algorithm can be viewed as a natural generalization of the well-known observer-Hessenberg algorithm in [22]. In this algorithm, the matrices E and A should be respectively transformed into an upper triangular matrix and a block upper Hessenberg matrix by orthogonal transformations. The algorithm in [33] was improved in [34], and was applied to state and velocity estimation in vibrating systems.

The aforementioned numerical approaches for solving the four first-order generalized Sylvester matrix equations (1.20), (1.21), (1.22), and (1.23) can only give one solution each time. However, for several applications it is important to obtain general solutions of these equations. For example, in the robust pole assignment problem one encounters optimization problems in which the criterion function can be expressed in terms of the solutions to a Sylvester matrix equation [164]. When F is in a Jordan form, an attractive analytical and restriction-free solution was presented in [231] for the matrix equation (1.21). Reference [66] proposes two solutions to the Sylvester matrix equation (1.20) for the case where the matrix F is in a Jordan form. One is in a finite iterative form, and the other is in an explicit form. To obtain the explicit solution given in [66], one needs to carry out a right coprime factorization of (sI − A)^{-1}B (when the eigenvalues of the Jordan matrix F are undetermined) or a series of singular value decompositions (when the eigenvalues of F are known). When the matrix F is in a companion form, an explicit solution expressed by a Hankel matrix, a symmetric operator and a controllability matrix was established in [301].


In many applications, for example, model reference control [96, 100] and Luenberger observer design [93], the Sylvester matrix equation in the form of (1.20) with F being an arbitrary matrix is often encountered. Therefore, it is useful and interesting to give complete and explicit solutions by using the general matrix F itself directly. For such a case, a finite series solution to the Sylvester matrix equation (1.20) was proposed in [300]. Some equivalent forms of such a solution were also provided in that paper.

On the generalized Sylvester matrix equation (1.22), an explicit solution was provided in [64] when F is in a Jordan form. This solution is given by a finite iteration. In addition, a direct explicit solution for the matrix equation (1.22) was established in [69] with the help of the right coprime factorization of (sE − A)^{-1}B. The results in [64, 69] can be viewed as a generalization of those in [66]. The case where F is in a general form was firstly investigated in [303], and a complete parametric solution was presented by using the coefficient matrices of a right coprime factorization of (sE − A)^{-1}B. In [247], an explicit solution expressed by a generalized R-controllability matrix and a generalized symmetric operator matrix was also given. In order to obtain this solution, one needs to solve a standard unilateral matrix equation. In [262], an explicit solution was also given for the generalized Sylvester matrix equation (1.22) in terms of the R-controllability matrix, the generalized symmetric operator matrix and the observability matrix by using the Leverrier algorithm for descriptor linear systems. In [202], the generalized Sylvester matrix equation (1.22) was solved by transforming it into a linear vector equation with the help of Kronecker products. Now, we give the results in [69, 303] on the generalized Sylvester matrix equation (1.22). Due to the block diagonal structure of Jordan forms, when the result is stated for the case of Jordan forms, the matrix F is chosen to be a single Jordan block of the form (1.26).

for the case of Jordan forms, the matrix F is chosen to be the following matrix

for all s ∈ C, let F be in the form of ( 1.26 ) Further, let N (s) ∈ R n ×r [s] and

D (s) ∈ R r ×r [s] be a pair of right coprime polynomial matrices satisfying

Then, all the solutions to the generalized Sylvester matrix equation ( 1.22 ) are given by

Theorem 1.11 ([303]) Given matrices E, A ∈ R^{n×n}, B ∈ R^{n×r}, and F ∈ R^{p×p} satisfying (1.27) for all s ∈ C, let

For the matrix equation (1.24), based on the concept of F-coprimeness, the degrees of freedom existing in the general solution to this type of equations were first given in [82], and then a general complete parametric solution in explicit closed form was established based on a generalized right factorization. On the matrix equation (1.25), a neat explicit parametric solution was given in [305] by using the coefficients of the characteristic polynomial and adjoint polynomial matrices. For the matrix equation AX + BY = EXF + R, an explicit parametric solution was established in [78] by elementary transformations of polynomial matrices when F is in a Jordan form, while in [278] the solution of this equation was given based on the solution of a standard matrix equation without any structural restriction on the coefficient matrices.

1.3.2.2 Applications in Control Systems Design

The preceding several kinds of Sylvester matrix equations have been extensively applied to control systems design. In eigenstructure assignment problems of linear systems, the Sylvester matrix equation (1.20) plays vital roles. By using a parametric solution to time-varying Sylvester matrix equations, eigenstructure assignment problems were considered in [105] for time-varying linear systems. In [66], a parametric approach was proposed for state feedback eigenstructure assignment in linear systems based on explicit solutions to the Sylvester matrix equation (1.20). In [63], the robust pole assignment problem was solved via output feedback for linear systems by combining parametric solutions of two Sylvester matrix equations in the form of (1.20) with eigenvalue sensitivity theory. In [73, 224], the output feedback eigenstructure assignment was investigated for linear systems. In [224], the problem was solved by using two coupled Sylvester matrix equations and the concept of (C, A, B)-invariance, while in [73] the problem was handled by using an explicit parametric solution to the Sylvester matrix equation based on singular value decompositions. In [67], the problem of eigenstructure assignment via decentralized output feedback was solved by the parametric solution proposed in [66] for the Sylvester matrix equation (1.20). In [65], a complete parametric approach for eigenstructure assignment via dynamical compensators was proposed based on the explicit solutions of the Sylvester matrix equation (1.20). In [91], the parametric approach in [65] was utilized to deal with the robust control of a basic current-controlled magnetic bearing by an output dynamical compensator.

by an output dynamical compensator

In [39], disturbance-suppressible controllers were designed by using Sylvester equations based on a left eigenstructure assignment scheme. In addition, some observer design problems can also be solved in the framework of explicit parametric solutions of the Sylvester matrix equation (1.20). For example, in [99] the design of Luenberger observers with loop transfer recovery was considered; an eigenstructure assignment approach was proposed in [95] for the design of proportional integral observers for continuous-time linear systems. A further application of parametric solutions to the Sylvester matrix equation (1.20) is in fault detection. In [98], the problem of fault detection in linear systems was investigated based on Luenberger observers. In [101], the problem of fault detection based on proportional integral observers was solved by using the parametric solutions given in [66].

In some design problems of descriptor linear systems, the generalized Sylvester matrix equations (1.22) and (1.23) are very important. In [64], the eigenstructure assignment via state feedback was investigated for descriptor linear systems based on the proposed explicit solution to the generalized Sylvester matrix equation (1.22). The parametric solution of (1.22) proposed in [69] was applied to state feedback eigenstructure assignment and response analysis in [70], output feedback eigenstructure assignment in [68], and eigenstructure assignment via static proportional plus derivative state feedback in [97]. In [291], the obtained iterative solution to the generalized Sylvester matrix equation (1.22) was used to solve the eigenstructure assignment problem for descriptor linear systems. In [94], disturbance decoupling via output feedback in descriptor linear systems was investigated. This problem was tackled by output feedback eigenstructure assignment with the help of parametric solutions to the generalized Sylvester matrix equation (1.22). Also, the parametric solution of (1.22) in [69] was used to design some proportional integral observers for descriptor linear systems in [244–246, 249]. In [251], the parametric solution given in [69] was applied to the design of generalized proportional integral derivative observers for descriptor linear systems. In addition, the result on a parametric solution in [247] to the matrix equation (1.22) has been used to design proportional multi-integral observers for descriptor linear systems in [254] for the discrete-time case and in [263] for the continuous-time case.

on the proposed explicit solution to the generalized Sylvester matrix equation (1.22).The parametric solution of (1.22) proposed in [69] was applied to state feedbackeigenstructure assignment and response analysis in [70], output feedback eigen-structure assignment in [68] and eigenstructure assignment via static proportionalplus derivative state feedback in [97] In [291], the obtained iterative solution to thegeneralized Sylvester matrix equation (1.22) was used to solve the eigenstructureassignment problem for descriptor linear systems In [94], disturbance decouplingvia output feedback in descriptor linear systems was investigated This problem wastackled by output feedback eigenstructure assignment with the help of parametricsolutions to the generalized Sylvester matrix equation (1.22) Also, the parametricsolution of (1.22) in [69] was used to design some proportional integral observersfor descriptor linear systems in [244–246, 249] In [251], the parametric solutiongiven in [69] was applied to the design of generalized proportional integral deriva-tive observers for descriptor linear systems In addition, the result on a parametricsolution in [247] to the matrix equation (1.22) has been used to design proportionalmulti-integral observers for descriptor linear systems in [254] for discrete-time caseand in [263] for continuous-time case

1.3.3 Second-Order Generalized Sylvester Matrix Equations

In analysis and design of second-order linear systems [74, 77, 166], the following matrix equation is often encountered:
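MXF^2 + DXF + KX = BY,  (1.29)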

where X and Y are unknown matrices. A more general form of (1.29) is as follows:

MXF^2 + DXF + KX = B_2YF^2 + B_1YF + B_0Y,  (1.30)

which was proposed in [84] to investigate the problem of generalized eigenstructure assignment in a type of second-order linear systems. The following nonhomogeneous form of (1.30) was also studied in [87]:

MXF^2 + DXF + KX = B_2YF^2 + B_1YF + B_0Y + R,  (1.31)

where R is an additional parameter matrix. In the previous equations, the largest degree of F is 2, and thus they are called second-order generalized Sylvester matrix equations.

When the matrix M is nonsingular and the matrix F is in a Jordan form, two general parametric solutions were established in [74] for the matrix equation (1.29). The first one mainly depends on a series of singular value decompositions, and is thus numerically simple and reliable. The second one utilizes the right factorization, and allows the eigenvalues of F to be set undetermined. The approaches in [74] were generalized in [77] to the case where the matrix M in (1.29) is not required to be nonsingular. The second-order generalized Sylvester matrix equation (1.29) was revisited in [113]. Differently from [74, 77], the matrix F can be an arbitrary square matrix in [113]. By using the coefficient matrices of the right coprime factorization of the system, a complete general explicit solution to the equation (1.29) was given in a finite series form. In [50], a finite iterative algorithm was proposed for the equation (1.29).

In [1], the matrix equation (1.30) with B_2 = 0 was considered, and an explicit solution was given by a finite iteration. It can be found that the approach in [1] is a generalization of that in [64]. This case was also handled in [283], where the matrix F is required to be diagonal, and an explicit solution was given by using the right coprime factorization. The homogeneous equation (1.30) was investigated in [84]. It was first shown that the degrees of freedom in the general solution to this equation are determined by a so-called F-left coprime condition, and then, based on a generalized version of matrix fraction right factorizations, a general complete parametric solution to this equation was established for the case where the matrix F is an arbitrary square matrix. In [87], the nonhomogeneous equation (1.31) was studied. Based on the general complete parametric solution to the homogeneous equation (1.30) and Smith form reduction, a general complete parametric solution to this equation was obtained.
