
DOCUMENT INFORMATION

Title: Fundamentals of Circuits and Filters
Institution: University of Illinois Chicago
Field: Electronics and Electrical Engineering
Type: Handbook
Year of publication: 2009
City: Chicago
Pages: 918
Size: 19.3 MB



Fundamentals of Circuits and Filters

The Circuits and Filters Handbook, Third Edition

Fundamentals of Circuits and Filters

Feedback, Nonlinear, and Distributed Circuits

Analog and VLSI Circuits

Computer Aided Design and Design Automation

Passive, Active, and Digital Filters

Edited by

Wai-Kai Chen


Edited by

Wai-Kai Chen

University of Illinois at Chicago, U.S.A.

Third Edition

Fundamentals of Circuits and Filters


Boca Raton, FL 33487-2742

© 2009 by Taylor & Francis Group, LLC

CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works

Printed in the United States of America on acid-free paper

10 9 8 7 6 5 4 3 2 1

International Standard Book Number-13: 978-1-4200-5887-1 (Hardcover)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data

Fundamentals of circuits and filters / edited by Wai-Kai Chen.


Preface vii

SECTION I Mathematics

Cheryl B. Schrader and Michael K. Sain

Michael K. Sain and Cheryl B. Schrader

Hari C. Reddy, I-Hung Khoo, and P. K. Rajan


SECTION II Circuit Elements, Devices, and Their Models

Stanisław Nowak, Tomasz W. Postupolski, Gordon E. Carlson, and Bogdan M. Wilamowski

Thomas H. Lee, Maria del Mar Hershenson, Sunderarajan S. Mohan, Hirad Samavati, and C. Patrick Yue

Josef A. Nossek

Edwin W. Greeneich and James F. Delansky

David J. Comer and Donald T. Comer

David G. Nairn and Sergio B. Franco

Chris Toumazou and Alison Payne

SECTION III Linear Circuit Analysis

John Choma, Jr.

Ray R. Chen, Artice M. Davis, and Marwan A. Simaan

James A. Svoboda

Pen-Min Lin

Jiri Vlach and John Choma, Jr.

Jiri Vlach

Peter B. Aronhime

Benedykt S. Rodanski and Marwan M. Hassoun

Robert W. Newcomb

Kwong S. Chao

Index IN-1


The purpose of this book is to provide in a single volume a comprehensive reference work covering the broad spectrum of mathematics for circuits and filters; circuit elements, devices, and their models; and linear circuit analysis. This book is written and developed for the practicing electrical engineers in industry, government, and academia. The goal is to provide the most up-to-date information in the field. Over the years, the fundamentals of the field have evolved to include a wide range of topics and a broad range of practice. To encompass such a wide range of knowledge, this book focuses on the key concepts, models, and equations that enable the design engineer to analyze, design, and predict the behavior of large-scale circuits. While design formulas and tables are listed, emphasis is placed on the key concepts and theories underlying the processes.

This book stresses fundamental theories behind professional applications and uses several examples to reinforce this point. Extensive development of theory and details of proofs have been omitted. The reader is assumed to have a certain degree of sophistication and experience. However, brief reviews of theories, principles, and mathematics of some subject areas are given. These reviews have been done concisely and with perception.

The compilation of this book would not have been possible without the dedication and efforts of Professors Yih-Fang Huang and John Choma, Jr., and most of all the contributing authors. I wish to thank them all.

Wai-Kai Chen


Wai-Kai Chen is a professor and head emeritus of the Department of Electrical Engineering and Computer Science at the University of Illinois at Chicago. He received his BS and MS in electrical engineering at Ohio University, where he was later recognized as a distinguished professor. He earned his PhD in electrical engineering at the University of Illinois at Urbana–Champaign.

Professor Chen has extensive experience in education and industry and is very active professionally in the fields of circuits and systems. He has served as a visiting professor at Purdue University, the University of Hawaii at Manoa, and Chuo University in Tokyo, Japan. He was the editor-in-chief of the IEEE Transactions on Circuits and Systems, Series I and II, and the president of the IEEE Circuits and Systems Society, and is the founding editor and the editor-in-chief of the Journal of Circuits, Systems and Computers.

He received the Lester R. Ford Award from the Mathematical Association of America; the Alexander von Humboldt Award from Germany; the JSPS Fellowship Award from the Japan Society for the Promotion of Science; the National Taipei University of Science and Technology Distinguished Alumnus Award; the Ohio University Alumni Medal of Merit for Distinguished Achievement in Engineering Education; the Senior University Scholar Award and the

2000 Faculty Research Award from the University of Illinois at Chicago; and the Distinguished Alumnus Award from the University of Illinois at Urbana–Champaign. He is the recipient of the Golden Jubilee Medal, the Education Award, and the Meritorious Service Award from the IEEE Circuits and Systems Society, and the Third Millennium Medal from the IEEE. He has also received more than a dozen honorary professorship awards from major institutions in Taiwan and China.

A fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the American Association for the Advancement of Science (AAAS), Professor Chen is widely known in the profession for the following works: Applied Graph Theory (North-Holland), Theory and Design of Broadband Matching Networks (Pergamon Press), Active Network and Feedback Amplifier Theory (McGraw-Hill), Linear Networks and Systems (Brooks/Cole), Passive and Active Filters: Theory and Implementations (John Wiley), Theory of Nets: Flows in Networks (Wiley-Interscience), The Electrical Engineering Handbook (Academic Press), and The VLSI Handbook (CRC Press).


San Jose State University, San Jose, California

Donald T. Comer
Department of Electrical and Computer Engineering, Brigham Young University, Provo, Utah

Artice M. Davis
Department of Electrical Engineering, San Jose State University, San Jose, California

James F. Delansky
Department of Electrical Engineering, Pennsylvania State University, University Park, Pennsylvania

John R. Deller, Jr.
Department of Electrical and Computer Engineering, Michigan State University, East Lansing, Michigan

Igor Djokovic
PairGain Technologies, Tustin, California

Sergio B. Franco
Division of Engineering, San Francisco State University, San Francisco, California

Edwin W. Greeneich
Department of Electrical Engineering, Arizona State University, Tempe, Arizona

Marwan M. Hassoun
Department of Electrical and Computer Engineering, Iowa State University, Ames, Iowa

Maria del Mar Hershenson
Center for Integrated Systems, Stanford University, Stanford, California

Yih-Fang Huang
Department of Electrical Engineering, University of Notre Dame, Notre Dame, Indiana

W. Kenneth Jenkins
Department of Electrical Engineering, Pennsylvania State University, University Park, Pennsylvania

I-Hung Khoo
Department of Electrical Engineering, California State University, Long Beach, California

Jelena Kovačević
AT&T Bell Laboratories, Murray Hill, New Jersey

Imperial College of Science, Technology and Medicine

Hari C. Reddy
Department of Electrical Engineering, California State University, Long Beach, California; and Department of Computer Science/Electrical and Control Engineering, National Chiao-Tung University, Taiwan

Benedykt S. Rodanski
Faculty of Engineering, University of Technology, Sydney, New South Wales, Australia

Michael K. Sain
Department of Electrical Engineering, University of Notre Dame, Notre Dame, Indiana

Hirad Samavati
Center for Integrated Systems, Stanford University, Stanford, California

Cheryl B. Schrader
College of Engineering, Boise State University, Boise, Idaho

Marwan A. Simaan
Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, Pennsylvania

James A. Svoboda
Department of Electrical Engineering, Clarkson University, Potsdam, New York

Krishnaiyan Thulasiraman
School of Computer Science, University of Oklahoma, Norman, Oklahoma

Chris Toumazou
Institute of Biomedical Engineering, Imperial College of Science, Technology and Medicine, London, England

P. P. Vaidyanathan
Department of Electrical Engineering, California Institute of Technology, Pasadena, California

Jiri Vlach
Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada

Bogdan M. Wilamowski
Alabama Nano/Micro Science and Technology Center, Department of Electrical and Computer Engineering, Auburn University, Auburn, Alabama

C. Patrick Yue
Center for Integrated Systems, Stanford University, Stanford, California

1.7 Characteristics: Eigenvalues, Eigenvectors, and Singular Values 1-15
1.8 On Linear Systems 1-18
References 1-20

1.1 Introduction

It is only after the engineer masters linear concepts, including linear models and circuit and filter theory, that tackling nonlinear ideas becomes achievable. Students frequently encounter linear methodologies, and bits and pieces of mathematics that aid in problem solution are stored away. Unfortunately, in memorizing the process of finding the inverse of a matrix or of solving a system of equations, the essence of the problem or associated knowledge may be lost. For example, most engineers are fairly comfortable with the concept of a vector space, but have difficulty in generalizing these ideas to the module level. Therefore, the intention of this section is to provide a unified view of key concepts in the theory of linear circuits and filters, to emphasize interrelated concepts, to provide a mathematical reference to the handbook itself, and to illustrate methodologies through the use of many and varied examples.

This chapter begins with a basic examination of vector spaces over fields. In relating vector spaces, the key ideas of linear operators and matrix representations come to the fore. Standard matrix operations are examined, as are the pivotal notions of determinant, inverse, and rank. Next, transformations are shown to determine similar representations, and matrix characteristics such as singular values and eigenvalues are defined. Finally, solutions to algebraic equations are presented in the context of matrices and are related to this introductory chapter on mathematics as a whole.

Standard algebraic notation is introduced first. To denote an element s in a set S, use s ∈ S. Consider two sets S and T. The set of all ordered pairs (s, t) where s ∈ S and t ∈ T is defined as the Cartesian product set S × T. A function f from S into T, denoted by f: S → T, is a subset U of ordered pairs (s, t) ∈ S × T such that for every s ∈ S, one and only one t ∈ T exists such that (s, t) ∈ U. The function evaluated at the element s gives t as a solution (f(s) = t), and each s ∈ S as a first element in U appears exactly once.


A binary operation is a function acting on a Cartesian product set S × T. When T = S, one speaks of a binary operation on S.

1.2 Vector Spaces over Fields

A field F is a nonempty set F and two binary operations, sum (+) and product, such that the following properties are satisfied for all a, b, c ∈ F:

1. Associativity: (a + b) + c = a + (b + c); (ab)c = a(bc)

2. Commutativity: a + b = b + a; ab = ba

3. Distributivity: a(b + c) = (ab) + (ac)

4. Identities: (Additive) 0 ∈ F exists such that a + 0 = a; (Multiplicative) 1 ∈ F exists such that a1 = a

5. Inverses: (Additive) For every a ∈ F, b ∈ F exists such that a + b = 0; (Multiplicative) For every nonzero a ∈ F, b ∈ F exists such that ab = 1

Examples

The set of integers Z with the standard notions of addition and multiplication does not form a field because a multiplicative inverse in Z exists only for ±1. The integers form a commutative ring. Likewise, polynomials in the indeterminate s with coefficients from F form a commutative ring F[s]. If field property 2 also is not available, then one speaks simply of a ring. An additive group is a nonempty set G and one binary operation + satisfying field properties 1, 4, and 5 for addition, that is, associativity and the existence of additive identity and inverse. Moreover, if the binary operation + is commutative (field property 2), then the additive group is said to be abelian. Common notation regarding inverses is that the additive inverse for a ∈ F is b = −a ∈ F; in the multiplicative case, b = a⁻¹ ∈ F.
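As a quick numerical illustration of the inverse axioms (a sketch, not from the text), one can check that every nonzero element of Z_5, the integers modulo 5, has a multiplicative inverse, while in Z only ±1 do:

```python
# Sketch: field property 5 for the finite field Z_5 versus the ring Z.

def mul_inverse_mod(a, p):
    """Return b with (a*b) % p == 1, or None if no inverse exists."""
    for b in range(1, p):
        if (a * b) % p == 1:
            return b
    return None

# Every nonzero element of Z_5 has a multiplicative inverse, so Z_5 is a field.
inverses = {a: mul_inverse_mod(a, 5) for a in range(1, 5)}

# In Z, only 1 and -1 have integer multiplicative inverses, so Z is only a ring.
def has_integer_inverse(a):
    return a in (1, -1)
```

Running the check gives inverses {1: 1, 2: 3, 3: 2, 4: 4}, confirming property 5 holds in Z_5 but fails in Z for every element other than ±1.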

An F-vector space V is a nonempty set V and a field F together with binary operations +: V × V → V and *: F × V → V subject to the following axioms for all elements v, w ∈ V and a, b ∈ F:

1. V and + form an additive abelian group

Elements of V are referred to as vectors, whereas elements of F are scalars. Note that the terminology vector space V over the field F is used often. A module differs from a vector space in only one aspect: the underlying field in a vector space is replaced by a ring. Thus, a module is a direct generalization of a vector space.

When considering vector spaces of n-tuples, + is vector addition defined element by element using the scalar addition associated with F. Multiplication (*), which is termed scalar multiplication, also is defined element by element using multiplication in F. The additive identity in this case is the zero vector (n-tuple of zeros) or null vector, and F^n denotes the set of n-tuples with elements in F, a vector space over F.

A nonempty subset Ṽ ⊆ V is called a subspace of V if for each v, w ∈ Ṽ and every a ∈ F, v + w ∈ Ṽ and a * v ∈ Ṽ. When the context makes things clear, it is customary to suppress the * and write av in place of a * v.

A set of vectors {v1, v2, ..., vm} belonging to an F-vector space V is said to span the vector space if any element v ∈ V can be represented by a linear combination of the vectors vi. That is, scalars a1, a2, ..., am ∈ F exist such that

v = a1v1 + a2v2 + ... + amvm (1.4)

Examples

The n-tuples

e1 = (1, 0, 0, 0, 0)^T, e2 = (0, 1, 0, 0, 0)^T, e3 = (0, 0, 1, 0, 0)^T, e4 = (0, 0, 0, 1, 0)^T, e5 = (0, 0, 0, 0, 1)^T

form the standard basis of F^5; any element of F^5 is a linear combination of these five vectors, so they span the space.

Consider any basis {v1, v2, ..., vn} in an n-dimensional vector space. Every v ∈ V can be represented uniquely by scalars a1, a2, ..., an ∈ F as

v = a1v1 + a2v2 + ... + anvn = [v1 v2 ... vn]a (1.5)

Here, a ∈ F^n is a coordinate representation of v ∈ V with respect to the chosen basis. The reader will be able to discern that each choice of basis will result in another representation of the vector under consideration. Of course, in the applications, some representations are more popular and useful than others.
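To make Equation 1.5 concrete, a short sketch (with a hypothetical basis of R^2, not one from the text) computes the coordinate vector a by solving [v1 v2]a = v with Cramer's rule:

```python
# Sketch: coordinates of v with respect to a basis {v1, v2} of R^2,
# i.e., solve a1*v1 + a2*v2 = v (Equation 1.5) by Cramer's rule.

def coordinates_2d(v1, v2, v):
    det = v1[0] * v2[1] - v2[0] * v1[1]          # det [v1 v2], nonzero for a basis
    a1 = (v[0] * v2[1] - v2[0] * v[1]) / det
    a2 = (v1[0] * v[1] - v[0] * v1[1]) / det
    return (a1, a2)

# v = 2*v1 + 3*v2 should have coordinate vector (2, 3) in this basis.
a = coordinates_2d((1, 1), (1, -1), (5, -1))
```

Changing the basis changes the coordinate vector a, but not the vector v itself, which is exactly the point made above.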

1.3 Linear Operators and Matrix Representations

First, recall the definition of a function f: S → T. Alternate terminology for a function is mapping, operator, or transformation. The set S is called the domain of f, denoted by D(f). The range of f, R(f), is the set of all t ∈ T such that (s, t) ∈ U (f(s) = t) for some s ∈ D(f).

Examples

If R(f) = T, then f is said to be surjective (onto). Loosely speaking, all elements in T are used up. If f: S → T has the property that f(s1) = f(s2) implies s1 = s2, then f is said to be injective (one-to-one). This means that any element in R(f) comes from a unique element in D(f) under the action of f. If a function is both injective and surjective, then it is said to be bijective (one-to-one and onto).

Examples

Now consider an operator L: V → W, where V and W are vector spaces over the same field F. L is said to be a linear operator if the following two properties are satisfied for all v, w ∈ V and for all a ∈ F:

L(av) = aL(v) (1.6)

L(v + w) = L(v) + L(w) (1.7)

Equation 1.6 is the property of homogeneity and Equation 1.7 is the property of additivity. Together they imply the principle of superposition, which may be written as

L(a1v1 + a2v2) = a1L(v1) + a2L(v2) (1.8)

for all v1, v2 ∈ V and a1, a2 ∈ F. If Equation 1.8 is not satisfied, then L is called a nonlinear operator.
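The superposition test of Equation 1.8 is easy to check numerically; the sketch below (with hypothetical maps, not from the text) confirms it for a matrix map and shows it failing for element-wise squaring:

```python
# Sketch: numerical check of superposition (Equation 1.8) on R^2.

def superposition_holds(L, v1, v2, a1, a2):
    lhs = L([a1 * v1[0] + a2 * v2[0], a1 * v1[1] + a2 * v2[1]])   # L(a1 v1 + a2 v2)
    rhs = [a1 * x + a2 * y for x, y in zip(L(v1), L(v2))]         # a1 L(v1) + a2 L(v2)
    return lhs == rhs

matrix_map = lambda v: [2 * v[0] + v[1], v[0] - v[1]]   # L(v) = Mv, linear
square_map = lambda v: [v[0] ** 2, v[1] ** 2]           # element-wise square, nonlinear

linear_ok = superposition_holds(matrix_map, [1, 2], [3, -1], 2, 5)
nonlinear_ok = superposition_holds(square_map, [1, 2], [3, -1], 2, 5)
```

The matrix map passes the test and the squaring map fails it, so the latter is a nonlinear operator in the sense just defined.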


Such an operator is linear.

The null space (kernel) of a linear operator L: V → W is the set

ker L = {v ∈ V such that L(v) = 0} (1.9)

Equation 1.9 defines a vector space. In fact, ker L is a subspace of V. The mapping L is injective if and only if ker L = 0; that is, the only solution in the right member of Equation 1.9 is the trivial solution. In this case, L is also called monic.

The image of a linear operator L: V → W is the set

im L = {w ∈ W such that L(v) = w for some v ∈ V} (1.10)

Clearly, im L is a subspace of W, and L is surjective if and only if im L is all of W. In this case, L is also called epic.

A method of relating specific properties of linear mappings is the exact sequence. Consider a sequence of linear mappings

... → V →(L) W →(L̃) U → ... (1.11)

This sequence is said to be exact at W if im L = ker L̃. A sequence is called exact if it is exact at each vector space in the sequence. Examine the following special cases:

0 → V →(L) W (1.12)

W →(L̃) U → 0 (1.13)

Sequence (Equation 1.12) is exact if and only if L is monic, whereas Equation 1.13 is exact if and only if L̃ is epic.

Further, let L: V → W be a linear mapping between finite-dimensional vector spaces. The rank of L, ρ(L), is the dimension of the image of L. In such a case,

ρ(L) + dim(ker L) = dim V (1.14)

Linear operators commonly are represented by matrices. It is quite natural to interchange these two ideas, because a matrix with respect to the standard bases is indistinguishable from the linear operator it represents. However, insight may be gained by examining these ideas separately. For V and W, n- and m-dimensional vector spaces over F, respectively, consider a linear operator L: V → W. Moreover, let {v1, v2, ..., vn} and {w1, w2, ..., wm} be respective bases for V and W. Then L: V → W can be represented uniquely by the matrix M ∈ F^{m×n}, whose jth column holds the coordinates of L(vj) with respect to {w1, w2, ..., wm}:

L(vj) = m1j w1 + m2j w2 + ... + mmj wm, j = 1, 2, ..., n (1.15)

Matrices arise naturally as a means to represent sets of simultaneous linear equations. For example, in the case of Kirchhoff equations, Chapter 7 shows how incidence, circuit, and cut matrices arise. Or consider a π network having node voltages vi, i = 1, 2 and current sources ii, i = 1, 2 connected across the resistors Ri, i = 1, 2 in the two legs of the π. The bridge resistor is R3. Thus, the unknown node voltages can be expressed in terms of the known source currents in the manner

(1/R1 + 1/R3)v1 − (1/R3)v2 = i1 (1.16)

(1/R2 + 1/R3)v2 − (1/R3)v1 = i2 (1.17)

If the voltages vi and the currents ii are placed into a voltage vector v ∈ R^2 and a current vector i ∈ R^2, respectively, then Equations 1.16 and 1.17 may be rewritten in matrix form as Gv = i, where G is the 2 × 2 matrix of conductance coefficients above.
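For illustration only, with assumed element values R1 = R2 = 1 Ω, R3 = 2 Ω and i1 = 1 A, i2 = 0 A (none of these values appear in the text), the node equations 1.16 and 1.17 can be solved directly:

```python
# Sketch: solving the 2x2 node-conductance system G v = i of
# Equations 1.16-1.17 for assumed element values, via Cramer's rule.

R1, R2, R3 = 1.0, 1.0, 2.0   # ohms (assumed)
i1, i2 = 1.0, 0.0            # amperes (assumed)

# G = [[1/R1 + 1/R3, -1/R3], [-1/R3, 1/R2 + 1/R3]]
g11, g12 = 1 / R1 + 1 / R3, -1 / R3
g21, g22 = -1 / R3, 1 / R2 + 1 / R3

det = g11 * g22 - g12 * g21
v1 = (i1 * g22 - g12 * i2) / det
v2 = (g11 * i2 - i1 * g21) / det
```

Substituting v1 and v2 back into Equations 1.16 and 1.17 reproduces the source currents, confirming the solution.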


Matrix addition, thus, is defined using addition in the field over which the matrix lies. Accordingly, the matrix each of whose entries is 0 ∈ F is an additive identity for the family. One can set up additive inverses along similar lines, which, of course, turn out to be the matrices each of whose elements is the negative of that of the original matrix.

Recall how scalar multiplication was defined in the example of the vector space of n-tuples. Scalar multiplication can also be defined between a field element a ∈ F and a matrix M ∈ F^{m×n} in such a way that the product aM is calculated element-wise: (aM)ij = a mij.

Matrix multiplication is not commutative in general; it is, however, associative and distributive with respect to matrix addition. Under certain conditions, the field F of scalars, the set of matrices over F, and these three operations combine to form an algebra. Chapter 2 examines algebras in greater detail.


It is customary to write I for an identity matrix of appropriate size, without explicitly denoting the number of its rows and columns.

The transpose M^T ∈ F^{n×m} of a matrix M ∈ F^{m×n} is found by interchanging the rows and columns. The first column of M becomes the first row of M^T, the second column of M becomes the second row of M^T, and so on. The notations M^T and M′ are both used. If M = M^T, the matrix is called symmetric. Note that two matrices M, N ∈ F^{m×n} are equal if and only if all respective elements are equal: mij = nij for all i, j. The Hermitian transpose M* ∈ C^{n×m} of M ∈ C^{m×n} is also termed the complex conjugate transpose. To compute M*, form M^T and take the complex conjugate of every element in M^T. The following properties also hold for matrix transposition for all M, N ∈ F^{m×n}, P ∈ F^{n×p}, and a ∈ F: (M^T)^T = M, (M + N)^T = M^T + N^T, (aM)^T = aM^T, and (MP)^T = P^T M^T.

Examples

1.5 Determinant, Inverse, and Rank

Consider square matrices of the form [m11] ∈ F^{1×1}. For these matrices, define the determinant as m11 and establish the notation det([m11]) for this construction. This definition can be used to establish the meaning of det(M), often denoted by |M|, for M ∈ F^{2×2}.

To calculate the determinant of such an M, (1) choose any row i (or column j), (2) multiply each element mik (or mkj) in that row (or column) by its minor and by (−1)^(i+k) (or (−1)^(k+j)), and (3) add these results. Note that the product of the minor with the sign (−1)^(i+k) (or (−1)^(k+j)) is called the cofactor of the element in question. If row 1 is chosen, the determinant of M is found to be m11 m22 − m12 m21, a well-known result. The determinant of 2 × 2 matrices is relatively easy to remember: multiply the two elements along the main diagonal and subtract the product of the other two elements. Note that it makes no difference which row or column is chosen in step 1.


A similar procedure is followed for larger matrices. Determinants of n × n matrices for n > 3 are computed in a similar vein. As in the earlier cases, the determinant of an n × n matrix may be expressed in terms of the determinants of (n − 1) × (n − 1) submatrices; this is termed Laplace's expansion, which may be carried out along any row i or column j of M ∈ F^{n×n}.
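Laplace's expansion translates directly into a recursive routine; this sketch expands along the first row (clear but O(n!) in cost, so suitable only for small matrices):

```python
# Sketch: Laplace's expansion along row 1, implemented recursively for an
# n x n matrix given as a list of rows.

def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]                                  # det([m11]) = m11
    total = 0
    for k in range(n):
        # minor: delete row 0 and column k
        minor = [row[:k] + row[k + 1:] for row in M[1:]]
        # cofactor = sign * minor determinant
        total += (-1) ** k * M[0][k] * det(minor)
    return total

d2 = det([[1, 2], [3, 4]])                  # 1*4 - 2*3
d3 = det([[2, 0, 1], [1, 3, 2], [0, 1, 1]])
```

For the 2 × 2 case the recursion reduces to the diagonal rule stated above; as the text later cautions, straightforward algorithms like this one are not how determinants are computed numerically in practice.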


A matrix with two identical rows (or columns), or with a row or column of zeros, has determinant equal to zero.

Determinants satisfy many interesting relationships. For any n × n matrix, the determinant may be expressed in terms of determinants of (n − 1) × (n − 1) matrices, or first-order minors. In turn, determinants of (n − 1) × (n − 1) matrices may be expressed in terms of determinants of (n − 2) × (n − 2) matrices, or second-order minors, etc. Also, the determinant of the product of two square matrices is equal to the product of the determinants:

det(MN) = det(M) det(N) (1.29)

For any M ∈ F^{n×n} such that |M| ≠ 0, a unique inverse M⁻¹ ∈ F^{n×n} satisfies

MM⁻¹ = M⁻¹M = I (1.30)

For Equation 1.29 one may observe the special case in which N = M⁻¹; then (det(M))⁻¹ = det(M⁻¹). The inverse M⁻¹ may be expressed using determinants and cofactors in the following manner. Form the matrix of cofactors

M̃ = [cij], where cij is the cofactor of mij (1.31)

The transpose of Equation 1.31 is referred to as the adjoint matrix, or adj(M). Then,

M⁻¹ = M̃^T / |M| = adj(M) / |M| (1.32)


Note that Equation 1.32 is satisfied. For a 2 × 2 matrix, the inverse is obtained by interchanging the two elements on the main diagonal, changing the sign on the remaining elements, and dividing by the determinant.
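The 2 × 2 rule just stated, a special case of Equation 1.32, can be sketched with exact rational arithmetic (the example matrix is hypothetical):

```python
# Sketch: 2x2 inverse via the adjoint (Equation 1.32): swap the diagonal,
# negate the off-diagonal, divide by the determinant.

from fractions import Fraction

def inverse_2x2(M):
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    # adj(M) / det(M)
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

Minv = inverse_2x2([[4, 7], [2, 6]])   # det = 10
```

Multiplying the result back against the original matrix returns the identity, as Equation 1.30 requires.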

The rank of a matrix M, ρ(M), is the number of linearly independent columns of M over F, or, using other terminology, the dimension of the image of M. For M ∈ F^{m×n} the number of linearly independent rows and columns is the same, and is less than or equal to the minimum of m and n. If ρ(M) = n, M is of full-column rank; similarly, if ρ(M) = m, M is of full-row rank. A square matrix with all rows (and all columns) linearly independent is said to be nonsingular. In this case, det(M) ≠ 0. The rank of M also may be found from the size of the largest square submatrix with a nonzero determinant. A full-rank matrix has a full-size minor with a nonzero determinant.

The null space (kernel) of a matrix M ∈ F^{m×n} is the set

ker M = {v ∈ F^n such that Mv = 0} (1.33)

Over F, ker M is a vector space with dimension defined as the nullity of M, ν(M). The fundamental theorem of linear equations relates the rank and nullity of a matrix M ∈ F^{m×n} by

ρ(M) + ν(M) = n (1.34)

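The rank-nullity relation of Equation 1.34 can be verified numerically; the sketch below computes ρ(M) by Gauss-Jordan elimination over the rationals (the example matrix is hypothetical):

```python
# Sketch: rank by row reduction, then nullity from Equation 1.34.

from fractions import Fraction

def rank(M):
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(rows):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [x - f * y for x, y in zip(A[i], A[r])]
        r += 1
    return r

M = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]   # second row = 2 * first row
rho = rank(M)
nu = len(M[0]) - rho                     # nullity, by rank-nullity
```

Here one dependent row drops the rank to 2, so the kernel is one-dimensional, consistent with ρ(M) + ν(M) = n = 3.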

For the first phase of the discussion, consider a linear operator that maps a vector space into itself, such as L: V → V, where V is n-dimensional. Once a basis is chosen in V, L will have a unique matrix representation. Choose {v1, v2, ..., vn} and {v̄1, v̄2, ..., v̄n} as two such bases. A matrix M ∈ F^{n×n} may be determined using the first basis, whereas another matrix M̄ ∈ F^{n×n} will result from the latter choice. According to the discussion following Equation 1.15, the ith column of M is the representation of L(vi) with respect to {v1, v2, ..., vn}, and the ith column of M̄ is the representation of L(v̄i) with respect to {v̄1, v̄2, ..., v̄n}. As in Equation 1.4, any basis element vi has a unique representation in terms of the basis {v̄1, v̄2, ..., v̄n}. Define a matrix P ∈ F^{n×n} using the ith column as this representation. Likewise, Q ∈ F^{n×n} may have as its ith column the unique representation of v̄i with respect to {v1, v2, ..., vn}. Either represents a basis change, which is a linear operator. By construction, both P and Q are nonsingular. Such matrices and linear operators are sometimes called basis transformations. Notice that P = Q⁻¹.

Let a_v and ā_v denote the coordinate vectors of v with respect to the two bases, with a_w = Ma_v the coordinates of its image under L. One may traverse the associated sketch counterclockwise so as to reach the lower left corner and set the result equal to that obtained by progressing clockwise to the lower left corner. In equations this is carried out as follows:

ā_w = P a_w = P M a_v = P M Q ā_v = M̄ ā_v (1.35)

Inasmuch as ā_v ∈ F^n is arbitrary, it follows that

M̄ = P M P⁻¹ (1.36)

Sketches that have this type of property, namely the same result when the sketch is traversed from a starting corner to a finishing corner by two paths, are said to be commutative. It is perhaps more traditional to show the vector space F^n instead of the vectors at the corners. Thus, the sketch would be called a commutative diagram of vector spaces and linear operators. M and M̄ are said to be similar because a nonsingular matrix P ∈ F^{n×n} is such that Equation 1.36 is true. The matrix P is then called a similarity transformation. Note that all matrix representations associated with the same linear operator, from a vector space to itself, are similar. Certain choices of bases lead to special forms for the matrices of the operator, as are apparent in the following examples.
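A minimal numerical sketch of Equation 1.36 (with a hypothetical M and P): forming M̄ = PMP⁻¹ leaves the trace and determinant, and hence the characteristic polynomial λ² − tr(M)λ + det(M), unchanged:

```python
# Sketch: a similarity transformation Mbar = P M P^{-1} for 2x2 matrices,
# checking that trace and determinant are invariant.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(P):
    a, b = P[0]
    c, d = P[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

M = [[2.0, 1.0], [0.0, 3.0]]
P = [[1.0, 1.0], [0.0, 1.0]]            # any nonsingular P will do
Mbar = matmul(matmul(P, M), inv2(P))

trace = lambda A: A[0][0] + A[1][1]
det = lambda A: A[0][0] * A[1][1] - A[0][1] * A[1][0]
```

Since the characteristic polynomial of a 2 × 2 matrix is determined by its trace and determinant, this is a concrete instance of the general result proved in Equations 1.43 and 1.44 below.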

Examples


By the Cayley–Hamilton theorem, which states that every matrix satisfies its own characteristic equation, it is always possible to generate such a set of vectors; the matrix representation with respect to these linearly independent vectors is the transpose of what is known as the companion form.

For the second phase of the discussion, select a pair of bases, one for the vector space V and one for the vector space W, and construct the resulting matrix representation M of L: V → W. Another choice of bases exists for V and W, with the property that the resulting matrix M̄ representing L is of the form

M̄ = [I_r 0; 0 0] (1.37)

in which I_r is the r × r identity with r = ρ(L); this is called the normal form. If a matrix M has been transformed into normal form, certain types of key information become available. For example, one knows the rank of M because ρ(M) is the number of rows and columns of the identity in Equation 1.37. Perhaps more importantly, the normal form is easily factored in a fundamental way, and so such a construction is a natural means to construct two factors of minimal rank for a given matrix. The reader is cautioned, however, to be aware that computational linear algebra is quite a different subject from theoretical linear algebra. One common saying is that "if an algorithm is straightforward, then it is not numerically desirable." This may be an exaggeration, but it is well to recognize the implications of finite precision on the computer. Space limitations prevent addressing numerical issues.

Many other thoughts can be expressed in terms of elementary basis transformations. By way of illustration, elementary basis transformations offer an alternative in finding the inverse of a matrix. For a nonsingular matrix M ∈ F^{n×n}, append to M an n × n identity I to form the n × 2n matrix

M̂ = [M I] (1.38)

Perform elementary row transformations on Equation 1.38 to transform M into normal form. Then M⁻¹ will appear in the last n columns of the transformed matrix.
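This classical [M I] procedure can be sketched directly; the routine below row-reduces the augmented matrix with exact rational arithmetic and reads M⁻¹ from the last n columns (the example matrix is hypothetical):

```python
# Sketch of Equation 1.38: reduce [M | I] to [I | M^{-1}] by elementary
# row transformations, assuming M is nonsingular.

from fractions import Fraction

def inverse(M):
    n = len(M)
    # augmented matrix [M | I]
    A = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(M)]
    for c in range(n):
        pivot = next(i for i in range(c, n) if A[i][c] != 0)   # nonsingular M
        A[c], A[pivot] = A[pivot], A[c]
        pv = A[c][c]
        A[c] = [x / pv for x in A[c]]           # scale pivot row
        for i in range(n):
            if i != c and A[i][c] != 0:
                f = A[i][c]
                A[i] = [x - f * y for x, y in zip(A[i], A[c])]
    return [row[n:] for row in A]               # last n columns hold M^{-1}

Minv = inverse([[2, 1], [1, 1]])
```

Each loop step is an elementary basis transformation; the composition of all of them is exactly the matrix M⁻¹, which is why it accumulates in the identity columns.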

1.7 Characteristics: Eigenvalues, Eigenvectors, and Singular Values

A matrix has certain characteristics associated with it. Of these, characteristic values or eigenvalues may be determined through the use of matrix pencils. In general, a matrix pencil may be formed from two matrices M and N ∈ F^{m×n} and an indeterminate λ in the manner

[λN − M] ∈ F[λ]^{m×n} (1.39)

In determining the eigenvalues of a square matrix M ∈ F^{n×n}, one assumes the special case in which N = I ∈ F^{n×n}.

Assume that M is a square matrix over the complex numbers. Then, λ ∈ C is called an eigenvalue of M if some nonzero vector v ∈ C^n exists such that

Mv = λv (1.40)

Any such v ≠ 0 satisfying Equation 1.40 is said to be an eigenvector of M associated with λ. It is easy to see that Equation 1.40 can be rewritten as

(λI − M)v = 0 (1.41)

Because Equation 1.41 is a set of n linear homogeneous equations, a nontrivial solution (v ≠ 0) exists if and only if

Δ(λ) = det(λI − M) = 0 (1.42)

In other words, (λI − M) is singular. Therefore, λ is an eigenvalue of M if and only if it is a solution of Equation 1.42. The polynomial Δ(λ) is the characteristic polynomial and Δ(λ) = 0 is the characteristic equation. Moreover, every n × n matrix has n eigenvalues that may be real, complex, or both, where complex eigenvalues occur in complex-conjugate pairs. If two or more eigenvalues are equal, they are said to be repeated (not distinct). It is interesting to observe that although eigenvalues are unique, eigenvectors are not. Indeed, an eigenvector can be multiplied by any nonzero element of C and still maintain its essential features. Sometimes this lack of uniqueness is resolved by selecting unit length for the eigenvectors with the aid of a suitable norm.

Recall that matrices representing the same operator are similar. One may question if these matrices indeed contain the same characteristic information. To answer this question, examine

det(λI − M̄) = det(λPP⁻¹ − PMP⁻¹) = det(P(λI − M)P⁻¹) (1.43)

= det(P) det(λI − M) det(P⁻¹) = det(λI − M) (1.44)

From Equation 1.44 one may deduce that similar matrices have the same eigenvalues because their characteristic polynomials are equal.

For every square matrix M with distinct eigenvalues, a similar matrix M̄ is diagonal. In particular, the eigenvalues of M, and hence M̄, appear along the main diagonal. Let λ1, λ2, ..., λn be the eigenvalues (all distinct) of M and let v1, v2, ..., vn be corresponding eigenvectors. Then, the vectors {v1, v2, ..., vn} are linearly independent over C. Choose P⁻¹ = Q = [v1 v2 ... vn] as the modal matrix. Because Mvi = λivi, M̄ = PMP⁻¹ is diagonal, as before.

For matrices with repeated eigenvalues, a similar approach may be followed wherein M̄ is block diagonal, which means that matrices occur along the diagonal with zeros everywhere else. Each matrix along the diagonal is associated with an eigenvalue and takes a specific form depending upon the characteristics of the matrix itself. The modal matrix consists of generalized eigenvectors, of which the aforementioned eigenvector is a special case; thus the modal matrix is nonsingular. The matrix M̄ is then in the Jordan canonical form. Space limitations preclude a detailed analysis of such topics here; the reader is directed to Chen (1984) for further development.
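For a 2 × 2 matrix the characteristic equation 1.42 is the quadratic λ² − tr(M)λ + det(M) = 0, so the eigenvalues follow from the quadratic formula; a sketch for a symmetric example (hypothetical values):

```python
# Sketch: eigenvalues of a 2x2 matrix from Equation 1.42,
# det(lambda*I - M) = lambda^2 - tr(M)*lambda + det(M) = 0.

import math

def eigenvalues_2x2(M):
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = tr * tr - 4 * det          # assumed nonnegative (e.g., symmetric M)
    root = math.sqrt(disc)
    return sorted([(tr - root) / 2, (tr + root) / 2])

lams = eigenvalues_2x2([[2.0, 1.0], [1.0, 2.0]])
```

For this symmetric example the eigenvalues are 1 and 3; substituting either back into (λI − M) gives a singular matrix, as Equation 1.42 demands.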


and write the decomposition

M = U Σ V*

in which the m × n matrix Σ has the form Σ = diag(σ1, σ2, …, σr, 0, …, 0), with zeros filling the remaining positions.

The elements σi, called singular values, are related by σ1 ≥ σ2 ≥ ⋯ ≥ σr > 0, and the columns of U (V) are referred to as left (right) singular vectors. Although the unitary matrices U and V are not unique for a given M, the singular values are unique.
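A small numerical sketch of the decomposition (the matrix and the NumPy calls are illustrative assumptions, not from the text): the factors reconstruct M, the singular values come back in decreasing order, and counting those above a small tolerance gives the numerical rank.

```python
import numpy as np

# A 3 x 3 matrix whose third row is the sum of the first two, so its rank is 2.
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

# Full SVD: M = U @ S @ Vh, with Vh the conjugate transpose of V.
U, s, Vh = np.linalg.svd(M)
S = np.diag(s)

# Numerical rank: count singular values above a tolerance tied to the
# largest singular value and machine precision.
tol = max(M.shape) * np.finfo(float).eps * s[0]
rank = int(np.sum(s > tol))
```

In exact arithmetic the third singular value would be zero; in floating point it is merely tiny, which is why a tolerance is needed.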


SVD is useful in the numerical calculation of rank: after performing an SVD, the number r of nonzero singular values in Σ equals the rank of M.

Consider now the matrix–vector equation

w = Mv   (1.50)

In the context of the foregoing discussion, Equation 1.50 represents the action of a linear operator. If the left member is a given vector, in the usual manner, then a first basic issue concerns whether the vector represented by the left member is in the image of the operator or not. If it is in the image, the equation has at least one solution; otherwise the equation has no solution. A second basic issue concerns the kernel of the operator. If the kernel contains only the zero vector, then the equation has at most one solution; otherwise more than one solution can occur, provided that at least one solution exists.
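Both questions can be probed numerically. In this sketch (the matrix and vectors are illustrative assumptions), a least-squares solve recovers the unique solution when w lies in the image, while a vector outside the image admits no exact solution:

```python
import numpy as np

# A 3 x 2 matrix with trivial kernel (its two columns are independent).
M = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])

w_in_image = M @ np.array([3.0, -1.0])   # in the image by construction
w_outside = np.array([1.0, 0.0, 0.0])    # not in the image of M

# Least-squares solves; with a trivial kernel the solution is unique
# whenever it exists at all.
v1 = np.linalg.lstsq(M, w_in_image, rcond=None)[0]
v2 = np.linalg.lstsq(M, w_outside, rcond=None)[0]
```

For w_outside, lstsq still returns the best approximation v2, but M @ v2 cannot equal w_outside, signaling that no solution exists.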

When one thinks of a set of simultaneous equations as a "system" of equations, the intuitive transition to the idea of a linear system is quite natural. In this case the vector in the left member becomes the input to the system, and the solution to Equation 1.50, when it exists and is unique, is the output of the system. Other than being a description in terms of inputs and outputs, as above, linear systems may also be described in terms of sets of other types of equations, such as differential equations or difference equations. When that is the situation, the familiar notion of initial condition becomes an instance of the idea of state, and one must examine the intertwining of states and inputs to give outputs. Then, the idea of Equation 1.50, when each input yields a unique output, is said to define a system function.

If the differential (difference) equations are linear and have constant coefficients, the possibility exists of describing the system in terms of transforms, for example, in the s- or z-domain. This leads to fascinating new interpretations of the ideas of the foregoing sections, this time, for example, over fields of rational functions. Colloquially, such functions are best known as transfer functions.

Associated with systems described in the time-, s-, or z-domain, some characteristics of the system also aid in analysis techniques. Among the most basic of these are the entities termed poles and zeros, which have been linked to the various concepts of system stability. Both poles and zeros may be associated with matrices of transfer functions, and with the original differential or difference equations themselves. A complete and in-depth treatment of the myriad meanings of poles and zeros is a challenging undertaking, particularly in matrix cases. For a survey of the ideas, see Schrader and Sain (1989). However, a great many of the definitions involve such concepts as rank, pencils, eigenvalues, eigenvectors, special matrix forms, vector spaces, and modules—the very ideas sketched out in the sections preceding.
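In the simplest scalar case, the poles and zeros of a transfer function are just polynomial roots. A small NumPy sketch (the sample transfer function is an assumption of this illustration, not taken from the text):

```python
import numpy as np

# A sample scalar transfer function H(s) = (s + 1) / (s^2 + 3 s + 2).
num = [1.0, 1.0]        # numerator coefficients, highest power first
den = [1.0, 3.0, 2.0]   # denominator coefficients

zeros = np.roots(num)   # zeros of H(s): roots of the numerator
poles = np.roots(den)   # poles of H(s): roots of the denominator

# For continuous-time systems, all poles in the open left half-plane is
# the classical condition for asymptotic stability.
is_stable = bool(np.all(np.real(poles) < 0))
```

Here the denominator factors as (s + 1)(s + 2), so the poles are −1 and −2; note that the common factor (s + 1) is exactly the kind of pole–zero subtlety the matrix-case definitions must treat with care.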


Algebraic methods are of growing importance in system theory. Of particular interest is the historical survey by Schrader and Wyman (2008), which describes the module-theoretic approach to zeros of a linear system and the application of these ideas to inverse systems and system design. The longstanding notion that zeros of a linear system become the poles of its inverse system is resolved using these methods. Additionally, for the first time, by employing module-theoretic approaches, a structural algebraic meaning is given to the principle that the number of zeros of a transfer function matrix equals the number of poles. Central to this result is the Fundamental Pole-Zero Exact Sequence, an exact sequence of finite-dimensional vector spaces over the same field, represented by

0 → Z(G) → X(G) ⊕ W(ker G(z)) → W(im G(z)) → 0

where

Z(G) is the global space of zeros of a transfer function matrix G(z)

X(G) is the global space of poles of G(z)

W(·) is the Wedderburn–Forney construction

The sequence is presented here to encourage further investigation into these powerful methods.

One very commonly known idea for representing solutions to Equation 1.50 is Cramer's rule. When m = n, and when M has an inverse, the use of Cramer's rule expresses each unknown variable individually by using a ratio of determinants. Choose the ith unknown vi. Define the determinant Mi as the determinant of a matrix formed by replacing column i in M with w. Then,

vi = Mi / det(M)
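Cramer's rule can be implemented directly as a check (a NumPy sketch with an illustrative 2 × 2 system; fine for small n, but far more expensive than elimination for large systems):

```python
import numpy as np

def cramer_solve(M, w):
    """Solve M v = w by Cramer's rule: v_i = det(M_i) / det(M), where M_i
    is M with its ith column replaced by w.  Assumes M is square and
    nonsingular."""
    d = np.linalg.det(M)
    v = np.empty(len(w))
    for i in range(len(w)):
        Mi = M.copy()
        Mi[:, i] = w          # replace column i with the right-hand side
        v[i] = np.linalg.det(Mi) / d
    return v

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])
w = np.array([5.0, 10.0])
v = cramer_solve(M, w)
```

For this system det(M) = 5, M1 = 5, and M2 = 15, giving v = (1, 3), which agrees with an LU-based solver.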

The concepts presented previously allow for more detailed considerations in the solution of circuit and filter problems, using various approaches outlined in the remainder of this book. Chapter 2 discusses the multiplication of vectors by means of the foundational idea of bilinear operators and matrices. Chapters 3 through 5 on transforms—Fourier, z, and Laplace—provide the tools for analysis by allowing a set of differential or difference equations describing a circuit to be written as a system of linear algebraic equations. Moreover, each transform itself can be viewed as a linear operator, and thus becomes a prime example of the ideas of this chapter. The remaining chapters focus on graph-theoretical approaches to the solution of systems of algebraic equations. From this vantage point, one can see the entire Section I in the context of linear operators, their addition, and multiplication.

A brief treatment cannot deal with all the interesting questions and answers associated with linear operators and matrices. For a more detailed treatment of these standard concepts, see any basic algebra text, for example, Greub (1967).


References

C.-T. Chen, Linear System Theory and Design, New York: CBS College Publishing, 1984.

W. H. Greub, Linear Algebra, New York: Springer-Verlag, 1967.

C. B. Schrader and M. K. Sain, Research on system zeros: A survey, Int. J. Control, 50(4), 1407–1433, Oct. 1989.

C. B. Schrader and B. F. Wyman, Modules of zeros for linear multivariable systems, in Advances in Statistical Control, Algebraic System Theory, and Dynamic Systems Characteristics, C.-H. Won, C. B. Schrader, and A. N. Michel, eds., New York: Birkhauser Boston, Inc., pp. 145–158, 2008.


As useful as it is, the notion of an F-vector space V fails to provide for one of the most important ideas in the applications—the concept of multiplication of vectors. In a vector space one can add vectors and multiply vectors by scalars, but one cannot multiply vectors by vectors. Yet, there are numerous situations in which one faces exactly these operations. Consider, for instance, the cross and dot products from field theory. Even in the case of matrices, the ubiquitous and crucial matrix multiplication is available, when it is defined. The key to the missing element in the discussion lies in the terminology for



matrix operations, which will be familiar to the reader as the matrix algebra. What must occur in order for vector-to-vector multiplication to be available is for the vector space to be extended into an algebra. Unfortunately, the word "algebra" carries a rather imprecise meaning from the most elementary and early exposures, from which it came to signify the collection of operations done in arithmetic, at the time when the operations are generalized to include symbols or literals such as a, b, and c or x, y, and z. Such a notion generally corresponds closely with the idea of a field, F, as defined in Chapter 1, and is not much off the target for an environment of scalars. It may, however, come as a bit of a surprise to the reader that algebra is a technical term, in the same spirit as fields, vector spaces, rings, etc. Therefore, if one is to have available a notion of multiplication of vectors, then it is appropriate to introduce the precise notion of an algebra, which captures the desired idea in an axiomatic sense.

It is probably true that the field is the most comfortable of axiomatic systems for most persons because it corresponds to the earliest and most persistent of calculation notions. However, it is also true that the ring has an intuitive and immediate understanding as well, which can be expressed in terms of the well-known phrase "playing with one arm behind one's back." Indeed, each time an axiom is removed, it is similar to removing one of the options in a game. This adds to the challenge of a game, and leads to all sorts of new strategies. Such is the case for algebras, as is clear from the next definition. What follows is not the most general of possible definitions, but probably that which is most common.

An algebra A is an F-vector space A, which is equipped with a multiplication a1a2 of vectors a1 and a2 in such a manner that it is also a ring. First, addition in the ring is simply addition of vectors in the vector space. Second, a special relationship exists between multiplication of vectors and scalar multiplication in the vector space. If a1 and a2 are vectors in A, and if f is a scalar in F, then the following identity holds:

f(a1a2) = (fa1)a2 = a1(fa2)   (2.1)

Note that the order of a1 and a2 does not change in the above equalities. This must be true because no axiom of commutativity exists for multiplication. The urge to define a symbol for vector multiplication is resisted here so as to keep things as simple as possible. In the same way the notation for scalar multiplication, as introduced in Chapter 1, is suppressed here in the interest of simplicity. Thus, the scalar multiplication can be associated either with the vector product, which lies in A, or with one or other of the vector factors. This is exactly the familiar situation with the matrix algebra.
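The matrix algebra gives a quick numerical check of identity (2.1), and of the absence of a commutativity axiom (a NumPy sketch with illustrative matrices):

```python
import numpy as np

# In the matrix algebra, the "vectors" are matrices and vector
# multiplication is matrix multiplication.
a1 = np.array([[1.0, 2.0],
               [3.0, 4.0]])
a2 = np.array([[0.0, 1.0],
               [1.0, 1.0]])
f = 2.5

# The three expressions in identity (2.1): the scalar may be attached to
# the product or to either factor without reordering the factors.
lhs = f * (a1 @ a2)
mid = (f * a1) @ a2
rhs = a1 @ (f * a2)
```

The identity holds, yet a1 @ a2 and a2 @ a1 differ, illustrating why the order of the factors must be preserved.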

Hidden in the definition of the algebra A above is the precise detail arising from the statement that A is a ring. Associated with that detail is the nature of the vector multiplication represented above with the juxtaposition a1a2. Because all readers are familiar with several notions of vector multiplication, the question arises as to just what constitutes such a multiplication. It turns out that a precise notion for


multiplication can be found in the idea of a bilinear operator. Thus, an alternative description of Section 2.3 is that of vector spaces equipped with vector multiplication. Moreover, one is tempted to inquire whether a vector multiplication exists that is so general in nature that all the other vector multiplications can be derived from it. In fact, this is the case, and the following section sets the stage for introducing such a multiplication.

2.3 Bilinear Operators

Suppose that there are three F-vector spaces: U, V, and W. Recall that U × V is the Cartesian product of U with V, and denotes the set of all ordered pairs, the first from U and the second from V. Now, consider a mapping b from U × V into W. For brevity of notation, this can be written b: U × V → W. The mapping b is a bilinear operator if it satisfies the following pair of conditions

b(f1u1 + f2u2, v) = f1b(u1, v) + f2b(u2, v)   (2.2)
b(u, f1v1 + f2v2) = f1b(u, v1) + f2b(u, v2)   (2.3)

for all f1 and f2 in F, for all u, u1, and u2 in U, and for all v, v1, and v2 in V. The basic idea of the bilinear operator is apparent from this definition. It is an operator with two arguments, having the property that if either of the two arguments is fixed, the operator becomes linear in the remaining argument. A moment's reflection will show that the intuitive operation of multiplication is of this type.
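Conditions 2.2 and 2.3 can be verified numerically for a sample bilinear operator; the form b(u, v) = uᵀAv used below is an illustrative choice, not one from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))

def b(u, v):
    """A sample bilinear operator b: R^3 x R^4 -> R, b(u, v) = u^T A v."""
    return u @ A @ v

u1, u2 = rng.standard_normal(3), rng.standard_normal(3)
v1, v2 = rng.standard_normal(4), rng.standard_normal(4)
f1, f2 = 2.0, -3.0

# Linearity in the first argument with the second argument fixed
# (Equation 2.2) ...
lhs_first = b(f1 * u1 + f2 * u2, v1)
# ... and in the second argument with the first fixed (Equation 2.3).
lhs_second = b(u1, f1 * v1 + f2 * v2)
```

Fixing either argument leaves an ordinary linear map in the other, exactly as the definition requires.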

One of the important features of a bilinear operator is that its image need not be a subspace of W. This is in marked contrast with the image of a linear operator, whose image is always a subspace. This property leads to great interest in the manipulations associated with vector products. At the same time, it brings about a great deal of nontriviality. The best way to illustrate the point is with an example.

Example


the last coordinate is 4h3h2/(9h1), which by virtue of the property 9h1h4 = 4h2h3 is equal to h4, as desired.

It can be shown that the vectors in this class are not closed under addition. For this purpose, simply select a pair of vectors represented by (1, 1, 9, 4) and (4, 9, 1, 1). The sum, (5, 10, 10, 5), does not satisfy the condition.
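The arithmetic of this counterexample is easy to check directly:

```python
# Vectors in the image class satisfy the condition 9*h1*h4 == 4*h2*h3.
def in_image_class(h):
    h1, h2, h3, h4 = h
    return 9 * h1 * h4 == 4 * h2 * h3

# Both vectors satisfy the condition (9*1*4 == 36 == 4*1*9, and likewise
# for the second), but their sum does not (225 != 400), so the class is
# not closed under addition.
p = (1, 1, 9, 4)
q = (4, 9, 1, 1)
total = tuple(x + y for x, y in zip(p, q))
```

Since a subspace must be closed under addition, this confirms that the image is not a subspace.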

It is perhaps not so surprising that the image of b in this example is not a subspace of W. After all, the operator b is nonlinear, when both of its arguments are considered. What may be surprising is that a natural and classical way can be used to circumvent this difficulty, at least to a remarkable degree. The mechanism that is introduced in order to address such a question is the tensor. The reader should bear in mind that many technical personnel have prior notions and insights on this subject emanating from areas such as the theory of mechanics and related bodies of knowledge. For these persons, the authors wish to emphasize that the following treatment is algebraic in character and may exhibit, at least initially, a flavor different from that to which they may be accustomed. This difference is quite typical of the distinctive points of view that often can be found between the mathematical areas of algebra and analysis. Such differences are fortunate insofar as they promote progress in understanding.

2.4 Tensor Product

The notions of tensors and tensor product, as presented in this treatment, have the intuitive meaning of a very general sort of bilinear operator, in fact, the most general such operator. Once again, F-vector spaces U, V, and W are assumed. Suppose that b: U × V → W is a bilinear operator. Then the pair (b, W) is said to be a tensor product of U and V if two conditions are met. The first condition is that W is the smallest F-vector space that contains the image of b. Using alternative terminology, this could be expressed as W being the vector space generated by the image of b. The term "generated" in this expression refers to the formation of all possible linear combinations of elements in the image of b. The second condition relates b to an arbitrary bilinear operator b̄: U × V → X in which X is another F-vector space. To be precise, the second condition states that for every such b̄, a linear operator B̄: W → X exists with the property that

b̄(u, v) = B̄(b(u, v))   (2.8)

for all pairs (u, v) in U × V. Intuitively, this means that the arbitrary bilinear operator b̄ can be factored in terms of the given bilinear operator b, which does not depend upon b̄, and a linear operator B̄ which does depend upon b̄.

The idea of the tensor product is truly remarkable. Moreover, for any bilinear operator b̄, the induced linear operator B̄ is unique. The latter result is easy to see. Suppose that there are two such induced linear operators, e.g., B̄1 and B̄2. It follows immediately that

(B̄1 − B̄2)(b(u, v)) = 0   (2.9)

for all pairs (u, v). However, the first condition of the tensor product assures that the image of b contains a set of generators for W, and thus that (B̄1 − B̄2) must in fact be the zero operator. Therefore, once the tensor product of U and V is put into place, bilinear operations are in a one-to-one correspondence with linear operations. This is the essence of the tensor idea, and a very significant way to parameterize product operations in terms of matrices. In a certain sense, then, the idea of this chapter is to relate the fundamentally nonlinear product operation to the linear ideas of Chapter 1. That this is possible is, of course, classical; nonetheless, it remains a relatively novel idea for numerous workers in the applications. Intuitively, what happens here is that the idea of product is abstracted in the bilinear operator b, with all the remaining details placed in the realm of the induced linear operator B̄.

When a pair (b, W) satisfies the two conditions above, and is therefore a tensor product for U and V, it is customary to replace the symbol b with the more traditional symbol ⊗. However, in keeping with the notion that ⊗ represents a product and not just a general mapping, it is common to write u ⊗ v in place of the more correct, but also more cumbersome, ⊗(u, v). Along the same lines, the space W is generally denoted U ⊗ V. Thus, a tensor product is a pair (U ⊗ V, ⊗). The former is called the tensor product of U with V, and ⊗ is loosely termed the tensor product. Clearly, ⊗ is the most general sort of product possible in the present situation because all other products can be expressed in terms of it by means of linear operators B̄. Once again, the colloquial use of the word "product" is to be identified with the more precise algebraic notion of bilinear operation. In this way the tensor product becomes a sort of "grandfather" for all vector products. Tensor products can be constructed for arbitrary vector spaces. They are not, however, unique. For instance, if U ⊗ V has finite dimension, then W obviously can be replaced by any other F-vector space of the same dimension, and ⊗ can be adjusted by a vector space isomorphism. Here, the term "isomorphism" denotes an invertible linear operator between the two spaces in question. It can also be said that the two tensor product spaces U ⊗ V and W are isomorphic to each other. Whatever the terminology chosen, the basic idea is that the two spaces are essentially the same within the axiomatic framework in use.

2.5 Basis Tensors

Attention is now focused on the case in which U and V are finite-dimensional vector spaces over the field F. Suppose that {u1, u2, …, um} is a basis for U and {v1, v2, …, vn} is a basis for V. Consider the following vectors

u1 ⊗ v1, u1 ⊗ v2, …, u1 ⊗ vn, u2 ⊗ v1, …, um ⊗ vn   (2.10)

which can be represented in the manner {ui ⊗ vj, i = 1, 2, …, m; j = 1, 2, …, n}. These vectors form a basis for the vector space U ⊗ V. To understand the motivation for this, note that vectors u in U and v in V, respectively, can be written uniquely in the following forms

u = f1u1 + f2u2 + ⋯ + fmum   (2.11)

v = g1v1 + g2v2 + ⋯ + gnvn   (2.12)

so that

u ⊗ v = Σi Σj figj (ui ⊗ vj),  i = 1, 2, …, m;  j = 1, 2, …, n   (2.13)

which establishes that the proposed basis vectors certainly span the image of ⊗, and thus that they span the tensor product space U ⊗ V. It also can be shown that the proposed set of basis vectors is linearly independent. However, in the interest of brevity for this summary exposition, the details are omitted.

From this point onward, inasmuch as the symbol ⊗ has replaced b, it will be convenient to use b in place of b̄ and B in place of B̄. It is hoped that this leads to negligible confusion. Thus, in the sequel b refers simply to a bilinear operator and B to its induced linear counterpart.
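Relative to these bases, Equation 2.13 says that the coordinates of u ⊗ v are the mn products figj. Numerically, this is just the Kronecker product of the two coordinate vectors (a NumPy sketch with illustrative coordinates):

```python
import numpy as np

# Coordinates of u relative to {u1, u2} and of v relative to {v1, v2, v3}.
f = np.array([2.0, -1.0])        # u = 2*u1 - 1*u2
g = np.array([1.0, 3.0, 0.5])    # v = 1*v1 + 3*v2 + 0.5*v3

# By Equation 2.13, the coordinates of u (x) v relative to the basis
# {ui (x) vj} are the products fi * gj, which is exactly np.kron(f, g).
coords = np.kron(f, g)
```

The result lists the products in the same order as the basis in Equation 2.10: first all pairs with u1, then all pairs with u2.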

Example


Observe that this can be put into a more transparent form.

Example

In order to generalize the preceding example, one has only to be more general in describing the matrix


