DOCUMENT INFORMATION

Title: Analysis and Simulation of Chaotic Systems
Author: Frank C. Hoppensteadt
Publisher: Springer
Edition: Second
Type: Book
Pages: 331
File size: 3.67 MB



Analysis and Simulation of Chaotic Systems, Second Edition

Frank C. Hoppensteadt

Springer


I thank all of the teachers and students whom I have encountered, and I thank my parents, children, and pets for the many insights into life and mathematics that they have given me—often unsolicited. I have published parts of this book in the context of research or expository papers done with co-authors, and I thank them for the opportunity to have worked with them. The work presented here was mostly derived by others, although parts of it I was fortunate enough to uncover for the first time. My work has been supported by various agencies and institutions including the University of Wisconsin, New York University and the Courant Institute of Mathematical Sciences, the University of Utah, Michigan State University, Arizona State University, the National Science Foundation, ARO, ONR, and the AFOSR. This investment in me has been greatly appreciated, and the work in this book describes some outcomes of that investment. I thank these institutions for their support.

The preparation of this second edition was made possible through the help of Linda Arneson and Tatyana Izhikevich. My thanks to them for their help.

Contents

1.1 Examples of Linear Oscillators 1

1.1.1 Voltage-Controlled Oscillators 2

1.1.2 Filters 3

1.1.3 Pendulum with Variable Support Point 4

1.2 Time-Invariant Linear Systems 5

1.2.1 Functions of Matrices 6

1.2.2 exp(At) 7

1.2.3 Laplace Transforms of Linear Systems 9

1.3 Forced Linear Systems with Constant Coefficients 10

1.4 Linear Systems with Periodic Coefficients 12

1.4.1 Hill’s Equation 14

1.4.2 Mathieu’s Equation 15

1.5 Fourier Methods 18

1.5.1 Almost-Periodic Functions 18

1.5.2 Linear Systems with Periodic Forcing 21

1.5.3 Linear Systems with Quasiperiodic Forcing 22

1.6 Linear Systems with Variable Coefficients: Variation of Constants Formula 23

1.7 Exercises 24


2.1 Systems of Two Equations 28

2.1.1 Linear Systems 28

2.1.2 Poincaré and Bendixson's Theory 29

2.1.3 x′′ + f(x)x′ + g(x) = 0 32

2.2 Angular Phase Equations 35

2.2.1 A Simple Clock: A Phase Equation on T1 37

2.2.2 A Toroidal Clock: Denjoy’s Theory 38

2.2.3 Systems of N (Angular) Phase Equations 40

2.2.4 Equations on a Cylinder: PLL 40

2.3 Conservative Systems 42

2.3.1 Lagrangian Mechanics 42

2.3.2 Plotting Phase Portraits Using Potential Energy 43

2.3.3 Oscillation Period of x′′ + Uₓ(x) = 0 46

2.3.4 Active Transmission Line 47

2.3.5 Phase-Amplitude (Angle-Action) Coordinates 49

2.3.6 Conservative Systems with N Degrees of Freedom 52

2.3.7 Hamilton–Jacobi Theory 53

2.3.8 Liouville’s Theorem 56

2.4 Dissipative Systems 57

2.4.1 van der Pol’s Equation 57

2.4.2 Phase Locked Loop 57

2.4.3 Gradient Systems and the Cusp Catastrophe 62

2.5 Stroboscopic Methods 65

2.5.1 Chaotic Interval Mappings 66

2.5.2 Circle Mappings 71

2.5.3 Annulus Mappings 74

2.5.4 Hadamard’s Mappings of the Plane 75

2.6 Oscillations of Equations with a Time Delay 78

2.6.1 Linear Spline Approximations 80

2.6.2 Special Periodic Solutions 81

2.7 Exercises 83

3 Stability Methods for Nonlinear Systems 91

3.1 Desirable Stability Properties of Nonlinear Systems 92

3.2 Linear Stability Theorem 94

3.2.1 Gronwall’s Inequality 95

3.2.2 Proof of the Linear Stability Theorem 96

3.2.3 Stable and Unstable Manifolds 97

3.3 Liapunov’s Stability Theory 99

3.3.1 Liapunov’s Functions 99

3.3.2 UAS of Time-Invariant Systems 100

3.3.3 Gradient Systems 101

3.3.4 Linear Time-Varying Systems 102

3.3.5 Stable Invariant Sets 103


3.4 Stability Under Persistent Disturbances 106

3.5 Orbital Stability of Free Oscillations 108

3.5.1 Definitions of Orbital Stability 109

3.5.2 Examples of Orbital Stability 110

3.5.3 Orbital Stability Under Persistent Disturbances 111

3.5.4 Poincaré's Return Mapping 111

3.6 Angular Phase Stability 114

3.6.1 Rotation Vector Method 114

3.6.2 Huygens' Problem 116

3.7 Exercises 118

4 Bifurcation and Topological Methods 121

4.1 Implicit Function Theorems 121

4.1.1 Fredholm’s Alternative for Linear Problems 122

4.1.2 Nonlinear Problems: The Invertible Case 126

4.1.3 Nonlinear Problems: The Noninvertible Case 128

4.2 Solving Some Bifurcation Equations 129

4.2.1 q = 1: Newton’s Polygons 130

4.3 Examples of Bifurcations 132

4.3.1 Exchange of Stabilities 132

4.3.2 Andronov–Hopf Bifurcation 133

4.3.3 Saddle-Node on Limit Cycle Bifurcation 134

4.3.4 Cusp Bifurcation Revisited 134

4.3.5 Canonical Models and Bifurcations 135

4.4 Fixed-Point Theorems 136

4.4.1 Contraction Mapping Principle 136

4.4.2 Wazewski’s Method 138

4.4.3 Sperner’s Method 141

4.4.4 Measure-Preserving Mappings 142

4.5 Exercises 142

5 Regular Perturbation Methods 145

5.1 Perturbation Expansions 147

5.1.1 Gauge Functions: The Story of o, O 147

5.1.2 Taylor’s Formula 148

5.1.3 Padé's Approximations 148

5.1.4 Laplace’s Methods 150

5.2 Regular Perturbations of Initial Value Problems 152

5.2.1 Regular Perturbation Theorem 152

5.2.2 Proof of the Regular Perturbation Theorem 153

5.2.3 Example of the Regular Perturbation Theorem 155

5.2.4 Regular Perturbations for 0 ≤ t < ∞ 155

5.3 Modified Perturbation Methods for Static States 157

5.3.1 Nondegenerate Static-State Problems Revisited 158

5.3.2 Modified Perturbation Theorem 158


5.3.3 Example: q = 1 160

5.4 Exercises 161

6 Iterations and Perturbations 163

6.1 Resonance 164

6.1.1 Formal Perturbation Expansion of Forced Oscillations 166

6.1.2 Nonresonant Forcing 167

6.1.3 Resonant Forcing 170

6.1.4 Modified Perturbation Method for Forced Oscillations 172

6.1.5 Justification of the Modified Perturbation Method 173

6.2 Duffing's Equation 174

6.2.1 Modified Perturbation Method 175

6.2.2 Duffing’s Iterative Method 176

6.2.3 Poincaré–Lindstedt Method 177

6.2.4 Frequency-Response Surface 178

6.2.5 Subharmonic Responses of Duffing’s Equation 179

6.2.6 Damped Duffing’s Equation 181

6.2.7 Duffing’s Equation with Subresonant Forcing 182

6.2.8 Computer Simulation of Duffing’s Equation 184

6.3 Boundaries of Basins of Attraction 186

6.3.1 Newton’s Method and Chaos 187

6.3.2 Computer Examples 188

6.3.3 Fractal Measures 190

6.3.4 Simulation of Fractal Curves 191

6.4 Exercises 194

7 Methods of Averaging 195

7.1 Averaging Nonlinear Systems 199

7.1.1 The Nonlinear Averaging Theorem 200

7.1.2 Averaging Theorem for Mean-Stable Systems 202

7.1.3 A Two-Time Scale Method for the Full Problem 203

7.2 Highly Oscillatory Linear Systems 204

7.2.1 dx/dt = εB(t)x 205

7.2.2 Linear Feedback System 206

7.2.3 Averaging and Laplace’s Method 207

7.3 Averaging Rapidly Oscillating Difference Equations 207

7.3.1 Linear Difference Schemes 210

7.4 Almost Harmonic Systems 214

7.4.1 Phase-Amplitude Coordinates 215

7.4.2 Free Oscillations 216

7.4.3 Conservative Systems 219

7.5 Angular Phase Equations 223

7.5.1 Rotation Vector Method 224


7.5.2 Rotation Numbers and Period Doubling Bifurcations 227

7.5.3 Euler's Forward Method for Numerical Simulation 227

7.5.4 Computer Simulation of Rotation Vectors 229

7.5.5 Near Identity Flows on S¹ × S¹ 231

7.5.6 KAM Theory 233

7.6 Homogenization 234

7.7 Computational Aspects of Averaging 235

7.7.1 Direct Calculation of Averages 236

7.7.2 Extrapolation 237

7.8 Averaging Systems with Random Noise 238

7.8.1 Axioms of Probability Theory 238

7.8.2 Random Perturbations 241

7.8.3 Example of a Randomly Perturbed System 242

7.9 Exercises 243

8 Quasistatic-State Approximations 249

8.1 Some Geometrical Examples of Singular Perturbation Problems 254

8.2 Quasistatic-State Analysis of a Linear Problem 257

8.2.1 Quasistatic Problem 258

8.2.2 Initial Transient Problem 261

8.2.3 Composite Solution 263

8.2.4 Volterra Integral Operators with Kernels Near δ 264

8.3 Quasistatic-State Approximation for Nonlinear Initial Value Problems 264

8.3.1 Quasistatic Manifolds 265

8.3.2 Matched Asymptotic Expansions 268

8.3.3 Construction of QSSA 270

8.3.4 The Case T = ∞ 271

8.4 Singular Perturbations of Oscillations 273

8.4.1 Quasistatic Oscillations 274

8.4.2 Nearly Discontinuous Oscillations 279

8.5 Boundary Value Problems 281

8.6 Nonlinear Stability Analysis near Bifurcations 284

8.6.1 Bifurcating Static States 284

8.6.2 Nonlinear Stability Analysis of Nonlinear Oscillations 287

8.7 Explosion Mode Analysis of Rapid Chemical Reactions 289

8.8 Computational Schemes Based on QSSA 292

8.8.1 Direct Calculation of x0(h), y0(h) 293

8.8.2 Extrapolation Method 294

8.9 Exercises 295


Introduction

This book describes aspects of mathematical modeling, analysis, computer simulation, and visualization that are widely used in the mathematical sciences and engineering.

Scientists often use ordinary language models to describe observations of physical and biological phenomena. These are precise where data are known and appropriately imprecise otherwise. Ordinary language modelers carve away chunks of the unknown as they collect more data. On the other hand, mathematical modelers formulate minimal models that produce results similar to what is observed. This is the Ockham's razor approach, where simpler is better, with the caution from Einstein that "Everything should be made as simple as possible, but not simpler."

The success of mathematical models is difficult to explain. The same tractable mathematical model describes such diverse phenomena as when an epidemic will occur in a population or when chemical reactants will begin an explosive chain-branched reaction, and another model describes the motion of pendulums, the dynamics of cryogenic electronic devices, and the dynamics of muscle contractions during childbirth.

Ordinary language models are necessary for the accumulation of mental knowledge, and mathematical models organize this information, test logical consistency, predict numerical outcomes, and identify mechanisms and parameters that characterize them.

Often mathematical models are quite complicated, but simple approximations can be used to extract important information from them. For example, the mechanisms of enzyme reactions are complex, but they can be described by a single differential equation (the Michaelis–Menten equation [14]) that identifies two useful parameters (the saturation constant and uptake velocity) that are used to characterize reactions. So this modeling and analysis identifies what are the critical data to collect. Another example is Semenov's theory of explosion limits [128], in which a single differential equation can be extracted from over twenty chemical rate equations modeling chain-branched reactions to describe threshold combinations of pressure and temperature that will result in an explosion.
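The Michaelis–Menten reduction mentioned above can be sketched numerically. The standard single-equation form for substrate depletion is dS/dt = −Vmax·S/(Km + S); the parameter values below are illustrative, not taken from the text.

```python
def michaelis_menten(s0, vmax, km, dt=0.01, t_end=10.0):
    """Forward-Euler integration of dS/dt = -vmax*S/(km + S)."""
    s, t = s0, 0.0
    while t < t_end:
        s += dt * (-vmax * s / (km + s))
        t += dt
    return s

# Substrate decays monotonically toward zero; vmax (the uptake velocity)
# and km (the saturation constant) are the two characterizing parameters.
s_final = michaelis_menten(s0=1.0, vmax=1.0, km=0.5)
```

When S is large compared with Km the decay is nearly linear in time (zero-order kinetics); when S is small it is nearly exponential, which is the qualitative behavior the two parameters capture.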

Mathematical analysis includes geometrical forms, such as hyperbolic structures, phase planes, and isoclines, and analytical methods that derive from calculus and involve iterations, perturbations, and integral transforms. Geometrical methods are elegant and help us visualize dynamical processes, but analytical methods can deal with a broader range of problems, for example, those including random perturbations and forcing over unbounded time horizons. Analytical methods enable us to calculate precisely how solutions depend on data in the model.

As humans, we occupy regions in space and time that are between very small and very large and very slow and very fast. These intermediate space and time scales are perceptible to us, but mathematical analysis has helped us to perceive scales that are beyond our senses. For example, it is very difficult to "understand" electric and magnetic fields. Instead, our intuition is based on solutions to Maxwell's equations. Fluid flows are quite complicated and usually not accessible to experimental observations, but our knowledge is shaped by the solutions of the Navier–Stokes equations. We can combine these multiple time and space scales together with mathematical methods to unravel such complex dynamics. While realistic mathematical models of physical or biological phenomena can be highly complicated, there are mathematical methods that extract simplifications to highlight and elucidate the underlying process. In some cases, engineers use these representations to design novel and useful things.

We also live with varying levels of logical rigor in the mathematical sciences that range from complete detailed proofs in sharply defined mathematical structures to using mathematics to probe other structures where its validity is not known.

The mathematical methods presented and used here grew from several different scientific sources. Work of Newton and Leibniz was partly rigorous and partly speculative. The Göttingen school of Gauss, Klein, Hilbert, and Courant was carried forward in the U.S. by Fritz John, James Stoker, and Kurt Friedrichs, and they and their students developed many important ideas that reached beyond rigorous differential equation models and studied important problems in continuum mechanics and wave propagation. Russian and Ukrainian workers led by Liapunov, Bogoliubov, Krylov, and Kolmogorov developed novel approaches to problems of bifurcation and stability theory, statistical physics, random processes, and celestial mechanics. Fourier's and Poincaré's work on mathematical physics and dynamical systems continues to provide new directions for us, and the U.S. mathematicians G. D. Birkhoff and N. Wiener and their students have contributed to these topics as well. Analytical and geometrical perturbation and iteration methods were important to all of this work, and all involved different levels of rigor.

Computer simulations have enabled us to study models beyond the reach of mathematical analysis. For example, mathematical methods can provide a language for modeling and some information, such as existence, uniqueness, and stability, about their solutions. And then well-executed computer algorithms and visualizations provide further qualitative and quantitative information about solutions. The computer simulations presented here describe and illustrate several critical computer experiments that produced important and interesting results.

Analysis and computer simulations of mathematical models are important parts of understanding physical and biological phenomena. The knowledge created in modeling, analysis, simulation, and visualization contributes to revealing the secrets they embody.

The first two chapters present background material for later topics in the book, and they are not intended to be complete presentations of Linear Systems (Chapter 1) and Dynamical Systems (Chapter 2). There are many excellent texts and research monographs dealing with these topics in great detail, and the reader is referred to them for rigorous developments and interesting applications. In fact, to keep this book to a reasonable size while still covering the wide variety of topics presented here, detailed proofs are not usually given, except in cases where there are minimal notational investments and the proofs give readily accessible insight into the meaning of the theorem. For example, I see no reason to present the details of proofs for the Implicit Function Theorem or for the main results of Liapunov's stability theory. Still, these results are central to this book. On the other hand, the complete proofs of some results, like the Averaging Theorem for Difference Equations, are presented in detail.

The remaining chapters of this book present a variety of mathematical methods for solving problems that are sorted by behavior (e.g., bifurcation, stability, resonance, rapid oscillations, and fast transients). However, interwoven throughout the book are topics that reappear in many different, often surprising, incarnations. For example, the cusp singularity and the property of stability under persistent disturbances arise often. The following list describes cross-cutting mathematical topics in this book.

1. Perturbations. Even the words used here cause some problems. For example, perturb means to throw into confusion, but its purpose here is to relate to a simpler situation. While the perturbed problem is confused, the unperturbed problem should be understandable. Perturbations usually involve the identification of parameters, which unfortunately is often misunderstood by students to be perimeters from their studies of geometry. Done right, parameters should be dimensionless numbers that result from the model, such as ratios of eigenvalues of linear problems. Parameter iden-


2. Iterations. Iterations are mathematical procedures that begin with a state vector and change it according to some rule. The same rule is applied to the new state, and so on, and a sequence of iterates of the rule results. Fra Fibonacci in 1202 introduced a famous iteration that describes the dynamics of an age-structured population. In Fibonacci's case, a population was studied, geometric growth was deduced, and the results were used to describe the compounding of interest on investments.

Several iterations are studied here. First, Newton's method, which continues to be the paradigm for iteration methods, is studied. Next, we study Duffing's iterative method and compare the results with similar ones derived using perturbation methods. Finally, we study chaotic behavior that often occurs when quite simple functions are iterated. There has been a controversy of sorts between iterationists and perturbationists; each has its advocates and each approach is useful.
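Newton's method, named above as the paradigm iteration, can be stated in a few lines. The sketch below applies it to x² − 2 = 0, an illustrative choice rather than an example worked in the text.

```python
def newton(f, fprime, x0, steps=6):
    """Apply Newton's rule x -> x - f(x)/f'(x) repeatedly."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

# Each iterate roughly doubles the number of correct digits (quadratic
# convergence); six steps from x0 = 1 reach sqrt(2) to machine precision.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```

The same rule applied from other starting points, or to other polynomials, is exactly the setting in which the chaotic basin boundaries of Section 6.3.1 arise.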

3. Chaos. The term was introduced in its present connotation by Yorke and Li in 1976 [101, 48]. It is not a precisely defined concept, but it occurs in various physical and religious settings. For example, Boltzmann used it in a sense that eventually resulted in ergodic theories for dynamical systems and random processes, and Poincaré had a clear image of the chaotic behavior of dynamical systems that occurs when stable and unstable manifolds cross. The book of Genesis begins with chaos, and philosophical discussions about it and randomness continue to this day. For the most part, the word chaos is used here to indicate behavior of solutions to mathematical models that is highly irregular and usually unexpected. We study several problems that are known to exhibit chaotic behavior and present methods for uncovering and describing this behavior. Related to chaotic systems are the following:

a. Almost periodic functions and generalized Fourier analysis [11, 140].

b. Poincaré's stroboscopic mappings, which are based on snapshots of a solution at fixed time intervals—"Chaos, illumined by flashes of lightning" [from Oscar Wilde in another context] [111].


c. Fractals, which are space-filling curves that have been studied since Weierstrass, Hausdorff, Richardson, and Peano a century ago and more recently by Mandelbrot [107].

d. Catastrophes, which were introduced by René Thom [133] in the 1960s.

e. Fluid turbulence that occurs in convective instabilities described by Lorenz and Keller [104].

f. Irregular ecological dynamics studied by Ricker and May [48].

g. Random processes, including the Law of Large Numbers and ergodic and other limit theorems [82].

These and many other useful and interesting aspects of chaos are described here.
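A concrete instance of chaos from iterating a simple function, in the spirit of the chaotic interval mappings of Section 2.5.1, is the logistic map x → 4x(1 − x). This sketch is illustrative and not an example worked in the text.

```python
def logistic_orbit(x0, n, r=4.0):
    """Iterate the logistic map x -> r*x*(1 - x) for n steps."""
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

# Two starting points 1e-10 apart: both orbits stay in [0, 1], but they
# typically become macroscopically different after a few dozen iterations
# (sensitive dependence on initial conditions).
a = logistic_orbit(0.2, 60)
b = logistic_orbit(0.2 + 1e-10, 60)
separation = abs(a - b)
```

Special initial points behave regularly: x0 = 0.5 maps to 1, then to 0, and stays there, illustrating that irregularity coexists with simple orbits on the same interval.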

4. Oscillations. Oscillators play fundamental roles in our lives—"discontented pendulums that we are" [R. W. Emerson]. For example, most of the cells in our bodies live an oscillatory life in an oscillating chemical environment. The study of pendulums gives great insight into oscillators, and we focus a significant effort here in studying pendulums and similar physical and electronic devices.

One of the most interesting aspects of oscillators is their tendency to synchronize with other nearby oscillators. This had been observed by musicians dating back at least to the time of Aristotle, and eventually it was addressed as a mathematical problem by Huygens in the 17th century and Korteweg around 1900 [142]. This phenomenon is referred to as phase locking, and it now serves as a fundamental ingredient in the design of communications and computer-timing circuits. Phase locking is studied here for a variety of different oscillator populations using the rotation vector method. For example, using the VCON model of a nerve cell, we model neural networks as being flows on high-dimensional tori. Phase locking occurs when the flow reduces to a knot on the torus for the original and all nearby systems.
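A minimal sketch of phase locking, using the simplest symmetric coupling (a toy model, not the book's VCON network): two phase equations θ₁′ = ω₁ + k sin(θ₂ − θ₁) and θ₂′ = ω₂ + k sin(θ₁ − θ₂). Their difference φ = θ₁ − θ₂ obeys φ′ = (ω₁ − ω₂) − 2k sin φ, which locks onto a constant whenever 2k exceeds the frequency mismatch.

```python
import math

def final_phase_difference(omega1, omega2, k, dt=0.001, t_end=50.0):
    """Forward-Euler integration of two coupled phase oscillators;
    returns theta1 - theta2 at time t_end."""
    th1, th2 = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        d1 = omega1 + k * math.sin(th2 - th1)
        d2 = omega2 + k * math.sin(th1 - th2)
        th1 += dt * d1
        th2 += dt * d2
    return th1 - th2

# Mismatch 0.2 and coupling k = 0.5: the locked difference satisfies
# sin(phi) = (omega1 - omega2) / (2k) = 0.2.
phi = final_phase_difference(omega1=1.1, omega2=0.9, k=0.5)
```

Each oscillator still rotates (here at the compromise frequency 1.0), but the pair moves as one: a flow on the two-torus that has collapsed onto a closed curve, which is the picture behind the knot-on-the-torus description above.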

5. Stability. The stability of physical systems is often described using energy methods. These methods have been adapted to more general dynamical systems by Liapunov and others. Although we do study linear and Liapunov stability properties of systems here, the most important stability concept used here is that of stability under persistent disturbances. This idea explains why mathematical results obtained for minimal models can often describe behavior of systems that are operating in noisy environments. For example, think of a metal bowl having a lowest point in it. A marble placed in the bowl will eventually move to the minimum point. If the bowl is now dented with many small craters or if small holes are put in it, the marble will still move to near where the minimum of the original bowl had been, and the degree of closeness can be determined from the size of the dents and holes. The dents and the holes introduce irregular structures.


The equation

ẋ = ax − x³ + εf(t),

where f is bounded and integrable, ε is small, and a is another parameter, occurs in many models. When ε = 0 and a increases through the value a = 0, the structure of static state solutions changes dramatically: For a < 0, there is only one (real) static state, x = 0; but for a > 0 there are three: x = ±√a are stable static states, and x = 0 is an unstable one. This problem is important in applications, but it is not structurally stable at a = 0. Still, there is a Liapunov function for a neighborhood of x = 0, a = 0, ε = 0, namely, V(x) = x². So, the system is stable under persistent disturbances. Stability under persistent disturbances is based on results of Liapunov, Malkin, and Massera that we study here.
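Stability under persistent disturbances can be checked numerically for this example. The sketch below integrates ẋ = ax − x³ + εf(t) with f(t) = sin t as the bounded disturbance (an illustrative choice of f, not one from the text).

```python
import math

def integrate(a, eps, x0, dt=0.001, t_end=40.0):
    """Forward-Euler integration of x' = a*x - x**3 + eps*sin(t)."""
    x, t = x0, 0.0
    for _ in range(int(t_end / dt)):
        x += dt * (a * x - x ** 3 + eps * math.sin(t))
        t += dt
    return x

# For a = 1 the unforced system settles on the stable static state x = 1;
# a small persistent disturbance only jiggles the solution near that state.
x_unforced = integrate(a=1.0, eps=0.0, x0=0.5)
x_forced = integrate(a=1.0, eps=0.05, x0=0.5)
```

The forced solution remains within a neighborhood of x = 1 whose size shrinks with ε, which is the numerical counterpart of the marble staying near the bottom of the dented bowl.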

6. Computer simulation. The two major topics studied in this book are mathematical analysis and computer simulation of mathematical models. Each has its uses, its strengths, and its deficiencies. Our mathematical analysis builds mostly on perturbation and iteration methods: They are often difficult to use, but once they are understood, they can provide information about systems that is not otherwise available. Understanding them for the examples presented here also lays a basis for one to use computer packages such as Mathematica, Matlab, or Maple to construct perturbation expansions. Analytical methods can explain regular behavior of noisy systems, they can simplify complicated systems with fidelity to real behavior, and they can go beyond the edges of practical computability in dealing with fast processes (e.g., rapid chemical reactions) and small quantities (e.g., trace-element calculations).

Computer simulation replaces much of the work formerly done by mathematicians (often as graduate students), and sophisticated software packages are increasing simulation power. Simulations illustrate solutions of a mathematical model by describing a sample trajectory, or sample path, of the process. Sample paths can be processed in a variety of ways—plotting, calculating ensemble statistics, and so on. Simulations do not describe the dependence of solutions on model parameters, nor are their stability, accuracy, or reliability always assured. They do not deal well with chaotic or unexpected catastrophes—irregular or unexpected rapid changes in a solution—and it is usually difficult to determine when chaos lurks nearby.


Mathematical analysis makes possible computer simulations; conversely, computer simulations can help with mathematical analysis. New computer-based methods are being derived with parallelization of computations, simplification of models through automatic preprocessing, and so on, and the future holds great promise for combined work of mathematical and computer-based analysis. There have been many successes to date, for example, the discovery and analysis of solitons.

The material in this book is not presented in order of increasing difficulty. The first two chapters provide background information for the last six chapters, where oscillation, iteration, and perturbation techniques and examples are developed. We begin with three examples that are useful throughout the rest of the book. These are electrical circuits and pendulums. Next, we describe linear systems and spectral decomposition methods for solving them. These involve finding eigenvalues of matrices and deducing how they are involved in the solution of a problem. In the second chapter we study dynamical systems, beginning with descriptions of how periodic or almost periodic solutions can be found in nonlinear dynamical systems using methods ranging from Poincaré and Bendixson's method for two differential equations to entropy methods for nonlinear iterations. The third chapter presents stability methods for studying nonlinear systems. Particularly important for later work is the method of stability under persistent disturbances.

The remainder of the book deals with methods of approximation and simulation. First, some useful algebraic and topological methods are described, followed by a study of implicit function theorems and modifications and generalizations of them. These are applied to several bifurcation problems. Then, regular perturbation problems are studied, in which a small parameter is identified and the solutions are constructed directly using the parameter. This is illustrated by several important problems in nonlinear oscillations, including Duffing's equation and nonlinear resonance.

In Chapter 7 the method of averaging is presented. This is one of the most interesting techniques in all of mathematics. It is closely related to Fourier analysis, to the Law of Large Numbers in probability theory, and to the dynamics of physical and biological systems in oscillatory environments. We describe here multitime methods, Bogoliubov's transformation, and integrable systems methods.

Finally, the method of quasistatic-state approximations is presented. This method has been around in various useful forms since 1900, and it has been called by a variety of names—the method of matched asymptotic expansions being among the most civil. It has been derived in some quite complicated ways and in some quite simple ones. The approach taken here is of quasistatic manifolds, which has a clear geometric flavor that can aid intuition. It combines the geometric approach of Hadamard with the an-

of these methods apply, including diffraction by crossed wires in electromagnetic theory, stagnation points in fluid flows, flows in domains with sharp corners, and problems with intermittent rapid time scales.

I have taught courses based on this book in a variety of ways depending on the time available and the background of the students. When the material is taught as a full-year course for graduate students in mathematics and engineering, I cover the whole book. Other times I have taken more advanced students who have had a good course in ordinary differential equations directly to Chapters 4, 5, 6, 7, and 8. A one-quarter course is possible using, for example, Chapters 1, 7, and 8. For the most part Chapters 1 and 2 are intended as background material for the later chapters, although they contain some important computer simulations that I like to cover in all of my presentations of this material. A course in computer simulations could deal with sections from Chapters 2, 4, 7, and 8. The exercises also contain several simulations that have been interesting and useful.

The exercises are graded roughly in increasing difficulty in each chapter. Some are quite straightforward illustrations of material in the text, and others are quite lengthy projects requiring extensive mathematical analysis or computer simulation. I have tried to warn readers about more difficult problems with an asterisk where appropriate.

Students must have some degree of familiarity with methods of ordinary differential equations, for example, from a course based on Coddington and Levinson [24], Hale [58], or Hirsch and Smale [68]. They should also be competent with matrix methods and be able to use a reference text such as Gantmacher [46]. Some familiarity with Interpretation of Dreams [45] has also been found to be useful by some students.

Frank C. Hoppensteadt

Paradise Valley, Arizona

June 1999

1 Linear Systems

Given an N-dimensional vector f and an N × N matrix A(t) of functions of t, we seek a solution vector x(t). We write x, f ∈ Eᴺ and A ∈ Eᴺˣᴺ, and sometimes x′ = dx/dt or ẋ = dx/dt.

Many design methods in engineering are based on linear systems. Also, most of the methods used to study nonlinear problems grew out of methods for linear problems, so mastery of linear problems is essential for understanding nonlinear ones. Section 1.1 presents several examples of physical systems that are analyzed in this book. In Sections 1.2 and 1.3 we study linear systems where A is a matrix of constants. In Sections 1.4 and 1.5 we study systems where A is a periodic or almost-periodic matrix, and in Section 1.6 we consider general linear systems.

1.1 Examples of Linear Oscillators

The following examples illustrate typical problems in oscillations and perturbations, and they are referred to throughout this book. The first two examples describe electrical circuits and the third a mechanical system.


1.1.1 Voltage-Controlled Oscillators

Figure 1.1. A voltage-controlled oscillator. The controlling voltage Vin is applied to the circuit, and the output has a fixed periodic waveform (V) whose phase x is modulated by Vin.

Modern integrated circuit technology has had a surprising impact on mathematical models. Rather than the models becoming more complicated as the number of transistors on a chip increases, the mathematics in many cases has become dramatically simpler, usually by design. Voltage-controlled oscillators (VCOs) illustrate this nicely. A VCO is an electronic device that puts out a voltage in a fixed waveform, say V, but with a variable phase x that is controlled by an input voltage Vin. The device is described by the circuit diagram in Figure 1.1. The voltages in this and other figures are measured relative to a common ground that is not shown. The output waveform V might be a fixed-period square wave, a triangular wave, or a sinusoid, but its phase x is the unknown. VCOs are made up of many transistors, and a detailed model of the circuit is quite complicated [83, 65]. However, there is a simple input–output relation for this device: The input voltage Vin directly modulates the output phase as described by the equation

dx/dt = ω + Vin,

where the constant ω is called the center frequency The center frequency

is sustained by a separate (fixed supply) voltage in the device, and it can

be changed by tuning resistances in the VCO Thus, a simple differential

equation models this device The solution for x is found by integrating this

Equations like this one for x play a central role in the theory of nonlinear

oscillations In fact, a primary goal is often to transform a given systeminto phase-and-amplitude coordinates, which is usually difficult to carryout This model is given in terms of phase and serves as an example of howsystems are studied once they are in phase and amplitude variables
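As a quick illustration, the phase equation can be integrated numerically. The center frequency and the modulating signal below are illustrative assumptions, not values from the text:

```python
import numpy as np

def vco_phase(omega, v_in, t):
    # Integrate dx/dt = omega + V_in(t) with the trapezoidal rule;
    # the phase x is the running integral of the instantaneous frequency.
    rate = omega + v_in(t)
    return np.concatenate(([0.0], np.cumsum((rate[1:] + rate[:-1]) / 2 * np.diff(t))))

t = np.linspace(0.0, 10.0, 1001)
omega = 2 * np.pi                                            # assumed center frequency
x_free = vco_phase(omega, lambda s: np.zeros_like(s), t)     # V_in = 0 gives x = omega*t
x_mod = vco_phase(omega, lambda s: 0.5 * np.sin(s), t)       # a modulated phase
output = np.sin(x_mod)                                       # e.g., a sinusoidal waveform V(x)
```

With V_in = 0 the phase advances linearly; any input simply speeds up or slows down the phase, which is the whole input–output relation of the device.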

Figure 1.2. An RLC circuit with input voltage V_in, inductance L, resistance R, capacitance C, current I, and output voltage V.

Filters are electrical circuits composed of resistors, inductors, and capacitors. Figure 1.2 shows an RLC circuit, in which V_in, R, L, and C are given, and the unknowns are the output voltage (V) and the current (I) through the circuit. The circuit is described by the mathematical equations

C dV/dt = I,    L dI/dt + RI = V_in − V,

which relate the current through the capacitor to the output voltage, and the voltage drops across the inductor and the resistor (Ohm's Law) to the total voltage V_in − V. Using the first equation to eliminate I from the model results in a single second-order equation:

LC d²V/dt² + RC dV/dt + V = V_in.
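The elimination can be checked numerically by integrating the two first-order circuit equations (C dV/dt = I and L dI/dt = V_in − V − RI, as reconstructed above); the component values and the step input are arbitrary illustrative choices:

```python
import numpy as np

# Circuit parameters (illustrative values, not from the text):
R, L_ind, C = 1.0, 0.5, 0.25

def deriv(t, y, v_in):
    # First-order form: C dV/dt = I,  L dI/dt = V_in - V - R*I
    V, I = y
    return np.array([I / C, (v_in(t) - V - R * I) / L_ind])

def rk4(f, y0, t, v_in):
    # classical fourth-order Runge-Kutta over the grid t
    y, h = np.array(y0, float), t[1] - t[0]
    out = [y.copy()]
    for tk in t[:-1]:
        k1 = f(tk, y, v_in)
        k2 = f(tk + h / 2, y + h / 2 * k1, v_in)
        k3 = f(tk + h / 2, y + h / 2 * k2, v_in)
        k4 = f(tk + h, y + h * k3, v_in)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(y.copy())
    return np.array(out)

t = np.linspace(0.0, 20.0, 4001)
sol = rk4(deriv, [0.0, 0.0], t, lambda s: 1.0)  # unit-step input voltage
```

For a constant input the output voltage should settle to V_in and the current should decay to zero, as the second-order equation predicts.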

More generally, a filter with input W and output V is modeled by a linear constant-coefficient equation of the form

a_n d^nV/dt^n + ··· + a_1 dV/dt + a_0 V = b_m d^mW/dt^m + ··· + b_0 W,

where W is the input voltage, V is the output voltage, and the constants {a_i} and {b_i} characterize various circuit elements. Once W is given, this equation must be solved for V.

Filters can be described in a concise form: using the notation p = d/dt, sometimes referred to as Heaviside's operator, we can write the filter relation as

V = H(p)W,  where H(p) = (b_m p^m + ··· + b_0)/(a_n p^n + ··· + a_0).

This notation is made precise later using Laplace transforms, but for now it is taken to be a shorthand notation for the input–output relation of the filter. The function H is called the filter's transfer function.

In summary, filters are circuits whose models are linear nth-order ordinary differential equations. They can be written concisely using the transfer function notation, and they provide many examples later in this book.

Simple pendulums are described by equations that appear in a surprising number of different applications in physics and biology. Consider a pendulum of length L supporting a mass m that is suspended from a point with vertical coordinate V(t) and horizontal coordinate H(t), as shown in Figure 1.3. The action integral for this mechanical system is defined by

A = ∫ { (m/2)[(dH/dt + L (dx/dt) cos x)² + (dV/dt + L (dx/dt) sin x)²] − mg(V − L cos x) } dt,

where g is the acceleration of gravity. Hamilton's principle [28] shows that an extremum of this integral is attained by the solution x(t) of the equation

L d²x/dt² + (g + d²V/dt²) sin x + (d²H/dt²) cos x = 0,

which is the Euler–Lagrange equation for functions x(t) that make the action integral stationary.

Furthermore, a pendulum in a resistive medium to which a torque is applied at the support point is described by

mL² d²x/dt² + f dx/dt + mL[(g + d²V/dt²) sin x + (d²H/dt²) cos x] = I,

where f is the coefficient of friction and I is the applied torque.

For x near zero, sin x ≈ x and cos x ≈ 1, so the equation is approximately

mL² d²x/dt² + f dx/dt + mL(g + d²V/dt²)x + mL d²H/dt² = I.

This linear equation for x(t), whose coefficients vary with t, involves many difficult problems that must be solved to understand the motion of a pendulum. Many of the methods used in the theory of nonlinear oscillations grew out of studies of such pendulum problems; they are applicable now to a wide variety of new problems in physics and biology.


Figure 1.3. A pendulum with a moving support point. (H(t), V(t)) gives the location of the support point at time t. The pendulum is a massless rod of length L suspending a mass m, and x measures the angular deflection of the pendulum from rest (down).

Systems of linear, time-invariant differential equations can be studied in detail. Suppose that the vector of functions x(t) ∈ E^N satisfies the system of differential equations

dx/dt = Ax

for a ≤ t ≤ b, where A ∈ E^{N×N} is a matrix of constants.

Systems of this kind occur in many ways. For example, time-invariant linear nth-order differential equations can be rewritten in the form of first-order systems of equations: Suppose that y(t) is a scalar function that satisfies the linear equation

a_n d^n y/dt^n + a_{n−1} d^{n−1}y/dt^{n−1} + ··· + a_0 y = 0,

where a_n ≠ 0. Setting x = (y, dy/dt, ..., d^{n−1}y/dt^{n−1}) replaces this single nth-order equation by the system dx/dt = Ax, where

A =
[ 0    1    0   ···  0 ]
[ 0    0    1   ···  0 ]
[ ⋮                  ⋮ ]
[ 0    0    0   ···  1 ]
[ b_0  b_1  b_2 ···  b_{n−1} ],

where b_0 = −a_0/a_n, ..., b_{n−1} = −a_{n−1}/a_n. A matrix in this form is called a companion matrix [40]. With this, the vector x satisfies the differential equation dx/dt = Ax, whose ith component is dx_i/dt = Σ_{j=1}^{N} a_{i,j} x_j. Here a_{i,j} is the component in the ith row and jth column of A.
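The rewriting can be sketched as a small routine; the helper name `companion` and the test equation y″ + y = 0 are illustrative choices:

```python
import numpy as np

def companion(a):
    # Companion matrix for a_n y^(n) + ... + a_1 y' + a_0 y = 0,
    # where a = [a_0, a_1, ..., a_n] and a_n != 0.
    n = len(a) - 1
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)              # ones on the superdiagonal
    A[-1, :] = -np.asarray(a[:-1]) / a[-1]  # bottom row: b_0, ..., b_{n-1}
    return A

# y'' + y = 0 becomes dx/dt = A x with x = (y, y'); solutions are cos t, sin t
A = companion([1.0, 0.0, 1.0])
```

The eigenvalues of the companion matrix are exactly the roots of the characteristic polynomial of the scalar equation (here ±i, giving the oscillatory solutions).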

If A is a diagonalizable matrix, then it can be written in terms of its spectral decomposition:

A = λ_1 P_1 + ··· + λ_n P_n,

where λ_1, ..., λ_n are the eigenvalues of A and P_1, ..., P_n are projection matrices, which satisfy the conditions P_i P_j = P_i if i = j and P_i P_j = 0 otherwise. Because of this, we see that for any integer m,

A^m = λ_1^m P_1 + ··· + λ_n^m P_n.


More generally, for a function g that is analytic on a domain containing the eigenvalues of A,

g(A) = g(λ_1)P_1 + ··· + g(λ_n)P_n,

provided that each eigenvalue λ_j lies in the domain where g is analytic. Note that the spectral decomposition enables us to calculate functions of A in terms of powers of the scalars {λ_j}, rather than powers of A. The result is that once the eigenvalues and their projection matrices are found, the effort in calculating functions of A is greatly reduced.
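A sketch of how the projection matrices can be computed numerically for a diagonalizable matrix; the example matrix is an arbitrary symmetric choice:

```python
import numpy as np

def spectral_projections(A):
    # For diagonalizable A, eig gives A = S diag(lam) S^{-1}; the projection
    # onto the j-th eigendirection is the outer product of column j of S
    # with row j of S^{-1}.
    lam, S = np.linalg.eig(A)
    S_inv = np.linalg.inv(S)
    P = [np.outer(S[:, j], S_inv[j, :]) for j in range(len(lam))]
    return lam, P

A = np.array([[2.0, 1.0], [1.0, 2.0]])      # eigenvalues 1 and 3
lam, P = spectral_projections(A)

# A function of the matrix, computed from scalars: g(A) = sum g(lam_j) P_j
exp_A = sum(np.exp(l) * Pj for l, Pj in zip(lam, P))
```

Once the P_j are in hand, evaluating exp(A), A^m, or any analytic g(A) costs only scalar function evaluations, which is the point made above.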

However, not every matrix can be diagonalized. The most that can be said is that any matrix A can be put into Jordan canonical form. That is, there is a block diagonal matrix J and a transforming matrix T such that

A = T J T^{−1},

where each diagonal block of J has the form λI_k + Z. Here I_k is an identity matrix of dimension k for k = 1, ..., K, where K is the number of blocks, and Z is a matrix of zeros except for some ones on the superdiagonal (where Z_{i,j} = δ_{i+1,j}). Since Z^N = 0, Z is referred to as being a nilpotent matrix. There may also be a diagonalizable term on the main diagonal of J (see [46]).

The matrix exponential is defined by the power series

exp(At) = Σ_{m=0}^{∞} A^m t^m / m!,

which converges for all (real and complex) numbers t and for any matrix A. We see directly from this series that

exp(At) = exp(T J T^{−1} t) = T exp(J t) T^{−1}.

Moreover, the exponential of a block diagonal matrix is again a block diagonal matrix having blocks of the same dimensions. Thus, it is sufficient to consider a typical irreducible block λI_k + Z, for which

exp((λI_k + Z)t) = exp(λt)[I_k + Zt + Z²t²/2! + ··· + Z^{k−1}t^{k−1}/(k − 1)!],

since Z^k = 0 for a block of dimension k.
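Because Z is nilpotent, the exponential series for a single block terminates, so a truncated power series and the finite closed form should agree. The block size, λ, and t below are arbitrary illustrative choices:

```python
import numpy as np

def expm_series(M, terms=60):
    # Truncated power series exp(M) = sum_k M^k / k!
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# A 3x3 irreducible block lam*I + Z, with Z nilpotent (Z^3 = 0):
lam, t = -0.5, 2.0
Z = np.diag([1.0, 1.0], k=1)
J = lam * np.eye(3) + Z

# Closed form: exp(Jt) = e^{lam t} (I + Z t + Z^2 t^2 / 2!)
closed = np.exp(lam * t) * (np.eye(3) + Z * t + (Z @ Z) * t**2 / 2)
```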

Finally, we note that the spectrum of a matrix A can be split into three parts: those eigenvalues having negative real parts (S), those having positive real parts (U), and those that are purely imaginary (O). If A is diagonalizable, then it can be transformed into a block diagonal matrix having three blocks: the first consists of those eigenvalues in S, the second of those in O, and the third of those in U. Therefore, we can write

exp(At) = Σ_{j∈S} exp(λ_j t) P_j + Σ_{j∈O} exp(λ_j t) P_j + Σ_{j∈U} exp(λ_j t) P_j.

The first sum approaches zero as t → ∞, the second one oscillates, and the third one grows as t → ∞. Any solution of the system dx/dt = Ax has the form

x(t) = exp(At)x(0) = Σ_{j∈S} exp(λ_j t) P_j x(0) + Σ_{j∈O} exp(λ_j t) P_j x(0) + Σ_{j∈U} exp(λ_j t) P_j x(0).

In the first sum, x(0) is said to excite stable modes; in the second, oscillatory modes; and in the third, unstable modes. Thus, the matrix A defines a partition of the entire space E^N into three parts: a stable manifold that is defined by the span of the projection matrices P_j for j ∈ S, an unstable manifold defined by the span of the matrices P_j for j ∈ U, and an oscillatory, or center, manifold that is spanned by the matrices P_j for j ∈ O. We shall see later that this decomposition carries over to certain nonlinear systems.

Suppose that g(t) is a vector of smooth functions that grow no faster than an exponential as t → ∞. We define the Laplace transform of g to be

g*(p) = ∫_0^∞ exp(−pt) g(t) dt,

which converges when the real part of p is sufficiently large. Integrating by parts shows that

(dg/dt)*(p) = ∫_0^∞ exp(−pt) (dg/dt) dt = pg* − g(0).

Therefore, we see that Laplace's transform converts differentiation into multiplication, and it justifies using p as the Heaviside operator described in Section 1.1.2.
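The differentiation rule can be verified numerically with a truncated transform; g = cos t and p = 2 are arbitrary test choices:

```python
import numpy as np

def laplace(g, p, T=60.0, n=200_000):
    # Truncated numerical Laplace transform g*(p) = int_0^T exp(-p t) g(t) dt,
    # evaluated with the trapezoidal rule (T large enough that the tail is negligible).
    t = np.linspace(0.0, T, n + 1)
    f = np.exp(-p * t) * g(t)
    h = t[1] - t[0]
    return h * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)

p = 2.0
g, dg = np.cos, lambda t: -np.sin(t)
lhs = laplace(dg, p)                  # (dg/dt)*(p)
rhs = p * laplace(g, p) - g(0.0)      # p g*(p) - g(0)
```

Here cos* = p/(p² + 1) and (−sin)* = −1/(p² + 1), so both sides should equal −1/(p² + 1).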

How does one recover g from its transform? If G(p) is a function that is analytic in a region except for pole singularities, then we define

g(t) = (1/2πi) ∮_C exp(pt) G(p) dp,

where C is a closed curve enclosing all of the poles of G. With g defined in this way, g*(p) = G(p). This formula for g is the Laplace inversion formula, and it shows how to recover the original function from its transform.

Calculation of the inverse formula uses the method of residues, which is based on Cauchy's formula: If F(z) is a function that is analytic in some region containing a point z_0 and if C is a curve lying in this region and enclosing z_0, then we have the formula

F(z_0) = (1/2πi) ∮_C F(z)/(z − z_0) dz.

This is referred to as the Cauchy integral formula, and the method of residues is based on it. For example, if G(p) = 1/(p − a), then

g(t) = (1/2πi) ∮_C exp(zt)/(z − a) dz = exp(at)

if C encloses the point z = a.


A low-pass filter is described in Section 1.1.2 (with inductance L = 0). Using the notation of that section, we have

V = H(p)W,

where H(p) = (RCp + 1)^{−1}. This formula should be interpreted as one for the transforms of V and W: V*(p) = H(p)W*(p). Inverting the transform expresses the output as a convolution of the input with the filter's kernel:

V(t) = (1/RC) ∫_0^t exp(−(t − s)/RC) W(s) ds.

Finally, we note that another useful representation of the matrix exp(At) can be found using Laplace transforms. Namely, we can define

exp(At) = (1/2πi) ∮_C exp(pt)(pI − A)^{−1} dp,

where C encloses all of the eigenvalues of A. This formula is proved by reducing A to its Jordan canonical form and applying Cauchy's formula to each term in the matrix, as shown in the next section.

The transfer function notation applies to forced systems as well. For dx/dt = Ax + f(t) with x(0) = 0, the solution is the convolution

x(t) = ∫_0^t h(t − s) f(s) ds,

where h*(p) = (pI − A)^{−1}.


What is h? If A is diagonalizable, then we can use its spectral decomposition to evaluate h. Since (pI − A)^{−1} = Σ_j P_j/(p − λ_j), the inversion formula gives h(t) = Σ_j exp(λ_j t) P_j = exp(At).

This formula for y gives a particular solution of the equation, and the general solution has the form of this particular solution plus a solution of the homogeneous problem; so we see that the transfer function notation summarizes a great deal of work. The general variation of constants formula is described in Section 1.6.

Consider the linear system

dx/dt = A(t)x,

where A(t) is a continuous matrix of functions that is periodic with period T: A(t + T) = A(t) for all t. A fundamental solution of this system is a matrix Φ(t) such that

dΦ/dt = A(t)Φ and Φ(0) is nonsingular.

Note that if d(t) denotes the determinant of Φ(t), then Abel's formula gives

d(t) = d(0) exp( ∫_0^t tr A(s) ds ),

where tr(A) = Σ_{k=1}^{N} A_{k,k} is the trace of A; so d(t) ≠ 0 for all t. It follows that Φ(t) is nonsingular for all t, and therefore the columns of Φ(t) define a set of N linearly independent solutions of the system.
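Abel's formula can be verified numerically; the periodic coefficient matrix below is an illustrative choice whose trace integrates to zero over one period, so det Φ(T) should return to det Φ(0) = 1:

```python
import numpy as np

def A(t):
    # an illustrative 2x2 periodic coefficient matrix (period 2*pi)
    return np.array([[np.sin(t), 1.0], [-1.0, 0.5 * np.cos(t)]])

def fundamental(A, T, n=20000):
    # RK4 integration of dPhi/dt = A(t) Phi, Phi(0) = I
    Phi, h = np.eye(2), T / n
    for k in range(n):
        t = k * h
        k1 = A(t) @ Phi
        k2 = A(t + h / 2) @ (Phi + h / 2 * k1)
        k3 = A(t + h / 2) @ (Phi + h / 2 * k2)
        k4 = A(t + h) @ (Phi + h * k3)
        Phi = Phi + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return Phi

T = 2 * np.pi
Phi = fundamental(A, T)
# Abel's formula: det Phi(T) = exp( int_0^T tr A(s) ds ); here
# int_0^T (sin s + 0.5 cos s) ds = 0 over a full period, so det Phi(T) = 1.
```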

The following theorem is very useful for studying periodic systems.

Floquet's Theorem. Let Φ be as described above. Then there is a periodic matrix P(t), having period T, and a constant matrix R such that

Φ(t) = P(t) exp(Rt).

Proof of Floquet's Theorem: If Φ(t) is a fundamental matrix of the problem, then so is Φ(t + T). Moreover, calculation shows that there is a constant nonsingular matrix C such that

Φ(t + T) = Φ(t)C for all t.

This is the key observation on which further developments are based. In particular, it follows from this formula that for any integer n,

Φ(nT) = Φ(0)C^n.

Therefore, the long-term behavior of solutions can be determined from the eigenvalues of the matrix C. If we can define a matrix R by the formula

exp(RT) = C, that is, R = (1/T) log C,

then P(t) = Φ(t) exp(−Rt) is periodic with period T. This follows because for all t,

P(t + T) = Φ(t + T) exp(−RT) exp(−Rt) = Φ(t) exp(−Rt) = P(t).

The logarithm of C is well-defined (as shown in [24]), although it might be a matrix of complex numbers.

This result is especially helpful in determining the behavior of x(t) as t → ∞. For example, if all of the eigenvalues of R have negative real parts, then x(t) → 0 as t → ∞. However, this theorem is difficult to apply, since it is usually difficult to find the matrix R.

An interesting consequence of Floquet's Theorem is that any periodic system can be transformed into one having constant coefficients. In fact, the change of variables x = P(t)y takes the problem dx/dt = A(t)x into the constant-coefficient problem dy/dt = Ry.


With a suitable rotating change of variables, we convert this system into

dy/dt = (B + ωJ)y.

Thus, in this case, Floquet's transformation is easy to carry out, and R = B + ωJ, where J is Jacobi's matrix.

The equation

d²x/dt² + p(t)x = 0,

where p is a continuous periodic function, is known as Hill's equation. This equation arises frequently in mathematical physics; for example, Schrödinger's equation in quantum mechanics and studies of the stability of periodic solutions by linearization often have this form (see [28, 105]).

Let y1 denote the solution of this equation that satisfies the initial conditions y1(0) = 1 and y1′(0) = 0, and let y2 denote the solution of this equation that satisfies y2(0) = 0, y2′(0) = 1. Then, if we set x′ = y, we can rewrite Hill's equation as the first-order system

dx/dt = y,   dy/dt = −p(t)x,

and

Φ(t) = [ y1(t)  y2(t) ; y1′(t)  y2′(t) ]

defines a fundamental solution. The matrix Φ is called the Wronskian matrix for this system. In fact, each column of this matrix solves the system, and Φ(0) = I.

Suppose that the period of p is T, so p(t + T) = p(t) for all t. Floquet's Theorem shows that Φ(T) = exp(RT). The eigenvalues of this matrix are called characteristic multipliers, and if they have modulus equal to one, then all solutions of Hill's equation are bounded as t → ±∞. The eigenvalues of RT are called the characteristic exponents of the problem.

On the other hand, a great deal has been determined about the eigenvalues of R for Hill's equation. For example, the eigenvalues of Φ(T) are determined from the characteristic equation

λ² − [y1(T) + y2′(T)]λ + det Φ(T) = 0,

and they have the form

λ = ∆ ± √(∆² − 1), where ∆ = (1/2)[y1(T) + y2′(T)]

and det Φ(T) = 1. Therefore, if ∆ can be evaluated, then the nature of the eigenvalues of Φ(T) can be determined. In particular, if |∆| < 1, then the eigenvalues of Φ(T) are complex conjugates and have modulus 1. In this case, all solutions of Hill's equation are bounded on the entire interval −∞ < t < ∞. On the other hand, if ∆ > 1, both roots have positive real parts, and if ∆ < −1, then both eigenvalues have negative real parts. In either of these two cases, no solution of Hill's equation remains bounded on the whole interval −∞ < t < ∞ (see [105]). Finally, if ∆ = 1, then there is (at least) one solution having period T, and if ∆ = −1, then there is (at least) one solution having period 2T. In either case, the other solution can grow no faster than O(t).
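Numerically, Φ(T) can be computed by integrating the Wronskian matrix over one period, after which ∆ and the multipliers follow; here p(t) = δ + ε cos t (Mathieu's case, introduced below) with illustrative δ and ε:

```python
import numpy as np

def monodromy(delta, eps, T=2 * np.pi, n=4000):
    # Integrate dPhi/dt = A(t) Phi, Phi(0) = I, over one period of
    # x'' + (delta + eps cos t) x = 0, using classical RK4.
    def A(t):
        return np.array([[0.0, 1.0], [-(delta + eps * np.cos(t)), 0.0]])
    Phi, h = np.eye(2), T / n
    for k in range(n):
        t = k * h
        k1 = A(t) @ Phi
        k2 = A(t + h / 2) @ (Phi + h / 2 * k1)
        k3 = A(t + h / 2) @ (Phi + h / 2 * k2)
        k4 = A(t + h) @ (Phi + h * k3)
        Phi = Phi + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return Phi

Phi = monodromy(0.5, 0.2)        # illustrative (delta, eps)
Delta = Phi.trace() / 2          # = [y1(T) + y2'(T)] / 2, since det Phi(T) = 1
```

For ε = 0 the solutions are cos(√δ t) and sin(√δ t)/√δ, so ∆ = cos(2π√δ), which gives a direct check on the integrator; det Φ(T) = 1 holds because the trace of the coefficient matrix is zero.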

Introducing phase–amplitude coordinates x = r cos θ, dx/dt = r sin θ, Hill's equation gives

dθ/dt = −p(t) cos²θ − sin²θ.

Note that the angular variable θ is separated from the amplitude variable in this case! Thus, Hill's equation is easily put into phase–amplitude variables, which we study further in Section 2.3.5, and the problem reduces to study of the first-order differential equation for θ.

More can be said about the special case of Hill's equation when p(t) = δ + ε cos t, where δ and ε are constants. The result is known as Mathieu's equation, and its solutions are either bounded or unbounded, as for Hill's equation. Figure 1.4 shows the values of δ and ε for which solutions are bounded.

Figure 1.4. Stability diagram for Mathieu's equation. If (δ, ε) lies in one of the labeled regions, then |∆| < 1 and all solutions of Mathieu's equation are bounded. Note that if δ = 0 and ε = 0, then x(t) = at + b for some constants a and b.

Meissner introduced a practice problem, in which the term cos t is replaced by a 2π-periodic square wave q(t), where

q(t) = 1 for 0 ≤ t < π, and q(t) = −1 for π ≤ t < 2π.

There is an interesting application of this to the pendulum model described in Section 1.1.3. Let H = 0 and V = A cos ωt. Then linearizing the pendulum equation about x = 0 and replacing t by ωt gives Mathieu's equation

d²x/dt² + (δ + ε cos t)x = 0;

linearizing about the straight-up position x = π gives the same equation with the signs of δ and ε reversed. We see that if δ = ±(g/Lω²) and ε = ±A/L lie in the overlap region in Figure 1.5, then both equilibria are stable. Thus, if the support point

is vibrated vertically with a range of frequencies and amplitudes so that (g/Lω²) and A/L lie in the doubly shaded region in Figure 1.5, then both the up and the down positions of the pendulum are stable.

Figure 1.5. Overlap regions. If the data are in the doubly shaded regions, then both the up and down positions of the pendulum are stable. In this case, a small perturbation of the pendulum from the straight up position persists, but the pendulum remains nearly straight up for all future times.

If the pendulum is damped, then the linear equations become

d²x/dt² + r dx/dt + (δ + ε cos t)x = 0,

with r > 0 proportional to the friction coefficient and with the signs of δ and ε chosen as above for the down and up positions. Figure 1.5 shows the stability diagram for these two oscillators drawn together on the same coordinate axes when r = 0. Figure 1.6 shows the same result for small r. In this case small perturbations of the pendulum from the straight up position die out, approximately like exp(−rt/2). This shows that an oscillating environment can stabilize a static state that is unstable without oscillation. This is an important phenomenon found in many physical and biological systems [1, 87, 127].

Figure 1.6. The stability diagram of the up and down positions when damping is accounted for (r > 0). In this case, small perturbations of the pendulum from the straight up position die out, approximately like exp(−rt/2).

No result comparable to Floquet's Theorem is available for systems having almost-periodic coefficients. However, when these systems do arise in applications, they can be studied using generalized Fourier methods. Some of these methods are described in this section.

Almost-periodic functions play important roles in studies of nonlinear oscillators. An almost-periodic function, say f(t), is one that comes close to being periodic in the following sense: for a given tolerance ε there is a number T_ε such that any interval of length T_ε contains a translation number T′ for which

|f(t + T′) − f(t)| < ε

for all t. The translation number of a periodic function is simply its period. References [11] and [140] present introductions to almost-periodic functions and their properties.

If f is almost-periodic, then it has a generalized Fourier expansion:

f(t) ∼ Σ_{n=1}^{∞} C_n exp(itλ_n),

where the amplitudes {C_n} (complex) and the frequencies {λ_n} (real) characterize the function. The frequencies of f are defined to be the values of λ for which the average

lim_{T→∞} (1/T) ∫_0^T f(t) exp(−iλt) dt

is not zero. It is known that this happens for at most countably many values of λ, say λ1, λ2, .... The amplitudes of the modes are given by the corresponding averages: C_n is this limit with λ = λ_n.

For example, consider a series of sinusoids in which the nth term has period 2^{n+1}π, so the function is not periodic. But given a tolerance ε, we can choose N so large that the tail of the series beyond the Nth term is smaller than ε for all t. This shows that f is almost periodic, but note that the integral of this series does not converge, and so does not define an almost-periodic function.
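The defining average can be approximated over a long but finite window; the two-frequency test function below is an illustrative choice with incommensurate frequencies 1 and √2:

```python
import numpy as np

def mean_amplitude(f, lam, T=2000.0, n=400_000):
    # C(lam) = (1/T) int_0^T f(t) exp(-i lam t) dt, a truncated version of the
    # limiting average that defines the Fourier amplitude at frequency lam.
    t = np.linspace(0.0, T, n + 1)
    g = f(t) * np.exp(-1j * lam * t)
    h = t[1] - t[0]
    return h * (g[0] / 2 + g[1:-1].sum() + g[-1] / 2) / T

# a two-frequency (quasiperiodic) sample
f = lambda t: np.exp(1j * t) + 0.5 * np.exp(1j * np.sqrt(2.0) * t)
```

The average is near 1 at λ = 1 and near 0.5 at λ = √2, and near zero at any other λ; the finite window T controls how small the cross terms become.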

The class of all almost-periodic functions is larger than needed to study many nonlinear oscillators, and the smaller class of quasiperiodic functions is useful. These functions are generated by finitely many frequencies as follows. Let ω be a vector of M real numbers,

ω = (ω1, ..., ωM),

and let n be a multi-index, that is, n is a vector of integers:

n = (n1, ..., nM).

If the components of ω are rationally related, then the sequence {n · ω} is equivalent to a sequence of integer multiples of a real number. However, if the components of ω are not rationally related, then the sequence is dense in the real numbers. We define |n| = Σ_i |n_i|, and we consider a set of amplitudes {C_n} that satisfies some convergence condition, say |C_n| ≤ 1/|n|² as |n| → ∞. Then the series

Σ_n C_n exp(it n · ω),

taken over all multi-indices n, defines an almost-periodic function whose frequencies are generated by a finite set, namely, the components of ω. Such a function is called a quasiperiodic function.

Quasiperiodic functions are closely related to periodic functions. For example, if f is a quasiperiodic function, then there is a function F(s), s ∈ E^M, that is periodic in each component of s, such that f(t) = F(ωt). In particular, if f has the series representation above, then F(s) = Σ_n C_n exp(i n · s). On the other hand, let F(s1, ..., sM) be a differentiable function that is 2π-periodic in each variable s1, ..., sM. Such functions have Fourier series, say

F(s) = Σ_n C_n exp(i n · s),

where n = (n1, ..., nM) and s = (s1, ..., sM). With a vector of frequencies ω = (ω1, ..., ωM), we can define a quasiperiodic function f by the formula f(t) = F(ωt).


Tài liệu tham khảo Loại Chi tiết
References

[1] M. Abramowitz, I.A. Stegun, Handbook of Mathematical Functions, Dover, New York, 1972.
[2] P. Alfeld, F.C. Hoppensteadt, Explosion Mode Analysis of H2–O2 Combustion, Chemical Physics Series, Springer-Verlag, New York, 1980.
[3] A.A. Andronov, A.A. Witt, S.E. Chaikin, Theory of Oscillators, Dover, New York, 1966.
[4] V.I. Arnol'd, Mathematical Methods of Classical Mechanics, Springer-Verlag, New York, 1978.
[6] H. Antosiewicz, A survey of Liapunov's second method, in Contributions to the Theory of Nonlinear Oscillations, S. Lefschetz (ed.), Vol. IV, Princeton, N.J., 1958.
[7] H.T. Banks, F. Kappel, Spline approximations for functional differential equations, J. Differential Eqns. 34 (1979): 496–522.
[8] K.G. Beauchamp, Walsh Functions and their Applications, Academic Press, New York, 1975.
[10] R. Bellman, K. Cooke, Differential-Difference Equations, Academic Press, New York, 1963.
[11] A.S. Besicovitch, Almost Periodic Functions, Dover, New York, 1954.
[12] G.D. Birkhoff, Dynamical Systems, Vol. IX, American Mathematical Society, Providence, RI, 1966.
[16] P.H. Carter, An improvement of the Poincaré–Birkhoff fixed point theorem, Trans. AMS 269 (1982): 285–299.
[17] M.A. Cartwright, J.E. Littlewood, Ann. Math. 54 (1951): 1–37.
[18] L. Cesari, Asymptotic Behavior and Stability Problems in Ordinary Differential Equations, Ergebnisse der Math., New Series, Vol. 16, 2nd ed., 1963.
[19] S. Chandrasekhar, Hydrodynamic and Hydromagnetic Stability, Oxford University Press, 1961.
[20] E.W. Cheney, Introduction to Approximation Theory, McGraw-Hill, New York, 1966.
[21] E.W. Cheney, D. Kincaid, Numerical Mathematics and Computing, Brooks-Cole, Monterey, CA, 1980.
[22] W. Chester, The forced oscillations of a simple pendulum, J. Inst. Maths. Appl. 15 (1975): 298–306.
[23] S.N. Chow, J.K. Hale, Methods of Bifurcation Theory, Springer-Verlag, New York, 1982.
[24] E.A. Coddington, N. Levinson, Theory of Ordinary Differential Equations, McGraw-Hill, New York, 1955.
[25] D.S. Cohen, F.C. Hoppensteadt, R.M. Miura, Slowly modulated oscillations in nonlinear diffusion processes, SIAM J. Appl. Math. 33 (1977): 217–229.