Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations

Uri M. Ascher and Linda R. Petzold

December 2, 1997
This book has been developed from course notes that we wrote, having repeatedly taught courses on the numerical solution of ordinary differential equations (ODEs) and related problems. We have taught such courses at a senior undergraduate level as well as at the level of a first graduate course on numerical methods for differential equations. The audience typically consists of students from Mathematics, Computer Science and a variety of disciplines in engineering and sciences such as Mechanical, Electrical and Chemical Engineering, Physics, Earth Sciences, etc.

The material that this book covers can be viewed as a first course on the numerical solution of differential equations. It is designed for people who want to gain a practical knowledge of the techniques used today. The course aims to achieve a thorough understanding of the issues and methods involved and of the reasons for the successes and failures of existing software. On one hand, we avoid an extensive, thorough, theorem-proof type exposition: we try to get to current methods, issues and software as quickly as possible. On the other hand, this is not a quick recipe book, as we feel that a deeper understanding than can usually be gained by a recipe course is required to enable the student or the researcher to use their knowledge to design their own solution approach for any nonstandard problems they may encounter in future work. The book covers initial-value and boundary-value problems, as well as differential-algebraic equations (DAEs). In a one-semester course we have been typically covering over 75% of the material it contains.
We wrote this book partially as a result of frustration at not being able to assign a textbook adequate for the material that we have found ourselves covering. There is certainly excellent, in-depth literature around. In fact, we are making repeated references to exhaustive texts which, combined, cover almost all the material in this book. Those books contain the proofs and references which we omit. They span thousands of pages, though, and the time commitment required to study them in adequate depth may be more than many students and researchers can afford to invest. We have tried to stay below a 350-page limit and to address all three ODE-related areas mentioned above. A significant amount of additional material is covered in the Exercises. Other additional important topics are referred to in brief sections of Notes and References. Software is an important and well-developed part of this subject. We have attempted to cover the most fundamental software issues in the text. Much of the excellent and publicly-available software is described in the Software sections at the end of the relevant chapters, and available codes are cross-referenced in the index. Review material is highlighted and presented in the text when needed, and it is also cross-referenced in the index.
Traditionally, numerical ODE texts have spent a great deal of time developing families of higher order methods, e.g. Runge-Kutta and linear multistep methods, applied first to nonstiff problems and then to stiff problems. Initial value problems and boundary value problems have been treated in separate texts, although there is much in common. There have been fundamental differences in approach, notation, and even in basic definitions, between ODE initial value problems, ODE boundary value problems, and partial differential equations (PDEs).
We have chosen instead to focus on the classes of problems to be solved, mentioning wherever possible applications which can lend insight into the requirements.
We begin by outlining the relevant mathematical properties of each problem class, then carefully develop the lower-order numerical methods and fundamental concepts for the numerical analysis. Next we introduce the appropriate families of higher-order methods, and finally we describe in some detail how these methods are implemented in modern adaptive software. An important feature of this book is that it gives an integrated treatment of ODE initial value problems, ODE boundary value problems, and DAEs, emphasizing not only the differences between these types of problems but also the fundamental concepts, numerical methods and analysis which they have in common. This approach is also closer to the typical presentation for PDEs, leading, we hope, to a more natural introduction to that important subject.
Knowledge of significant portions of the material in this book is essential for the rapidly emerging field of numerical dynamical systems. These are numerical methods employed in the study of the long term, qualitative behavior of various nonlinear ODE systems. We have emphasized and developed in this work relevant problems, approaches and solutions. But we avoided developing further methods which require deeper, or more specific, knowledge of dynamical systems, which we did not want to assume as a prerequisite.
The plan of the book is as follows. Chapter 1 is an introduction to the different types of mathematical models which are addressed in the book. We use simple examples to introduce and illustrate initial- and boundary-value problems for ODEs and DAEs. We then introduce some important applications where such problems arise in practice.
Each of the three parts of the book which follow starts with a chapter which summarizes essential theoretical, or analytical, issues (i.e. before applying any numerical method). This is followed by chapters which develop and analyze numerical techniques. For initial value ODEs, which comprise roughly half this book, Chapter 2 summarizes the theory most relevant for computer methods, Chapter 3 introduces all the basic concepts and simple methods (relevant also for boundary value problems and for DAEs), Chapter 4 is devoted to one-step (Runge-Kutta) methods and Chapter 5 discusses multistep methods.

Chapters 6-8 are devoted to boundary value problems for ODEs. Chapter 6 discusses the theory which is essential to understand and to make effective use of the numerical methods for these problems. Chapter 7 briefly considers shooting-type methods and Chapter 8 is devoted to finite difference approximations and related techniques.

The remaining two chapters consider DAEs. This subject has been researched and solidified only very recently (in the past 15 years). Chapter 9 is concerned with background material and theory. It is much longer than Chapters 2 and 6 because understanding the relationship between ODEs and DAEs, and the questions regarding reformulation of DAEs, is essential and already suggests a lot regarding computer approaches. Chapter 10 discusses numerical methods for DAEs.
Various courses can be taught using this book. A 10-week course can be based on the first 5 chapters, with an addition from either one of the remaining two parts. In a 13-week course (or shorter in a more advanced graduate class) it is possible to cover comfortably Chapters 1-5 and either Chapters 6-8 or Chapters 9-10, with a more superficial coverage of the remaining material. We have provided hints, or at least warnings, for those exercises that we (or our students) have found more demanding.
Many people helped us with the tasks of shaping up, correcting, filtering and refining the material in this book. First and foremost there are our students in the various classes we taught on this subject. They made us acutely aware of the difference between writing with the desire to explain and writing with the desire to impress. We note, in particular, G. Lakatos, D. Aruliah, P. Ziegler, H. Chin, R. Spiteri, P. Lin, P. Castillo, E. Johnson, D. Clancey and D. Rasmussen. We have benefited particularly from our earlier collaborations on other, related books with K. Brenan, S. Campbell, R. Mattheij and R. Russell. Colleagues who have offered much insight, advice and criticism include E. Biscaia, G. Bock, C. W. Gear, W. Hayes, C. Lubich, V. Murata, D. Pai, J. B. Rosen, L. Shampine and A. Stuart. Larry Shampine, in particular, did an incredibly extensive refereeing job and offered many comments which have helped us to significantly improve this text. We have also benefited from comments of numerous anonymous referees.
U. M. Ascher
L. R. Petzold
Contents

1 Ordinary Differential Equations
1.1 Initial Value Problems
1.2 Boundary Value Problems
1.3 Differential-Algebraic Equations
1.4 Families of Application Problems
1.5 Dynamical Systems
1.6 Notation

2 On Problem Stability
2.1 Test Equation and General Definitions
2.4 Nonlinear Problems
2.5 Hamiltonian Systems
2.6 Notes and References
2.7 Exercises

3 Basic Methods, Basic Concepts
3.1 A Simple Method: Forward Euler
3.2 Convergence, Accuracy, Consistency and 0-Stability
3.3 Absolute Stability
3.4 Stiffness: Backward Euler
3.5 A-Stability, Stiff Decay
3.6 Symmetry: Trapezoidal Method
3.7 Rough Problems
3.8 Software, Notes and References
3.8.1 Notes
3.8.2 Software
3.9 Exercises
4 One Step Methods
4.1 The First Runge-Kutta Methods
4.2 General Formulation of Runge-Kutta Methods
4.3 Convergence, 0-Stability and Order for Runge-Kutta Methods
4.4 Regions of Absolute Stability for Explicit Runge-Kutta Methods
4.5 Error Estimation and Control
4.6 Sensitivity to Data Perturbations
4.7 Implicit Runge-Kutta and Collocation Methods
4.7.1 Implicit Runge-Kutta Methods Based on Collocation
4.7.2 Implementation and Diagonally Implicit Methods
4.7.3 Order Reduction
4.7.4 More on Implementation and SIRK Methods
4.8 Software, Notes and References
4.8.1 Notes
4.8.2 Software
4.9 Exercises

5 Linear Multistep Methods
5.1 The Most Popular Methods
5.1.1 Adams Methods
5.1.2 Backward Differentiation Formulae
5.1.3 Initial Values for Multistep Methods
5.2 Order, 0-Stability and Convergence
5.2.1 Order
5.2.2 Stability: Difference Equations and the Root Condition
5.2.3 0-Stability and Convergence
5.3 Absolute Stability
5.4 Implementation of Implicit Linear Multistep Methods
5.4.1 Functional Iteration
5.4.2 Predictor-Corrector Methods
5.4.3 Modified Newton Iteration
5.5 Designing Multistep General-Purpose Software
5.5.1 Variable Step-Size Formulae
5.5.2 Estimating and Controlling the Local Error
5.5.3 Approximating the Solution at Off-Step Points
5.6 Software, Notes and References
5.6.1 Notes
5.6.2 Software
5.7 Exercises
6 More BVP Theory and Applications
6.1 Linear Boundary Value Problems and Green's Function
6.2 Stability of Boundary Value Problems
6.3 BVP Stiffness
6.4 Some Reformulation Tricks
6.5 Notes and References
6.6 Exercises

7 Shooting
7.1 Shooting: a Simple Method and its Limitations
7.2 Multiple Shooting
7.3 Software, Notes and References
7.3.1 Notes
7.3.2 Software
7.4 Exercises

8 Finite Difference Methods for BVPs
8.1 Midpoint and Trapezoidal Methods
8.1.1 Solving Nonlinear Problems: Quasilinearization
8.1.2 Consistency, 0-stability and Convergence
8.2 Solving the Linear Equations
8.3 Higher Order Methods
8.3.1 Collocation
8.3.2 Acceleration Techniques
8.4 More on Solving Nonlinear Problems
8.4.1 Damped Newton
8.4.2 Shooting for Initial Guesses
8.4.3 Continuation
8.5 Error Estimation and Mesh Selection
8.6 Very Stiff Problems
8.7 Decoupling
8.8 Software, Notes and References
8.8.1 Notes
8.8.2 Software
8.9 Exercises
9 More on Differential-Algebraic Equations
9.1 Index and Mathematical Structure
9.1.1 Special DAE Forms
9.1.2 DAE Stability
9.2 Index Reduction and Stabilization: ODE with Invariant
9.2.1 Reformulation of Higher-Index DAEs
9.2.2 ODEs with Invariants
9.2.3 State Space Formulation
9.3 Modeling with DAEs
9.4 Notes and References
9.5 Exercises

10 Numerical Methods for Differential-Algebraic Equations
10.1 Direct Discretization Methods
10.1.1 A Simple Method: Backward Euler
10.1.2 BDF and General Multistep Methods
10.1.3 Radau Collocation and Implicit Runge-Kutta Methods
10.1.5 Specialized Runge-Kutta Methods for Hessenberg Index-2 DAEs
10.2 Methods for ODEs on Manifolds
10.2.1 Stabilization of the Discrete Dynamical System
10.2.2 Choosing the Stabilization Matrix F
10.3 Software, Notes and References
10.3.1 Notes
10.3.2 Software
10.4 Exercises
List of Tables

3.1 Maximum errors for Example 3.1
3.2 Maximum errors for long interval integration of y' = (cos t) y
4.1 Errors and calculated convergence rates for the forward Euler, the explicit midpoint (RK2) and the classical Runge-Kutta (RK4) methods
5.4 Example 5.3: Errors and calculated convergence rates for Adams-Bashforth methods
5.5 Example 5.3: Errors and calculated convergence rates for Adams-Moulton methods
5.6 Example 5.3: Errors and calculated convergence rates for BDF methods
8.1 Maximum errors for Example 8.1 using the midpoint method: uniform meshes
8.2 Maximum errors for Example 8.1 using the midpoint method: nonuniform meshes
8.3 Maximum errors for Example 8.1 using collocation at 3 Gaussian points: uniform meshes
8.4 Maximum errors for Example 8.1 using collocation at 3 Gaussian points: nonuniform meshes
10.1 Errors for Kepler's problem using various 2nd order methods
10.2 Maximum drifts for the robot arm (∗ denotes an error overflow)
List of Figures

1.1 u vs. t for u(0) = 1 and various values of u'(0)
1.2 Simple pendulum
1.3 Periodic solution forming a cycle in the y1 × y2 plane
1.4 Method of lines. The shaded strip is the domain on which the diffusion PDE is defined. The approximations yi(t) are defined along the dashed lines
2.1 Errors due to perturbations for stable and unstable test equations. The original, unperturbed trajectories are in solid curves, the perturbed in dashed. Note that the y-scales in Figures (a) and (b) are not the same
3.1 The forward Euler method. The exact solution is the curved solid line. The numerical values are circled. The broken line interpolating them is tangential at the beginning of each step to the ODE trajectory passing through that point (dashed lines)
3.2 Absolute stability region for the forward Euler method
3.3 Approximate solutions for Example 3.1 using the forward Euler method, with h = 0.19 and h = 0.21. The oscillatory profile corresponds to h = 0.21; for h = 0.19 the qualitative behavior of the exact solution is obtained
3.4 Approximate solution and plausible mesh, Example 3.2
3.5 Absolute stability region for the backward Euler method
3.6 Approximate solution on a coarse uniform mesh for Example 3.2, using backward Euler (the smoother curve) and trapezoidal methods
3.7 Sawtooth function for … = 0.2
4.1 Classes of higher order methods
4.2 Approximate area under curve
4.3 Midpoint quadrature
4.4 Stability regions for p-stage explicit Runge-Kutta methods of order p, p = 1, 2, 3, 4. The inner circle corresponds to forward Euler, p = 1. The larger p is, the larger the stability region. Note the "ear lobes" of the 4th order method protruding into the right half plane
4.5 Schematic of a mobile robot
4.6 Toy car routes under constant steering: unperturbed (solid line), steering perturbed by … (dash-dot lines), and corresponding trajectories computed by the linear sensitivity analysis (dashed lines)
4.7 Energy error for the Morse potential using leapfrog with h = 2.3684
4.8 Astronomical orbit using the Runge-Kutta-Fehlberg method
4.9 Modified Kepler problem: approximate and exact solutions
5.1 Adams-Bashforth methods
5.2 Adams-Moulton methods
5.3 Zeros of the characteristic polynomial ρ for a 0-stable method
5.4 Zeros of the characteristic polynomial ρ for a strongly stable method. It is possible to draw a circle, contained in the unit circle, about each extraneous root
5.5 Absolute stability regions of Adams methods
5.6 BDF absolute stability regions. The stability regions are outside the shaded area for each method
5.7 Lorenz "butterfly" in the y1 × y3 plane
6.1 Two solutions u(t) for the BVP of Example 6.2
6.2 The function y1(t) and its mirror image y2(t) = y1(b − t), for …
8.1 Example 8.1: Exact and approximate solutions (indistinguishable) for … = 50, using the indicated mesh
8.2 Zero-structure of the matrix A, m = 3, N = 10. The matrix size is m(N + 1) = 33
8.3 Zero-structure of the permuted matrix A with separated boundary conditions, m = 3, k = 2, N = 10
8.4 Classes of higher order methods
8.5 Bifurcation diagram for Example 8.5: u vs. …
8.6 Solution for Example 8.6 with … = −1000 using an upwind discretization with a uniform step size h = 0.1 (solid line). The "exact" solution is also displayed (dashed line)
9.1 A function and its less smooth derivative
9.2 Stiff spring pendulum, ε = 10⁻³, initial conditions q(0) = (1 − ε^{1/4}, 0)^T, v(0) = 0
9.3 Perturbed (dashed lines) and unperturbed (solid line) solutions for Example 9.9
9.4 A matrix in Hessenberg form
10.1 Methods for the direct discretization of DAEs in general form
10.2 Maximum errors for the first 3 BDF methods for Example 10.2
10.3 A simple electric circuit
10.4 Results for a simple electric circuit: U2(t) (solid line) and the input Ue(t) (dashed line)
10.5 Two-link planar robotic system
10.6 Constraint path for (x2, y2)
Chapter 1

Ordinary Differential Equations

Ordinary differential equations (ODEs) arise in many instances when using mathematical modeling techniques for describing phenomena in science, engineering, economics, etc. In most cases the model is too complex to allow finding an exact solution or even an approximate solution by hand: an efficient computer simulation is required.

Mathematically, and computationally, a first cut at classifying ODE problems is with respect to the additional or side conditions associated with them. To see why, let us look at a simple example. Consider

    u''(t) + u(t) = 0,    0 ≤ t ≤ b,

where t is the independent variable (it is often, but not always, convenient to think of t as "time"), and u = u(t) is the unknown, dependent variable. Throughout this book we use the notation

    u' = du/dt,    u'' = d²u/dt²,

etc. We shall often omit explicitly writing the dependence of u on t.
The general solution of the ODE for u depends on two parameters α and β,

    u(t) = α sin(t + β).

We can therefore impose two side conditions:
Initial value problem: Given values u(0) = c1 and u'(0) = c2, the resulting initial value problem has a unique solution for any initial data c = (c1, c2)^T. Such solution curves are plotted for c1 = 1 and different values of c2 in Fig. 1.1.

Figure 1.1: u vs. t for u(0) = 1 and various values of u'(0).
Boundary value problem: Given values u(0) = c1 and u(b) = c2, it appears from Fig. 1.1 that for b = 2, say, if c1 and c2 are chosen carefully then there is a unique solution curve that passes through them, just like in the initial value case. However, consider the case where b = π. Now different values of u'(0) yield the same value u(b) = −u(0) (see again Fig. 1.1). So, if the given value is u(b) = c2 = −c1 then we have infinitely many solutions, whereas if c2 ≠ −c1 then no solution exists.
This simple illustration already indicates some important general issues. For initial value problems, one starts at the initial point with all the solution information and marches with it (in "time"): the process is local. For boundary value problems the entire solution information (for a second order problem this consists of u and u') is not locally known anywhere, and the process of constructing a solution is global in t. Thus we may expect many differences between the numerical procedures for these two problem classes, as discussed in this book.
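To make this concrete, here is a small computational sketch; it is only an illustration, and the use of Python with numpy/scipy is our assumption rather than anything prescribed in the text. It integrates u'' + u = 0 as an initial value problem for several values of u'(0) and shows that at b = π the value u(b) = −u(0) is obtained no matter what u'(0) is, which is exactly the source of trouble for the boundary value problem discussed above.

import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    # first order form of u'' + u = 0:  y = (u, u')^T,  y' = (u', -u)^T
    return [y[1], -y[0]]

b = np.pi
for c2 in [-1.0, 0.0, 1.0, 2.0]:        # various values of u'(0), with u(0) = 1
    sol = solve_ivp(f, (0.0, b), [1.0, c2], rtol=1e-10, atol=1e-12)
    print(f"u'(0) = {c2:5.2f}  ->  u(b) = {sol.y[0, -1]: .6f}")

# Every run prints u(b) close to -1 = -u(0): at b = pi the value u(b) carries
# no information about u'(0), so the BVP with u(b) != -u(0) has no solution
# and the one with u(b) = -u(0) has infinitely many.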
1.1 Initial Value Problems

The general form of an initial value problem (IVP) that we shall discuss is

    y' = f(t, y),    0 ≤ t ≤ b,    y(0) = c.    (1.1)

Here y and f are vectors with m components, y = y(t), and f is in general a nonlinear function of t and y. When f does not depend explicitly on t, we speak of the autonomous case. When describing general numerical methods we shall often assume the autonomous case simply in order to carry less notation around. The simple example from the beginning of this chapter is in the form (1.1) with m = 2, y = (u, u')^T, f = (u', −u)^T.

In (1.1) we assume, for simplicity of notation, that the starting point for t is 0. An extension to an arbitrary interval of integration [a, b] of everything which follows is straightforward.
Before proceeding further, we give three examples which are famous for being very simple on one hand and for representing important classes of applications on the other hand.

Example 1.1 (Simple pendulum) Consider a tiny ball of mass 1 attached to the end of a rigid, massless rod of length 1. At its other end the rod's position is fixed at the origin of a planar coordinate system (see Fig. 1.2).

Figure 1.2: Simple pendulum.

Denoting by θ the angle between the pendulum and the y-axis, the friction-free motion is governed by the ODE (cf. Example 1.5 below)

    θ'' = −g sin θ,    (1.2)

where g is the (scaled) constant of gravity. This is a simple, nonlinear ODE for θ. The initial position and velocity configuration translates into values for θ(0) and θ'(0). The linear, trivial example from the beginning of this chapter can be obtained from an approximation of (a rescaled) (1.2) for small θ. □
The pendulum problem is posed as a second order scalar ODE. Much of the software for initial value problems is written for first order systems in the form (1.1). A scalar ODE of order m,

    u^(m) = g(t, u, u', …, u^(m−1)),

can be rewritten as a first-order system by introducing a new variable for each derivative, with y1 = u:

    y1' = y2,
    y2' = y3,
    ⋮
    y_{m-1}' = y_m,
    y_m' = g(t, y1, y2, …, y_m).
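As a small illustration of this rewriting (a sketch only; the code and the use of Python are ours, not the book's), here is the pendulum equation (1.2) of Example 1.1, θ'' = −g sin θ, written as a first order system with y1 = θ, y2 = θ':

import math

g = 9.81  # (scaled) constant of gravity; the numerical value is an assumption

def pendulum_rhs(t, y):
    theta, omega = y               # y1 = theta, y2 = theta'
    return [omega,                 # y1' = y2
            -g * math.sin(theta)]  # y2' = -g * sin(y1)

Any code written for the general first order form (1.1) can now be applied directly to pendulum_rhs.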
Example 1.2 (Predator-prey model) Following is a basic, simple model from population biology which involves differential equations. Consider an ecological system consisting of one prey species and one predator species. The prey population would grow unboundedly if the predator were not present, and the predator population would perish without the presence of the prey. Denote

    y1(t): the prey population at time t
    y2(t): the predator population at time t
    α: (prey's birth rate) − (prey's natural death rate) (α > 0)
    β: probability of a prey and a predator to come together
    γ: predator's natural growth rate (without prey; γ < 0)
    δ: increase factor of growth of predator if prey and predator meet

Typical values for these constants are α = 0.25, β = 0.01, γ = −1.00, δ = 0.01. Writing down the balance equations

    y1' = α y1 − β y1 y2,
    y2' = γ y2 + δ y1 y2,

we obtain an ODE in the form (1.1) with m = 2 components, describing the time evolution of these populations.

Figure 1.3: Periodic solution forming a cycle in the y1 × y2 plane.

The qualitative question here is, starting from some initial values y(0) out of a set of reasonable possibilities, will these two populations survive or perish in the long run? As it turns out, this model possesses periodic solutions: starting, say, from y(0) = (80, 30)^T, the solution reaches the same pair of values again after some time period T, i.e. y(T) = y(0). Continuing to integrate past T yields a repetition of the same values, y(T + t) = y(t). Thus, the solution forms a cycle in the phase plane (y1, y2) (see Fig. 1.3). Starting from any point on this cycle the solution stays on the cycle for all time. Other initial values not on this cycle yield other periodic solutions with a generally different period. So, under these circumstances the populations of the predator and prey neither explode nor vanish for all future times, although their number never becomes constant.¹ □

¹ … time, and starting from points nearby the solution tends in time towards the limit cycle. The neutral stability of the cycle in our simple example, in contrast, is one reason why this predator-prey model is discounted among mathematical biologists as being too simple.
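A short computational sketch follows; it is not part of the book's text, and the balance equations are written in the standard Lotka-Volterra form consistent with the parameter descriptions above. Python with scipy is an assumption on our part; any initial value code would do.

import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, gamma, delta = 0.25, 0.01, -1.00, 0.01

def predator_prey(t, y):
    y1, y2 = y                               # prey, predator
    return [alpha * y1 - beta * y1 * y2,     # prey balance
            gamma * y2 + delta * y1 * y2]    # predator balance

sol = solve_ivp(predator_prey, (0.0, 100.0), [80.0, 30.0],
                rtol=1e-8, atol=1e-8, max_step=0.1)

# The two populations oscillate periodically; neither explodes nor vanishes:
print("prey range:    ", sol.y[0].min(), sol.y[0].max())
print("predator range:", sol.y[1].min(), sol.y[1].max())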
Example 1.3 (A diffusion problem) A typical diffusion problem in one space variable x and time t leads to the partial differential equation (PDE)

    ∂u/∂t = ∂/∂x ( p ∂u/∂x ) + g(t, x)

for an unknown function u(t, x) of two independent variables defined on a strip 0 ≤ x ≤ 1, t ≥ 0. For simplicity, assume that p = 1 and g is a known function. Typical side conditions which make this problem well-posed are

    u(0, x) = q(x),    0 ≤ x ≤ 1        (initial conditions)
    u(t, 0) = α(t),  u(t, 1) = β(t),    t ≥ 0        (boundary conditions)

To solve this problem numerically, consider discretizing in the space variable first. For simplicity assume a uniform mesh with spacing Δx = 1/(m + 1), and let yi(t) approximate u(xi, t), where xi = iΔx, i = 0, 1, …, m + 1. Replacing the second spatial derivative by a centered difference then gives the ODE system

    yi' = ( y_{i+1} − 2 yi + y_{i−1} ) / (Δx)² + g(t, xi),    i = 1, …, m,

with y0(t) = α(t) and y_{m+1}(t) = β(t) given. We have obtained an initial value ODE problem of the form (1.1) with the initial data ci = q(xi).

This technique of replacing spatial derivatives by finite difference approximations and solving an ODE problem in time is referred to as the method of lines. Fig. 1.4 illustrates the origin of the name. Its more general form is discussed further in Example 1.7 below. □

Figure 1.4: Method of lines. The shaded strip is the domain on which the diffusion PDE is defined. The approximations yi(t) are defined along the dashed lines.
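The following sketch (ours, not from the book) carries out this semi-discretization for the case p = 1; the concrete choices of q, α, β, g, of the mesh size and of the scipy integrator are illustrative assumptions only. For these particular data the exact PDE solution is u(t, x) = e^{−π²t} sin(πx), which the final line compares against.

import numpy as np
from scipy.integrate import solve_ivp

m = 19
dx = 1.0 / (m + 1)
x = np.linspace(dx, 1.0 - dx, m)        # interior mesh points x_1, ..., x_m

q     = lambda x: np.sin(np.pi * x)     # initial condition u(0, x)
alpha = lambda t: 0.0                   # boundary value u(t, 0)
beta  = lambda t: 0.0                   # boundary value u(t, 1)
g     = lambda t, x: 0.0                # source term

def semidiscrete_rhs(t, y):
    # pad with the boundary values y_0(t) = alpha(t), y_{m+1}(t) = beta(t)
    ypad = np.concatenate(([alpha(t)], y, [beta(t)]))
    return (ypad[2:] - 2.0 * ypad[1:-1] + ypad[:-2]) / dx**2 + g(t, x)

sol = solve_ivp(semidiscrete_rhs, (0.0, 0.1), q(x), method="BDF", rtol=1e-6)
print(np.max(np.abs(sol.y[:, -1] - np.exp(-np.pi**2 * 0.1) * q(x))))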
We now return to the general initial value problem for (1.1). Our intention in this book is to keep the number of theorems down to a minimum: the references which we quote have them all in much detail. But we will nonetheless write down those which are of fundamental importance, and the one just below captures the essence of the (relative) simplicity and locality of initial value ODEs. For the notation that is used in this theorem and throughout the book, we refer to §1.6.
Theorem 1.1 Let f(t, y) be continuous for all (t, y) in a region D = {0 ≤ t ≤ b, −∞ < |y| < ∞}. Moreover, assume Lipschitz continuity in y: there exists a constant L such that for all (t, y) and (t, ŷ) in D,

    |f(t, y) − f(t, ŷ)| ≤ L |y − ŷ|.    (1.4)

Then

1. For any c ∈ R^m there exists a unique solution y(t) throughout the interval [0, b] for the initial value problem (1.1). This solution is differentiable.

2. The solution y depends continuously on the initial data: if ŷ also satisfies the ODE (but not the same initial values) then

    |y(t) − ŷ(t)| ≤ e^{Lt} |y(0) − ŷ(0)|.

The theorem thus guarantees not only existence and uniqueness but also continuous dependence of the solution on the data, in other words a well-posed problem, provided that the conditions of the theorem hold. Let us check these conditions: if f is differentiable in y (we shall automatically assume this throughout) then L can be taken as a bound on the first derivatives of f with respect to y. Denote by f_y the Jacobian matrix,

    (f_y)_{ij} = ∂f_i / ∂y_j,    1 ≤ i, j ≤ m.
Trang 23We can write
f(ty);f(t ^y) =
Z 1 0
d
dsf(t ^y+s(y;y^)) ds
=
Z 1 0
fy(t ^y+s(y;y^)) (y;y^) ds:
Therefore, we can choose L = sup( t y )2D
kfy(ty)k
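As a concrete illustration (not worked out in the text above), consider the pendulum of Example 1.1 written in first order form, y = (θ, θ')^T and f(t, y) = (y2, −g sin y1)^T. Then

    f_y = [     0        1 ]
          [ −g cos y1    0 ],

so in the maximum norm ‖f_y(t, y)‖ ≤ max(1, g) for all (t, y), and L = max(1, g) is a Lipschitz constant on all of D.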
In many cases we must restrict D in order to be assured of the existence of such a (finite) bound L. For instance, if we restrict D to include only bounded y satisfying |y − c| ≤ γ, and on this D both the Lipschitz bound (1.4) holds and |f(t, y)| ≤ M, then a unique existence of the solution is guaranteed for 0 ≤ t ≤ min(b, γ/M), since this keeps the solution inside D.
Reader's advice: Before continuing our introduction, let us remark that a reader who is interested in getting to the numerics of initial value problems as soon as possible may skip the rest of this chapter and the next, at least on first reading.
1.2 Boundary Value Problems
The general form of a boundary value problem (BVP) which we consider is a nonlinear first order system of m ODEs subject to m independent (generally nonlinear) boundary conditions,

    y' = f(t, y),    0 ≤ t ≤ b,
    g(y(0), y(b)) = 0.    (1.7)

We have already seen in the beginning of the chapter that in those cases where solution information is given at both ends of the integration interval (or, more generally, at more than one point in time), nothing general like Theorem 1.1 can be expected to hold. Methods for finding a solution, both analytically and numerically, must be global and the task promises to be generally harder than for initial value problems. This basic difference is manifested in the current status of software for boundary value problems, which is much less advanced or robust than that for initial value problems.

Of course, well-posed boundary value problems do arise on many occasions.
Trang 24Example 1.4 (Vibrating spring) The small displacementuof a vibrating
spring obeys a linear di erential equation
;(p(t)u0)0+q(t)u = r(t)
where p(t) > 0 and q(t) 0 for all 0 t b (Such an equation describes
also many other physical phenomena in one space variable t.) If the spring
is xed at one end and is left to oscillate freely at the other end then we get
the boundary conditions
the energy in the spring), as shown and discussed in many books on nite
element methods, e.g Strang & Fix 90] 2
Another exampleof a boundary value problem is provided by the
predator-prey system of Example 1.2, if we wish to nd the periodic solution (whose
existence is evident from Fig 1.3) We can specify y(0) = y(b) However,
note thatb is unknown, so the situation is more complex Further treatment
is deferred to Chapter 6 and Exercise 7.5 A complete treatment of nding
periodic solutions for ODE systems falls outside the scope of this book
What can be generally said about existence and uniqueness of solutions to a general boundary value problem (1.7)? We may consider the associated initial value problem (1.1) with the initial values c as a parameter vector to be found. Denoting the solution of such an IVP by y(t; c), we wish to find the solution(s) of the nonlinear algebraic system of m equations

    g(c, y(b; c)) = 0.    (1.8)

However, in general there may be one, many or no solutions for a system like (1.8). We delay further discussion to Chapter 6.
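To make (1.8) concrete, here is a small sketch (ours, not the book's; Python with scipy is an assumption) for the scalar BVP u'' + u = 0, u(0) = 1, u(2) = 0.5. The only unknown initial value is c = u'(0), so (1.8) reduces to a single scalar equation in c, solved here with a standard root finder. This is precisely the shooting idea taken up in Chapter 7.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

b, u0, ub = 2.0, 1.0, 0.5

def residual(c):
    # g(c, y(b; c)): integrate the IVP with y(0) = (u0, c) and return u(b; c) - ub
    sol = solve_ivp(lambda t, y: [y[1], -y[0]], (0.0, b), [u0, c], rtol=1e-10)
    return sol.y[0, -1] - ub

c_star = brentq(residual, -10.0, 10.0)   # root of the scalar version of (1.8)
print("u'(0) =", c_star)                 # exact answer: (0.5 - cos 2) / sin 2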
1.3 Differential-Algebraic Equations

Both the prototype IVP (1.1) and the prototype BVP (1.7) refer to an explicit ODE system

    y' = f(t, y).    (1.9)

A more general form is an implicit ODE

    F(t, y, y') = 0,    (1.10)

where the Jacobian matrix ∂F(t, u, v)/∂v is assumed nonsingular for all argument values in an appropriate domain. In principle it is then often possible to solve for y' in terms of t and y, obtaining the explicit ODE form (1.9). However, this transformation may not always be numerically easy or cheap to realize (see Example 1.6 below). Also, in general there may be additional questions of existence and uniqueness; we postpone further treatment until Chapter 9.
Consider next another extension of the explicit ODE, that of an ODE with constraints:

    x' = f(t, x, z),    (1.11a)
    0 = g(t, x, z).    (1.11b)

Here the ODE (1.11a) for x(t) depends on additional algebraic variables z(t), and the solution is forced in addition to satisfy the algebraic constraints (1.11b). The system (1.11) is a semi-explicit system of differential-algebraic equations (DAE). Obviously, we can cast (1.11) in the form of an implicit ODE (1.10) for the unknown vector y = (x, z)^T; however, the Jacobian matrix ∂F/∂y' is then no longer nonsingular.
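To see this concretely (a short verification, not spelled out in the text above): writing (1.11) as (1.10) with y = (x, z)^T gives

    F(t, y, y') = [ x' − f(t, x, z) ]
                  [     g(t, x, z) ]  = 0,

and therefore

    ∂F/∂y' = [ I  0 ]
             [ 0  0 ],

which is singular, since the constraint rows contain no derivatives at all.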
Example 1.5 (Simple pendulum revisited) The motion of the simple pendulum of Fig. 1.2 can be expressed in terms of the Cartesian coordinates (x1, x2) of the tiny ball at the end of the rod. With z(t) a Lagrange multiplier, Newton's equations of motion give

    x1'' = −z x1,
    x2'' = −z x2 − g,

together with the constraint that the rod has unit length,

    x1² + x2² = 1.

After rewriting the two second-order ODEs as four first order ODEs, we obtain a DAE system of the form (1.11) with four equations in (1.11a) and one in (1.11b).

In this very simple case of a multibody system, the change of variables x1 = sin θ, x2 = −cos θ allows elimination of z by simply multiplying the ODE for x1 by x2 and the ODE for x2 by x1 and subtracting. This yields the simple ODE (1.2) of Example 1.1. Such a simple elimination procedure is usually impossible in more general situations, though. □
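For later reference, here is a sketch (ours, not from the book) of this pendulum DAE written as a residual F(t, y, y') = 0 in the implicit form (1.10), with y = (x1, x2, v1, v2, z); a residual function of this kind is what fully implicit DAE codes discussed in Chapter 10 typically expect. The gravity value is an arbitrary choice.

import numpy as np

g = 9.81  # (scaled) constant of gravity; value assumed

def pendulum_dae_residual(t, y, yp):
    x1, x2, v1, v2, z = y
    return np.array([yp[0] - v1,             # x1' = v1
                     yp[1] - v2,             # x2' = v2
                     yp[2] + z * x1,         # v1' = -z*x1
                     yp[3] + z * x2 + g,     # v2' = -z*x2 - g
                     x1**2 + x2**2 - 1.0])   # algebraic constraint (1.11b)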
The difference between an implicit ODE (with a nonsingular Jacobian matrix) and a DAE is fundamental. Consider the simple example

    x' = z,
    0 = x − t.

Clearly, the solution is x = t, z = 1, and no initial or boundary conditions are needed. In fact, if an arbitrary initial condition x(0) = c is imposed it may well be inconsistent with the DAE (unless c = 0, in which case this initial condition is just superfluous). We refer to Chapter 9 for more on this. Another fundamental point to note is that even if consistent initial values are given we cannot expect a simple, general existence and uniqueness theorem like Theorem 1.1 to hold for (1.11). The nonlinear equations (1.11b) alone may have any number of solutions. Again we refer the reader to Chapter 9 for more details.
1.4 Families of Application Problems
Initial-value and boundary-value problems for ODE and DAE systems arise in a wide variety of applications. Often an application generates a family of problems which share a particular system structure and/or solution requirements. Here we briefly mention three families of problems from important applications. The notation we use is typical for these applications, and is not necessarily consistent with (1.1) or (1.11). You don't need to understand the details given in this section in order to follow the rest of the text; this material is supplemental.
Example 1.6 (Mechanical systems) When attempting to simulate the motion of a vehicle for design or in order to simulate safety tests, or in physically based modeling in computer graphics, or in a variety of instances in robotics, one encounters the need for a fast, reliable simulation of the dynamics of multibody systems. The system considered is an assembly of rigid bodies (e.g. comprising a car suspension system). The kinematics define how these bodies are allowed to move with respect to one another. Using generalized position coordinates q = (q1, …, qn)^T for the bodies, with m (so-called holonomic) constraints g_j(t, q(t)) = 0, j = 1, …, m, the equations of motion can be written as

    d/dt ( ∂L/∂q_i' ) − ∂L/∂q_i = 0,    i = 1, …, n,

where L = T − U − Σ_j λ_j g_j is the Lagrangian, T is the kinetic energy and U is the potential energy. See almost any book on classical mechanics, for example Arnold [1], or the lighter Marion & Thornton [65]. The resulting equations of motion can be written as

    q' = v,    (1.12a)
    M(t, q) v' = f(t, q, v) − G^T(t, q) λ,    (1.12b)
    0 = g(t, q),    (1.12c)

where G = ∂g/∂q, M is a positive definite generalized mass matrix, f are the applied forces (other than the constraint forces) and v are the generalized velocities. The system sizes n and m depend on the chosen coordinates q. Typically, using relative coordinates (describing each body in terms of its near neighbor) results in a smaller but more complicated system. If the topology of the multibody system (i.e. the connectivity graph obtained by assigning a node to each body and an edge for each connection between bodies) does not have closed loops, then with a minimal set of coordinates one can eliminate all the constraints (i.e. m = 0) and obtain an implicit ODE in (1.12). For instance, Example 1.1 uses a minimal set of coordinates, while Example 1.5 does not, for a particular multibody system without loops. If the multibody system contains loops (e.g. a robot arm, consisting of two links, with the path of the "hand" prescribed) then the constraints cannot be totally eliminated in general and a DAE must be considered in (1.12), even if a minimal set of coordinates is used. □
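As a small concrete instance (a specialization not written out in the text above): for the pendulum of Example 1.5, take q = (x1, x2)^T, v = q', M = I, kinetic energy T = (v1² + v2²)/2, potential energy U = g x2, and the single constraint g_1(q) = (x1² + x2² − 1)/2 = 0, so that G = ∂g_1/∂q = (x1, x2). Then (1.12) reads

    q' = v,
    v' = (0, −g)^T − (x1, x2)^T λ,
    0 = (x1² + x2² − 1)/2,

which is exactly the DAE of Example 1.5, with λ playing the role of the multiplier z.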
Example 1.7 (Method of lines) The diffusion equation of Example 1.3 is an instance of a time-dependent partial differential equation (PDE) in one space variable. Time-dependent PDEs naturally arise also in more than one space dimension, with higher order spatial derivatives, and as systems of PDEs. The process described in Example 1.3 is general: such a PDE can be transformed into a large system of ordinary differential equations …

… conditions for an optimum in this problem are found by considering the Hamiltonian function … yield the state equations (1.14a), and in addition we have ordinary differential equations …