Numerical Methods for Ordinary Differential Equations, Episode 11




also the accumulated effect of errors generated in previous steps. We present a simplified discussion of this phenomenon in this subsection, and discuss the limitations of this discussion in Subsection 421.

Suppose a sequence of approximations

y_1 ≈ y(x_1),
y_2 ≈ y(x_2),
  ⋮
y_{n−1} ≈ y(x_{n−1}),

has been computed, and we are now computing step n. If, for the moment, we ignore errors in previous steps, the value of y_n can be evaluated using a Taylor expansion where, for implicit methods, we need to take account of the fact that f(y_n) is also being calculated. We have

y(x_n) − y_n − hβ_0(f(y(x_n)) − f(y_n)) = C_{p+1} h^{p+1} y^{(p+1)}(x_n) + O(h^{p+2}).

In this informal discussion, we not only ignore the term O(h^{p+2}) but also treat the value of h^{p+1} y^{(p+1)}(x_{n−i}) as constant. This is justified in a local sense. That is, if we confine ourselves to a finite sequence of steps preceding step n, then the variation in values of this quantity will also be O(h^{p+2}), and we ignore such quantities. Furthermore, if

y(x_n) − y_n − hβ_0(f(y(x_n)) − f(y_n)) ≈ C_{p+1} h^{p+1} y^{(p+1)}(x_n),

then the assumption that f satisfies a Lipschitz condition will imply that

y(x_n) − y_n ≈ C_{p+1} h^{p+1} y^{(p+1)}(x_n)

and that

h(f(y(x_n)) − f(y_n)) = O(h^{p+2}).

With the contributions of terms of this type thrown into the O(h^{p+2}) category, and hence capable of being ignored from the calculation, we can write a difference equation for the error in step n, which will be written as ε_n = y(x_n) − y_n, in the form

ε_n − Σ_{i=1}^{k} α_i ε_{n−i} = K h^{p+1},


where K is a representative value of C_{p+1} y^{(p+1)}.

For a stable consistent method, the solution of this equation takes the form

ε_n ≈ −α'(1)^{−1} K h^{p+1} n + Σ_{i=1}^{k} η_i λ_i^n,    (420a)

where the coefficients η_i, i = 1, 2, ..., k, depend on initial values and λ_i, i = 1, 2, ..., k, are the solutions to the polynomial equation α(λ^{−1}) = 0.

The factor −α'(1)^{−1} that occurs in (420a) can be written in a variety of forms, and we have

−α'(1) = ρ'(1) = β(1) = σ(1) = α_1 + 2α_2 + ··· + kα_k.

The value of −C α'(1)^{−1} is known as the 'error constant' for the method and represents the factor by which h^{p+1} y^{(p+1)} must be multiplied to give the contribution from each step to the accumulated error. Since the method is

assumed to be stable, the terms of the form η_i λ_i^n can be disregarded compared with the linearly growing term −α'(1)^{−1} h^{p+1} n K. If the integration is carried out to a specific output value x, and n steps are taken to achieve this result, then hn = x − x_0. In this case we can make a further simplification and write the accumulated error as approximately

−(x − x_0) α'(1)^{−1} h^p C y^{(p+1)}(x).

In the next subsection, these ideas will be discussed further.
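As a numerical illustration (not from the book), the h^p behaviour of the accumulated error can be checked directly: the sketch below applies the second order Adams–Bashforth method to y' = −y with exact starting values and compares the accumulated errors at x = 1 for stepsizes h and h/2; for a method of order p = 2 the ratio should be close to 2² = 4.

```python
import math

def ab2_error(h, x_end=1.0):
    """Global error of 2nd-order Adams-Bashforth on y' = -y, y(0) = 1.

    Exact starting values are used so that only the method's own
    truncation errors accumulate. Returns |y(x_end) - y_N|.
    """
    f = lambda y: -y
    n = round(x_end / h)
    y_prev, y = 1.0, math.exp(-h)          # y_0 and y_1 taken exact
    for _ in range(n - 1):
        y_prev, y = y, y + h * (1.5 * f(y) - 0.5 * f(y_prev))
    return abs(math.exp(-x_end) - y)

e1, e2 = ab2_error(0.01), ab2_error(0.005)
print(e1 / e2)   # close to 4, consistent with error proportional to h^2
```

Halving h again would reduce the error by a further factor of about 4, which is the h^p proportionality of the accumulated error formula above.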

421 Further remarks on error growth

In Subsection 420 we gave an informal argument that, over many steps, there is a contribution to the accumulated error from step n of approximately −α'(1)^{−1} C_{p+1} y^{(p+1)}(x_n) h^{p+1}. Since we are interested in the effect of this contribution at some future point x, we can consider the differential equation together with two of its solutions, which correspond respectively to the exact solution and to the solution perturbed by the error introduced in step n.

This suggests the possibility of analysing the development of numerical errors through the differential equation

z'(x) = (∂f(y(x))/∂y) z(x) + y^{(p+1)}(x),    z(x_0) = 0.    (421a)


Figure 421(i) Development of accumulated errors in a single step

Using this equation, we might hope to be able to approximate the error after n steps have been performed as

−α'(1)^{−1} C_{p+1} h^p z(x_n),

because the linear term in (421a) expresses the rate of growth of the separation of an already perturbed approximation and the non-linear term, when scaled by −α'(1)^{−1} C_{p+1} h^p, expresses the rate at which new errors are introduced as further steps are taken. The negative sign is consistent with the standard convention that errors are interpreted to mean the exact solution minus the approximation.

To turn this idea into a formal result it is possible to proceed in two steps. In the first step, asymptotic approximations are made. In the second, the errors in making these approximations are bounded and estimated so that they can all be bundled together in a single term which tends to zero more rapidly as h → 0 than the asymptotic approximation to the error.

The second of these steps will not be examined in detail, and the first step will be described in terms of the diagram given in Figure 421(i). In this figure, y(x) is the exact solution and ȳ(x) is the function y(x) + α'(1)^{−1} C_{p+1} h^p z(x). The function ŷ(x) is the exact solution to the differential equation but with initial value at x_{n−1} set to ȳ(x_{n−1}). In the single step from x_{n−1} to x_n, the perturbed approximation ȳ drifts away from ŷ by an amount which is, to within O(h^{p+2}), the error introduced in this step.


422 The underlying one-step method

Although linear multistep methods seem to be at the opposite end of the spectrum from Runge–Kutta methods, there is a very close link between them. Suppose the method [α, β] is preconsistent and stable, and consider the equation

η − α_1 η^{−1} − α_2 η^{−2} − ··· − α_k η^{−k} − β_0 D − β_1 η^{−1}D − β_2 η^{−2}D − ··· − β_k η^{−k}D = 0.    (422a)

Although η does not represent a Runge–Kutta method, it does represent a process for progressing a numerical approximation through a single time step. Suppose that the method is started using

y_i = y(x_0) + Σ_{t∈T} (η^i(t) h^{r(t)} / σ(t)) F(t)(y(x_0)),    i = 0, 1, 2, ..., k − 1,

corresponding to the group element η^i; then this value of y_i will persist for i = k, k + 1, .... We will show this formally in Theorem 422C.

In the meantime, we remark that convergence of the formal series associated with η^i is not assured, even for i = 1, unless the function f and the value of h are restricted in some appropriate way. In this sense we can regard these 'B-series' as formal Taylor series.

What we really want is not η satisfying (422a) but the mapping Φ, say, which corresponds to it. If exponentiation of Φ is taken to denote compositions or, for negative powers, compositions of the inverse mapping, then we want to be able to define Φ by

id − α_1 Φ^{−1} − α_2 Φ^{−2} − ··· − α_k Φ^{−k} − hβ_0 f − hβ_1(f ∘ Φ^{−1}) − hβ_2(f ∘ Φ^{−2}) − ··· − hβ_k(f ∘ Φ^{−k}) = 0.    (422b)

Because the corresponding member of G1 can be evaluated up to any required order of tree, it is regarded as satisfactory to concentrate on this representation.

Theorem 422A For any preconsistent, stable linear multistep method [α, β], there exists a member of the group G1 satisfying (422a).

Proof By preconsistency, Σ_{i=1}^{k} α_i = 1. Hence, (422a) is satisfied in the case of t = ∅, in the sense that if both sides are evaluated for the empty tree, then they each evaluate to zero. Now consider a tree t with r(t) > 0 and assume


that

η(u) − α_1 η^{−1}(u) − α_2 η^{−2}(u) − ··· − α_k η^{−k}(u) − β_0 D(u) − β_1 (η^{−1}D)(u) − β_2 (η^{−2}D)(u) − ··· − β_k (η^{−k}D)(u) = 0

is satisfied for every tree u satisfying r(u) < r(t). We will prove that there exists a value of η(t) such that this equation is also satisfied if u is replaced by t. The coefficient of η(t) in η^{−i}(t) is equal to i(−1)^{r(t)}, and there are no other terms in η^{−i}(t) with orders greater than r(t) − 1. Furthermore, all terms on the right-hand side contain only terms with orders less than r(t). Hence, to satisfy (422a), with both sides evaluated at t, it is only necessary to solve the

Definition 422B Corresponding to a linear multistep method [α, β], the member of G1 satisfying (422a) represents the 'underlying one-step method'.

As we have already remarked, the mapping Φ in (422b), if it exists in more than a notional sense, is really the object of interest, and this really is the underlying one-step method.

Theorem 422C Let [α, β] denote a preconsistent, stable linear multistep method and let η denote a solution of (422a). Suppose that y_i is represented by η^i for i = 0, 1, 2, ..., k − 1; then y_i is represented by η^i for i = k, k + 1, ....

Proof The proof is by induction, and it will only be necessary to show that y_k is represented by η^k, since this is a typical case. Multiply (422a) on the left


423 Weakly stable methods

The stability requirement for linear multistep methods specifies that all zeros of the polynomial ρ should lie in the closed unit disc with only simple zeros on the boundary. There is always a zero at 1, because of consistency, and there may or may not be other zeros on the boundary. We show in Subsection 441 that for a k-step method, with k even, the maximum possible order is k + 2. For methods with this maximal order, it turns out that all zeros of ρ lie on the unit circle and we are forced to take these methods seriously. We will write methods in the [α, β] terminology. A classic example is

y_n = y_{n−2} + 2h f(y_{n−1}),    (423a)

and this is known as the 'leapfrog method'. Methods based on Newton–Cotes formulae were promoted by Milne (1953), and these all fall into this family. The presence of additional zeros (that is, in addition to the single zero required by consistency) on the unit circle leads to the phenomenon known as 'weak stability'.

A characteristic property of weakly stable methods is their difficulty in dealing with the long term integration of dissipative problems. For example, if an approximation to the solution of y' = −y is attempted using (423a), the difference equation for the computed results is

y_n = y_{n−2} − 2h y_{n−1}.

Substituting the values of λ and µ into (423d), we find

y_n ≈ A exp(−nh) + B(−1)^n exp(nh).

For high values of n, the second term, which represents a parasitic solution, eventually dominates the solution and produces a very poor approximation. This is in contrast to what happens for the differential equation y' = y, for which the solution to the corresponding difference equation takes the form y_n ≈ A exp(nh) + B(−1)^n exp(−nh). In this case, the first term again corresponds to the true solution, but the second term will always be less significant.
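This parasitic growth is easy to reproduce (an illustration, not from the book): applying the leapfrog recursion to y' = −y with exact starting values, the computed solution eventually grows and alternates in sign, exactly as the term B(−1)^n exp(nh) predicts.

```python
import math

def leapfrog_decay(h, x_end):
    """Leapfrog applied to y' = -y, y(0) = 1: y_n = y_{n-2} - 2h*y_{n-1}."""
    n = round(x_end / h)
    ys = [1.0, math.exp(-h)]               # exact starting values
    for _ in range(n - 1):
        ys.append(ys[-2] - 2 * h * ys[-1])
    return ys

ys = leapfrog_decay(0.1, 20.0)
# The exact solution at x = 20 is about 2.1e-9, but the parasitic mode
# has taken over: the final iterates are large and alternate in sign.
print(abs(ys[-1]) > 1.0, ys[-1] * ys[-2] < 0)   # True True
```

Even though the starting values are exact to machine precision, the small component of the parasitic mode introduced by rounding and truncation is amplified by a factor of roughly exp(h) per step.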


be available without further computation. Halving the stepsize is not so convenient because new approximations to y(x) and y'(x) are required at points intermediate to the information that has already been computed. However, both these are special cases, and it is usually required to change the stepsize by a ratio that is perhaps greater than 0.5 and less than 2.0.

We consider a very simple model example in which new values are simply found by interpolation and the integration resumed using the modified data. Another approach, which we will also consider, is where a generalized version of the numerical method is defined specific to whatever sequence of stepsizes actually arises.

We now examine some basic stability questions arising from the interpolation option applied to an Adams method. At the end of step n, besides an approximation to y(x_n), approximations are available for hy'(x_n), hy'(x_n − h), ..., hy'(x_n − (p − 1)h). We need to replace these derivative approximations by approximations to rhy'(x_n), rhy'(x_n − rh), ..., rhy'(x_n − (p − 1)rh), and these can be evaluated by the interpolation formula
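The interpolation formula itself is not reproduced in this excerpt. Purely as a sketch of the idea, with a hypothetical function name and calling convention, the stored scaled derivatives can be rescaled by Lagrange interpolation through the known values:

```python
def rescale_derivatives(hd, r):
    """Given hd[j] ~ h*y'(x_n - j*h) for j = 0..p-1, return approximations
    to r*h*y'(x_n - j*r*h) by Lagrange interpolation through the stored
    values (an illustration only; not the book's formula)."""
    p = len(hd)
    nodes = [-j for j in range(p)]            # abscissae in units of h

    def interp(t):
        # Lagrange interpolation of h*y' at position t (in units of h)
        total = 0.0
        for i in range(p):
            w = 1.0
            for m in range(p):
                if m != i:
                    w *= (t - nodes[m]) / (nodes[i] - nodes[m])
            total += w * hd[i]
        return total

    return [r * interp(-j * r) for j in range(p)]

# With h*y'(x) proportional to x^2 sampled at 0, -1, -2 the degree-2
# interpolation is exact, so the rescaled values are exact as well:
print(rescale_derivatives([0.0, 1.0, 4.0], 0.5))   # [0.0, 0.125, 0.5]
```

Since the interpolating polynomial has degree p − 1, the rescaled values inherit an O(h^p) interpolation error, which is what makes the stability analysis below necessary.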


the influence matrix

and because J is nilpotent, the dependence of quantities computed in a particular step eventually becomes insignificant. However, whenever the stepsize is altered by a factor r, the influence matrix becomes

and this is, in general, not nilpotent. If, for example, the interpolation approach with stepsize ratio r is repeated over many steps, then (424a) might not be power-bounded and unstable behaviour will result. In the case p = 3,

As an example of the alternative technique, in which the numerical method is modified to allow for irregular mesh spacing, consider the BDF3 method. Suppose that approximate solution values are known at x_{n−1}, x_n − h(1 + r_2^{−1}) and x_n − h(1 + r_2^{−1} + (r_2 r_1)^{−1}), where r_2 and r_1 are the most recent stepsize ratios. We now wish to compute y(x_n) using a formula of the form


Stability of this variable stepsize version of the BDF3 method will hinge on the boundedness of products of matrices of the form

An extreme case will be where r_1 and r_2 are equal and as large as possible, subject to M having bounded powers. It is easy to verify that this greatest rate of continual increase in stepsize corresponds to

r_1 = r_2 = r* = (1 + √5)/2.

It is interesting that an arbitrary sequence of stepsize change ratios, in the interval (0, r*], still guarantees stable behaviour.

42.3 Show that the norm of the product of an arbitrary sequence of matrices of the form (424b) is bounded as long as each r lies in the interval [0, r*].


Figure 430(i) Stability region for the second order Adams–Bashforth method

so that stable behaviour occurs if hq = z, where z is such that the equation

Just as for Runge–Kutta methods, a consistent explicit linear multistep method has a bounded stability region and therefore cannot be A-stable. We therefore explore implicit methods as a source of appropriate algorithms for the solution of stiff problems. It will be found that A-stability is a very restrictive property, in that it is incompatible with an order greater than 2. Also in this section, we consider a non-linear stability property, known as G-stability, which is a multistep counterpart of the algebraic stability introduced in Chapter 3.


set as the open stability region. Write the difference equation in the form

As a starting point in determining the stability region, it is convenient to evaluate the points on the boundary of the unit circle and to note that the mapping

w ↦ α(w^{−1}) / β(w^{−1})    (431c)

traces out a set of points which includes the boundary of the stability region. In particular cases it is easy to determine the exact boundary. Since w^{−1} maps the unit circle to itself, while changing the sense of rotation, it is equivalent to replace (431c) by

w ↦ α(w) / β(w).    (431d)

This procedure is known as the 'boundary locus method' for determining stability regions, and we give some examples of its use in the next subsection.

A second procedure for determining stability regions is based on the idea of the 'type of a polynomial'. That is, if P is a polynomial of degree n then the type is a triple (n_1, n_2, n_3), where n_1, n_2 and n_3 are non-negative integers with sum exactly n. The interpretation is that n_1 is the number of zeros of P


in the open unit disc, n_2 is the number of zeros on the unit circle and n_3 is the number of zeros outside the closed unit disc. If we are willing to concentrate on the open stability region of a specific method, we can simplify the discussion to the question of determining whether or not the type of P is (n, 0, 0). We will refer to such a polynomial as being 'strongly stable'. Polynomials can be tested for this property recursively, using the following result:

Theorem 431A A polynomial P_n, given by

P_n(w) = a_0 w^n + a_1 w^{n−1} + ··· + a_{n−1} w + a_n,

where a_0 ≠ 0 and n ≥ 2, is strongly stable if and only if

|a_n| < |a_0|    (431e)

and the polynomial P_{n−1} of degree n − 1, with coefficients b_i = ā_0 a_i − a_n ā_{n−i}, i = 0, 1, ..., n − 1, is strongly stable.

Proof First note that (431e) is necessary for strong stability because if it were not true, the product of the zeros could not have a magnitude less than 1. Hence, we assume that this is the case and it remains to prove that P_n is strongly stable if and only if the same property holds for P_{n−1}. It is easy to verify that

wP_{n−1}(w) = ā_0 P_n(w) − a_n w^n P̄_n(w^{−1}),

where P̄_n is obtained from P_n by conjugating its coefficients. By Rouché's theorem, wP_{n−1}(w) has n zeros in the open unit disc if and only if the same property is true for P_n(w), and the result follows. ∎

The result of this theorem is often referred to as the Schur criterion. In the case of n = 2, it leads to the two conditions

|a_0|² − |a_2|² > 0,    (431f)
(|a_0|² − |a_2|²)² − |ā_0 a_1 − a_2 ā_1|² > 0.    (431g)

To apply the Schur criterion to the determination of the stability region for a k-step method, we need to ask for which z the polynomial given by

P(w) = w^k (α(w^{−1}) − zβ(w^{−1}))

is strongly stable. We present some examples of the use of this test in Subsection 433.
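As an illustration (not the book's code), the recursive test can be implemented directly; the sketch below uses the conjugated degree-lowering reduction, which for n = 2 reproduces conditions (431f) and (431g), and applies it to the AB2 stability polynomial P(w) = w² − (1 + (3/2)z)w + z/2 for sample values of z.

```python
def strongly_stable(coeffs, tol=1e-12):
    """Schur test: True iff all zeros of
    a0*w^n + a1*w^(n-1) + ... + an lie in the open unit disc.

    coeffs = [a0, ..., an] with a0 != 0; complex values allowed.
    Each pass reduces the degree by one, as in Theorem 431A.
    """
    a = [complex(c) for c in coeffs]
    while len(a) > 1:
        a0, an = a[0], a[-1]
        if abs(an) >= abs(a0) - tol:
            return False                      # leading/trailing test fails
        # coefficients of the reduced polynomial of one lower degree
        a = [a0.conjugate() * a[i] - an * a[-1 - i].conjugate()
             for i in range(len(a) - 1)]
    return True

# Stability polynomial of AB2: P(w) = w^2 - (1 + 3z/2)w + z/2.
# z = -0.5 lies inside the stability region, z = -3 does not,
# and z = -1 lies on the boundary (a zero on the unit circle).
print(strongly_stable([1, -0.25, -0.25]))   # True  (z = -0.5)
print(strongly_stable([1, 3.5, -1.5]))      # False (z = -3)
print(strongly_stable([1, 0.5, -0.5]))      # False (z = -1)
```

The tolerance guards against declaring strong stability when a zero sits numerically on the unit circle, as in the boundary case z = −1.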


Algorithm 432α Boundary locus method for low order Adams–Bashforth

432 Examples of the boundary locus method

The first example is for the second order Adams–Bashforth method (430a), for which (431c) takes the form

z = (1 − w^{−1}) / ((3/2)w^{−1} − (1/2)w^{−2}).

For w = exp(iθ) and θ ∈ [0, 2π], for points on the unit circle, we have z values on the (possibly extended) boundary of the stability region given by

The MATLAB code given in Algorithm 432α shows how this is done, and the boundary traced out is exactly as in Figure 430(i).
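Algorithm 432α's MATLAB listing did not survive in this excerpt. As a sketch of the same computation (in Python rather than MATLAB, and not the book's code), the boundary locus of the second order Adams–Bashforth method can be evaluated pointwise:

```python
import cmath, math

def ab2_boundary(theta):
    """Boundary locus point for the 2nd-order Adams-Bashforth method:
    z = (1 - w^{-1}) / ((3/2)w^{-1} - (1/2)w^{-2}) with w = exp(i*theta)."""
    w_inv = cmath.exp(-1j * theta)
    return (1 - w_inv) / (1.5 * w_inv - 0.5 * w_inv ** 2)

# Sample the locus; it starts at the origin and crosses the negative
# real axis at z = -1, the left end of the real stability interval.
locus = [ab2_boundary(2 * math.pi * j / 360) for j in range(361)]
print(abs(ab2_boundary(0.0)) < 1e-12)            # True: locus passes through 0
print(abs(ab2_boundary(math.pi) + 1) < 1e-12)    # True: crossing at z = -1
```

Plotting the real and imaginary parts of `locus` against each other reproduces the closed curve of Figure 430(i).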

No confusion is possible as to which part of the complex plane divided by the boundary locus is the inside and which is the outside because, using an argument based on the Cauchy–Riemann equations, we note that the inside is always to the left of the path traced out as θ increases from 0 to 2π. If we had used (431d) in place of (431c) then, of course, the path would have been traced in the opposite direction and the inside of the stability region would have been on the right. Note that in Algorithm 432α the third and


Figure 432(ii) Stability region for the fourth order Adams–Bashforth method

fourth order cases are traced in the reverse direction. The stability region of the third order Adams–Bashforth method, as computed by this algorithm, is given as the unshaded region of Figure 432(i).

In the case of the fourth order method in this family, the root locus method traces out more than the boundary of the stability region, as we see in Figure 432(ii). Because crossing the locus corresponds to the shift of one of the growth factors from stable to unstable, the more heavily shaded region is doubly unstable in that it contains two unstable terms.


Figure 432(iv) Stability region for the second order backward difference method

We present three final examples. The Adams–Moulton method of order 3 is given in Figure 432(iii); we see that even though this method is implicit it has a bounded stability region.

Now look at the stability regions of the backward difference methods of orders 2 and 3. The first of these, shown in Figure 432(iv), indicates that the second order method is A-stable, and the second, Figure 432(v), shows that the third order method is not A-stable.



Figure 432(v) Stability region for the third order backward difference method

433 An example of the Schur criterion

We first recompute the stability region of the second order Adams–Bashforth method. We need to find for what values of the complex number z the polynomial a_0 w² + a_1 w + a_2 has its zeros in the open unit disc, where

a_0 = 1,    a_1 = −1 − (3/2)z,    a_2 = z/2.

The condition |a_0|² − |a_2|² > 0 is equivalent to

434 Stability of predictor–corrector methods

We consider examples of PEC and PECE methods. For the PEC method based on second order Adams–Bashforth as predictor and Adams–Moulton as corrector, we have the following equations for the predicted and corrected values:


Figure 434(i) Stability regions for Adams–Moulton methods (solid lines) and PEC methods (dashed lines)

Superficially, this system describes two sequences, the y and the y*, which develop together. However, it is only the y* sequence that has derivative values associated with it. Hence, the y sequence can conveniently be eliminated from consideration. Replace n by n + 1 in (434a), and we find

y*_{n+1} = y_n + h((3/2) f(y*_n) − (1/2) f(y*_{n−1})).

If β* and β are the respective generating polynomials for an order p Adams–Bashforth method and the corresponding Adams–Moulton method, then the general form of the generating polynomial for y* in a PEC method is equal to β̂, where

β̂(z) = β*(z) + β_0 z(1 − z)^p.

The value of β_0 could be replaced by any value we wish without sacrificing the order p. In fact, it could be replaced by the value of (−1)^p β*_{p+1}, so that the method would actually be of order p + 1. It would in this case be precisely
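The elimination of the corrected sequence can be checked numerically. In the sketch below (an illustration for p = 2, not from the book), the predictor is the second order Adams–Bashforth method and the corrector the order 2 Adams–Moulton method; the generating polynomial β*(z) + β_0 z(1 − z)² = 2z − (3/2)z² + (1/2)z³ then gives a three-term method that the y* sequence of the PEC pair satisfies exactly.

```python
import math

def pec_ystar_identity(h=0.1, steps=8):
    """Run the PEC pair (AB2 predictor, order-2 Adams-Moulton corrector)
    on y' = -y and check that the predicted sequence y* satisfies
    y*_n = y*_{n-1} + h*(2 f*_{n-1} - (3/2) f*_{n-2} + (1/2) f*_{n-3}),
    the method whose beta-polynomial is 2z - 1.5z^2 + 0.5z^3."""
    f = lambda y: -y
    ystar = [1.0, math.exp(-h), math.exp(-2 * h)]   # exact starting values
    y = math.exp(-2 * h)                            # corrected value y_2
    for _ in range(steps):
        p = y + h * (1.5 * f(ystar[-1]) - 0.5 * f(ystar[-2]))   # predict
        y = y + h * (0.5 * f(p) + 0.5 * f(ystar[-1]))           # correct
        ystar.append(p)
    residuals = [abs(ystar[n] - ystar[n - 1]
                     - h * (2 * f(ystar[n - 1]) - 1.5 * f(ystar[n - 2])
                            + 0.5 * f(ystar[n - 3])))
                 for n in range(4, len(ystar))]
    return max(residuals)

print(pec_ystar_identity() < 1e-12)   # True: the identity holds step by step
```

The residuals vanish to rounding error because the three-term recursion is an exact algebraic consequence of the predictor and corrector formulas, independently of the starting values.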
