
Numerical Methods for Ordinary Differential Equations, Episode 12


DOCUMENT INFORMATION

Title: Linear Multistep Methods Coefficients and Nordsieck Methods
Institution: University of Mathematics Education, [Home Page](https://www.university.edu)
Subject: Numerical Methods for Ordinary Differential Equations
Type: Lecture Notes
Pages: 35
Size: 423.85 KB


Contents



Table 461(I) Coefficients γ0, γ1, …, γp, for Nordsieck methods

If an approximation is also required for the scaled derivative at x_n, this can be found from the formula, also based on a Taylor expansion,

To find the Nordsieck equivalent to the Adams–Moulton corrector formula, it is necessary to add β0 multiplied by the difference between the corrected value of the scaled derivative and the extrapolated value computed by (461d). That is, the corrected value of η0^[n] becomes the value found by simple extrapolation from the incoming Nordsieck components, plus this correction. Thus we can write the result computed in a step as


The quantities γi, i = 0, 1, 2, …, p, have values determined by the equivalence with the standard fixed-stepsize method, and we know at least that

γ0 = β0,    γ1 = 1.

The value selected for γ1 ensures that η1^[n] is precisely the result evaluated from η0^[n] using the differential equation. We can arrive at the correct values of γ2, …, γp by the requirement that the matrix

has zero spectral radius.

Values of the coefficients γi, i = 0, 1, …, p, are given in Table 461(I) for p = 2, 3, …, 8.

Adjustment of stepsize is carried out by multiplying the vector of output approximations formed in (461e), at the completion of step n, by the diagonal matrix D(r) before the results are accepted as input to step n + 1, where

D(r) = diag(1, r, r², …, r^p).
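Because the ith Nordsieck component approximates h^i y^(i)(x)/i!, changing the stepsize from h to rh is exactly multiplication by D(r). A small numerical check of this (my own illustration, using y = e^x, for which every derivative is e^x; the point x and the values of h and r are arbitrary choices):

```python
import numpy as np
from math import factorial

# Nordsieck vector z_i = h^i y^(i)(x) / i!  for y(x) = exp(x)
p, x, h, r = 4, 0.3, 0.1, 1.5
d = np.exp(x)
z_h  = np.array([h**i * d / factorial(i) for i in range(p + 1)])
z_rh = np.array([(r * h)**i * d / factorial(i) for i in range(p + 1)])

# rescaling by D(r) = diag(1, r, ..., r^p) converts the h-vector to the rh-vector
D = np.diag([r**i for i in range(p + 1)])
print(np.allclose(D @ z_h, z_rh))   # True
```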

It was discovered experimentally by Gear that numerical instabilities can result from using this formulation. This can be seen in the example p = 3, where we find the values γ2 = 3/4, γ3 = 1/6. Stability is determined by products of matrices of the form

and for r ≥ 1.69562, this matrix is no longer power-bounded.
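The instability can be reproduced numerically. The sketch below assumes that, for the quadrature case, the error-propagation matrix has the form D(r)(I − γe1ᵀ)P̄, with P̄ the Pascal block acting on the components hy′, h²y″/2!, h³y‴/3!; under that assumption, γ2 = 3/4 and γ3 = 1/6 make the constant-stepsize matrix nilpotent, and the nonzero eigenvalue after rescaling works out to (r³ − r²)/2, which passes 1 near r = 1.69562:

```python
import numpy as np

# Pascal block on the scaled-derivative components (hy', h^2 y''/2!, h^3 y'''/3!)
P = np.array([[1., 2., 3.],
              [0., 1., 3.],
              [0., 0., 1.]])

gamma = np.array([1., 3/4, 1/6])      # gamma_1 = 1, gamma_2 = 3/4, gamma_3 = 1/6
S = P - np.outer(gamma, P[0])         # (I - gamma e1^T) P
# constant stepsize: zero spectral radius (nilpotent matrix)
assert max(abs(np.linalg.eigvals(S))) < 1e-3

def rho(r):
    """Spectral radius of the stability matrix after the D(r) rescaling."""
    return max(abs(np.linalg.eigvals(np.diag([r, r**2, r**3]) @ S)))

print(rho(1.69), rho(1.70))           # just below 1, just above 1
```

The nonzero eigenvalue grows like (r³ − r²)/2, so power-boundedness of products of these matrices is lost once r exceeds the root of r³ − r² = 2.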

Gear’s pragmatic solution was to prohibit changes for several further steps after a stepsize change had occurred. An alternative to this remedy will be considered in the next subsection.


462 Variable stepsize for Nordsieck methods

The motivation we have presented for the choice of γ1, γ2, …, in the formulation of Nordsieck methods was to require a certain matrix to have zero spectral radius. Denote the vector γ and the matrix V by

and denote by e1 the basis row vector e1 = [ 1 0 ⋯ 0 ]. The characteristic property of γ is that the matrix

has zero spectral radius. When variable stepsize is introduced, the matrix in (462a) is multiplied by D(r) = diag(r, r², r³, …, r^p) and, as we have seen, if γ is chosen on the basis of constant h, there is a deterioration in stable behaviour. We consider the alternative of choosing γ as a function of r, so that

If the stepsizes in a sequence of steps correspond to ratios r1, r2, …, rn, then the product of matrices of the form given by (462b) for these values of r has to be analysed to determine stability. The spectral radius of such a product can be estimated,


and this will be bounded by 1 as long as each ri ∈ [0, r∗], where r∗ has the property that
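Continuing the p = 3 sketch from Subsection 461 (with the same assumed matrix form D(r)(I − γe1ᵀ)P̄, which is a reconstruction rather than a formula from the text), the two conditions for zero spectral radius can be solved in closed form, giving r-dependent coefficients that reduce to γ2 = 3/4, γ3 = 1/6 at r = 1:

```python
import numpy as np

P = np.array([[1., 2., 3.],
              [0., 1., 3.],
              [0., 0., 1.]])

def gamma_of_r(r):
    # closed forms obtained by making the trace and determinant of the
    # rescaled 2x2 sub-block vanish (derived for this sketch)
    return (1 + 2*r) / (2*(1 + r)), r / (3*(1 + r))

print(gamma_of_r(1.0))            # (0.75, 0.1666...), the constant-h values
for r in (1.0, 1.5, 2.5):
    g2, g3 = gamma_of_r(r)
    S = np.diag([r, r**2, r**3]) @ (P - np.outer(np.array([1., g2, g3]), P[0]))
    assert max(abs(np.linalg.eigvals(S))) < 1e-3   # zero spectral radius for each r
```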

463 Local error estimation

The standard estimator for the local truncation error is based on the Milne device. That is, the difference between the predicted and corrected values provides an approximation to some constant multiplied by h^(p+1) y^(p+1)(x_n), and the local truncation error can be estimated by multiplying this by a suitable scale factor.
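As a concrete sketch (my own illustration, not taken from the text): for the third order Adams–Bashforth/Adams–Moulton PECE pair, the error constants 3/8 and −1/24 give the scale factor (−1/24)/(−1/24 − 3/8) = 1/10, so one tenth of the corrector-minus-predictor difference estimates the corrector's local error:

```python
import numpy as np

f = lambda x, y: y                 # test problem y' = y, exact solution exp(x)
h = 0.01
xs = np.array([0.0, h, 2*h])       # x_{n-3}, x_{n-2}, x_{n-1}
ys = np.exp(xs)                    # exact back values
fs = ys                            # f = y for this problem

xn = 3*h
yP = ys[2] + h*(23*fs[2] - 16*fs[1] + 5*fs[0])/12    # AB3 predict (order 3)
yC = ys[2] + h*(5*f(xn, yP) + 8*fs[2] - fs[1])/12    # AM3 correct (order 3)

est    = (yC - yP)/10              # Milne estimate of the corrector error
actual = yC - np.exp(xn)           # true local error (exact starting values)
print(est, actual)                 # the two agree to a few per cent
```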

This procedure has to be interpreted in a different way if, as in some modern codes, the predictor and corrector are accurate to different orders. We no longer have an asymptotically correct approximation to the local truncation error, but rather to the error in the predictor, assuming this has the lower order. Nevertheless, stepsize control based on this approach often gives reliable and useful performance.

To allow for a possible increase in order, estimation is also needed for the scaled derivative one order higher than the standard error estimator. It is very difficult to do this reliably, because any approximation will be based on a linear combination of hy′(x) for different x arguments. These quantities in turn will be of the form hf(x, y(x) + Ch^(p+1) + O(h^(p+2))), and the terms of the form Ch^(p+1) + O(h^(p+2)) will distort the result obtained. However, it is possible to estimate the scaled order p + 2 derivative reliably, at least if the stepsize has been constant over recent steps, by forming the difference of approximations to the order p + 1 derivative over two successive steps. If the stepsize has varied moderately, this approximation will still be reasonable. In any case, if the criterion for increasing the order turns out to be too optimistic for any specific problem, then after the first step with the new order a rejection is likely to occur, and the order will either be reduced again or else the stepsize will be lowered while still maintaining the higher order.
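A sketch of the differencing idea (my own illustration, for the case p = 3, with y = e^x so that the scaled derivative hy′ is available exactly): the fourth backward difference of the hf values approximates h⁵y⁽⁵⁾, and the difference of two successive such approximations, i.e. a fifth difference, approximates the next-order quantity h⁶y⁽⁶⁾:

```python
import numpy as np

h, x = 0.02, 0.5
grid = x - h*np.arange(7)          # x_n, x_{n-1}, ..., x_{n-6}
hf = h*np.exp(grid)                # scaled derivative values h*f for y = exp

def backdiff(v, k):                # k-th backward difference at the newest point
    for _ in range(k):
        v = v[:-1] - v[1:]
    return v[0]

d_now  = backdiff(hf, 4)           # ~ h^5 y^(5)(x_n)
d_prev = backdiff(hf[1:], 4)       # same approximation, one step back
est  = d_now - d_prev              # ~ h^6 y^(6)(x_n)
true = h**6 * np.exp(x)
print(abs(est - true)/true)        # small relative error, O(h)
```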


Chapter 5

General Linear Methods

500 Multivalue–multistage methods

The systematic computation of an approximation to the solution of an initial value problem usually involves just two operations: evaluation of the function f defining the differential equation, and the forming of linear combinations of previously computed vectors. In the case of implicit methods, further complications arise, but these can also be brought into the same general linear formulation.

We consider methods in which a collection of vectors forms the input at the beginning of a step, and a similar collection is passed on as output from the current step and as input into the following step. Thus the method is a multivalue method, and we write r for the number of quantities processed in this way. In the computations that take place in forming the output quantities, there are assumed to be s approximations to the solution at points near the current time step for which the function f needs to be evaluated. As for Runge–Kutta methods, these are known as stages, and we have an s-stage or, in general, multistage method.

The intricate set of connections between these quantities makes up what is known as a general linear method. Following Burrage and Butcher (1980), we represent the method by four matrices, which we will generally denote by A, U, B and V. These can be written together as a partitioned (s + r) × (s + r) matrix. In step n, the stage values Y_1, Y_2, …, Y_s are computed, and derivative values F_i = f(Y_i), i = 1, 2, …, s, are computed in terms of these. Finally, the output values are computed and, because these will constitute the input at step n + 1, they will be denoted by

Numerical Methods for Ordinary Differential Equations, Second Edition. J. C. Butcher.
© 2008 John Wiley & Sons, Ltd. ISBN: 978-0-470-72335-7


y_i^[n], i = 1, 2, …, r. The relationships between these quantities are defined in terms of the elements of A, U, B and V by the equations

Y_i = h Σ_{j=1}^{s} a_ij F_j + Σ_{j=1}^{r} u_ij y_j^[n−1],    i = 1, 2, …, s,

y_i^[n] = h Σ_{j=1}^{s} b_ij F_j + Σ_{j=1}^{r} v_ij y_j^[n−1],    i = 1, 2, …, r.

It will be convenient to use a more concise notation, and we start by defining vectors Y, F ∈ R^{sN} and y^[n−1], y^[n] ∈ R^{rN} as follows:

Y = [Y_1, Y_2, …, Y_s]ᵀ,  F = [F_1, F_2, …, F_s]ᵀ,  y^[n−1] = [y_1^[n−1], …, y_r^[n−1]]ᵀ,  y^[n] = [y_1^[n], …, y_r^[n]]ᵀ,

so that the relations defining a step can be written

Y = h(A ⊗ I_N)F + (U ⊗ I_N)y^[n−1],    y^[n] = h(B ⊗ I_N)F + (V ⊗ I_N)y^[n−1],

connecting the quantities passed from step to step.
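The defining relations Y = hAF + Uy^[n−1], y^[n] = hBF + Vy^[n−1] can be coded directly in the explicit case (A strictly lower triangular). As a check, the sketch below (my own illustration, for a scalar problem) recovers the classical fourth order Runge–Kutta method in its r = 1 formulation:

```python
import numpy as np

def glm_step(A, U, B, V, c, f, x, y_in, h):
    """One step of an explicit GLM: Y = h A F + U y_in, y_out = h B F + V y_in."""
    s = len(c)
    Y, F = np.zeros(s), np.zeros(s)
    for i in range(s):                      # A strictly lower triangular
        Y[i] = h*np.dot(A[i], F) + np.dot(U[i], y_in)
        F[i] = f(x + c[i]*h, Y[i])
    return h*(B @ F) + V @ y_in

# classical RK4 as a general linear method with s = 4, r = 1
A = np.array([[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]])
U = np.ones((4, 1))
B = np.array([[1/6, 1/3, 1/3, 1/6]])
V = np.array([[1.0]])
c = np.array([0, 0.5, 0.5, 1])

y1 = glm_step(A, U, B, V, c, lambda x, y: y, 0.0, np.array([1.0]), 0.1)
k1 = 1.0; k2 = 1 + 0.05*k1; k3 = 1 + 0.05*k2; k4 = 1 + 0.1*k3
rk4 = 1 + 0.1*(k1 + 2*k2 + 2*k3 + k4)/6     # direct RK4 step on y' = y
print(y1[0], rk4)
```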


501 Transformations of methods

For a method characterized by the matrices (A, U, B, V), we consider the construction of a second method for which the input quantities, and the corresponding output quantities, are replaced by linear combinations of the subvectors in y^[n−1] (or in y^[n], respectively), formed using a nonsingular r × r matrix T. In each case the rows of T supply the coefficients in the linear combinations. These ideas are well known in the case of Adams methods, where it is common practice to represent the data passed between steps in a variety of configurations. For example, the data imported into step n may consist of approximations to y(x_{n−1}) and further approximations to hy′(x_{n−i}), for i = 1, 2, …, k. Alternatively it might, as in Bashforth and Adams (1883), be expressed in terms of y(x_{n−1}) and of approximations to a sequence of backward differences of the derivative approximations. It is also possible, as proposed in Nordsieck (1962), to replace the approximations to the derivatives at equally spaced points in the past by linear combinations which approximate scaled first and higher derivatives at x_{n−1}.

Hence the method which uses the y data and the coefficients (A, U, B, V) could be rewritten to produce formulae for the stages in the form

Y = hAF + U y^[n−1] = hAF + U T^{−1} z^[n−1],    (501a)

where z^[n−1] = T y^[n−1] denotes the transformed data.

The formula for y^[n] = hBF + V y^[n−1], when transformed to give the value of z^[n] = T y^[n], becomes

z^[n] = h TB F + T V T^{−1} z^[n−1].

Thus, the method with coefficient matrices (A, U T^{−1}, TB, T V T^{−1}) is related to the original method (A, U, B, V) by an equivalence relationship with a natural computational significance. The significance is that a sequence of approximations, using one of these formulations, can be transformed into the sequence that would have been generated using the alternative formulation.


It is important to ensure that any definitions concerning the properties of a generic general linear method transform in an appropriate manner when the coefficient matrices are transformed.

Even though there may be many interpretations of the same general linear method, there may well be specific representations which have advantages of one sort or another. Some examples of this will be encountered later in this section.
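This equivalence can be checked numerically. The sketch below uses the second order Adams–Bashforth method written as a two-value general linear method with data (y_{n−1}, hf_{n−2}) (a formulation constructed here for illustration), transforms it by an arbitrary nonsingular T, and runs both versions side by side:

```python
import numpy as np

# AB2, y_n = y_{n-1} + h(3 f_{n-1} - f_{n-2})/2, as a GLM with s = 1, r = 2
A = np.array([[0.0]])
U = np.array([[1.0, 0.0]])
B = np.array([[1.5], [1.0]])
V = np.array([[1.0, -0.5], [0.0, 0.0]])

T  = np.array([[1.0, 0.0], [1.0, 1.0]])    # arbitrary nonsingular transformation
Ti = np.linalg.inv(T)
At, Ut, Bt, Vt = A, U @ Ti, T @ B, T @ V @ Ti

f, h = (lambda x, y: -y), 0.05

def step(A, U, B, V, y_in, x):
    Y = U @ y_in                           # single explicit stage (A = 0)
    F = np.array([f(x, Y[0])])
    return h*(B @ F) + V @ y_in

y = np.array([1.0, h*(-np.exp(h))])        # y_0 = 1, h f(x_{-1}, y_{-1}), exact start
z = T @ y
x = 0.0
for _ in range(20):
    y = step(A, U, B, V, y, x)
    z = step(At, Ut, Bt, Vt, z, x)
    x += h
print(np.allclose(T @ y, z))               # True: the sequences stay related by T
```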

502 Runge–Kutta methods as general linear methods

Since Runge–Kutta methods have a single input, it is usually convenient to represent them, as general linear methods, with r = 1. Assuming the input vector is an approximation to y(x_{n−1}), it is only necessary to write U = 1, V = 1, write B as the single row bᵀ of the Runge–Kutta tableau and, finally, identify A with the s × s matrix of the same name in this tableau.

A very conventional and well-known example is the classical fourth order method

0    |
1/2  | 1/2
1/2  | 0    1/2
1    | 0    0    1
     | 1/6  1/3  1/3  1/6

which, in general linear formulation, is represented by the partitioned matrix

[ 0    0    0    0   | 1 ]
[ 1/2  0    0    0   | 1 ]
[ 0    1/2  0    0   | 1 ]
[ 0    0    1    0   | 1 ]
[ 1/6  1/3  1/3  1/6 | 1 ]

A second example is the method represented, with s = 3 and r = 1, by

[ 0     0    0     | 1 ]
[ 5/24  1/3  −1/24 | 1 ]
[ 1/6   2/3  1/6   | 1 ]
[ 1/6   2/3  1/6   | 1 ]    (502a)

but this straightforward representation is misleading. The reason is that the method has the ‘FSAL property’, in the sense that the final stage evaluated in a step is identical with the first stage of the following step. It therefore becomes possible, and even appropriate, to


use a representation with s = r = 2 which expresses, quite explicitly, that the FSAL property holds. With the input quantities taken as y_{n−1} and the scaled first-stage derivative, this representation would be

[ 1/3   −1/24 | 1  5/24 ]
[ 2/3    1/6  | 1  1/6  ]
[ 2/3    1/6  | 1  1/6  ]
[ 0      1    | 0  0    ]    (502b)

As we pointed out when the method was introduced, it can be implemented as a two-value method by replacing the computation of the second stage derivative by a quantity already computed in the previous step. The method is now not equivalent to any Runge–Kutta method but, as a general linear method, it has coefficient matrix

503 Linear multistep methods as general linear methods

For a linear k-step method [α, β] of the special form α(z) = 1 − z, the natural way of writing this as a general linear method is to choose r = k + 1, s = 1, and the input approximations as


The matrix representing the method now becomes the partitioned array (503a), and we see that it is possible to reduce r from k + 1 to k, because the (k + 1)th input vector is never used in the calculation.

The well-known technique of implementing an implicit linear multistep method by combining it with a related explicit method, to form a predictor–corrector pair, fits easily into a general linear formulation. Consider, for example, the PECE method based on the third order Adams–Bashforth and Adams–Moulton predictor–corrector pair, and denote the predicted value by y*_n.


The r = 4 input approximations are the values of y_{n−1}, hf(x_{n−1}, y_{n−1}), hf(x_{n−2}, y_{n−2}) and hf(x_{n−3}, y_{n−3}). The (s + r) × (s + r) coefficient matrix is
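A version of this coefficient matrix can be assembled and tested; in the sketch below (my own reconstruction from the PECE formulae, so the exact arrangement of rows may differ from the text) the two stages are the predicted and corrected values, and the general linear method is checked against a direct PECE loop:

```python
import numpy as np

# stages: Y1 = AB3 prediction, Y2 = AM3 correction;
# inputs: (y_{n-1}, h f_{n-1}, h f_{n-2}, h f_{n-3})
A = np.array([[0.0, 0.0], [5/12, 0.0]])
U = np.array([[1.0, 23/12, -16/12, 5/12],
              [1.0,  8/12,  -1/12, 0.0]])
B = np.array([[5/12, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])
V = np.array([[1.0, 8/12, -1/12, 0.0],
              [0.0, 0.0,   0.0,  0.0],
              [0.0, 1.0,   0.0,  0.0],
              [0.0, 0.0,   1.0,  0.0]])

f, h = (lambda x, y: -y), 0.1
# exact starting data for y' = -y, y(0) = 1  (solution y(x) = exp(-x))
glm = np.array([1.0, h*f(0.0, 1.0), h*f(-h, np.exp(h)), h*f(-2*h, np.exp(2*h))])
y, fv = 1.0, [f(0.0, 1.0), f(-h, np.exp(h)), f(-2*h, np.exp(2*h))]

x = 0.0
for _ in range(10):
    Y, F = np.zeros(2), np.zeros(2)
    for i in range(2):
        Y[i] = h*np.dot(A[i], F) + np.dot(U[i], glm)
        F[i] = f(x + h, Y[i])
    glm = h*(B @ F) + V @ glm

    yP = y + h*(23*fv[0] - 16*fv[1] + 5*fv[2])/12     # direct PECE reference
    yC = y + h*(5*f(x + h, yP) + 8*fv[0] - fv[1])/12
    y, fv = yC, [f(x + h, yC), fv[0], fv[1]]
    x += h

print(np.isclose(glm[0], y))   # True: identical trajectories
```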

The one-leg methods, introduced by Dahlquist (1976) as counterparts of linear multistep methods, have their own natural representations as general linear methods. For the method characterized by the polynomial pair [α(z), β(z)], the corresponding one-leg method computes a single stage value Y, with stage derivative F, using the formula

Y = Σ_{i=0}^{k} β_i y_{n−i}

(with the β_i normalized to sum to 1), the new approximation then being determined by Σ_{i=0}^{k} α_i y_{n−i} = hF. This does not fit into the standard representation for general linear methods.


As a general linear method, it has the form
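For a concrete illustration (standard material, though not shown in this excerpt): the one-leg twin of the trapezoidal rule [α(z), β(z)] = [1 − z, (1 + z)/2] is the implicit mid-point rule, whose single stage is Y = (y_{n−1} + y_n)/2. For a linear autonomous problem the two twins produce identical steps:

```python
def trapezoidal_step(lam, y, h):
    # y_n = y + (h/2)(lam*y + lam*y_n), solved in closed form for f(y) = lam*y
    return y*(1 + h*lam/2)/(1 - h*lam/2)

def one_leg_step(lam, y, h, iters=60):
    yn = y
    for _ in range(iters):            # fixed-point iteration for the implicit stage
        Y = 0.5*(y + yn)              # the single 'leg': f is evaluated once, at Y
        yn = y + h*lam*Y
    return yn

lam, h, y0 = -2.0, 0.1, 1.0
a, b = trapezoidal_step(lam, y0, h), one_leg_step(lam, y0, h)
print(a, b)
```

For nonlinear f the two methods genuinely differ, since f applied to a linear combination of past values is not the same linear combination of the f values.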

504 Some known unconventional methods

Amongst the methods that do not fit under the conventional Runge–Kutta or linear multistep headings, we consider the cyclic composite methods of Donelson and Hansen (1971), the pseudo Runge–Kutta methods of Byrne and Lambert (1966), and the hybrid methods of Gragg and Stetter (1964), Butcher (1965) and Gear (1965). We illustrate, by examples, how methods of these types can be cast in general linear form.

To overcome the limitations of linear multistep methods imposed by the conflicting demands of order and stability, Donelson and Hansen proposed a procedure in which two or more linear multistep methods are used in rotation over successive steps. Write the constituent methods as (α(1), β(1)), (α(2), β(2)), …, (α(m), β(m)), so that the formula for computing y_n is based on (α(j), β(j)), where j ∈ {1, 2, …, m} is chosen so that n − j is a multiple of m. The step numbers may vary amongst the m constituent methods, but they can be assumed to have a common value k, equal to the maximum over all the basic methods.

We illustrate these ideas in the case k = 3, m = 2. As a consequence of the Dahlquist barrier, order p = 5 with k = 3 is inconsistent with stability, and therefore with convergence. Consider the following two linear multistep methods:

[α(1)(z), β(1)(z)] = [1 + (8/11)z − (19/11)z², 10/33 + (19/11)z + (8/11)z² − (1/33)z³],

together with a second pair [α(2)(z), β(2)(z)] of the same three-step form.


Each of these has order 5 and is, of course, unstable. To combine them, used alternately, into a single step of a general linear method, it is convenient to regard h as the stepsize for the complete cycle of two steps. We denote the incoming approximations by y_{n−3/2}, y_{n−1}, hf_{n−2}, hf_{n−3/2} and hf_{n−1}. The first half-step, relating y_{n−1/2} and hf_{n−1/2} to the input quantities, gives


This formulation can be simplified, in the sense that r can be reduced, and we have, for example, the following alternative coefficient matrices:


We now turn to pseudo Runge–Kutta methods. Consider the method given by (261a). Even though four input values are used in step n (y_{n−1}, hF_1^[n−1], hF_2^[n−1] and hF_3^[n−1]), in addition to y_{n−1} only a single linear combination of the scaled stage derivatives, with n − 1 replaced by n, has to be computed in step n for use in the following step. The (3 + 2) × (3 + 2) matrix representing this method is

For a seventh order method taken from Butcher (1965), the solution at the end of the step is approximated using ‘predictors’ at x_n − (1/2)h and at x_n, in preparation for a final ‘corrector’ value, also at x_n. The input quantities correspond to solution approximations y^[n−1] together with scaled derivative approximations carried forward from earlier steps.

505 Some recently discovered general linear methods

The methods already introduced in this section were inspired as modifications of Runge–Kutta or linear multistep methods. We now consider two example methods motivated not by either of the classical forms, but by the general linear structure in its own right.


The first of these is known as an ‘Almost Runge–Kutta’ method. That is, although it uses three input and output approximations, it behaves like a Runge–Kutta method from many points of view. The input vectors can be thought of as approximations to y(x_{n−1}), hy′(x_{n−1}) and h²y″(x_{n−1}), and the output vectors are intended to be approximations to these same quantities, but evaluated at x_n rather than at x_{n−1}:

The second example is given by the coefficient matrix (505b), whose input quantities involve y(x_{n−1}) together with scaled derivative terms such as (1/4)hy′(x_{n−1}) and (1/24)h³y‴(x_{n−1}), and the output consists of the same three quantities, to within O(h⁴), with x_{n−1} replaced by x_n. This method is an example of a ‘type 1 DIMSIM method’, to be introduced in Subsection 541. Both (505a) and (505b) possess the property of RK stability, which guarantees that the method behaves, at least in terms of linear stability, like a Runge–Kutta method. While their multivalue structure is a disadvantage compared with Runge–Kutta methods, they have some desirable properties. For (505a) the stage order is 2, and for (505b) the stage order is 3.


Exercises 50

50.1 Write the general linear method given by (503a) in transformed form, using the matrix

Note that this converts the method into Nordsieck form.

50.2 Write the general linear method given by (502a) in transformed form, using the matrix

50.3 Write the method … as a general linear method with r = 2, s = 1, by taking advantage of the FSAL property.

50.4 Show that it is possible, by using a suitable transformation, to reduce the general linear method derived in Exercise 50.3 to an equivalent method with r = s = 1. Show that this new method is equivalent to the implicit mid-point rule Runge–Kutta method.

50.5 Write the PEC predictor–corrector method based on the order 2 Adams–Bashforth method and the order 2 Adams–Moulton method in general linear form.

50.6 The following two methods were once popular, but are now regarded as flawed because they are ‘weakly stable’:

y_n = y_{n−2} + 2h f(x_{n−1}, y_{n−1}),
y_n = y_{n−3} + (3/2)h ( f(x_{n−1}, y_{n−1}) + f(x_{n−2}, y_{n−2}) ).

This means that, although the methods are stable, the polynomial α for each of them has more than one zero on the unit circle. Show how to write them as a cyclic composite pair, using the general linear formulation, and show that the pair no longer has this disadvantage.
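A quick check of the weak-stability claim (an illustration, not part of the exercise): in powers of the backward shift, the two α polynomials are 1 − z² and 1 − z³, and every zero of each lies on the unit circle:

```python
import numpy as np

# alpha(z) = 1 - z^2 and alpha(z) = 1 - z^3; np.roots takes the leading
# coefficient first, so 1 - z^2 is [-1, 0, 1] and 1 - z^3 is [-1, 0, 0, 1]
for coeffs in ([-1, 0, 1], [-1, 0, 0, 1]):
    zeros = np.roots(coeffs)
    print(np.sort(abs(zeros)))        # all moduli equal 1
```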


50.7 Consider the Runge–Kutta method

0    |
−1   | −1
1/2  | 5/8   −1/8
     | 1/2   1/3   1/6

Modify this method in the same way as was proposed for (502b), and write the resulting two-value method in general linear form.

510 Definitions of consistency and stability

Since a general linear method operates on a vector of approximations to some quantities computed in the preceding step, we need to decide something about the nature of this information. For most numerical methods it is obvious what form this takes, but for a method as general as the ones we are considering here there are many possibilities. At least we assume that the ith subvector in y^[n−1] represents u_i y(x_{n−1}) + v_i hy′(x_{n−1}) + O(h²). The vectors u and v are characteristic of any particular method, subject to the freedom we have to alter v by a scalar multiple of u, because we can reinterpret the method accordingly. We also assume that the stage values are each equal to y(x_n) + O(h); this means that Uu = 1. We always require the output result to be u_i y(x_n) + v_i hy′(x_n) + O(h²), and this means that Vu = u and that Vv + B1 = u + v. If we are given nothing about a method except the four defining matrices, then V must have an eigenvalue equal to 1, and u must be a corresponding eigenvector. It then has to be checked that the space of such eigenvectors contains a member such that Uu = 1, and such that B1 − u is in the range of V − I.

If a method has these properties, then it is capable of solving y′(x) = 1, with y(0) = a, exactly, in the sense that if y_i^[0] = u_i a + v_i h, then for all n = 1, 2, …, y_i^[n] = u_i (a + nh) + v_i h. This suggests the following definitions:

Definition 510A A general linear method (A, U, B, V) is ‘preconsistent’ if there exists a vector u such that

V u = u,    U u = 1.

The vector u is the ‘preconsistency vector’.

Definition 510B A general linear method (A, U, B, V) is ‘consistent’ if it is preconsistent with preconsistency vector u, and there exists a vector v such that

B1 + V v = u + v.
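These conditions are easy to verify mechanically. The check below uses the second order Adams–Bashforth method written as a general linear method with data (y_{n−1}, hf_{n−2}) (a formulation constructed here for illustration), for which u = (1, 0) and v = (0, 1):

```python
import numpy as np

# AB2 as a GLM: y_n = y_{n-1} + h(3 f_{n-1} - f_{n-2})/2
A = np.array([[0.0]])
U = np.array([[1.0, 0.0]])
B = np.array([[1.5], [1.0]])
V = np.array([[1.0, -0.5], [0.0, 0.0]])

u = np.array([1.0, 0.0])     # y-component weights of the two inputs
v = np.array([0.0, 1.0])     # hy'-component weights

assert np.allclose(V @ u, u)                       # preconsistency: V u = u
assert np.allclose(U @ u, np.ones(1))              # stages approximate y(x_n): U u = 1
assert np.allclose(B @ np.ones(1) + V @ v, u + v)  # consistency: B 1 + V v = u + v
print("preconsistent and consistent")
```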

Posted: 13/08/2014, 05:21
