
Runge–Kutta–Nyström-type parallel block predictor–corrector methods

Nguyen Huu Cong^a, Karl Strehmel^b, Rüdiger Weiner^b and Helmut Podhaisky^b

^a Faculty of Mathematics, Mechanics and Informatics, Hanoi University of Sciences, 334 Nguyen Trai, Thanh Xuan, Hanoi, Vietnam

^b FB Mathematik und Informatik, Martin-Luther-Universität Halle-Wittenberg, Theodor-Lieser-Str. 5, D-06120 Halle, Germany

Received July 1997; revised November 1998. Communicated by K. Burrage.

This work was supported by a three-month DAAD research grant.

This paper describes the construction of block predictor–corrector methods based on Runge–Kutta–Nyström correctors. Our approach is to apply the predictor–corrector method not only with stepsize h, but, in addition (and simultaneously), with stepsizes a_i h, i = 1, ..., r. In this way, at each step, a whole block of approximations to the exact solution at off-step points is computed. In the next step, these approximations are used to obtain a high-order predictor formula using Lagrange or Hermite interpolation. Since the block approximations at the off-step points can be computed in parallel, the sequential costs of these block predictor–corrector methods are comparable with those of a conventional predictor–corrector method. Furthermore, by using Runge–Kutta–Nyström corrector methods, the computation of the approximation at each off-step point is also highly parallel. Numerical comparisons on a shared memory computer show the efficiency of the methods for problems with expensive function evaluations.

Keywords: Runge–Kutta–Nyström methods, predictor–corrector methods, stability, parallelism

AMS subject classification: 65M12, 65M20

1 Introduction

Consider the numerical solution of nonstiff initial value problems (IVPs) for systems of special second-order ordinary differential equations (ODEs)

$$y''(t) = f\bigl(t, y(t)\bigr), \quad t_0 \le t \le T, \qquad y(t_0) = y_0, \quad y'(t_0) = y'_0, \tag{1.1}$$

where y : R → R^d, f : R × R^d → R^d. Problems of the form (1.1) are encountered in, e.g., celestial mechanics. A (simple) approach for solving this problem is to convert it into a system of first-order ODEs with double dimension and to apply, e.g., a (parallel) Runge–Kutta-type method (RK-type method), ignoring the special form of (1.1) (the indirect approach).


However, taking into account the fact that f does not depend on the first derivative, the use of a direct method tuned to the special form of (1.1) is usually more efficient (the direct approach). Such direct methods are generally known as Runge–Kutta–Nyström-type methods (RKN-type methods). Sequential explicit RKN methods up to order 10 can be found in [9–11,14]. The comparison of the tenth-order explicit RK method requiring 17 sequential f-evaluations in [12] with the tenth-order explicit RKN method requiring only 11 sequential f-evaluations in [14] is an example showing the advantage of the direct approach for sequential explicit RK and RKN methods. It is highly likely that in the class of parallel methods, the direct approach also leads to improved efficiency.
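To make the two approaches concrete, the following minimal sketch (ours, not from the paper; the names are illustrative) shows the indirect reformulation: the special second-order problem y'' = f(t, y) is wrapped as a first-order system of doubled dimension, which a standard RK solver can integrate while ignoring the special structure.

```python
import numpy as np

def to_first_order(f):
    """Wrap y'' = f(t, y) as a first-order system u' = F(t, u).

    The state stacks y and its derivative, u = (y, y'); this is the
    indirect approach, which never exploits that f is independent of y'.
    """
    def F(t, u):
        d = u.size // 2
        y, yp = u[:d], u[d:]
        return np.concatenate([yp, f(t, y)])
    return F

# Example: harmonic oscillator y'' = -y with y(0) = 1, y'(0) = 0.
F = to_first_order(lambda t, y: -y)
u0 = np.array([1.0, 0.0])
```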

In the literature, several classes of parallel explicit RKN-type methods have been investigated in [2–5,17]. A common challenge in these papers is to reduce, for a given order, the required number of sequential f-evaluations per step, using parallel processors. In the present paper, we investigate a particular class of explicit RKN-type block predictor–corrector methods (PC methods) for use on parallel computers. Our approach consists of applying the PC method not only at step points, but also at off-step points (block points), so that, in each step, a whole block of approximations to the exact solutions is computed. This approach was first used in [8] for increasing reliability in explicit RK methods. It was also successfully applied in [19] for improving the efficiency of RK-type PC methods. We shall use this approach to construct PC methods of RKN-type requiring only a few sequential f-evaluations per step with acceptable stability properties. In this case, as in [19], the block of approximations is used to obtain a highly accurate predictor formula in the next step by using Lagrange or Hermite interpolation. The precise location of the off-step points can be exploited for minimizing the interpolation errors and also for developing various cheap strategies for stepsize control. Since the approximations to the exact solutions at the off-step points to be computed in each step can be obtained in parallel, the sequential costs of the resulting RKN-type block PC methods are equal to those of conventional PC methods. Furthermore, by using Runge–Kutta–Nyström corrector methods, the PC iteration computing the approximation to the exact solution at each off-step point is itself also highly parallel (cf. [3,17]). The parallel RKN-type PC methods investigated in this paper may be considered as block versions of the parallel-iterated RKN methods (PIRKN methods) in [3,17], and will therefore be termed block PIRKN methods (BPIRKN methods). Moreover, by using direct RKN correctors, we have obtained BPIRKN methods possessing both faster convergence and smaller truncation errors, resulting in better efficiency than by using indirect RKN correctors (cf., e.g., [3]).

Starting with section 2, where the definition of Runge–Kutta–Nyström methods is given, we formulate the BPIRKN methods in section 3. Furthermore, we consider order conditions for the predictor, convergence and stability boundaries, and the choice of block abscissas. In section 4 we report the numerical results obtained by the BPIRKN methods of orders 5, 7, 9 and the highly efficient sequential code ODEX2 [15]. In the following sections, for the sake of simplicity of notation, we assume that the IVP (1.1) is a scalar problem. However, all considerations below can be straightforwardly extended to systems of ODEs, and therefore also to nonautonomous equations.

2 RKN methods

The starting point is the following two classes of RKN methods. The first one is the class of direct and indirect RKN methods; the second one is the class of parallel explicit RKN methods.

2.1 Direct and indirect RKN methods

A general s-stage RKN method for numerically solving the scalar problem (1.1) is defined by (see, e.g., [21,17], and also [1, p. 272])

$$\begin{aligned}
U_n &= u_n e + h u'_n c + h^2 A f(U_n),\\
u_{n+1} &= u_n + h u'_n + h^2 b^T f(U_n),\\
u'_{n+1} &= u'_n + h d^T f(U_n),
\end{aligned} \tag{2.1}$$

where u_n ≈ y(t_n), u'_n ≈ y'(t_n), h is the stepsize, the s × s matrix A and the s-dimensional vectors b, c, d are the method parameter matrix and vectors, and e is the s-dimensional vector with unit entries (in the following, we will use the notation e for any vector with unit entries and e_j for any jth unit vector; the dimension will always be clear from the context). The vector U_n denotes the stage vector representing numerical approximations to the exact solution vector y(t_n e + ch) at the nth step. Furthermore, in (2.1), we use for any vector v = (v_1, ..., v_s)^T and any scalar function f the notation f(v) := (f(v_1), ..., f(v_s))^T. Similarly to an RK method, the RKN method (2.1) is also conveniently represented by the Butcher array (see, e.g., [1, p. 272; 17]):

$$\begin{array}{c|c}
c & A\\
\hline
 & b^T\\
 & d^T
\end{array}$$

This RKN method will be referred to as the corrector method. We distinguish two types of RKN methods: direct and indirect. Indirect RKN methods are derived from RK methods for first-order ODEs. Writing (1.1) in first-order form and applying an RK method with Butcher array

$$\begin{array}{c|c}
c & A_{RK}\\
\hline
 & b_{RK}^T
\end{array}$$

yields the indirect RKN method defined by

$$\begin{array}{c|c}
c & [A_{RK}]^2\\
\hline
 & b_{RK}^T A_{RK}\\
 & b_{RK}^T
\end{array}$$
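As an illustration, the indirect RKN tableau can be generated mechanically from any given RK tableau; the sketch below is ours (not code from the paper), using the two-stage Gauss–Legendre RK method as example input.

```python
import numpy as np

def indirect_rkn(c_rk, A_rk, b_rk):
    """Indirect RKN tableau (c, A, b, d) from an RK tableau, per the
    Butcher array above: A = [A_RK]^2, b^T = b_RK^T A_RK, d = b_RK."""
    return c_rk, A_rk @ A_rk, A_rk.T @ b_rk, b_rk.copy()

# Example input: the 2-stage Gauss-Legendre RK method (order 4).
s3 = np.sqrt(3.0)
c_rk = np.array([0.5 - s3/6, 0.5 + s3/6])
A_rk = np.array([[0.25,        0.25 - s3/6],
                 [0.25 + s3/6, 0.25       ]])
b_rk = np.array([0.5, 0.5])
c, A, b, d = indirect_rkn(c_rk, A_rk, b_rk)
```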


If the originating implicit RK method is of collocation type, then the resulting indirect implicit RKN method will be termed an indirect collocation implicit RKN method. Direct implicit RKN methods are directly constructed for second-order ODEs of the form (1.1). A first family of these direct implicit RKN methods is obtained by means of collocation techniques (see [21]) and will also be called direct collocation implicit RKN methods. In this paper, we will confine our considerations to high-order collocation implicit RKN methods, that is, the Gauss–Legendre and Radau IIA methods (briefly called direct or indirect Gauss–Legendre, Radau IIA). This class contains methods of arbitrarily high order. Indirect collocation unconditionally stable methods can be found in [13]. Direct collocation conditionally stable methods were investigated in [3,21].

2.2 Parallel explicit RKN methods

A first class of parallel explicit RKN methods, called PIRKN, was considered in [17]. These PIRKN methods are closely related to the block PIRKN methods to be considered in the next section. Using an indirect collocation implicit RKN method of the form (2.1) as corrector, a general PIRKN method of [17] assumes the following form (see also [3]):

$$\begin{aligned}
U_n^{(0)} &= y_n e + h y'_n c,\\
U_n^{(j)} &= y_n e + h y'_n c + h^2 A f\bigl(U_n^{(j-1)}\bigr), \quad j = 1,\ldots,m,\\
y_{n+1} &= y_n + h y'_n + h^2 b^T f\bigl(U_n^{(m)}\bigr),\\
y'_{n+1} &= y'_n + h d^T f\bigl(U_n^{(m)}\bigr),
\end{aligned} \tag{2.2}$$

where y_n ≈ y(t_n), y'_n ≈ y'(t_n). Let p be the order of the corrector (2.1). By setting m = [(p − 1)/2], where [·] denotes the integer part, the PIRKN method (2.2) is indeed an explicit RKN method of order p with the Butcher array (cf. [3,17])

$$\begin{array}{c|ccccc}
c & O & & & & \\
c & A & O & & & \\
c & O & A & O & & \\
\vdots & & & \ddots & \ddots & \\
c & O & \cdots & O & A & O \\
\hline
 & 0^T & \cdots & 0^T & 0^T & b^T \\
 & 0^T & \cdots & 0^T & 0^T & d^T
\end{array} \tag{2.3}$$

where O and 0 denote the s × s matrix and the s-dimensional vector with zero entries, respectively. From the Butcher array (2.3), it is clear that the PIRKN method (2.2) has a number of stages equal to (m + 1)s. However, in each iteration, the s components of the vector f(U_n^{(j−1)}) can be evaluated in parallel, provided that we have an s-processor computer. Consequently, the number of sequential f-evaluations equals s* = m + 1. This class also contains methods of arbitrarily high order and belongs to the set of efficient parallel methods for nonstiff problems of the form (1.1).
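For concreteness, here is one PIRKN step (2.2) written out in serial Python (a sketch of ours; f is assumed vectorized over the stage values, and in an actual implementation the s components of each f-evaluation would be distributed over s processors):

```python
import numpy as np

def pirkn_step(f, t_n, y_n, yp_n, h, c, A, b, d, m):
    """One step of the PIRKN method (2.2) for the scalar problem y'' = f(t, y)."""
    U = y_n + h * yp_n * c                                    # U^(0)
    for _ in range(m):                                        # corrections
        U = y_n + h * yp_n * c + h**2 * (A @ f(t_n + c*h, U))
    fU = f(t_n + c*h, U)                                      # f(U^(m))
    # In total m + 1 sequential f-evaluations per step: s* = m + 1.
    return (y_n + h * yp_n + h**2 * (b @ fU),                 # y_{n+1}
            yp_n + h * (d @ fU))                              # y'_{n+1}
```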


For the detailed performance of these PIRKN methods, we refer to [17]. A further development of parallel RKN-type methods was considered in, e.g., [2–5]. Notice that, apart from parallelism across the methods and across the problems, the PIRKN method does not have any further parallelism.

3 Block PIRKN methods

Applying the RKN method (2.1) at t_n with r distinct stepsizes a_i h, where i = 1, ..., r and a_1 = 1, we have

$$\begin{aligned}
U_{n,i} &= u_n e + a_i h u'_n c + a_i^2 h^2 A f(U_{n,i}),\\
u_{n+1,i} &= u_n + a_i h u'_n + a_i^2 h^2 b^T f(U_{n,i}),\\
u'_{n+1,i} &= u'_n + a_i h d^T f(U_{n,i}), \quad i = 1,\ldots,r.
\end{aligned} \tag{3.1}$$

Let us suppose that at the (n − 1)th step, a block of predictions U^{(0)}_{n−1,i}, i = 1, ..., r, and the approximations y_{n−1} ≈ y(t_{n−1}), y'_{n−1} ≈ y'(t_{n−1}) are given. We shall compute r approximations y_{n,i} to the exact solutions y(t_{n−1} + a_i h), i = 1, ..., r, defined by

$$\begin{aligned}
U^{(j)}_{n-1,i} &= y_{n-1} e + a_i h y'_{n-1} c + a_i^2 h^2 A f\bigl(U^{(j-1)}_{n-1,i}\bigr), \quad j = 1,\ldots,m,\\
y_{n,i} &= y_{n-1} + a_i h y'_{n-1} + a_i^2 h^2 b^T f\bigl(U^{(m)}_{n-1,i}\bigr),\\
y'_{n,i} &= y'_{n-1} + a_i h d^T f\bigl(U^{(m)}_{n-1,i}\bigr), \quad i = 1,\ldots,r.
\end{aligned}$$

In the next step, these r approximations are used to create high-order predictors. By denoting

$$Y_n := (y_{n,1}, \ldots, y_{n,r})^T, \quad y_{n,1} = y_n, \qquad
Y'_n := \bigl(y'_{n,1}, \ldots, y'_{n,r}\bigr)^T, \quad y'_{n,1} = y'_n,$$

we can construct the following predictor formulas:

$$U^{(0)}_{n,i} = V_i Y_n, \tag{3.3a}$$

$$U^{(0)}_{n,i} = V_i Y_n + h W_i Y'_n, \tag{3.3b}$$

$$U^{(0)}_{n,i} = V_i Y_n + h W_i Y'_n + h^2 \Lambda_i f(Y_n), \quad i = 1,\ldots,r, \tag{3.3c}$$

where V_i, W_i and Λ_i are s × r extrapolation matrices which will be determined by order conditions (see section 3.1). The predictors (3.3a), (3.3b) and (3.3c) are referred to as Lagrange, Hermite-I and Hermite-II, respectively. Apart from (3.3), we can construct predictors of other types, e.g., of Adams type (cf. [19]). Regarding (3.1) as block corrector methods and (3.3) as block predictor methods for the stage vectors, we leave the class of one-step methods and arrive at a block PC method

$$U^{(0)}_{n,i} = V_i Y_n + \theta^2 h W_i Y'_n + \bigl[\theta^2(1-\theta)/2\bigr] h^2 \Lambda_i f(Y_n), \tag{3.4a}$$

$$\begin{aligned}
U^{(j)}_{n,i} &= e e_1^T Y_n + a_i h c e_1^T Y'_n + a_i^2 h^2 A f\bigl(U^{(j-1)}_{n,i}\bigr), \quad j = 1,\ldots,m,\\
y_{n+1,i} &= e_1^T Y_n + a_i h e_1^T Y'_n + a_i^2 h^2 b^T f\bigl(U^{(m)}_{n,i}\bigr),\\
y'_{n+1,i} &= e_1^T Y'_n + a_i h d^T f\bigl(U^{(m)}_{n,i}\bigr), \quad i = 1,\ldots,r, \quad \theta \in \{0, 1, -1\}.
\end{aligned} \tag{3.4}$$

Notice that, for a general presentation, in (3.4) the three different predictor formulas (3.3) have been combined into a common one (3.4a), where θ = 0, 1 and −1 indicate the Lagrange, Hermite-I and Hermite-II predictor, respectively. With θ = −1 the block PC method (3.4) is in PE(CE)^m E mode and, with θ = 0 or 1, the PE(CE)^m E mode reduces to P(CE)^m E mode.

It can be seen that the block PC method (3.4) consists of a block of PIRKN-type corrections using a block of predictions at the off-step points (block points) (cf. section 2.2). Therefore, we call the method (3.4) the r-dimensional block PIRKN method (BPIRKN method) (cf. [19]). In the case of the Hermite-I predictor (3.3b), for r = 1, the BPIRKN method (3.4) indeed reduces to a PIRKN method of the form (2.2) studied in [3,17].

Once the vectors Y_n and Y'_n are given, the r values y_{n,i} can be computed in parallel and, on a second level, the components of the ith stage vector iterate U^{(j)}_{n,i} can also be evaluated in parallel (cf. also section 2.2). Hence, the r-dimensional BPIRKN methods (3.4) based on s-stage RKN correctors can be implemented on a computer possessing r·s parallel processors. The number of sequential f-evaluations per step of length h on each processor equals s* = m + θ²(1 − θ)/2 + 1, where θ ∈ {0, 1, −1}.
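The following serial sketch of one fixed stepsize BPIRKN step (3.4) with the Hermite-I predictor (θ = 1) is our own illustration, not the authors' code; the outer loop over i and, within each f-evaluation, the loop over the s stage components are the two levels that would run concurrently on r·s processors.

```python
import numpy as np

def bpirkn_step(f, t_n, Y, Yp, h, a, c, A, b, d, V, W, m):
    """One fixed stepsize BPIRKN step (3.4) with Hermite-I predictor.

    Y, Yp are the length-r blocks Y_n, Y'_n (Y[0] = y_n, Yp[0] = y'_n);
    V, W are lists of the s x r predictor matrices V_i, W_i.
    """
    r = len(a)
    Y_new, Yp_new = np.empty(r), np.empty(r)
    for i in range(r):                            # parallel over block points
        U = V[i] @ Y + h * (W[i] @ Yp)            # predictor (3.4a), theta = 1
        for _ in range(m):                        # corrections (3.4)
            U = Y[0] + a[i]*h*Yp[0]*c + (a[i]*h)**2 * (A @ f(t_n + a[i]*c*h, U))
        fU = f(t_n + a[i]*c*h, U)
        Y_new[i]  = Y[0] + a[i]*h*Yp[0] + (a[i]*h)**2 * (b @ fU)
        Yp_new[i] = Yp[0] + a[i]*h * (d @ fU)
    return Y_new, Yp_new                          # blocks Y_{n+1}, Y'_{n+1}
```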

3.1 Order conditions for the predictor

In this section we consider variable stepsize BPIRKN methods

$$U^{(0)}_{n,i} = V_i^n Y_n + \theta^2 h_n W_i^n Y'_n + \bigl[\theta^2(1-\theta)/2\bigr] h_n^2 \Lambda_i^n f(Y_n), \tag{3.5a}$$

$$\begin{aligned}
U^{(j)}_{n,i} &= e e_1^T Y_n + a_i h_n c e_1^T Y'_n + a_i^2 h_n^2 A f\bigl(U^{(j-1)}_{n,i}\bigr), \quad j = 1,\ldots,m,\\
y_{n+1,i} &= e_1^T Y_n + a_i h_n e_1^T Y'_n + a_i^2 h_n^2 b^T f\bigl(U^{(m)}_{n,i}\bigr),\\
y'_{n+1,i} &= e_1^T Y'_n + a_i h_n d^T f\bigl(U^{(m)}_{n,i}\bigr), \quad i = 1,\ldots,r, \quad \theta \in \{0, 1, -1\},
\end{aligned} \tag{3.5}$$

where h_n = t_{n+1} − t_n, and V_i^n, W_i^n and Λ_i^n are the predictor matrices, which will be determined by the order conditions below as variable predictor matrices depending on the stepsize ratio h_n/h_{n−1}. The order conditions for (3.5a) can be obtained by replacing U^{(0)}_{n,i} and Y_n in (3.5a) with the exact solution values y(t_n e + a_i h_n c) and y(t_{n−1} e + h_{n−1} a) = y(t_n e + h_{n−1}(a − e)), respectively. The substitution of these exact values into (3.5a) leads us to relations for predictors of order q:

$$\begin{aligned}
&y(t_n e + h_n a_i c) - V_i^n\, y\bigl(t_n e + h_{n-1}(a - e)\bigr) - \theta^2 h_n W_i^n\, y'\bigl(t_n e + h_{n-1}(a - e)\bigr)\\
&\quad - \bigl[\theta^2(1-\theta)/2\bigr] h_n^2 \Lambda_i^n\, y''\bigl(t_n e + h_{n-1}(a - e)\bigr) = O\bigl(h_n^{q+1}\bigr), \quad i = 1,\ldots,r.
\end{aligned} \tag{3.6}$$


Let us suppose that the stepsize ratio ξ_n = h_n/h_{n−1} is bounded from above. Then, using Taylor expansions, we can expand the left-hand side of (3.6) in powers of h_n and obtain the following qth-order conditions for determining the variable predictor matrices:

$$(\xi_n a_i c)^j - V_i^n (a - e)^j - j\theta^2 \xi_n W_i^n (a - e)^{j-1} - j(j-1)\bigl[\theta^2(1-\theta)/2\bigr] \xi_n^2 \Lambda_i^n (a - e)^{j-2} = 0, \quad j = 0,\ldots,q, \quad i = 1,\ldots,r. \tag{3.7}$$

The conditions (3.7) imply that

$$U_{n,i} - U^{(0)}_{n,i} = O\bigl(h_n^{q+1}\bigr), \quad i = 1,\ldots,r,$$

and, therefore, the following order relations hold:

$$\begin{aligned}
U_{n,i} - U^{(m)}_{n,i} &= O\bigl(h_n^{2m+q+1}\bigr),\\
u_{n+1,i} - y_{n+1,i} &= a_i^2 h_n^2 b^T \bigl[f(U_{n,i}) - f\bigl(U^{(m)}_{n,i}\bigr)\bigr] = O\bigl(h_n^{2m+q+3}\bigr),\\
u'_{n+1,i} - y'_{n+1,i} &= a_i h_n d^T \bigl[f(U_{n,i}) - f\bigl(U^{(m)}_{n,i}\bigr)\bigr] = O\bigl(h_n^{2m+q+2}\bigr), \quad i = 1,\ldots,r.
\end{aligned}$$

Furthermore, for the local truncation error of the BPIRKN method (3.4), we may also write

$$\begin{aligned}
y(t_{n+1}) - y_{n+1} &= \bigl[y(t_{n+1}) - u_{n+1}\bigr] + \bigl[u_{n+1} - y_{n+1}\bigr] = O\bigl(h_n^{p+1}\bigr) + O\bigl(h_n^{2m+q+3}\bigr),\\
y'(t_{n+1}) - y'_{n+1} &= \bigl[y'(t_{n+1}) - u'_{n+1}\bigr] + \bigl[u'_{n+1} - y'_{n+1}\bigr] = O\bigl(h_n^{p+1}\bigr) + O\bigl(h_n^{2m+q+2}\bigr),
\end{aligned}$$

where p is the order of the generating RKN corrector (2.1). Thus, we have the following theorem:

Theorem 3.1. Suppose that the stepsize ratio ξ_n is bounded from above. If the conditions (3.7) are satisfied and if the generating RKN corrector (2.1) has order p, then the variable stepsize BPIRKN method (3.5) has order p* = min{p, p_iter}, where p_iter = 2m + q + 1.

In order to express V_i^n, W_i^n and Λ_i^n explicitly in terms of the vectors a, c and the stepsize ratio ξ_n, we suppose that q = [1 + θ² + θ²(1 − θ)/2] r − 1, θ ∈ {0, 1, −1}, and, for i = 1, ..., r, define the matrices

$$\begin{aligned}
P_{i,n} &:= \bigl(e, (\xi_n a_i c), (\xi_n a_i c)^2, \ldots, (\xi_n a_i c)^{r-1}\bigr),\\
Q_n &:= \bigl(e, (a-e), (a-e)^2, \ldots, (a-e)^{r-1}\bigr),\\
R_n &:= \bigl(0, \xi_n e, 2\xi_n(a-e), 3\xi_n(a-e)^2, \ldots, (r-1)\xi_n(a-e)^{r-2}\bigr),\\
S_n &:= \bigl(0, 0, 2\xi_n^2 e, 6\xi_n^2(a-e), 12\xi_n^2(a-e)^2, \ldots, (r-1)(r-2)\xi_n^2(a-e)^{r-3}\bigr),\\
P^*_{i,n} &:= \bigl((\xi_n a_i c)^r, (\xi_n a_i c)^{r+1}, (\xi_n a_i c)^{r+2}, \ldots, (\xi_n a_i c)^{2r-1}\bigr),\\
Q^*_n &:= \bigl((a-e)^r, (a-e)^{r+1}, (a-e)^{r+2}, \ldots, (a-e)^{2r-1}\bigr),\\
R^*_n &:= \bigl(r\xi_n(a-e)^{r-1}, (r+1)\xi_n(a-e)^r, \ldots, (2r-1)\xi_n(a-e)^{2r-2}\bigr),\\
S^*_n &:= \bigl(r(r-1)\xi_n^2(a-e)^{r-2}, (r+1)r\xi_n^2(a-e)^{r-1}, \ldots, (2r-1)(2r-2)\xi_n^2(a-e)^{2r-3}\bigr),\\
P^{**}_{i,n} &:= \bigl((\xi_n a_i c)^{2r}, (\xi_n a_i c)^{2r+1}, (\xi_n a_i c)^{2r+2}, \ldots, (\xi_n a_i c)^{q}\bigr),\\
Q^{**}_n &:= \bigl((a-e)^{2r}, (a-e)^{2r+1}, (a-e)^{2r+2}, \ldots, (a-e)^{q}\bigr),\\
R^{**}_n &:= \bigl(2r\xi_n(a-e)^{2r-1}, (2r+1)\xi_n(a-e)^{2r}, \ldots, q\xi_n(a-e)^{q-1}\bigr),\\
S^{**}_n &:= \bigl(2r(2r-1)\xi_n^2(a-e)^{2r-2}, (2r+1)2r\xi_n^2(a-e)^{2r-1}, \ldots, q(q-1)\xi_n^2(a-e)^{q-2}\bigr),
\end{aligned}$$

where, for θ = 0, the matrices P*_{i,n}, Q*_n, R*_n, S*_n, P**_{i,n}, Q**_n, R**_n, S**_n are assumed to be zero, and, for θ = 1, only P**_{i,n}, Q**_n, R**_n, S**_n are assumed to be zero matrices. The order conditions (3.7) can be presented in the form

$$\begin{aligned}
P_{i,n} - V_i^n Q_n - \theta^2 W_i^n R_n - \bigl[\theta^2(1-\theta)/2\bigr] \Lambda_i^n S_n &= O,\\
P^*_{i,n} - V_i^n Q^*_n - \theta^2 W_i^n R^*_n - \bigl[\theta^2(1-\theta)/2\bigr] \Lambda_i^n S^*_n &= O,\\
P^{**}_{i,n} - V_i^n Q^{**}_n - \theta^2 W_i^n R^{**}_n - \bigl[\theta^2(1-\theta)/2\bigr] \Lambda_i^n S^{**}_n &= O.
\end{aligned} \tag{3.8}$$

Since the components a_i are assumed to be distinct, the matrix Q_n is nonsingular, and from (3.8), for i = 1, ..., r, we may write

$$\begin{aligned}
V_i^n &= \Bigl(P_{i,n} - \theta^2 W_i^n R_n - \bigl[\theta^2(1-\theta)/2\bigr] \Lambda_i^n S_n\Bigr) Q_n^{-1},\\
\theta^2 W_i^n &= \Bigl(\bigl[\theta^2(1-\theta)/2\bigr] \Lambda_i^n \bigl(S^*_n - S_n Q_n^{-1} Q^*_n\bigr) + \bigl(P_{i,n} Q_n^{-1} Q^*_n - P^*_{i,n}\bigr)\Bigr)
\bigl(R_n Q_n^{-1} Q^*_n - R^*_n\bigr)^{-1},\\
\bigl[\theta^2(1-\theta)/2\bigr] \Lambda_i^n &= \Bigl(\bigl(P^{**}_{i,n} - P_{i,n} Q_n^{-1} Q^{**}_n\bigr) + \bigl(P_{i,n} Q_n^{-1} Q^*_n - P^*_{i,n}\bigr) \bigl(R_n Q_n^{-1} Q^*_n - R^*_n\bigr)^{-1} \bigl(R_n Q_n^{-1} Q^{**}_n - R^{**}_n\bigr)\Bigr)\\
&\quad \times \Bigl(\bigl(S_n Q_n^{-1} Q^*_n - S^*_n\bigr) \bigl(R_n Q_n^{-1} Q^*_n - R^*_n\bigr)^{-1} \bigl(R_n Q_n^{-1} Q^{**}_n - R^{**}_n\bigr) + \bigl(S^{**}_n - S_n Q_n^{-1} Q^{**}_n\bigr)\Bigr)^{-1},
\end{aligned} \tag{3.9}$$

where the matrices

$$\bigl(S_n Q_n^{-1} Q^*_n - S^*_n\bigr)\bigl(R_n Q_n^{-1} Q^*_n - R^*_n\bigr)^{-1}\bigl(R_n Q_n^{-1} Q^{**}_n - R^{**}_n\bigr) + \bigl(S^{**}_n - S_n Q_n^{-1} Q^{**}_n\bigr)
\quad\text{and}\quad
R_n Q_n^{-1} Q^*_n - R^*_n$$

are assumed to be nonsingular. In view of theorem 3.1 and the explicit expressions of the predictor matrices V_i^n, W_i^n and Λ_i^n in (3.9), we have the following theorem:

Theorem 3.2. If q = [1 + θ² + θ²(1 − θ)/2] r − 1 and the predictor matrices V_i^n, W_i^n, Λ_i^n, i = 1, ..., r, satisfy the relations (3.9), then for the variable stepsize BPIRKN methods (3.5), p_iter = [1 + θ² + θ²(1 − θ)/2] r + 2m, p* = min{p, p_iter} and s* = m + θ²(1 − θ)/2 + 1, θ ∈ {0, 1, −1}.
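For the fixed stepsize Hermite-I case (θ = 1, ξ_n = 1, so q = 2r − 1 and the starred-starred matrices vanish), the conditions (3.7) reduce to a confluent Vandermonde system that determines V_i and W_i directly. The sketch below is ours, not the authors' code, and solves that system numerically:

```python
import numpy as np

def hermite1_predictor_matrices(a, c):
    """V_i, W_i from the order conditions (3.7) with theta = 1, xi_n = 1.

    Column j of the 2r x 2r system G carries the value condition (a-e)^j
    and the derivative condition j*(a-e)^(j-1); the right-hand side
    columns are (a_i c)^j, j = 0, ..., 2r - 1, so [V_i  W_i] G = H.
    """
    a, c = np.asarray(a, float), np.asarray(c, float)
    r = len(a)
    ae = a - 1.0                                  # the vector a - e
    G = np.zeros((2*r, 2*r))
    for j in range(2*r):
        G[:r, j] = ae**j                          # value conditions
        if j >= 1:
            G[r:, j] = j * ae**(j - 1)            # derivative conditions
    Vs, Ws = [], []
    for i in range(r):
        H = np.array([(a[i]*c)**j for j in range(2*r)]).T   # s x 2r
        X = H @ np.linalg.inv(G)
        Vs.append(X[:, :r])
        Ws.append(X[:, r:])
    return Vs, Ws
```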

In the application of BPIRKN methods, we have some natural combinations of the predictors (3.4a) with Gauss–Legendre and Radau IIA correctors. Using Lagrange and Hermite-I predictors has the advantage of requiring no additional f-evaluations in the predictor. An important disadvantage of the Lagrange predictor is that, for a given order q of the predictor formulas, its block dimension is twice as large as that of the Hermite-I predictor, so that double the number of processors is needed for the implementation of the BPIRKN methods. In this paper, we therefore concentrate our considerations on the Hermite-I predictors.

3.2 Convergence boundaries

In actual implementations of BPIRKN methods, the number of iterations m is determined by some iteration strategy, rather than by order conditions using the minimal number of iterations needed to reach the order of the corrector. Therefore, it is of interest to know how the integration step affects the rate of convergence. The stepsize should be such that a reasonable convergence speed is achieved.

As in, e.g., [3], we shall determine the rate of convergence by using the model test equation y''(t) = λy(t), where λ runs through the spectrum of the Jacobian matrix ∂f/∂y. For this equation, we obtain the iteration error equation

$$U^{(j)}_{n,i} - U_{n,i} = a_i^2 z A \bigl(U^{(j-1)}_{n,i} - U_{n,i}\bigr), \quad z := h^2\lambda, \quad j = 1,\ldots,m. \tag{3.10}$$

Hence, with respect to the model test equation, the convergence factor is determined by the spectral radius ρ(a_i² zA) of the iteration matrix a_i² zA, i = 1, ..., r. Requiring that ρ(a_i² zA) < 1 leads us to the convergence condition

$$a_i^2 |z| < \frac{1}{\rho(A)} \quad\text{or}\quad a_i^2 h^2 < \frac{1}{\rho(\partial f/\partial y)\,\rho(A)}. \tag{3.11}$$

We shall call 1/ρ(A) the convergence boundary. In actual computation, the integration stepsize h should be substantially smaller than allowed by condition (3.11). By requiring that ρ(a_i² zA) be less than a given damping factor α (α ≪ 1), we are led to the condition

$$a_i^2 |z| \le \gamma(\alpha) \quad\text{or}\quad a_i^2 h^2 \le \frac{\gamma(\alpha)}{\rho(\partial f/\partial y)}, \qquad \gamma(\alpha) = \frac{\alpha}{\rho(A)}, \tag{3.12}$$

where γ(α) denotes the convergence boundary with damping factor α of the method. Table 1 lists the convergence boundaries γ(α) of the BPIRKN methods based on indirect and direct collocation Gauss–Legendre and Radau IIA RKN correctors of

Table 1
Convergence boundaries γ(α) for various BPIRKN methods based on p-order correctors.

Correctors | p = 3 | p = 4 | p = 5 | p = 6 | p = 7 | p = 8 | p = 9 | p = 10


orders up to 10 (cf., e.g., [3,21] and also section 2.1). Notice that, for a given stepsize h, the maximal damping factor is defined by

$$\alpha = \frac{a_i^2 h^2 \rho(\partial f/\partial y)}{\gamma(1)},$$

so we can conclude that the direct collocation RKN correctors reported in table 1 give rise to faster convergence. This conclusion leads us to restrict our considerations to the BPIRKN methods based on direct collocation RKN correctors (cf. [3]). These direct RKN methods are not A-stable (see [21]), but their stability regions are sufficiently large for nonstiff problems.
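In practice, condition (3.12) translates into an upper bound on the stepsize. A small sketch of ours, assuming an estimate of ρ(∂f/∂y) is available and computing ρ(A) from the eigenvalues of the corrector matrix:

```python
import numpy as np

def max_stepsize(A, a, jac_radius, alpha):
    """Largest h with a_i^2 h^2 <= gamma(alpha)/rho(df/dy) for all i,
    where gamma(alpha) = alpha/rho(A) as in (3.12)."""
    gamma = alpha / np.max(np.abs(np.linalg.eigvals(A)))   # gamma(alpha)
    return np.sqrt(gamma / jac_radius) / np.max(np.abs(a))
```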

3.3 The choice of block abscissas a_i

In this section we consider fixed stepsize BPIRKN methods. The accuracy of the Hermite-I interpolation formulas is improved if the interpolation abscissas are more narrowly spaced. However, this will increase the magnitude of the entries of the matrices V_i and W_i, causing serious round-off errors. There are several ways to reduce this round-off effect, which were discussed in [19, section 2.1] for Lagrange interpolation formulas. Also in [8], where Hermite interpolation formulas were used for deriving reliable error estimates for defect control, it was found that on a 15-digit precision machine the interpolation abscissas should be separated by 0.2 in order to suppress rounding errors.

In order to derive a further criterion for the choice of suitable values of the abscissas a_i, we need insight into the propagation of a perturbation ε of the block vectors Y_n and Y'_n within a single step (a similar analysis was given in [19]). We shall study this for the model test equation y''(t) = λy(t). First we shall express y_{n+1,i} and h y'_{n+1,i} in terms of Y_n and h Y'_n. Since

$$\begin{aligned}
U_{n,i} &= \bigl(I - a_i^2 zA\bigr)^{-1} \bigl(e e_1^T Y_n + c e_1^T a_i h Y'_n\bigr),\\
U^{(0)}_{n,i} - U_{n,i} &= \bigl[V_i - \bigl(I - a_i^2 zA\bigr)^{-1} e e_1^T\bigr] Y_n + \bigl[W_i - \bigl(I - a_i^2 zA\bigr)^{-1} c e_1^T a_i\bigr] h Y'_n,
\end{aligned}$$

applying (3.4) and (3.10) to the model equation for a given number m, we obtain

$$\begin{aligned}
y_{n+1,i} &= e_1^T Y_n + a_i h e_1^T Y'_n + a_i^2 z b^T \bigl(U^{(m)}_{n,i} - U_{n,i}\bigr) + a_i^2 z b^T U_{n,i}\\
&= e_1^T Y_n + a_i h e_1^T Y'_n + a_i^2 z b^T \bigl(I - a_i^2 zA\bigr)^{-1} \bigl(e e_1^T Y_n + c e_1^T a_i h Y'_n\bigr)\\
&\quad + a_i^2 z b^T \bigl(a_i^2 zA\bigr)^m \Bigl(\bigl[V_i - (I - a_i^2 zA)^{-1} e e_1^T\bigr] Y_n + \bigl[W_i - (I - a_i^2 zA)^{-1} c e_1^T a_i\bigr] h Y'_n\Bigr)\\
&= \Bigl(e_1^T + a_i^2 z b^T (I - a_i^2 zA)^{-1} e e_1^T + a_i^2 z b^T (a_i^2 zA)^m \bigl[V_i - (I - a_i^2 zA)^{-1} e e_1^T\bigr]\Bigr) Y_n\\
&\quad + \Bigl(a_i e_1^T + a_i^2 z b^T (I - a_i^2 zA)^{-1} a_i c e_1^T + a_i^2 z b^T (a_i^2 zA)^m \bigl[W_i - (I - a_i^2 zA)^{-1} a_i c e_1^T\bigr]\Bigr) h Y'_n
\end{aligned}$$
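The two row vectors multiplying Y_n and hY'_n in the last expression can be evaluated numerically; the sketch below is ours, mirroring that expression verbatim, which is convenient when scanning candidate abscissas a_i for small perturbation amplification.

```python
import numpy as np

def amplification_rows(A, b, c, V_i, W_i, a_i, z, m, r):
    """Row vectors multiplying Y_n and hY'_n in y_{n+1,i} above
    (model test equation y'' = lambda*y, z = h^2 lambda)."""
    s = len(c)
    e1 = np.zeros(r); e1[0] = 1.0
    e = np.ones(s)
    K = np.linalg.inv(np.eye(s) - a_i**2 * z * A)      # (I - a_i^2 zA)^{-1}
    P = np.linalg.matrix_power(a_i**2 * z * A, m)      # (a_i^2 zA)^m
    row_Y = e1 + a_i**2 * z * (b @ K @ e) * e1 \
            + a_i**2 * z * (b @ P @ (V_i - np.outer(K @ e, e1)))
    row_hYp = a_i * e1 + a_i**2 * z * (b @ K @ c) * a_i * e1 \
              + a_i**2 * z * (b @ P @ (W_i - a_i * np.outer(K @ c, e1)))
    return row_Y, row_hYp
```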
