Continuous parallel-iterated RKN-type PC methods for nonstiff IVPs
Nguyen Huu Cong^a,*, Nguyen Van Minh^b
^a Faculty of Mathematics, Mechanics and Informatics, Hanoi University of Science, Vietnam
^b Faculty of Natural Science, Thai Nguyen University, Vietnam
Available online 15 November 2006
Abstract
This paper investigates parallel predictor–corrector (PC) iteration schemes based on direct collocation Runge–Kutta–Nyström (RKN) corrector methods with continuous output formulas for solving nonstiff initial-value problems (IVPs) for systems of special second-order differential equations y''(t) = f(t, y(t)). Consequently, the resulting parallel-iterated RKN-type PC methods are provided with continuous output formulas. The continuous numerical approximations are also used for predicting the stage values in the PC iteration processes. In this way, we obtain parallel PC methods with continuous output formulas and high-order predictors. Applications of the resulting parallel PC methods to a few widely-used test problems reveal that these new parallel PC methods are much more efficient than the parallel-iterated RKN (PIRKN) methods and the sequential ODEX2 and DOPRIN codes from the literature.
© 2006 IMACS. Published by Elsevier B.V. All rights reserved.
Keywords: Runge–Kutta–Nyström methods; Predictor–corrector methods; Stability; Parallelism
1 Introduction
The arrival of parallel computers influences the development of numerical methods for the numerical solution of nonstiff initial-value problems (IVPs) for systems of special second-order ordinary differential equations (ODEs)
y''(t) = f(t, y(t)),   y(t_0) = y_0,   y'(t_0) = y'_0,   (1.1)
where y, f ∈ R^d. Among the various numerical methods proposed so far, the most efficient methods for solving these problems are the explicit Runge–Kutta–Nyström (RKN) methods. In the literature, sequential explicit RKN methods of order up to 10 can be found in, e.g., [16–20,22,23]. In order to exploit the facilities of parallel computers, several classes of parallel predictor–corrector (PC) methods based on RKN corrector methods have been investigated in, e.g., [3–11,14,15,28,12,13]. A common challenge in the latter-mentioned papers is to reduce, for a given order of accuracy, the required number of sequential f-evaluations per step by using parallel processors. In the present paper, we investigate
a particular class of parallel-iterated RKN-type PC methods based on direct collocation RKN corrector methods with continuous output formulas. The continuous numerical approximations are also used as starting stage values in the
* Corresponding author. Current address: School of Graduate Studies, Vietnam National University, 144 Xuan Thuy, Cau Giay, Hanoi, Vietnam.
E-mail address: congnh@vnu.edu.vn (N.H. Cong).
0168-9274/$30.00 © 2006 IMACS. Published by Elsevier B.V. All rights reserved.
doi:10.1016/j.apnum.2006.10.002
PC iteration process. In this way we obtain parallel PC methods that will be termed continuous parallel-iterated RKN-type PC methods (CPIRKN methods). Thus, we have achieved PC methods with dense output formulas and high-order predictors. As a consequence, the resulting new CPIRKN methods require only a small number of sequential f-evaluations per step in the PC iteration process.
In Section 2, we shall consider RKN corrector methods with continuous output formulas (continuous RKN methods). Section 3 formulates and investigates the CPIRKN methods, where the order of accuracy, the rate of convergence and the stability properties are considered. Furthermore, in Section 4, we present numerical comparisons of CPIRKN methods with traditional parallel-iterated RKN methods (PIRKN methods) and with sequential numerical codes.
2 Continuous RKN methods
A numerical method is inefficient if the number of output points becomes very large (cf. [24, p. 188]). Therefore, in the literature, efficient numerical methods are often provided with a continuous output formula. For constructing CPIRKN methods with such a continuous output formula in Section 3, in this section we consider a continuous extension of implicit RKN methods. Our starting point is an s-stage direct collocation (discrete) RKN method (see e.g., [4,12,25])
Y_{n,i} = u_n + h c_i u'_n + h² Σ_{j=1}^{s} a_{ij} f(t_n + c_j h, Y_{n,j}),   i = 1,…,s,   (2.1a)

u_{n+1} = u_n + h u'_n + h² Σ_{j=1}^{s} b_j f(t_n + c_j h, Y_{n,j}),   (2.1b)

u'_{n+1} = u'_n + h Σ_{j=1}^{s} d_j f(t_n + c_j h, Y_{n,j}).   (2.1c)
Let us consider a continuous output formula defined by
u_{n+ξ} = u_n + h ξ u'_n + h² Σ_{j=1}^{s} b_j(ξ) f(t_n + c_j h, Y_{n,j}).   (2.1d)
Here, in (2.1), 0 ≤ ξ ≤ 2, u_{n+ξ} ≈ y(t_{n+ξ}) with t_{n+ξ} = t_n + ξh, u_{n+1} ≈ y(t_{n+1}), u_n ≈ y(t_n), u'_{n+1} ≈ y'(t_{n+1}), u'_n ≈ y'(t_n), and h is the stepsize. Furthermore, Y_{n,i}, i = 1,…,s, are the stage vector components representing numerical approximations to the exact solution values y(t_n + c_i h), i = 1,…,s, at the nth step. The s × s matrix A = (a_{ij}) and the s-dimensional vectors b = (b_j), d = (d_j), b(ξ) = (b_j(ξ)) and c = (c_j) are the method parameters in matrix or vector form. The method defined by (2.1) will be called the continuous RKN method. The step point and stage order of the (discrete) RKN method defined by (2.1a)–(2.1c) will be referred to as the step point and stage order of the continuous RKN method. By the collocation principle, the continuous RKN corrector method (2.1) is of step point order p and stage order r, both at least equal to s (see [25]). This continuous RKN method can be conveniently presented by the Butcher tableau (see e.g., [25,2])
c        | A
---------+---------
u_{n+1}  | b^T
u'_{n+1} | d^T
u_{n+ξ}  | b^T(ξ)
The matrix A and the vectors b and d are defined by the order conditions (see e.g., [12,25]). They can be explicitly expressed in terms of the collocation vector c as (cf. [12])
A = P R^{−1},   b^T = g^T R^{−1},   d^T = ĝ^T S^{−1},   (2.2)

where

P = (p_{ij}) = (c_i^{j+1}/(j + 1)),   R = (r_{ij}) = (j c_i^{j−1}),   S = (s_{ij}) = (c_i^{j−1}),

g = (g_i) = (1/(i + 1)),   ĝ = (ĝ_i) = (1/i),   i, j = 1,…,s.
The vector b(ξ) in the continuous output formula (2.1d) is a vector function of ξ. It satisfies the continuity conditions b(0) = 0 and b(1) = b, and will be determined by order conditions. For fixed stepsize h, these order conditions can be derived by replacing u_{n+ξ}, u_n and Y_{n,j} in (2.1d) with the exact solution values and by requiring that the residue is of order s + 2 in h. Using Taylor expansions of the sufficiently smooth function y(t) in a neighbourhood of t_n, we obtain the order conditions for determining b(ξ):
D(j) = ξ^{j+1}/(j + 1) − b^T(ξ) · (j c^{j−1}) = 0,   j = 1,…,s.   (2.3a)

The conditions (2.3a) can be seen to be of the form

b^T(ξ) R = g^T diag(ξ², ξ³, …, ξ^{s+1}).   (2.3b)

From (2.3b) the explicit expression for the vector b(ξ) is

b^T(ξ) = g^T diag(ξ², ξ³, …, ξ^{s+1}) R^{−1}.   (2.3c)
In view of (2.2) and (2.3c), it follows that the continuity conditions for the vector b(ξ) are clearly verified. We note that the formula in (2.1b) is a special case of the continuous formula (2.1d) with ξ = 1. It is evident that if the conditions (2.3) are satisfied, then we have the local order relation

y(t_{n+ξ}) − u_{n+ξ} = O(h^{s+2}).   (2.4)
For the global order of the continuous approximation defined by (2.1d) (the continuous order), we have the following theorem:

Theorem 2.1. If the function f is Lipschitz continuous and if the continuous RKN corrector method (2.1) is of step point order p, then the continuous output formula defined by (2.1d) gives rise to a continuous approximation of order (continuous order) p* = min{p, s + 2}.
Proof. Let us consider the global error estimate (without the local assumptions u_n = y(t_n), u'_n = y'(t_n)):

y(t_{n+ξ}) − u_{n+ξ} = y(t_{n+ξ}) − u_n − h ξ u'_n − h² Σ_{j=1}^{s} b_j(ξ) f(t_n + c_j h, Y_{n,j})

  = [y(t_{n+ξ}) − y(t_n) − h ξ y'(t_n) − h² Σ_{j=1}^{s} b_j(ξ) f(t_n + c_j h, y(t_n + c_j h))]
  + [y(t_n) − u_n] + h ξ [y'(t_n) − u'_n]
  + h² Σ_{j=1}^{s} b_j(ξ) [f(t_n + c_j h, y(t_n + c_j h)) − f(t_n + c_j h, Y_{n,j})].   (2.5)
Since the function f is Lipschitz continuous, the following global order estimates hold:

y(t_{n+ξ}) − y(t_n) − h ξ y'(t_n) − h² Σ_{j=1}^{s} b_j(ξ) f(t_n + c_j h, y(t_n + c_j h)) = O(h^{s+2}),

[y(t_n) − u_n] + h ξ [y'(t_n) − u'_n] = O(h^p),

h² Σ_{j=1}^{s} b_j(ξ) [f(t_n + c_j h, y(t_n + c_j h)) − f(t_n + c_j h, Y_{n,j})] = O(h^{s+2}).   (2.6)
Table 1
Values of NCD_p | NCD_p* for problem (2.7) obtained by various continuous RKN methods
Methods   p   p*   Nstp = 200   Nstp = 400   Nstp = 800   Nstp = 1600   Nstp = 3200
The proof of Theorem 2.1 follows from (2.5) and (2.6). □
In view of Theorem 2.1, if the step point order p of the continuous RKN method (2.1) is not less than s + 2, then the continuous order p* of the approximation defined by (2.1d) is equal to s + 2.
Example 2.1. In order to show the order p* of the continuous approximation (continuous order) as stated in Theorem 2.1, we consider continuous RKN methods based on direct collocation Radau IIA and Gauss–Legendre methods (see [4,25]). These methods will be called continuous Radau IIA (Cont.Radau) and continuous Gauss–Legendre (Cont.Gauss) methods. We restrict our consideration to the 2-stage and 3-stage methods and apply them to the nonlinear Fehlberg problem (cf. e.g., [16,17,19,20]):
d²y(t)/dt² = ( −4t²                        −2/√(y₁²(t) + y₂²(t)) )
             ( 2/√(y₁²(t) + y₂²(t))        −4t²                  ) y(t),   (2.7)

y(t₀) = (0, 1)^T,   y'(t₀) = (−2√(π/2), 0)^T,   t₀ = √(π/2),
with highly oscillating exact solution given by y(t) = (cos(t²), sin(t²))^T. The absolute global errors of the (discrete) approximation of order p obtained at t_{n+1} = 9 and of the continuous approximation of order p* obtained at t_{n+1.5} = t_n + 1.5h = t_{n+1} + 0.5h are defined by ‖y(t_{n+1}) − y_{n+1}‖_∞ and ‖y(t_{n+1.5}) − y_{n+1.5}‖_∞, respectively. Table 1 lists the average numbers of correct decimal digits, i.e., the values defined by NCD_p = −log₁₀ ‖y(t_{n+1}) − y_{n+1}‖_∞ and NCD_p* = −log₁₀ ‖y(t_{n+1.5}) − y_{n+1.5}‖_∞. The values NCD_p | NCD_p* listed in Table 1 nicely show the theoretical orders p and p* of the continuous RKN methods.
3 CPIRKN methods
In this section, we consider a parallel PC iteration scheme based on the continuous RKN (corrector) methods. This iteration scheme is given by
Y^(0)_{n,i} = y_{n−1} + h(1 + c_i) y'_{n−1} + h² Σ_{j=1}^{s} b_j(1 + c_i) f(t_{n−1} + c_j h, Y^(m)_{n−1,j}),   i = 1,…,s,   (3.1a)

Y^(k)_{n,i} = y_n + h c_i y'_n + h² Σ_{j=1}^{s} a_{ij} f(t_n + c_j h, Y^(k−1)_{n,j}),   i = 1,…,s,  k = 1,…,m,   (3.1b)

y_{n+1} = y_n + h y'_n + h² Σ_{j=1}^{s} b_j f(t_n + c_j h, Y^(m)_{n,j}),   (3.1c)

y'_{n+1} = y'_n + h Σ_{j=1}^{s} d_j f(t_n + c_j h, Y^(m)_{n,j}),   (3.1d)

y_{n+ξ} = y_n + h ξ y'_n + h² Σ_{j=1}^{s} b_j(ξ) f(t_n + c_j h, Y^(m)_{n,j}).   (3.1e)
Regarding (3.1a) as a predictor method and (2.1) as a corrector method, we arrive at a PC method in PE(CE)^m E mode. Since the evaluations f(t_{n−1} + c_j h, Y^(m)_{n−1,j}), j = 1,…,s, are available from the preceding step, we have, in fact, a PC method in P(CE)^m E mode.
In the PC method (3.1), the predictions (3.1a) are obtained by using the continuous output formula (3.1e) from the previous step. If in (3.1a) we set Y^(0)_{n,i} = y_n + h c_i y'_n, i = 1,…,s, the PC method (3.1a)–(3.1d) becomes the original parallel-iterated RKN method (PIRKN method) considered in [4,28]. Therefore, we call the method (3.1) a continuous parallel-iterated RKN-type PC method (CPIRKN method). Notice that the s components f(t_n + c_j h, Y^(k−1)_{n,j}), j = 1,…,s, can be evaluated in parallel, provided that s processors are available, so that the number of sequential f-evaluations per step of length h in each processor equals s* = m + 1.
Theorem 3.1. If the function f is Lipschitz continuous and if the continuous RKN corrector method (2.1) has step point order p, then the CPIRKN method (3.1) has step point order q = min{p, 2m + s + 2} and gives rise to a continuous approximation of order (continuous order) q* = min{p, s + 2}.
Proof. The proof of this theorem is very simple. Suppose that f is Lipschitz continuous, y_n = u_n = y(t_n) and y'_n = u'_n = y'(t_n). Since Y_{n,i} − Y^(0)_{n,i} = O(h^{s+2}) (see (2.4)) and each iteration raises the order of the iteration error by 2, we obtain the following order relations:

Y_{n,i} − Y^(m)_{n,i} = O(h^{2m+s+2}),   i = 1,…,s,

u_{n+1} − y_{n+1} = h² Σ_{j=1}^{s} b_j [f(t_n + c_j h, Y_{n,j}) − f(t_n + c_j h, Y^(m)_{n,j})] = O(h^{2m+s+4}),

u'_{n+1} − y'_{n+1} = h Σ_{j=1}^{s} d_j [f(t_n + c_j h, Y_{n,j}) − f(t_n + c_j h, Y^(m)_{n,j})] = O(h^{2m+s+3}).   (3.2)

Hence, for the local truncation error of the CPIRKN method (3.1), we may write

y(t_{n+1}) − y_{n+1} = [y(t_{n+1}) − u_{n+1}] + [u_{n+1} − y_{n+1}] = O(h^{p+1}) + O(h^{2m+s+4}),

y'(t_{n+1}) − y'_{n+1} = [y'(t_{n+1}) − u'_{n+1}] + [u'_{n+1} − y'_{n+1}] = O(h^{p+1}) + O(h^{2m+s+3}).   (3.3)
The order relations (3.3) give the step point order q as stated in Theorem 3.1 for the CPIRKN method. Furthermore, for the continuous order q* of the continuous approximations defined by (3.1e), we may also write
y(t_{n+ξ}) − y_{n+ξ} = [y(t_{n+ξ}) − u_{n+ξ}] + [u_{n+ξ} − y_{n+ξ}]

  = [y(t_{n+ξ}) − u_{n+ξ}] + [u_n − y_n] + h ξ [u'_n − y'_n]
  + h² Σ_{j=1}^{s} b_j(ξ) [f(t_n + c_j h, Y_{n,j}) − f(t_n + c_j h, Y^(m)_{n,j})].   (3.4)
From (3.2), (3.3) and Theorem 2.1 we have the following global order relations:

y(t_{n+ξ}) − u_{n+ξ} = O(h^{min{p, s+2}}),

u_n − y_n = [u_n − y(t_n)] + [y(t_n) − y_n] = O(h^{min{p, 2m+s+2}}),

h ξ [u'_n − y'_n] = h ξ [u'_n − y'(t_n)] + h ξ [y'(t_n) − y'_n] = O(h^{min{p+1, 2m+s+3}}),

h² Σ_{j=1}^{s} b_j(ξ) [f(t_n + c_j h, Y_{n,j}) − f(t_n + c_j h, Y^(m)_{n,j})] = O(h^{2m+s+3}).   (3.5)
The relations (3.4) and (3.5) then complete the proof of Theorem 3.1. □
Remark. From Theorem 3.1, we see that by setting m = [(p − s − 1)/2] ([·] denoting the integer part), we obtain a CPIRKN method of maximum step point order q = p (the order of the corrector method) with the minimum number of sequential f-evaluations per step, s* = [(p − s + 1)/2].
3.1 Rate of convergence
The rate of convergence of the CPIRKN methods is defined by using the model test equation y''(t) = λ y(t), where λ runs through the eigenvalues of the Jacobian matrix ∂f/∂y (cf. e.g., [4,6,7]). For this equation, we obtain the iteration error equation

Y^(j)_n − Y_n = zA (Y^(j−1)_n − Y_n),   z := h²λ.   (3.6)

Hence, with respect to the model test equation, the convergence rate is determined by the spectral radius ρ(zA) of the iteration matrix zA. Requiring that ρ(zA) < 1 leads us to the convergence condition

|z| < 1/ρ(A).   (3.7)
We shall call ρ(A) the convergence factor and 1/ρ(A) the convergence boundary of the CPIRKN method. One can exploit the freedom in the choice of the collocation vector c of the continuous RKN correctors for minimizing the convergence factor ρ(A), or, equivalently, for maximizing the convergence region, denoted by S_conv and defined as

S_conv := {z: |z| < 1/ρ(A)}.   (3.8)

The convergence factors ρ(A) for the CPIRKN methods used in the numerical experiments can be found in Section 4.
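The convergence factor ρ(A) is directly computable for any collocation vector; a minimal sketch, assuming the direct-collocation form (2.2) of A (the function name is ours):

```python
import numpy as np

def convergence_factor(c):
    # rho(A) with A = P R^{-1} from (2.2); 1/rho(A) is the convergence boundary.
    c = np.asarray(c, dtype=float)
    j = np.arange(1, len(c) + 1)
    R = j * c[:, None] ** (j - 1)
    A = (c[:, None] ** (j + 1) / (j + 1)) @ np.linalg.inv(R)
    return max(abs(np.linalg.eigvals(A)))
```

For typical collocation vectors with abscissae in (0, 1] this factor is well below 1, so the convergence condition (3.7) is not restrictive for nonstiff problems.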
3.2 Stability intervals
The linear stability of the CPIRKN methods (3.1) is investigated by again using the model test equation y''(t) = λ y(t), where λ is assumed to be negative real. By defining the matrix

B = (b(1 + c₁), …, b(1 + c_s))^T,

we can present, for the model test equation, the starting vector Y^(0)_n = (Y^(0)_{n,1}, …, Y^(0)_{n,s})^T defined by (3.1a) in the form

Y^(0)_n = e y_{n−1} + h(e + c) y'_{n−1} + zB Y^(m)_{n−1},

where z := h²λ and e is the s-dimensional vector with unit entries. Applying (3.1a)–(3.1c) to the model test equation yields
Y^(m)_n = e y_n + c h y'_n + zA Y^(m−1)_n

  = [I + zA + ⋯ + (zA)^{m−1}] (e y_n + c h y'_n) + (zA)^m Y^(0)_n

  = z^{m+1} A^m B Y^(m)_{n−1} + [I + zA + ⋯ + (zA)^{m−1}] e y_n
  + [I + zA + ⋯ + (zA)^{m−1}] c h y'_n + z^m A^m e y_{n−1} + z^m A^m (e + c) h y'_{n−1},   (3.9a)

y_{n+1} = y_n + h y'_n + z b^T Y^(m)_n

  = z^{m+2} b^T A^m B Y^(m)_{n−1} + [1 + z b^T (I + zA + ⋯ + (zA)^{m−1}) e] y_n
  + [1 + z b^T (I + zA + ⋯ + (zA)^{m−1}) c] h y'_n + z^{m+1} b^T A^m e y_{n−1} + z^{m+1} b^T A^m (e + c) h y'_{n−1},   (3.9b)

h y'_{n+1} = h y'_n + z d^T Y^(m)_n

  = z^{m+2} d^T A^m B Y^(m)_{n−1} + z d^T (I + zA + ⋯ + (zA)^{m−1}) e y_n
  + [1 + z d^T (I + zA + ⋯ + (zA)^{m−1}) c] h y'_n + z^{m+1} d^T A^m e y_{n−1} + z^{m+1} d^T A^m (e + c) h y'_{n−1}.   (3.9c)

From (3.9) we are led to the recursion
(Y^(m)_n, y_{n+1}, h y'_{n+1}, y_n, h y'_n)^T = M_m(z) (Y^(m)_{n−1}, y_n, h y'_n, y_{n−1}, h y'_{n−1})^T,   (3.10a)
Trang 7where M m (z) is an (s + 4) × (s + 4) matrix defined by
M m (z)=
⎛
⎜
⎜
z m+2bTA m B 1+ zbTP m−1(z)e 1+ zbTP m−1(z)c z m+1bTA me z m+1bTA m (e + c)
z m+2dTA m B zdTP m−1(z)e 1+ zdTP m−1(z)c z m+1dTA me z m+1dTA m (e + c)
⎞
⎟
⎟,
(3.10b)
where P_{m−1}(z) = I + zA + ⋯ + (zA)^{m−1}. The matrix M_m(z) defined by (3.10), which determines the stability of the CPIRKN methods, will be called the amplification matrix, and its spectral radius ρ(M_m(z)) the stability function. For a given number of iterations m, the stability interval, denoted by (−β_stab(m), 0), of the CPIRKN methods is defined as

(−β_stab(m), 0) := {z: ρ(M_m(z)) < 1, z ≤ 0}.

We also call β_stab(m) the stability boundary for given m. The stability boundaries β_stab(m) for the CPIRKN methods used in the numerical experiments can be found in Section 4.
4 Numerical experiments
This section will report numerical results for the CPIRKN methods. We confine our considerations to the CPIRKN methods with direct collocation continuous RKN corrector methods based on the symmetric collocation vectors c investigated in [6,11]. The continuous s-stage RKN corrector methods (2.1) based on these symmetric collocation vectors have orders p = p* equal to s + 1 or s, depending on whether s is odd or even (cf. [6] and Theorem 2.1 in this paper). The symmetric collocation vectors were chosen such that the spectral radius ρ(A) of the RKN matrix A is minimized, so that the CPIRKN methods defined by (3.1) have an "optimal" rate of convergence (see [6,11]). Table 2 below lists the stability boundaries of the CPIRKN methods with continuous RKN corrector methods based on the symmetric collocation vectors considered in [6,11] with s = 3, 4, 5, 6 and with corresponding orders p = 4, 4, 6, 6. The associated CPIRKN methods based on s-stage, p-order continuous RKN corrector methods will be denoted by CPIRKNsp. For s = 3, 4, 5, 6 and p = 4, 4, 6, 6 we have the methods CPIRKN34, CPIRKN44, CPIRKN56 and CPIRKN66, respectively. We observe that the stability boundaries of these CPIRKN methods show a rather irregular behaviour; however, they are sufficiently large for nonstiff IVPs of the form (1.1).
In the following, we shall compare the above CPIRKN methods with explicit parallel RKN methods and sequential codes from the literature. For the CPIRKN methods, in the first step, we always use the trivial predictions given by

Y^(0)_{0,i} = y_0 + h c_i y'_0,   i = 1,…,s.
The absolute error obtained at the end point of the integration interval is presented in the form 10^(−NCD) (NCD may be interpreted as the average number of correct decimal digits). The computational effort is measured by the value of Nseq, denoting the total number of sequential f-evaluations required over the total number of integration steps, denoted by Nstp.
Ignoring load-balancing factors and communication times between processors in parallel methods, the comparison of the various methods in this section is based on Nseq and the obtained NCD values. The numerical experiments below, with small widely-used test problems taken from the literature, show a potential superiority in a parallel setting of the new CPIRKN methods over extant methods. This superiority will be significant on a parallel machine if the test problems
Table 2
Stability boundaries β_stab(m) for various CPIRKN methods
are large enough and/or the f-evaluations are expensive (cf. e.g., [1]). In order to see the convergence behaviour of our CPIRKN methods, we follow a dynamical strategy in all PC methods for determining the number of iterations in the successive steps. It seems natural to require that the iteration error is of the same order in h as the local error of the corrector. This leads us to the stopping criterion (cf. e.g., [3,4,6–8,10])

‖Y^(m)_n − Y^(m−1)_n‖_∞ ≤ C h^p,

where C is a problem- and method-dependent parameter and p is the step point order of the corrector method. All the computations were carried out on a 15-digit precision computer. An actual implementation on a parallel machine is a subject of further studies.
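The dynamical strategy can be sketched as an inner correction loop that stops once the stage increment falls below the tolerance. Note that the exact right-hand side C·h^p of the criterion is our assumption, reconstructed from the surrounding text, and the trivial prediction is used purely for illustration:

```python
import numpy as np

def correct_until_converged(f, t, y, yp, h, A, c, C, p, m_max=50):
    """Iterate the correction (3.1b) until ||Y^(m) - Y^(m-1)||_inf <= C*h**p.
    The tolerance C*h**p is an assumed form of the stopping criterion above.
    Returns the converged stages and the number of iterations used."""
    Y = y + h * np.outer(c, yp)                     # trivial prediction (illustration)
    for m in range(1, m_max + 1):
        F = np.array([f(t + ci * h, Yi) for ci, Yi in zip(c, Y)])
        Y_new = y + h * np.outer(c, yp) + h**2 * (A @ F)
        if np.max(np.abs(Y_new - Y)) <= C * h**p:
            return Y_new, m
        Y = Y_new
    return Y, m_max
```

For nonstiff problems and moderate stepsizes this loop settles after very few corrections, consistent with the iteration counts of one or two per step quoted in Sections 4.1.1–4.1.3.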
4.1 Comparison with parallel methods
We shall report numerical results obtained by the PIRKN methods, which are among the best parallel explicit RKN methods available in the literature, proposed in [4,28], and by the CPIRKN methods considered in this paper. We consider indirect PIRKN (Ind.PIRKN) methods investigated in [28] and direct PIRKN (Dir.PIRKN) methods investigated in [4]. We select a test set of three problems taken from the literature. These three problems possess exact solutions in closed form; initial conditions are taken from the exact solutions.
4.1.1 Linear nonautonomous problem
As a first numerical test, we apply the various p-order PC methods to the linear nonautonomous problem (cf. e.g., [4,6,7])

d²y(t)/dt² = ( −2α(t) + 1     −α(t) + 1 )
             ( 2(α(t) − 1)    α(t) − 2  ) y(t),   α(t) = max{2 cos²(t), sin²(t)},   0 ≤ t ≤ 20,   (4.2)
with exact solution y(t) = (−sin(t), 2 sin(t))^T. The numerical results listed in Table 3 clearly show that the CPIRKN methods are much more efficient than the indirect and direct PIRKN methods of the same order. For this linear problem, all the CPIRKN methods need only about one iteration per step. Notice that because of round-off errors, we cannot expect 15 digits of accuracy in the numerical results. As a consequence, Table 3 contains four empty spots in the last two lines, where the numerical results were in the neighbourhood of the accuracy limits of the computer and therefore considered unreliable.
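The cancellation that makes y(t) = (−sin t, 2 sin t)^T an exact solution of (4.2) holds for every value of α(t), which is easy to confirm numerically (function names are ours):

```python
import numpy as np

def alpha_42(t):
    return max(2.0 * np.cos(t) ** 2, np.sin(t) ** 2)

def rhs_42(t, y):
    # Right-hand side of the linear nonautonomous problem (4.2).
    a = alpha_42(t)
    M = np.array([[-2.0 * a + 1.0, -a + 1.0],
                  [2.0 * (a - 1.0), a - 2.0]])
    return M @ y

def exact_42(t):
    return np.array([-np.sin(t), 2.0 * np.sin(t)])

def exact_42_dd(t):
    # y'' of the exact solution: (sin t, -2 sin t); the alpha-dependent
    # contributions cancel along this solution.
    return np.array([np.sin(t), -2.0 * np.sin(t)])
```

Because the α-terms drop out along the exact solution, the nonsmooth switching in α(t) probes the methods without changing the reference trajectory.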
4.1.2 Nonlinear Fehlberg problem
For the second numerical test, we apply the various p-order PC methods to the well-known nonlinear Fehlberg problem (2.7) considered in Section 2. The numerical results are reported in Table 4. These numerical results show that the CPIRKN methods are again by far superior to the indirect and direct PIRKN methods of the same order. For this nonlinear Fehlberg problem, the number of iterations m needed at each step for all CPIRKN methods is one or two.
Table 3
Values of NCD/Nseq for problem (4.2) obtained by various p-order parallel PC methods
PC methods p Nstp = 80 Nstp = 160 Nstp = 320 Nstp = 640 Nstp = 1280 C
Table 4
Values of NCD/Nseq for problem (2.7) obtained by various p-order parallel PC methods
PC methods p Nstp = 200 Nstp = 400 Nstp = 800 Nstp = 1600 Nstp = 3200 C
Table 5
Values of NCD/Nseq for problem (4.3) obtained by various p-order parallel PC methods
PC methods p Nstp = 100 Nstp = 200 Nstp = 400 Nstp = 800 Nstp = 1600 C
4.1.3 Newton’s equation of motion problem
The third numerical example is the two-body gravitational problem for Newton's equation of motion (see [27, p. 245])

d²y₁(t)/dt² = −y₁(t)/(y₁²(t) + y₂²(t))^{3/2},   d²y₂(t)/dt² = −y₂(t)/(y₁²(t) + y₂²(t))^{3/2},   0 ≤ t ≤ 20,   (4.3)

y₁(0) = 1 − ε,   y₂(0) = 0,   y'₁(0) = 0,   y'₂(0) = √((1 + ε)/(1 − ε)).
This problem can also be found in [20] and in the test set of problems in [26]. The solution components are y₁(t) = cos(u(t)) − ε, y₂(t) = √((1 + ε)(1 − ε)) sin(u(t)), where u(t) is the solution of Kepler's equation t = u(t) − ε sin(u(t)) and ε denotes the eccentricity of the orbit. In this example, we set ε = 0.3. The numerical results for this problem are given in Table 5 and give rise to nearly the same conclusions as formulated for the two previous examples.
4.2 Comparison with sequential codes
In Section 4.1, the CPIRKN methods were compared with indirect and direct PIRKN methods. In this section, we compare these CPIRKN methods with some of the best sequential codes currently available.
We restricted the numerical experiments to a comparison of our two 6-order methods, CPIRKN56 and CPIRKN66, with two well-known sequential codes for the nonlinear Fehlberg problem (2.7), namely the codes DOPRIN and ODEX2 taken from [24]. We reproduced the best results obtained by these sequential codes given in the literature (cf. e.g., [28,9]) and added the results obtained by the CPIRKN56 and CPIRKN66 methods. In spite of the fact that the results of the sequential codes were obtained using a stepsize strategy, whereas the CPIRKN56 and CPIRKN66 methods were applied with fixed stepsizes, it is the CPIRKN56 and CPIRKN66 methods that are the most efficient (see Table 6).
Table 6
Comparison with sequential codes for problem (2.7)
5 Concluding remarks
In this paper, we proposed a new class of parallel PC methods, called continuous parallel-iterated RKN-type PC methods (CPIRKN methods), based on continuous RKN corrector methods. Three numerical experiments showed that the CPIRKN methods are much superior to the well-known PIRKN methods and to the ODEX2 and DOPRIN codes available in the literature.
The paper limits its focus to IVPs of the form y''(t) = f(t, y(t)), y(t₀) = y₀, y'(t₀) = y'₀; however, RKN methods have been proposed for the more general problem y''(t) = f(t, y(t), y'(t)), y(t₀) = y₀, y'(t₀) = y'₀ (see e.g., [21]). In a forthcoming paper, we will extend the ideas of this paper to this more general problem.
References
[1] K. Burrage, Parallel and Sequential Methods for Ordinary Differential Equations, Clarendon Press, Oxford, 1995.
[2] J.C. Butcher, The Numerical Analysis of Ordinary Differential Equations, Runge–Kutta and General Linear Methods, Wiley, New York, 1987.
[3] N.H. Cong, An improvement for parallel-iterated Runge–Kutta–Nyström methods, Acta Math. Viet. 18 (1993) 295–308.
[4] N.H. Cong, Note on the performance of direct and indirect Runge–Kutta–Nyström methods, J. Comput. Appl. Math. 45 (1993) 347–355.
[5] N.H. Cong, Direct collocation-based two-step Runge–Kutta–Nyström methods, SEA Bull. Math. 19 (1995) 49–58.
[6] N.H. Cong, Explicit symmetric Runge–Kutta–Nyström methods for parallel computers, Comput. Math. Appl. 31 (1996) 111–122.
[7] N.H. Cong, Explicit parallel two-step Runge–Kutta–Nyström methods, Comput. Math. Appl. 32 (1996) 119–130.
[8] N.H. Cong, RKN-type parallel block PC methods with Lagrange-type predictors, Comput. Math. Appl. 35 (1998) 45–57.
[9] N.H. Cong, Explicit pseudo two-step RKN methods with stepsize control, Appl. Numer. Math. 38 (2001) 135–144.
[10] N.H. Cong, N.T. Hong Minh, Parallel block PC methods with RKN-type correctors and Adams-type predictors, Internat. J. Comput. Math. 74 (2000) 509–527.
[11] N.H. Cong, N.T. Hong Minh, Fast convergence PIRKN-type PC methods with Adams-type predictors, Internat. J. Comput. Math. 77 (2001) 373–387.
[12] N.H. Cong, N.T. Hong Minh, Parallel-iterated pseudo two-step Runge–Kutta–Nyström methods for nonstiff second-order IVPs, Comput. Math. Appl. 44 (2002) 143–155.
[13] N.H. Cong, N.V. Minh, Improved parallel-iterated pseudo two-step RKN methods for nonstiff problems, SEA Bull. Math., in press.
[14] N.H. Cong, K. Strehmel, R. Weiner, Runge–Kutta–Nyström-type parallel block predictor–corrector methods, Adv. Comput. Math. 10 (1999) 115–133.
[15] N.H. Cong, K. Strehmel, R. Weiner, A general class of explicit pseudo two-step RKN methods on parallel computers, Comput. Math. Appl. 38 (1999) 17–30.