A MODIFIED ALGORITHM AND ITS IMPLEMENTATION
FOR AN ADAPTIVE RUNGE-KUTTA METHOD
Tran Thi Hue *
University of Technology - TNU
ABSTRACT
Adaptive Runge-Kutta methods are widely used to approximate the solution of an initial value problem (IVP) because of their low computational cost and efficiency. This article presents how to derive an adaptive Runge-Kutta method, describes a modification of its algorithm in the general case, and explains how the algorithm works. The main contribution of the article is a new pattern for adjusting the step-size in a more flexible way, which increases efficiency. This pattern uses a scale for the step-size that is determined from the estimate of the local truncation error. Finally, the article applies these ideas to a particular method, the Runge-Kutta-Fehlberg method, and presents its implementation.
Keywords: initial value problem, Runge-Kutta, Runge-Kutta-Fehlberg, adaptive Runge-Kutta
method, error control.
INTRODUCTION
The IVP 𝑦′ = 𝑓(𝑥, 𝑦), 𝑦(𝑥0) = 𝑦0, where 𝑦 = 𝑦(𝑥), 𝑥 ∈ ℝ, has been studied for a long time, and many approaches have produced important results about the problem. The numerical approach is one of the most successful; with the help of modern supercomputers, it is a favorite of scientists working in applied mathematics. Adaptive Runge-Kutta methods inherit from the class of Runge-Kutta methods the elimination of the need to compute higher-order derivatives of 𝑓, and from the class of adaptive methods the reduction of the approximation errors together with a dramatic saving of computational cost. Therefore, adaptive Runge-Kutta methods are powerful tools for studying the solution of an IVP.
The main idea of the method is as follows. First, we use a Runge-Kutta method of order 𝑝 (with 𝑠 stages) to approximate the solution at 𝑥0 + ℎ,

𝑤1∗ = 𝑤0∗ + ℎ ∑_{𝑖=1}^{𝑠} 𝑏𝑖∗𝐹𝑖,   (1.1)
* Tel: 0984 632890, Email: cuonghue1980@gmail.com
where ℎ > 0 is a step-size, 𝑤0∗ = 𝑦0, and
𝐹1 = 𝑓(𝑥0, 𝑤0∗),
𝐹2 = 𝑓(𝑥0 + 𝑐2ℎ, 𝑤0∗ + ℎ𝑎21𝐹1),
𝐹3 = 𝑓(𝑥0 + 𝑐3ℎ, 𝑤0∗ + ℎ(𝑎31𝐹1 + 𝑎32𝐹2)),
…
𝐹𝑠 = 𝑓(𝑥0 + 𝑐𝑠ℎ, 𝑤0∗ + ℎ ∑_{𝑘=1}^{𝑠−1} 𝑎𝑠,𝑘𝐹𝑘).
Then, we use another Runge-Kutta method, of order 𝑝 + 1, to approximate the solution,
𝑤1 = 𝑤0 + ℎ ∑_{𝑖=1}^{𝑠} 𝑏𝑖𝐹𝑖,   (1.2)
in order to estimate the error produced by (1.1), where 𝑤0 = 𝑦0 and 𝑤0∗ in all the formulae for 𝐹1, 𝐹2, …, 𝐹𝑠 is replaced by 𝑤0. From this, we can decide whether the local truncation error of (1.1) is acceptably small, and what to do in the next step (to approximate the solution at 𝑥0 + 2ℎ). The next step requires an adjustment of the step-size ℎ to meet the error requirement. Here, in (1.1) and (1.2), the parameters 𝑏𝑖∗, 𝑏𝑖, 𝑐𝑖 and 𝑎𝑖,𝑘 (𝑖 = 1, 2, …, 𝑠; 𝑘 = 1, 2, …, 𝑖 − 1) are derived by identifying them with the coefficients of the Taylor expansion up to the required order 𝑝 or 𝑝 + 1, respectively. The next section presents how to find them for a particular adaptive Runge-Kutta method.
In (1.1) and (1.2), the 𝑠 coefficients 𝐹1, 𝐹2, …, 𝐹𝑠, called stage derivatives, are the same. This halves the number of function evaluations compared with an adaptive Runge-Kutta method that uses a pair of arbitrary Runge-Kutta methods of orders 𝑝 and 𝑝 + 1.
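The sharing of stages can be sketched in code. The function below is a minimal sketch (the names `embedded_rk_step`, `A`, `c`, `b`, `b_star` are ours): it evaluates the stage derivatives 𝐹1, …, 𝐹𝑠 once and forms both approximations (1.1) and (1.2) from them.

```python
import numpy as np

def embedded_rk_step(f, x0, w0, h, A, c, b, b_star):
    """One step of an embedded Runge-Kutta pair sharing the stages F_1..F_s.

    A is the strictly lower-triangular s-by-s matrix (a_{i,k}), c the nodes,
    b the order-(p+1) weights and b_star the order-p weights of the tableau.
    Returns (w1, w1_star): both approximations of y(x0 + h).
    """
    s = len(c)
    F = np.zeros(s)
    for i in range(s):
        # F_i = f(x0 + c_i h, w0 + h * sum_{k<i} a_{i,k} F_k)
        F[i] = f(x0 + c[i] * h, w0 + h * (A[i, :i] @ F[:i]))
    w1 = w0 + h * (b @ F)            # higher-order approximation (1.2)
    w1_star = w0 + h * (b_star @ F)  # lower-order approximation (1.1)
    return w1, w1_star
```

For the Runge-Kutta-Fehlberg pair, `A`, `c`, `b`, `b_star` would be filled with the values of Table 3 below.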
Table 1. Butcher tableau for the parameters in (1.1) and (1.2).

0
𝑐2   𝑎21
𝑐3   𝑎31   𝑎32
⋮     ⋮            ⋱
𝑐𝑠   𝑎𝑠,1  𝑎𝑠,2  …  𝑎𝑠,𝑠−1
      𝑏1∗   𝑏2∗   …  𝑏𝑠−1∗   𝑏𝑠∗
      𝑏1    𝑏2    …  𝑏𝑠−1    𝑏𝑠

Here, we assume that 𝑐𝑖 = ∑_{𝑘=1}^{𝑖−1} 𝑎𝑖,𝑘, 𝑖 = 1, 2, …, 𝑠.
ERROR CONTROL
Firstly, for a traditional adaptive Runge-Kutta algorithm, we need to estimate the local truncation error in order to decide an adjustment of the step-size. To do so, we attempt to bound the error by a given 𝜀 > 0.
The local truncation error of (1.1) at 𝑥0 + ℎ is
𝜏1∗(ℎ) = (𝑦(𝑥0 + ℎ) − 𝑤1∗)/ℎ.
That of (1.2) is
𝜏1(ℎ) = (𝑦(𝑥0 + ℎ) − 𝑤1)/ℎ.
So,
𝜏1∗(ℎ) = (1/ℎ)[(𝑦(𝑥0 + ℎ) − 𝑤1) + (𝑤1 − 𝑤1∗)] = 𝜏1(ℎ) + (1/ℎ)(𝑤1 − 𝑤1∗).   (1.3)
Then, it is simple to estimate the error produced by (1.1), since 𝜏1(ℎ) has a higher order than 𝜏1∗(ℎ). That estimate is
𝜏1∗(ℎ) ≈ (𝑤1 − 𝑤1∗)/ℎ.
This is enough for a traditional adaptive Runge-Kutta method. That is, from this estimate, we require that
𝑅 ≔ |𝑤1 − 𝑤1∗|/ℎ ≤ 𝜀.   (1.4)
If this condition is not met, we reduce the step-size ℎ to a smaller one, for instance ℎ/2, and recalculate all approximations at this step with the new step-size. Conversely, if the condition is met, we accept the approximations at the step 𝑥0 + ℎ and move to the next step, 𝑥0 + 2ℎ. This pattern of adjustment is awkward, as it ignores the estimate (1.3) of the local truncation error.
To make an important modification to the algorithm of a traditional adaptive Runge-Kutta method, we use (1.3) again to adapt the initial step-size ℎ to a new one. Concretely, we introduce a scale 𝑞 for the step-size ℎ, determined from the calculated estimate of the local truncation error. To see this, first let 𝐾 be a constant such that 𝜏1∗(ℎ) ≈ 𝐾ℎ^𝑝. Adjusting the step-size to 𝑞ℎ produces a local truncation error from (1.1), which is
𝜏1∗(𝑞ℎ) ≈ 𝐾(𝑞ℎ)^𝑝 ≈ 𝑞^𝑝 𝜏1∗(ℎ) ≈ (𝑞^𝑝/ℎ)(𝑤1 − 𝑤1∗).
To control the error, we require that |𝜏1∗(𝑞ℎ)| ≤ 𝜀, i.e. 𝑞^𝑝|𝑤1 − 𝑤1∗|/ℎ ≤ 𝜀. So,
𝑞 ≤ (𝜀ℎ/|𝑤1 − 𝑤1∗|)^{1/𝑝}.
From this, the initial step-size ℎ at the first step is used to find the first values of 𝑤1 and 𝑤1∗, which leads to the determination of 𝑞 for that step; the calculations are then repeated. This procedure requires twice the number of function evaluations per step compared with no error control. Here, we choose 𝑞 in a flexible way which makes the increased function-evaluation cost worthwhile. The scale 𝑞 is chosen conservatively, satisfying the above bound. Concretely, we take
𝑞 = (𝜀ℎ/(2|𝑤1 − 𝑤1∗|))^{1/𝑝}.
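As a sketch (the name `step_scale` is ours), the scale can be computed directly from the two approximations:

```python
def step_scale(eps, h, w1, w1_star, p):
    """Scale q for the step-size, from the local-error estimate R = |w1 - w1*|/h.

    Chosen conservatively as q = (eps*h / (2*|w1 - w1*|))**(1/p), so that the
    error of the rescaled step q*h stays safely below the tolerance eps.
    """
    diff = abs(w1 - w1_star)
    if diff == 0.0:   # error estimate vanishes: allow the maximum growth
        return 4.0
    return (eps * h / (2.0 * diff)) ** (1.0 / p)
```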
In fact, given the penalty of re-evaluating the function at each step point when a step is repeated, this way of choosing 𝑞 is meaningful. Moreover, we always require that 𝑞 ≥ 0.1. This modification saves a great deal of time that would otherwise be spent on very small intervals. Besides, we eliminate large variations by requiring that 𝑞 ≤ 4 (see the pseudocode). This upper bound for 𝑞 helps us to avoid skipping over sensitive intervals. The scale 𝑞 is used for the following purposes:
If 𝑅 > 𝜀, we reject ℎ and adjust the step-size to 0.1ℎ (if 𝑞 ≤ 0.1), to 4ℎ (if 𝑞 ≥ 4), or to 𝑞ℎ otherwise. Then we recalculate all approximations at this step.
If 𝑅 ≤ 𝜀, we keep the computed values for this step, but change the step-size to 0.1ℎ (if 𝑞 ≤ 0.1), to 4ℎ (if 𝑞 ≥ 4), or to 𝑞ℎ otherwise, for the next step, to approximate the solution at 𝑥0 + 2ℎ.
In the latter case, when 𝑅 ≤ 𝜀, we accept 𝑤1 as the approximation of 𝑦(𝑥0 + ℎ).
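Both branches apply the same clamped rescaling, which can be sketched as follows (a minimal sketch; the name `adjust_step` is ours):

```python
def adjust_step(h, q, h_max=None):
    """Apply the clamps 0.1 <= q <= 4 before rescaling the step-size.

    Used both after a rejected step (recompute with the new h) and after an
    accepted step (use the new h for the next step).
    """
    if q <= 0.1:
        h = 0.1 * h
    elif q >= 4.0:
        h = 4.0 * h
    else:
        h = q * h
    if h_max is not None:
        h = min(h, h_max)   # never exceed the user-supplied maximum step-size
    return h
```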
RUNGE-KUTTA-FEHLBERG METHOD
This section presents the abovementioned modification for a specific method, which combines a Runge-Kutta method of order 4 and one of order 5. This method was proposed by Erwin Fehlberg [2]. The advantage of this method compared with other adaptive Runge-Kutta methods can be explained by the relationship between the number of stages 𝑠 and the order 𝑝 of a method. J. Butcher proved in [1] that if 𝑝 > 5, the gap between 𝑠 and 𝑝 grows larger and larger. Therefore, orders around 𝑝 = 4 are expected to maximize the accuracy of the approximation while minimizing the number of calculations.
For the Runge-Kutta-Fehlberg method, we take 𝑠 = 5 for the method of order 𝑝 = 4, and 𝑠 = 6 for the other one, of order 𝑝 + 1 = 5.
To derive the parameters in (1.1) and (1.2), we first expand the solution 𝑦(𝑥) in a Taylor series at 𝑥0 up to order 𝑝 to get
𝑦(𝑥0 + ℎ) = 𝑦0 + ∑_{𝑘=1}^{𝑝} (ℎ^𝑘/𝑘!) (𝑑^𝑘𝑦/𝑑𝑥^𝑘)(𝑥0).   (1.5)
Using the notation of 'rooted trees' introduced in [1], we can rewrite (1.5) in the form
𝑦(𝑥0 + ℎ) = 𝑦0 + ∑_{𝑡∈𝑇𝑝} (ℎ^{𝑟(𝑡)}/(𝜎(𝑡)𝛾(𝑡))) 𝐹(𝑡)(𝑦0),   (1.6)
where 𝑇𝑝 is the set of all rooted trees 𝑡 of order 𝑟(𝑡) ≤ 𝑝.
Next, we expand the 𝑠 stage derivatives 𝐹1, 𝐹2, …, 𝐹𝑠 in Taylor series and sum them up in (1.1), and in (1.2) as well, to get the approximation of 𝑦(𝑥0 + ℎ),
𝑦(𝑥0 + ℎ) ≈ 𝑦0 + ∑_{𝑡∈𝑇𝑝} (ℎ^{𝑟(𝑡)}/𝜎(𝑡)) Φ(𝑡) 𝐹(𝑡)(𝑦0),   (1.7)
where Φ(𝑡) is called the elementary weight of the rooted tree 𝑡. Matching terms of the same order in ℎ in (1.6) and (1.7) yields the order conditions [1], which state that
Φ(𝑡) = 1/𝛾(𝑡), ∀𝑡 ∈ 𝑇𝑝.   (1.8)
From (1.8), with 𝑐1 = 0, we find the parameters of the Runge-Kutta-Fehlberg method using Table 2.
Table 2. Values of the functions Φ(𝑡) and 𝛾(𝑡) on rooted trees of order 𝑟(𝑡) ≤ 5, that is 𝑡 ∈ 𝑇5.

Order 𝑟(𝑡) | Φ(𝑡) | 𝛾(𝑡)
1 | ∑_{1≤𝑖≤6} 𝑏̂𝑖 | 1
2 | ∑_{2≤𝑖≤6} 𝑏̂𝑖 𝑐𝑖 | 2
3 | ∑_{2≤𝑖≤6} 𝑏̂𝑖 𝑐𝑖² | 3
3 | ∑_{2≤𝑗<𝑖, 2≤𝑖≤6} 𝑏̂𝑖 𝑎𝑖𝑗 𝑐𝑗 | 6
4 | ∑_{2≤𝑖≤6} 𝑏̂𝑖 𝑐𝑖³ | 4
4 | ∑_{2≤𝑗<𝑖, 2≤𝑖≤6} 𝑏̂𝑖 𝑐𝑖 𝑎𝑖𝑗 𝑐𝑗 | 8
4 | ∑_{2≤𝑗<𝑖, 2≤𝑖≤6} 𝑏̂𝑖 𝑎𝑖𝑗 𝑐𝑗² | 12
4 | ∑_{2≤𝑘<𝑗<𝑖, 2≤𝑖≤6} 𝑏̂𝑖 𝑎𝑖𝑗 𝑎𝑗𝑘 𝑐𝑘 | 24
5 | ∑_{2≤𝑖≤6} 𝑏̂𝑖 𝑐𝑖⁴ | 5
5 | ∑_{2≤𝑗<𝑖, 2≤𝑖≤6} 𝑏̂𝑖 𝑐𝑖² 𝑎𝑖𝑗 𝑐𝑗 | 10
5 | ∑_{2≤𝑗<𝑖, 2≤𝑖≤6} 𝑏̂𝑖 𝑐𝑖 𝑎𝑖𝑗 𝑐𝑗² | 15
5 | ∑_{2≤𝑗<𝑖, 2≤𝑖≤6} 𝑏̂𝑖 𝑎𝑖𝑗 𝑐𝑗³ | 20
5 | ∑_{2≤𝑖≤6} 𝑏̂𝑖 (∑_{2≤𝑗<𝑖} 𝑎𝑖𝑗 𝑐𝑗)² | 20
5 | ∑_{2≤𝑘<𝑗<𝑖, 2≤𝑖≤6} 𝑏̂𝑖 𝑐𝑖 𝑎𝑖𝑗 𝑎𝑗𝑘 𝑐𝑘 | 30
5 | ∑_{2≤𝑘<𝑗<𝑖, 2≤𝑖≤6} 𝑏̂𝑖 𝑎𝑖𝑗 𝑐𝑗 𝑎𝑗𝑘 𝑐𝑘 | 40
5 | ∑_{2≤𝑘<𝑗<𝑖, 2≤𝑖≤6} 𝑏̂𝑖 𝑎𝑖𝑗 𝑎𝑗𝑘 𝑐𝑘² | 60
5 | ∑_{2≤𝑙<𝑘<𝑗<𝑖, 2≤𝑖≤6} 𝑏̂𝑖 𝑎𝑖𝑗 𝑎𝑗𝑘 𝑎𝑘𝑙 𝑐𝑙 | 120
We find the parameters 𝑏𝑖, 𝑏𝑖∗, 𝑐𝑖, and 𝑎𝑖𝑗 (𝑖, 𝑗 = 1, 2, …, 6, 𝑖 > 𝑗) from the 17 equations obtained from Table 2 with the use of (1.8), where the 𝑏̂𝑖's are replaced by the 𝑏𝑖's and the 𝑏𝑖∗'s, respectively. One solution of this system is given in Table 3.
Note that, from Table 3, 𝑏6∗ = 0, so the method (1.1) has 𝑠 = 5 stages and order 𝑝 = 4 (the order conditions Φ(𝑡) = 1/𝛾(𝑡) fail for the rooted trees 𝑡 of order 𝑟(𝑡) = 5).
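As a quick sanity check, the fifth-order weights 𝑏𝑖 of Table 3 can be tested in exact rational arithmetic against the order conditions (1.8) for the "bushy" trees, for which Φ(𝑡) = ∑ 𝑏𝑖𝑐𝑖^{𝑟−1} and 𝛾(𝑡) = 𝑟:

```python
from fractions import Fraction as Fr

# Fifth-order weights b_i and nodes c_i of the Runge-Kutta-Fehlberg tableau (Table 3)
b = [Fr(16, 135), Fr(0), Fr(6656, 12825), Fr(28561, 56430), Fr(-9, 50), Fr(2, 55)]
c = [Fr(0), Fr(1, 4), Fr(3, 8), Fr(12, 13), Fr(1), Fr(1, 2)]

# Order conditions Phi(t) = 1/gamma(t) for the bushy trees of order r = 1..5:
# sum_i b_i c_i^(r-1) must equal 1/r exactly.
for r in range(1, 6):
    phi = sum(bi * ci ** (r - 1) for bi, ci in zip(b, c))
    assert phi == Fr(1, r), (r, phi)
```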
For the Runge-Kutta-Fehlberg method, the aforementioned modification is performed by simply taking 𝑞 conservatively,
𝑞 = (𝜀ℎ/(2|𝑤1 − 𝑤1∗|))^{1/4} ≈ 0.84 (𝜀ℎ/|𝑤1 − 𝑤1∗|)^{1/4}.
The estimate for the local truncation error 𝜏1∗(ℎ) is denoted by
𝑅 = |𝑤1 − 𝑤1∗|/ℎ = (1/ℎ)|𝐹1/360 − 128𝐹3/4275 − 2197𝐹4/75240 + 𝐹5/50 + 2𝐹6/55|.
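The coefficients of this estimate are just the differences 𝑏𝑖 − 𝑏𝑖∗ of the two weight rows in Table 3, which can be confirmed in exact arithmetic:

```python
from fractions import Fraction as Fr

b      = [Fr(16, 135), Fr(0), Fr(6656, 12825), Fr(28561, 56430), Fr(-9, 50), Fr(2, 55)]
b_star = [Fr(25, 216), Fr(0), Fr(1408, 2565),  Fr(2197, 4104),   Fr(-1, 5),  Fr(0)]

# w1 - w1* = h * sum_i (b_i - b_i*) F_i, so these differences give the R formula
diffs = [bi - bsi for bi, bsi in zip(b, b_star)]
assert diffs == [Fr(1, 360), Fr(0), Fr(-128, 4275),
                 Fr(-2197, 75240), Fr(1, 50), Fr(2, 55)]
```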
Table 3. Parameters of the Runge-Kutta-Fehlberg method.

0
1/4    | 1/4
3/8    | 3/32      | 9/32
12/13  | 1932/2197 | −7200/2197 | 7296/2197
1      | 439/216   | −8         | 3680/513   | −845/4104
1/2    | −8/27     | 2          | −3544/2565 | 1859/4104   | −11/40
𝑏𝑖:    | 16/135    | 0          | 6656/12825 | 28561/56430 | −9/50  | 2/55
𝑏𝑖∗:   | 25/216    | 0          | 1408/2565  | 2197/4104   | −1/5   | 0

IMPLEMENTATION OF THE RUNGE-KUTTA-FEHLBERG METHOD
The following pseudocode describes the algorithm of the Runge-Kutta-Fehlberg method for approximating the solution of the IVP
𝑦′ = 𝑓(𝑥, 𝑦), 𝑎 ≤ 𝑥 ≤ 𝑏, 𝑦(𝑎) = 𝛼,
such that the local truncation error is bounded by a given tolerance 𝜀 > 0.
INPUT function 𝑓, endpoints 𝑎 and 𝑏, initial
value 𝛼, tolerance 𝜀, maximum step-size ℎ𝑚𝑎𝑥, minimum step-size ℎ𝑚𝑖𝑛
OUTPUT mesh point 𝑥, current step-size ℎ, approximation 𝑤 of 𝑦(𝑥), or message that ℎ𝑚𝑖𝑛 is exceeded (the procedure fails)
Step 1 (Initiate the procedure)
𝑥 ≔ 𝑎; 𝑤 ≔ 𝛼; ℎ ≔ ℎ𝑚𝑎𝑥; 𝐹𝐿𝐴𝐺 ≔ 1;
OUTPUT (𝑥, 𝑤)
Step 2 While (𝐹𝐿𝐴𝐺 = 1) do Steps 3–9.
Step 3 𝐹1 ≔ ℎ𝑓(𝑥, 𝑤);
𝐹2 ≔ ℎ𝑓(𝑥 + ℎ/4, 𝑤 + 𝐹1/4);
𝐹3 ≔ ℎ𝑓(𝑥 + 3ℎ/8, 𝑤 + 3𝐹1/32 + 9𝐹2/32);
𝐹4 ≔ ℎ𝑓(𝑥 + 12ℎ/13, 𝑤 + 1932𝐹1/2197 − 7200𝐹2/2197 + 7296𝐹3/2197);
𝐹5 ≔ ℎ𝑓(𝑥 + ℎ, 𝑤 + 439𝐹1/216 − 8𝐹2 + 3680𝐹3/513 − 845𝐹4/4104);
𝐹6 ≔ ℎ𝑓(𝑥 + ℎ/2, 𝑤 − 8𝐹1/27 + 2𝐹2 − 3544𝐹3/2565 + 1859𝐹4/4104 − 11𝐹5/40);
Step 4 𝑅 ≔ (1/ℎ)|𝐹1/360 − 128𝐹3/4275 − 2197𝐹4/75240 + 𝐹5/50 + 2𝐹6/55|;
Step 5 If 𝑅 < 𝜀 then do
𝑥 ≔ 𝑥 + ℎ; (Adopt the approximation.)
𝑤 ≔ 𝑤 + 25𝐹1/216 + 1408𝐹3/2565 + 2197𝐹4/4104 − 𝐹5/5;
OUTPUT (𝑥, 𝑤, ℎ); end do; End If
Step 6 𝑞 ≔ 0.84(𝜀/𝑅)1/4
Step 7 If 𝑞 ≤ 0.1 then ℎ ≔ 0.1ℎ
elseif 𝑞 ≥ 4 then ℎ ≔ 4ℎ
else ℎ ≔ 𝑞ℎ
End If
Step 8 If ℎ > ℎ𝑚𝑎𝑥 then ℎ ≔ ℎ𝑚𝑎𝑥;
End If
Step 9 If 𝑥 ≥ 𝑏 then 𝐹𝐿𝐴𝐺 ≔ 0
elseif 𝑥 + ℎ > 𝑏 then ℎ ≔ 𝑏 − 𝑥
elseif ℎ < ℎ𝑚𝑖𝑛 then 𝐹𝐿𝐴𝐺 ≔ 0;
OUTPUT(“minimum h exceeded”)
(Procedure fails!)
STOP
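The pseudocode above translates almost line by line into the following sketch (the function name `rkf45` and the return convention are ours):

```python
def rkf45(f, a, b, alpha, eps, h_max, h_min):
    """Adaptive Runge-Kutta-Fehlberg solver for y' = f(x, y), y(a) = alpha.

    Returns a list of (x, w, h) triples; raises RuntimeError if the step-size
    would have to fall below h_min to meet the tolerance eps.
    """
    x, w, h = a, alpha, h_max
    out = [(x, w, h)]
    while x < b:
        F1 = h * f(x, w)
        F2 = h * f(x + h / 4, w + F1 / 4)
        F3 = h * f(x + 3 * h / 8, w + 3 * F1 / 32 + 9 * F2 / 32)
        F4 = h * f(x + 12 * h / 13,
                   w + 1932 * F1 / 2197 - 7200 * F2 / 2197 + 7296 * F3 / 2197)
        F5 = h * f(x + h,
                   w + 439 * F1 / 216 - 8 * F2 + 3680 * F3 / 513 - 845 * F4 / 4104)
        F6 = h * f(x + h / 2,
                   w - 8 * F1 / 27 + 2 * F2 - 3544 * F3 / 2565
                     + 1859 * F4 / 4104 - 11 * F5 / 40)
        # Estimate R = |w1 - w1*| / h of the local truncation error (Step 4)
        R = abs(F1 / 360 - 128 * F3 / 4275 - 2197 * F4 / 75240
                + F5 / 50 + 2 * F6 / 55) / h
        if R < eps:   # accept the step and adopt the fourth-order value (Step 5)
            x += h
            w += 25 * F1 / 216 + 1408 * F3 / 2565 + 2197 * F4 / 4104 - F5 / 5
            out.append((x, w, h))
        # Steps 6-8: rescale with q = 0.84 (eps/R)^(1/4), clamped to [0.1, 4]
        q = 0.84 * (eps / R) ** 0.25 if R > 0 else 4.0
        h = max(0.1 * h, min(4.0 * h, q * h))
        h = min(h, h_max)
        # Step 9: trim the last step to hit b; otherwise enforce h_min
        if x + h > b:
            h = b - x
        elif h < h_min:
            raise RuntimeError("minimum h exceeded")
    return out
```

Note that, as in the pseudocode, a trimmed final step 𝑏 − 𝑥 is exempt from the ℎ𝑚𝑖𝑛 check.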
EXAMPLE
We illustrate the results obtained from a program based on the algorithm, which approximates the solution of the IVP
𝑦′ = 𝑦 − 𝑥² + 1, 0 ≤ 𝑥 ≤ 1.5, 𝑦(0) = −1.
We take the tolerance 𝜀 = 10⁻⁶, and the maximum and minimum step-sizes ℎ𝑚𝑎𝑥 = 0.25 and ℎ𝑚𝑖𝑛 = 0.01, respectively.
The approximations are presented in Table 4. It is easy to check that the exact solution is
𝑦(𝑥) = (𝑥 + 1)² − 2𝑒ˣ.
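The claimed exact solution can be verified by substitution into the IVP:

```python
import math

def f(x, y):            # right-hand side of the IVP
    return y - x**2 + 1

def y_exact(x):         # candidate solution y(x) = (x + 1)^2 - 2 e^x
    return (x + 1) ** 2 - 2 * math.exp(x)

def y_exact_prime(x):   # its derivative y'(x) = 2(x + 1) - 2 e^x
    return 2 * (x + 1) - 2 * math.exp(x)

assert y_exact(0.0) == -1.0                # initial condition y(0) = -1
for x in [0.0, 0.5, 1.0, 1.5]:
    # the residual y' - (y - x^2 + 1) vanishes identically
    assert abs(y_exact_prime(x) - f(x, y_exact(x))) < 1e-12
```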
So, we can determine the absolute error of the approximation at each mesh point 𝑥𝑖. Table 4 also shows the value of 𝑦(𝑥) (in column 2), the step-size used (in column 3), and the absolute error (in column 5) at each mesh point. Columns 4 and 6 contain the approximations obtained by the Runge-Kutta methods of order 4 and 5, respectively. The last column presents the estimate of the local truncation error 𝜏𝑖+1∗(ℎ) produced by the Runge-Kutta method of order 4 at the current mesh point.
COMPARISON WITH THE ALGORITHM OF A TRADITIONAL ADAPTIVE RUNGE-KUTTA METHOD
The modification made here is worthwhile in the sense of increased efficiency. It dramatically reduces the number of function evaluations, especially in the case of a very small tolerance, compared with a traditional algorithm. This point is easily illustrated with a traditional algorithm in which we adjust the step-size ℎ by halving it each time the condition (1.4) is not met. In this strategy, however, no increasing adjustment of the step-size is made when the condition (1.4) is met. Table 5 presents the comparison of the results obtained from such a traditional algorithm and from the modified algorithm. It shows that the flexibility in varying the step-size gains the upper hand in diminishing the number of intermediate stages. The IVP illustrated here is
𝑦′ = 𝑦 − 𝑥² + 1, 0 ≤ 𝑥 ≤ 1.5, 𝑦(0) = 0.5, with a tolerance 𝜀 > 0 and with given maximum and minimum step-sizes, ℎ𝑚𝑖𝑛 = 0.01.
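For this comparison, the traditional algorithm differs from the modified one only in its step-size update, which can be sketched as follows (the name `traditional_update` is ours):

```python
def traditional_update(h, R, eps):
    """Traditional step-size rule used for the comparison in Table 5.

    Halve h whenever the error test R <= eps fails; never enlarge h when
    it succeeds (no flexible scale q is used).
    """
    if R > eps:
        return h / 2, False   # reject: redo the step with half the step-size
    return h, True            # accept: keep the current step-size
```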
In Table 5, the first and second columns give the number of intermediate stages performed by the traditional algorithm and by the modified algorithm, respectively. The third column lists the corresponding tolerance 𝜀. Admittedly, the accuracy of the results from the modified algorithm is lower than that from the traditional one; however, it is still within the accepted limit. In fact, the accuracy requirement does not demand more exactness than is really needed, so the higher exactness of the traditional results brings no practical benefit. Moreover, if we compare the rate at which the number of intermediate stages increases against the rate at which the error decreases for the traditional algorithm and for the modified algorithm, we easily recognize that the price the traditional algorithm has to pay is too high.
SUMMARY
Adaptive Runge-Kutta methods have advantages in finding a highly accurate approximation of the solution of an IVP, and they are favored for solving non-stiff differential equations numerically. This paper presented an improvement of the algorithm for an adaptive Runge-Kutta method. The improvement is achieved by introducing a scale for the adjusted step-size in a flexible way, which makes the doubled function-evaluation cost of an adaptive Runge-Kutta method worthwhile. Nowadays, with the help of supercomputers, it is quite simple to translate this algorithm into one that is easy to execute by a high-performance computing process. This dramatically reduces the computation time and greatly increases the accuracy of the approximations.
REFERENCES
1. John C. Butcher (2008), Numerical Methods for Ordinary Differential Equations, 2nd Edition, John Wiley & Sons, Ltd.
2. Richard L. Burden, J. Douglas Faires (2010), Numerical Analysis, 9th Edition, Brooks/Cole.
3. John C. Butcher (1963), "Coefficients for the study of Runge-Kutta integration processes", Journal of the Australian Mathematical Society, 3(2), pp. 185-201.
4. Michiel Hazewinkel (2001), "Runge-Kutta method", https://www.encyclopediaofmath.org/ind, Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers.
Table 4. Approximations for the Example.

𝒙𝒊 | 𝒚(𝒙𝒊) | 𝒉𝒊 | 𝒘𝒊∗ by RK4 | Absolute error |𝒚(𝒙𝒊) − 𝒘𝒊∗| | 𝒘𝒊 by RK5 | Estimate 𝑹𝒊 of 𝝉𝒊+𝟏∗(𝒉)
0.25 | −1.005550834 | 0.25 | −1.005550873 | 0.000000039 | −1.005550725 | 5.97×10⁻⁷
0.407146 | −1.024987041 | 0.157145948 | −1.024987153 | 0.000000112 | −1.024986925 | 5.14×10⁻⁷
0.563048 | −1.068914234 | 0.155901799 | −1.06891447 | 0.000000236 | −1.068914115 | 8.1×10⁻⁷
0.701105 | −1.138199742 | 0.13805677 | −1.138200102 | 0.00000036 | −1.138199649 | 7.25×10⁻⁷
0.826775 | −1.23476275 | 0.125670756 | −1.234763232 | 0.000000482 | −1.234762696 | 6.59×10⁻⁷
0.943955 | −1.361290913 | 0.117179383 | −1.361291519 | 0.000000606 | −1.36129091 | 6.24×10⁻⁷
1.054692 | −1.520421635 | 0.110736981 | −1.520422373 | 0.000000738 | −1.520421698 | 6.042×10⁻⁷
1.160198 | −1.71467541 | 0.10550681 | −1.714676288 | 0.000000878 | −1.714675552 | 5.91×10⁻⁷
1.261289 | −1.94650959 | 0.101090617 | −1.946510613 | 0.000001023 | −1.946509819 | 5.81×10⁻⁷
1.358555 | −2.21835268 | 0.097265817 | −2.218353859 | 0.000001179 | −2.218353009 | 5.74×10⁻⁷
1.452434 | −2.532574686 | 0.093879137 | −2.532576029 | 0.000001343 | −2.532575126 | 5.68×10⁻⁷
1.5 | −2.71337814 | 0.047565983 | −2.713379551 | 0.000001411 | −2.713378645 | 4.32×10⁻⁸
Table 5. Comparison of the results from a traditional algorithm and the modified algorithm.

Number of intermediate stages by the traditional algorithm | Number of intermediate stages by the modified algorithm | Tolerance 𝜺 | Absolute error at the last step (𝑥 = 1.5) from the traditional procedure | Absolute error at the last step (𝑥 = 1.5) from the modified procedure
SUMMARY (Vietnamese abstract)
A MODIFIED ALGORITHM AND ITS IMPLEMENTATION FOR THE ADAPTIVE RUNGE-KUTTA METHOD
Tran Thi Hue *
University of Technology - Thai Nguyen University

Adaptive Runge-Kutta methods are widely used to find approximate solutions of initial value problems for differential equations because of their efficiency and relatively small computational workload. This article introduces the construction of the Runge-Kutta method in the general case and a modification of the method's algorithm. An important contribution of this article is a new, more flexible way of adjusting the step-size in order to increase computational efficiency. The latter part of the article introduces and emphasizes the above modifications for a specific method, the Runge-Kutta-Fehlberg method, as well as the implementation of the algorithm.

Keywords: initial value problem, Runge-Kutta method, Runge-Kutta-Fehlberg method, adaptive Runge-Kutta method, error control.

Received: 22/8/2018; Reviewed: 11/9/2018; Accepted: 12/10/2018