A two-dimensional sideways problem with random discrete data
Dang Duc Tronga,b, Tran Quoc Vietc,d, Vo Dang Khoae, Nguyen Thi Hong Nhunga,b,∗
aFaculty of Mathematics and Computer Sciences, University of Science, Ho Chi Minh City, Viet Nam
bVietnam National University Ho Chi Minh, Viet Nam
cInstitute of Fundamental and Applied Sciences, Duy Tan University, Ho Chi Minh city 700000, Viet Nam
dFaculty of Natural Science, Duy Tan University, Danang City 550000, Viet Nam
eHo Chi Minh City Medicine and Pharmacy University, Viet Nam
ARTICLE INFO
Keywords:
Heat equation
Heat distribution
Ill-posed problem
Regularization
Statistical inverse problems
Nonparametric regression
ABSTRACT

The heat distribution on the surface of a layer inside a heat-conducting body can be recovered from two interior temperature measurements. It is reasonable to take discrete measured data with random noise into account. In this paper, approximations of the interior measurements are constructed from the discrete data. First, a one-dimensional Sinc series is applied to represent the approximating function in the variable $x$, standing for the length of the object, and then trigonometric estimators in the time variable $t$ are obtained. We also construct an estimator for the heat distribution which converges in the sense of integrated mean squared error (MISE). Some numerical experiments are given as demonstrations of the ability of numerical implementation of our regularization.
1 Introduction
In heat conduction theory, the temperature history of a body is often determined from the temperature of the whole surface of the body. But, in a lot of applications, some part of the surface is inaccessible, e.g., when it is too hot or too cold. In such situations (see, e.g., [1]), instead of attaching a temperature sensor at the surface of the body, we can only measure the temperature at interior points or on the accessible part of the surface to get the heating history in the body. The problem of determining the surface temperature or heat flux histories from temperatures measured at interior points or on the accessible part of the surface is called the sideways problem.
In the literature, there are many types of interior measurements. For the one-dimensional case the heat body is modeled as the interval $\Omega = [0, L]$ and we can list here some kinds of interior measurements. In [2,3], the temperature is recovered from the temperature history $u(x_0, t)$ and the flux $u_x(x_0, t)$, where $x_0 \in [0, L]$ is the $x$-coordinate of an interior point of the heat body. In [4,5], the authors considered the model $\Omega = (0,\infty)$ and used the data $u(x_0, t)$ ($x_0 > 0$) together with the assumption $\lim_{x\to\infty} u(x,t) = 0$ to recover $u(x,t)$. For the higher-dimensional case, we also have some papers. In [6], the authors consider the body $\Omega = (0,1)\times(0,\infty)$ with the inaccessible surface being $\{x = 1\}$ and the interior data $u(0,y,t) = \phi(y,t)$, $u_x(0,y,t) = 0$. In [7], the authors considered $\Omega = \mathbb{R}\times(0,2)$ and recovered the function $u(x,t)$ from the measurements $u(x,1,t)$, $u(x,2,t)$.
In this paper, the body is modeled by the strip $\mathbb{R}\times(0,2)$, with the line $\mathbb{R}\times\{y=0\}$ representing the inaccessible surface of the body.
∗ Corresponding author at: Faculty of Mathematics and Computer Sciences, University of Science, Ho Chi Minh City, Viet Nam.
E-mail addresses: ddtrong@hcmus.edu.vn (D.D. Trong), tranquocviet5@duytan.edu.vn (T.Q. Viet), vdkhoa@ump.edu.vn (V.D. Khoa), nthnhung@hcmus.edu.vn (N.T.H. Nhung).
The temperature is measured at the two accessible lines $\mathbb{R}\times\{y=1\}$ and $\mathbb{R}\times\{y=2\}$. We consider the problem of determining the heat distribution
$$v(x,t) = u(x, y_0, t), \qquad 0 < y_0 < 1, \qquad (1)$$
where $u$ satisfies
$$\kappa\nabla^2 u - \frac{\partial u}{\partial t} = 0, \qquad (x,y,t) \in \mathbb{R}\times(0,2)\times(0,\infty), \qquad (2)$$
for some constant $\kappa > 0$, and is subject to the interior conditions
$$u(x,1,t) = f(x,t), \qquad (3)$$
$$u(x,2,t) = g(x,t), \qquad (4)$$
and the initial condition
$$u(x,y,0) = 0. \qquad (5)$$
In fact, some results dealing with deterministic data have been studied carefully so far. First of all, it is sensible to give an interpretation of the previous result in [7], where the authors studied the problem with attention paid to deterministic data. For uniqueness of the solution, one has to measure the temperature history at two interior lines $\mathbb{R}\times\{y=1\}$ and $\mathbb{R}\times\{y=2\}$, which enables us to identify uniquely the heating history inside the layer (see, e.g., [1]). Additionally, it is worth insisting that the problem is ill-posed if we consider it over the whole time interval $\mathbb{R}_+$ with respect to the $L^2$-norm. In particular, the authors regularized the function $u(x,0,t)$ directly from measured data of $u(x,1,t)$ and $u(x,2,t)$. To be more specific, they
changed the problem into an integral equation of convolution type and represented the solution by an expansion in a two-dimensional Sinc series.
In the present paper, we develop the ideas of paper [7]. Concentrating on the numerical aspect of the sideways problem, we focus on the problem of recovering the heat distribution from measured data along with the effect of random noise. In this circumstance, the problem is still ill-posed. Moreover, we are also concerned with random noise data, which is new compared with the previous viewpoint. Indeed, measurements often contain some errors. In practice, if we measure the functions $f(x,t)$, $g(x,t)$ at discrete points $\{x_m = mh,\ h > 0,\ -M \le m \le M\}$ and discrete points of time $0 < t_1 \le t_2 \le \dots \le t_n \le T$, then we obtain a set of measured values $\{(\tau_{j,m}, \eta_{j,m}) : -M \le m \le M,\ 1 \le j \le n\}$, where $\tau_{j,m} \approx f(mh, t_j)$ and $\eta_{j,m} \approx g(mh, t_j)$. The points $x_m$, for $m = -M,\dots,M$, are called (non-random) design points. The actual measurements are always observed with errors, i.e.,
$$\tau_{j,m} = f(mh, t_j) + \epsilon_{j,m}, \qquad (6)$$
$$\eta_{j,m} = g(mh, t_j) + \varepsilon_{j,m}. \qquad (7)$$
In the statistics literature, the unknown errors $\epsilon_{j,m}, \varepsilon_{j,m}$, $m = -M,\dots,M$, $j = 1,\dots,n$, are often assumed to be mutually independent. There are many reasons why these errors arise, such as the instrument or the environment. In general, when the effects of the instrument on the measurements are considered, the errors have a greatest possible bound. It can be noted that this model is more or less similar to the one for the deterministic case and can be considered in the deterministic setting. Otherwise, if the errors come from the environment, then their magnitude may not be uniformly bounded. Therefore, we will only examine a model in which the variance of the errors is uniformly bounded, and call it the bounded variance model, i.e., there are $\sigma_1, \sigma_2 > 0$ such that
$$\mathbb{E}\,\epsilon_{j,m} = \mathbb{E}\,\varepsilon_{j,m} = 0, \qquad \mathbb{E}\,\epsilon_{j,m}^2 \le \sigma_1^2, \qquad \mathbb{E}\,\varepsilon_{j,m}^2 \le \sigma_2^2 \qquad \text{for all } j, m. \qquad (8)$$
In this case, the errors can be non-identically distributed. The random data model was studied in some recent papers (see, e.g., [8–13]).
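For concreteness, the following short NumPy sketch generates synthetic observations following the discrete random model (6)–(8). The smooth surrogate function f_true and all numerical values are hypothetical choices made only for illustration; they are not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M, n, h, T = 50, 200, 0.2, 20.0
x_design = np.arange(-M, M + 1) * h                  # design points x_m = m*h, m = -M..M
t_obs = np.arange(1, n + 1) * T / n                  # observation times 0 < t_1 <= ... <= t_n = T
f_true = lambda x, t: t * np.exp(-x**2 - t)          # hypothetical smooth stand-in for f(x, t)
sigma1 = 0.05                                        # bound on the noise standard deviation, cf. (8)
# model (6): tau[j-1, m+M] = f(m*h, t_j) + noise with bounded variance
tau = f_true(x_design[None, :], t_obs[:, None]) + sigma1 * rng.standard_normal((n, 2 * M + 1))
```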
Different from the studies [8–12], in the present paper we consider the numerical problem on the unbounded domain $(x,y,t) \in \mathbb{R}\times(0,2)\times\mathbb{R}_+$ and use the Sinc expansion. The problem is that of finding the heat distribution $u = u(x,y,t)$ satisfying (2)–(5) from the discrete data in (6), (7). A similar one-dimensional problem has been considered recently in [13]. In the present paper, we combine the Sinc expansion with the fractional discrete Fourier transform (fDFT) to construct numerically a regularization for the problem. To our knowledge, this combination is new.
The remaining part of the present paper is divided into five sections. Section 2 is devoted to setting up some necessary definitions and transforming our problem into an integral form. In Section 3, we state the four main results of our paper. More explicitly, we first present a result which can be applied to define the computation parameters. Next, we state approximations of the functions $f(x,t)$, $g(x,t)$ by a combination of a truncated Sinc expansion and a truncated Fourier expansion with respect to the $x$-variable and the $t$-variable, respectively. Then, we propose an estimator for $u(x,y_0,t)$ which converges in the sense of integrated mean squared error (MISE). We also give a specific way to define the regularization parameters. In Section 4, some numerical experiments are given as demonstrations of the ability of numerical implementation of our regularization. Finally, in Section 5, we present the proofs of the main results.
2 Preliminary results
2.1 The Fourier transform and Fourier expansion
Before going to the main part of the paper, we set up some notations. For $k \in \mathbb{N}$, $\phi \in L^2(\mathbb{R}^k)$, the Fourier transform of $\phi$ is defined by
$$\phi^{\mathrm{ft}}(p) = \int_{\mathbb{R}^k} \phi(x)\, e^{-i p\cdot x}\, dx, \qquad (9)$$
where $i^2 = -1$, $p = (p_1,\dots,p_k)$, $x = (x_1,\dots,x_k)$, $p\cdot x = \sum_{j=1}^k p_j x_j$. Moreover, $\phi^{\mathrm{ft}} \in L^2(\mathbb{R}^k)$ and
$$\phi(x) = \frac{1}{(2\pi)^k}\int_{\mathbb{R}^k} \phi^{\mathrm{ft}}(p)\, e^{i p\cdot x}\, dp, \qquad x \in \mathbb{R}^k \text{ a.e.}$$
In addition, we have the Parseval equality
$$\|\phi\|^2_{L^2(\mathbb{R}^k)} = \frac{1}{(2\pi)^k}\big\|\phi^{\mathrm{ft}}\big\|^2_{L^2(\mathbb{R}^k)}.$$
For $\theta > 0$, we define
$$H^\theta(\mathbb{R}^k) = \Big\{\phi \in L^2(\mathbb{R}^k) : \|\phi\|^2_{H^\theta} = \int_{\mathbb{R}^k} (1+|p|^2)^\theta\, |\phi^{\mathrm{ft}}(p)|^2\, dp < \infty\Big\}.$$
For $\phi \in L^2(\mathbb{R}^k)$, we denote the Fourier transform of the function $\phi(x_1,\dots,x_k)$ with respect to the $j$th variable $x_j$ (for $j = 1,\dots,k$) by
$$\phi^{\mathrm{ft}}_j(x_1,\dots,p_j,\dots,x_k) = \int_{-\infty}^{\infty} \phi(x_1,\dots,x_j,\dots,x_k)\, e^{-i x_j p_j}\, dx_j, \qquad p_j \in \mathbb{R}.$$
Similarly, $\phi^{\mathrm{ft}}_{j,l}$ denotes the Fourier transform of $\phi^{\mathrm{ft}}_j$ with respect to $x_l$ for $l = 1,\dots,k$ and $l \neq j$. We define a class of functions used in the Sinc approximation of our paper.
Definition 2.1. Let $C > 0$, $q, r_0, T_0, \rho, \varrho > 0$. We denote by $V(q, C)$ the set of functions $\phi \in L^2(\mathbb{R}\times[0,\infty))$ such that
$$\iint_{\mathbb{R}\times[0,\infty)} (1+r^2)^q\, |\phi^{\mathrm{ft}}_1(r,t)|^2\, dr\, dt + \iint_{(\mathbb{R}\setminus[-r_0,r_0])\times[0,\infty)} (1+r^2)^q\, \Big|\frac{\partial \phi^{\mathrm{ft}}_1}{\partial r}(r,t)\Big|^2\, dr\, dt \le C.$$
We also denote by $V_{\mathrm{trunc}}(\rho, C)$ the set of functions $\phi \in L^2(\mathbb{R}\times[0,\infty))$ such that
$$\iint_{\mathbb{R}\times(T,\infty)} |\phi(x,t)|^2\, dx\, dt \le \frac{C}{T^\rho} \qquad \text{for all } T > T_0.$$
Finally, we define
$$W(\varrho) = \big\{\phi \in L^2(\mathbb{R}) : \operatorname{supp}\phi^{\mathrm{ft}} \subset [-\varrho,\varrho]\big\}.$$
As in [14] we define the Cardinal function
$$S(m,h)(mh) = 1, \qquad S(m,h)(x) = \frac{\sin[\pi(x-mh)/h]}{\pi(x-mh)/h}, \qquad m \in \mathbb{Z},\ h > 0,\ x \neq mh,$$
which has the following orthogonality property (see [14], Chapter 1, Section 1.10, pages 91–92).

Lemma 2.2. Let $h > 0$; then the family of functions $\{h^{-1/2} S(m,h)\}$ is a complete orthonormal basis of the space $W\!\left(\frac{\pi}{h}\right)$. We have
$$\int_{-\infty}^{\infty} S(m,h)(x)\, S(l,h)(x)\, dx = \begin{cases} h, & \text{if } l = m,\\ 0, & \text{otherwise}.\end{cases}$$
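As an illustration, the Cardinal function and the orthogonality relation of Lemma 2.2 can be checked numerically. The following short NumPy sketch, with an arbitrary spacing h, is not part of the paper's implementation.

```python
import numpy as np

def S(m, h, x):
    """Cardinal function S(m,h)(x) = sinc((x - m*h)/h); equals 1 at x = m*h."""
    # np.sinc(z) = sin(pi*z)/(pi*z) with np.sinc(0) = 1, matching S(m,h)(mh) = 1
    return np.sinc((x - m * h) / h)

# Numerical check of Lemma 2.2: the integral of S(m,h)*S(l,h) over R is h if l == m, else 0.
h = 0.5
x = np.linspace(-200.0, 200.0, 800001)        # wide grid as a stand-in for the real line
for m, l in [(0, 0), (3, 3), (0, 3), (-2, 5)]:
    val = np.trapz(S(m, h, x) * S(l, h, x), x)
    print(m, l, round(val, 4))                # ~0.5 when m == l, ~0 otherwise
```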
We also give here some definitions which will be used in Fourier expansions in $L^2(0,T)$ for $T > 0$. Recall that the system $\{\phi_p\}$ with
$$\phi_p(t) = \sin\Big(\frac{(p-\frac{1}{2})\pi t}{T}\Big), \qquad p = 1, 2, \dots,$$
is an orthogonal basis of $L^2[0,T]$, and denote $\langle\varphi,\psi\rangle_T = \int_0^T \varphi(t)\psi(t)\, dt$.
Definition 2.3. Denote
$$\mathcal{F} = \big\{\psi : \mathbb{R}\times[0,\infty) \to \mathbb{R} \mid \psi(x,0) = 0,\ \psi(x,\cdot) \in L^2(0,T) \text{ for every } T > 0,\ x \in \mathbb{R}\big\}.$$
For $\alpha, \beta > 0$, $h \in (0,1)$ and a function $\Lambda : [0,\infty)\times[0,1] \to (0,\infty)$, $\Lambda = \Lambda(T,h)$, we define the ellipsoid $C_{\alpha,\beta,\Lambda}$ as the set of $\psi \in \mathcal{F}$ such that
$$\sum_{p\ge 1} p^{2\alpha}\, |\langle\psi(0,\cdot),\phi_p\rangle_T|^2 + \sum_{|m|\ge 1}\Big[\sum_{p\ge 1} p^{2\alpha} |m|^{2\beta}\, |\langle\psi(mh,\cdot),\phi_p\rangle_T|^2\Big] \le \Lambda^2 \qquad \forall (T,h) \in [0,\infty)\times[0,1].$$
We will give an example of the ellipsoid.
Lemma 2.4. Let $C_{\mathrm{Fou}} > 0$, $0 < \alpha < 3/2$, $\beta'_1, \beta'_2 > 0$, $0 < \beta < (2\beta'-1)/2$ with $\beta' = \min\{(\beta'_1+\beta'_2)/2;\ \beta'_2\}$. Assume that $\psi \in \mathcal{F}$, $\psi = \psi(x,t)$, has a second derivative with respect to the variable $t$, and $\|\psi_t(x,\cdot)\|_{L^2(0,\infty)} \le C_{\mathrm{Fou}}(1+|x|)^{-\beta'_1}$, $\|\psi_{tt}(x,\cdot)\|_{L^2(0,\infty)} \le C_{\mathrm{Fou}}(1+|x|)^{-\beta'_2}$. Then $\psi \in C_{\alpha,\beta,\Lambda}$ with
$$\Lambda^2 = \Lambda^2(T,h) = \frac{C_\Lambda T^4}{h^{2\beta'}} \quad\text{and}\quad C_\Lambda = \frac{C_{\mathrm{Fou}}^2}{\pi^4}\Big(\frac{4-2\alpha}{3-2\alpha}\Big)\Big(1 + 2\Big(\frac{2\beta'-2\beta}{2\beta'-2\beta-1}\Big)\Big).$$
2.2 The Fourier transform of the solution
We shall find solutions of Eq. (2) by using the Fourier transform. Putting $u(x,y,t) = 0$ for $t < 0$, $x \in \mathbb{R}$, $y \in (0,2)$, and applying the Fourier transform with respect to $x$ and $t$, we obtain
$$-r^2 u^{\mathrm{ft}}_{1,3}(r,y,s) + \frac{\partial^2}{\partial y^2} u^{\mathrm{ft}}_{1,3}(r,y,s) = \frac{i s}{\kappa}\, u^{\mathrm{ft}}_{1,3}(r,y,s). \qquad (10)$$
The characteristic equation $\lambda^2 - r^2 - i s/\kappa = 0$ of the differential equation (10) has the two solutions $\lambda_{1,2} = \pm\lambda(r,s)$, where
$$\lambda = A_0 + i B_0, \qquad A_0 = \frac{1}{\sqrt{2}}\sqrt{\sqrt{r^4+s^2/\kappa^2}+r^2}, \qquad B_0 = \operatorname{sgn}(s)\frac{1}{\sqrt{2}}\sqrt{\sqrt{r^4+s^2/\kappa^2}-r^2} \qquad (11)$$
for $r, s \in \mathbb{R}$. Hence $u^{\mathrm{ft}}_{1,3}(r,y,s) = C_1 e^{\lambda y} + C_2 e^{-\lambda y}$, where $C_1, C_2$ do not depend on $y$. We shall find $C_1, C_2$. In view of (3) and (4), we obtain directly
$$C_1 e^{2\lambda} + C_2 e^{-2\lambda} = u^{\mathrm{ft}}_{1,3}(r,2,s) = \int_{\mathbb{R}} g^{\mathrm{ft}}_1(r,t)\, e^{-i st}\, dt =: G(r,s),$$
$$C_1 e^{\lambda} + C_2 e^{-\lambda} = u^{\mathrm{ft}}_{1,3}(r,1,s) = \int_{\mathbb{R}} f^{\mathrm{ft}}_1(r,t)\, e^{-i st}\, dt =: F(r,s),$$
which gives
$$C_1 = \frac{G e^{-\lambda} - F e^{-2\lambda}}{e^{\lambda}-e^{-\lambda}}, \qquad C_2 = \frac{F e^{2\lambda} - G e^{\lambda}}{e^{\lambda}-e^{-\lambda}}.$$
The Fourier transform of the solution of (1)–(5) with respect to the variables $(x,t)$ is
$$v^{\mathrm{ft}}(r,s) = \frac{G(r,s)e^{-\lambda} - F(r,s)e^{-2\lambda}}{e^{\lambda}-e^{-\lambda}}\, e^{\lambda y_0} + \frac{F(r,s)e^{2\lambda} - G(r,s)e^{\lambda}}{e^{\lambda}-e^{-\lambda}}\, e^{-\lambda y_0} = F(r,s)\, C_\lambda(r,s) - G(r,s)\, D_\lambda(r,s),$$
where the Fourier transform $v^{\mathrm{ft}}$ is defined by (9), and
$$C_\lambda(r,s) = \frac{\sinh\lambda(2-y_0)}{\sinh\lambda}, \qquad D_\lambda(r,s) = \frac{\sinh\lambda(1-y_0)}{\sinh\lambda}.$$
In fact,
$$\lim_{(r,s)\to(0,0)} C_\lambda(r,s) = 2-y_0, \qquad \lim_{(r,s)\to(0,0)} D_\lambda(r,s) = 1-y_0.$$
Let us recall that $v(x,t) = u(x,y_0,t)$; hence $v^{\mathrm{ft}}(r,s) = u^{\mathrm{ft}}_{1,3}(r,y_0,s)$. Consequently, we have the following formula for the exact solution:
$$v(x,t) = \frac{1}{4\pi^2}\iint_{\mathbb{R}^2} u^{\mathrm{ft}}_{1,3}(r,y_0,s)\, e^{i(rx+st)}\, dr\, ds = \frac{1}{4\pi^2}\iint_{\mathbb{R}^2}\Big(F(r,s)\frac{e^{\lambda(2-y_0)}-e^{-\lambda(2-y_0)}}{e^{\lambda}-e^{-\lambda}} - G(r,s)\frac{e^{\lambda(1-y_0)}-e^{-\lambda(1-y_0)}}{e^{\lambda}-e^{-\lambda}}\Big)\, e^{i(rx+st)}\, dr\, ds. \qquad (13)$$
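For later numerical use, the multipliers appearing in (13) are easy to evaluate. Below is a minimal NumPy sketch computing $\lambda(r,s)$ from (11) and the two kernels $\sinh\lambda(2-y_0)/\sinh\lambda$ and $\sinh\lambda(1-y_0)/\sinh\lambda$; the sample values of r, s, κ and y0 are illustrative only, and in the regularized method the kernels are evaluated only on the truncated frequency set $D_\epsilon$, where $|\lambda|$ stays moderate.

```python
import numpy as np

def lam(r, s, kappa):
    """lambda(r, s) = A0 + i*B0 of Eq. (11)."""
    root = np.sqrt(r**4 + (s / kappa) ** 2)
    A0 = np.sqrt((root + r**2) / 2.0)
    B0 = np.sign(s) * np.sqrt((root - r**2) / 2.0)
    return A0 + 1j * B0

def kernels(r, s, kappa, y0):
    """The two multipliers of Eq. (13)."""
    L = lam(r, s, kappa)
    C = np.sinh(L * (2.0 - y0)) / np.sinh(L)
    D = np.sinh(L * (1.0 - y0)) / np.sinh(L)
    return C, D

# |C| grows roughly like exp(A0*(1 - y0)), the source of the ill-posedness discussed in Section 2.3.
r = np.linspace(0.1, 20.0, 5)
C, D = kernels(r, np.zeros_like(r), 1.0, 0.5)
print(np.abs(C))   # rapidly increasing magnitudes
```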
2.3 The ill-posedness of the problem

As mentioned above, the solution of (1)–(5) can be written in the following form:
$$v^{\mathrm{ft}}(r,s) = u^{\mathrm{ft}}_{1,3}(r,y_0,s) = F(r,s)\frac{e^{\lambda(2-y_0)}-e^{-\lambda(2-y_0)}}{e^{\lambda}-e^{-\lambda}} - G(r,s)\frac{e^{\lambda(1-y_0)}-e^{-\lambda(1-y_0)}}{e^{\lambda}-e^{-\lambda}}.$$
Note that
$$\Big|\frac{e^{\lambda(2-y_0)}-e^{-\lambda(2-y_0)}}{e^{\lambda}-e^{-\lambda}}\Big|^2 = \frac{e^{2A_0(2-y_0)} + e^{-2A_0(2-y_0)} - 2\cos\big(2B_0(2-y_0)\big)}{e^{2A_0} + e^{-2A_0} - 2\cos(2B_0)},$$
where $A_0$ and $B_0$ are defined in (11). This quantity increases exponentially for $0 < y_0 < 1$ as $A_0 = \sqrt{\sqrt{r^4+s^2/\kappa^2}+r^2}\big/\sqrt{2} \to \infty$. Therefore, a small disturbance of the data $f(x,t)$ is amplified unboundedly by this factor, and hence the problem of recovering the temperature $v(x,t)$ from the measured data is severely ill-posed.
3 Main results of the paper

3.1 Assumptions

The previous subsection shows that the problem is ill-posed; hence a regularization is in order. We first state assumptions on $f, g$. From now on, we assume

(A1) $\exists\, q > 1/2,\ C_{\mathrm{sinc}} > 0$ s.t. $f, g \in V(q, C_{\mathrm{sinc}})$. (14)
(A2) $\exists\, E, \rho > 0$ s.t. $f, g \in V_{\mathrm{trunc}}(\rho, E)$. (15)
(A3) $\exists\, \alpha, \beta' > 1/2,\ \beta > 0,\ C_\Lambda, \tau > 0,\ h_0, T_0 > 0$ s.t. $f, g \in C_{\alpha,\beta,\Lambda}$ for all $\Lambda^2 = C_\Lambda T^{2\tau}/h^{2\beta'}$, $h \in (0,h_0)$, $T \in (T_0,\infty)$. (16)

Herein $V$ and $V_{\mathrm{trunc}}$ are defined in Definition 2.1, and $C_{\alpha,\beta,\Lambda}$ in Definition 2.3.

We denote by $\mathbb{I}_A$ the indicator of the set $A$, i.e., $\mathbb{I}_A(x) = 1$ for $x \in A$ and $\mathbb{I}_A(x) = 0$ for $x \notin A$.
3.2 Main results
Since there are so many parameters, it is better to find a way of specifying them.

Theorem 3.1. Let $Y > 2$, $0 < y < Y$, $q > 0$, $0 < \beta < 1/2$. Assume that
(i) $u(x,y,t)$ is defined on $\mathbb{R}\times[0,Y]\times[0,\infty)$ and satisfies (2), (5);
(ii) $(1+|x|+|t|)v_0,\ (1+|x|+|t|)v_Y \in L^2(\mathbb{R}^2)$, where $v_y(x,t) := u(x,y,t)$ for $t > 0$ and $v_y(x,t) := 0$ for $t \le 0$.
Then we can find $C, E, C_\Lambda > 0$ such that $v_y \in V(q,C)\cap V_{\mathrm{trunc}}(2,E)\cap C_{1,\beta,\Lambda}$ with $\Lambda^2 = C_\Lambda T^4/h^2$. Hence, with $u$ satisfying (i), (ii), we can choose
$$\alpha = \beta' = 1, \qquad \rho = 2, \qquad \tau = 2, \qquad \text{arbitrary } q > 0, \qquad \text{arbitrary } \beta \in (0,1/2).$$
We can also choose
$$C \ge \iint_{\mathbb{R}\times[0,\infty)} (1+r^2)^q\, |v^{\mathrm{ft}}_{y,1}(r,t)|^2\, dr\, dt + \iint_{(\mathbb{R}\setminus[-r_0,r_0])\times[0,\infty)} (1+r^2)^q\, \Big|\frac{\partial v^{\mathrm{ft}}_{y,1}}{\partial r}(r,t)\Big|^2\, dr\, dt,$$
$$E \ge \frac{1}{4\pi^2}\iint_{\mathbb{R}^2} \Big|\frac{\partial v^{\mathrm{ft}}_{y}}{\partial s}(r,s)\Big|^2\, dr\, ds,$$
$$C_{\mathrm{Fou}} \ge \sup_{x\in\mathbb{R}} (1+|x|)\big\{\|v_{y,t}(x,\cdot)\|_{L^2(0,\infty)} + \|v_{y,tt}(x,\cdot)\|_{L^2(0,\infty)}\big\},$$
and $C_\Lambda$ as in Lemma 2.4.
Remark. The condition $(1+|x|+|t|)w \in L^2(\mathbb{R}^2)$ is equivalent to $w^{\mathrm{ft}} \in H^1(\mathbb{R}^2)$, which is quite natural. In the following parts, we use $f(x,t) = v_1(x,t) = u(x,1,t)$ and $g(x,t) = v_2(x,t) = u(x,2,t)$. Hence, the constants $C, E, C_\Lambda$ corresponding to $f, g$ can be calculated from the discrete data.
From Eq. (13), we can divide the regularization into two steps. In the first step, we will find two estimators $\hat F$ and $\hat G$ for $F$ and $G$, respectively. From the data $\tau_{j,m}, \eta_{j,m}$ as in (6), (7), we are going to build $\tilde f$ and $\tilde g$, which are concrete estimators of the functions $f$ and $g$, respectively. Before recalling the Sinc series representation of $f$ as well as of $g$, we need some conditions on them. First, $f(\cdot,t)$ and $g(\cdot,t)$ have to be (at least) continuous in order for $f(mh,t)$ and $g(mh,t)$ to be defined for any $t > 0$. In the present paper, we choose
$$\hat F = \hat F_{N,M} = \hat f^{\,\mathrm{ft}}_{N,M}, \qquad \hat G = \hat G_{N,M} = \hat g^{\,\mathrm{ft}}_{N,M},$$
with $\hat f_{N,M}$ and $\hat g_{N,M}$ defined as follows.
Definition. Let the model (6) hold. According to Lemma 5.3, we define the estimators for the coefficients $c_p(mh)$ as follows:
$$\hat c_{p,m} = \frac{2}{n}\sum_{j=1}^{n-1}\tau_{j,m}\,\phi_p(t_j) + \frac{1}{n}\tau_{n,m}\,\phi_p(t_n), \qquad p = 1,\dots,n. \qquad (17)$$
We estimate the function $f(x,t)$ on $\mathbb{R}^2$ by
$$\hat f_{N,M}(x,t) = \mathbb{I}_{[0,T]}(t)\sum_{m=-M}^{M}\sum_{p=1}^{N}\hat c_{p,m}\,\phi_p(t)\,S(m,h)(x). \qquad (18)$$
Definition. Let the model (7) hold. According to Lemma 5.4, we define the estimators for the coefficients $d_p(mh)$ as follows:
$$\hat d_{p,m} = \frac{2}{n}\sum_{j=1}^{n-1}\eta_{j,m}\,\phi_p(t_j) + \frac{1}{n}\eta_{n,m}\,\phi_p(t_n), \qquad p = 1,\dots,n. \qquad (19)$$
We estimate the function $g(x,t)$ on $\mathbb{R}^2$ by
$$\hat g_{N,M}(x,t) = \mathbb{I}_{[0,T]}(t)\sum_{m=-M}^{M}\sum_{p=1}^{N}\hat d_{p,m}\,\phi_p(t)\,S(m,h)(x). \qquad (20)$$
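The estimators (17)–(20) can be evaluated directly. Below is a minimal NumPy sketch (not the authors' Fortran code) computing the coefficients (17) and the truncated expansion (18); the array layout, with rows indexed by the times $t_j$ and columns by the design indices $m = -M,\dots,M$, is an assumption made for illustration. The estimator (19)–(20) for $g$ is obtained by applying the same routines to the data $\eta_{j,m}$.

```python
import numpy as np

def coeffs(data, T):
    """Estimators (17)/(19): data has shape (n, 2M+1), with data[j-1, :] measured at t_j = j*T/n.
    Returns chat of shape (n, 2M+1), where chat[p-1, m_index] = c_hat_{p,m}."""
    n = data.shape[0]
    t = np.arange(1, n + 1) * T / n                        # t_1, ..., t_n
    p = np.arange(1, n + 1)
    phi = np.sin(np.outer(p - 0.5, t) * np.pi / T)         # phi[p-1, j-1] = phi_p(t_j)
    return (2.0 / n) * phi[:, :-1] @ data[:-1, :] + (1.0 / n) * np.outer(phi[:, -1], data[-1, :])

def f_hat(x, t, chat, h, T, N, M):
    """Truncated estimator (18) evaluated at a single point (x, t)."""
    if not (0.0 <= t <= T):
        return 0.0
    p = np.arange(1, N + 1)
    m = np.arange(-M, M + 1)
    phi_t = np.sin((p - 0.5) * np.pi * t / T)              # phi_p(t), p = 1..N
    sinc_x = np.sinc((x - m * h) / h)                      # S(m,h)(x), m = -M..M
    return float(sinc_x @ (chat[:N, :].T @ phi_t))
```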
Note that the positive numbers $h, T > 0$ and the positive integers $N, M$ are called regularization parameters. For $\sigma_1$ and $\sigma_2$ defined in (8), and $C_{\mathrm{sinc}}$ and $q$ in (14), put
$$\sigma = \max\{\sigma_1,\sigma_2\}, \qquad C_{\mathrm{sinc},q} = \sqrt{C_{\mathrm{sinc}}}\Big(\frac{1}{\sqrt{2\pi^{4q+1}}} + \frac{2}{\sqrt{(4q-1)\pi^{4q-1}}}\Big). \qquad (21)$$
Using the parameters of Assumptions (A1)–(A3) in (14)–(16), we denote
$$\varepsilon_0(h,T,M,N) = 2\Big(1+\frac{\alpha\, 2^{-2\alpha+2}}{2\alpha-1}\Big) hT N^{-2\alpha}\Lambda^2 + 2hT M^{-2\beta}\Lambda^2 + \frac{4MhTN\sigma^2}{n} + 4C^2_{\mathrm{sinc},q}\, h^{4q-2} + \frac{E}{T^\rho}$$
$$= \frac{A_\varepsilon T^{2\tau+1}}{N^{2\alpha} h^{2\beta'-1}} + \frac{B_\varepsilon T^{2\tau+1}}{M^{2\beta} h^{2\beta'-1}} + \frac{C_\varepsilon MhTN}{n} + D_\varepsilon h^{4q-2} + \frac{E}{T^\rho}, \qquad (22)$$
where
$$A_\varepsilon = 2C_\Lambda\Big(1+\frac{\alpha\, 2^{-2\alpha+2}}{2\alpha-1}\Big), \qquad B_\varepsilon = 2C_\Lambda, \qquad C_\varepsilon = 4\sigma^2, \qquad D_\varepsilon = 4C^2_{\mathrm{sinc},q}. \qquad (23)$$
Then one obtains an upper bound for the two main parts in the Fourier transform of the solution $v$, as presented in the following theorems.
Theorem 3.2. Let $f, g$ satisfy Assumptions (A1)–(A3) in (14)–(16). Then
$$\max\Big\{\mathbb{E}\big\|F - \hat F_{N,M}\big\|^2_{L^2(\mathbb{R}^2)},\ \mathbb{E}\big\|G - \hat G_{N,M}\big\|^2_{L^2(\mathbb{R}^2)}\Big\} \le 4\pi^2\,\varepsilon_0(h,T,M,N).$$
In the second step, for $\lambda$ defined in (11), we have to replace the terms $(e^{\lambda(2-y_0)}-e^{-\lambda(2-y_0)})/(e^{\lambda}-e^{-\lambda})$ and $(e^{\lambda(1-y_0)}-e^{-\lambda(1-y_0)})/(e^{\lambda}-e^{-\lambda})$ in (13) by stable terms. For $\epsilon > 0$, since the unstable terms tend to infinity as $r, s \to \infty$, we replace them by
$$\frac{e^{\lambda(2-y_0)}-e^{-\lambda(2-y_0)}}{e^{\lambda}-e^{-\lambda}}\,\mathbb{I}_{D_\epsilon}(r,s), \qquad \frac{e^{\lambda(1-y_0)}-e^{-\lambda(1-y_0)}}{e^{\lambda}-e^{-\lambda}}\,\mathbb{I}_{D_\epsilon}(r,s),$$
where
$$D_\epsilon = \{(r,s) \in \mathbb{R}^2 : |r| \le b_\epsilon,\ |s| \le \kappa b_\epsilon^2\} \qquad (24)$$
and $\lim_{\epsilon\to 0^+} b_\epsilon = \infty$. For technical reasons, we choose $b_\epsilon = \ln(4/\epsilon)\big/\sqrt{2(\sqrt{2}+1)}$. In fact, this choice of $b_\epsilon$ implies $e^{2A_0} \le 4\epsilon^{-1}$ for $(r,s) \in D_\epsilon$. Combining these ideas, we approximate the function $v$ by the function
$$\hat v_\epsilon(x,t) = \frac{1}{4\pi^2}\iint_{D_\epsilon}\Big(\hat F_{N,M}(r,s)\frac{e^{\lambda(2-y_0)}-e^{-\lambda(2-y_0)}}{e^{\lambda}-e^{-\lambda}} - \hat G_{N,M}(r,s)\frac{e^{\lambda(1-y_0)}-e^{-\lambda(1-y_0)}}{e^{\lambda}-e^{-\lambda}}\Big)\, e^{i(rx+st)}\, dr\, ds. \qquad (25)$$
In conclusion, we give the main result that leads to the convergence of our estimators under the associated conditions.

Theorem 3.3. Let Assumptions (A1)–(A3) in (14)–(16) hold, $\theta \ge 0$, $0 < \beta < \frac{2\beta'-1}{2}$, and let $v \in H^\theta(\mathbb{R}^2)$. Then we have
$$\mathbb{E}\iint_{\mathbb{R}^2} |v - \hat v_\epsilon|^2\, dr\, ds \le 8\,\varepsilon_0(h,T,M,N)\Big(\frac{4}{\epsilon}\Big)^{3-y_0} + \frac{1}{4\pi^2\min\{1,\kappa^\theta\}\, b_\epsilon^{2\theta}}\iint_{\mathbb{R}^2\setminus D_\epsilon} (r^2+s^2)^\theta\, |v^{\mathrm{ft}}(r,s)|^2\, dr\, ds,$$
where $\varepsilon_0$ is defined in (22).
In fact, to obtain the convergence of the estimators, there are infinitely many ways to define the regularization parameters $h, T, M, N$. In this theoretical part, we give one instance of regularization parameter selection. We first set up some necessary notations. Put
$$\mu = \Big(\frac{1}{2\alpha} + \frac{1}{2\beta} + 1 + \frac{1}{\rho}\Big(\frac{2\tau+1}{2\alpha} + \frac{2\tau+1}{2\beta} + 1\Big) + \frac{1}{4q-2}\Big(\frac{2\beta'-1}{2\alpha} + \frac{2\beta'-1}{2\beta} - 1\Big)\Big)^{-1} \qquad (26)$$
and
$$p_1 = 2\alpha\mu^{-1}, \qquad p_2 = 2\beta\mu^{-1}, \qquad p_3 = \mu^{-1}, \qquad (27)$$
$$p_4 = (4q-2)\,\mu^{-1}\Big(\frac{2\beta'-1}{2\alpha} + \frac{2\beta'-1}{2\beta} - 1\Big)^{-1}, \qquad (28)$$
$$p_5 = \rho\,\mu^{-1}\Big(\frac{2\tau+1}{2\alpha} + \frac{2\tau+1}{2\beta} + 1\Big)^{-1}. \qquad (29)$$
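As a quick sanity check on the exponents (26)–(29), the following sketch computes $\mu$ and $p_1,\dots,p_5$ for a given set of a priori parameters and verifies the balancing relation $\sum_i 1/p_i = 1$. The particular values (those of Theorem 3.1 together with $\beta = 3/8$ and the illustrative choice $q = 1$) are assumptions made for the example only.

```python
def mu_and_p(alpha, beta, beta_p, tau, rho, q):
    """Compute mu of (26) and p_1..p_5 of (27)-(29); sum(1/p_i) should equal 1."""
    g_rho = (2 * tau + 1) / (2 * alpha) + (2 * tau + 1) / (2 * beta) + 1
    g_q = (2 * beta_p - 1) / (2 * alpha) + (2 * beta_p - 1) / (2 * beta) - 1
    mu = 1.0 / (1 / (2 * alpha) + 1 / (2 * beta) + 1 + g_rho / rho + g_q / (4 * q - 2))
    p = [2 * alpha / mu, 2 * beta / mu, 1 / mu, (4 * q - 2) / (mu * g_q), rho / (mu * g_rho)]
    return mu, p

mu, p = mu_and_p(alpha=1.0, beta=0.375, beta_p=1.0, tau=2.0, rho=2.0, q=1.0)
print(mu, sum(1.0 / pi for pi in p))   # the second number is 1.0 up to rounding
```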
Theorem 3.4. Let Assumptions (A1)–(A3) in (14)–(16) hold, $0 < \beta < \frac{2\beta'-1}{2}$, and let
$$(h_n, T_n, M_n, N_n) = \operatorname{argmin}\ \varepsilon_0(h,T,M,N), \qquad h, T, M, N > 0.$$
Then
$$\varepsilon_0(h_n,T_n,M_n,N_n) = (p_1 A_\varepsilon)^{\frac{1}{p_1}} (p_2 B_\varepsilon)^{\frac{1}{p_2}} (p_3 C_\varepsilon)^{\frac{1}{p_3}} (p_4 D_\varepsilon)^{\frac{1}{p_4}} (p_5 E)^{\frac{1}{p_5}}\Big(\frac{1}{n}\Big)^{\mu},$$
where $A_\varepsilon, B_\varepsilon, C_\varepsilon, D_\varepsilon, E$ and $p_i$, $i = 1,\dots,5$, are defined as in (23), (16), (21) and (26)–(29), respectively. Moreover, we have
$$T_n = \Big(\frac{p_5 E}{p_4 D_\varepsilon}\Big)^{\frac{1}{\rho}} h_n^{-\frac{4q-2}{\rho}},$$
$$N_n = \Big(\frac{p_1 A_\varepsilon}{p_4 D_\varepsilon}\Big)^{\frac{1}{2\alpha}}\Big(\frac{p_5 E}{p_4 D_\varepsilon}\Big)^{\frac{2\tau+1}{2\alpha\rho}} h_n^{-\frac{(2\beta'-1)\rho+(4q-2)(\rho+2\tau+1)}{2\alpha\rho}}, \qquad M_n = \Big(\frac{p_2 B_\varepsilon}{p_4 D_\varepsilon}\Big)^{\frac{1}{2\beta}}\Big(\frac{p_5 E}{p_4 D_\varepsilon}\Big)^{\frac{2\tau+1}{2\beta\rho}} h_n^{-\frac{(2\beta'-1)\rho+(4q-2)(\rho+2\tau+1)}{2\beta\rho}},$$
and
$$h_n = n^{-\frac{1}{\vartheta}}\Big(\frac{p_3 C_\varepsilon}{p_4 D_\varepsilon}\Big)^{\frac{1}{\vartheta}}\Big(\frac{p_5 E}{p_4 D_\varepsilon}\Big)^{\frac{1}{\rho\vartheta}+\frac{2\tau+1}{2\alpha\rho\vartheta}+\frac{2\tau+1}{2\beta\rho\vartheta}}\Big(\frac{p_1 A_\varepsilon}{p_4 D_\varepsilon}\Big)^{\frac{1}{2\alpha\vartheta}}\Big(\frac{p_2 B_\varepsilon}{p_4 D_\varepsilon}\Big)^{\frac{1}{2\beta\vartheta}},$$
where $\vartheta = 4q-3+\dfrac{4q-2}{\rho}+\dfrac{(2\beta'-1)\rho+(4q-2)(\rho+2\tau+1)}{2\rho}\Big(\dfrac{1}{\alpha}+\dfrac{1}{\beta}\Big)$.
In addition, assume that there are $\theta, K_{v,\theta} > 0$ satisfying $v \in H^\theta(\mathbb{R}^2)$ and $\|v\|_{H^\theta} \le K_{v,\theta}$. Let
$$\epsilon_n = \operatorname*{argmin}_{0<\epsilon<1}\Big\{32\pi^2\,\varepsilon_0(h_n,T_n,M_n,N_n)\Big(\frac{4}{\epsilon}\Big)^{3-y_0} + \frac{K^2_{v,\theta}}{\max\{1,\kappa^\theta\}\, b_\epsilon^{2\theta}}\Big\}.$$
Then, denoting $\hat v_n = \hat v_{\epsilon_n}$, we can find a constant $C' > 0$ independent of $n$ such that
$$\mathbb{E}\,\|v - \hat v_n\|^2_{L^2(\mathbb{R}^2)} \le \frac{C'}{\ln^{2\theta}(n)}.$$
4 Numerical results

We first present the algorithm for the problem (1)–(7).

• Step 1: For $n \in \mathbb{N}$, choose the parameters $h_n, T_n, M_n, N_n, \epsilon_n$. Put $b_n = \ln(4/\epsilon_n)\big/\sqrt{2(\sqrt{2}+1)}$ and $a_n = \kappa b_n^2$.
• Step 2: Compute $\hat f_{N_n,M_n}(x,t)$ and $\hat g_{N_n,M_n}(x,t)$ as in (18) and (20).
• Step 3: Calculate $\hat v_n = \hat v_{\epsilon_n}$ as in (25).

Next, we explain the numerical methods for the steps above.
4.1 Numerical implementation

Let us fix the values of $n$, $T$, $h$, $M$ and $N$.

Step 2: For $M, N \in \mathbb{N}$ such that $0 < N \le n$,
$$\hat f_{N,M}(x,t) = \mathbb{I}_{[0,T]}(t)\sum_{m=-M}^{M}\sum_{p=1}^{N}\hat c_{p,m}\,\phi_p(t)\,S(m,h)(x), \qquad (30)$$
$$\hat g_{N,M}(x,t) = \mathbb{I}_{[0,T]}(t)\sum_{m=-M}^{M}\sum_{p=1}^{N}\hat d_{p,m}\,\phi_p(t)\,S(m,h)(x),$$
where
$$\hat c_{p,m} = \frac{2}{n}\sum_{j=1}^{n-1}\tau_{j,m}\sin\Big(\frac{(p-\frac12)\pi j}{n}\Big) + \frac{(-1)^{p+1}}{n}\tau_{n,m}, \qquad (31)$$
$$\hat d_{p,m} = \frac{2}{n}\sum_{j=1}^{n-1}\eta_{j,m}\sin\Big(\frac{(p-\frac12)\pi j}{n}\Big) + \frac{(-1)^{p+1}}{n}\eta_{n,m}, \qquad (32)$$
for any $p = 1,\dots,n$ and $m = -M,\dots,M$. Herein $\tau_{j,m}$ and $\eta_{j,m}$ are given in (6) and (7). Thus, we obtain
$$\hat f^{\,\mathrm{ft}}_{N,M}(r,s) = \sum_{m=-M}^{M}\sum_{p=1}^{N}\hat c_{p,m}\,\big(\mathbb{I}_{[0,T]}\phi_p\big)^{\mathrm{ft}}(s)\,\big(S(m,h)\big)^{\mathrm{ft}}(r), \qquad (33)$$
$$\hat g^{\,\mathrm{ft}}_{N,M}(r,s) = \sum_{m=-M}^{M}\sum_{p=1}^{N}\hat d_{p,m}\,\big(\mathbb{I}_{[0,T]}\phi_p\big)^{\mathrm{ft}}(s)\,\big(S(m,h)\big)^{\mathrm{ft}}(r), \qquad (34)$$
where $\big(\mathbb{I}_{[0,T]}\phi_p\big)^{\mathrm{ft}}(s)$ can be calculated directly from the definition (9). Herein, since
$$\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-i mhr}\,\mathbb{I}_{(-\pi/h,\pi/h)}(r)\, e^{i rx}\, dr = h^{-1} S(m,h)(x),$$
we can use the inverse Fourier transform to deduce that
$$\big(S(m,h)\big)^{\mathrm{ft}}(r) = h\, e^{-i mhr}\,\mathbb{I}_{(-\pi/h,\pi/h)}(r), \qquad r \in \mathbb{R}. \qquad (35)$$
Computing (33) and (34) directly for many mesh points is very time consuming. Therefore, we need an efficient method to perform these tasks fast; we shall explain it below.

Step 3: From (25), we aim to compute numerically the regularized solution
$$v_\epsilon(x,t) = \frac{1}{4\pi^2}\iint_{D_\epsilon}\Big(\hat f^{\,\mathrm{ft}}_{N,M}(r,s)\frac{e^{\lambda(2-y_0)}-e^{-\lambda(2-y_0)}}{e^{\lambda}-e^{-\lambda}} - \hat g^{\,\mathrm{ft}}_{N,M}(r,s)\frac{e^{\lambda(1-y_0)}-e^{-\lambda(1-y_0)}}{e^{\lambda}-e^{-\lambda}}\Big)\, e^{i(rx+st)}\, dr\, ds, \qquad (36)$$
where $y_0 \in (0,1)$, $\lambda = \lambda(r,s)$ is defined in (11), $D_\epsilon$ is defined in (24) with $\epsilon = \epsilon_n$ chosen in Step 1, and $\hat f^{\,\mathrm{ft}}_{N,M}$ and $\hat g^{\,\mathrm{ft}}_{N,M}$ are given in (33) and (34), respectively.
We now approximate $v_\epsilon$ from (36) on a domain of interest, e.g. $\Omega := [-a,a]\times[0,b]$, where $a, b > 0$ are chosen on demand. Let us abbreviate the integral (36) as
$$v_\epsilon(x,t) := \int_{-A}^{A}\int_{-B}^{B}\psi(r,s)\, e^{i(rx+st)}\, dr\, ds, \qquad (x,t) \in \Omega,$$
where $\psi$ stands for the integrand in (36), and $A$ and $B$ stand for $b_\epsilon$ and $\kappa b_\epsilon^2$, respectively. Next we approximate $v_\epsilon$ at given mesh points $(x_j, t_k)$ in $\Omega$.
Based on the idea in [15], we define the fractional discrete Fourier transform (fDFT)
$$\{\varphi_p\}_{p=1,\dots,J} \mapsto \mathbf{F}^{(\alpha)}_J\big(\{\varphi_p\}_{p=1,\dots,J}\big)_j := \sum_{p=1}^{J}\varphi_p\, e^{2\alpha i(j-1)(p-1)}, \qquad j = 1,\dots,J, \qquad (37)$$
for some parameter $\alpha$, where $\{\varphi_p\}_{p=1,\dots,J}$ is an array of complex numbers.
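For reference, the fDFT (37) can be evaluated directly as follows. This O(J²) sketch only pins down the definition; the paper evaluates (37) fast by the FFT-based method of [15]. As a consistency check, for $\alpha = -\pi/J$ the transform reduces to the standard DFT.

```python
import numpy as np

def fdft(phi, alpha):
    """Fractional DFT of Eq. (37): sum_p phi_p * exp(2*alpha*1j*(j-1)*(p-1)), j = 1..J.
    Direct O(J^2) evaluation, used only for illustration."""
    phi = np.asarray(phi, dtype=complex)
    J = phi.size
    jj = np.arange(J).reshape(-1, 1)     # j - 1
    pp = np.arange(J).reshape(1, -1)     # p - 1
    return np.exp(2j * alpha * jj * pp) @ phi

# Consistency check: with alpha = -pi/J the fDFT coincides with numpy's standard DFT.
J = 64
x = np.random.randn(J) + 1j * np.random.randn(J)
print(np.allclose(fdft(x, -np.pi / J), np.fft.fft(x)))   # True
```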
Let us define the mesh points $(x_j,t_k) \in \Omega = [-a,a]\times[0,b]$: for $J, K \in \mathbb{N}$,
$$x_j = (j-1)\delta_x - a, \quad \delta_x = \frac{2a}{J-1}, \qquad t_k = (k-1)\delta_t, \quad \delta_t = \frac{b}{K-1}, \qquad (38)$$
for $j = 1,\dots,J$, $k = 1,\dots,K$. Similarly, let us define the mesh for the domain $[-A,A]\times[-B,B]$:
$$r_p = (p-1)\delta_r - A, \quad \delta_r = \frac{2A}{J-1}, \qquad s_q = (q-1)\delta_s - B, \quad \delta_s = \frac{2B}{K-1}, \qquad (39)$$
for $p = 1,\dots,J$, $q = 1,\dots,K$. Denote $\psi_{p,q} := \psi(r_p,s_q)$. Using the Newton–Cotes method we have
$$v_\epsilon(x_j,t_k) = \int_{-B}^{B}\int_{-A}^{A}\psi(r,s)\, e^{i(rx_j+st_k)}\, dr\, ds \simeq \delta_r\delta_s\sum_{p=1}^{J}\sum_{q=1}^{K}\omega_p\,\omega_q\,\psi_{p,q}\, e^{i(x_j r_p + t_k s_q)} \equiv \mathrm{d}(\psi)_{j,k}, \qquad (40)$$
where $\omega_p$ are some weight coefficients. Thus, $v_\epsilon(x_j,t_k) \simeq \mathrm{d}(\psi)_{j,k}$. We calculate $\mathrm{d}(\psi)_{j,k}$ as follows. Substituting $x_j, r_p, t_k$ and $s_q$ from (38) and (39) into $\mathrm{d}(\psi)_{j,k}$, we obtain
$$\mathrm{d}(\psi)_{j,k} = \delta_r\delta_s\, e^{-i(Ax_j+Bt_k)}\,\mathbf{F}^{(\alpha_s)}_K\Big(\Big\{\omega_q\,\mathbf{F}^{(\alpha_r)}_J\big(\{\omega_p\, e^{-i(p-1)a\delta_r}\,\psi_{p,q}\}_{p=1,\dots,J}\big)_j\Big\}_{q=1,\dots,K}\Big)_k, \qquad (41)$$
where $\alpha_r = \delta_x\delta_r/2$ and $\alpha_s = \delta_t\delta_s/2$. Herein the transforms $\mathbf{F}^{(\alpha_r)}_J$ and $\mathbf{F}^{(\alpha_s)}_K$ are defined in (37) and can be calculated fast by the method in [15].

For the transformation $\psi_{p,q} \mapsto \mathrm{d}(\psi)_{j,k}$, we can use only one storage array, namely $\psi$, throughout Step 3 to save computer memory. Let us summarize the procedure in the three following steps. For $\psi$ an array of complex numbers of length $JK$, we perform (41) as follows:
• Step 3.1: Looping over $q = 1,\dots,K$ and $p = 1,\dots,J$, assign
$$\psi_{p,q} := \omega_p\,\psi_{p,q}\, e^{-i(p-1)a\delta_r}.$$
Looping over $q = 1,\dots,K$, perform the fDFT
$$\{\psi_{p,q}\}_{p=1,\dots,J} \mapsto \psi_{j,q} := \mathbf{F}^{(\alpha_r)}_J\big(\{\psi_{p,q}\}_{p=1,\dots,J}\big)_j, \qquad j = 1,\dots,J.$$
• Step 3.2: Looping over $q = 1,\dots,K$ and $j = 1,\dots,J$, adjust
$$\psi_{j,q} := \omega_q\,\psi_{j,q}.$$
Looping over $j = 1,\dots,J$, perform the fDFT
$$\{\psi_{j,q}\}_{q=1,\dots,K} \mapsto \psi_{j,k} := \mathbf{F}^{(\alpha_s)}_K\big(\{\psi_{j,q}\}_{q=1,\dots,K}\big)_k, \qquad k = 1,\dots,K.$$
• Step 3.3: Looping over $k = 1,\dots,K$ and $j = 1,\dots,J$, assign
$$\mathrm{d}(\psi)_{j,k} := \delta_r\delta_s\, e^{-i(Ax_j+Bt_k)}\,\psi_{j,k}.$$
Finally, $v_\epsilon(x_j,t_k) \simeq \operatorname{Re}\{\mathrm{d}(\psi)_{j,k}\}$ for all the mesh points $(x_j,t_k) \in [-a,a]\times[0,b]$.
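The three sub-steps above translate almost literally into the following NumPy sketch, which reuses the direct fdft helper shown earlier; it trades the in-place storage scheme and the FFT speed of the paper's Fortran implementation for readability, and all argument names are illustrative.

```python
import numpy as np

def evaluate_v(psi, a, A, B, delta_x, delta_t, delta_r, delta_s, omega_r, omega_s, x, t):
    """Steps 3.1-3.3: turn the sampled integrand psi[p, q] = psi(r_p, s_q) into
    Re d(psi)[j, k] ~ v_eps(x_j, t_k). Uses the fdft sketch defined above."""
    J, K = psi.shape
    alpha_r = delta_x * delta_r / 2.0
    alpha_s = delta_t * delta_s / 2.0
    p = np.arange(J)
    # Step 3.1: pre-multiply along r, then transform each column (index p -> j)
    work = psi * (omega_r * np.exp(-1j * p * a * delta_r))[:, None]
    work = np.stack([fdft(work[:, q], alpha_r) for q in range(K)], axis=1)
    # Step 3.2: multiply by the s-weights, then transform each row (index q -> k)
    work = work * omega_s[None, :]
    work = np.stack([fdft(work[j, :], alpha_s) for j in range(J)], axis=0)
    # Step 3.3: global phase and quadrature scaling
    d = delta_r * delta_s * np.exp(-1j * (A * x[:, None] + B * t[None, :])) * work
    return d.real
```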
Remark on the fDFT: To accelerate the calculation of (37), we apply the fast Fourier transform (FFT) algorithm, e.g. the subroutines CFFTB (for $\alpha > 0$) and CFFTF (for $\alpha < 0$) obtained from [16]. Details of the calculation are given in [15].

Remark on the Newton–Cotes method: Combining Simpson's 1/3 and 3/8 rules, we derive
$$\int_{r_1}^{r_J}\varphi(r)\, dr \simeq \delta_r\sum_{p=1}^{J}\omega_p\,\varphi(r_p), \qquad (42)$$
where $\delta_r = (r_J-r_1)/(J-1)$ and $r_p = r_1+(p-1)\delta_r$, and the weights in this implementation are $\omega_1 = \omega_J = 17/48$, $\omega_2 = \omega_{J-1} = 59/48$, $\omega_3 = \omega_{J-2} = 43/48$, $\omega_4 = \omega_{J-3} = 49/48$, and $\omega_5 = \dots = \omega_{J-4} = 1$ for $J \ge 9$ and $J$ odd.
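A sketch of the corresponding weight vector and a small accuracy check follows; the test integrand is an arbitrary smooth function chosen only for illustration.

```python
import numpy as np

def newton_cotes_weights(J):
    """Composite weights of Eq. (42): 17/48, 59/48, 43/48, 49/48, 1, ..., mirrored at the right end."""
    assert J >= 9
    w = np.ones(J)
    edge = np.array([17.0, 59.0, 43.0, 49.0]) / 48.0
    w[:4] = edge
    w[-4:] = edge[::-1]
    return w

# Quick check on a known integral: int_0^{2*pi} (1 + cos(r)^2) dr = 3*pi.
J = 4097
r = np.linspace(0.0, 2.0 * np.pi, J)
dr = r[1] - r[0]
approx = dr * np.sum(newton_cotes_weights(J) * (1.0 + np.cos(r) ** 2))
print(approx, 3.0 * np.pi)   # agreement to high accuracy
```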
Remark on Step 2: The summations (31) and (32) can be seen as instances of the quarter-sine transform (59). When the size $n$ in Eqs. (6) and (7) is large, we use the subroutine SINQF in [16] to perform these summations rapidly. Furthermore, we can apply the FFT technique to approximate (33) and (34) fast.

Indeed, we provide the numerical procedure for approximating only $\hat f^{\,\mathrm{ft}}_{N,M}(r_p,s_q)$ in (33) for all the mesh points $(r_p,s_q)$ defined in (39); after that, $\hat g^{\,\mathrm{ft}}_{N,M}(r_p,s_q)$ in (34) can be approximated by the same method. Firstly, applying (9) and (35) to (30), we can see that
$$\hat f^{\,\mathrm{ft}}_{N,M}(r,s) = \begin{cases} h\displaystyle\sum_{m=-M}^{M}\hat C_m(s)\, e^{-i mhr}, & -\pi/h < r < \pi/h,\\[1mm] 0, & \text{otherwise}, \end{cases} \qquad (43)$$
where
$$\hat C_m(s) = \int_0^T C_m(\varrho)\, e^{-i\varrho s}\, d\varrho, \qquad C_m(\varrho) = \sum_{p=1}^{N}\hat c_{p,m}\,\phi_p(\varrho), \qquad m = -M,\dots,M. \qquad (44)$$
For $K$ as in (38), to approximate the integral in (44), we define $\delta_\varrho$ and $\varrho_k$ by
$$\varrho_k = (k-1)\delta_\varrho, \qquad \delta_\varrho = \frac{T}{K-1}, \qquad (45)$$
and we define the sequence $\{\tilde c_{p,m}\}$, for $p = 1,\dots,K-1$ and $m = -M,\dots,M$, such that
$$\tilde c_{p,m} = \begin{cases}\hat c_{p,m}, & 1 \le p \le N,\\ 0, & N < p \le K-1.\end{cases} \qquad (46)$$
Denoting $C_{k,m} = C_m(\varrho_k)$ from (44), we have
$$C_{1,m} = \sum_{p=1}^{N}\hat c_{p,m}\,\phi_p(0) = 0, \qquad C_{k,m} = \sum_{p=1}^{N}\hat c_{p,m}\,\phi_p(\varrho_k) = \sum_{p=1}^{K-1}\tilde c_{p,m}\sin\Big(\frac{(p-\frac12)\pi(k-1)}{K-1}\Big), \qquad k = 2,\dots,K. \qquad (47)$$
Here we apply the inverse quarter-sine transform (60) to obtain $\{C_{k,m}\}$ rapidly for $k = 1,\dots,K$ and $m = -M,\dots,M$; e.g. the subroutine SINQB in [16] can be applied. For $s_q$ and $\delta_s$ in (39), we approximate $\hat C_m(s_q)$ in (44) by $\tilde C_{q,m}$ as follows:
$$\hat C_m(s_q) = \int_0^T C_m(\varrho)\, e^{-i\varrho s_q}\, d\varrho \simeq \delta_\varrho\sum_{k=1}^{K} w_k\, C_{k,m}\, e^{-i\varrho_k s_q} = \tilde C_{q,m}, \qquad (48)$$
where the weights $w_k$ are given in (42). Due to (45) and (39), we have $\varrho_k s_q = -2\beta_s(k-1)(q-1) - B\varrho_k$, where $\beta_s = -\delta_\varrho\delta_s/2$, and we deduce
$$\tilde C_{q,m} = \delta_\varrho\sum_{k=1}^{K} w_k\, C_{k,m}\, e^{-i\varrho_k s_q} = \delta_\varrho\sum_{k=1}^{K}\big(w_k\, C_{k,m}\, e^{i B\varrho_k}\big)\, e^{2\beta_s i(k-1)(q-1)} = \delta_\varrho\,\mathbf{F}^{(\beta_s)}_K\big(\{w_k C_{k,m}\, e^{i B\varrho_k}\}_{k=1,\dots,K}\big)_q. \qquad (49)$$
Here we can see that the vector $\{\tilde C_{q,m}\}_{q=1,\dots,K}$ can be calculated fast by $\mathbf{F}^{(\beta_s)}_K$, for every $m$. Therefore $\hat C_m(s_q)$ is approximated efficiently by the FFT technique.
Secondly, based on the previous work, we deduce from (43) that
$$\hat f^{\,\mathrm{ft}}_{N,M}(r_p,s_q) \simeq \tilde f^{\,\mathrm{ft}}_{N,M}(r_p,s_q) = \begin{cases} h\displaystyle\sum_{m=-M}^{M}\tilde C_{q,m}\, e^{-i mhr_p}, & -\pi/h < r_p < \pi/h,\\[1mm] 0, & \text{otherwise},\end{cases} \qquad (50)$$
where $r_p$ and $s_q$ are defined in (39). For $J > 2M+2$, we put $J_1 = -J/2$ and $J_2 = J/2-1$ for $J$ even, or $J_1 = -(J-1)/2$ and $J_2 = (J-1)/2$ for $J$ odd. Then we have
$$\sum_{m=-M}^{M}\tilde C_{q,m}\, e^{-i mhr_p} = \sum_{m=J_1}^{J_2}\tilde C'_{q,m}\, e^{-i mhr_p}, \qquad (51)$$
in which we define
$$\tilde C'_{q,m} = \begin{cases}\tilde C_{q,m}, & \text{if } -M \le m \le M,\\ 0, & \text{if } (J_1 \le m < -M) \vee (M < m \le J_2),\end{cases} \qquad \tilde C'_{q,m} = \tilde C''_{q,m-J_1+1}, \qquad (52)$$
for all $m = J_1,\dots,J_2$ and $q = 1,\dots,K$. Now, putting $j = m-J_1+1$, we see that $j = 1,\dots,J$ as $m = J_1,\dots,J_2$, and the right-hand side of (51) becomes
$$\sum_{m=J_1}^{J_2}\tilde C'_{q,m}\, e^{-i mhr_p} = \sum_{j=1}^{J}\tilde C'_{q,J_1+j-1}\, e^{-i h((p-1)\delta_r-A)(j+J_1-1)} = \sum_{j=1}^{J}\tilde C''_{q,j}\, e^{-i h((p-1)\delta_r-A)(j-1)}\, e^{-i h((p-1)\delta_r-A)J_1} = e^{-i J_1 h r_p}\sum_{j=1}^{J}\big(\tilde C''_{q,j}\, e^{i hA(j-1)}\big)\, e^{2\beta_x i(p-1)(j-1)}, \qquad (53)$$
where $r_p$ and $\delta_r$ are defined in (39), $\beta_x = -h\delta_r/2$, and $\tilde C''_{q,j} = \tilde C'_{q,j+J_1-1}$ due to (52). Thus, combining (50), (51) and (53) we obtain
$$\hat f^{\,\mathrm{ft}}_{N,M}(r_p,s_q) \simeq h\, e^{-i J_1 h r_p}\,\mathbf{F}^{(\beta_x)}_J\big(\{\tilde C''_{q,j}\, e^{i hA(j-1)}\}_{j=1,\dots,J}\big)_p \qquad (54)$$
for $p = 1,\dots,J$, $q = 1,\dots,K$, and $r_p \in (-\pi/h,\pi/h)$. If $r_p \notin (-\pi/h,\pi/h)$, we assign $\hat f^{\,\mathrm{ft}}_{N,M}(r_p,s_q) = 0$.
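Steps (50)–(54) amount to zero-padding $\tilde C$ in the $m$-direction and applying the fDFT with parameter $\beta_x$. A compact sketch, reusing the fdft helper above (array shapes and names are illustrative assumptions, not the paper's Fortran routines):

```python
import numpy as np

def fhat_ft_on_grid(C_tilde, h, M, J, A):
    """Sketch of (50)-(54): from C_tilde[q, m] (columns ordered m = -M..M) to
    an array F[p, q] ~ \\hat f^ft_{N,M}(r_p, s_q) on the mesh (39)."""
    K = C_tilde.shape[0]
    J1 = -(J // 2) if J % 2 == 0 else -(J - 1) // 2
    delta_r = 2.0 * A / (J - 1)
    r = np.arange(J) * delta_r - A                   # r_p of (39)
    beta_x = -h * delta_r / 2.0
    # (52): embed the 2M+1 columns into J columns via the index mapping j = m - J1 + 1
    Cpp = np.zeros((K, J), dtype=complex)
    Cpp[:, (-M - J1):(M - J1 + 1)] = C_tilde
    phase_j = np.exp(1j * h * A * np.arange(J))      # e^{i h A (j-1)}
    F = np.empty((J, K), dtype=complex)
    for q in range(K):
        F[:, q] = h * np.exp(-1j * J1 * h * r) * fdft(Cpp[q, :] * phase_j, beta_x)
    F[np.abs(r) >= np.pi / h, :] = 0.0               # (43)/(50): zero outside (-pi/h, pi/h)
    return F
```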
Note that we must impose $J > 2(M+1)$ and $K \ge n$ in advance. Let us summarize the calculation procedure for Step 2 as follows. Starting from the data $\{\tau_{k,m}\}$ (Eq. (6)) and $\{\eta_{k,m}\}$ (Eq. (7)), for $k = 1,\dots,n$ and $m = -M,\dots,M$, we perform stepwise:

• Step 2.1: To compute (31) and (32). Looping over $m = -M,\dots,M$, we perform
$$\{\tau_{k,m}\}_{k=1,\dots,n} \overset{\texttt{SINQF}}{\longmapsto} \{\hat c_{q,m}\}_{q=1,\dots,n}, \qquad \{\eta_{k,m}\}_{k=1,\dots,n} \overset{\texttt{SINQF}}{\longmapsto} \{\hat d_{q,m}\}_{q=1,\dots,n}.$$
• Step 2.2: To define (46). For the sequences $\{\tilde c_{q,m}\}$ and $\{\tilde d_{q,m}\}$, where $q = 1,\dots,K-1$ and $m = -M,\dots,M$: looping over $m = -M,\dots,M$, we define
$$\tilde c_{q,m} = \hat c_{q,m}, \qquad \tilde d_{q,m} = \hat d_{q,m}, \qquad \forall q = 1,\dots,N,$$
and we assign $\tilde c_{q,m} = \tilde d_{q,m} = 0$ for all $q = N+1,\dots,K-1$.
• Step 2.3: To compute (47). For the sequences $\{C_{k,m}\}$ and $\{D_{k,m}\}$, where $k = 1,\dots,K$ and $m = -M,\dots,M$: looping over $m = -M,\dots,M$, we perform
$$C_{1,m} = 0, \qquad \{\tilde c_{q,m}\}_{q=1,\dots,K-1} \overset{\texttt{SINQB}}{\longmapsto} \{C_{k,m}\}_{k=2,\dots,K},$$
$$D_{1,m} = 0, \qquad \{\tilde d_{q,m}\}_{q=1,\dots,K-1} \overset{\texttt{SINQB}}{\longmapsto} \{D_{k,m}\}_{k=2,\dots,K}.$$
• Step 2.4: To compute (49). Put $\beta_s = -\delta_\varrho\delta_s/2$. Looping over $m = -M,\dots,M$, we perform
$$\tilde C_{q,m} = \delta_\varrho\,\mathbf{F}^{(\beta_s)}_K\big(\{w_k\, e^{i B\varrho_k}\, C_{k,m}\}_{k=1,\dots,K}\big)_q, \qquad \tilde D_{q,m} = \delta_\varrho\,\mathbf{F}^{(\beta_s)}_K\big(\{w_k\, e^{i B\varrho_k}\, D_{k,m}\}_{k=1,\dots,K}\big)_q, \qquad \forall q = 1,\dots,K.$$
• Step 2.5: To define (52). If $J$ is even, we put $J_1 = -J/2$ and $J_2 = J/2-1$; otherwise, we define $J_1 = -(J-1)/2$ and $J_2 = (J-1)/2$. For the sequences $\{\tilde C''_{q,j}\}$ and $\{\tilde D''_{q,j}\}$, where $q = 1,\dots,K$ and $j = 1,\dots,J$: looping over $m = -M,\dots,M$, using the mapping $j = m-J_1+1$, we define
$$\tilde C''_{q,j} = \tilde C_{q,m}, \qquad \tilde D''_{q,j} = \tilde D_{q,m}, \qquad \forall q = 1,\dots,K.$$
Looping over $m = J_1,\dots,-M-1$ and over $m = M+1,\dots,J_2$, we assign $\tilde C''_{q,j} = \tilde D''_{q,j} = 0$ for all $q = 1,\dots,K$.
• Step 2.6: To compute (54). Put $\beta_x = -h\delta_r/2$. Looping over $q = 1,\dots,K$, we perform
$$F_{p,q} = h\, e^{-i J_1 h r_p}\,\mathbf{F}^{(\beta_x)}_J\big(\{e^{i hA(j-1)}\,\tilde C''_{q,j}\}_{j=1,\dots,J}\big)_p, \qquad G_{p,q} = h\, e^{-i J_1 h r_p}\,\mathbf{F}^{(\beta_x)}_J\big(\{e^{i hA(j-1)}\,\tilde D''_{q,j}\}_{j=1,\dots,J}\big)_p, \qquad \forall p = 1,\dots,J.$$
For $q = 1,\dots,K$ and $p = 1,\dots,J$, if $r_p \notin (-\pi/h,\pi/h)$ then we set $F_{p,q} = 0$ and $G_{p,q} = 0$.

Finally, we approximate $\hat f^{\,\mathrm{ft}}_{N,M}(r_p,s_q)$ and $\hat g^{\,\mathrm{ft}}_{N,M}(r_p,s_q)$ by $F_{p,q}$ and $G_{p,q}$, respectively, for $p = 1,\dots,J$ and $q = 1,\dots,K$.
In practice, we use the Fortran programming language [17] to perform the calculations. To save computer memory, we use only two arrays, namely U and V, throughout Steps 2 and 3. Precisely, we use the couple (U, V) to store the values of $(\tau,\eta)$, $(\hat c,\hat d)$, $(\tilde c,\tilde d)$, $(C,D)$, $(\tilde C,\tilde D)$, $(\tilde C'',\tilde D'')$ and $(F,G)$, stepwise, with the aid of the index mapping $j = m-J_1+1$. Based on the column-major order of the array model in Fortran, U and V should be processed with dimension $K\times J$ from Step 2.1 to Step 2.5. However, before Step 2.6, the arrays should be transposed so that the dimension is turned from $K\times J$ into $J\times K$. Thus, the $q$-loop of Step 2.6 should be modified, e.g., as follows:
$$\texttt{U(p,q)} := h\, e^{-i J_1 h r_p}\,\mathbf{F}^{(\beta_x)}_J\big(\{e^{i hA(j-1)}\,\texttt{U(j,q)}\}_{j=1,\dots,J}\big)_p, \qquad \forall p = 1,\dots,J.$$
This transposition not only speeds up the computation at Step 2.6, thanks to the column-major order of the array model, but also ensures that the dimension of the outcome (i.e. $J\times K$) is consistent with that of the input for Step 3.

Remark on the mesh resolutions: In order to improve the accuracy of the Newton–Cotes procedure (42), as well as of (40) and (48), we set the mesh resolutions $J$ and $K$ to be large enough; at the least, they should satisfy $J > 2M+2$ and $K \ge n$. In practice, we determine the sizes of the problem, i.e. $J$ and $K$, in advance of Step 2. They should be as large as possible so that the desired accuracy is achieved; however, there is a trade-off between doing the calculation accurately and doing it quickly. Now we explain the calculation procedure for Step 1.
Step 1: Theoretically, for $n \in \mathbb{N}$, we need to choose the parameters $h_n$, $T_n$, $M_n$, $N_n$, $\epsilon_n$ in such a way that $\varepsilon_0(h,T,M,N)$ becomes small as $n$ becomes large. Herein $\varepsilon_0$ is given with the setting $\Lambda^2 = C_\Lambda T^{2\tau}h^{-2\beta'}$ for a constant $C_\Lambda$, $\beta' > 1/2$, $0 < \alpha < 3/2$ and $0 < \beta < \beta'-1/2$ as in (16), and $\tau = 2$ as in Lemma 2.4, i.e.
$$\varepsilon_0(h,T,M,N) = \frac{T^{2\tau+1}}{h^{2\beta'-1}}\Big(\frac{A_\varepsilon}{N^{2\alpha}} + \frac{B_\varepsilon}{M^{2\beta}}\Big) + \frac{C_\varepsilon MhTN}{n} + D_\varepsilon h^{4q-2} + E\,T^{-\rho}.$$
However, as defined in (23), the constants $A_\varepsilon, B_\varepsilon, C_\varepsilon, D_\varepsilon, E$ are calculated from the a priori constants $\alpha, \beta, \beta', \tau, \rho, C_{\mathrm{sinc}}, C_{\mathrm{Fou}}$. To overcome this obstacle partly, we can use Theorem 3.1 to obtain the parameters $\rho = 2$, $\beta' = 1$, $\alpha = 1$, $\tau = 2$. Since $0 < \beta < \beta'-1/2$, we can choose $\beta = 3/8$. In practice we take a number $T$ in advance such that $T^{-\rho}$ is small enough, e.g. $T = 20$, and we fix $T$ from now on. In the experimental examples, to verify the convergence of the algorithm, the role of the a priori constants $C_{\mathrm{sinc}}, C_{\mathrm{Fou}}, K_{v,\theta}$ is dimmed out to simplify our analysis; hence, to choose the remaining parameters, we use another scheme. Put
$$N_n = S_N\, n^{\nu_N}, \qquad M_n = S_M\, n^{\nu_M}, \qquad h_n = S_h\, n^{-\nu_h}, \qquad \epsilon_n = S_\epsilon\, n^{-\nu_\epsilon}, \qquad b_n = \frac{\ln(4/\epsilon_n)}{\sqrt{2(\sqrt{2}+1)}}, \qquad (55)$$
where $S_N, S_M, S_h, S_\epsilon$ are positive constants which will be provided empirically and the exponents $\nu_N, \nu_M, \nu_h, \nu_\epsilon > 0$ are fixed. With such a choice, $h_n^{2\beta'-1}N_n^{2\alpha}$ and $h_n^{2\beta'-1}M_n^{2\beta}$ grow like powers of $n$, while $n^{-1}M_n h_n N_n$ and $h_n$ decay like powers of $n$ as $n \to \infty$; therefore, in this process $\varepsilon_0(h_n,T,M_n,N_n)$ tends to $E\,T^{-\rho}$, which plays the role of a numerical tolerance.
4.2 Simulation and comments

Example 1. To obtain the "exact" Cauchy data for the problem (2)–(5), we apply the finite difference method (FDM), in which the implicit Crank–Nicolson method and the central difference scheme are adopted for the temporal and the spatial approximation, respectively. Eq. (2) is solved numerically on a fine 2D mesh with the initial condition (5) and the boundary conditions
$$u(a_\infty, y, t) = u(-a_\infty, y, t) = u(x, b_\infty, t) = 0, \qquad u(x, 0, t) = \sqrt{t}\, e^{-(x-t/2)^2 - t}.$$
After that we use the subroutine RGSF3P in [18] to interpolate the numerical result from this mesh in order to obtain the "exact" Cauchy data $f, g$ in (3), (4), and the "exact" solution $u(\cdot,y_0,\cdot)$ for $y_0 > 0$. In our practice, we choose $a_\infty = 15$ and $b_\infty = 30$, and perform the FDM on a mesh with resolution $3001\times3001$ for the spatial discretization and with time step $10^{-2}$. A similar implementation using the FDM to obtain input data can be found in [19, Appendix].

For $n$ given, we compute $v_n \approx u(x, 0.1, t)$ (i.e. $y_0 = 0.1$) by the procedure in Steps 1–3 with the Cauchy data (Eqs. (6) and (7)) given in two cases: without noise, and with Gaussian noise. In this test case we aim to observe the estimation of the expectation of the relative error of $\|v_n - u(\cdot,y_0,\cdot)\|$, that is
$$\mathrm{ERE} := \frac{1}{L}\sum_{l=1}^{L}\frac{\sqrt{\sum_{j=1}^{J}\sum_{k=1}^{K}\big|v_{n,(l),j,k} - u(x_j,y_0,t_k)\big|^2}}{\sqrt{\sum_{j=1}^{J}\sum_{k=1}^{K}\big|u(x_j,y_0,t_k)\big|^2}}, \qquad (56)$$
where $L$ is the number of replications, $x_j$ and $t_k$ were defined in (38), and $v_{n,(l),j,k}$ denotes the value of the $l$th solution $v_{n,(l)}$ at the mesh point $(x_j,t_k)$, for $l = 1,\dots,L$. For every $n$, we repeat generating the data randomly as in (6)–(7) and performing the calculation of $v_n$ $L$ times in order to obtain the above error estimate.
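For completeness, the empirical error (56) is straightforward to compute once the $L$ regularized solutions and the exact solution are available on the mesh; a minimal sketch (array layout is an assumption for illustration):

```python
import numpy as np

def ere(v_runs, u_exact):
    """Empirical relative error (56): v_runs has shape (L, J, K) holding the L regularized
    solutions on the (x_j, t_k) mesh; u_exact has shape (J, K)."""
    num = np.sqrt(np.sum(np.abs(v_runs - u_exact[None, :, :]) ** 2, axis=(1, 2)))
    den = np.sqrt(np.sum(np.abs(u_exact) ** 2))
    return float(np.mean(num / den))
```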
In this example, the diffusivity is $\kappa = 1$. We set $a = b = 9$ in (38), $J = \max\{4096, 2M_n+3\}$ and $K = \max\{1025, n+1\}$ for every $n$. Letting $T = 20$, we set the values of the constants in (55) as follows: $S_N = 5$, $S_M = 30$, $S_h = 10^{-1}$ and $S_\epsilon = 1$. We approximate $v_n = v_{\epsilon_n}$ (Eq. (36)) by the method in Steps 1–3.

Table 1 shows ERE in (56) for $y_0 = 0.1$, for various values of $n$ and for different noise levels. Therein we calculate ERE with $L = 1$ for the data without noise, and with $L = 100$ for the data disturbed by Gaussian noise $\mathcal{N}(0,\sigma)$, where $\sigma = 5\%, 1\%$. From the left to the right columns of the table we can see that $v_n$ converges to $u(\cdot,y_0,\cdot)$ stably;
Table 1
Error estimate ERE defined in (56), for increasing sample sizes $n$ (columns, from smallest to largest).
No noise: 1.48E−01, 1.01E−01, 9.94E−02, 9.66E−02, 9.09E−02, 8.50E−02, 8.15E−02
σ = 1%: 2.37E−01, 2.13E−01, 2.12E−01, 2.07E−01, 1.99E−01, 1.95E−01, 1.92E−01
σ = 5%: 9.25E−01, 9.28E−01, 9.45E−01, 9.17E−01, 8.90E−01, 8.82E−01, 8.75E−01
this trend appears in every row of the table, and it confirms the reliability of the proposed method.

The calculation speed of Steps 2 and 3 is moderately fast. For the mesh resolution $J\times K$, with $J = 4096$, the average computation times on a computer with a 3.00 GHz Intel Core i7-9700F CPU and 16 GB RAM, using the GNU Fortran 8.3.0 compiler on Debian/Linux, were (in seconds, for increasing $n$): 1.49, 3.36, 6.68, 10.59, 14.47.

The computer program was not optimized, since we focus only on illustrating the implementation of the method.
Example 2. Let us consider heat transfer in a metal with thermal diffusivity $\kappa = 0.23\ \mathrm{cm^2/s}$. The problem (2), (5) is considered with an instantaneous source located initially at the origin (see, e.g., [20, p. 28]), and the exact solution is given by
$$u(x,y,t) = \frac{Q}{t}\, e^{-\frac{x^2+y^2}{4\kappa t}}, \qquad t > 0, \qquad (57)$$
where $Q > 0$ is some constant. Let us take from the formula (57) the functions
$$f(x,t) = \frac{Q}{t}\, e^{-\frac{x^2+1}{4\kappa t}}, \qquad g(x,t) = \frac{Q}{t}\, e^{-\frac{x^2+4}{4\kappa t}}, \qquad v(x,t) = u(x,0,t) = \frac{Q}{t}\, e^{-\frac{x^2}{4\kappa t}}, \qquad t > 0, \qquad (58)$$
which then stand for our exact data (i.e. $f$ and $g$) and exact solution (i.e. $v$) of (1)–(7). To verify that $f$ and $g$ satisfy the assumptions (14)–(16), readers may consult Section 5.6.
Herein $v(0,t)$ is the instantaneous source of the problem. We aim to detect $v(x,t)$ from the Cauchy data $f, g$ given by the models (6) and (7).

We have a note on (57). Since $\lim_{t\downarrow 0} u(x,y,t) = 0$ for every $(x,y)\neq(0,0)$ and $\lim_{t\downarrow 0} u(0,0,t) = \infty$, the exact solution $v(x,t)$ has a singularity at $x = 0$ as $t$ tends to zero. We need to approximate $u(x,y,0)$. Based on the fact that $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} u(x,y,t)\, dx\, dy = 4\pi\kappa Q$ for all $t > 0$, to eliminate the singularity we approximate $u(x,y,t) := \delta_d(x,y)$ for $t \in [0,(4\kappa d)^{-1}]$, where $\delta_d(x,y) = 4\kappa Q d\, e^{-d(x^2+y^2)}$ for some $d > 0$ large enough. Herein we can check that $\delta_d$ is an approximation of the Dirac delta at $(x,y) = (0,0)$ with the scale factor $4\pi\kappa Q$, i.e. $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\delta_d(x,y)\, dx\, dy = 4\pi\kappa Q$ for all $d > 0$, $\lim_{d\to\infty}\delta_d(x,y) = 0$ for every $(x,y)\neq(0,0)$, and $\lim_{d\to\infty}\delta_d(0,0) = \infty$. Therefore, we approximate $v(x,t) \simeq \delta_d(x,0)$, $f(x,t) \simeq \delta_d(x,1)$ and $g(x,t) \simeq \delta_d(x,2)$ for $t \in [0,(4\kappa d)^{-1}]$; otherwise, we use the exact form (58).

Let $Q = 2$, $d = 100$, $T = 20$. Let $a = b = 9$ in (38). Let $S_N = 5$, $S_M = 30$, $S_h = 10^{-1}$, $S_\epsilon = 1/80$ in (55). For $n$, $J$ and $K$ given as in Example 1, we perform Steps 1–3 with the Cauchy data in (6) and (7) given in two cases of white noise: $\epsilon_{j,m}, \varepsilon_{j,m} \sim \mathcal{N}(0,\sigma)$ for $\sigma = 10\%$ and $5\%$, respectively.
Fig. 1 shows the Cauchy data defined by the models (6) and (7), where $\epsilon_{j,m}$ and $\varepsilon_{j,m}$ are Gaussian noise $\mathcal{N}(0,\sigma)$, for $\sigma = 10\%, 5\%$; the displayed data were chosen randomly. In the subfigures 1(c)–1(f), we can see that the data quality was mostly defaced by the noise. However, the data in Figs. 1(c)–1(d) look better than those shown in Figs. 1(e)–1(f). We can see how this difference influences the numerical solutions in Fig. 2.

Fig. 2 shows the regularized solution $v_n(x,t)$ calculated from the data $(\tau_{j,m},\eta_{j,m})$ by the method in Steps 1–3; the displayed results were selected randomly. From the top to the bottom of the figure, we can see that the Dirac delta distribution $v(x,0)$ was well detected at the origin $(x,t) = (0,0)$, and the detection improves clearly when $n$ becomes large, i.e. $n = 1000, 4000, 8000$. From the left to the right of the figure, we can see that the graph of $v_n$ with the noise $\sigma = 5\%$ has fewer wiggles than the one with $\sigma = 10\%$; this phenomenon appears from the top to the bottom of the figure. It confirms the fact that the quality of the data depends not only on the sample size (i.e. $n$) but is also influenced by the deviation of the noise.

As experimented in Examples 1 and 2, we conclude that the proposed regularization method works well and is feasible to perform in practice. However, let us emphasize that there is a trade-off between stability and accuracy, and that the empirical constants $S_N, S_M, S_h, S_\epsilon$ play an important role in the accuracy and stability of the regularized solution. For instance, if we decrease $S_\epsilon$, e.g. from $S_\epsilon = 1$ to $10^{-1}$ in Example 1, then a higher accuracy can be achieved but the stability is less assured, and conversely. Let us reserve this problem of parameter selection for a future study.
5 Proofs

5.1 The Fourier series and Sinc series

5.1.1 The Fourier series

To present the proofs, we recall the following properties of the trigonometric basis, which are given by Tsybakov [21].

Lemma 5.1. For $n \in \mathbb{N}$, let $t_j = jT/n$ for $j = 1,\dots,n$.
(a) For every $p, q \in \mathbb{N}$, we have
$$\frac{2}{n}\sum_{l=1}^{n-1}\phi_p(t_l)\phi_q(t_l) + \frac{1}{n}\phi_p(t_n)\phi_q(t_n) = \begin{cases} 1, & \text{if } |p-q| = 2kn,\ k = 0,1,2,\dots,\\ -1, & \text{if } p+q = 2kn+1,\ k = 1,2,3,\dots,\\ 0, & \text{otherwise}.\end{cases}$$
(b) For $\{u_j\}_{j=1,\dots,n}$ and $\{\hat u_p\}_{p=1,\dots,n}$ in $\mathbb{C}$, if
$$\hat u_p = \frac{2}{n}\sum_{j=1}^{n-1} u_j\,\phi_p(t_j) + \frac{1}{n}\, u_n\,\phi_p(t_n), \qquad (59)$$
then we have
$$u_j = \sum_{p=1}^{n}\hat u_p\,\phi_p(t_j), \qquad (60)$$
and vice versa. We call (59) and (60) the quarter-sine transform and its inverse, respectively. Moreover, we have
$$\sum_{p=1}^{n}|\hat u_p|^2 = \frac{2}{n}\sum_{j=1}^{n-1}|u_j|^2 + \frac{1}{n}|u_n|^2,$$
which is called the discrete Parseval identity.
(c) Let $u \in L^2(0,T)$ and $u$ be piecewise continuous on $(0,T)$, with $\theta_j = (2/T)\langle u,\phi_j\rangle_T$. Put
$$\alpha_j = \frac{2}{n}\sum_{k=1}^{n-1} u(t_k)\,\phi_j(t_k) + \frac{1}{n}\, u(t_n)\,\phi_j(t_n) - \theta_j.$$
Then we have
$$\alpha_j = \sum_{l=1}^{\infty}\big(\theta_{j+2ln} - \theta_{-j+2ln+1}\big), \qquad j = 1,\dots,n.$$
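A small numerical check of the quarter-sine transform pair (59)–(60) and of the discrete Parseval identity of Lemma 5.1(b) can be written as follows; the sample values of n and T are arbitrary.

```python
import numpy as np

n, T = 32, 20.0
t = np.arange(1, n + 1) * T / n
p = np.arange(1, n + 1)
Phi = np.sin(np.outer(p - 0.5, t) * np.pi / T)         # Phi[p-1, j-1] = phi_p(t_j)

u = np.random.randn(n) + 1j * np.random.randn(n)
w = np.full(n, 2.0 / n); w[-1] = 1.0 / n               # the weights appearing in (59)

u_hat = Phi @ (w * u)                                  # forward quarter-sine transform (59)
u_back = Phi.T @ u_hat                                 # inverse transform (60)
print(np.allclose(u_back, u))                          # True
print(np.isclose(np.sum(np.abs(u_hat) ** 2), np.sum(w * np.abs(u) ** 2)))  # Parseval: True
```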
Fig. 1. Example 2. The Cauchy data defined by (6) and (7) with the white noise $\epsilon_{j,m}, \varepsilon_{j,m} \sim \mathcal{N}(0,\sigma)$, for $\sigma = 10\%, 5\%$. Herein the data were chosen randomly.
Proof. To prove (a), we apply Lagrange's trigonometric identities, i.e.
$$\cos x + \cos 2x + \dots + \cos nx = -\frac{1}{2} + \frac{\sin\big((n+\frac12)x\big)}{2\sin(\frac{x}{2})},$$
$$\sin x + \sin 2x + \dots + \sin nx = \frac{\cos(\frac{x}{2})}{2\sin(\frac{x}{2})} - \frac{\cos\big((n+\frac12)x\big)}{2\sin(\frac{x}{2})},$$
for $n \in \mathbb{N}$ and for any $x \neq 2k\pi$, $k \in \mathbb{Z}$. The rest of this proof is elementary, therefore we omit it.
Fig. 2. Example 2. Regularized solution $v_n(x,t)$, chosen randomly from many calculations as detailed in Steps 1–3, with the Cauchy data influenced by the white noise $\mathcal{N}(0,\sigma)$.
The proof of (b) can be derived directly from the orthonormal
property in (a) So we also omit it Now, let us give the proof of (c)
Since 𝑢(𝑡 𝑙) =∑∞
𝑝=1𝜃 𝑝 𝜙 𝑝 (𝑡 𝑙), we have
𝛼 𝑞 = 2
𝑛
𝑛−1
∑
𝑙=1
𝑢 (𝑡 𝑙 )𝜙 𝑞 (𝑡 𝑙) +1
𝑛 (𝑡 𝑛 )𝜙 𝑞 (𝑡 𝑛 ) − 𝜃 𝑞 , 𝑞 = 1, 𝑛
= 2
𝑛
𝑛−1
∑
𝑙=1
(∞
∑
𝑝=1
𝜃 𝑝 𝜙 𝑝 (𝑡 𝑙)
)
𝜙 𝑞 (𝑡 𝑙) +1
𝑛
(∞
∑
𝑝=1
𝜃 𝑝 𝜙 𝑝 (𝑡 𝑛)
)
𝜙 𝑞 (𝑡 𝑛 ) − 𝜃 𝑞
=
∞
∑
𝑝=1
𝜃 𝑝
( 2
𝑛
𝑛−1
∑
𝑙=1
𝜙 𝑝 (𝑡 𝑙 )𝜙 𝑞 (𝑡 𝑙) +1
𝑛 𝜙 𝑝 (𝑡 𝑛 )𝜙 𝑞 (𝑡 𝑛)
)
− 𝜃 𝑞
Now we apply (a) For every 𝑞 = 1, 𝑛, we define
𝐴 𝑞 = {𝑞 + 2𝑚𝑛 ∶ 𝑚 = 0, 1, 2, …} ,
𝐵 𝑞 = {−𝑞 + 2𝑘𝑛 + 1 ∶ 𝑘 = 1, 2, 3, …} , where we can see that 𝐴 𝑞 ⊂ N and 𝐵 𝑞 ⊂ N Particularly, we have 𝐴 𝑞
∩𝐵 𝑞= ∅ Indeed, if 𝑙 ∈ 𝐴𝑞 and 𝑙 ∈ 𝐵 𝑞 then 𝑙 = 𝑞+2𝑚𝑛 and 𝑙 = −𝑞+2𝑘𝑛+1,