This paper is available online at http://stdb.hnue.edu.vn
ON DISCRETE APPROXIMATION OF OCCUPATION TIME OF DIFFUSION
PROCESSES WITH IRREGULAR SAMPLING
Nguyen Thi Lan Huong, Ngo Hoang Long and Tran Quang Vinh
Faculty of Mathematics and Informatics, Hanoi National University of Education
Abstract. Let X be a diffusion process and let A be a Borel subset of R. In this paper, we introduce an estimator for the occupation time Γ(A)_t = ∫_0^t I{X_s ∈ A} ds based on an irregular sample of X and study its asymptotic behavior.
Keywords: Occupation time, diffusion processes, irregular sample.
1 Introduction
Let X be a solution to the following stochastic differential equation

dX_t = b(X_t) dt + σ(X_t) dW_t,   X_0 = x_0 ∈ R,   (1.1)

where b and σ are measurable functions and W is a standard Brownian motion defined on a filtered probability space (Ω, F, (F_t)_{t>0}, P).
For each set A ∈ B(R), the occupation time of X in A is defined by

Γ(A)_t = ∫_0^t I{X_s ∈ A} ds.
The quantity Γ(A)_t is the amount of time the diffusion X spends in the set A up to time t. The problem of evaluating Γ(A) is important in many applied domains such as mathematical finance, queueing theory and biology. For example, in mathematical finance, these quantities are of great interest for the pricing of many derivatives, such as Parisian, corridor and Edokko options (see [1, 2, 9]).
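The occupation time of a Brownian motion above a fixed level is easy to approximate by simulation. The following sketch (our own illustration, not part of the paper; all names are ours) approximates Γ([0, ∞))_1 for a standard Brownian motion by a Riemann sum over a regular grid and checks the sample mean against the value 1/2 that follows from Lévy's arcsine law:

```python
import math
import random

def simulate_bm(n_steps, dt, rng):
    """One Brownian path sampled on a regular grid of step dt."""
    b, path = 0.0, []
    for _ in range(n_steps):
        b += rng.gauss(0.0, math.sqrt(dt))
        path.append(b)
    return path

def occupation_above(level, path, dt):
    """Riemann approximation Σ dt · I{B_{t_i} >= level} of Γ([level, ∞))."""
    return sum(dt for x in path if x >= level)

rng = random.Random(42)
n_steps, dt = 200, 1.0 / 200
samples = [occupation_above(0.0, simulate_bm(n_steps, dt, rng), dt)
           for _ in range(2000)]
mean_occ = sum(samples) / len(samples)
```

Since each B_{t_i} is symmetric, the discretized occupation time has mean exactly 1/2 here, so `mean_occ` should be close to 0.5 up to Monte Carlo error.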
In practice, one cannot observe the whole trajectory of X during a fixed interval. In other words, we can only collect the values of X at some discrete times, say 0 = t_1 < t_2 < ... Recently, Ngo and Ogawa [10] and Kohatsu-Higa et al. [7] have introduced an estimate for Γ(A) by using a Riemann sum, and they studied the rate of convergence of this approximation when X is observed at regular points, i.e., {t_i = i/n, i ≤ [nt]} for all t > 0 and any n > 0. However, in practice, for many reasons we cannot observe X at regular observation points. Thus, in this paper, we will construct an estimation scheme for Γ(A) based on an irregular sample {X_{t_i}, i = 0, 1, ...} of X and study its asymptotic behavior.
In particular, we first introduce an unbiased estimator for Γ(A) when X is a standard Brownian motion and provide a functional central limit theorem (Theorem 2.2) for the error process. It should be noted here that Assumption A, which is obviously satisfied for regular sampling, is the key to constructing the limit of the error process for irregular sampling. We then introduce an estimator for Γ(A) for a general diffusion process and show that its error is of order 3/4.
Received December 25, 2013. Accepted June 26, 2014.
Contact Nguyen Thi Lan Huong, e-mail address: nguyenhuong0011@gmail.com
2 Main results
Throughout this paper, we suppose that the coefficients b and σ satisfy the following conditions:
(i) σ is continuously differentiable and σ(x) ≥ σ_0 > 0 for all x ∈ R;
(ii) |b(x) − b(y)| + |σ(x) − σ(y)| ≤ C|x − y| for some constant C > 0.   (2.1)

The above conditions on b and σ guarantee the continuity of the sample paths and of the marginal distributions of X (see [11]). We note here that, under more restrictive conditions on the smoothness and boundedness of b, σ and their derivatives, Kohatsu-Higa et al. [7] have studied the strong rate of approximation of Γ(A) by a Riemann sum such as the one defined in [10].
At the nth stage, we suppose that X is observed at times t^n_i, i = 0, 1, 2, ..., satisfying 0 = t^n_0 < t^n_1 < t^n_2 < ..., and that there exists a constant k_0 > 0 such that

∆_n ≤ k_0 min_i ∆^n_i,   (2.2)

where ∆^n_i = t^n_i − t^n_{i−1} and ∆_n = max_i ∆^n_i. We assume moreover that lim_{n→∞} ∆_n = 0. We denote η_n(s) = t^n_i if t^n_i ≤ s < t^n_{i+1}.
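The sampling scheme above can be made concrete with a small sketch (the helper names are ours, not the paper's). The grid below draws every step from [base_dt/k_0, base_dt], which enforces condition (2.2) by construction, and η_n(s) is the last observation time before s:

```python
import bisect
import random

def irregular_grid(t_max, base_dt, k0, rng):
    """Observation times 0 = t_0 < t_1 < ... with every step drawn from
    [base_dt/k0, base_dt], so that max_i ∆_i <= k0 · min_i ∆_i, i.e. (2.2)."""
    times = [0.0]
    while times[-1] < t_max:
        times.append(times[-1] + rng.uniform(base_dt / k0, base_dt))
    return times

def eta(times, s):
    """η_n(s): the last observation time t_i with t_i <= s."""
    return times[bisect.bisect_right(times, s) - 1]

rng = random.Random(0)
grid = irregular_grid(1.0, 0.01, 2.0, rng)
steps = [b - a for a, b in zip(grid, grid[1:])]
```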
2.1 Occupation time of Brownian motion
We first recall the concept of stable convergence. Let (X_n)_{n≥0} be a sequence of random vectors with values in a Polish space (E, E), all defined on the same probability space (Ω, F, (F_t)_{t≥0}, P), and let G be a sub-σ-algebra of F. We say that X_n converges G-stably in law to X, denoted X_n →^{G-st} X, if X is an E-valued random vector defined on an extension (Ω′, F′, P′) of the original probability space and lim_{n→∞} E(g(X_n)Z) = E′(g(X)Z) for every bounded continuous function g : E → R and all bounded G-measurable random variables Z (see [4, 5, 8]). When G = F we write X_n →^{st} X instead of X_n →^{G-st} X.
We denote by L_t(a) the local time of a standard Brownian motion B at level a, up to and including time t, given by

L_t(a) = |B_t − a| − |a| − ∫_0^t sign(B_s − a) dB_s.
For each Borel function g defined on R and γ > 0, we set

β_γ(g) = ∫ |x|^γ |g(x)| dx,   λ(g) = ∫ g(x) dx.
In order to study the asymptotic behavior of the estimation error, we need the following assumption.

Assumption A: There exists a non-decreasing function F_t(x) such that

(1/(∆_n)^{3/2}) Σ_{t^n_i ≤ t} (t^n_i − t^n_{i−1})^{3/2} E(L_{t^n_i}(x) − L_{t^n_{i−1}}(x) | F_{t^n_{i−1}}) →^P F_t(x),   for all t > 0, x ∈ R.   (2.3)
Theorem 2.1 (The approximation of local time). Suppose that g satisfies the following conditions:

g(x) = o(x) as x → ∞,   β_1(g) < ∞,   and   λ(|g|) < ∞.   (2.4)

Then for all x ∈ R it holds that

Σ_{t^n_i ≤ t} √(t^n_i − t^n_{i−1}) g((B_{t^n_{i−1}} − x)/√(t^n_i − t^n_{i−1})) →^P λ(g) L_t(x).
Now we proceed to state the functional central limit theorem for the error process. First, let us recall the definition of an F-progressive conditional martingale (see [5] for more details). We call an extension of B another stochastic basis ˜B = (˜Ω, ˜F, (˜F_t), ˜P) constructed as follows. We have an auxiliary filtered space (Ω′, F′, (F′_t)_{t≥0}) such that each σ-field F′_{t−} is separable, and a transition probability Q_ω(dω′) from (Ω, F) into (Ω′, F′), and we set

˜Ω = Ω × Ω′,   ˜F = F ⊗ F′,   ˜F_t = ∩_{s>t} F_s ⊗ F′_s,   ˜P(dω, dω′) = P(dω) Q_ω(dω′).

A process X on the extension ˜B is called an F-progressive conditional martingale if it is adapted to (˜F_t) and if for P-almost all ω the process X(ω, ·) is a martingale on the basis B_ω = (Ω′, F′, (F′_t)_{t≥0}, Q_ω).
Theorem 2.2. Suppose that B is a standard Brownian motion defined on a filtered space B = (Ω, F, (F_t), P). For each n ≥ 1, t > 0 and K ∈ R we set

˜Γ(K)^n_t = ∫_0^t Φ((B_{η_n(s)} − K)/√(s − η_n(s))) ds,

where Φ is the standard normal distribution function. Then
(i) ˜Γ(K)^n_t is an unbiased estimator for the occupation time Γ([K, ∞))_t;
(ii) ˜Γ(K)^n_t →^P Γ([K, ∞))_t;
(iii) moreover, suppose that Assumption A holds; then there exists a good extension ˜B of B and a continuous F-progressive conditional martingale X′ with independent increments on this extension satisfying

⟨X′, X′⟩_t = (9/(20√(2π))) F_t(K),   ⟨X′, B⟩ = 0,

such that

(1/(∆_n)^{3/4}) (˜Γ(K)^n_t − Γ([K, ∞))_t) →^{st} X′.
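A minimal Monte Carlo sketch of the estimator of Theorem 2.2 (our own illustration, not the authors' code; all names and grid parameters are ours): we simulate B on an irregular grid, evaluate ˜Γ(K)^n_t with the inner integral over each sampling interval computed by a midpoint rule, and compare it with the plain Riemann-sum estimator:

```python
import math
import random

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gamma_tilde(times, B, K, n_sub=50):
    """~Γ(K)^n_t = ∫_0^t Φ((B_{η_n(s)} - K)/√(s - η_n(s))) ds; the integral
    over each sampling interval is evaluated by a midpoint rule."""
    total = 0.0
    for i in range(len(times) - 1):
        h = (times[i + 1] - times[i]) / n_sub
        for j in range(n_sub):
            u = (j + 0.5) * h        # u = s - η_n(s), with η_n(s) = times[i]
            total += h * Phi((B[i] - K) / math.sqrt(u))
    return total

def gamma_riemann(times, B, K):
    """Plain Riemann-sum estimator Σ_i ∆_i · I{B_{t_{i-1}} >= K}."""
    return sum((times[i + 1] - times[i]) * (B[i] >= K)
               for i in range(len(times) - 1))

rng = random.Random(7)
times = [0.0]                        # irregular grid on [0, 1]
while times[-1] < 1.0:
    times.append(times[-1] + rng.uniform(0.001, 0.002))
times[-1] = 1.0
B = [0.0]
for i in range(len(times) - 1):
    B.append(B[-1] + rng.gauss(0.0, math.sqrt(times[i + 1] - times[i])))
est = gamma_tilde(times, B, 0.0)
riem = gamma_riemann(times, B, 0.0)
```

Both quantities approximate the same occupation time, so for a fine grid they should be close to each other.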
Remark 2.1. Assumption A is natural: it is obviously satisfied for regular sampling. The following condition is sufficient for (2.3):

lim_{n→∞} min_i ∆^n_i (∆_n)^{−1} = 1.   (2.5)

Moreover, in this case F_t(x) = L_t(x).
Indeed, we set γ_n = min_i ∆^n_i / max_i ∆^n_i and

F^n_t(x) = (1/(∆_n)^{3/2}) Σ_{t^n_i ≤ t} (t^n_i − t^n_{i−1})^{3/2} E(L_{t^n_i}(x) − L_{t^n_{i−1}}(x) | F_{t^n_{i−1}}).

Hence F^n_t(x) = Σ_{t^n_i ≤ t} E(L_{t^n_i}(x) − L_{t^n_{i−1}}(x) | F_{t^n_{i−1}}) − S^n_t(x), where

0 ≤ S^n_t(x) = Σ_{t^n_i ≤ t} (1 − (∆^n_i)^{3/2}/(∆_n)^{3/2}) E(L_{t^n_i}(x) − L_{t^n_{i−1}}(x) | F_{t^n_{i−1}})
≤ (1 − γ_n^{3/2}) Σ_{t^n_i ≤ t} E(L_{t^n_i}(x) − L_{t^n_{i−1}}(x) | F_{t^n_{i−1}}) →^P 0,

since Σ_{t^n_i ≤ t} E(L_{t^n_i}(x) − L_{t^n_{i−1}}(x) | F_{t^n_{i−1}}) →^P L_t(x) and, by (2.5), γ_n → 1. Thus S^n_t(x) →^P 0 and F^n_t(x) →^P L_t(x) as n → ∞.
Condition (2.5) can be localized as follows: suppose that there exists a sequence of fixed times S_1 < S_2 < ..., which does not depend on n, such that in each interval (S_i, S_{i+1}) condition (2.5) is satisfied. Then condition (2.3) also holds.
2.2 Occupation time of general diffusion
In order to study the rate of convergence, we recall the definition of C-tightness. First, we denote by D(R) the Polish space of all càdlàg functions R_+ → R equipped with the Skorokhod topology. A sequence (X^n) of D(R)-valued random vectors defined on (Ω, F, (F_t), P) is tight if inf_K sup_n P(X^n ∉ K) = 0, where the infimum is taken over all compact sets K in D(R). The sequence (X^n) of processes is called C-tight if it is tight and if all limit points of the sequence {L(X^n)} are laws of continuous processes (see [5]).
Denote S(x) = ∫_{x_0}^x du/σ(u) and Y_t = S(X_t). For each set A ∈ B(R) of the form A = ∪_{i=0}^m [a_{2i}, a_{2i+1}), where −∞ ≤ a_0 < a_1 < ... < a_{2m+1} ≤ +∞, we introduce the following estimate for Γ(A)_t:

˜Γ(A)^n_t = Σ_{j=0}^m ∫_0^t [Φ((S(a_{2j+1}) − S(X_{η_n(s)}))/√(s − η_n(s))) − Φ((S(a_{2j}) − S(X_{η_n(s)}))/√(s − η_n(s)))] ds.
In particular, if A = [K, +∞), then a biased but consistent estimator for the occupation time ∫_0^t I{X_s > K} ds is defined by

˜Γ([K, ∞))^n_t = ∫_0^t Φ((S(X_{η_n(s)}) − S(K))/√(s − η_n(s))) ds.
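For a concrete diffusion the estimator is straightforward to evaluate. The sketch below (ours, not the authors') takes the hypothetical choice b = 0 and σ(x) = √(1 + x²), for which S(x) = ∫_0^x du/σ(u) = asinh(x), simulates X by an Euler scheme and evaluates ˜Γ([K, ∞))^n_t:

```python
import math
import random

def sigma(x):
    """Hypothetical diffusion coefficient; σ(x) >= 1 > 0 as in (2.1)(i)."""
    return math.sqrt(1.0 + x * x)

def S(x):
    """S(x) = ∫_0^x du/σ(u) = asinh(x) for this choice of σ."""
    return math.asinh(x)

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gamma_tilde_halfline(times, X, K, n_sub=20):
    """~Γ([K,∞))^n_t = ∫_0^t Φ((S(X_{η_n(s)}) - S(K))/√(s - η_n(s))) ds."""
    total = 0.0
    for i in range(len(times) - 1):
        h = (times[i + 1] - times[i]) / n_sub
        for j in range(n_sub):
            u = (j + 0.5) * h        # u = s - η_n(s)
            total += h * Phi((S(X[i]) - S(K)) / math.sqrt(u))
    return total

rng = random.Random(3)
times = [i / 400 for i in range(401)]   # a regular grid, for simplicity
X = [0.5]
for i in range(400):                    # Euler scheme with drift b = 0
    dt = times[i + 1] - times[i]
    X.append(X[-1] + sigma(X[-1]) * rng.gauss(0.0, math.sqrt(dt)))
est = gamma_tilde_halfline(times, X, 0.0)
```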
Theorem 2.3. For each set A ∈ B(R) of the form A = ∪_{i=0}^m [a_{2i}, a_{2i+1}), where −∞ ≤ a_0 < a_1 < ... < a_{2m+1} ≤ +∞, the sequence of stochastic processes

( (1/(∆_n)^{3/4}) (˜Γ(A)^n_t − ∫_0^t I{X_s ∈ A} ds) )_{t≥0}

is C-tight.
3 Proofs

We denote by (P_t)_{t>0} the Brownian semigroup given by P_t k(x) = ∫ k(x + y√t) ρ(y) dy, where ρ(y) = (1/√(2π)) e^{−y²/2} and k is a Lebesgue integrable function.
3.1 Some preliminary estimates
Throughout this section we denote by K a constant which may change from line to line. If K depends on an additional parameter γ, we write K_γ. We first recall some estimates on the semigroup (P_t).
Lemma 3.1 (Jacod [4]). Let k : R → R be an integrable function. If t > s > 0 and γ > 0, we have:

|P_t k(x)| ≤ K λ(|k|)/√t,   (3.1)

|P_t k(x) − (λ(k)/√(2πt)) e^{−x²/(2t)}| ≤ (K_γ/t) ( β_1(k)/(1 + |x/√t|^γ) + β_{1+γ}(k)/(1 + |x|^γ) ),   (3.2)

|P_t k(x) − (λ(k)/√(2πt)) e^{−x²/(2t)}| ≤ (K/t^{3/2}) (β_2(k) + β_1(k)|x|).   (3.3)
We will need the following estimate.

Lemma 3.2. Let k : R → R be an integrable function. Suppose that the sequence {t^n_i} satisfies (2.2). Denote

γ_1(k, x)^n_t = E[ Σ_{t^n_i ≤ t, i ≥ 2} (∆^n_i)² k((x + B_{t^n_{i−1}})/√(∆^n_i)) ],
γ_2(k, x)^n_t = E[ Σ_{t^n_i ≤ t, i ≥ 2} √(∆^n_i) k((x + B_{t^n_{i−1}})/√(∆^n_i)) ].

Then

(i) |γ_1(k, x)^n_t| ≤ K λ(|k|) (∆_n)^{3/2} √t,   (3.4)
(ii) |γ_2(k, x)^n_t| ≤ K λ(|k|) √t.   (3.5)

Moreover, if λ(k) = 0, then

|γ_1(k, x)^n_t| ≤ K β_1(k) (∆_n)² k_0 (1 + log⁺(t k_0/∆_n)),   (3.6)
|γ_1(k, x)^n_t| ≤ K (∆_n)² (β_2(k) + β_1(k)|x|),   (3.7)

and

|γ_2(k, x)^n_t| ≤ K β_1(k) √(∆_n) k_0 (1 + log⁺(t k_0/∆_n)),   (3.8)
|γ_2(k, x)^n_t| ≤ K √(k_0) √(∆_n) (β_2(k) + β_1(k)|x|).   (3.9)
Proof. From (3.1) and estimates (4.1), (4.2), we obtain (3.4) and (3.6). Furthermore, from (3.3) in Lemma 3.1 we get

|γ_1(k, x)^n_t| ≤ Σ_{t^n_i ≤ t, i ≥ 2} (∆^n_i)² K (∆^n_i/t^n_{i−1})^{3/2} (β_2(k) + β_1(k)|x|)
≤ K k_0 (∆_n)^{5/2} (β_2(k) + β_1(k)|x|) ∫_{∆_n/k_0}^t x^{−3/2} dx
≤ K (∆_n)² (β_2(k) + β_1(k)|x|).

By using analogous arguments, we obtain (3.5), (3.8) and (3.9).
Lemma 3.3. Assume that λ(g) = 0 and g satisfies (2.4). Then

(i) (1/(∆_n)³) E[ ( Σ_{t^n_i ≤ t} (t^n_i − t^n_{i−1})² g((x + B_{t^n_{i−1}})/√(t^n_i − t^n_{i−1})) )² ] → 0 as n → ∞;

(ii) E[ ( Σ_{t^n_i ≤ t} √(t^n_i − t^n_{i−1}) g((x + B_{t^n_{i−1}})/√(t^n_i − t^n_{i−1})) )² ] → 0 as n → ∞.
Proof. We first note that condition (2.4) implies that λ(g²) < ∞. We write

(1/(∆_n)³) E[ ( Σ_{t^n_i ≤ t} (t^n_i − t^n_{i−1})² g((x + B_{t^n_{i−1}})/√(t^n_i − t^n_{i−1})) )² ]
= (1/(∆_n)³) Σ_{t^n_i ≤ t} E[ (t^n_i − t^n_{i−1})⁴ g((x + B_{t^n_{i−1}})/√(∆^n_i))² ]
+ (2/(∆_n)³) Σ_{i: t^n_i < t^n_{i+1} ≤ t} E[ (∆^n_i)² g((x + B_{t^n_{i−1}})/√(∆^n_i)) ( Σ_{j: t^n_i < t^n_j ≤ t} (∆^n_j)² g((x + B_{t^n_{j−1}})/√(∆^n_j)) ) ].   (3.10)

Using (3.4) and (2.4), the first term of (3.10) is bounded by

∆_n g(x/√(∆^n_1))² + (∆_n)^{1/2} K λ(g²) √t → 0 as n → ∞.

Using (3.6) and the Markov property, we have

E[ Σ_{j: t^n_i < t^n_j ≤ t} (∆^n_j)² g((x + B_{t^n_{j−1}})/√(∆^n_j)) | F_{t^n_{i−1}} ]
= E[ Σ_{t^n_i < t^n_j ≤ t} (∆^n_j)² g((y + B_{t^n_{j−1} − t^n_{i−1}})/√(∆^n_j)) ] |_{y = x + B_{t^n_{i−1}}}
≤ K β_1(g) (∆_n)² k_0 (1 + log⁺((t − t^n_{i−1}) k_0/∆_n)) ≤ K β_1(g) (∆_n)² k_0 (1 + log⁺(k_0 n)).

Thus the second term of (3.10) is bounded by

(2/(∆_n)³) Σ_{i: t^n_i < t^n_{i+1} ≤ t} E[ (∆^n_i)² g((x + B_{t^n_{i−1}})/√(∆^n_i)) E( Σ_{j: t^n_i < t^n_j ≤ t} (∆^n_j)² g((x + B_{t^n_{j−1}})/√(∆^n_j)) | F_{t^n_{i−1}} ) ]
≤ K k_0 β_1(g) (∆_n)^{−1} (1 + log⁺(k_0 n)) Σ_{t^n_i < t^n_{i+1} ≤ t} E[ (∆^n_i)² g((x + B_{t^n_{i−1}})/√(∆^n_i)) ]
≤ K k_0 β_1(g) (∆_n)^{−1} (1 + log⁺(k_0 n)) [ (∆^n_1)² g(x/√(∆^n_1)) + K β_1(g) (∆_n)² k_0 (1 + log⁺(k_0 n)) ],

which tends to 0 as n → ∞ because of condition (2.4). This proves part (i). In an analogous manner, applying (3.5), (3.8) and (2.4), we obtain (ii).
For each set A ∈ B(R), where B(R) is the Borel σ-algebra on R, we denote

Γ(A)^n_t = Σ_{t^n_i ≤ t} ∆^n_i I{X_{t^n_i} ∈ A}.

Lemma 3.4. Suppose that condition (2.2) holds and that the set A ∈ B(R) satisfies ∫_{∂A} dx = 0. Then Γ(A)^n_t →^{a.s.} Γ(A)_t.
The proof is similar to that of Proposition 2.1 in [10] and is omitted.
Lemma 3.5. Assume that condition (2.3) holds and that the function g satisfies (2.4). Then for all x ∈ R it holds that

(1/(∆_n)^{3/2}) Σ_{t^n_i ≤ t} (t^n_i − t^n_{i−1})² g((B_{t^n_{i−1}} − x)/√(t^n_i − t^n_{i−1})) →^P λ(g) F_t(x).   (3.11)
Proof. We set ĝ(x) = E(|x + B_1| − |x|). Applying condition (2.3), we write

(1/(∆_n)^{3/2}) Σ_{t^n_i ≤ t} (t^n_i − t^n_{i−1})² ĝ((B_{t^n_{i−1}} − x)/√(t^n_i − t^n_{i−1}))
= (1/(∆_n)^{3/2}) Σ_{t^n_i ≤ t} (t^n_i − t^n_{i−1})^{3/2} E(L_{t^n_i}(x) − L_{t^n_{i−1}}(x) | F_{t^n_{i−1}}) →^P F_t(x).

Set g′ = g − λ(g)ĝ; then λ(g′) = 0. It follows from Lemma 3.3 and condition (2.3) that

(1/(∆_n)^{3/2}) Σ_{t^n_i ≤ t} (t^n_i − t^n_{i−1})² g((B_{t^n_{i−1}} − x)/√(t^n_i − t^n_{i−1}))
= λ(g) (1/(∆_n)^{3/2}) Σ_{t^n_i ≤ t} (t^n_i − t^n_{i−1})² ĝ((B_{t^n_{i−1}} − x)/√(t^n_i − t^n_{i−1}))
+ (1/(∆_n)^{3/2}) Σ_{t^n_i ≤ t} (t^n_i − t^n_{i−1})² g′((B_{t^n_{i−1}} − x)/√(t^n_i − t^n_{i−1}))
→^P λ(g) F_t(x).
3.2 Proof of Theorem 2.1

Let ĝ be as in Lemma 3.5. From the definition of L_t we have

E(|B_{t^n_i} − x| − |B_{t^n_{i−1}} − x| | F_{t^n_{i−1}}) = E(L_{t^n_i}(x) − L_{t^n_{i−1}}(x) | F_{t^n_{i−1}})   for all x ∈ R.

On the other hand, since B_{t^n_i} − B_{t^n_{i−1}} is independent of F_{t^n_{i−1}} and has the same distribution as √(∆^n_i) B_1, we have

E(|B_{t^n_i} − x| − |B_{t^n_{i−1}} − x| | F_{t^n_{i−1}}) = E(|B_{t^n_{i−1}} + (B_{t^n_i} − B_{t^n_{i−1}}) − x| − |B_{t^n_{i−1}} − x| | F_{t^n_{i−1}})
= E(|y + √(∆^n_i) B_1| − |y|) |_{y = B_{t^n_{i−1}} − x} = √(∆^n_i) ĝ((B_{t^n_{i−1}} − x)/√(∆^n_i)).

Hence, it follows from Lemma 2.14 of [3] that

Σ_{t^n_i ≤ t} √(t^n_i − t^n_{i−1}) ĝ((B_{t^n_{i−1}} − x)/√(t^n_i − t^n_{i−1})) = Σ_{t^n_i ≤ t} E(L_{t^n_i}(x) − L_{t^n_{i−1}}(x) | F_{t^n_{i−1}}) →^P L_t(x).

Set g′ = g − λ(g)ĝ; then λ(g′) = 0. From Lemma 3.3 (ii) one gets

Σ_{t^n_i ≤ t} √(t^n_i − t^n_{i−1}) g((B_{t^n_{i−1}} − x)/√(t^n_i − t^n_{i−1}))
= λ(g) Σ_{t^n_i ≤ t} √(t^n_i − t^n_{i−1}) ĝ((B_{t^n_{i−1}} − x)/√(t^n_i − t^n_{i−1}))
+ Σ_{t^n_i ≤ t} √(t^n_i − t^n_{i−1}) g′((B_{t^n_{i−1}} − x)/√(t^n_i − t^n_{i−1}))
→^P λ(g) L_t(x).

This concludes the proof of Theorem 2.1.
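The decomposition g = λ(g)ĝ + g′ used in the proof implicitly relies on λ(ĝ) = 1. This can be checked numerically from the standard closed form E|x + Z| = 2φ(x) + x(2Φ(x) − 1) for Z ~ N(0, 1); the sketch below (ours, not part of the paper) integrates ĝ by the trapezoidal rule:

```python
import math

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def g_hat(x):
    """ĝ(x) = E(|x + B_1| - |x|), via E|x + Z| = 2φ(x) + x(2Φ(x) - 1)."""
    return 2.0 * phi(x) + x * (2.0 * Phi(x) - 1.0) - abs(x)

# trapezoidal rule for λ(ĝ) = ∫ ĝ(x) dx; the integrand decays like φ(x)/x²,
# so truncating at ±10 is harmless
h, lo, hi = 0.005, -10.0, 10.0
n = int(round((hi - lo) / h))
lam = h * (0.5 * (g_hat(lo) + g_hat(hi)) +
           sum(g_hat(lo + i * h) for i in range(1, n)))
```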
Lemma 3.6. We denote N^n_t = Σ_{t^n_i ≤ t} N_{i,n} and M^n_t = Σ_{t^n_i ≤ t} M_{i,n}, where

N_{i,n} = (1/(∆_n)^{3/4}) [ (t^n_i − t^n_{i−1}) I_{[K,∞)}(B_{t^n_{i−1}}) − ∫_{t^n_{i−1}}^{t^n_i} I_{[K,∞)}(B_s) ds ],
M_{i,n} = N_{i,n} − E(N_{i,n} | F_{t^n_{i−1}}).

Then the sequence M^n converges stably to a continuous process defined on an extension of the original probability space. In particular, the sequence (M^n) is C-tight under the probability measure P.
Proof. We will prove the lemma in the following steps.

Step 1. A simple calculation using properties of the Brownian motion yields

E(N_{i,n} | F_{t^n_{i−1}}) = (1/(∆_n)^{3/4}) [ ∆^n_i I{B_{t^n_{i−1}} > K} − ∫_0^{∆^n_i} Φ((B_{t^n_{i−1}} − K)/√u) du ]
= (∆^n_i/(∆_n)^{3/4}) (1/√(2π)) ∫_{(B_{t^n_{i−1}} − K)/√(∆^n_i)}^{+∞} (1 − (B_{t^n_{i−1}} − K)²/(∆^n_i t²)) e^{−t²/2} dt · I{B_{t^n_{i−1}} > K}
− (∆^n_i/(∆_n)^{3/4}) (1/√(2π)) ∫_{−∞}^{(B_{t^n_{i−1}} − K)/√(∆^n_i)} (1 − (B_{t^n_{i−1}} − K)²/(∆^n_i t²)) e^{−t²/2} dt · I{B_{t^n_{i−1}} < K}.

We set

g_1(x) = ∫_x^{∞} (1 − x²/t²) e^{−t²/2} dt · I{x > 0} − ∫_{−∞}^{x} (1 − x²/t²) e^{−t²/2} dt · I{x < 0}.

We have ∫_R g_1(x)² dx = 7√(2π)/20 and |g_1(x)| ≤ min{π/2, x^{−2} e^{−x²/2}} for any x ∈ R. Hence it follows from Lemma 3.5 that

Σ_{t^n_i ≤ t} E(N_{i,n} | F_{t^n_{i−1}})² = Σ_{t^n_i ≤ t} (1/(2π)) (1/(∆_n)^{3/2}) (∆^n_i)² g_1((B_{t^n_{i−1}} − K)/√(∆^n_i))² →^P (7/(20√(2π))) F_t(K).   (3.12)
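The identity behind the computation of E(N_{i,n} | F_{t^n_{i−1}}) in Step 1 — for a = B_{t^n_{i−1}} − K > 0,

∆ − ∫_0^∆ Φ(a/√u) du = (∆/√(2π)) ∫_{a/√∆}^∞ (1 − a²/(∆t²)) e^{−t²/2} dt

— can be verified numerically. The following sketch (ours, with arbitrary test values for ∆ and a) compares the two sides by quadrature:

```python
import math

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def midpoint(f, a, b, n=20000):
    """Composite midpoint rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

delta, a = 0.3, 0.4   # arbitrary test values: step ∆_i and a = B - K > 0
lhs = delta - midpoint(lambda u: Phi(a / math.sqrt(u)), 0.0, delta)
# truncating the t-integral at 10 is harmless given the Gaussian decay
rhs = (delta / math.sqrt(2.0 * math.pi)) * midpoint(
    lambda t: (1.0 - a * a / (delta * t * t)) * math.exp(-t * t / 2.0),
    a / math.sqrt(delta), 10.0)
```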
Step 2. Next, by the Markov property and the Fubini theorem, we have

E[ ( ∫_{t^n_{i−1}}^{t^n_i} I{B_s > K} ds )² | F_{t^n_{i−1}} ] = ∫_0^{∆^n_i} ∫_0^{∆^n_i} E[ I{B_s > −r} I{B_u > −r} ] du ds |_{r = B_{t^n_{i−1}} − K}.

A direct calculation of the expectation E[I{B_s > −r} I{B_u > −r}] yields that, if r ≤ 0,

E( ∫_0^{∆^n_i} I{B_s > −r} ds )² = ((∆^n_i)²/π) ∫_0^1 (z^{3/2}/√(1 − z)) exp(−r²/(2∆^n_i (1 − z))) dz,

and if r > 0, then

E( ∫_0^{∆^n_i} I{B_s > −r} ds )² = (∆^n_i)² ( 1 − ∫_0^1 (1/(π√(z(1 − z)))) exp(−r²/(2z∆^n_i)) dz )
+ ((∆^n_i)²/π) ∫_0^1 (z^{3/2}/√(1 − z)) exp(−r²/(2∆^n_i (1 − z))) dz.
We have

E(N_{i,n}² | F_{t^n_{i−1}}) = ((∆^n_i)²/(∆_n)^{3/2}) I{B_{t^n_{i−1}} > K}
− (2∆^n_i/(∆_n)^{3/2}) I{B_{t^n_{i−1}} > K} E( ∫_{t^n_{i−1}}^{t^n_i} I{B_s > K} ds | F_{t^n_{i−1}} )
+ (1/(∆_n)^{3/2}) E[ ( ∫_{t^n_{i−1}}^{t^n_i} I{B_s > K} ds )² | F_{t^n_{i−1}} ].
Hence

E(N_{i,n}² | F_{t^n_{i−1}}) = ((∆^n_i)²/(∆_n)^{3/2}) { (1/π) ∫_0^1 (z^{3/2}/√(1 − z)) exp(−(B_{t^n_{i−1}} − K)²/(2∆^n_i (1 − z))) dz · I{B_{t^n_{i−1}} < K}
+ 2 ∫_0^1 (1 − Φ((B_{t^n_{i−1}} − K)/√(∆^n_i u))) du · I{B_{t^n_{i−1}} > K}
+ (1/π) ∫_0^1 (z^{3/2}/√(1 − z)) exp(−(B_{t^n_{i−1}} − K)²/(2∆^n_i z)) dz · I{B_{t^n_{i−1}} > K}
− (1/π) ∫_0^1 (1/√(z(1 − z))) exp(−(B_{t^n_{i−1}} − K)²/(2∆^n_i z)) dz · I{B_{t^n_{i−1}} > K} }.
Set

g_2(x) = (1/π) ∫_0^1 (z^{3/2}/√(1 − z)) exp(−x²/(2(1 − z))) dz · I{x < 0}
+ { 2 ∫_0^1 (1 − Φ(x/√u)) du + (1/π) ∫_0^1 (z^{3/2}/√(1 − z)) exp(−x²/(2z)) dz
− (1/π) ∫_0^1 (1/√(z(1 − z))) exp(−x²/(2z)) dz } · I{x > 0}.