1. Problems in Oksendal's book
Proof. By Problem EP1-1 and the continuity of Brownian motion, by passing to a subsequence we only need to prove the $L^2$-convergence. Indeed, set
$$B_t^{(n)} = \sum_{j=1}^n B^2_{(j-1)/n}\,\mathbf{1}_{\{(j-1)/n < t \le j/n\}}.$$
Then
$$E\Big[\int_0^1 \big(B_t^{(n)} - B_t^2\big)^2\,dt\Big] = \sum_{j=1}^n \int_{(j-1)/n}^{j/n} E\big(B^2_{(j-1)/n} - B_t^2\big)^2\,dt.$$
We note that $B_t^2 - B^2_{(j-1)/n} = (B_t - B_{(j-1)/n})^2 + 2B_{(j-1)/n}(B_t - B_{(j-1)/n})$, so
$$\int_{(j-1)/n}^{j/n} E\big(B^2_{(j-1)/n} - B_t^2\big)^2\,dt = \frac{2j-1}{n^3}.$$
Hence
$$E\int_0^1 \big(B_t^2 - B_t^{(n)}\big)^2\,dt = \sum_{j=1}^n \frac{2j-1}{n^3} = \frac{1}{n} \to 0.$$
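This computation can be double-checked by a small Monte Carlo sketch. The grid size, path count, seed, and tolerance below are our own illustrative choices, not part of the problem; the step process freezes $B^2$ at the left endpoint of each of the $n$ subintervals, as above.

```python
import math
import random

# Monte Carlo sanity check of E ∫_0^1 (B_t^2 - B_t^(n))^2 dt = 1/n,
# where B_t^(n) freezes B^2 at the left endpoint of each of n subintervals.
random.seed(0)
n = 4                 # subintervals of the step approximation
m = 100 * n           # fine time grid on [0, 1]
paths = 4000
dt = 1.0 / m
sd = math.sqrt(dt)

total = 0.0
for _ in range(paths):
    b = 0.0           # current value of B
    frozen = 0.0      # B^2 at the most recent point (j-1)/n
    acc = 0.0
    for k in range(m):
        if k % (m // n) == 0:
            frozen = b * b               # refresh the step process at t = (j-1)/n
        acc += (b * b - frozen) ** 2 * dt  # left-endpoint Riemann sum of the integral
        b += random.gauss(0.0, sd)
    total += acc

estimate = total / paths
print(estimate)   # should be close to 1/n = 0.25
```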
4.4
Proof. For part a), set $g(t, x) = e^x$ and use Theorem 4.1.2. For part b), it comes from the fundamental property of the Ito integral, i.e., the Ito integral preserves the martingale property for integrands in $\mathcal{V}$.
Comments: The power of Ito's formula is that it produces martingales, which vanish under expectation.
4.5
Proof. By Ito's formula,
$$B_t^k = \int_0^t k B_s^{k-1}\,dB_s + \frac12 k(k-1)\int_0^t B_s^{k-2}\,ds.$$
Therefore, taking expectations and writing $\beta_k(t) = E[B_t^k]$,
$$\beta_k(t) = \frac{k(k-1)}{2}\int_0^t \beta_{k-2}(s)\,ds.$$
This gives $E[B_t^4] = 3t^2$ and $E[B_t^6] = 15t^3$. For part b), prove by induction.
Proof. First, we check by the integration-by-parts formula that
$$dY_t = \Big(-a + b - \int_0^t \frac{dB_s}{1-s}\Big)dt + dB_t.$$
Set $X_t = (1-t)\int_0^t \frac{dB_s}{1-s}$; then $X_t$ is centered Gaussian, with variance
$$E[X_t^2] = (1-t)^2\int_0^t \frac{ds}{(1-s)^2} = (1-t) - (1-t)^2 = t(1-t).$$
So $X_t$ converges in $L^2$ to 0 as $t \to 1$. Since $X_t$ is continuous a.s. for $t \in [0, 1)$, we conclude that 0 is the unique a.s. limit of $X_t$ as $t \to 1$.
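The variance $t(1-t)$ can be checked by simulation. The sketch below (step count, sample size, and evaluation point $t = 1/2$ are our choices) discretizes $X_t = (1-t)\int_0^t \frac{dB_s}{1-s}$ with a left-endpoint Euler sum:

```python
import math
import random

# Simulate X_t = (1 - t) * ∫_0^t dB_s / (1 - s) at t = 0.5 and compare
# the sample variance with t(1 - t) = 0.25.
random.seed(1)
m = 1000              # time steps on [0, 1]
paths = 4000
dt = 1.0 / m
sd = math.sqrt(dt)
t_steps = m // 2      # t = 0.5

samples = []
for _ in range(paths):
    integral = 0.0
    for k in range(t_steps):
        s = k * dt                          # left endpoint of the step
        integral += random.gauss(0.0, sd) / (1.0 - s)
    samples.append(0.5 * integral)          # multiply by (1 - t) = 0.5

mean = sum(samples) / paths
var = sum((x - mean) ** 2 for x in samples) / paths
print(var)   # should be close to 0.25
```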
$$p(\rho) = \frac{x^{\gamma_1} - \rho^{\gamma_1}}{R^{\gamma_1} - \rho^{\gamma_1}}$$
7.18 a)
Proof. The line of reasoning is exactly what we have done for 7.9 b). Just replace $x^\gamma$ with a general function $f(x)$ satisfying certain conditions.
8.16 a)
Proof. Let $L_t = -\int_0^t \sum_{i=1}^n \partial_{x_i} h(B_s)\,dB_s^i$ … in $D$. So by the Poisson formula, for $z = re^{i\theta} \in D$,
$$h_n(z) = \frac{1}{2\pi}\int_0^{2\pi} P_r(t - \theta)\,h_n(e^{it})\,dt.$$
Let $n \to \infty$; then $h_n(z) \to E_z[\mathbf{1}_F(B_\tau)] = P_z(B_\tau \in F)$ by the bounded convergence theorem, and the RHS $\to \frac{1}{2\pi}\int_0^{2\pi} P_r(t-\theta)\mathbf{1}_F(e^{it})\,dt$ by the dominated convergence theorem. Hence
$$P_z(B_\tau \in F) = \frac{1}{2\pi}\int_0^{2\pi} P_r(t - \theta)\,\mathbf{1}_F(e^{it})\,dt.$$
Then by the $\pi$-$\lambda$ theorem and the fact that the Borel $\sigma$-field is generated by closed sets, we conclude
$$P_z(B_\tau \in F) = \frac{1}{2\pi}\int_0^{2\pi} P_r(t - \theta)\,\mathbf{1}_F(e^{it})\,dt$$
for any Borel subset $F$ of $\partial D$.
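As a numerical sanity check (the sample point $z = re^{i\theta}$ and grid size are ours), the Poisson kernel, which in explicit form is $P_r(\varphi) = \frac{1-r^2}{1-2r\cos\varphi+r^2}$, averages to 1 over the circle; i.e., the harmonic measure of all of $\partial D$ from any interior point is 1:

```python
import math

# The harmonic-measure identity above uses the Poisson kernel for the unit disk,
# P_r(φ) = (1 - r^2) / (1 - 2 r cos φ + r^2).
# Check numerically that (1/2π) ∫_0^{2π} P_r(t - θ) dt = 1.
def poisson_kernel(r, phi):
    return (1.0 - r * r) / (1.0 - 2.0 * r * math.cos(phi) + r * r)

r, theta, N = 0.7, 1.0, 4096
# Equally spaced rectangle rule; for a smooth periodic integrand it is
# extremely accurate, so the result should agree with 1 to machine precision.
total = sum(poisson_kernel(r, 2 * math.pi * k / N - theta) for k in range(N)) / N
print(total)   # ≈ 1.0
```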
b)
Proof. Let $B$ be a BM starting at 0. By Example 8.5.9, $\varphi(B_t)$ is, after a change of time scale $\alpha(t)$ and under the original probability measure $P$, a BM in the plane. $\forall F \in \mathcal{B}(\mathbb{R})$,
$P(B$ exits $D$ from $\psi(F))$
$= P(\varphi(B)$ exits the upper half plane from $F)$
$= P(\varphi(B)_{\alpha(t)}$ exits the upper half plane from $F)$
$=$ the probability that a BM starting at $i$ exits from $F$
$$= \frac{1}{2\pi}\int f(\varphi(e^{it}))\,dt, \quad\text{with } f = \mathbf{1}_F.$$
Proof. Let $\theta$ be an arbitrage for the market $\{X_t\}_{t\in[0,T]}$. Then for the market $\{\bar X_t\}_{t\in[0,T]}$: (1) $\theta$ is self-financing, i.e. $d\bar V^\theta$ …
Proof. We use Theorem 12.3.5. From part a), $\phi(t, \omega) = e^{-\rho t}\sigma$. We therefore should choose $\theta_1(t)$ such that $\theta_1(t)e^{-\rho t}\sigma = \sigma e^{-\rho t}$. So $\theta_1 = 1$, and $\theta_0$ can then be chosen as 0.
By the hint, if we consider the i.i.d. sequence $\{X_j\}_{j=1}^n$ normalized by its 4-th moment, we have
$$P(|S_n| > \varepsilon) \le \varepsilon^{-4}E[S_n^4] \le \varepsilon^{-4}C\,E[X^4]\,n^2.$$
By the integration-by-parts formula, we can easily calculate that the $2k$-th moment of a $N(0, \sigma)$ random variable is of order $\sigma^k$. So the order of $E[X^4]$ is $n^{-4}$. This suffices for the Borel-Cantelli lemma to apply.
EP1-2
Proof. We first note that the second part of the problem is not hard, since $\int_0^t Y_s\,dB_s$ is a martingale with mean 0. For the first part, we do the following construction. We define $Y_t = 1$ for $t \in (0, 1/n]$, and for $t \in (j/n, (j+1)/n]$ ($1 \le j \le n-1$),
$$Y_t := C_j\,\mathbf{1}_{\{B_{(i+1)/n} - B_{i/n} \le 0,\ 0 \le i \le j-1\}},$$
where each $C_j$ is a constant to be determined.
Regarding this as a betting strategy, the intuition behind $Y$ is the following. We start with one dollar; if $B_{1/n} - B_0 > 0$, we stop the game and gain $(B_{1/n} - B_0)$ dollars. Otherwise, we bet $C_1$ dollars for the second run. If $B_{2/n} - B_{1/n} > 0$, we then stop the game and gain $C_1(B_{2/n} - B_{1/n}) + (B_{1/n} - B_0)$ dollars (if this sum is negative, it means we actually lose money, although we win the second bet). Otherwise, we bet $C_2$ dollars for the third run, etc. So in the end, our total gain/loss from this betting is
$$\int_0^1 Y_s\,dB_s = (B_{1/n} - B_0) + \mathbf{1}_{\{B_{1/n}-B_0 \le 0\}}\,C_1 (B_{2/n} - B_{1/n}) + \cdots + \mathbf{1}_{\{B_{1/n}-B_0 \le 0,\,\cdots,\,B_{(n-1)/n}-B_{(n-2)/n} \le 0\}}\,C_{n-1}(B_1 - B_{(n-1)/n}).$$
We now look at the conditions under which $\int_0^1 Y_s\,dB_s \le 0$. There are several possibilities:
(1) $(B_{1/n} - B_0) \le 0$, $(B_{2/n} - B_{1/n}) > 0$, but $C_1(B_{2/n} - B_{1/n}) < |B_{1/n} - B_0|$;
The last event has probability $(1/2)^n$. The first event has probability
$$P(X \le 0,\ Y > 0,\ C_1 Y < -X) \le P(0 < Y < -X/C_1),$$
where $X$ and $Y$ are i.i.d. $N(0, 1/n)$ random variables. We can choose $C_1$ large enough so that this probability is smaller than $1/2^n$. The second event has probability smaller than $P(0 < X < Y/C_2)$, where $X$ and $Y$ are independent Gaussian random variables with mean 0 and variances $1/n$ and $(C_1^2 + 1)/n$, respectively; we can choose $C_2$ large enough so that this probability is smaller than $1/2^n$. We continue this process until we get all the $C_j$'s. Then the probability of $\int_0^1 Y_t\,dB_t \le 0$ is at most $n/2^n$. For $n$ large enough, we then have $P(\int_0^1 Y_t\,dB_t > 0) > 1 - \varepsilon$ for given $\varepsilon$. The process $Y$ is obviously bounded.
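The construction can be simulated. In the sketch below the constants are fixed as $C_j = 100^j$, which is one concrete choice of ours (the proof only requires the $C_j$ to grow fast enough), and the empirical probability of a strictly positive total gain is indeed close to 1:

```python
import math
import random

# Simulate the betting strategy: bet C_j = 100**j after j consecutive
# non-positive increments, and stop at the first positive increment.
# The choice C_j = 100**j is illustrative; the proof only needs C_j large.
random.seed(2)
n = 10
sd = math.sqrt(1.0 / n)   # each increment is N(0, 1/n)
trials = 20000
positive = 0

for _ in range(trials):
    gain = 0.0
    for j in range(n):
        bet = 100.0 ** j              # C_0 = 1 on the first run
        inc = random.gauss(0.0, sd)   # B_{(j+1)/n} - B_{j/n}
        gain += bet * inc
        if inc > 0:                   # stop the game at the first win
            break
    if gain > 0:
        positive += 1

print(positive / trials)   # close to 1 when the C_j grow fast
```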
Comments: Unlike flipping a coin, where the gain/loss is one dollar, we now have random gains/losses $(B_{j/n} - B_{(j-1)/n})$. So there is no sense in checking our loss and making a new strategy constantly. Put into real-world terms: when times are tough and the outcome of life is uncertain, don't regret your loss and estimate how much more you should invest to recover that loss. Just keep trying as hard as you can. When the opportunity comes, you may just get back everything you deserve.
… $[B^2_{j/n} - B^2_{(j-1)/n}]/a_n$ …, and apply the hint in EP1-1.
Proof. A short proof: for part (a), it suffices to set
$$Y_{n+1} = E[R_{n+1} - R_n \mid X_1, \cdots, X_n,\ X_{n+1} = 1].$$
(What does this really mean, rigorously?) For part (b), the answer is NO, and $R_n =$ …
By adaptedness, $R_{n+1} - R_n$ can be represented as $f_{n+1}(X_1, \cdots, X_{n+1})$ for some Borel function $f_{n+1} \in \mathcal{B}(\mathbb{R}^{n+1})$. The martingale property and $\{X_n\}_n$ being i.i.d. Bernoulli random variables imply
$$f_{n+1}(X_1, \cdots, X_n, -1) = -f_{n+1}(X_1, \cdots, X_n, 1).$$
This inspires us to set $Y_{n+1}$ via
$$f_{n+1}(X_1, \cdots, X_n, 1) = E[R_{n+1} - R_n \mid X_1, \cdots, X_n,\ X_{n+1} = 1].$$
For part b), we just assume $\{X_n\}_n$ is i.i.d. and symmetrically distributed. If $(R_n)_n$ has the martingale representation property, then
$$f_{n+1}(X_1, \cdots, X_{n+1})/X_{n+1}$$
must be a function of $X_1, \cdots, X_n$. In particular, for $n = 0$ and $f_1(x) = x^3$, this would force $X_1^2$ to be constant. So Bernoulli-type random variables are the only ones with the martingale representation property.
EP5-1
Proof. $A = \frac{r}{x}\frac{d}{dx} + \frac12\frac{d^2}{dx^2}$, so we can choose $f(x) = x^{1-2r}$ for $r \neq \frac12$ and $f(x) = \log x$ for $r = \frac12$.
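Reading the generator as the Bessel-type operator $A = \frac{r}{x}\frac{d}{dx} + \frac12\frac{d^2}{dx^2}$ (this reading is our reconstruction, chosen because it is the one consistent with the solutions $x^{1-2r}$ and $\log x$), a quick finite-difference check confirms $Af = 0$ for both choices of $f$:

```python
import math

# Finite-difference check that A f = (r/x) f' + (1/2) f'' vanishes for
# f(x) = x^(1-2r) when r != 1/2, and for f(x) = log x when r = 1/2.
# The form of A is a reconstruction consistent with these solutions.
def Af(f, r, x, h=1e-5):
    d1 = (f(x + h) - f(x - h)) / (2 * h)            # central first difference
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)  # central second difference
    return (r / x) * d1 + 0.5 * d2

results = []
for r in (0.2, 0.8, 2.0):
    results.append(Af(lambda x: x ** (1 - 2 * r), r, 1.5))
results.append(Af(math.log, 0.5, 1.5))
print(results)   # every entry ≈ 0 up to discretization error
```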
EP6-1 (a)
Proof. Assume the claim is false; then there exist $t_0 > 0$, $\varepsilon > 0$ and a sequence $\{t_k\}_{k\ge1}$ such that $t_k \uparrow t_0$ and
$$|f'_+(t_k) - f'_+(t_0)| = |f'_+(t_k)| < \frac{\varepsilon}{2}.$$
Meanwhile, there exist infinitely many $t_k$'s such that …
So it suffices to show that $h$ is monotone increasing on $(t_0 - \delta, t_0 + \delta)$. This is easily proved by showing that $h$ cannot attain a local maximum in the interior of $(t_0 - \delta, t_0 + \delta)$.
For uniqueness, by part a), $f(S_{n\wedge\tau})$ is a martingale, so using the optional stopping theorem, we have …
Proof. $A = \mathbb{Z}^3 - \{0\}$, $\partial A = \{0\}$, and $F(0) = 0$. Let $T_0 = \inf\{n \ge 0 : S_n = 0\}$. Let $c \in \mathbb{R}$ and $f(x) = c\,P_x(T_0 = \infty)$. Then $f(0) = 0$, since $T_0 = 0$ under $P_0$. $f$ is clearly bounded. To see $f$ is harmonic, the key is to show $P_x(T_0 = \infty \mid S_1 = y) = P_y(T_0 = \infty)$. This is due to the Markov property: note $T_0 = 1 + T_0\circ\theta_1$. Since $c$ is arbitrary, we have more than one bounded solution.
Proof. By induction, it suffices to show that if $|y - x| = 1$, then $E_y[T_A] < \infty$. We note $T_A = 1 + T_A\circ\theta_1$ for any sample path starting outside $A$. So
$$E_x[T_A \mathbf{1}_{\{S_1 = y\}}] = E_x[T_A \mid S_1 = y]\,P_x(S_1 = y) = E_y[T_A + 1]\,P_x(S_1 = y).$$
Since $E_x[T_A \mathbf{1}_{\{S_1 = y\}}] \le E_x[T_A] < \infty$ and $P_x(S_1 = y) > 0$, we get $E_y[T_A] < \infty$.
Trang 40etdt + 1] = 1 +
Z ∞ 0
Px(τR> t)etdtFor any n ∈ N, Px(τR> n) ≤ Px(∩n
i=1{|Bk−Bk−1| < 2R}) = an, where a = Px(|B1−B0| <2R) < 1 So e(x) ≤ 1+eP∞
n=1(ae)n−1 For small enough, ae< 1, and hence e(x) < ∞.Obviously, is only dependent on D
c)
Proof. Since $q$ is continuous and $\bar D$ is compact, $q$ attains its minimum $M$. If $M \ge 0$, then we have nothing to prove. So WLOG, we assume $M < 0$. Then, similarly to part a),
$$\tilde e(x) \le E_x[e^{-M(\tau\wedge\sigma_\varepsilon)}] \le E_x[e^{-M\sigma_\varepsilon}] = 1 + \int_0^\infty P_x(\sigma_\varepsilon > t)\,(-M)e^{-Mt}\,dt.$$
Note $P_x(\sigma_\varepsilon > t) = P_x(\sup_{s\le t}|B_s - B_0| < \varepsilon) = P_0(\sup_{s\le t}|B_{s/\varepsilon^2}| < 1) = P_x(\sigma_1 > t/\varepsilon^2)$. So
$$\tilde e(x) = 1 + \int_0^\infty P_x(\sigma_1 > u)\,(-M\varepsilon^2)e^{-M\varepsilon^2 u}\,du = E_x[e^{-M\varepsilon^2\sigma_1}].$$
For $\varepsilon$ small enough, $-M\varepsilon^2$ will be so small that, by what we showed in the proof of part a), $E_x[e^{-M\varepsilon^2\sigma_1}]$ will be finite. Obviously, $\varepsilon$ depends on $M$ and $D$ only, hence on $q$ and $D$ only.
d)
Proof. Cf. Rick Durrett's book, Stochastic Calculus: A Practical Introduction, page 160.
Proof. From part d), it suffices to show that for a given $x$, there is a $K = K(D, x) < \infty$ such that if $q \equiv -K$, then $e(x) = \infty$. Since $D$ is open, there exists $r > 0$ such that $B(x, r) \subset\subset D$. Now we assume $q \equiv -K < 0$, where $K$ is to be determined. We have … if we set
$$\delta = \inf_{-\frac{a}{2} < x < \frac{a}{2}} P_x\Big(\max_{t\le 1}|W_t| < a,\ |W_0| < a/2,\ |W_1| < a/2\Big)\ (> 0),$$
then we have
$$P_0\Big(\max_{t\le n}|W_t| < a\Big) > \delta^n,$$
and we are done.
EP7-2
Proof. Consider the case of dimension 1, $D = \{x : x > 0\}$. Then for any $x > 0$, $P(\tau <$ …
Proof. $\forall A \in \mathcal{F}_n$, $E_Q[Z_{n+1}; A] = E_P[M_{n+1}Z_{n+1}; A] = E_P[M_n Z_n; A] = E_Q[Z_n; A]$. So $E_Q[Z_{n+1} \mid \mathcal{F}_n] = Z_n$; that is, $Z_n$ is a $Q$-martingale.
EP8-2 a)
Proof. Let $Z_t = \exp\{\cdots\}$ …, so that $Y_t = B_t - \int_0^t \frac{1}{M_s}\,d\langle M, B\rangle_s$ is a BM. We take $A_t = -\frac{\alpha}{B_t}\mathbf{1}_{\{t\le T\}}$. The SDE for $B$ in terms of $Y_t$ is
$$dB_t = dY_t + \frac{\alpha}{B_t}\mathbf{1}_{\{t\le T\}}\,dt.$$
c)
Proof. Under $Q$, $B$ is a Bessel-type diffusion before it hits $\frac12$. That is, up to the time $T_{1/2}$, $B$ satisfies the equation
$$dB_t = dY_t + \frac{\alpha}{B_t}\,dt.$$
This may sound fishy, as we haven't defined what it means for an SDE to hold only up to a random time. Actually, a rigorous theory can be built for this notion, but we shall avoid this theoretical issue at the moment.
We choose $b > 1$ and define $\tau_b = \inf\{t > 0 : B_t \notin (\frac12, b)\}$. Then $Q_1(T_{1/2} = \infty) = \lim_{b\to\infty} Q_1(B_{\tau_b} = b)$. By the results in EP5-1 and Problem 7.18 in Oksendal's book, we have:
(i) If $\alpha > 1/2$, $\lim_{b\to\infty} Q_1(B_{\tau_b} = b) = \lim_{b\to\infty} \frac{1 - (\frac12)^{1-2\alpha}}{b^{1-2\alpha} - (\frac12)^{1-2\alpha}} = 1 - \big(\tfrac12\big)^{2\alpha-1} > 0$. So in this case, $Q_1(T_{1/2} = \infty) > 0$.
(ii) If $\alpha < 1/2$, $\lim_{b\to\infty} Q_1(B_{\tau_b} = b) = \lim_{b\to\infty} \frac{1 - (\frac12)^{1-2\alpha}}{b^{1-2\alpha} - (\frac12)^{1-2\alpha}} = 0$. So in this case, $Q_1(T_{1/2} = \infty) = 0$.
(iii) If $\alpha = 1/2$, $\lim_{b\to\infty} Q_1(B_{\tau_b} = b) = \lim_{b\to\infty} \frac{0 - \log\frac12}{\log b - \log\frac12} = 0$. So in this case, $Q_1(T_{1/2} = \infty) = 0$.
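These exit probabilities can be checked with a crude Euler scheme. The parameters below (α = 1, b = 2, step size, trial count) are our own illustrative choices; the scale-function value for exiting $(\frac12, b)$ at $b$ from 1 is $\frac{1 - (1/2)^{1-2\alpha}}{b^{1-2\alpha} - (1/2)^{1-2\alpha}}$, which equals $2/3$ here:

```python
import math
import random

# Euler scheme for dX_t = dY_t + (alpha / X_t) dt started at X_0 = 1;
# estimate the probability of exiting (1/2, b) at b and compare with the
# scale-function value (1 - (1/2)^(1-2a)) / (b^(1-2a) - (1/2)^(1-2a)).
random.seed(3)
alpha, b = 1.0, 2.0
dt = 5e-4
sd = math.sqrt(dt)
trials = 3000
hit_b = 0

for _ in range(trials):
    x = 1.0
    while 0.5 < x < b:
        x += alpha / x * dt + random.gauss(0.0, sd)
    if x >= b:
        hit_b += 1

exact = (1 - 0.5 ** (1 - 2 * alpha)) / (b ** (1 - 2 * alpha) - 0.5 ** (1 - 2 * alpha))
print(hit_b / trials, "vs", exact)   # exact = 2/3 for alpha = 1, b = 2
```

The Euler scheme has a small boundary-overshoot bias, so only rough agreement should be expected.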
To see $\rho_D$ is a metric on $D$, note $\rho_D(z, z) = 0$ by definition and $\rho_D(z, \omega) \ge 1$ for $z \neq \omega$. So $\rho_D(z, \omega) = 0$ iff $z = \omega$. If $\{x_k\}$ is a finite adjacent sequence connecting $z_1$ and $z_2$, and $\{y_l\}$ is a finite adjacent sequence connecting $z_2$ and $z_3$, then $\{x_k, z_2, y_l\}_{k,l}$ is a finite adjacent sequence connecting $z_1$ and $z_3$. So $\rho_D(z_1, z_3) \le \rho_D(z_1, z_2) + \rho_D(z_2, z_3)$. Meanwhile, it's clear that $\rho_D(z, \omega) \ge 0$ and $\rho_D(z, \omega) = \rho_D(\omega, z)$. So $\rho_D$ is a metric.
If $\mathrm{dist}(x_{k-1}, \partial D) \le \mathrm{dist}(z, \partial D)$, then for $\omega$ close to $z$, $\frac12\max\{\mathrm{dist}(\omega, \partial D), \mathrm{dist}(x_{k-1}, \partial D)\}$ is very close to $\frac12\max\{\mathrm{dist}(z, \partial D), \mathrm{dist}(x_{k-1}, \partial D)\} = \frac12\mathrm{dist}(z, \partial D)$. Hence, for $\omega$ close to $z$,
$$|\omega - x_{k-1}| \le |z - \omega| + |z - x_{k-1}| < \frac12\max\{\mathrm{dist}(x_{k-1}, \partial D), \mathrm{dist}(\omega, \partial D)\}.$$
Therefore $\omega$ and $x_{k-1}$ are adjacent. This shows $\rho_D(z_0, \omega) \le k$, i.e. $\omega \in U_k$.
c)
Proof. By induction, it suffices to show that there exists a constant $c > 0$ such that for adjacent $z, \omega \in D$, $h(z) \le c\,h(\omega)$. Indeed, let $r = \frac14\min\{\mathrm{dist}(z, \partial D), \mathrm{dist}(\omega, \partial D)\}$; then by the mean-value property, $\forall y \in B(\omega, r)$, we have $B(y, r) \subset B(\omega, 2r)$, so …
Proof. We first make the following observation. Consider circles centered at 0, with radii $r$ and $2r$, respectively. Let $B$ be a BM in the plane and $\sigma_{2r} = \inf\{t > 0 : |B_t| = 2r\}$. For $x \in \partial B(0, r)$, $P_x([B_0, B_{\sigma_{2r}}]$ doesn't loop around $0)$ is the same for all $x$ on $\partial B(0, r)$, by the rotational invariance of BM. $\forall\theta > 0$, we define $\bar B_t = B_{\theta t}$ and $\bar\sigma_{2r} = \inf\{t > 0 : |\bar B_t| = 2r\}$. Since $\bar B$ and $B$ have the same trajectories,
$P_x([B_0, B_{\sigma_{2r}}]$ doesn't loop around $0)$
$= P([B_0, B_{\sigma_{2r}}] + x$ doesn't loop around $0)$
$= P([\bar B_0, \bar B_{\bar\sigma_{2r}}] + x$ doesn't loop around $0)$
$= P\big(\frac{1}{\sqrt\theta}[\bar B_0, \bar B_{\bar\sigma_{2r}}] + \frac{x}{\sqrt\theta}$ doesn't loop around $0\big)$
$= P\big([W_0, W_\tau] + \frac{x}{\sqrt\theta}$ doesn't loop around $0\big)$
$= P_{x/\sqrt\theta}\big([W_0, W_\tau]$ doesn't loop around $0\big)$.
Note $\frac{x}{\sqrt\theta} \in \partial B\big(0, \frac{r}{\sqrt\theta}\big)$; we conclude that for different $r$'s, the probability that a BM starting from $\partial B(0, r)$ exits $B(0, 2r)$ without looping around 0 is the same.
Now we assume $2^{-n-1} \le |x| < 2^{-n}$, and set $\sigma_j = \inf\{t > 0 : |B_t| = 2^{-j}\}$. Then for $E_j = \{[B_{\sigma_j}, B_{\sigma_{j-1}}]$ doesn't loop around $0\}$, we have $E \subset \cap_{j=1}^n E_j$. From the observation above, $P_{B_{\sigma_j}}([B_0, B_{\sigma_{j-1}}]$ doesn't loop around $0)$ is a constant, say $\beta$. Using the strong Markov property and induction, we have
$$P_x\big(\cap_{j=1}^n E_j\big) = E_x\big[\mathbf{1}_{\cap_{j=2}^n E_j}\,P_x(E_1 \mid \mathcal{F}_{\sigma_1})\big] = \beta\,P_x\big(\cap_{j=2}^n E_j\big) = \beta^n = 2^{n\log_2\beta}.$$
Set $-\log_2\beta = \alpha$; then $P_x(E) \le 2^{-n\alpha} = (2^{-n})^\alpha \le (2|x|)^\alpha$. Clearly $\beta \in (0, 1)$.
By part a), $P_0([B_0, B_{\sigma_\varepsilon}]$ loops around $0) = 1$. So
$$P_0\big(\bar B \text{ loops around 0 before exiting } B(0, \varepsilon)\big) = 1.$$
This means $P(\tau_D < \bar\sigma_\varepsilon) = 1$, $\forall\varepsilon > 0$, which is equivalent to $x$ being regular.
EP9-3 a)
Proof. We first establish a derivative estimate for harmonic functions. Let $h$ be harmonic in $D$; then $\frac{\partial h}{\partial z_i}$ is also harmonic. By the mean-value property and the integration-by-parts formula, $\forall z_0 \in D$ and $\forall r > 0$ such that $B(z_0, r) \subset U$, we have …
$$|h_n(z) - h_n(\omega)| \le \frac{2d}{\eta}\,C\,|z - \omega|.$$
This clearly shows the desired $\delta$ exists.
b)
Proof. Let $K$ be a compact subset of $D$; then by part a) and the Arzela-Ascoli theorem, $\{h_n\}_n$ is relatively compact in $C(K)$. So there is a subsequence $\{h_{n_j}\}$ such that $h_{n_j} \to h$ uniformly on $K$. Furthermore, by the mean-value property, $h$ must also be harmonic in the interior of $K$. By choosing a sequence of compact subsets $\{K_n\}$ increasing to $D$ and choosing diagonal subsequences, we can find a subsequence of $\{h_n\}$ that converges uniformly on any compact subset of $D$. This consistently defines a function $h$ in $D$. Since harmonicity is a local property, $h$ is harmonic in $D$.
EP10-1 a)
Proof. First, we note that
$$P_x(B_1 \ge 1;\ B_t > 0,\ \forall t \in [0, 1]) = P_x(B_1 \ge 1) - P_x\Big(\inf_{0\le s\le 1} B_s \le 0,\ B_1 \ge 1\Big).$$
Let $\tau_0$ be the first passage time of BM hitting 0; then by the strong Markov property,
$$P_x(B_1 \ge 1;\ B_t > 0,\ \forall t \in [0, 1]) = P_x(B_1 \ge 1) - P_x(B_1 \le -1) = \int_{1-x}^{1+x} \frac{e^{-y^2/2}}{\sqrt{2\pi}}\,dy \ge \frac{2x\,e^{-2}}{\sqrt{2\pi}},$$
where the last inequality is due to $x < 1$.
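The closing inequality can be spot-checked numerically (the sample points are our own; the bound holds because the integrand is at least $e^{-(1+x)^2/2} \ge e^{-2}$ on the interval of length $2x$):

```python
import math

# Check ∫_{1-x}^{1+x} e^{-y²/2}/√(2π) dy ≥ 2x e^{-2}/√(2π) for several 0 < x < 1,
# writing the Gaussian integral as Φ(1+x) - Φ(1-x).
def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

checks = []
for x in (0.1, 0.5, 0.9, 0.99):
    lhs = Phi(1 + x) - Phi(1 - x)                  # the Gaussian integral
    rhs = 2 * x * math.exp(-2) / math.sqrt(2 * math.pi)
    checks.append(lhs >= rhs)
print(checks)   # all True
```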
So $F(n+m) \le F(n)F(m)$. By the properties of submultiplicative functions, $\lim_{n\to\infty}\frac{\log F(n)}{n}$ exists; we denote this limit by $-\alpha$. For $m$ large enough, we can find $n$ such that $2^n \le m < 2^{n+1}$; then $P(E_{2^n}) \ge P(E_m) \ge P(E_{2^{n+1}})$. So
$$\frac{\log P(E_{2^n})}{\log 2^n}\ \cdots$$
Let $m \to \infty$; then $\frac{\log 2^n}{\log m} \to 1$, as seen from $\log 2^n \le \log m < \log 2^n + \log 2$. So $\lim_m \frac{\log P(E_m)}{\log m}$ exists and equals $-\alpha$. To see $\alpha \in (0, 1]$, note $F(1) < 1$ and $F(n) \le F(1)^n$.
By induction, $M_n \ge \delta^n$. Hence $\inf_n \frac{\log M_n}{n} \ge \log\delta > -\infty$. Let $\beta = \inf_n \frac{\log M_n}{n}$; then $M_n \ge e^{\beta n}$. We set $\alpha = e^\beta$; then $M_n \ge \alpha^n$. Meanwhile, there exists a constant $C \in (0, \infty)$ such that for $m_n = \min_{k\le N} f_n(k)$, we have $M_n \le C m_n$. Indeed, for $n = 1$, $M_1 = m_1$, and for $n > 1$, …