Chapter 8. Applications of FBSDEs

Differentiating (5.14) with respect to $x$ twice and denoting $v = u_{xx}$, $p = q_{xx}$, we see that $(v,p)$ satisfies the following (linear) BSPDE:
(5.15)
$$dv = -\Big\{\tfrac12 x^2\sigma^2 v_{xx} + (2x\sigma^2 + xr)v_x + (\sigma^2 + r)v + x\sigma p_x + (2\sigma - \theta)p\Big\}\,dt - p\,dW(t), \quad (t,x) \in [0,T) \times (0,\infty);$$
$$v(T,x) = g''(x).$$
Here again the well-posedness of (5.15) can be obtained by considering its equivalent form after the Euler transformation (since $r$ and $\sigma$ are independent of $x$!). Now we can apply Chapter 5, Corollary 6.3 to conclude that $v \ge 0$ whenever $g'' \ge 0$, and hence $u$ is convex provided $g$ is.
We can discuss more complicated situations by using the comparison theorems in Chapter 5. For example, let us assume that both $r$ and $\sigma$ are deterministic functions of $(t,x)$, and that they are both $C^2$, for simplicity. Then (5.10) coincides with (5.4). Now differentiating (5.4) twice and denoting $v = u_{xx}$, we see that $v$ satisfies the following PDE:
(5.16)
$$0 = v_t + \tfrac12 x^2\sigma^2 v_{xx} + \tilde a x v_x + \tilde b v + r_{xx}(x u_x - u),$$
$$v(T,x) = g''(x), \quad x \ge 0,$$
where
$$\tilde a = 2\sigma^2 + 2x\sigma\sigma_x + r; \qquad \tilde b = \sigma^2 + 4x\sigma\sigma_x + (x\sigma_x)^2 + x^2\sigma\sigma_{xx} + 2xr_x + r.$$
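The grouping of terms in (5.16) can be verified mechanically. The following sketch assumes nothing beyond the Black-Scholes operator of (5.4) with $(t,x)$-dependent $r$ and $\sigma$; it differentiates the operator twice in $x$ symbolically and checks the claimed coefficients $\tilde a$ and $\tilde b$ (the symbol names are illustrative):

```python
import sympy as sp

t, x = sp.symbols('t x')
u = sp.Function('u')(t, x)
r = sp.Function('r')(t, x)
s = sp.Function('sigma')(t, x)

# the Black-Scholes operator of (5.4) with (t, x)-dependent r and sigma
F = sp.diff(u, t) + sp.Rational(1, 2) * x**2 * s**2 * sp.diff(u, x, 2) \
    + r * x * sp.diff(u, x) - r * u

# claimed coefficients of (5.16), with v = u_xx
a_tilde = 2 * s**2 + 2 * x * s * sp.diff(s, x) + r
b_tilde = (s**2 + 4 * x * s * sp.diff(s, x) + (x * sp.diff(s, x))**2
           + x**2 * s * sp.diff(s, x, 2) + 2 * x * sp.diff(r, x) + r)

# right-hand side of (5.16), written back in terms of derivatives of u
claimed = (sp.diff(u, t, x, x)
           + sp.Rational(1, 2) * x**2 * s**2 * sp.diff(u, x, 4)
           + a_tilde * x * sp.diff(u, x, 3)
           + b_tilde * sp.diff(u, x, 2)
           + sp.diff(r, x, 2) * (x * sp.diff(u, x) - u))

residual = sp.simplify(sp.diff(F, x, 2) - claimed)
assert residual == 0   # differentiating (5.4) twice gives exactly (5.16)
```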
Now let us denote $V = xu_x - u$; then some computation shows that $V$ satisfies the equation:

(5.17)
$$0 = V_t + \tfrac12 x^2\sigma^2 V_{xx} + \hat a x V_x + (x r_x - r)V, \quad \text{on } [0,T) \times (0,\infty),$$
$$V(T,x) = x g'(x) - g(x), \quad x \ge 0,$$
for some function $\hat a$ depending on $\sigma$ and $\tilde b$ (whence on $r$ and $\sigma$). Therefore, applying the comparison theorems of Chapter 5 (using the Euler transformation if necessary), we can derive the following results. Assume that $g$ is convex; then
(i) if $r$ is convex and $xg'(x) - g(x) \ge 0$, then $u$ is convex;
(ii) if $r$ is concave and $xg'(x) - g(x) \le 0$, then $u$ is convex;
(iii) if $r$ is independent of $x$, then $u$ is convex.
Indeed, if $xg'(x) - g(x) \ge 0$, then $V \ge 0$ by Chapter 5, Corollary 6.3. This, together with the convexity of $r$ and $g$, in turn shows that the solution $v$ of (5.16) is non-negative, proving (i). Part (ii) can be argued similarly. To see (iii), note that when $r$ is independent of $x$, (5.16) is homogeneous; thus the convexity of $g$ implies that of $u$, thanks to Chapter 5, Corollary 6.3 again.
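As a concrete instance of (iii) (an illustration with constant coefficients, not taken from the text): for the call payoff $g(x) = (x-K)^+$, which is convex, the classical Black-Scholes price is convex in the spot, as a finite-difference check of the closed-form formula confirms. Parameter values are illustrative:

```python
from math import log, sqrt, exp, erf

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def bs_call(x, K, r, sigma, tau):
    """Black-Scholes call price with time-to-maturity tau; constant r, sigma."""
    d1 = (log(x / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return x * norm_cdf(d1) - K * exp(-r * tau) * norm_cdf(d2)

# second difference in the spot variable: nonnegative, i.e. the price is convex
K, r, sigma, tau, h = 100.0, 0.05, 0.2, 0.5, 0.01
curvature = [bs_call(x + h, K, r, sigma, tau) - 2 * bs_call(x, K, r, sigma, tau)
             + bs_call(x - h, K, r, sigma, tau) for x in (80.0, 100.0, 120.0)]
assert all(c >= 0.0 for c in curvature)
```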
§ Robustness of the Black-Scholes formula
The robustness of the Black-Scholes formula concerns the following problem: suppose a practitioner's information leads him to a misspecified value of, say, the volatility $\sigma$, and he calculates the option price according to this misspecified parameter and equation (5.4), and then tries to hedge the contingent claim. What will be the consequence?
Let us first assume that the only misspecified parameter is the volatility, and denote it by $\sigma = \sigma(t,x)$, which is $C^2$ in $x$; and assume that the interest rate is deterministic and independent of the stock price. By conclusion (iii) in the previous part we know that $u$ is convex in $x$. Now let us assume that the true volatility is an $\{\mathcal F_t\}_{t\ge0}$-adapted process, denoted by $\tilde\sigma$, satisfying

(5.18)
$$\tilde\sigma(t) \ge \sigma(t, X_t), \quad \forall t \in [0,T], \ \text{a.s.}$$
Since in this case we have proved that $u$ is convex, it is easy to check that (6.16) of Chapter 5 reads

(5.19)
$$(\tilde{\mathcal L} - \mathcal L)u + (\tilde{\mathcal M} - \mathcal M)q + \tilde f - f = \tfrac12 x^2[\tilde\sigma^2 - \sigma^2]u_{xx} \ge 0,$$

where $(\mathcal L, \mathcal M)$ is the pair of differential operators corresponding to the misspecified coefficients $(r, \sigma)$. Thus we conclude from Chapter 5, Theorem 6.2 that $\tilde u(t,x) \ge u(t,x)$, $\forall (t,x)$, a.s.; namely, the price computed under the larger (here, the true) volatility dominates the misspecified one.
Now let us assume that the inequality in (5.18) is reversed. Since both (5.4) and (5.14) are linear and homogeneous, $(-\tilde u, -\tilde q)$ and $(-u, 0)$ are both solutions to (5.14) and (5.4) as well, with the terminal condition replaced by $-g(x)$. But in this case (5.19) becomes

$$(\tilde{\mathcal L} - \mathcal L)(-u) + (\tilde{\mathcal M} - \mathcal M)\cdot 0 + \tilde f - f = \tfrac12 x^2[\sigma^2 - \tilde\sigma^2]u_{xx} \ge 0,$$

because $u$ is convex and $\tilde\sigma^2 \le \sigma^2$. Thus $-\tilde u \ge -u$, namely $\tilde u \le u$.
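The one-sided domination has a familiar constant-coefficient shadow (again only an illustration, not the BSPDE comparison argument itself): for a convex payoff, the Black-Scholes price is nondecreasing in the volatility, so the model using the larger volatility produces the larger price:

```python
from math import log, sqrt, exp, erf

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def bs_call(x, K, r, sigma, tau):
    """Black-Scholes call price; constant coefficients."""
    d1 = (log(x / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    return x * norm_cdf(d1) - K * exp(-r * tau) * norm_cdf(d1 - sigma * sqrt(tau))

# prices computed under increasing volatilities dominate one another
vols = [0.1, 0.2, 0.3, 0.4]
prices = [bs_call(100.0, 100.0, 0.05, s, 1.0) for s in vols]
assert all(p1 <= p2 for p1, p2 in zip(prices, prices[1:]))
```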
Using a similar technique we can discuss some more complicated situations. For example, let us allow the interest rate $r$ to be misspecified as well, but in such a form that it is convex in $x$, say. Assume that the payoff function $g$ satisfies $xg'(x) - g(x) \ge 0$, and that $\tilde r$ and $\tilde\sigma$ are the true interest rate and volatility, $\{\mathcal F_t\}_{t\ge0}$-adapted random fields satisfying $\tilde r(t,x) \ge r(t,x)$ and $\tilde\sigma(t,x) \ge \sigma(t,x)$, $\forall (t,x)$. Then, using the same notation as before, one shows that

$$(\tilde{\mathcal L} - \mathcal L)u = \tfrac12 x^2[\tilde\sigma^2 - \sigma^2]u_{xx} + (\tilde r - r)[x u_x - u] \ge 0,$$

because $u$ is convex and $xu_x - u = V \ge 0$, thanks to the arguments in the previous part. Consequently one has $\tilde u(t,x) \ge u(t,x)$, $\forall(t,x)$, a.s. Namely, we again derive a one-sided domination between the true and the misspecified values.
We remark that if the misspecified volatility is not a deterministic function of the stock price, the comparison may fail. We refer the interested reader to El Karoui-Jeanblanc-Picqué-Shreve [1] for an interesting counterexample.
§6 An American Game Option
In this section we apply the results of Chapter 7 to derive an ad hoc option pricing problem which we call the American game option.
To begin with, let us consider the following FBSDE with reflections (compare to Chapter 7, (3.2)):
(6.1)
Note that the forward equation does not have reflection; and we assume that $m = 1$ and $\mathcal O_2(t,x,\omega) = (L(t,x,\omega), U(t,x,\omega))$, where $L$ and $U$ are two random fields such that $L(t,x,\omega) \le U(t,x,\omega)$ for all $(t,x,\omega) \in [0,T] \times \mathbb R^n \times \Omega$. We assume further that both $L$ and $U$ are continuous functions in $x$ for all $(t,\omega)$, and are $\{\mathcal F_t\}_{t\ge0}$-progressively measurable, continuous processes for all $x$.
In light of the results of the previous section, we can think of $X$ in (6.1) as a price process of financial assets, and of $Y$ as the wealth process of a (large) investor in the market. However, we should use the latter interpretation only up until the first time we have $d\xi < 0$. In other words, no external funds are allowed to be added to the investor's wealth, although he is allowed to consume.
The American game option can be described as follows. Unlike the usual American option, where only the buyer has the right to choose the exercise time, in a game option we allow the seller to have the same right as well; namely, the seller can force the exercise if he wishes. However, in order to get a nontrivial option (i.e., to avoid immediate exercise being optimal), it is required that the payoff be higher if the seller opts to force the exercise. Of course, the seller may choose not to do anything, in which case the game option becomes the usual American option.
To be more precise, let us denote by $\mathcal M_{t,T}$ the set of $\{\mathcal F_t\}_{t\ge0}$-stopping times taking values in $[t,T]$, and let $t \in [0,T)$ be the time when the "game" starts. Let $\tau \in \mathcal M_{t,T}$ be the time the buyer chooses to exercise the option, and $\sigma \in \mathcal M_{t,T}$ be that of the seller. If $\tau \le \sigma$, then the seller pays $L(\tau, X_\tau)$; if $\sigma < \tau$, then the seller pays $U(\sigma, X_\sigma)$. If neither exercises the option by the maturity date $T$, then the seller pays $B = g(X_T)$. We define the minimal hedging price of this contract to be the infimum of initial wealth amounts $Y_0$ such that the seller can deliver the payoff, a.s., without having to use additional outside funds. In other words, his wealth process has to follow the dynamics of $Y$ (with $d\xi \ge 0$), up to the exercise time $\sigma \wedge \tau \wedge T$,
and at the exercise time we must have

(6.2)
$$Y_{\sigma\wedge\tau\wedge T} \ge g(X_T)\mathbf 1_{\{\sigma\wedge\tau = T\}} + L(\tau, X_\tau)\mathbf 1_{\{\tau < T,\, \tau \le \sigma\}} + U(\sigma, X_\sigma)\mathbf 1_{\{\sigma < \tau\}}.$$
Our purpose is to determine the minimal hedging price, as well as the corresponding minimal hedging process.
To solve this option pricing problem, it is useful to first study the following stochastic game (Dynkin game): there are two players, each of whom can choose a (stopping) time to stop the game over a given horizon $[t,T]$. Let $\sigma \in \mathcal M_{t,T}$ be the time that player I chooses, and $\tau \in \mathcal M_{t,T}$ be that of player II. If $\sigma < \tau$, player I pays $U(\sigma)$ $(= U(\sigma, X_\sigma))$ to player II; whereas if $\tau \le \sigma$ and $\tau < T$, player I pays $L(\tau)$ $(= L(\tau, X_\tau))$ (yes, in both cases player I pays!). If no one stops by time $T$, player I pays $B$. There is also a running cost $h(t)$ $(= h(t, X_t, Y_t, Z_t))$. In other words, the payoff player I has to pay is given by
(6.3)
$$R_t^B(\sigma,\tau) \triangleq \int_t^{\sigma\wedge\tau} h(u)\,du + B\mathbf 1_{\{\sigma\wedge\tau = T\}} + L(\tau)\mathbf 1_{\{\tau < T,\, \tau\le\sigma\}} + U(\sigma)\mathbf 1_{\{\sigma<\tau\}},$$

where $B \in L^2(\Omega)$ is a given $\mathcal F_T$-measurable random variable satisfying $L(T) \le B \le U(T)$. Suppose that player II is trying to maximize the payoff, while player I attempts to minimize it. Define the upper and lower values of the game by
(6.4)
$$\bar V(t) \triangleq \operatorname*{ess\,inf}_{\sigma\in\mathcal M_{t,T}}\ \operatorname*{ess\,sup}_{\tau\in\mathcal M_{t,T}} E\{R_t^B(\sigma,\tau) \mid \mathcal F_t\},$$
$$\underline V(t) \triangleq \operatorname*{ess\,sup}_{\tau\in\mathcal M_{t,T}}\ \operatorname*{ess\,inf}_{\sigma\in\mathcal M_{t,T}} E\{R_t^B(\sigma,\tau) \mid \mathcal F_t\},$$

respectively; and we say that the game has a value if $\bar V(t) = \underline V(t) \triangleq V(t)$.
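On a discrete-time tree, the upper and lower values in (6.4) collapse into a single backward recursion, which gives a cheap way to experiment with Dynkin games (a toy sketch under assumed binomial dynamics and zero running cost; it is not the FBSDER construction used below): at each node, the conditional expectation of tomorrow's value is floored at $L$ (player II may stop and collect) and capped at $U$ (player I may stop and pay):

```python
import numpy as np

def dynkin_value(x0=100.0, u=1.05, d=0.95, p=0.5, N=50, strike=100.0, penalty=5.0):
    """Value of a discrete Dynkin game on a binomial tree (zero running cost).

    Player II stops and receives L(x) = (x - strike)^+; player I stops and
    pays U(x) = L(x) + penalty; at maturity B = L(X_T).
    Backward recursion: V_k = clip(E[V_{k+1}], L_k, U_k)."""
    j = np.arange(N + 1)
    x = x0 * u**j * d**(N - j)                    # terminal nodes
    V = np.maximum(x - strike, 0.0)               # B = g(X_T)
    for k in range(N - 1, -1, -1):
        j = np.arange(k + 1)
        x = x0 * u**j * d**(k - j)
        L = np.maximum(x - strike, 0.0)
        U = L + penalty
        cont = p * V[1:] + (1.0 - p) * V[:-1]     # E[V_{k+1} | current node]
        V = np.minimum(U, np.maximum(L, cont))    # seller caps, buyer floors
        assert np.all(L <= V) and np.all(V <= U)  # value stays inside the band
    return float(V[0])

value = dynkin_value()
```

By construction the value at the root lies between $L(x_0) = 0$ and $U(x_0) = 5$, mirroring the bracketing $L \le Y \le U$ of the reflected backward component.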
The solution to the Dynkin game is given by the following theorem, which can be obtained by a line-by-line analogue of Theorem 4.1 in Cvitanić and Karatzas [2]. Here we give only the statement.
Theorem 6.1. Suppose that there exists a solution $(X, Y, Z, \xi)$ to FBSDER (6.1) (with $\mathcal O_2(t,x) = (L(t,x), U(t,x))$). Then the game (6.3) with $B = g(X_T)$, $h(t) = h(t, X_t, Y_t, Z_t)$, $L(t,\omega) = L(t, X_t(\omega))$, and $U(t,\omega) = U(t, X_t(\omega))$ has a value $V(t)$, given by the backward component $Y$ of the solution to the FBSDER; i.e., $\bar V(t) = \underline V(t) = V(t) = Y_t$, a.s., for all $0 \le t \le T$. Moreover, there exists a saddle point $(\hat\sigma_t, \hat\tau_t) \in \mathcal M_{t,T} \times \mathcal M_{t,T}$, given by

$$\hat\sigma_t \triangleq \inf\{s \in [t,T): Y_s = U(s, X_s)\} \wedge T,$$
$$\hat\tau_t \triangleq \inf\{s \in [t,T): Y_s = L(s, X_s)\} \wedge T,$$
namely, for all $\sigma, \tau \in \mathcal M_{t,T}$ we have

$$E\{R_t^{g(X_T)}(\hat\sigma_t, \tau) \mid \mathcal F_t\} \le E\{R_t^{g(X_T)}(\hat\sigma_t, \hat\tau_t) \mid \mathcal F_t\} = Y_t \le E\{R_t^{g(X_T)}(\sigma, \hat\tau_t) \mid \mathcal F_t\}, \quad \text{a.s.}$$
In what follows, when we mention the FBSDER, we mean (6.1) specified as in Theorem 6.1.
Theorem 6.2. The minimal hedging price of the American game option is greater than or equal to $\bar V(0)$, the upper value of the game (at $t = 0$) of Theorem 6.1. If the corresponding FBSDER has a solution $(\hat X, \hat Y, \hat Z, \hat\xi)$, then the minimal hedging price is equal to $\hat Y_0$.
Proof: Fix the exercise times $\sigma$, $\tau$ of the seller and the buyer, respectively. If $Y$ is the seller's hedging process, it satisfies the following dynamics for $t \le \tau \wedge \sigma \wedge T$:

$$dY_t + h(t, X_t, Y_t, Z_t)\,dt = Z_t\,dW_t - d\xi_t,$$

with $\xi$ non-decreasing. Hence $Y_t + \int_0^t h(s, X_s, Y_s, Z_s)\,ds$ is a supermartingale. From this and the requirement that $Y$ be a hedging process, we get $Y_t \ge E\{R_t^{g(X_T)}(\sigma, \tau) \mid \mathcal F_t\}$, $\forall t$, a.s., in the notation of Theorem 6.1. Since the buyer is trying to maximize the payoff, and the seller to minimize it, we get $Y_t \ge \bar V(t)$, $\forall t$, a.s. Consequently, the minimal hedging price is no less than $\bar V(0)$.
Conversely, if the FBSDER has a solution with $\hat Y$ as the backward component, then by Theorem 6.1 the process $\hat Y$ is equal to the value process of the game, and by (4.4) (with $t = 0$) and (2.10), up until the optimal exercise time $\hat\sigma := \hat\sigma_0$ for the seller, it obeys the dynamics of a wealth process, since $\hat\xi_t$ is non-decreasing for $t \le \hat\sigma_0$. So the seller can start with $\hat Y_0$, follow the dynamics of $\hat Y$ until $\hat\sigma$, and then exercise, if the buyer has not exercised first. In general, from the saddle-point property we know that, for any $\tau \in \mathcal M_{0,T}$,

$$\hat Y_{\hat\sigma\wedge\tau} \ge g(X_T)\mathbf 1_{\{\hat\sigma\wedge\tau=T\}} + L(\tau, X_\tau)\mathbf 1_{\{\tau<T,\,\tau\le\hat\sigma\}} + U(\hat\sigma, X_{\hat\sigma})\mathbf 1_{\{\hat\sigma<\tau\}}.$$

This implies that the seller can deliver the required payoff if he uses $\hat\sigma$ as his exercise time, no matter what the buyer's exercise time $\tau$ is. Consequently, $\hat Y_0 = V(0)$ is no less than the minimal hedging price. $\square$
Chapter 9. Numerical Methods for FBSDEs
In the previous chapter we have seen various applications of FBSDEs in theoretical and applied fields. In many cases a satisfactory numerical simulation is highly desirable. In this chapter we present a complete numerical algorithm for a fairly large class of FBSDEs, and analyze its consistency as well as its rate of convergence. We note that in the standard forward SDE case two types of approximations are often considered: a strong scheme, which typically converges pathwise at a rate $O(h^{1/2})$, and a weak scheme, which approximates only $E\{f(X(T))\}$, with a possibly faster rate of convergence. However, as we shall see later, in our case the weak convergence is a simple consequence of the pathwise convergence, and the rate of convergence of our scheme is the same as that of the strong scheme for pure forward SDEs, which is a little surprising because an FBSDE is much more complicated than a forward SDE in nature.
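The strong scheme alluded to here is, for a pure forward SDE, the classical Euler-Maruyama scheme; a minimal sketch for geometric Brownian motion, where the exact solution driven by the same Brownian increments is available for a pathwise comparison (all parameter values are illustrative):

```python
import numpy as np

def euler_maruyama_gbm(x0, mu, sigma, T, n_steps, rng):
    """Euler-Maruyama endpoint for dX = mu*X dt + sigma*X dW, together with the
    exact solution X(T) = x0*exp((mu - sigma^2/2)T + sigma*W(T)) on the same path."""
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)
    x = x0
    for dw in dW:
        x = x + mu * x * dt + sigma * x * dw   # one Euler step
    exact = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW.sum())
    return x, exact

rng = np.random.default_rng(0)
approx, exact = euler_maruyama_gbm(1.0, 0.05, 0.2, 1.0, 2000, rng)
# loose tolerance; the pathwise (strong) error at this step size is much smaller
assert abs(approx - exact) < 0.05
```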
§1 Formulation of the Problem
In this chapter we consider the following FBSDE: for $t \in [0,T]$,

(1.1)
$$X(t) = x + \int_0^t b(s, \Theta(s))\,ds + \int_0^t \sigma(s, X(s), Y(s))\,dW(s);$$
$$Y(t) = g(X(T)) + \int_t^T \hat b(s, \Theta(s))\,ds - \int_t^T Z(s)\,dW(s),$$
where $\Theta = (X, Y, Z)$. We note that in some applications (e.g., Chapter 8, the section on Black's Consol Rate Conjecture), the FBSDE (1.1) takes a slightly simpler form:

(1.2)
$$X(t) = x + \int_0^t b(s, X(s), Y(s))\,ds + \int_0^t \sigma(s, X(s), Y(s))\,dW(s);$$
$$Y(t) = g(X(T)) + \int_t^T \hat b(s, X(s), Y(s))\,ds - \int_t^T Z(s)\,dW(s).$$

That is, the coefficients $b$ and $\hat b$ do not depend on $Z$ explicitly, and often in these cases only the components $(X, Y)$ are of significant interest. In what follows we shall call (1.2) the "special case," when only the approximation of $(X, Y)$ is considered; and we call (1.1) the "general case," when the approximation of $(X, Y, Z)$ is required. We note that in what follows we restrict ourselves to the case where all the processes involved are one-dimensional. The higher-dimensional case can be discussed along the same lines, but it is technically much more complicated. Furthermore, we shall impose the following standing assumptions:
(A1) The functions $b$, $\hat b$ and $\sigma$ are continuously differentiable in $t$ and twice continuously differentiable in $x$, $y$, $z$. Moreover, if we denote any one of these functions generically by $\phi$, then there exists a constant $\alpha \in (0,1)$ such that, for fixed $y$ and $z$, $\phi(\cdot,\cdot,y,z) \in C^{1+\alpha,2+\alpha}$. Furthermore, for some $L > 0$,

$$\|\phi(\cdot,\cdot,y,z)\|_{1+\alpha,2+\alpha} \le L, \quad \forall (y,z) \in \mathbb R^2.$$
(A2) The function $\sigma$ satisfies

(1.3)
$$\mu \le \sigma(t,x,y) \le C, \quad \forall (t,x,y) \in [0,T] \times \mathbb R^2,$$

where $0 < \mu \le C$ are two constants.
(A3) The function $g$ belongs boundedly to $C^{4+\alpha}$ for some $\alpha \in (0,1)$ (one may assume that $\alpha$ is the same as that in (A1)).
It is clear that the assumptions (A1)-(A3) are stronger than those in Chapter 4; therefore, applying Theorem 2.2 of Chapter 4, we see that the FBSDE (1.1) has a unique adapted solution, which can be constructed via the Four Step Scheme. That is, the adapted solution $(X, Y, Z)$ of (1.1) can be obtained in the following way:
(1.4)
$$X(t) = x + \int_0^t \tilde b(s, X(s))\,ds + \int_0^t \tilde\sigma(s, X(s))\,dW(s),$$
$$Y(t) = \theta(t, X(t)), \qquad Z(t) = \sigma(t, X(t), \theta(t, X(t)))\,\theta_x(t, X(t)),$$

where

$$\tilde b(t,x) = b(t, x, \theta(t,x), \sigma(t,x,\theta(t,x))\theta_x(t,x)), \qquad \tilde\sigma(t,x) = \sigma(t, x, \theta(t,x));$$
and $\theta \in C^{1+\alpha,2+\alpha}$, for some $0 < \alpha < 1$, is the unique classical solution to the quasilinear parabolic PDE:

(1.5)
$$\theta_t + \tfrac12 \sigma^2(t,x,\theta)\theta_{xx} + b(t, x, \theta, \sigma(t,x,\theta)\theta_x)\theta_x + \hat b(t, x, \theta, \sigma(t,x,\theta)\theta_x) = 0, \quad (t,x) \in (0,T) \times \mathbb R,$$
$$\theta(T,x) = g(x), \quad x \in \mathbb R.$$
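The decoupling (1.4) can be exercised on a toy example in which $\theta$ is available in closed form (the example is an illustration, not from the text): take $b \equiv 0$, $\hat b \equiv 0$, $\sigma$ constant and $g(x) = x^2$, so that (1.5) reduces to $\theta_t + \frac12\sigma^2\theta_{xx} = 0$, $\theta(T,x) = x^2$, solved by $\theta(t,x) = x^2 + \sigma^2(T-t)$; the backward pair is then read off pathwise as $Y_t = \theta(t, X_t)$, $Z_t = \sigma\theta_x(t, X_t)$:

```python
import numpy as np

sigma, T, x0, n = 0.3, 1.0, 1.0, 1000

def theta(t, x):
    """Closed-form solution of theta_t + 0.5*sigma^2*theta_xx = 0, theta(T,x) = x^2."""
    return x**2 + sigma**2 * (T - t)

# forward SDE of (1.4): here dX = sigma dW; then Y, Z via the decoupling field
rng = np.random.default_rng(1)
dt = T / n
t = np.linspace(0.0, T, n + 1)
X = x0 + np.concatenate(([0.0], np.cumsum(sigma * rng.normal(0, np.sqrt(dt), n))))
Y = theta(t, X)
Z = sigma * 2 * X                          # Z = sigma * theta_x(t, X)

# terminal condition holds exactly: Y_T = g(X_T)
assert abs(Y[-1] - X[-1]**2) < 1e-12
# and Y_T - Y_0 matches the discretized Ito integral of Z (here b_hat = 0)
dW = np.diff(X) / sigma
ito = np.sum(Z[:-1] * dW)
assert abs((Y[-1] - Y[0]) - ito) < 0.2     # residual is the discretization error
```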
We should point out that, by using standard techniques for gradient estimates, that is, applying parabolic Schauder interior estimates to the difference quotients repeatedly (cf. Gilbarg & Trudinger [1]), it can be shown that under the assumptions (A1)-(A3) the solution $\theta$ of the quasilinear PDE (1.5) actually belongs to the space $C^{2+\alpha,4+\alpha}$. Consequently, there exists a constant $K > 0$ such that

(1.6)
$$\|\theta\|_\infty + \|\theta_t\|_\infty + \|\theta_{tt}\|_\infty + \|\theta_x\|_\infty + \|\theta_{xx}\|_\infty + \|\theta_{xxx}\|_\infty + \|\theta_{xxxx}\|_\infty \le K.$$

Our line of attack is now clear: we shall first find a numerical scheme for the quasilinear PDE (1.5), and then a numerical scheme for the
(forward) SDE (1.4). We should point out that although numerical analysis for quasilinear PDEs is not new, the special form of (1.5) has not been covered by existing results. In Section 2 we shall study the numerical scheme for the quasilinear PDE (1.5) in full detail, and then in Section 3 we study the (strong) numerical scheme for the forward SDE in (1.4).
§2 Numerical Approximations of the Quasilinear PDE
In this section we study the numerical approximation scheme and its convergence analysis for the quasilinear parabolic PDE (1.5). We will first carry out the discussion for the special case completely, upon which the study of the general case will be built.
§2.1 A special case
In this case the coefficients $b$ and $\hat b$ are independent of $Z$, and we only approximate $(X, Y)$. Note that in this case the PDE (1.5), although still quasilinear, takes a much simpler form:

(2.1)
$$\theta_t + \tfrac12\sigma^2(t,x,\theta)\theta_{xx} + b(t,x,\theta)\theta_x + \hat b(t,x,\theta) = 0, \quad t \in (0,T),$$
$$\theta(T,x) = g(x), \quad x \in \mathbb R.$$
Let us first standardize the PDE (2.1). Define $u(t,x) = \theta(T-t,x)$, and for $\phi = \sigma$, $b$, and $\hat b$, respectively, define

$$\tilde\phi(t,x,y) = \phi(T-t,x,y), \quad \forall (t,x,y).$$

Then $u$ satisfies the PDE

(2.2)
$$u_t - \tfrac12\tilde\sigma^2(t,x,u)u_{xx} - \tilde b(t,x,u)u_x - \tilde{\hat b}(t,x,u) = 0;$$
$$u(0,x) = g(x).$$
To simplify notation, we shall write $\sigma$, $b$ and $\hat b$ for $\tilde\sigma$, $\tilde b$ and $\tilde{\hat b}$ in the rest of this section. We first determine the characteristics of the first-order nonlinear PDE

(2.3)
$$u_t - b(t,x,u)u_x = 0.$$
Elementary theory of PDEs (see, e.g., John [1]) tells us that the characteristic equation of (2.3) is

$$\det[a_{ij} t'(s) - \delta_{ij} x'(s)] = 0, \quad s > 0,$$

where $s$ is the parameter of the characteristic and $(a_{ij})$ is the matrix

$$\begin{pmatrix} -b(t,x,u) \\ -1 \end{pmatrix}.$$
In other words, if we let the parameter $s = t$, then the characteristic curve $C$ is given by the ODE:

(2.4)
$$x'(t) = -b(t, x(t), u(t, x(t))).$$
Further, if we let $\tau$ be the arc length of $C$, then along $C$ we have

$$d\tau = [1 + b^2(t,x,u(t,x))]^{1/2}\,dt$$

and

$$u_t - b(t,x,u)u_x = r(t,x)\,\frac{du}{d\tau},$$

where $r(t,x) = [1 + b^2(t,x,u(t,x))]^{1/2}$. Thus, along $C$, equation (2.2) is simplified to

(2.5)
$$r(t,x)\,\frac{du}{d\tau} = \tfrac12\sigma^2(t,x,u)u_{xx} + \hat b(t,x,u); \qquad u(0,x) = g(x).$$
We shall design our numerical scheme based on (2.5).
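The essential ingredient of the scheme below is this semi-Lagrangian idea: the time difference is taken along the characteristic rather than at a fixed grid point. A minimal sketch for the pure transport part $u_t - bu_x = 0$ with constant $b$, whose exact solution $u(t,x) = g(x + bt)$ makes a one-step check possible (grid sizes are illustrative):

```python
import numpy as np

def transport_step(w, xgrid, b, dt):
    """One semi-Lagrangian step for u_t - b*u_x = 0: trace the characteristic
    back from each grid point and interpolate the previous solution there."""
    feet = xgrid + b * dt              # foot of the characteristic through (t_k, x_i)
    return np.interp(feet, xgrid, w)   # linear interpolation, as in the scheme

b, dt = 0.7, 0.01
xgrid = np.linspace(-5.0, 5.0, 2001)
g = lambda x: np.exp(-x**2)

w0 = g(xgrid)                          # u(0, x) = g(x)
w1 = transport_step(w0, xgrid, b, dt)
exact = g(xgrid + b * dt)              # u(dt, x) = g(x + b*dt)
interior = slice(100, -100)            # stay away from the inflow boundary
assert np.max(np.abs(w1[interior] - exact[interior])) < 1e-4
```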
§2.2 Numerical scheme
Let $h > 0$ and $\Delta t > 0$ be fixed numbers. Let $x_i = ih$, $i = 0, \pm1, \pm2, \ldots$, and $t_k = k\Delta t$, $k = 0, 1, \ldots, N$, where $t_N = T$. For a function $f(t,x)$, let $f^k(\cdot) = f(t_k, \cdot)$, and let $f_i^k = f(t_k, x_i)$ denote the grid value of the function $f$. Define for each $k$ the approximate solution $w^k$ by the following recursive steps:
Step 0: Set $w_i^0 = g(x_i)$, $i = 0, \pm1, \pm2, \ldots$; use linear interpolation to obtain a function $w^0$ defined for $x \in \mathbb R$.
Suppose that $w^{k-1}(x)$ is defined for $x \in \mathbb R$; let $w_i^{k-1} = w^{k-1}(x_i)$ and

$$b_i^k = b(t_k, x_i, w_i^{k-1}), \quad \sigma_i^k = \sigma(t_k, x_i, w_i^{k-1}), \quad \hat b_i^k = \hat b(t_k, x_i, w_i^{k-1}),$$
$$\delta^2(w)_i^k = h^{-2}[w_{i+1}^k - 2w_i^k + w_{i-1}^k].$$
Step k: Obtain the grid values of the $k$-th step approximate solution, denoted by $\{w_i^k\}$, via the following difference equation:

(2.7)
$$\frac{w_i^k - \bar w_i^{k-1}}{\Delta t} = \tfrac12(\sigma_i^k)^2\,\delta^2(w)_i^k + \hat b_i^k, \quad -\infty < i < \infty,$$

where $\bar w_i^{k-1} = w^{k-1}(x_i + b_i^k\,\Delta t)$ is the value of $w^{k-1}$ at the foot of the (approximate) characteristic through $(t_k, x_i)$.
Since by our assumptions $\sigma$ is bounded below away from zero and $b$ and $g$ are bounded, there exists a unique bounded solution of (2.7) as soon as an evaluation is specified for $w^{k-1}(x)$.
Finally, we use linear interpolation to extend the grid values $\{w_i^k\}_{i=-\infty}^{\infty}$ to all $x \in \mathbb R$, to obtain the $k$-th step approximate solution $w^k(\cdot)$.
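Steps 0-k can be assembled for the simplest instance $b \equiv 0$ (so the characteristic feet coincide with the grid points) and constant $\sigma$, in which case (2.7) is an implicit heat-equation step; with $g(x) = \sin x$ the exact solution $u(t,x) = e^{-\sigma^2 t/2}\sin x$ of (2.2) is available for comparison. A sketch on a localized grid, as in $(2.2)_R$ below (sizes are illustrative):

```python
import numpy as np

sigma, T, R, h, dt = 1.0, 0.1, 10.0, 0.05, 0.001

x = np.arange(-R, R + h / 2, h)        # grid x_i, i = -i0..i0
n, N = len(x), int(round(T / dt))
g = np.sin                             # initial data u(0, x) = g(x)

# implicit step of (2.7) with b = 0: (w^k - w^{k-1})/dt = (sigma^2/2) delta^2(w^k)
lam = 0.5 * sigma**2 * dt / h**2
A = np.diag((1 + 2 * lam) * np.ones(n)) + np.diag(-lam * np.ones(n - 1), 1) \
    + np.diag(-lam * np.ones(n - 1), -1)
A[0, :], A[-1, :] = 0.0, 0.0
A[0, 0] = A[-1, -1] = 1.0              # Dirichlet boundary w = g(+-R)

w = g(x)
for k in range(N):
    rhs = w.copy()
    rhs[0], rhs[-1] = g(x[0]), g(x[-1])
    w = np.linalg.solve(A, rhs)

exact = np.exp(-0.5 * sigma**2 * T) * np.sin(x)
interior = np.abs(x) < 5.0             # localization: compare away from the boundary
assert np.max(np.abs(w[interior] - exact[interior])) < 0.01
```

The interior error is of order $O(\Delta t + h^2)$ here, while the artificial Dirichlet boundary data contaminates only a neighborhood of $\pm R$, in line with the localization discussion that follows.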
Before we carry out the convergence analysis for this numerical scheme, let us point out a standard localization idea which is essential in our future discussion, both theoretically and computationally. We first recall from Chapter 4 that the (unique) classical solution of the Cauchy problem (2.2) (and therefore (2.5)) is in fact the uniform limit, as $R \to \infty$, of the solutions $\{u^R\}$ to the initial-boundary value problems:
$(2.2)_R$
$$u_t - \tfrac12\sigma^2(t,x,u)u_{xx} - b(t,x,u)u_x - \hat b(t,x,u) = 0, \quad |x| < R,\ 0 < t \le T;$$
$$u(0,x) = g(x), \quad |x| \le R;$$
$$u(t,x) = g(x), \quad |x| = R,\ 0 < t \le T.$$
It is conceivable that we can also restrict the corresponding difference equation (2.7) to $-i_0 \le i \le i_0$, for some $i_0 < \infty$. Indeed, if we denote by $\{w_i^{i_0,k}\}$ the solution of the localized difference equation

$(2.7)_{i_0}$
$$\frac{w_i^{i_0,k} - \bar w_i^{i_0,k-1}}{\Delta t} = \tfrac12(\sigma_i^k)^2\,\delta^2(w^{i_0})_i^k + \hat b_i^k, \quad -i_0 < i < i_0;$$
$$w_i^{i_0,0} = g(x_i), \quad -i_0 \le i \le i_0;$$
$$w_{\pm i_0}^{i_0,k} = g(x_{\pm i_0}), \quad k = 0, 1, 2, \ldots,$$

then by (A1) and (A2) one can show that $w_i^k$ is the uniform limit of $\{w_i^{i_0,k}\}$ as $i_0 \to \infty$, uniformly in $i$ and $k$. In particular, if we fix the mesh size $h > 0$ and let $R = i_0 h$, then the quantities
(2.8)
$$\max_i |u(t_k, x_i) - w_i^k| \quad \text{and} \quad \max_{-i_0 \le i \le i_0} |u^R(t_k, x_i) - w_i^{i_0,k}|$$
differ only by an error that is uniform in $k$, and that can be made arbitrarily small by taking $i_0$ (or $i_0 h = R$) sufficiently large. Consequently, as we shall see later, if for fixed $h$ and $\Delta t$ we choose $R$ (or $i_0$) so large that the two quantities in (2.8) differ by $O(h + |\Delta t|)$, then we can replace (2.2) by $(2.2)_R$ and (2.7) by $(2.7)_{i_0}$ without changing the desired results on the rate of convergence. On the other hand, since for the localized solutions the error $|u^R(t_k, x_{\pm i_0}) - w_{\pm i_0}^{i_0,k}| = 0$ for all $k = 0, 1, 2, \ldots$, the maximum absolute value of the error $|u^R(t_k, x_i) - w_i^{i_0,k}|$, $i = -i_0, \ldots, i_0$, will always occur at an "interior" point of $(-R, R)$. Such an observation will be particularly useful when a maximum-principle argument is applied (see, e.g., Theorem 2.3 below). Based on the discussion above, from now on we will use the localized versions of the solutions to (2.2) and (2.7) whenever necessary, without further specification.
To conclude this subsection, we note that the approximate solutions $\{w^k(\cdot)\}$ are defined only at the times $t = t_k$, $k = 0, 1, \ldots, N$. An approximate solution defined on $[0,T] \times \mathbb R$ is defined as follows: for given $h > 0$