
0 521 86300 7 Printer: cupusbw 0521863007pre May 17, 2006 18:4 Char Count= 0


Students are introduced to this material through self-contained but harmonized "mini-courses" on the relevant ingredients, which assume only knowledge of measure-theoretic probability. The streamlined selection of topics creates an easy entrance for students and experts in related fields.

The book starts by developing the fundamentals of Markov process theory and then of Gaussian process theory, including sample path properties. It then proceeds to more advanced results, bringing the reader to the heart of contemporary research. It presents the remarkable isomorphism theorems of Dynkin and Eisenbaum and then shows how they can be applied to obtain new properties of Markov processes by using well-established techniques in Gaussian process theory. This original, readable book will appeal to both researchers and advanced graduate students.


Cambridge Studies in Advanced Mathematics

71 R. Blei Analysis in Integer and Fractional Dimensions
72 F. Borceux & G. Janelidze Galois Theories
73 B. Bollobás Random Graphs 2nd Edition
74 R. M. Dudley Real Analysis and Probability 2nd Edition
75 T. Sheil-Small Complex Polynomials
76 C. Voisin Hodge Theory and Complex Algebraic Geometry I
77 C. Voisin Hodge Theory and Complex Algebraic Geometry II
78 V. Paulsen Completely Bounded Maps and Operator Algebras
79 F. Gesztesy & H. Holden Soliton Equations and Their Algebro-Geometric Solutions I
81 S. Mukai An Introduction to Invariants and Moduli
82 G. Tourlakis Lectures in Logic and Set Theory I
83 G. Tourlakis Lectures in Logic and Set Theory II
84 R. A. Bailey Association Schemes
85 J. Carlson, S. Müller-Stach & C. Peters Period Mappings and Period Domains
86 J. J. Duistermaat & J. A. C. Kolk Multidimensional Real Analysis I
87 J. J. Duistermaat & J. A. C. Kolk Multidimensional Real Analysis II
89 M. C. Golumbic & A. N. Trenk Tolerance Graphs
90 L. H. Harper Global Methods for Combinatorial Isoperimetric Problems
91 I. Moerdijk & J. Mrčun Introduction to Foliations and Lie Groupoids
92 J. Kollár, K. E. Smith & A. Corti Rational and Nearly Rational Varieties
93 D. Applebaum Lévy Processes and Stochastic Calculus
95 M. Schechter An Introduction to Nonlinear Analysis
96 R. Carter Lie Algebras of Finite and Affine Type
97 H. L. Montgomery & R. C. Vaughan Multiplicative Number Theory
98 I. Chavel Riemannian Geometry
99 D. Goldfeld Automorphic Forms and L-Functions for the Group GL(n,R)


MARKOV PROCESSES, GAUSSIAN PROCESSES, AND LOCAL TIMES


Cambridge University Press
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo

The Edinburgh Building, Cambridge CB2 2RU, UK
Published in the United States of America by Cambridge University Press, New York
www.cambridge.org
Information on this title: www.cambridge.org/9780521863001

© Michael B. Marcus and Jay Rosen 2006

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published in print format 2006

ISBN-13 978-0-511-24696-8 eBook (NetLibrary)
ISBN-10 0-511-24696-X eBook (NetLibrary)
ISBN-13 978-0-521-86300-1 hardback
ISBN-10 0-521-86300-7 hardback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.


Jane Marcus

and

Sara Rosen

Contents

2.9 Applications of the Ray–Knight Theorems 58

3 Markov processes and local times 62

3.3 Strongly symmetric Borel right processes 73

3.5 Killing a process at an exponential time 81

3.10 Moment generating functions of local times 115

4 Constructing Markov processes 121


4.3 Diffusions 144
4.4 Left limits and quasi left continuity 147

4.6 Continuous local times and potential densities 162
4.7 Constructing Ray semigroups and Ray processes 164

5 Basic properties of Gaussian processes 189
5.1 Definitions and some simple properties 189

5.3 Zero–one laws and the oscillation function 203

5.6 Processes with stationary increments 235

6 Continuity and boundedness of Gaussian processes 243
6.1 Sufficient conditions in terms of metric entropy 244
6.2 Necessary conditions in terms of metric entropy 250
6.3 Conditions in terms of majorizing measures 255

7 Moduli of continuity for Gaussian processes 282

7.4 Local moduli of associated processes 324


9 Sample path properties of local times 396

9.2 A necessary condition for unboundedness 403

9.4 Continuity and boundedness of local times 410

10.5 Additional variational results for local times 482

11 Most visited sites of symmetric stable processes 497

11.2 Most visited sites of Brownian motion 504

11.6 Most visited sites of symmetric stable processes 523

12 Local times of diffusions 530

12.2 Eisenbaum’s version of Ray’s Theorem 534

12.4 Markov property of local times of diffusions 543
12.5 Local limit laws for h-transforms of diffusions 549

13 Associated Gaussian processes 551

13.3 Infinitely divisible squares and associated processes 570
13.4 Additional results about M-matrices 578


14 Appendix 580
14.1 Kolmogorov's Theorem for path continuity 580

14.3 Analytic sets and the Projection Theorem 583

1 Introduction

We found it difficult to choose a title for this book. Clearly we are not covering the theory of Markov processes, Gaussian processes, and local times in one volume. A more descriptive title would have been "A Study of the Local Times of Strongly Symmetric Markov Processes Employing Isomorphisms That Relate Them to Certain Associated Gaussian Processes." The innovation here is that we can use the well-developed theory of Gaussian processes to obtain new results about local times. Even with the more restricted title there is a lot of material to cover. Since we want this book to be accessible to advanced graduate students, we try to provide a self-contained development of the Markov process theory that we require. Next, since the crux of our approach is that we can use sophisticated results about the sample path properties of Gaussian processes to obtain similar sample path properties of the associated local times, we need to present this aspect of the theory of Gaussian processes. Furthermore, interesting questions about local times lead us to focus on some properties of Gaussian processes that are not usually featured in standard texts, such as processes with spectral densities or those that have infinitely divisible squares. Occasionally, as in the study of the p-variation of sample paths, we obtain new results about Gaussian processes.

Our third concern is to present the wonderful, mysterious isomorphism theorems that relate the local times of strongly symmetric Markov processes to associated mean zero Gaussian processes. Although some inkling of this idea appeared earlier in Brydges, Fröhlich and Spencer (1982), we think that credit for formulating it in an intriguing and usable format is due to E. B. Dynkin (1983), (1984). Subsequently, after our initial paper on this subject, Marcus and Rosen (1992d), in which we use Dynkin's Theorem, N. Eisenbaum (1995) found an unconditioned isomorphism that seems to be easier to use. After this, Eisenbaum, Kaspi, Marcus, Rosen and Shi (2000) found a third isomorphism theorem, which we refer to as the Generalized Second Ray–Knight Theorem, because it is a generalization of this important classical result.

Dynkin's and Eisenbaum's proofs contain a lot of difficult combinatorics, as does our proof of Dynkin's Theorem in Marcus and Rosen (1992d). Several years ago we found much simpler proofs of these theorems. Being able to present this material in a relatively simple way was our primary motivation for writing this book.

The classical Ray–Knight Theorems are isomorphisms that relate local times of Brownian motion and squares of independent Brownian motions. In the three isomorphism theorems we just referred to, these theorems are extended to give relationships between local times of strongly symmetric Markov processes and the squares of associated Gaussian processes. A Markov process with symmetric transition densities is strongly symmetric. Its associated Gaussian process is the mean zero Gaussian process with covariance equal to its 0-potential density. (If the Markov process, say X, does not have a 0-potential, one can consider X̃, the process X killed at the end of an independent exponential time with mean 1/α. The 0-potential density of X̃ is the α-potential density of X.)

As an example of how the isomorphism theorems are used and of the kinds of results we obtain, we mention that we show that there exists a jointly continuous version of the local times of a strongly symmetric Markov process if and only if the associated Gaussian process has a continuous version. We obtain this result as an equivalence, without obtaining conditions that imply that either process is continuous. However, conditions for the continuity of Gaussian processes are known, so we know them for the joint continuity of the local times.

M. Barlow and J. Hawkes obtained a sufficient condition for the joint continuity of the local times of Lévy processes in Barlow (1985) and Barlow and Hawkes (1985), which Barlow showed, in Barlow (1988), is also necessary. Gaussian processes do not enter into the proofs of their results. (Although they do point out that their conditions are also necessary and sufficient conditions for the continuity of related stationary Gaussian processes.) This stimulating work motivated us to look for a more direct link between Gaussian processes and local times and led us to Dynkin's isomorphism theorem.

We must point out that the work of Barlow and Hawkes just cited applies to all Lévy processes, whereas the isomorphism theorem approach that we present applies only to symmetric Lévy processes. Nevertheless, our approach is not limited to Lévy processes and also opens up the possibility of using Gaussian process theory to obtain many other interesting properties of local times.

Another confession we must make is that we do not really understand the actual relationship between local times of strongly symmetric Markov processes and their associated Gaussian processes. That is, we have several functional equivalences between these disparate objects and can manipulate them to obtain many interesting results, but if one asks us, as is often the case during lectures, to give an intuitive description of how local times of Markov processes and Gaussian processes are related, we must answer that we cannot. We leave this extremely interesting question to you. Nevertheless, there now exist interesting characterizations of the Gaussian processes that are associated with Markov processes.

We say more about this in our discussion of the material in Chapter 13. The isomorphism theorems can be applied to very general classes of Markov processes. In this book, with the exception of Chapter 13, we consider Borel right processes. To ease the reader into this degree of generality, and to give an idea of the direction in which we are going, in Chapter 2 we begin the discussion of Markov processes by focusing on Brownian motion. For Brownian motion these isomorphisms are old stuff, but because, in the case of Brownian motion, the local times of Brownian motion are related to the squares of independent Brownian motions, one does not really leave the realm of Markov processes. That is, we think that in the classical Ray–Knight Theorems one can view Brownian motion as a Markov process, which it is, rather than as a Gaussian process, which it also is.

Chapters 2–4 develop the Markov process material we need for this book. Naturally, there is an emphasis on local times. There is also an emphasis on computing the potential densities of strongly symmetric Markov processes, since it is through the potential densities that we associate the local times of strongly symmetric Markov processes with Gaussian processes. Even though Chapter 2 is restricted to Brownian motion, there is a lot of fundamental material required to construct the σ-algebras of the probability space that enables us to study local times. We do this in such a way that it also holds for the much more general Markov processes studied in Chapters 3 and 4. Therefore, although many aspects of Chapter 2 are repeated in greater generality in Chapters 3 and 4, the latter two chapters are not independent of Chapter 2.

In the beginning of Chapter 3 we study general Borel right processes with locally compact state spaces but soon restrict our attention to strongly symmetric Borel right processes with continuous potential densities. This restriction is tailored to the study of local times of Markov processes via their associated mean zero Gaussian processes. Also, even though this restriction may seem to be significant from the perspective of the general theory of Markov processes, it makes it easier to introduce the beautiful theory of Markov processes. We are able to obtain many deep and interesting results, especially about local times, relatively quickly and easily. We also consider h-transforms and generalizations of Kac's Theorem, both of which play a fundamental role in proving the isomorphism theorems and in applying them to the study of local times. Chapter 4 deals with the construction of Markov processes. We first construct Feller processes and then use them to show the existence of Lévy processes. We also consider several of the finer properties of Borel right processes. Lastly, we construct a generalization of Borel right processes that we call local Borel right processes. These are needed in Chapter 13 to characterize associated Gaussian processes. This requires the introduction of Ray semigroups and Ray processes.

Chapters 5–7 are an exposition of sample path properties of Gaussian processes. Chapter 5 deals with structural properties of Gaussian processes and lays out the basic tools of Gaussian process theory. One of the most fundamental tools in this theory is the Borell, Sudakov–Tsirelson isoperimetric inequality. As far as we know, this is stated without a complete proof in earlier books on Gaussian processes because the known proofs relied on the Brunn–Minkowski inequality, which was deemed to be too far afield to include its proof. We give a new, analytical proof of the Borell, Sudakov–Tsirelson isoperimetric inequality, due to M. Ledoux, in Section 5.4.

Chapter 6 presents the work of R. M. Dudley, X. Fernique and M. Talagrand on necessary and sufficient conditions for continuity and boundedness of sample paths of Gaussian processes. This important work has been polished throughout the years in several texts, Ledoux and Talagrand (1991), Fernique (1997), and Dudley (1999), so we can give efficient proofs. Notably, we give a simpler proof of Talagrand's necessary condition for continuity involving majorizing measures, also due to Talagrand, than the one in Ledoux and Talagrand (1991). Our presentation in this chapter relies heavily on Fernique's excellent monograph, Fernique (1997).

Chapter 7 considers uniform and local moduli of continuity of Gaussian processes. We treat this question in general in Section 7.1. In most of the remaining sections in this chapter, we focus our attention on real-valued Gaussian processes with stationary increments, {G(t), t ∈ R^1}, for which the increments variance, σ^2(t − s) := E(G(t) − G(s))^2, is relatively smooth. This may appear old fashioned to the Gaussian purist, but it is exactly these processes that are associated with real-valued Lévy processes. (And Lévy processes with values in R^n have local times only when n = 1.) Some results developed in this section and its applications in Section 9.5 have not been published elsewhere.

Chapters 2–7 develop the prerequisites for the book. Except for Section 3.7, the material at the end of Chapter 4 relating to local Borel right processes, and a few other items that are referenced in later chapters, they can be skipped by readers with a good background in the theory of Gaussian and Markov processes.

In Chapter 8 we prove the three main isomorphism theorems that we use. Even though we are pleased to be able to give simple proofs that avoid the difficult combinatorics of the original proofs of these theorems, in Section 8.3 we give the combinatoric proofs, both because they are interesting and because they may be useful later on.

Chapter 9 puts everything together to give sample path properties of local times. Some of the proofs are short, simply a reiteration of results that have been established in earlier chapters. At this point in the book we have given all the results in our first two joint papers on local times and isomorphism theorems (Marcus and Rosen, 1992a, 1992d). We think that we have filled in all the details and that many of the proofs are much simpler. We have also laid the foundation to obtain other interesting sample path properties of local times, which we present in Chapters 10–13.

In Chapter 10 we consider the p-variation of the local times of symmetric stable processes, 1 < p ≤ 2 (this includes Brownian motion). To use our isomorphism theorem approach, we first obtain results on the p-variation of fractional Brownian motion that generalize results of Dudley (1973) and Taylor (1972) that were obtained for Brownian motion. These are extended to the squares of fractional Brownian motion and then carried over to give results about the local times of symmetric stable processes.

Chapter 11 presents results of Bass, Eisenbaum and Shi (2000) on the range of the local times of symmetric stable processes as time goes to infinity and shows that the most visited site of such processes is transient. Our approach is different from theirs. We use an interesting bound for the behavior of stable processes in a neighborhood of the origin due to Molchan (1999), which itself is based on properties of the reproducing kernel Hilbert spaces of fractional Brownian motions.

In Chapter 12 we reexamine Ray's early isomorphism theorem for the h-transform of a transient regular symmetric diffusion, Ray (1963), and give our own, simpler version. We also consider the Markov properties of the local times of diffusions.

In Chapter 13, which is based on recent work of N. Eisenbaum and H. Kaspi that appears in Eisenbaum (2003), Eisenbaum (2005), and Eisenbaum and Kaspi (2006), we take up the problem of characterizing associated Gaussian processes. To obtain several equivalencies we must generalize Borel right processes to what we call local Borel right processes. In Theorem 13.3.1 we see that associated Gaussian processes are just a little less general than the class of Gaussian processes that have infinitely divisible squares. Gaussian processes with infinitely divisible squares are characterized in Griffiths (1984) and Bapat (1989). We present their results in Section 13.2.

We began our joint research that led to this book over 19 years ago. In the course of this time we received valuable help from R. Adler, M. Barlow, H. Kaspi, E. B. Dynkin, P. Fitzsimmons, R. Getoor, E. Giné, M. Talagrand, and J. Zinn. We express our thanks and gratitude to them. We also acknowledge the help of P.-A. Meyer.

In the preparation of this book we received valuable assistance and advice from O. Daviaud, S. Dhamoon, V. Dobric, N. Eisenbaum, S. Evans, P. Fitzsimmons, C. Houdré, H. Kaspi, W. Li, and J. Rosinski. We thank them also.

We are also grateful for the continued support of the National Science Foundation and PSC–CUNY throughout the writing of this book.

1.1 Preliminaries

In this book Z denotes the integers, both positive and negative, and IN or sometimes N denotes the positive integers including 0. R^1 denotes the real line and R_+ the positive half-line (including zero). R̄ denotes the extended real line [−∞, ∞]. R^n denotes n-dimensional space and | · | denotes Euclidean distance in R^n. We say that a real number a is positive if a ≥ 0. To specify that a > 0, we might say that it is strictly positive. A similar convention is used for negative and strictly negative.

Measurable spaces: A measurable space is a pair (Ω, F), where Ω is a set and F is a σ-algebra of subsets of Ω. If Ω is a topological space, we use B(Ω) to denote the Borel σ-algebra of Ω. Bounded B(Ω)-measurable functions on Ω are denoted by B_b(Ω).

Let t ∈ R_+. A filtration of F is an increasing family of sub-σ-algebras F_t of F, that is, for 0 ≤ s < t < ∞, F_s ⊂ F_t ⊂ F, with F = ∪_{0≤t<∞} F_t. (Sometimes we describe this by saying that F is filtered.) To emphasize a specific filtration F_t of F, we sometimes write (Ω, F, F_t).

Let M and N denote two σ-algebras of subsets of Ω. We use M ∨ N to denote the σ-algebra generated by M ∪ N.

Probability spaces: A probability space is a triple (Ω, F, P), where (Ω, F) is a measurable space and P is a probability measure on Ω. A random variable, say X, is a measurable function on (Ω, F, P). In general we let E denote the expectation operator on the probability space. When there are many random variables defined on (Ω, F, P), say Y, Z, ..., we use E_Y to denote expectation with respect to Y. When dealing with a probability space, when it seems clear what we mean, we feel free to use E or even expressions like E_Y without defining them. As usual, we let ω denote the elements of Ω. As with E, we often use ω in this context without defining it.

When X is a random variable, we call a number a a median of X if

P(X ≤ a) ≥ 1/2  and  P(X ≥ a) ≥ 1/2.

Note that a is not necessarily unique.

A stochastic process X on (Ω, F, P) is a family of measurable functions {X_t, t ∈ I}, where I is some index set. In this book, t usually represents "time" and we generally consider {X_t, t ∈ R_+}. σ(X_r; r ≤ t) denotes the smallest σ-algebra for which {X_r; r ≤ t} is measurable. Sometimes it is convenient to describe a stochastic process as a random variable on a function space, endowed with a suitable σ-algebra and probability measure.

In general, in this book, we reserve (Ω, F, P) for a probability space. We generally use (S, S, µ) to indicate more general measure spaces. Here µ is a positive (i.e., nonnegative) σ-finite measure.

Function spaces: Let f be a measurable function on (S, S, µ). The L^p(µ) (or simply L^p) spaces, 1 ≤ p < ∞, are the families of functions f for which ‖f‖_p := (∫ |f|^p dµ)^{1/p} < ∞. Similarly, ℓ^p is the family of sequences {a_k}_{k=0}^∞ of real or complex numbers such that Σ_{k=0}^∞ |a_k|^p < ∞. In this case, ‖{a_k}‖_p := (Σ_{k=0}^∞ |a_k|^p)^{1/p} and ‖{a_k}‖_∞ := sup_{0≤k<∞} |a_k|. We use ℓ^n_p to denote sequences in ℓ^p with n elements.
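As a quick concrete illustration (ours, not the book's), the ℓ^p and ℓ^∞ norms of a finite sequence can be computed directly; the helper name `lp_norm` is our own:

```python
import math

def lp_norm(a, p):
    """l^p norm of a finite sequence a, for 1 <= p <= infinity."""
    if p == math.inf:
        return max(abs(x) for x in a)               # sup norm
    return sum(abs(x) ** p for x in a) ** (1.0 / p)

a = [3.0, -4.0]                # a sequence in l^n_p with n = 2
print(lp_norm(a, 2))           # 5.0
print(lp_norm(a, 1))           # 7.0
print(lp_norm(a, math.inf))    # 4.0
```

Note that for a fixed finite sequence the norms decrease as p increases, with the sup norm as the limiting case.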

Let m be a measure on a topological space (S, S). By an approximate identity or δ-function at y, with respect to m, we mean a family {f_{ε,y}; ε > 0} of positive continuous functions on S such that ∫ f_{ε,y}(x) dm(x) = 1 and each f_{ε,y} is supported on a compact neighborhood K_ε of y with K_ε ↓ {y} as ε → 0.

Let f and g be two real-valued functions on R^1. We say that f is asymptotic to g at zero, and write f ∼ g, if lim_{x→0} f(x)/g(x) = 1. We say that f is comparable to g at zero, and write f ≈ g, if there exist constants 0 < C_1 ≤ C_2 < ∞ such that C_1 ≤ lim inf_{x→0} f(x)/g(x) and lim sup_{x→0} f(x)/g(x) ≤ C_2. We use essentially the same definitions at infinity.

Let f be a function on R^1. We use the notation lim_{y↑↑x} f(y) to denote the limit of f(y) as y increases to x, for all y < x, that is, the left-hand (or simply left) limit of f at x.

Metric spaces: Let (S, τ) be a locally compact metric or pseudo-metric space. A pseudo-metric has the same properties as a metric except that τ(s, t) = 0 does not imply that s = t. Abstractly, one can turn a pseudo-metric into a metric by making the zeros of the pseudo-metric into an equivalence class, but in the study of stochastic processes pseudo-metrics are unavoidable. For example, suppose that X = {X(t), t ∈ [0, 1]} is a real-valued stochastic process. In studying sample path properties of X it is natural to consider (R^1, | · |), a metric space. However, X may be completely determined by an L^2 metric, such as

d(s, t) := d_X(s, t) := (E(X(s) − X(t))^2)^{1/2}    (1.4)

(and an additional condition such as E X^2(t) = 1). Therefore, it is natural to also consider the space (R^1, d). This may be a pseudo-metric space since d need not be a metric on R^1.
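To make the distinction concrete (our illustration, not the book's): for Brownian motion, E(W(s) − W(t))^2 = |t − s|, so d_X(s, t) = |t − s|^{1/2} is an actual metric, whereas for the process X(t) = ξ cos(t), with ξ a standard normal random variable, d_X(s, t) = |cos s − cos t| vanishes at distinct times:

```python
import math

# L2 (pseudo-)metric d_X(s,t) = (E(X(s)-X(t))^2)^{1/2} for two examples.

def d_brownian(s, t):
    # For Brownian motion, E(W(s)-W(t))^2 = |t - s|, so d is an actual metric.
    return math.sqrt(abs(t - s))

def d_cosine(s, t):
    # For X(t) = xi * cos(t) with xi ~ N(0,1):
    # E(X(s)-X(t))^2 = (cos(s)-cos(t))^2, so d(s,t) = |cos(s)-cos(t)|.
    return abs(math.cos(s) - math.cos(t))

print(d_brownian(0.0, 0.01))       # 0.1 (= sqrt(0.01))
print(d_cosine(0.0, 2 * math.pi))  # ~0: distinct times at distance zero,
                                   # so d_cosine is only a pseudo-metric
```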

If A ⊂ S, we set

τ(s, A) := inf_{t ∈ A} τ(s, t).

We use C(S) to denote the continuous functions on S, C_b(S) to denote the bounded continuous functions on S, and C_b^+(S) to denote the positive bounded continuous functions on S. We use C_κ(S) to denote the continuous functions on S with compact support; C_0(S) denotes the functions on S that go to 0 at infinity. Nevertheless, C_0^∞(S) denotes the infinitely differentiable functions on S with compact support (whenever S is a space for which this is defined). In all these cases we mean continuity with respect to the metric or pseudo-metric τ.

We say that a function is locally uniformly continuous on a measurable set in (S, τ) if it is uniformly continuous on all compact subsets of (S, τ). We say that a sequence of functions converges locally uniformly on (S, τ) if it converges uniformly on all compact subsets of (S, τ).

Separability: Let T be a separable metric space, and let X = {X(t), t ∈ T} be a stochastic process on (Ω, F, P) with values in R^n. X is said to be separable if there is a countable set D ⊂ T and a P-null set Λ ⊂ F such that, for ω ∉ Λ and any open set U ⊂ T and closed set A ⊂ R^n,

inf_{t ∈ D∩U} |X(t, ω)| = inf_{t ∈ U} |X(t, ω)|.

If T is a separable metric space, every stochastic process X = {X(t), t ∈ T} with values in R^n has a separable version X̃ = {X̃(t), t ∈ T}. If X is stochastically continuous, that is, lim_{t→t_0} P(|X(t) − X(t_0)| > ε) = 0 for every ε > 0 and t_0 ∈ T, then any countable dense set V ⊂ T serves as the set D in the separability condition (sometimes called the separability set). The P-null set Λ generally depends on the choice of D.

With this normalization, Parseval's Theorem is

∫ f(x) \overline{g(x)} dx = (1/2π) ∫ f̂(λ) \overline{ĝ(λ)} dλ.    (1.10)

2 Brownian motion and Ray–Knight Theorems

In this book we develop relationships between the local times of strongly symmetric Markov processes and corresponding Gaussian processes. This was done for Brownian motion over 40 years ago in the famous Ray–Knight Theorems. In this chapter, which gives an overview of significant parts of the book, we discuss Brownian motion, its local times, and the Ray–Knight Theorems, with an emphasis on those definitions and properties which we generalize to a much larger class of processes in subsequent chapters. Much of the material in this chapter is repeated in greater generality in subsequent chapters.

2.1 Brownian motion

A normal random variable with mean zero and variance t, denoted by N(0, t), is a random variable with a distribution function that has density

p_t(x) = (2πt)^{−1/2} e^{−x^2/(2t)},  x ∈ R^1.

In anticipation of using p_t as the transition density of a Markov process, we sometimes use p_t(x, y) to denote p_t(y − x).

We give some important calculations involving p_t. The Fourier transform of p_t is

p̂_t(λ) := ∫ e^{iλx} p_t(x) dx = e^{−λ^2 t/2}.    (2.2)

To verify this, complete the square in the exponent and integrate e^{−z^2/(2t)} around the boundary of C_N, where C_N is the rectangle in the complex plane determined by {x | −N ≤ x ≤ N} and {x − iλt | −N ≤ x ≤ N} (since exp(−z^2/(2t)) is analytic), and then take the limit as N goes to infinity.

Equation (2.3) is simply a rewriting of (2.2). It immediately gives (2). Equation (2.4), for z = 0, follows from (2) and the fact that the density of ξ + ζ, the sum of the independent random variables ξ and ζ, is given by the convolution of the densities of ξ and ζ. For general z we need only note that, by a change of variables,

∫_{−∞}^{∞} p_s(x, y) p_t(y, z) dy = ∫_{−∞}^{∞} p_s(x − z, y) p_t(y, 0) dy.

For a slightly more direct proof of (2.4), consider p_t(x, y) = p_t(y − x) as a function of y for some fixed x. The Fourier transform of p_t(y − x) is e^{iλx} p̂_t(λ). Similarly, the Fourier transform of p_t(z − y) is e^{iλz} p̂_t(λ). By Parseval's Theorem (1.10), the left-hand side of (2.4) is

(1/2π) ∫ e^{iλ(x−z)} p̂_s(λ) p̂_t(λ) dλ = (1/2π) ∫ e^{iλ(x−z)} e^{−(s+t)λ^2/2} dλ = p_{s+t}(x, z).

(… direction, and for x < 0 use the contour (−ρ, ρ) ∪ (ρe^{iθ}, π ≤ θ ≤ 2π) in the counterclockwise direction.)
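The Chapman–Kolmogorov identity (2.4), ∫ p_s(x, y) p_t(y, z) dy = p_{s+t}(x, z), can also be confirmed numerically; here is a small sketch (ours, not the book's) using midpoint-rule quadrature:

```python
import math

def p(t, x):
    """Gaussian density p_t(x) = (2*pi*t)**(-1/2) * exp(-x**2 / (2*t))."""
    return math.exp(-x * x / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

def convolve(s, t, x, z, lo=-30.0, hi=30.0, n=60000):
    """Midpoint-rule approximation of  int p_s(x, y) p_t(y, z) dy,
    where p_u(x, y) = p_u(y - x)."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        y = lo + (i + 0.5) * h
        total += p(s, y - x) * p(t, z - y) * h
    return total

s, t, x, z = 0.7, 1.3, -0.4, 0.9
lhs = convolve(s, t, x, z)
rhs = p(s + t, z - x)           # p_{s+t}(x, z)
print(abs(lhs - rhs) < 1e-6)    # True
```

The truncation of the integral to [−30, 30] is harmless here because the Gaussian tails are negligible at that range for these variances.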

We define Brownian motion starting at 0 to be a stochastic process W = {W_t; t ∈ R_+} that satisfies the following three properties:

(1) W has stationary and independent increments.
(2) W_t law= N(0, t) for all t ≥ 0. (In particular, W_0 ≡ 0.)
(3) t → W_t is continuous.
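Properties (1) and (2) translate directly into a simulation scheme: sum independent N(0, Δt) increments over a time grid. A minimal sketch (ours, not the book's; the seed and the loose statistical tolerances are arbitrary choices):

```python
import math
import random

def brownian_path(times, rng):
    """Sample W at the given increasing times, with W_0 = 0, by summing
    independent N(0, dt) increments (properties (1) and (2))."""
    w, path = 0.0, [0.0]
    for a, b in zip(times, times[1:]):
        w += rng.gauss(0.0, math.sqrt(b - a))   # increment ~ N(0, b - a)
        path.append(w)
    return path

rng = random.Random(42)
times = [i / 20 for i in range(21)]             # grid on [0, 1]
samples = [brownian_path(times, rng)[-1] for _ in range(5000)]

# W_1 should be N(0, 1): check the empirical mean and variance (loosely).
mean = sum(samples) / len(samples)
var = sum(x * x for x in samples) / len(samples)
print(abs(mean) < 0.1, abs(var - 1.0) < 0.1)
```

Property (3), path continuity, is of course not captured by a discrete grid; the construction in Theorem 2.1.2 below is what justifies interpolating continuously.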

Theorem 2.1.2 The three conditions defining Brownian motion are consistent; that is, Brownian motion is well defined.

Proof We construct a Brownian motion starting at 0. We first construct a probability P̃ on R^{R_+}, the space of real-valued functions {f(t), t ∈ [0, ∞)}, equipped with the Borel product σ-algebra B(R^{R_+}). Let X_t be the natural evaluation X_t(f) = f(t). We first define P̃ on sets of the form {X_{t_1} ∈ A_1, ..., X_{t_n} ∈ A_n} for all Borel measurable sets A_1, ..., A_n in R and 0 = t_0 < t_1 < ··· < t_n by setting

P̃(X_{t_1} ∈ A_1, ..., X_{t_n} ∈ A_n) = ∫_{A_1} ··· ∫_{A_n} ∏_{i=1}^{n} p_{t_i − t_{i−1}}(z_{i−1}, z_i) dz_n ··· dz_1,  z_0 = 0.    (2.10)

These finite-dimensional distributions are consistent because of the Chapman–Kolmogorov equation (2.4). The existence of P̃ now follows from Kolmogorov's Construction Theorem.

It is obvious, by (2.10), that the random variable (X_{t_1}, ..., X_{t_n}) has probability density function ∏_{i=1}^{n} p_{t_i − t_{i−1}}(z_{i−1}, z_i). Hence, for 0 = t_0 < t_1 < ··· < t_n < v and measurable functions g and f, one can check that {X_t; t ∈ R_+} satisfies property (1). It is obvious that it satisfies property (2).

We now show that {X_t; t ∈ R_+} has a continuous version. To do this we note that, by properties (1) and (2) and the fact that N(0, t − r) law= |t − r|^{1/2} N(0, 1),

E(|X_t − X_r|^n) = |t − r|^{n/2} E(|X_1|^n)    (2.12)

for all 0 ≤ r < t. (It is also easy to check that N(0, t) has moments of all orders.) Thus, by Kolmogorov's Theorem (Theorem 14.1.1), for any fixed ε > 0 and some random variable C(T), which is finite almost surely, X has a continuous version W = {W_t; t ∈ R_+}; in particular, W satisfies property (3). Since {X_t; t ∈ R_+} satisfies properties (1) and (2), to see that W also satisfies properties (1) and (2) it suffices to note that, for each t, W_t = X_t almost surely. This follows from (2.12). This shows that Brownian motion starting at zero is a well-defined stochastic process.
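The scaling identity (2.12) can be checked numerically by computing Gaussian absolute moments with quadrature (our sketch, not part of the proof):

```python
import math

def abs_moment(n, var, lo=-40.0, hi=40.0, steps=80000):
    """E|Y|^n for Y ~ N(0, var), by midpoint-rule quadrature."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        y = lo + (i + 0.5) * h
        total += abs(y) ** n * math.exp(-y * y / (2.0 * var)) * h
    return total / math.sqrt(2.0 * math.pi * var)

t, r, n = 2.5, 0.5, 3
lhs = abs_moment(n, t - r)                       # E|X_t - X_r|^n, variance t - r
rhs = (t - r) ** (n / 2.0) * abs_moment(n, 1.0)  # |t - r|^{n/2} E|X_1|^n
print(abs(lhs - rhs) < 1e-6)                     # True
```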

Remark 2.1.3 {W_t; t ∈ R_+} is a version of Brownian motion. We present another version that has useful properties. Let C[0, ∞) be the space of continuous real-valued functions {ω(t), t ∈ [0, ∞)} with the topology of uniform convergence on compact sets, and let B(C) be the Borel σ-algebra of C[0, ∞). Let (P̃, R^{R_+}, B(R^{R_+})) denote the probability space constructed above. The map

W : (R^{R_+}, B(R^{R_+})) → (C[0, ∞), B),  f ↦ W_·(f),

induces a map of the probability P̃ on (R^{R_+}, B(R^{R_+})) to a probability P on (C[0, ∞), B). Let W_t := ω(t). Then {W_t; t ∈ R_+} is a Brownian motion on (C[0, ∞), B). This version of Brownian motion, as a probability on the space of continuous functions, is called the canonical version of Brownian motion.

We mention two classical local limit laws for Brownian motion {W_t; t ∈ R_+} starting at 0. These laws are not difficult to prove, but we shall not do so at this point since their proofs are included in the more general Theorems 7.2.15 and 7.2.14; see also Example 7.4.13.

Khintchine's law of the iterated logarithm states that

lim sup_{δ→0} (W_{x+δ} − W_x) / (2δ log log(1/δ))^{1/2} = 1  a.s.    (2.14)

Lévy's uniform modulus of continuity states that, for any T > 0,

lim_{δ→0} sup_{0≤s<t≤T, t−s≤δ} (W_t − W_s) / (2δ log(1/δ))^{1/2} = 1  a.s.    (2.15)

At infinity, the law of the iterated logarithm takes the form

lim sup_{t→∞} W_t / (2t log log t)^{1/2} = 1  a.s.    (2.16)

This is obtained by applying (2.14) to W̃ := {W̃(t); t ∈ R_+}, for W̃(t) := tW(1/t) for t ≠ 0 and W̃(0) = 0, where {W(t); t ∈ R_+} is a Brownian motion. The point here is that W̃ is also a Brownian motion. We show this in Remark 2.1.6.

Remark 2.1.4 Let {W_t, t ∈ R+} be a Brownian motion starting at 0 on the probability space (Ω, F^0, P), where F^0 denotes the σ-algebra generated by W_s, 0 ≤ s < ∞. (We use a superscript for F^0, reserving F for the enlargement of F^0 introduced in Section 2.3.) For each x ∈ R^1 we define a probability P^x on F^0 by setting P^x(F(W_·)) = P(F(x + W_·)) for all measurable functions F on F^0. {W_t; t ∈ R+; P^x} is called a Brownian motion starting at x.


Using the properties of Brownian motion one can easily show that, for each t > 0, E^x(f(W_t)) is well defined for f ∈ B_b(R). For each t > 0 we define the transition operator P_t : B_b(R) → B_b(R) by

P_t f(x) = E^x(f(W_t)) = ∫ p_t(x, z) f(z) dz   (2.18)

and take P_0 = I, where I denotes the identity operator.

The Chapman–Kolmogorov equation (2.4) shows that {P_t; t ∈ R+} is a semigroup of operators, that is, for all f ∈ B_b(R), P_{t+s} f(·) = P_s(P_t f)(·). (This fact is often abbreviated by simply writing P_s P_t = P_{s+t}.)
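For the Gaussian kernel the semigroup property is just the Chapman–Kolmogorov identity ∫ p_t(x, y) p_s(y, z) dy = p_{t+s}(x, z). A short numerical sketch, not part of the text (the integration grid and test points are arbitrary choices):

```python
import math

def p(t, x, z):
    """Brownian transition density p_t(x, z)."""
    return math.exp(-(z - x) ** 2 / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

t, s, x, z = 0.5, 1.0, 0.3, -0.7           # arbitrary test values
h = 0.01
ys = [-30.0 + h * k for k in range(6001)]  # grid covering [-30, 30]

# Riemann sum for \int p_t(x, y) p_s(y, z) dy; the integrand decays so fast
# that the sum is extremely accurate.
conv = sum(p(t, x, y) * p(s, y, z) for y in ys) * h

print(conv, p(t + s, x, z))  # the two values agree
```

The convolution of the kernels at times t and s reproduces the kernel at time t + s.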

For measurable sets A ∈ B(R), set P_t(x, A) = P_t 1_A(x). We interpret P_t(x, A) as the probability that a Brownian motion, starting at x at time zero, takes a value in A at time t. Because of (2.19) we call p_t(x, z) the transition probability density function of Brownian motion.

For α > 0 the α-potential operator U^α is defined for bounded measurable functions by U^α f(x) = ∫_0^∞ e^{−αt} P_t f(x) dt, and its kernel is

u^α(x, z) = ∫_0^∞ e^{−αt} p_t(x, z) dt.   (2.22)


Using (2.18) we have

U^α f(x) = ∫ u^α(x, z) f(z) dz.   (2.23)

As in the case of P_t, for measurable sets A ∈ B(R), set U^α(x, A) = U^α 1_A(x). Because an equation similar to (2.19) holds in this case also, we call u^α the α-potential density of Brownian motion.

The numerical value of u^α is given in (2.5). In subsequent work we consider the 0-potential density of certain Markov processes. Since, for Brownian motion, p_t(x) ∼ t^{−1/2} as t → ∞, (2.22) is infinite when α = 0. Therefore, we say that the 0-potential density of Brownian motion is infinite.
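For Brownian motion the α-potential density has the well-known closed form u^α(x, z) = (2α)^{−1/2} e^{−(2α)^{1/2}|z−x|}; this formula is quoted here as standard background, since (2.5) itself is not reproduced above. It can be checked by integrating e^{−αt} p_t(x, z) numerically in t; the substitution t = u² removes the t^{−1/2} singularity at the origin:

```python
import math

alpha, x = 1.0, 1.0   # arbitrary; u^alpha depends only on alpha and |z - x|

# With t = u^2, e^{-alpha t} p_t(0, x) dt becomes
# (2/pi)^{1/2} exp(-alpha u^2 - x^2 / (2 u^2)) du, a smooth decaying integrand.
def integrand(u):
    if u == 0.0:
        return 0.0
    return math.sqrt(2.0 / math.pi) * math.exp(-alpha * u * u - x * x / (2.0 * u * u))

h = 0.001
numeric = sum(integrand(h * k) for k in range(10001)) * h   # u in [0, 10]

closed_form = math.exp(-math.sqrt(2.0 * alpha) * abs(x)) / math.sqrt(2.0 * alpha)
print(numeric, closed_form)  # agree to many digits
```

The numerical Laplace transform of the transition density matches the exponential closed form.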

Remark 2.1.5 For later use we note that Brownian motion is recurrent, that is, it takes on all values infinitely often, or, as is often said, it hits all points infinitely often. To see this note that {W(t), t ∈ R+} and {−W(t), t ∈ R+} have the same law. Therefore, by (2.16) we see that Brownian motion crosses the boundaries (t log log t)^{1/2} and −(t log log t)^{1/2} infinitely often, and, since it is continuous, it hits all points in between infinitely often.

Remark 2.1.6 Let W = {W(t); t ∈ R+} be a Brownian motion and consider W̃ = {W̃(t); t ∈ R+}, where W̃(t) = tW(1/t) for t ≠ 0 and W̃(0) = 0. For t > s we write W(t) = (W(t) − W(s)) + W(s). Then, using the fact that W has independent increments, we see that for all s, t ∈ R+

E W̃(t) W̃(s) = s ∧ t = E W(t) W(s).   (2.24)

We show in Section 5.1 that W and W̃ are both mean zero Gaussian processes, and, since mean zero Gaussian processes are determined uniquely by their covariances, they are equivalent as stochastic processes (see, in particular, Example 5.1.10). Here we give a direct proof of this equivalence, which does not require us to explicitly mention Gaussian processes.

Note that for any 0 < t_1 < · · · < t_n and a_1, . . . , a_n ∈ R^1,

Σ_{i=1}^n a_i W̃(t_i) = Σ_{i=1}^n a_i t_i W(1/t_i).

Therefore, using the independence of the increments of W, we see that Σ_{i=1}^n a_i W̃(t_i) is a normal random variable, and necessarily a mean zero normal random variable. It follows from this and (2.24) that W and W̃ have the same finite joint distributions. Consequently W̃ satisfies the first two conditions in the definition of Brownian motion. The fact that W̃ is continuous at any t ≠ 0 is obvious. The continuity at t = 0 follows from (2.12) and (2.13) with W̃ instead of X. Hence W̃ is a Brownian motion.
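The covariance computation behind (2.24) is pure arithmetic: E W̃(t)W̃(s) = ts E W(1/t)W(1/s) = ts (1/t ∧ 1/s) = s ∧ t. A two-line check of this identity on an arbitrary grid of times (illustration only, not from the text):

```python
# E(t W(1/t) * s W(1/s)) = t*s*min(1/t, 1/s), which should equal min(s, t)
times = [0.1, 0.5, 1.0, 2.0, 7.3]
cov_tilde = {(s, t): s * t * min(1.0 / s, 1.0 / t) for s in times for t in times}

assert all(abs(cov_tilde[(s, t)] - min(s, t)) < 1e-12 for s in times for t in times)
print("time-inversion covariance check passed")
```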

Remark 2.1.7 Let 0 < t_1 < · · · < t_n and let C denote the symmetric n × n matrix with C_{i,j} = t_i ∧ t_j, i, j = 1, . . . , n. Since

Σ_{i,j=1}^n a_i a_j (t_i ∧ t_j) = E( Σ_{i=1}^n a_i B_{t_i} )² > 0

unless all a_i = 0, we have that C is strictly positive definite and hence invertible. In this case the distribution of (B_{t_1}, . . . , B_{t_n}) in R^n has density

(2π)^{−n/2} (det C)^{−1/2} e^{−(x, C^{−1}x)/2}   (2.29)

with respect to Lebesgue measure. To see this we compute characteristic functions. By the change of variables x = C^{1/2}y,

(2π)^{−n/2} (det C)^{−1/2} ∫ e^{i Σ_{j=1}^n λ_j x_j} e^{−(x, C^{−1}x)/2} dx = (2π)^{−n/2} ∫ Π_{j=1}^n e^{i (C^{1/2}λ)_j y_j} e^{−y_j²/2} dy_j = e^{−Σ_{j=1}^n (C^{1/2}λ)_j²/2} = e^{−(C^{1/2}λ, C^{1/2}λ)/2} = e^{−(λ, Cλ)/2}.

Comparing this with (2.27) shows that (B_{t_1}, . . . , B_{t_n}) has probability density (2.29).
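The strict positive definiteness of C in Remark 2.1.7 can also be seen from the independent-increments construction: C = A D Aᵀ with A the lower-triangular matrix of ones and D = diag(t_1, t_2 − t_1, . . .), so (a, Ca) = Σ_k d_k (a_k + · · · + a_n)², a sum of squares with positive weights. A small sketch verifying this for one arbitrary choice of times (not from the text):

```python
ts = [0.5, 1.0, 2.5, 4.0]                                # arbitrary 0 < t_1 < ... < t_n
n = len(ts)
d = [ts[0]] + [ts[i] - ts[i - 1] for i in range(1, n)]   # increment variances

# C_{ij} = sum of the increment variances shared by B_{t_i} and B_{t_j},
# which is exactly t_i ^ t_j
C = [[sum(d[:min(i, j) + 1]) for j in range(n)] for i in range(n)]
assert all(abs(C[i][j] - min(ts[i], ts[j])) < 1e-12
           for i in range(n) for j in range(n))

# (a, Ca) = sum_k d_k (a_k + ... + a_n)^2 > 0 for a != 0
a = [1.0, -2.0, 0.5, 3.0]
quad = sum(C[i][j] * a[i] * a[j] for i in range(n) for j in range(n))
tails = [sum(a[k:]) for k in range(n)]
assert abs(quad - sum(dk * tk ** 2 for dk, tk in zip(d, tails))) < 1e-9
print(quad)  # strictly positive
```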

2.2 The Markov property

Let {W_t, t ∈ R+} be a Brownian motion. Let F^0_t denote the σ-algebra generated by {W_s; 0 ≤ s ≤ t}. We show below that

E^x( f(W_{t+s}) | F^0_t ) = P_s f(W_t)   (2.31)

for all s, t ≥ 0 and f ∈ B_b(R).

Since P_s f(W_t) is W_t measurable, (2.31) implies that, for all f ∈ B_b(R),

E^x( f(W_{t+s}) | F^0_t ) = E^x( f(W_{t+s}) | W_t ),   (2.32)


which says that the future of the process {W_t; t ≥ 0}, given all its past values up to the present, say time t_0, depends only on its present value, W(t_0). When (2.31) holds for a stochastic process {W_t; t ≥ 0}, we say that {W_t; t ≥ 0} satisfies the simple Markov property.
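The conditioning statement (2.32) can be illustrated by Monte Carlo. With f = 1_{(0,∞)} we have P_s f(w) = Φ(w/s^{1/2}), where Φ is the standard normal distribution function, so E^0(1_{{W_t>0}} f(W_{t+s})) and E^0(1_{{W_t>0}} P_s f(W_t)) should agree. A sketch (parameters, sample size, and seed are arbitrary choices, not from the text):

```python
import math
import random

random.seed(1)
t, s = 1.0, 0.5
n_paths = 100_000

def Phi(w):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(w / math.sqrt(2.0)))

lhs = rhs = 0.0
for _ in range(n_paths):
    w_t = math.sqrt(t) * random.gauss(0.0, 1.0)          # W_t
    w_ts = w_t + math.sqrt(s) * random.gauss(0.0, 1.0)   # W_{t+s}, independent increment
    if w_t > 0.0:
        lhs += 1.0 if w_ts > 0.0 else 0.0                # 1_{W_t>0} f(W_{t+s})
        rhs += Phi(w_t / math.sqrt(s))                   # 1_{W_t>0} P_s f(W_t)

lhs /= n_paths
rhs /= n_paths
print(lhs, rhs)  # approximately equal
```

Replacing the future value by its conditional expectation given W_t leaves the average unchanged, as the Markov property predicts.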

Proof We give two proofs of (2.31). In the first we use the fact that if Z is A measurable and Y is independent of A, then, for bounded measurable φ,

E( φ(Y, Z) | A ) = ψ(Z), where ψ(z) := E( φ(Y, z) ).   (2.33)

Taking A = F^0_t, Z = W_t, Y = W_{t+s} − W_t, and φ(y, z) = f(y + z), and noting that ψ = P_s f, gives (2.31).

For the second proof note that (2.31) is equivalent to the statement that

E^x( f(W_{t+s}) 1_A ) = E^x( P_s f(W_t) 1_A ) ∀ A ∈ F^0_t.   (2.34)

Since F^0_t is generated by sets of the form ∩_{i=1}^n {W_{t_i} ≤ b_i}, where 0 = t_0 < t_1 < · · · < t_n ≤ t, it suffices to verify (2.34) for sets of this form. More generally we show that, for functions of the form Π_{i=1}^n f_i(W_{t_i}) with f_i ∈ B_b(R), i = 1, . . . , n, and 0 = t_0 < t_1 < · · · < t_n ≤ t,

E^x( Π_{i=1}^n f_i(W_{t_i}) f(W_{t+s}) ) = E^x( Π_{i=1}^n f_i(W_{t_i}) P_s f(W_t) ),   (2.35)

thus completing the second proof.


Let {G_t; t ≥ 0} be an increasing family of σ-algebras with F^0_t ⊆ G_t for all t ≥ 0. We say that the Brownian motion {W_t, t ∈ R+} is a simple Markov process with respect to {G_t; t ≥ 0} if, for all x ∈ R^1,

E^x( f(W_{t+s}) | G_t ) = P_s f(W_t)   (2.38)

for all s, t ≥ 0 and f ∈ B_b(R).

Let {W_t, t ∈ R+} be a Brownian motion on the probability space (Ω, F^0). We assume that there exists a family {θ_t, t ∈ R+} of shift operators for W, that is, operators θ_t : (Ω, F^0) → (Ω, F^0) with

θ_t ∘ θ_s = θ_{t+s} and W_t ∘ θ_s = W_{t+s} ∀ s, t ≥ 0.   (2.39)

For example, for the canonical version of Brownian motion, we can take the shift operators θ_t to be defined by

θ_t(ω)(s) = ω(t + s) ∀ s, t ≥ 0.   (2.40)

In general, for any random variable Y on (Ω, F^0), Y ∘ θ_t := Y(θ_t).

Lemma 2.2.2 If {W_t, t ∈ R+} is a simple Markov process with respect to {G_t; t ≥ 0}, then

E^x( Y ∘ θ_t | G_t ) = E^{W_t}(Y)   (2.41)


for all F^0 measurable functions Y and all x ∈ R^1.

Proof It suffices to prove (2.41) for Y of the form Y = Π_{i=1}^n g_i(W_{t_i}), with 0 < t_1 < · · · < t_n and g_i ∈ B_b(S), i = 1, . . . , n. In this case

E^x( Y ∘ θ_t | G_t ) = E^x( Π_{i=1}^{n−1} g_i(W_{t+t_i}) E^x( g_n(W_{t+t_n}) | G_{t+t_{n−1}} ) | G_t )   (2.42)

and, by (2.38), the inner conditional expectation is P_{t_n − t_{n−1}} g_n(W_{t+t_{n−1}}). Iterating this procedure expresses the left-hand side of (2.41) as a fixed function of W_t; the right-hand side of (2.41) is clearly equal to the same function of W_t.

We now provide an alternative, more explicit proof for the important case in which G_t = F^0_t for all t ≥ 0. Thus we show that, for Y as above,

E^x( (Y ∘ θ_t) Z ) = E^x( E^{W_t}(Y) Z ),

where Z = Π_{j=1}^m f_j(W_{s_j}), with 0 < s_1 < · · · < s_m ≤ t and f_j ∈ B(S), j = 1, . . . , m.


Carrying out the integration ∫ p_{t−s_m}(z_m, y) p_{t_1}(y, y_1) dy = p_{t−s_m+t_1}(z_m, y_1), we get (2.46).

In the development of the theory of Markov processes it is crucial to extend the simple Markov property, which holds for fixed times, to certain random times. Let (S, G, G_t, P) be a probability space, where G_t is an increasing family of σ-algebras and, as usual, G = ∪_{t≥0} G_t. A random variable T with values in [0, ∞] is called a G_t stopping time if {T ≤ t} is G_t measurable for all t ≥ 0.

Let T be a G_t stopping time and let

G_T = {A ⊆ S | A ∩ {T ≤ t} ∈ G_t, ∀ t}.

Set G_{t+} = ∩_{h>0} G_{t+h}. {G_{t+}; t ≥ 0} is an increasing family of σ-algebras. It is easy to check that T is a G_{t+} stopping time (that is, {T ≤ t} is G_{t+} measurable for all t ≥ 0) if and only if {T < t} is G_t measurable for all t. Similar to G_T we define G_{T+} = {A ⊆ S | A ∩ {T ≤ t} ∈ G_{t+}, ∀ t}. One can check that G_{T+} = {A ⊆ S | A ∩ {T < t} ∈ G_t, ∀ t}.

Remark 2.2.3 We wrote the last three paragraphs on stopping times without specifically mentioning Brownian motion because they are relevant for general Markov processes. We did this so that in Chapter 3 we can discuss stopping times without defining them again. Several concepts that are developed in this chapter for the study of Brownian motion are presented in this way.

Let W = {W(t), t ∈ R+} be a Brownian motion on (Ω, F^0, F^0_t, P^x). For A ⊆ R^1 let T_A = inf{t ≥ 0 | W_t ∈ A}, the first hitting time of A.

Lemma 2.2.4 If A ⊆ R^1 is open, then T_A is an F^0_{t+} stopping time. If A ⊆ R^1 is closed, then T_A is an F^0_t stopping time.

Proof Suppose that A ⊆ R^1 is open. Then T_A < t if and only if W_s ∈ A for some rational number 0 < s < t. Therefore {T_A < t} ∈ F^0_t.

Let d(x, A) := inf{|x − y|, y ∈ A}. Since d(x, A) is continuous, the sets A_n = {x ∈ R^1 | d(x, A) < 1/n} are open, and if A ⊆ R^1 is closed, A_n ↓ A. Using the facts that A is closed and each A_n is open, together with the continuity of Brownian motion, we see that if t > 0,

{T_A ≤ t} = ∪_{m=1}^∞ ∩_{n=1}^∞ {W_s ∈ A_n for some rational 1/m ≤ s ≤ t}.

Thus {T_A ≤ t} ∈ F^0_t for t > 0. The case of t = 0 is immediate since, for a closed set A, {T_A = 0} = {W_0 ∈ A} ∈ F^0_0.

The next lemma, which generalizes the simple Markov property of Brownian motion so that it holds for stopping times, is called the strong Markov property for Brownian motion.


Lemma 2.2.5 If the Brownian motion {W_t, t ∈ R+} is a simple Markov process with respect to {G_t; t ≥ 0}, then for any G_{t+} stopping time T

E^x( f(W_{T+s}) 1_{{T<∞}} | G_{T+} ) = P_s f(W_T) 1_{{T<∞}}   (2.52)

for all s ≥ 0, x ∈ R^1, and bounded Borel measurable functions f.

Proof We need to show two things: first, that P_s f(W_T) 1_{{T<∞}} is G_{T+} measurable for each bounded Borel measurable function f and, second, that

E^x( f(W_{T+s}) 1_{{T<∞}} 1_A ) = E^x( P_s f(W_T) 1_{{T<∞}} 1_A ) ∀ A ∈ G_{T+}.   (2.53)

For each of these assertions it suffices to consider the case in which f is also continuous.

Let T_n = ([2^n T] + 1)/2^n, that is, T_n = j/2^n if and only if (j − 1)/2^n ≤ T < j/2^n, and check that, for each n, T_n is a G_t stopping time and T_n ↓ T. Furthermore, T_n < ∞ if and only if T < ∞.
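The dyadic approximation T_n is elementary to check by direct computation: T_n is the smallest dyadic rational of order n strictly greater than T, so T < T_n ≤ T + 2^{−n} and T_n decreases to T. A sketch (the value of T is an arbitrary choice):

```python
import math

def dyadic_upper(T, n):
    """T_n = ([2^n T] + 1) / 2^n: equals j/2^n iff (j-1)/2^n <= T < j/2^n."""
    return (math.floor(2 ** n * T) + 1) / 2 ** n

T = 0.3
approx = [dyadic_upper(T, n) for n in range(1, 20)]

assert all(T < Tn <= T + 2 ** -n for n, Tn in zip(range(1, 20), approx))
assert all(a >= b for a, b in zip(approx, approx[1:]))   # T_n decreases to T
print(approx[:4])
```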

Using the continuity of {W_t, t ≥ 0}, and the continuity of P_s f for continuous f, we have

lim_{n→∞} P_s f(W_{T_n}) 1_{{T_n<∞}} = P_s f(W_T) 1_{{T<∞}}.   (2.54)

We proceed to verify (2.53). We claim that this equation holds with T replaced by T_n. Once this is established, (2.53) itself follows by taking the limit as n goes to infinity and using continuity. Replace T by T_n in the left-hand side of (2.53) and note that since A ∈ G_{T+} we have A ∩ {T_n = j/2^n} ∈ G_{j/2^n}. Then, using the simple Markov property on each of the sets A ∩ {T_n = j/2^n} and summing over j, we see that (2.53) holds with T replaced by T_n.

Remark 2.2.6 A similar proof shows that for any G_t stopping time T

E^x( f(W_{T+s}) 1_{{T<∞}} | G_T ) = P_s f(W_T) 1_{{T<∞}}   (2.55)

for all s ≥ 0, x ∈ R^1, and bounded Borel measurable functions f.

In the same way that we obtained (2.41) using (2.38), we use (2.52) to obtain the next lemma.

Lemma 2.2.7 The strong Markov property (2.52) for the G_{t+} stopping time T can be extended to

E^x( Y ∘ θ_T 1_{{T<∞}} | G_{T+} ) = E^{W_T}(Y) 1_{{T<∞}}   (2.56)

for all F^0 measurable functions Y and x ∈ R^1.

Using (2.55), a proof similar to the proof of Lemma 2.2.7 shows that for the G_t stopping time T

E^x( Y ∘ θ_T 1_{{T<∞}} | G_T ) = E^{W_T}(Y) 1_{{T<∞}}   (2.57)

for all F^0 measurable functions Y and x ∈ R^1.

Remark 2.2.8 Since a constant time t is an F^0_t stopping time, it follows from (2.56) and (2.57) that, for any t ≥ 0,

E^x( Y ∘ θ_t | F^0_{t+} ) = E^{W_t}(Y) = E^x( Y ∘ θ_t | F^0_t )   (2.58)

for all F^0 measurable functions Y and x ∈ R^1. This implies that

E^x( Z | F^0_{t+} ) = E^x( Z | F^0_t )   (2.59)

for all F^0 measurable functions Z; it suffices to verify it for Z of the form Z = (Y ∘ θ_t)V, where V ∈ F^0_t, since such functions generate F^0. For Z of this form (2.59) follows from (2.58).

In particular, for each A ∈ F^0_{t+} the set A′ = {E^x(1_A | F^0_t) = 1} belongs to F^0_t. By (2.59) applied to Z = 1_A we see that 1_A = 1_{A′}, P^x almost surely, that is, P^x(A∆A′) = 0.

Remark 2.2.8 leads to the next important lemma

Lemma 2.2.9 (Blumenthal Zero–One Law for Brownian Motion) Let A ∈ F^0_{0+}. Then, for any x ∈ R^1, P^x(A) is either zero or one.

Proof Note that F^0_0 = σ(W_0), the σ-algebra generated by W_0. Thus any set A′ ∈ F^0_0 must be of the form A′ = W_0^{−1}(B) for some Borel set B ⊆ R^1. Therefore P^x(A′) = P^x(W_0 ∈ B) = 1_B(x), so that P^x(A′) is either zero or one. Applying the results stated in the second paragraph of Remark 2.2.8, in the case t = 0, we see that for A ∈ F^0_{0+}, P^x(A∆A′) = 0 for some A′ ∈ F^0_0. Thus P^x(A) is either zero or one.

The following simple lemma is used in the next section.

Lemma 2.2.10 Let (S, G, G_t, P) be a probability space, where G_t is an increasing family of σ-algebras, and let S and T be G_t stopping times. Then the sets

{S ≤ T}, {S < T}, and {S = T} ∈ G_T.   (2.60)

Proof To see this simply use the facts that

{S ≤ T} ∩ {T ≤ t} = {S ≤ t} ∩ {T ≤ t} ∩ {S ∧ t ≤ T ∧ t} ∈ G_t   (2.61)

and the analogous identity with S and T interchanged, which shows that {T ≤ S} ∈ G_T as well. Then {S < T} = {T ≤ S}^c and {S = T} = {S ≤ T} ∩ {T ≤ S}.

Since for α > 0, e^{−αT_x} = e^{−αT_x} 1_{{T_x<∞}}, and W_{T_x} = x, by the strong Markov property, and since P^0(T_x ≤ t) is continuous in t, we have

P^0(W_t ≥ x) = (1/2) P^0(T_x ≤ t).   (2.69)

The lemma follows from the fact that

P^0(T_x ≤ t) = P^0( sup_{s≤t} W_s ≥ x ).
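The reflection identity P^0(T_x ≤ t) = 2 P^0(W_t ≥ x) can be checked by simulating discretized paths; the discrete maximum slightly undercounts {sup_{s≤t} W_s ≥ x}, so only rough agreement is expected. A Monte Carlo sketch (step count, sample size, and seed are arbitrary choices, not from the text):

```python
import math
import random

random.seed(2)
t, x = 1.0, 1.0
n_steps, n_paths = 250, 10_000
sd = math.sqrt(t / n_steps)   # standard deviation of one increment

hit = end_above = 0
for _ in range(n_paths):
    w = w_max = 0.0
    for _ in range(n_steps):
        w += sd * random.gauss(0.0, 1.0)
        if w > w_max:
            w_max = w
    if w_max >= x:
        hit += 1          # proxy for {T_x <= t} = {sup_{s<=t} W_s >= x}
    if w >= x:
        end_above += 1    # {W_t >= x}

print(hit / n_paths, 2.0 * end_above / n_paths)  # roughly equal
```

The frequency of paths whose running maximum reaches x is close to twice the frequency of paths ending above x.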

We now enlarge the σ-algebras F^0_t, t ≥ 0. We do this by using a construction referred to as the standard augmentation of F^0_t with respect to P^x, or just the standard augmentation, when it is clear to which σ-algebras and probability measures we are referring. Heuristically, F^0 is augmented with null sets.

... thatW and W are both mean zero Gaussian
