From Markov Chains to Non-Equilibrium Particle Systems, Second Edition



Published by

World Scientific Publishing Co Pte Ltd

5 Toh Tuck Link, Singapore 596224

USA office: Suite 202, 1060 Main Street, River Edge, NJ 07661

UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE

British Library Cataloguing-in-Publication Data

A catalogue record for this book is available from the British Library

FROM MARKOV CHAINS TO NON-EQUILIBRIUM PARTICLE SYSTEMS (2nd Edition)

Copyright © 2004 by World Scientific Publishing Co. Pte. Ltd.

All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.

ISBN 981-238-811-7

Printed in Singapore


Contents

Preface to the First Edition ix

Preface to the Second Edition xi

Chapter 0 An Overview of the Book: Starting from Markov Chains 1

0.1 Three Classical Problems for Markov Chains 1

0.2 Probability Metrics and Coupling Methods 6

0.3 Reversible Markov Chains 13

0.4 Large Deviations and Spectral Gap 15

0.5 Equilibrium Particle Systems 17

0.6 Non-equilibrium Particle Systems 19

Part I General Jump Processes 21

Chapter 1 Transition Function and its Laplace Transform 23

1.1 Basic Properties of Transition Function 23

1.2 The q-Pair 27

1.3 Differentiability 38

1.4 Laplace Transforms 51

1.5 Appendix 57

1.6 Notes 61

Chapter 2 Existence and Simple Constructions of Jump Processes 62

2.1 Minimal Nonnegative Solutions 62

2.2 Kolmogorov Equations and Minimal Jump Process 70

2.3 Some Sufficient Conditions for Uniqueness 79

2.4 Kolmogorov Equations and q-Condition 85

2.5 Entrance Space and Exit Space 88

2.6 Construction of q-Processes with Single-Exit q-Pair 93

2.7 Notes 96

Chapter 3 Uniqueness Criteria 97

3.1 Uniqueness Criteria Based on Kolmogorov Equations 97

3.2 Uniqueness Criterion and Applications 102

3.3 Some Lemmas 113

3.4 Proof of Uniqueness Criterion 115

3.5 Notes 119


Chapter 4 Recurrence, Ergodicity and Invariant Measures 120

4.1 Weak Convergence 120

4.2 General Results 124

4.3 Markov Chains: Time-discrete Case 130

4.4 Markov Chains: Time-continuous Case 139

4.5 Single Birth Processes 151

4.6 Invariant Measures 166

4.7 Notes 171

Chapter 5 Probability Metrics and Coupling Methods 173

5.1 Minimum Lp-Metric 173

5.2 Marginality and Regularity 184

5.3 Successful Coupling and Ergodicity 195

5.4 Optimal Markovian Couplings 203

5.5 Monotonicity 210

5.6 Examples 216

5.7 Notes 223

Part II Symmetrizable Jump Processes 225

Chapter 6 Symmetrizable Jump Processes and Dirichlet Forms 227

6.1 Reversible Markov Processes 227

6.2 Existence 229

6.3 Equivalence of Backward and Forward Kolmogorov Equations 233

6.4 General Representation of Jump Processes 233

6.5 Existence of Honest Reversible Jump Processes 243

6.6 Uniqueness Criteria 249

6.7 Basic Dirichlet Form 255

6.8 Regularity, Extension and Uniqueness 265

6.9 Notes 270

Chapter 7 Field Theory 272

7.1 Field Theory 272

7.2 Lattice Field 276

7.3 Electric Field 280

7.4 Transience of Symmetrizable Markov Chains 284

7.5 Random Walk on Lattice Fractals 298

7.6 A Comparison Theorem 300

7.7 Notes 302


Chapter 8 Large Deviations 303

8.1 Introduction to Large Deviations 303

8.2 Rate Function 311

8.3 Upper Estimates 320

8.4 Notes 329

Chapter 9 Spectral Gap 330

9.1 General Case: an Equivalence 330

9.2 Coupling and Distance Method 340

9.3 Birth-Death Processes 348

9.4 Splitting Procedure and Existence Criterion 359

9.5 Cheeger's Approach and Isoperimetric Constants 368

9.6 Notes 380

Part III Equilibrium Particle Systems 381

Chapter 10 Random Fields 383

10.1 Introduction 383

10.2 Existence 387

10.3 Uniqueness 391

10.4 Phase Transition: Peierls Method 397

10.5 Ising Model on Lattice Fractals 399

10.6 Reflection Positivity and Phase Transitions 406

10.7 Proof of the Chess-Board Estimates 416

10.8 Notes 421

Chapter 11 Reversible Spin Processes and Exclusion Processes 422

11.1 Potentiality for Some Speed Functions 422

11.2 Constructions of Gibbs States 425

11.3 Criteria for Reversibility 432

11.4 Notes 446

Chapter 12 Yang-Mills Lattice Field 447

12.1 Background 447

12.2 Spin Processes from Yang-Mills Lattice Fields 448

12.3 Diffusion Processes from Yang-Mills Lattice Fields 457

12.4 Notes 466


Part IV Non-equilibrium Particle Systems 467

Chapter 13 Constructions of the Processes 469

13.1 Existence Theorems for the Processes 469

13.2 Existence Theorem for Reaction-Diffusion Processes 486

13.3 Uniqueness Theorems for the Processes 493

13.4 Examples 502

13.5 Appendix 510

13.6 Notes 513

Chapter 14 Existence of Stationary Distributions and Ergodicity 514

14.1 General Results 514

14.2 Ergodicity for Polynomial Model 521

14.3 Reversible Reaction-Diffusion Processes 532

14.4 Notes 538

Chapter 15 Phase Transitions 539

15.1 Duality 539

15.2 Linear Growth Model 542

15.3 Reaction-Diffusion Processes with Absorbing State * 547

15.4 Mean Field Method 550

15.5 Notes 554

Chapter 16 Hydrodynamic Limits 555

16.1 Introduction: Main Results 555

16.2 Preliminaries 559

16.3 Proof of Theorem 16.1 564

16.4 Proof of Theorem 16.3 570

16.5 Notes 571

Bibliography 572

Author Index 589

Subject Index 593


Preface to the First Edition

The main purpose of the book is to introduce some progress in probability theory and its applications to physics, made by Chinese probabilists, especially by a group at Beijing Normal University, in the past 15 years. Up to now, most of this work has been available only to Chinese-speaking readers. In order to make the book as self-contained as possible and suitable for a wider range of readers, a fundamental part of the subject, contributed by many mathematicians from different countries, is also included. The book starts with some new contributions to the classical subject of Markov chains, then goes on to general jump processes and symmetrizable jump processes, equilibrium particle systems and non-equilibrium particle systems. Accordingly, the book is divided into four parts. An elementary overview of the book is presented in Chapter 0. Some notes on the bibliographies and open problems are collected in the last section of each chapter. It is hoped that the book will be useful for both experts and newcomers, not only for mathematicians but also for researchers in related areas such as mathematical physics, chemistry and biology.

The present book is based on the book "Jump Processes and Particle Systems" by the author, published five years ago by the Press of Beijing Normal University. About 1/3 of the material is newly added. Even the materials retained from the Chinese edition are either reorganized or simplified, and some of them are removed. A part of the Chinese book was used several times for graduate students; the material in Chapter 0 was even used twice for undergraduate students in a course on stochastic processes. Moreover, the galley proof of the present book has been used for graduate students in their second and third semesters.

The author would like to express his warmest gratitude to Professor Z. T. Hou, Professor D. W. Stroock and Professor S. J. Yan for their teachings and advice. Their influence can be found almost everywhere in the book. In the past 15 years, the author has benefited from a large number of colleagues, friends and students, too many to list individually here. However, most of their names appear in the "Notes" sections, as well as in the Bibliography and in the Index of the book. Their contributions and cooperation are greatly appreciated. The author is indebted to Professor X. F. Liu, Y. J. Li, B. M. Wang, X. L. Wang, J. Wu, S. Y. Zhang and Y. H. Zhang for reading the galley proof, correcting errors and improving the quality of the presentation. It is a nice chance to acknowledge the financial support during the past years by the Fok Ying-Tung Educational Foundation, the Foundation of Institution of Higher Education for Doctoral Program, the Foundation of the State Education Commission for Outstanding Young Teachers and the National Natural Science Foundation of China. Thanks are also expressed to World Scientific for their efforts in publishing the book.

M. F. Chen
Beijing, November 18, 1991


Preface to the Second Edition

The main change in this second edition concerns Chapter 5 on "Probability Metrics and Coupling Methods" and Chapter 9 on "Spectral Gap" (or equivalently, "the first non-trivial eigenvalue"). Actually, these two chapters have been rewritten within the original framework. In the former chapter, the topic of "optimal Markovian couplings" is added and the "stochastic comparability" for jump processes is completed. In the latter chapter, two general results on estimating the spectral gap by couplings and two dual variational formulas for the spectral gap of birth-death processes are added. Moreover, a generalized Cheeger's approach is developed for unbounded jump processes. Next, Section 4.5 on "Single Birth Processes" and Section 14.2 on "Ergodicity of Reaction-Diffusion Processes" are updated, but the original technical Section 14.3 is removed. Besides, a large number of recent publications are included, and numerous modifications, improvements or corrections are made on almost every page. It is hoped that this serious effort improves the quality of the book and brings the reader to enjoy some of the recent developments.

Roughly speaking, this book deals with two subjects: Markov jump processes (Parts I and II) and interacting particle systems (Parts III and IV). If one is interested only in the second subject, it is not necessary to read all of the first nine chapters; instead, one may have a look at Chapters 4, 5, 7 and 9 plus a few related sections. A quick way to read the book is to glance through the elementary Chapter 0, to get some impression of what is studied in the book, to have some taste of the results, and to choose what to read further. Sometimes I feel it was crazy to write such a thick book; this is due to the wide range of topics covered. Even though the book could easily be shortened by removing some details, the result would be much less readable. Anyhow, I believe that the reader can make the book thinner and thinner.

A concrete model used throughout the whole book is Schlögl's (second) model, which is introduced at the beginning (Example 0.3) to show the power of our first main result, and whose unsolved problems are discussed right after the last theorem (Theorem 16.3) of the book. This model, completely different from the Ising model, is typical of non-equilibrium statistical physics. Its generalization is the polynomial model or, more generally, the class of reaction-diffusion processes. Locally, these models are Markov chains. But even in this case, the uniqueness problem for the process was open for several years, though everyone working in this field believed so. From the physical point of view, the Markov chains should be ergodic, and this is finally proved in Chapter 4. Thus, to study the phase transitions, we have to go to the infinite-dimensional setting. The first hard stone is the construction of the corresponding Markov processes, for which the mathematical tool is prepared in Chapter 5 and the construction is done in Chapter 13. The model is essentially irreversible; it can be reversible (equilibrium) only in a special case. The proof of a criterion for the reversibility is prepared in Chapter 7 and completed in Chapter 14. The topics studied in almost every chapter are either led by or related to Schlögl's model, even though sometimes this is not explicitly mentioned. Actually, the last four chapters are all devoted to the reaction-diffusion processes.

The Schlögl model possesses the main characteristics of current mathematics: it is infinite-dimensional, non-linear, a complex system, and so on. It provides us a chance to re-examine the well-developed finite-dimensional mathematics and to create new mathematical tools or new research topics. It is not surprising that many ideas and results from different branches of mathematics, as well as physics, are used in the book. However, it is surprising that the methods developed in this book turn out to have a deep application to Riemannian geometry and spectral theory. This is clearly a different story. Since so much progress has been made in the past ten years or more, a large part of the new material is beyond the scope of this book, and the author has decided to write a separate book under the title "Eigenvalues, Inequalities and Ergodic Theory".

It is a pleasure to recall the fruitful cooperation with my previous students and colleagues: Y. H. Mao, F. Y. Wang, Y. Z. Wang, S. Y. Zhang, Y. H. Zhang et al. Their contributions remarkably heighten the quality of the book. The author acknowledges the financial support during the past years by the Research Fund for Doctoral Program of Higher Education, the National Natural Science Foundation of China, the Qiu Shi Science and Technology Foundation and the 973 Project. Thanks are also expressed to World Scientific for their efforts in publishing this new edition of the book.

M. F. Chen
Beijing, August 29, 2003


Chapter 0

An Overview of the Book: Starting from Markov Chains

In this chapter, we introduce some background on the topics studied in this book, as well as some of the results and ideas. We emphasize Markov chains and discuss our problems in language as elementary and concrete as possible. Besides, in order to save space, we omit most of the references here; they are pointed out in the related "Notes" sections.

0.1 Three Classical Problems for Markov Chains

For a given transition rate (i.e., a Q-matrix Q = (q_ij) on a countable state space), the uniqueness of the Q-semigroup P(t) = (P_ij(t)), the recurrence and the positive recurrence of the corresponding Markov chain are three fundamental and classical problems, treated in many textbooks. As an addition, this section introduces some practical results motivated by the study of a type of interacting particle systems: reaction-diffusion processes.

Definition 0.1. Let E be a countable set. Suppose that (P_ij(t)) is a sub-Markov transition probability matrix having the following properties:

(1) P_ij(t) ≥ 0 and Σ_j P_ij(t) ≤ 1 for all i ∈ E and t ≥ 0;

(2) Chapman–Kolmogorov equation: P_ij(t + s) = Σ_k P_ik(t) P_kj(s);

(3) jump condition: lim_{t→0} P_ij(t) = δ_ij for all i, j ∈ E.

It is well known that for such a (P_ij(t)), we have a Q-matrix Q = (q_ij) deduced by q_ij = P'_ij(0), i, j ∈ E.


Because of the Q-condition, we often call P(t) = (P_ij(t)) a Q-process.

Unless otherwise stated, throughout this chapter we suppose that the Q-matrix Q = (q_ij) is totally stable and conservative; that is,

$$q_i := -q_{ii} < \infty, \qquad \sum_{j \ne i} q_{ij} = q_i, \qquad i \in E. \tag{0.1}$$

The first problem of our study is when there is only one Q-process P(t) = (P_ij(t)) for a given Q-matrix Q = (q_ij). (Then the matrix Q is often called regular.) This problem was solved by Feller (1957) and Reuter (1957).

Theorem 0.2 (Uniqueness criterion). For a given Q-matrix Q = (q_ij), the Q-process (P_ij(t)) is unique if and only if (abbrev. iff) the equation

$$(\lambda + q_i)\,u_i = \sum_{j \ne i} q_{ij}\,u_j, \qquad 0 \le u_i \le 1, \quad i \in E \tag{0.2}$$

has only the trivial solution u_i ≡ 0 for some (equivalently, for all) λ > 0.

Certainly, this criterion has many applications. For instance, it gives a complete answer for birth-death processes (cf. Corollary 0.8 below). However, it seems hard to apply the above criterion directly to the following examples.

Example 0.3 (Schlögl's model). Let S be a finite set and E = Z₊^S, where Z₊ = {0, 1, ...}. The model is defined by the following Q-matrix Q = (q(x, y) : x, y ∈ E):

$$q(x, y) = \begin{cases} \lambda_1 \binom{x(u)}{2} + \lambda_4 & \text{if } y = x + e_u, \\[2pt] \lambda_2 \binom{x(u)}{3} + \lambda_3\,x(u) & \text{if } y = x - e_u, \\[2pt] x(u)\,p(u, v) & \text{if } y = x - e_u + e_v, \\[2pt] 0 & \text{for other } y \ne x, \end{cases} \qquad q(x) = -q(x, x) = \sum_{y \ne x} q(x, y),$$

where x = (x(u) : u ∈ S), $\binom{\cdot}{\cdot}$ is the usual binomial coefficient, λ₁, ..., λ₄ are positive constants, (p(u, v) : u, v ∈ S) is a transition probability matrix on S, and e_u is the element of E having value 1 at u and 0 elsewhere.

The Schlögl model describes a chemical reaction with diffusion in a container. Suppose that the container consists of small vessels. In each vessel u ∈ S, there is a reaction described by a birth-death process, whose birth and death rates are given, respectively, by the first two lines in the definition of (q(x, y)). Moreover, between any two vessels u and v there is a diffusion, with rate given by the third line of the definition. This model was introduced by F. Schlögl (1972) as a typical model of non-equilibrium systems; see Haken (1983) for related references.
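The vessel dynamics just described can be simulated directly with the Gillespie (stochastic simulation) algorithm. The sketch below is not from the book: the rate expressions follow the form of Example 0.3 as reconstructed above, and all function and parameter names are illustrative.

```python
import random

def comb(n, k):
    """Binomial coefficient C(n, k) for small k."""
    r = 1
    for i in range(k):
        r = r * (n - i) // (i + 1)
    return r

def schlogl_step(x, p, lam, rng):
    """One Gillespie jump of the finite Schlogl model. x maps each vessel to
    its particle count; p maps (u, v) to the diffusion probability p(u, v);
    lam = (l1, l2, l3, l4). Returns (waiting_time, new_state)."""
    l1, l2, l3, l4 = lam
    events = []  # pairs (rate, (vessel u, change at u, target vessel or None))
    for u in x:
        events.append((l1 * comb(x[u], 2) + l4, (u, +1, None)))            # birth
        if x[u] > 0:
            events.append((l2 * comb(x[u], 3) + l3 * x[u], (u, -1, None)))  # death
            for v in x:                                                     # diffusion u -> v
                if v != u and p.get((u, v), 0.0) > 0:
                    events.append((x[u] * p[(u, v)], (u, -1, v)))
    total = sum(rate for rate, _ in events)
    wait = rng.expovariate(total)       # exponential holding time
    r, acc = rng.random() * total, 0.0
    for rate, (u, du, v) in events:     # pick an event proportionally to its rate
        acc += rate
        if r <= acc:
            y = dict(x)
            y[u] += du
            if v is not None:
                y[v] += 1
            return wait, y
    return wait, x

# illustrative run on two vessels exchanging particles symmetrically
rng = random.Random(1)
x = {0: 2, 1: 1}
p = {(0, 1): 0.5, (1, 0): 0.5}
for _ in range(100):
    _, x = schlogl_step(x, p, (1.0, 1.0, 1.0, 1.0), rng)
assert all(count >= 0 for count in x.values())
```

Since λ₄ > 0, the total rate is always positive and the chain never gets stuck; by Theorem 0.11 below, the finite-dimensional model drifts back from large particle loads.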


Example 0.4 (Dual chain of a spin system). Let S be a countable set, and let 𝒳 be the set of all finite subsets of S. For A ∈ 𝒳, let |A| denote the number of elements of A. For various concrete models, the Q-matrices (q(A, B) : A, B ∈ 𝒳) usually satisfy the following condition:

$$\sum_{B \in \mathscr{X}} q(A, B)\,\big(|B| - |A|\big) \le C + c\,|A|, \qquad A \in \mathscr{X}, \tag{0.3}$$

for some constants C, c ∈ ℝ := (−∞, ∞). A particular case is

$$q(A, B) = \sum_{u \in A}\; \sum_{F :\, (A \setminus \{u\})\, \triangle\, F = B} c(u)\,p(u, F),$$

where

$$c(u) \ge 0, \qquad \sup_u c(u) < \infty, \qquad \sup_u c(u) \sum_{A} p(u, A)\,|A| < \infty.$$

Then (0.3) holds with C = 0 and c = sup_u c(u) Σ_F p(u, F)(|F| − 1).

Intuitively, we can interpret this Markov chain as follows. Let A be the (finite!) set of sites occupied by particles; at each site there is at most one particle. The process then evolves in the following way: each u ∈ A is removed from A at rate c(u) and is replaced by a set F with probability p(u, F); when an attempt is made to put a point at a site which is already occupied, the two points annihilate one another. The dual chain of a spin system is often used as a dual process of an infinite particle system. This dual approach is one of the main powerful tools in the study of infinite particle systems (cf. Liggett (1985), Chapter 3, Section 4).
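The evolution just described is easy to simulate. The following sketch is illustrative only — the helper names `c` and `sample_F` are assumptions, standing for the removal rates c(u) and the replacement distribution p(u, ·) — and the annihilation rule is implemented as a symmetric difference, matching the interpretation above.

```python
import random

def dual_chain_step(A, c, sample_F, rng):
    """One jump of the set-valued dual chain: a site u in A is chosen with
    probability proportional to its rate c(u), removed, and replaced by a
    random finite set F drawn by sample_F; points landing on occupied sites
    annihilate, so the new state is the symmetric difference of A - {u} and F."""
    if not A:                      # the empty set is absorbing
        return A
    sites = sorted(A)
    weights = [c(u) for u in sites]
    r, acc = rng.random() * sum(weights), 0.0
    for u, w in zip(sites, weights):
        acc += w
        if r <= acc:
            return (A - {u}) ^ sample_F(u, rng)
    return A

# illustrative run: constant rates; F spreads to the two neighbours or dies out
rng = random.Random(3)
spread = lambda u, g: {u - 1, u + 1} if g.random() < 0.5 else set()
A = {0, 1, 2}
for _ in range(30):
    A = dual_chain_step(A, lambda u: 1.0, spread, rng)
assert isinstance(A, set)
```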

Now, we state our first main result.

Theorem 0.5. Let Q = (q_ij) be a Q-matrix on E. Suppose that there exist a sequence {E_n}₁^∞ of subsets of E with E_n ↑ E and sup_{i ∈ E_n} q_i < ∞ for each n, and a non-negative function φ satisfying lim_{n→∞} inf_{i ∉ E_n} φ_i = ∞. If in addition

$$\sum_{j} q_{ij}\,\varphi_j \le c\,\varphi_i, \qquad i \in E \tag{0.4}$$

holds for some c ∈ ℝ, then the Q-process is unique.

To compare this theorem with Criterion 0.2, we reformulate Criterion 0.2 as follows.


Theorem 0.6 (Alternative uniqueness criterion). Given a Q-matrix Q = (q_ij), for the uniqueness of the Q-process it is sufficient that the inequality

$$\sum_{j} q_{ij}\,\varphi_j \ge \lambda\,\varphi_i, \qquad i \in E$$

has no bounded solution (φ_i : i ∈ E) with sup_i φ_i > 0 for some (equivalently, for all) λ > 0. Conversely, these conditions plus φ ≥ 0 are also necessary.

Take E_n = {i ∈ E : q_i < n}. By Theorem 0.5, we have the following result.

Corollary 0.7. If there exist a function φ with φ_i ≥ q_i, i ∈ E, and a constant c ∈ ℝ such that (0.4) holds, then the Q-process is unique.

To see that these results are practical, consider Schlögl's model (Example 0.3): we can either take φ(x) = c[1 + (Σ_{u∈S} x(u))²] and apply Corollary 0.7, or take φ(x) = c[1 + Σ_u x(u)] and apply Theorem 0.5 with E_n = {x : Σ_u x(u) ≤ n}, where c is a constant chosen by a simple computation. For Example 0.4, simply take φ(A) = c[1 + |A|] for a suitable c and apply Theorem 0.5 with E_n = {A : |A| < n}. For instance, for Schlögl's model, when Σ_u x(u) is large, (0.4) should hold because the order of the death rate is higher than that of the birth rate. On the other hand, when Σ_u x(u) is bounded, we can choose c large enough so that (0.4) also holds.

Next, we consider a typical case. Let E = {0, 1, 2, ...} = Z₊. Suppose that the solution (u_i) to the equation

$$(\lambda + q_i)\,u_i = \sum_{j \ne i} q_{ij}\,u_j, \qquad i \in E \tag{0.5}$$

is non-decreasing: u_i ↑ as i ↑. Then, from Criterion 0.2, it is easy to see that the process is unique iff lim_{i→∞} u_i = ∞. On the other hand, if we take E_n = {i ∈ Z₊ : i < n}, c = λ and φ_i = u_i, i ∈ E, then the hypotheses of Theorem 0.5 reduce to the condition lim_{i→∞} φ_i = lim_{i→∞} u_i = ∞, which is the same as above. Thus, the conditions of Theorem 0.5 are not only sufficient but also necessary in this particular case. This remark plus the next result gives us another way of justifying the power of Theorem 0.5.

Corollary 0.8. For a single birth Q-matrix on E = Z₊, i.e., one with q_{i,i+1} > 0 and q_{ij} = 0 for all j > i + 1 (but with no restriction on the death rates), the Q-process is unique iff Σ_{k=0}^∞ m_k = ∞, where the quantities m_k are determined explicitly by the rates (cf. Theorem 3.16).


The key to prove this corollary is the non-decreasing property, mentioned above, of the solution to (0.5) (cf. Theorem 3.16).

Now, we go to the next topic: recurrence. It is well known that for a regular Q, the corresponding Markov chain is recurrent iff its embedded chain is; see Chung (1967). Here, we would like to mention a more precise formula. Note that for a given Q-matrix Q = (q_ij), we always have the minimal Q-process (P_ij^min(t)), which can be obtained by the following procedure. Let P_ij^(0)(t) = 0 and

$$P^{(n+1)}_{ij}(t) = \delta_{ij}\,e^{-q_i t} + \int_0^t e^{-q_i s} \sum_{k \ne i} q_{ik}\,P^{(n)}_{kj}(t - s)\,\mathrm{d}s;$$

then, for fixed i, j ∈ E and t ≥ 0, P_ij^(n)(t) ↑ P_ij^min(t) as n ↑ ∞.

Definition 0.9. A non-negative function (h_i) on E is called compact if, for every d ∈ ℝ₊, the set {i ∈ E : h_i < d} is finite.

Theorem 0.10. An irreducible Q-matrix Q = (q_ij) is regular with recurrent P(t) iff the inequality

$$\sum_{j \ne i} q_{ij}\,y_j \le q_i\,y_i, \qquad i \notin H$$

has a compact solution (y_i) for some finite H ≠ ∅.

The last topic is the positive recurrence.

Theorem 0.11. Given an irreducible Q-matrix Q = (q_ij), suppose that there exist a compact function h and constants K ≥ 0, γ > 0 (or K = γ = 0) such that

$$\sum_{j} q_{ij}\,(h_j - h_i) \le K - \gamma\,h_i, \qquad i \in E. \tag{0.6}$$

Then the Markov chain is positive recurrent (indeed, exponentially ergodic) and hence has a unique stationary distribution.


To apply this theorem to Schlögl's model (Example 0.3), take h(x) = Σ_{u∈S} x(u) and an arbitrary γ > 0. Then one can find a K < ∞ such that the above inequality holds. Hence, Schlögl's model is always ergodic in finite dimensions. As for Example 0.4, since the empty set ∅ is an absorbing state, the answer is obvious. Finally, consider the linear growth model:

$$q_{i, i+1} = \lambda i + \delta, \qquad q_{i, i-1} = \mu i, \qquad \lambda, \mu, \delta > 0, \qquad q_{ij} = 0 \ \text{ for } j \ne i \pm 1, \quad i, j \in \mathbb{Z}_+.$$

It is well known that this model is positive recurrent if and only if λ < μ. Recall that this conclusion is usually obtained by studying three series, to show successively the regularity of Q, the recurrence and finally the positive recurrence of the chain (cf. Example 4.56 for details). However, it is obvious that Theorem 0.11 is applicable if and only if λ < μ, with the natural choice h_i = i (i ∈ Z₊). Thus, Theorem 0.11 is sharp for this model, and its advantage should now be clear.
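For h_i = i, the left-hand side of (0.6) for the linear growth model is (λi + δ)·1 + μi·(−1) = (λ − μ)i + δ, so (0.6) holds with K = δ and any γ ≤ μ − λ precisely when λ < μ. A minimal numerical check of this computation (parameter values are arbitrary):

```python
def drift(i, lam, mu, delta):
    """sum_j q_ij (h_j - h_i) for h_i = i in the linear growth model."""
    return (lam * i + delta) - mu * i   # = (lam - mu) * i + delta

lam, mu, delta, gamma = 1.0, 2.0, 0.5, 0.5   # illustrative values, lam < mu
# With gamma <= mu - lam, drift(i) + gamma*i <= delta for every i, so (0.6)
# holds with K = delta and the chain is exponentially ergodic.
K = max(drift(i, lam, mu, delta) + gamma * i for i in range(10000))
assert K == delta

# With lam >= mu the drift grows linearly in i, and no K, gamma > 0 can work.
assert drift(100, 2.0, 1.0, 0.5) > 0
```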

Roughly speaking, the three problems discussed above constitute the subjects of the subsequent four chapters. Actually, we deal with the general case where the Q-matrix may not be conservative, and furthermore the state space is allowed to be general too. Certainly, some results for the general state space are natural generalizations of those for the discrete state space. However, it should be pointed out that the generalization is non-trivial in many situations — for instance, the differentiability of the transition functions (see Section 1.3). Another case is the following: as we will see in Chapter 4, the ergodic theory for Markov chains is now quite complete, but at the moment our knowledge of the theory for general jump processes is still incomplete.

For a general totally stable Q-matrix (i.e., q_i < ∞ for all i), the uniqueness problem had been open for a long period and was eventually solved by Hou (1974) for Markov chains and by Chen and Zheng (1982) for the general setup. The general uniqueness criterion is given in Chapter 3.

0.2 Probability Metrics and Coupling Methods

The coupling technique has a long history and now has many applications; it is one of the basic tools used in this book. In this section, we discuss the relation between couplings and probability metrics, and introduce some coupling methods for Markov chains. Some preliminary applications are also presented.

Definition 0.12. Let P_k be a probability measure on a measurable space (E_k, ℰ_k), k = 1, 2. A probability measure P̃ on (E₁ × E₂, ℰ₁ × ℰ₂) is called a coupling of P₁ and P₂ if it has the following marginality:

$$\widetilde P(B_1 \times E_2) = P_1(B_1), \qquad \widetilde P(E_1 \times B_2) = P_2(B_2), \qquad B_k \in \mathscr{E}_k, \; k = 1, 2.$$


Similarly, for two given processes (X_t^k)_{t≥0} valued in (E_k, ℰ_k) with distributions P^k, k = 1, 2, a process (X̃_t)_{t≥0} valued in (E₁ × E₂, ℰ₁ × ℰ₂) with distribution P̃ is called a coupling of (X_t^1) and (X_t^2) if the first (resp. second) marginal process of (X̃_t) has the same distribution as (X_t^1) (resp. (X_t^2)).

From our point of view, the coupling technique is a natural way to obtain upper estimates for probability metrics, and for different metrics the effective couplings can be different. For this reason, we begin our study by recalling some results on probability metrics, and then come back to coupling methods.

Let (E, ρ, ℰ) be a complete separable metric space with metric ρ and Borel σ-algebra ℰ. Given a sequence of probability measures P_n on (E, ℰ), we say that P_n converges weakly to P if ∫ f dP_n → ∫ f dP for all bounded continuous functions f. For this convergence, it is well known that we have the Lévy–Prohorov metric

$$w(P_1, P_2) = \inf\big\{\delta > 0 : P_1(A) \le P_2(A^{\delta}) + \delta \text{ and } P_2(A) \le P_1(A^{\delta}) + \delta \text{ for all closed } A \in \mathscr{E}\big\},$$

where A^δ = {x : ρ(x, y) < δ for some y ∈ A}. Now, we are going to introduce a probability metric W_p (p ≥ 1) which is still less popular.

As we know, in probability theory we usually consider several kinds of convergence for real random variables on a probability space: almost everywhere convergence, convergence in probability, L^p-convergence and convergence in distribution. By a well-known representation argument, weak convergence P_n → P can be realized by random variables ξ_n and ξ on a single probability space (Ω, ℱ, P) such that ξ_n ∼ P_n, ξ ∼ P and ξ_n → ξ almost surely, where ξ ∼ P means that ξ has distribution P. Thus, all the kinds of convergence above are intrinsically the same, except the L^p-convergence. In other words, if we want to find another intrinsic metric on the space of all probability measures, we should consider an analogue of the L^p-convergence.

Let ξ₁, ξ₂ : (Ω, ℱ, P) → (E, ρ, ℰ). The usual L^p-metric is defined by

$$\|\xi_1 - \xi_2\|_p = \big(\mathbb{E}\,\rho(\xi_1, \xi_2)^p\big)^{1/p}, \qquad p \ge 1.$$

Suppose that ξ_i ∼ P_i, i = 1, 2, and (ξ₁, ξ₂) ∼ P̃. Then

$$\|\xi_1 - \xi_2\|_p = \Big(\int \rho(x_1, x_2)^p\,\widetilde P(\mathrm{d}x_1, \mathrm{d}x_2)\Big)^{1/p}.$$

Certainly, P̃ is a coupling of P₁ and P₂. However, if we ignore the reference frame (Ω, ℱ, P), then there are many choices of P̃ for given P₁ and P₂. Thus, the intrinsic metric should be defined as follows:

$$W_p(P_1, P_2) = \inf_{\widetilde P} \Big(\int \rho(x_1, x_2)^p\,\widetilde P(\mathrm{d}x_1, \mathrm{d}x_2)\Big)^{1/p},$$

where the infimum is taken over all couplings P̃ of P₁ and P₂.

Definition 0.13. The metric defined above is called the minimum L^p-distance, or the probability L^p-metric, or the W_p-metric. Briefly, we write W = W₁.

In the literature, this metric has several different names: Kantorovich metric, Wasserstein metric, Hutchinson metric and so on. Here, we choose the intrinsic name to avoid historical confusion. In this book, we deal only with the metrics w, W = W₁, W₂ and the total variation

$$\|P_1 - P_2\|_{\mathrm{Var}} = 2 \sup_{A \in \mathscr{E}} |P_1(A) - P_2(A)|.$$

It is interesting to note that if we use the discrete metric

$$d(x, y) = \begin{cases} 0 & \text{if } x = y, \\ 1 & \text{if } x \ne y, \end{cases}$$

then the total variation distance is again a minimum L¹-metric, now with respect to the metric d:

Theorem 0.14. $V(P_1, P_2) := \inf_{\widetilde P} \int d(x_1, x_2)\,\widetilde P(\mathrm{d}x_1, \mathrm{d}x_2) = \tfrac{1}{2}\,\|P_1 - P_2\|_{\mathrm{Var}}$.
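For discrete distributions, Theorem 0.14 can be verified by exhibiting the optimal coupling: putting mass min(p(x), q(x)) on the diagonal leaves exactly ½‖P₁ − P₂‖_Var of mass off it. A small sketch, not from the book:

```python
def tv_distance(p, q):
    """V(P1, P2) = (1/2) sum_x |p(x) - q(x)| for discrete laws, using the
    book's convention ||.||_Var = 2 sup_A |P1(A) - P2(A)|."""
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys) / 2

def best_coupling_mismatch(p, q):
    """P(X != Y) under the coupling that puts mass min(p(x), q(x)) on the
    diagonal -- the optimal coupling for the discrete metric d."""
    keys = set(p) | set(q)
    return 1.0 - sum(min(p.get(k, 0.0), q.get(k, 0.0)) for k in keys)

p = {0: 0.5, 1: 0.5}
q = {0: 0.2, 1: 0.3, 2: 0.5}
assert abs(tv_distance(p, q) - best_coupling_mismatch(p, q)) < 1e-12
```

Any other coupling can only increase P(X ≠ Y), which is exactly the infimum in Theorem 0.14.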


As we mentioned before, W is usually stronger than w. More precisely, we have, for instance, the following result.

Theorem 0.17 (Dowson and Landau (1982), Givens and Shortt (1984), Olkin and Pukelsheim (1982)). Let P_k be the normal distribution on (ℝ^d, ℬ(ℝ^d)) (d ≥ 1) with mean value m_k and covariance matrix M_k, k = 1, 2. Then

$$W_2(P_1, P_2)^2 = |m_1 - m_2|^2 + \operatorname{tr}\!\Big(M_1 + M_2 - 2\big(M_1^{1/2} M_2 M_1^{1/2}\big)^{1/2}\Big),$$

where tr M denotes the trace of M.

For general P_k, not necessarily normal distributions, a characterization of W₂(P₁, P₂) was obtained by Rüschendorf and Rachev (1990).
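In dimension d = 1 the formula of Theorem 0.17 reduces to W₂(P₁, P₂)² = (m₁ − m₂)² + (σ₁ − σ₂)², attained by the monotone coupling X = m₁ + σ₁Z, Y = m₂ + σ₂Z with Z standard normal, since E[(X − Y)²] = (m₁ − m₂)² + (σ₁ − σ₂)². A sketch (the function name is illustrative):

```python
import math

def w2_gaussian_1d(m1, s1, m2, s2):
    """W2 between N(m1, s1^2) and N(m2, s2^2): the d = 1 case of Theorem 0.17,
    where the trace term collapses to (s1 - s2)^2."""
    return math.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2)

# The monotone coupling attains the infimum; the independent coupling only
# yields the weaker bound (m1 - m2)^2 + s1^2 + s2^2 on W2^2.
assert abs(w2_gaussian_1d(0.0, 1.0, 3.0, 2.0) - math.sqrt(10.0)) < 1e-12
assert w2_gaussian_1d(0.0, 1.0, 3.0, 2.0) ** 2 <= 9.0 + 1.0 + 4.0
```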

Fortunately, in most cases what we need is only certain upper estimates. For instance, to prove that W_p(P_n, P) → 0 as n → ∞, we need only find an upper estimate of W_p(P_n, P) which goes to zero as n → ∞. Noting that

$$W_p(P_1, P_2)^p \le \int \rho(x_1, x_2)^p\,\widetilde P(\mathrm{d}x_1, \mathrm{d}x_2),$$

any coupling measure P̃ gives us an upper estimate. Thus, our main task is to choose a coupling making the right-hand side as small as possible.

We now study coupling methods for Markov chains. Suppose that we are given two Q-processes (P^(k)_{i_k j_k}(t)) with regular Q-matrices (q^(k)_{i_k j_k}) on state spaces E_k, k = 1, 2, respectively. We want to find a coupling Q-process P̃(t; i₁, i₂; j₁, j₂) with Q-matrix (q̃(i₁, i₂; j₁, j₂)) on the product state space E₁ × E₂ having the marginality

$$\sum_{j_2} \widetilde P(t; i_1, i_2; j_1, j_2) = P^{(1)}_{i_1 j_1}(t), \qquad \sum_{j_1} \widetilde P(t; i_1, i_2; j_1, j_2) = P^{(2)}_{i_2 j_2}(t). \tag{0.8}$$

Define

$$\Omega_1 f(i_1, i_2) = \sum_{j_1} q^{(1)}_{i_1 j_1} \big[f(j_1, i_2) - f(i_1, i_2)\big],$$

where f is a bounded function on E₁ × E₂ and the operator acts on the first coordinate. Similarly, we can define Ω₂ and Ω̃, corresponding to (q^(2)_{i₂ j₂}) and (q̃(i₁, i₂; j₁, j₂)), respectively. Regarding a function f on E₁ (resp. on E₂) as a bivariate function on E₁ × E₂, it is not difficult to prove that condition (0.8) implies the following marginality for the operators:

$$\widetilde\Omega f(i_1, i_2) = \Omega_1 f(i_1) \ \text{ for } f \text{ on } E_1, \qquad \widetilde\Omega f(i_1, i_2) = \Omega_2 f(i_2) \ \text{ for } f \text{ on } E_2. \tag{0.9}$$

Any Ω̃ satisfying (0.9) is called a coupling operator.

Before going further, let us introduce some examples of coupling operators. In the following examples, f is a bounded function on E₁ × E₂, i₁ ∈ E₁ and i₂ ∈ E₂.

Example 0.18 (Independent coupling).

$$\widetilde\Omega_{\mathrm{ind}} f(i_1, i_2) = \sum_{j_1} q^{(1)}_{i_1 j_1}\big[f(j_1, i_2) - f(i_1, i_2)\big] + \sum_{j_2} q^{(2)}_{i_2 j_2}\big[f(i_1, j_2) - f(i_1, i_2)\big].$$

This trivial example already shows that a coupling operator always exists.

Example 0.19 (Classical coupling). Take E₁ = E₂ = E and let the two marginal Q-matrices be the same, (q_ij). Set

$$\widetilde\Omega_{\mathrm{c}} f(i_1, i_2) = \begin{cases} \widetilde\Omega_{\mathrm{ind}} f(i_1, i_2) & \text{if } (i_1, i_2) \notin \Delta, \\ \Omega g(i_1) & \text{if } (i_1, i_2) \in \Delta, \end{cases}$$

where Δ = {(i₁, i₂) ∈ E² : i₁ = i₂}, g(k) = f(k, k) and Ω̃_ind is as defined above. That is, the two components move independently until they meet, and move together afterwards.


Example 0.20 (Basic coupling). Take E₁ = E₂ = E as in the previous example and set

$$\widetilde\Omega_{\mathrm{b}} f(i_1, i_2) = \sum_{j} \big(q_{i_1 j} - q_{i_2 j}\big)^{+}\big[f(j, i_2) - f(i_1, i_2)\big] + \sum_{j} \big(q_{i_2 j} - q_{i_1 j}\big)^{+}\big[f(i_1, j) - f(i_1, i_2)\big] + \sum_{j} \big(q_{i_1 j} \wedge q_{i_2 j}\big)\big[f(j, j) - f(i_1, i_2)\big],$$

where a ∧ b = min{a, b} and a⁺ = max{a, 0}.

Example 0.21 (Coupling of marching soldiers). Take E = {0, 1, 2, ...} and set

$$\widetilde\Omega_{\mathrm{m}} f(i_1, i_2) = \sum_{k} \big(q_{i_1, i_1+k} \wedge q_{i_2, i_2+k}\big)\big[f(i_1 + k, i_2 + k) - f(i_1, i_2)\big] + \sum_{k} \big(q_{i_1, i_1+k} - q_{i_2, i_2+k}\big)^{+}\big[f(i_1 + k, i_2) - f(i_1, i_2)\big] + \sum_{k} \big(q_{i_2, i_2+k} - q_{i_1, i_1+k}\big)^{+}\big[f(i_1, i_2 + k) - f(i_1, i_2)\big];$$

here we have used the convention q_ij = 0 for all i ∈ E and j ∉ E. Under this coupling, the two components jump by the same amount whenever possible.

Let us now consider a birth-death process with regular Q-matrix q_{i,i+1} = b_i, q_{i,i−1} = a_i and q_ij = 0 for other j ≠ i. Then, for two copies of the process starting from i₁ and i₂ respectively, we have the following coupling.

Example 0.22 (Coupling by reflection). For i₁ < i₂, we take the coupling under which the two components move toward each other: a jump up of the first component is matched with a jump down of the second, and vice versa, until the components meet. By exchanging i₁ and i₂, we get the expression of Ω̃ for the case i₁ > i₂.


Hopefully, we have introduced enough examples to show that there are many choices of coupling operators; indeed, there are infinitely many in the case where E is infinite. For instance, for every Γ ⊂ E²,

$$\widetilde\Omega f(i_1, i_2) = 1_{\Gamma}(i_1, i_2)\,\widetilde\Omega_{\mathrm{c}} f(i_1, i_2) + 1_{\Gamma^c}(i_1, i_2)\,\widetilde\Omega_{\mathrm{b}} f(i_1, i_2),$$

where Ω̃_c and Ω̃_b denote the classical and basic coupling operators, is a coupling operator. Now, to use the coupling technique, a basic problem we should study is the regularity of coupling operators. Note that the dimension of a coupling process is the sum of the dimensions of its marginals; hence a coupling process is usually more difficult to handle than the marginals. However, for the above problem we do have a complete answer.

Theorem 0.23. If the marginals are regular jump processes, then so is every coupling Markov process. Conversely, if a Markovian coupling is a regular jump process, then so are its marginals. Furthermore, in the regular case, the marginality conditions (0.8) and (0.9) are equivalent.

A coupling is called successful if the coupling time T := inf{t ≥ 0 : X₁(t) = X₂(t)} satisfies P̃_{i₁,i₂}[T < ∞] = 1 for all i₁, i₂, and the two components move together after time T. Suppose that a successful coupling exists; then

$$\|P(t, i_1, \cdot) - P(t, i_2, \cdot)\|_{\mathrm{Var}} \le 2\,\widetilde{\mathbb{P}}_{i_1, i_2}[T > t] \to 0 \qquad \text{as } t \to \infty.$$

Furthermore, if the process has a stationary distribution π, then

$$\|P(t, i, \cdot) - \pi\|_{\mathrm{Var}} \le \sum_{j} \pi_j\,\|P(t, i, \cdot) - P(t, j, \cdot)\|_{\mathrm{Var}} \to 0,$$

and so the process is ergodic.
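The bound ‖P(t, i₁, ·) − P(t, i₂, ·)‖_Var ≤ 2P̃_{i₁,i₂}[T > t] can be watched empirically: run two copies of an ergodic birth-death chain independently until they meet, which is the classical coupling of Example 0.19. The rates below are illustrative, not from the book.

```python
import random

def coupling_time(i1, i2, birth, death, t_max, rng):
    """Classical coupling: run two independent copies of a birth-death chain
    (birth rate birth(i), death rate death(i)) until they first meet.
    Returns the meeting time T, or None if they have not met by t_max."""
    t, x, y = 0.0, i1, i2
    while x != y:
        rates = [birth(x), death(x), birth(y), death(y)]
        total = sum(rates)
        t += rng.expovariate(total)
        if t > t_max:
            return None
        r, acc = rng.random() * total, 0.0
        for k, rate in enumerate(rates):
            if rate == 0.0:
                continue
            acc += rate
            if r <= acc:
                x += (1 if k == 0 else -1 if k == 1 else 0)
                y += (1 if k == 2 else -1 if k == 3 else 0)
                break
    return t

rng = random.Random(0)
times = [coupling_time(0, 6, lambda i: 1.0, lambda i: 2.0 * i, 1e4, rng)
         for _ in range(20)]
assert all(T is not None and T > 0 for T in times)
```

The empirical tail of T then upper-bounds the total variation distance between the two time-t distributions, as in the display above.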

As another application of the coupling technique, we discuss the monotonicity for Markov chains.


Definition 0.24 Take E = {0, 1, 2, ...} and let (X_k(t))_{t≥0} (k = 1, 2) be two copies of a Markov chain (X(t))_{t≥0} with different starting points. If X1(0) ≤ X2(0) implies that X1(t) ≤ X2(t) in distribution for all t ≥ 0, then we say that the chain is monotone.

One way to prove monotonicity is to use the coupling method. For example, applying the basic coupling to a Markov chain with regular Q-matrix Q = (q_ij) on ℤ₊, we find that the condition (0.10) is sufficient for the monotonicity of the Markov chain. Of course, if we use a different coupling, we will find a different sufficient condition for the monotonicity. From this point of view, it is believable that condition (0.10) is not necessary for the monotonicity. A complete solution to the monotonicity problem for general jump processes, as well as the other topics discussed in this section, is treated in Chapter 5.
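The role of the basic coupling here can be seen in a small simulation. The sketch below (illustrative rates: constant birth rate, larger death rate, so the chain is ergodic) runs two coupled copies of a birth–death chain: off the diagonal they jump independently, on the diagonal they move together. Since jumps are nearest-neighbour, the two copies cannot cross without first meeting, which is the order preservation behind the monotonicity.

```python
import random

random.seed(7)
BIRTH = 1.0                              # illustrative birth rate
death = lambda i: 2.0 if i > 0 else 0.0  # illustrative death rate

x, y = 0, 3                          # ordered starting points, x <= y
ordered, coupled_at = True, None
for step in range(2000):
    if x == y:                       # on the diagonal: move together
        moves = [((+1, +1), BIRTH), ((-1, -1), death(x))]
    else:                            # off the diagonal: independent jumps
        moves = [((+1, 0), BIRTH), ((-1, 0), death(x)),
                 ((0, +1), BIRTH), ((0, -1), death(y))]
    total = sum(r for _, r in moves)
    u = random.random() * total      # pick the move proportionally to its rate
    for (dx, dy), r in moves:
        if u < r:
            x, y = x + dx, y + dy
            break
        u -= r
    ordered = ordered and (y >= x)
    if coupled_at is None and x == y:
        coupled_at = step
assert ordered                       # the order x <= y is never violated
assert coupled_at is not None        # the two copies eventually meet
```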

The above two sections are based on Chen (1989a) and Chen (1991d), respectively.

0.3 Reversible Markov Chains

Definition 0.25 Let (X_t)_{t≥0} be a Markov process defined on (Ω, ℱ, P) with countable state space E. The process is called reversible if for any n ≥ 1, any 0 ≤ t1 < ··· < t_n, and any i1, ..., i_n ∈ E,

P[X_{t1} = i1, ..., X_{t_n} = i_n] = P[X_{t1} = i_n, ..., X_{t_n} = i1].  (0.11)

Clearly, a reversible Markov chain (X_t) must be stationary; that is, π_i := P[X_t = i] is independent of t ≥ 0. Due to the Markov property, (0.11) is equivalent to

π_i P_ij(t) = π_j P_ji(t),  i, j ∈ E, t ≥ 0.  (0.12)


This implies that

π_i q_ij = π_j q_ji,  i, j ∈ E.  (0.13)

Since it is easy to obtain Q = (q_ij) in practice, but not (P_ij(t)), we should start our study from a given Q-matrix. Thus, we are now in the same position as at the beginning of Section 0.1. For a given Q-matrix Q = (q_ij) which is reversible with respect to a probability measure (π_i) in the sense of (0.13), we would like to know when there is one, and when there is precisely one, Q-process (P_ij(t)) such that (0.12) holds.

To state our main results, let us relax the probability measure (π_i) to an arbitrary non-trivial measure (π_i). We then call Q = (q_ij) (resp. (P_ij(t))) symmetrizable with respect to (π_i) if (0.13) (resp. (0.12)) holds. Finally, in this section, the only assumption on the Q-matrix (q_ij) is total stability: q_i < ∞ for all i ∈ E.

Theorem 0.26 The minimal Q-process (P_ij^min(t)) is reversible (resp. symmetrizable) with respect to (π_i) iff so is its Q-matrix.
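Theorem 0.26 can be illustrated numerically on a finite birth–death chain, where the minimal process is just P(t) = e^{tQ}. The rates below are illustrative; the check confirms that detailed balance (0.13) for the Q-matrix is inherited by the transition function in the sense of (0.12).

```python
import math

def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def expm(Q, t, terms=120):
    # plain Taylor series for e^{tQ}; adequate for tiny matrices
    n = len(Q)
    P = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = mat_mul(term, [[t * q / k for q in row] for row in Q])
        for i in range(n):
            for j in range(n):
                P[i][j] += term[i][j]
    return P

N = 5
b = [1.5] * N                        # illustrative birth rates b_0..b_{N-1}
a = [1.0] * N                        # illustrative death rates a_1..a_N
Q = [[0.0] * (N + 1) for _ in range(N + 1)]
for i in range(N):
    Q[i][i + 1], Q[i + 1][i] = b[i], a[i]
for i in range(N + 1):
    Q[i][i] = -sum(Q[i][j] for j in range(N + 1) if j != i)

pi = [1.0]                           # reversible measure: pi_{i+1}/pi_i = b_i/a_{i+1}
for i in range(N):
    pi.append(pi[-1] * b[i] / a[i])
Z = sum(pi); pi = [p / Z for p in pi]

for i in range(N + 1):               # detailed balance (0.13) for the Q-matrix...
    for j in range(N + 1):
        assert abs(pi[i] * Q[i][j] - pi[j] * Q[j][i]) < 1e-12

P = expm(Q, 1.3)                     # ...is inherited by P(t), i.e. (0.12)
for i in range(N + 1):
    for j in range(N + 1):
        assert abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < 1e-8
```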

Theorem 0.27 With respect to a probability measure (π_i), the reversible Q-process is unique iff the following conditions hold.

(1) (q_ij) is reversible with respect to (π_i).
(2) Equation (0.2) has only the trivial solution.
(3) Σ_i π_i (q_i − Σ_{j≠i} q_ij) < ∞.

For general (π_i), we have the following result.

Theorem 0.28 With respect to a measure (π_i), there exists precisely one symmetrizable Q-process if the following conditions hold.

(1) Q = (q_ij) is symmetrizable with respect to (π_i).
(2) Σ_i π_i (q_i − Σ_{j≠i} q_ij) < ∞ or inf_i Σ_j P_ij^min(λ) > 0.
(3) The only summable solution to Equation (0.2) is zero.

We guess that condition (2) is still stronger than necessary. Thus, a complete criterion for the uniqueness of the symmetrizable Q-process remains open. Besides, even though we know a great deal about the general Q-process (cf. Section 3.4), we have only a partial solution to the following problem.

Open Problem 0.29 What is the uniqueness criterion for the honest reversible (resp. symmetrizable) Q-process? Here, "honest" means that Σ_j P_ij(t) = 1 for all i ∈ E and t ≥ 0.



In the study of symmetrizable Q-processes, a new question arises: how can we decide whether a given Q-matrix is symmetrizable with respect to some measure (π_i)? As a nice exercise, one may try to answer this question for the Schlögl model. In general, the question is answered by using an analogue of the classical field theory in analysis. It is interesting that the same idea can also be used to study the recurrence of symmetrizable Markov chains (see Chapter 7 for details).

0.4 Large Deviations and Spectral Gap

Markov chains form a nice class of stochastic processes, not only because of their many applications but also because of their concrete behavior and simplicity. In the regular case, the paths of a Markov chain are simply step functions almost surely. We can even see the jump law: starting from a state i, the chain stays at i for a while according to the exponential distribution with parameter q_i; then the chain jumps to j (≠ i) according to the distribution q_ij/q_i (provided q_i > 0). For this reason, a large part of the theory of stochastic processes began from Markov chains. Conversely, Markov chains can be used to test the power of a general theory of stochastic processes.

Let us discuss the two topics expressed in the title of this section. In the Donsker–Varadhan large deviation theory (an introduction to the theory is presented in Section 8.1), we are interested in the entropy (= rate function):

I(μ) = − inf_{u ∈ 𝒟₊(L)} ∫ (Lu/u) dμ,  (0.14)

and the estimates

upper estimate: lim sup_{t→∞} (1/t) log Q_{t,i}(C) ≤ − inf_{μ∈C} I(μ), C closed,
lower estimate: lim inf_{t→∞} (1/t) log Q_{t,i}(G) ≥ − inf_{μ∈G} I(μ), G open.

We should explain the notation used here. Let (X_t)_{t≥0} be a Markov chain with transition probability P(t) = (P_ij(t)) and let P_i be the probability measure under which the chain starts from i ∈ E. Next, let 𝒫(E) be the set of probability measures on E, endowed with the weak topology. Set

L_t = (1/t) ∫₀^t δ_{X_s} ds

and Q_{t,i} = P_i ∘ L_t^{−1}. Considering P(t) as an operator on bℰ, the set of bounded measurable functions with the uniform norm, it induces an infinitesimal generator


L with domain 𝒟(L). Let 𝒟₊(L) be the set of strictly positive functions in 𝒟(L).

From the viewpoint of Markov chains, the entropy given by (0.14) is not satisfactory, since 𝒟(L) is quite poor; even the indicator 1_{{i}} (i ∈ E) is usually not in 𝒟(L). However, we have

Theorem 0.30 Given a regular Q-matrix Q = (q_ij),

(1) if μ ∈ 𝒫(E) satisfies Σ_i μ_i q_i < ∞, then the infimum in (0.14) may be taken over 𝒞, where 𝒞 = 𝒟₊(L) or any one of the following sets:

ℰ₊ = {f : f ≥ ε > 0 for some ε > 0},
ℰ⁰ = {f : 0 < f < ∞},
bℰ₊ = bℰ ∩ ℰ₊,  bℰ⁰ = bℰ ∩ ℰ⁰.

(2) If (q_ij) is reversible with respect to some π ∈ 𝒫(E), then for every μ ∈ 𝒫(E), we have an explicit expression as follows.
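In the reversible case, the explicit expression referred to in part (2) is, in its standard form (an assumption here, since the display is not reproduced in this copy), the Dirichlet form evaluated at the square root of dμ/dπ. A finite birth–death sketch with illustrative rates checks this expression against the variational definition (0.14):

```python
import math, random

random.seed(1)
N = 4
b = [2.0, 1.0, 1.5, 0.5]             # illustrative birth rates
a = [1.0, 2.0, 1.0, 1.5]             # illustrative death rates
Q = [[0.0] * (N + 1) for _ in range(N + 1)]
for i in range(N):
    Q[i][i + 1], Q[i + 1][i] = b[i], a[i]
for i in range(N + 1):
    Q[i][i] = -sum(Q[i][j] for j in range(N + 1) if j != i)
pi = [1.0]
for i in range(N):
    pi.append(pi[-1] * b[i] / a[i])
Z = sum(pi); pi = [p / Z for p in pi]

def rate_fn(mu):
    # I(mu) = Dirichlet form of sqrt(d mu / d pi) in the reversible case
    f = [math.sqrt(mu[i] / pi[i]) for i in range(N + 1)]
    return 0.5 * sum(pi[i] * Q[i][j] * (f[i] - f[j]) ** 2
                     for i in range(N + 1) for j in range(N + 1) if i != j)

assert rate_fn(pi) < 1e-12           # the entropy vanishes exactly at pi
mu = [0.4, 0.3, 0.1, 0.1, 0.1]
I_mu = rate_fn(mu)
assert I_mu > 0                      # and is positive away from pi
# consistency with (0.14): -sum_i mu_i (Qu)_i / u_i <= I(mu) for every u > 0
for _ in range(200):
    u = [random.uniform(0.2, 5.0) for _ in range(N + 1)]
    val = -sum(mu[i] * sum(Q[i][j] * u[j] for j in range(N + 1)) / u[i]
               for i in range(N + 1))
    assert val <= I_mu + 1e-9
```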

This theorem is proved in Chapter 8, where some upper estimates are also studied. Roughly speaking, the large deviations say that the exponential convergence rate of Q_{t,i}(C) is described by the entropy − inf_{μ∈C} I(μ). For reversible Markov processes, we have a different way to look at the exponential convergence rate: ‖P(t)f − π(f)‖ ≤ ‖f − π(f)‖ exp[−εt], where ‖·‖ is the norm in L²(π) and π(f) is the mean of f with respect to π. Let σ denote the largest such value of ε; the constant σ is the rate we are looking for. As usual, the convergence rate is related to a spectral gap. Let L denote the generator with domain 𝒟(L) induced by P(t) on L²(π), and let gap(L) denote the infimum of the spectrum of −L restricted to the orthogonal complement of the constant function 1. Then, we have the following result.

Theorem 0.31 σ = gap(L).


For finite Markov chains with Q-matrix Q, gap(Q) is nothing but the first non-trivial eigenvalue of −Q. Estimating gap(Q) is a traditionally hard topic in mathematics; to compute gap(Q) explicitly, one usually has to stop once the order of Q is higher than five. Surprisingly, in a particular case we do have a complete solution to the problem, even for some infinite matrices. Consider the birth–death Q-matrix: q_{i,i+1} = b_i > 0 (i ≥ 0), q_{i,i−1} = a_i > 0 (i ≥ 1), and q_ij = 0 for all other i ≠ j. Suppose that the process is ergodic, so that we have a stationary distribution

Define

𝒲 = {(w_i)_{i≥0} : w_i is strictly increasing in i and Σ_i π_i w_i ≥ 0},
𝒲̃ = {(w_i)_{i≥0} : there exists k, 1 ≤ k < ∞, such that w_i = w_{i∧k}, w is strictly increasing in [0, k] and Σ_i π_i w_i = 0}.

Note that 𝒲̃ is simply a modification of 𝒲. Hence, only the two notations 𝒲 and I_i(w) are essential here.

Theorem 0.32 For the ergodic birth–death process as above, the following conclusions hold.

(1) Variational formula for the lower bound: gap(D) = sup_{w∈𝒲} inf_{i≥0} I_i(w)^{−1}.
(2) Variational formula for the upper bound: gap(D) = inf_{w∈𝒲̃} sup_{i≥0} I_i(w)^{−1}.
(3) Explicit bounds and explicit criterion: 2δ^{−1} ≥ gap(D) ≥ (4δ)^{−1}. In particular, gap(D) > 0 iff δ < ∞.

The study of the spectral gap is the aim of Chapter 9.
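For a small birth–death chain the gap can be computed by hand: for a 3 × 3 generator the two non-trivial eigenvalues of −Q solve a quadratic. The sketch below (illustrative rates) also checks the resulting L²(π) decay bound ‖P(t)f − π(f)‖ ≤ e^{−gap·t} ‖f − π(f)‖, which is how the gap governs exponential convergence:

```python
import math

def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def expm(Q, t, terms=120):
    # plain Taylor series for e^{tQ}; adequate for tiny matrices
    n = len(Q)
    P = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = mat_mul(term, [[t * q / k for q in row] for row in Q])
        for i in range(n):
            for j in range(n):
                P[i][j] += term[i][j]
    return P

b0, b1, a1, a2 = 2.0, 1.0, 1.0, 3.0  # illustrative birth/death rates
Q = [[-b0, b0, 0.0],
     [a1, -(a1 + b1), b1],
     [0.0, a2, -a2]]
pi = [1.0, b0 / a1, (b0 / a1) * (b1 / a2)]
Z = sum(pi); pi = [p / Z for p in pi]

# non-trivial eigenvalues of -Q solve m^2 - s m + p = 0, where
s = b0 + a1 + b1 + a2                # trace of -Q
p = b0 * b1 + b0 * a2 + a1 * a2      # sum of 2x2 principal minors of -Q
gap = (s - math.sqrt(s * s - 4 * p)) / 2

f = [1.0, -2.0, 0.5]                 # an arbitrary test function
mean = sum(pi[i] * f[i] for i in range(3))
norm0 = math.sqrt(sum(pi[i] * (f[i] - mean) ** 2 for i in range(3)))
for t in (0.3, 1.0, 2.0):
    P = expm(Q, t)
    Pf = [sum(P[i][j] * f[j] for j in range(3)) for i in range(3)]
    norm_t = math.sqrt(sum(pi[i] * (Pf[i] - mean) ** 2 for i in range(3)))
    assert norm_t <= math.exp(-gap * t) * norm0 + 1e-9
```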

0.5 Equilibrium Particle Systems

Let us start from the simplest case: a Q-process (P(t)) on the two-point space E = {−1, +1} with rates q_{+1,−1} = a > 0 and q_{−1,+1} = b > 0; otherwise, the model is trivial. Then


As the limit of P_ij(t) as t → ∞, we obtain the stationary distribution

π_{−1} = a/(a + b),  π_{+1} = b/(a + b).

In other words, the stationary distribution is unique.
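The two-state computation is easy to reproduce numerically. In the sketch below, state 0 stands for −1 and state 1 for +1, with leaving rates b and a respectively (the convention matching the stationary distribution displayed above); the rows of e^{tQ} approach π at speed e^{−(a+b)t}:

```python
import math

def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def expm(Q, t, terms=120):
    # plain Taylor series for e^{tQ}; adequate for tiny matrices
    n = len(Q)
    P = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = mat_mul(term, [[t * q / k for q in row] for row in Q])
        for i in range(n):
            for j in range(n):
                P[i][j] += term[i][j]
    return P

a, b = 1.0, 2.0                      # illustrative rates
Q = [[-b, b],                        # state 0 plays the role of spin -1
     [a, -a]]                        # state 1 plays the role of spin +1
pi = [a / (a + b), b / (a + b)]      # pi_{-1} = a/(a+b), pi_{+1} = b/(a+b)
P = expm(Q, 2.0)
for i in range(2):                   # rows of P(t) approach pi at speed e^{-(a+b)t}
    for j in range(2):
        assert abs(P[i][j] - pi[j]) <= math.exp(-(a + b) * 2.0) + 1e-9
for j in range(2):                   # pi is stationary: pi P(t) = pi
    assert abs(sum(pi[i] * P[i][j] for i in range(2)) - pi[j]) < 1e-10
```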

The above process is a model of a single particle with two states ±1. If we consider a finite number of particles, say N ∈ ℕ, N ≥ 2, then the state space becomes {−1, +1}^N. The system can still be described by a Q-process (its operator is given by (0.15) below, with ℤ^d replaced by the finite set), and the stationary distribution is still unique.

What will happen if we replace N by a countably infinite set? For instance, consider a particle system on the regular lattice ℤ^d. At each site u ∈ ℤ^d, there is a particle with two states ±1. Then the whole configurations constitute our state space {−1, +1}^{ℤ^d}, which is no longer countable. Hence, the system cannot be described by a Q-process. Now, we use c(u, x), the rate at which the particle at site u flips its spin when the system is in configuration x, instead of a Q-matrix. A particular choice of c(u, x) is

c(u, x) = exp[−β x(u) Σ_{v: |v−u|=1} x(v)],  β ≥ 0;

then we obtain the famous Ising model in statistical physics. For d = 2, there is a critical point β_c > 0: the stationary distribution is unique for β < β_c but not for β > β_c. For d ≥ 3, the picture is similar, with a critical point β_c^{(d)} > 0. But for d = 1, the stationary distribution is unique for every β. It should be clear now that the Ising model exhibits phase transitions which depend on the dimension d. Actually, this model has attracted a lot of attention in statistical physics, even in the 2-dimensional case (see, for instance, McCoy and Wu (1973)).
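The flip rate above (taken here in its standard form, an assumption since the display is garbled in this copy) is reversible with respect to the Gibbs measure: flipping the spin at u changes the energy H(x) = −Σ_{⟨u,v⟩} x(u)x(v) by ΔH = 2 x(u) Σ_{v∼u} x(v), and the rates satisfy c(u, x)/c(u, xᵘ) = e^{−βΔH}. A quick check on a small torus:

```python
import math, random

random.seed(3)
L = 4                                 # an L x L torus
beta = 0.7                            # illustrative inverse temperature

def nbr_sum(x, i, j):
    return (x[(i - 1) % L][j] + x[(i + 1) % L][j]
            + x[i][(j - 1) % L] + x[i][(j + 1) % L])

def c(x, i, j):
    # flip rate c(u, x) = exp(-beta * x(u) * sum of neighbouring spins)
    return math.exp(-beta * x[i][j] * nbr_sum(x, i, j))

def H(x):
    # Ising energy; each edge of the torus is counted exactly once
    return -sum(x[i][j] * (x[(i + 1) % L][j] + x[i][(j + 1) % L])
                for i in range(L) for j in range(L))

for _ in range(50):
    x = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]
    i, j = random.randrange(L), random.randrange(L)
    y = [row[:] for row in x]
    y[i][j] = -x[i][j]                # the flipped configuration x^u
    lhs = c(x, i, j) / c(y, i, j)
    rhs = math.exp(-beta * (H(y) - H(x)))
    assert abs(lhs - rhs) < 1e-9 * max(lhs, rhs)   # detailed balance
```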

The Ising model, as well as a fundamental part of the theory of random fields, including the typical methods (the Peierls method and the reflection positivity method) for studying the phase transitions, is presented in Part III. Based on the field theory, we introduce some simple criteria for the reversibility of spin processes and exclusion processes. Besides, two new developments


in the field are included. The first one is to use lattice fractals instead of the regular lattice; then we do obtain some interesting results. For example, the Ising model on the lattice Sierpinski gasket has no phase transitions in any dimension, but the model on the lattice Sierpinski carpet does exhibit phase transitions in any dimension. The other one is to use some groups as the spin space instead of {−1, +1}; the latter seems mainly suitable for the metallic phase transitions at low temperature. However, new progress on superconductivity has been made recently by using ceramics instead of ferromagnetics. This explains why we have to consider a more general spin space instead of {−1, +1}.

0.6 Non-equilibrium Particle Systems

The Ising model discussed in the last section belongs to equilibrium statistical physics. Having the knowledge about equilibrium systems in mind, it is natural to ask what we can do for non-equilibrium systems. A typical example is Schlögl's model (Example 0.3), with the finite set S replaced by the infinite one, S = ℤ^d. The formal generator can be written as follows:

where λ_1, ..., λ_4 and (p(u, v)) are the same as before, and e_u is the element of E = ℤ₊^{ℤ^d} having value 1 at u ∈ ℤ^d and 0 elsewhere. This model is a special reaction-diffusion process, studied in the last part of the book. It may be helpful for our readers to compare the Schlögl model with the Ising model.

(1) Clearly, the state space E = {−1, +1}^{ℤ^d} for the Ising model is compact, and so is 𝒫(E). Thus, the process has at least one stationary distribution. But for the Schlögl model, the state space E = ℤ₊^{ℤ^d} is neither compact nor locally compact.
(2) The Ising model is reversible; its local Gibbs distributions are explicit. But the Schlögl model has no such advantage, except in a special case.
(3) The generator of the Ising model is locally bounded, but this is not so for the Schlögl model.

These facts show that non-equilibrium particle systems are more difficult to handle than equilibrium systems.


To construct an infinite-dimensional Schlögl model, take a sequence {Λ_n} of finite subsets of ℤ^d so that Λ_n ↑ ℤ^d. Then, we have a sequence of Markov chains {P_n(t) : n ≥ 1}. The next step is to prove that {P_n(t, x, ·) : n ≥ 1} is a Cauchy sequence. Thus, we have to use a probability metric, say W, for instance:

W(P_n(t, x, ·), P_m(t, x, ·)) → 0 as m ≥ n → ∞.

From this line of the construction, we see a relation between Markov chains and interacting particle systems: locally, particle systems are Markov chains. This explains why the title of the book was chosen. The constructions and the uniqueness of the processes, as well as 15 concrete models, are presented in Chapter 13.
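The finite-volume chains P_n(t) entering the construction are ordinary Markov chains and can be simulated directly. The sketch below is a toy finite-volume reaction–diffusion system with placeholder single-site rates (the actual Schlögl rates λ_1, ..., λ_4 of Example 0.3 are not reproduced here) and nearest-neighbour diffusion on a small torus, advanced by the usual exponential-clock (Gillespie) scheme:

```python
import random

random.seed(11)
SITES = [0, 1, 2, 3]                 # a small volume Lambda (one-dimensional torus)
birth = lambda k: 1.0                # placeholder reaction rates; the true Schlogl
death = lambda k: 0.5 * k            # rates involve lambda_1..lambda_4
DIFF = 1.0                           # total diffusion rate per particle

state = {u: 3 for u in SITES}
t = 0.0
for _ in range(3000):
    events = []
    for u in SITES:
        k = state[u]
        events.append((birth(k), ('birth', u, None)))
        events.append((death(k), ('death', u, None)))
        for v in ((u - 1) % len(SITES), (u + 1) % len(SITES)):
            events.append((DIFF * k / 2, ('move', u, v)))
    total = sum(r for r, _ in events)
    t += random.expovariate(total)   # exponential waiting time
    pick = random.random() * total   # choose an event proportionally to its rate
    for r, (kind, u, v) in events:
        if pick < r:
            if kind == 'birth':
                state[u] += 1
            elif kind == 'death':
                state[u] -= 1
            else:
                state[u] -= 1; state[v] += 1
            break
        pick -= r
    assert all(k >= 0 for k in state.values())   # occupation numbers stay in Z_+
```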

It will be proved in Chapter 14 that reaction-diffusion processes often have at least one stationary distribution, and that sometimes they are ergodic; the reversible reaction-diffusion processes are always ergodic. For some special models, we will prove that there is more than one stationary distribution, that is, the processes exhibit phase transitions (Chapter 15).

Finally, we turn to the relation between the processes and partial differential equations. It is known that the generator of d-dimensional Brownian motion {B_t}_{t≥0} is the Laplacian Δ = Σ_k ∂²/∂x_k². Moreover, for suitable g, f(t, x) := E_x g(B_t) satisfies the linear heat equation ∂f/∂t = Δf. However, for the non-linear equation

∂u/∂t = Δu + V(u),  (0.16)

where V is a polynomial, there is no hope of finding a Markov process valued in ℝ^d with such a generator, since the generator of a Markov process must be linear. Nevertheless, under some hypotheses on the initial distribution of the process and on the initial function φ, it will be proved in the last chapter of the book that a limit of a suitable mean of a scaled reaction-diffusion process provides a solution to Eq. (0.16). In other words, a reaction-diffusion process describes the microscopic behavior, and Eq. (0.16) the macroscopic behavior, of a non-equilibrium system. In the last chapter, we will also prove that some solutions to Eq. (0.16) are asymptotically stable while others are not. This result reflects the critical phenomena of the systems, which correspond more or less to the phase transitions of the microscopic processes.


PART I

GENERAL

JUMP PROCESSES


Chapter 1

Transition Function and its Laplace Transform

In this chapter, we first study some basic properties of the sub-Markovian transition function of a jump process, continuity and differentiability, from which we deduce the transition intensity, the q-pair. Next, we study the one-to-one correspondence between the transition function of a jump process and its Laplace transform. This enables us to use the fundamental tool, the Laplace transform, instead of the transition function itself in the subsequent study. As is well known, the advantage of using the Laplace transform is that it reduces integral equations to linear algebraic ones.

1.1 Basic Properties of Transition Function

Throughout the book, we use the following notation. Let (E, ℰ) and (X, ℬ) be two measurable spaces. Denote by f ∈ ℰ/ℬ a measurable mapping from (E, ℰ) to (X, ℬ). However, if X = ℝ with Borel σ-algebra ℬ = ℬ(ℝ), we simply use the same notation ℰ to denote the set of all measurable functions from (E, ℰ) to (ℝ, ℬ(ℝ)). Similarly, let rℰ (resp. rℰ₊, bℰ, bℰ₊, ℰ₊) denote the set of all measurable real-valued (resp. non-negative real-valued, bounded real-valued, bounded non-negative, and non-negative but possibly +∞) functions. Finally, let 𝒴 (resp. 𝒴₊, 𝒴̃₊) denote the set of all σ-additive set functions (resp. finite measures, σ-finite measures).

Unless otherwise stated, the state space (E, ℰ) considered in the book is a Polish space with Borel σ-algebra ℰ. Recall that a Polish space is a separable topological space that can be metrized by means of a complete metric.

Definition 1.1 We call P(t, x, A) (t ≥ 0, x ∈ E, A ∈ ℰ) a (sub-Markovian) transition function of a jump process if the following conditions hold.

(1) For each t ≥ 0 and A ∈ ℰ, P(t, ·, A) ∈ ℰ₊.
(2) For each t ≥ 0 and x ∈ E, P(t, x, ·) ∈ 𝒴₊ and P(t, x, E) ≤ 1.
(3) Chapman–Kolmogorov equation (abbrev. CK-equation): for each t, s ≥ 0, x ∈ E and A ∈ ℰ,

P(t + s, x, A) = ∫ P(t, x, dy) P(s, y, A).

(4) For each x ∈ E and A ∈ ℰ, lim_{t→0} P(t, x, A) = P(0, x, A) = δ(x, A), where δ(·, A) is the indicator of A, also denoted by 1_A.
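For a finite state space, a transition function satisfying Definition 1.1 is produced by any matrix Q with non-negative off-diagonal entries and non-positive row sums, via P(t) = e^{tQ}. The sketch below (an illustrative 3 × 3 generator whose last state loses mass, so the function is genuinely sub-Markovian) checks conditions (1)–(4) numerically:

```python
import math

def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def expm(Q, t, terms=120):
    # plain Taylor series for e^{tQ}; adequate for tiny matrices
    n = len(Q)
    P = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = mat_mul(term, [[t * q / k for q in row] for row in Q])
        for i in range(n):
            for j in range(n):
                P[i][j] += term[i][j]
    return P

Qm = [[-2.0, 1.5, 0.5],
      [1.0, -1.0, 0.0],
      [0.0, 2.0, -2.5]]              # last row sums to -0.5: a killed state
I3 = [[float(i == j) for j in range(3)] for i in range(3)]

P = expm(Qm, 0.8)
# (1)-(2): entries are non-negative, total mass at most 1
assert min(min(row) for row in P) > -1e-12
assert max(sum(row) for row in P) <= 1.0 + 1e-10
assert sum(P[2]) < 1.0 - 1e-6        # mass is genuinely lost: P(t, x, E) < 1
# (3): CK-equation P(t+s) = P(t) P(s)
lhs, rhs = expm(Qm, 1.2), mat_mul(expm(Qm, 0.8), expm(Qm, 0.4))
assert max(abs(lhs[i][j] - rhs[i][j]) for i in range(3) for j in range(3)) < 1e-9
# (4): continuity at the origin, P(h) -> I as h -> 0
Ph = expm(Qm, 1e-3)
assert max(abs(Ph[i][j] - I3[i][j]) for i in range(3) for j in range(3)) < 0.01
```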


In this definition, the crucial point distinguishing it from the transition function of a general Markov process is the last condition (4), which expresses continuity at the origin and hence is often quoted as the continuity condition. Because of this condition, the sample paths of the process are step functions, at least before the explosion time; for this reason, we also call (4) the jump condition. In many cases, we do not want to distinguish different jump processes having the same transition function; hence we often call the transition function itself a jump process. In particular, we call it a Markov chain in the case of E being a countable set, denoted by the matrix (P_ij(t) : i, j ∈ E). A jump process P(t, x, A) is called honest if for each t ≥ 0 and x ∈ E, P(t, x, E) = 1. Otherwise, it is called non-honest.

Theorem 1.2 For each x and g ∈ bℰ₊, ∫ P(t, x, dy) f(y) is uniformly continuous in t, uniformly in f with |f| ≤ g. In particular, P(t, x, A) is uniformly continuous in t, uniformly in A.

Proof: By conditions (3) and (2) of Definition 1.1, it follows that

In the last step we have used the fact that |a − b| ≤ c for all a, b ∈ [0, c].

Thus, we have

Now, the first assertion follows from this and Definition 1.1 (4).

The next result shows a nice property of jump processes. Even though we need the result only in a few cases, it is still included for completeness.

Theorem 1.3 Let P(t, x, A) be a jump process on (E, ℰ). Then for each x ∈ E and A ∈ ℰ, either P(t, x, A) = 0 for all t > 0 or P(t, x, A) > 0 for all t > 0.

Proof: If P(t, x, A) is not honest, we may introduce a fictitious state Δ ∉ E such that E_Δ := E ∪ {Δ} is again a Polish space and Δ is an isolated state. Moreover, ℰ ⊂ ℰ_Δ := σ(ℰ ∪ {Δ}). Let


Then we obtain an honest jump process P̃(t, x, A) (t ≥ 0, x ∈ E_Δ, A ∈ ℰ_Δ). Clearly, if P̃(t, x, A) possesses the property described in the theorem, then so does P(t, x, A). Hence, we need only consider honest jump processes. By the CK-equation, we have

P(t + s, x, A) ≥ P(s, x, {x}) P(t, x, A).

Hence, if P(t, x, A) > 0, then P(s, x, A) > 0 for all s > t. From this and the continuity of P(·, x, A), it follows that P(t, x, A) > 0 for all t whenever x ∈ A. Furthermore, for x ∉ A, there exists u(x, A) ∈ [0, ∞] such that

P(t, x, A) = 0, if 0 ≤ t < u(x, A),
P(t, x, A) > 0, if t > u(x, A).  (1.2)

Thus, what we need to prove is that for each x ∈ E and A ∈ ℰ with x ∉ A, either u(x, A) = 0 or u(x, A) = ∞.

Suppose that

0 < u(x, A) < ∞  (1.3)

for some x and A; fix such x and A. Set u₀ = u(x, A), u(y) = u(y, A) and v(y) = u(y) ∧ u₀. Obviously, u and v are measurable. Since (E, ℰ) is a Polish space, we can construct a Markov process X(t) on a probability space (Ω, ℱ, P) with transition function P(t, x, ·) and initial state X(0) = x (cf. Neveu (1965), p. 83, Corollary). Let

Y₀(t) = v(X(t)).

Then Y₀(0) = u₀ and 0 ≤ Y₀(t) ≤ u₀. Moreover,

By the dominated convergence theorem, the right-hand side tends to 1 as h → 0, so Y₀ is continuous in probability. On the other hand, E can be embedded into a compact space X̄ so that the completion (Ē, ρ) of E in X̄ is again compact with metric ρ. Hence there exists a measurable and separable version Y of Y₀ such that 0 ≤ Y(t) ≤ u₀ = Y(0). Now, it suffices to show that there exists Λ ∈ ℱ such that P(Λ) = 0 and (1.4) holds. Actually, (1.4) gives us P(t, x, {y : u(y) < u₀}) = 0 for almost all t, and then for all t ≥ 0, because of the continuity of the transition function. Note that


u(y) = 0 whenever y ∈ A. It follows that P(t, x, A) = 0 for all t ≥ 0, which contradicts (1.2).

Next, we use three steps to prove (1.4).

a) Prove that Z ( t ) := Y ( t ) + t is non-decreasing in t

We first prove that

On the other hand, by (1.2), v(z) < v(y) − h implies that P(v(y) − h, z, A) > 0. Hence, (1.5) follows from (1.6).

Next, by (1.5), we have

This shows that

Y(t + h) ≥ Y(t) − h,  P-a.s.  (1.7)

By separability, we may choose an exceptional set so that (1.7) holds for all t, h ≥ 0. This proves a). In what follows, we will ignore the exceptional set.

lim_{h→0} ∫₀^∞ A(t, h) e^{−t} dt = 0.

Thus, we can choose a sequence {h_n}, h_n ↓ 0, such that

Σ_n ∫₀^∞ A(t, h_n) e^{−t} dt < ∞.

By the Fubini theorem, there is a set N with Lebesgue measure zero so that Σ_n A(t, h_n) < ∞ for all t ∉ N. Then, by the Borel–Cantelli lemma, we have

P[Y(t + h_n) = Y(t) for all sufficiently large n] = 1,  t ∉ N.


In this section, we study the right derivatives at the origin for a jump process. We first deal with the diagonals.

Theorem 1.4 For each x ∈ E, the limit q(x) := lim_{t↓0} [1 − P(t, x, {x})]/t exists (but may be infinite).

Proof: Fix x ∈ E. By the CK-equation, we have

P(t, x, {x}) ≥ P(t/n, x, {x})^n.

From condition (4) of Definition 1.1, we see that the right-hand side is positive for large enough n. Hence P(t, x, {x}) > 0 for all t ≥ 0. Thus, for fixed x, we may define

f(t) := − log P(t, x, {x}) ∈ [0, ∞).

By using the CK-equation again, we obtain

P(t + s, x, {x}) ≥ P(t, x, {x}) P(s, x, {x}).  (1.10)

This shows that f is sub-additive:

f(t + s) ≤ f(t) + f(s).  (1.11)
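The sub-additivity of f, and the resulting existence of the limit q(x) = lim_{t↓0} f(t)/t, can be checked on the two-state chain, for which P(t, 0, {0}) has the standard closed form used below (illustrative rates; the limit equals a, the rate of leaving state 0):

```python
import math

a, b = 1.0, 3.0                      # illustrative rates: q_{01} = a, q_{10} = b
c = a + b
p00 = lambda t: b / c + (a / c) * math.exp(-c * t)   # P(t, 0, {0})
f = lambda t: -math.log(p00(t))

# sub-additivity f(t+s) <= f(t) + f(s), i.e. P(t+s,0,{0}) >= P(t,0,{0}) P(s,0,{0})
for t in (0.1, 0.5, 1.0, 2.0):
    for s in (0.1, 0.7, 1.5):
        assert f(t + s) <= f(t) + f(s) + 1e-12

# the limit q(0) = lim_{t -> 0} f(t)/t equals the rate a of leaving state 0
for h in (1e-2, 1e-3, 1e-4):
    assert abs(f(h) / h - a) < 3 * c * h
```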
