
Applied Mathematical Sciences

Zeev Schuss

Brownian Dynamics at Boundaries and Interfaces
In Physics, Chemistry, and Biology


School of Mathematical Sciences

Tel Aviv University

Tel Aviv, Israel

ISSN 0066-5452

ISBN 978-1-4614-7686-3 ISBN 978-1-4614-7687-0 (eBook)

DOI 10.1007/978-1-4614-7687-0

Springer New York Heidelberg Dordrecht London

Library of Congress Control Number: 2013944682

Mathematics Subject Classification (2010): 60Hxx, 60H30, 62P10, 65Cxx, 82C3, 92C05, 92C37, 92C40, 35-XX, 35B25, 35Q92

© Author 2013

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher's location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


Preface

Brownian dynamics serve as mathematical models for the diffusive motion of microscopic particles of various shapes in gaseous, liquid, or solid environments. The renewed interest in Brownian dynamics is due primarily to their key role in molecular and cellular biophysics: diffusion of ions and molecules is the driver of all life. Brownian dynamics simulations are the numerical realizations of stochastic differential equations (SDEs) that model the functions of biological microdevices such as protein ionic channels of biological membranes, cardiac myocytes, neuronal synapses, and many more. SDEs are ubiquitous models in computational physics, chemistry, biophysics, computer science, communications theory, mathematical finance theory, and many other disciplines. Brownian dynamics simulations of the random motion of particles, be it molecules or stock prices, give rise to mathematical problems that neither the kinetic theory of Maxwell and Boltzmann nor Einstein's and Langevin's theories of Brownian motion could predict.

Kinetic theory, which assigns probabilities to configurations of ensembles of particles in phase space, assumes that the ensembles are in thermodynamic equilibrium, which means that no net current is flowing through the system. Thus it is not applicable to the description of nonequilibrium situations such as conduction of ions through protein channels, nervous signaling, calcium dynamics in cardiac myocytes, the process of viral infection, and countless other situations in molecular biophysics.

The motion of individual particles in the ensemble is not described in sufficient detail to permit computer simulations of the atomic or molecular individual motions in a way that reproduces all macroscopic phenomena. The Einstein statistical characterization of the motion of a heavy particle undergoing collisions with the much smaller particles of the surrounding medium lays the foundation for computer simulations of the Brownian motion. However, pushing Einstein's description beyond its range of validity leads to artifacts that baffle the simulators: particles move without velocity, so there is no telling when they enter or leave a given domain. Theoretically, they cross and recross interfaces an infinite number of times in any finite time interval. Thus the simulation of Brownian particles in a small domain surrounded by a continuum becomes problematic. The Langevin description, which includes velocity, partially remedies the problem. There is, however, a price to pay: the dimension, and therefore the computational complexity, is doubled.


Computer simulations of diffusion with reflection or partial reflection at the boundary of a domain, such as at the cellular membrane, are unexpectedly complicated. Both the discrete reflection and partial reflection laws of the simulated trajectories are not very intuitive in their peculiar dependence on the geometry of the boundary and on the local anisotropy of the diffusion tensor. The latter is the hallmark of the diffusion of shaped objects. A case in point is the diffusion of a stiff rod, whose diffusion tensor is clearly anisotropic (see Sect. 7.7). It is not a priori clear what should be the reflection law of the rod when one of its ends hits the impermeable boundary of the confining domain. This issue has been a thorn in the side of simulators for a long time, which may be explained by the unexpected mathematical complexity of the problem. It is resolved in Sects. 2.5 and 2.6.

The behavior of random trajectories near boundaries of the simulation imposes a variety of boundary conditions on the probability density of the random trajectories and its functionals. The quite intricate connection between the boundary behavior of random trajectories and the boundary conditions for the partial differential equations is treated here with special care. The analysis of the mathematical issues that arise in Brownian dynamics simulations relies on Wiener's discrete path integral representation of the transition probability density of the random trajectories that are created by the discrete simulation. As the simulation is refined, the Wiener integral representation leads to initial and boundary value problems for partial differential equations of elliptic and parabolic types that describe important probabilistic quantities. These include probability density functions (pdfs), mean first passage times, density of the mean time spent at a point, survival probability, probability flux density, and so on. Green's function and its functionals play a central role in expressing these quantities analytically and in determining their interrelationships. The analysis provides the means for determining the relationship between the time step in a simulation and the boundary concentrations.

Key mathematical problems in running Brownian or Langevin simulations include the following questions: What is the "correct" boundary behavior of the random trajectories? What is the effect of their boundary behavior on statistics, e.g., on the pdf? What boundary behavior should be chosen to produce a given boundary behavior of the pdf? How can the higher-dimensional Langevin dynamics be adequately approximated by coarser Brownian dynamics? How should one choose the time step in a simulation? Another curse of computer simulations of random motion is the ubiquitous phenomenon of rare events. It is particularly acute in molecular biophysics, where the simulated particles have to hit small targets or to squeeze through narrow passages. This is the case, for example, in simulating ionic flux through protein channels of biological membranes. Finding a small target is an important problem in Brownian dynamics simulations. Can the computational effort be reduced by providing analytical information about the process? While numerical analysis gives error estimates for given simulation schemes on finite time intervals, simulations are often required to produce estimates of unlimited random quantities such as first passage times or their moments. Thus we need to know how much computational effort is needed for an estimate of the random escape time from an attractor or a confining domain.


In this book, we address these and additional mathematical problems of computer simulation of Itô-type SDEs. The book is not concerned with numerical analysis, that is, with the design of simulation schemes and the analysis of their convergence, but rather with the more fundamental questions mentioned above. The analysis presented in this book not only is applicable to the Euler scheme, but can also be applied to many other simulation schemes. While the singular perturbation methods for the analysis of rare events that are due to small noise relative to large drift were thoroughly discussed in Schuss (2010b, 2011), the analysis of rare events due to the geometry of the confining domain requires new mathematical methods. The "narrow escape problem" in diffusion theory, which goes back to Lord Rayleigh, is to calculate the mean first passage time of a diffusion process to a small absorbing target on an otherwise reflecting boundary of a bounded domain. It includes also the problem of diffusing from one compartment to another through a narrow passage, a situation that is often encountered in molecular and cellular biophysics and frustrates numerical simulations. The new mathematical methods for resolving this problem are presented here in great analytical detail.

The exposition in this book is kept at an intermediate level of mathematical rigor. Experience shows that mathematical rigor and applications can hardly coexist in the same course; excessive rigor leaves no room for in-depth development of analytical methods and tends to turn off students interested in scientific applications. Therefore, the book contains only the minimal mathematical rigor required for understanding the mathematical concepts and for enabling the students to use their own judgment of what is correct and what requires further theoretical study. All topics require a basic knowledge of SDEs and of asymptotic methods in the theory of partial differential equations, as presented, for example, in Schuss (2010b). The introductory review of stochastic processes in Chap. 1 should not be mistaken for an expository text on the subject. Its role is to establish terminology and to serve as a refresher on SDEs. The role of the exercises is to give the reader an opportunity to examine his/her mastery of the subject. Other texts on stochastic dynamics include, among other titles, (Arnold 1998; Friedman 2007; Gihman and Skorohod 1972; McKean 1969; Øksendal 1998; Protter 1992). Texts on numerical analysis of stochastic differential equations include (Allen and Tildesley 1991; Kloeden and Platen 1992; Milstein 1995; Risken 1996; Robert and Casella 1999; Doucet et al. 2001; Kloeden 2002; Milstein and Tretyakov 2004; Honerkamp 1994). A solid training in partial differential equations of mathematical physics and in the asymptotic methods of applied mathematics can be derived from the study of classical texts such as (Zauderer 1989; O'Malley 1974; Kevorkian and Cole 1985) or (Bender and Orszag 1978). Many of the applications and examples in this book concern molecular and cellular biophysics, especially in the context of neurophysiology. Basic facts on these subjects should not be acquired from mathematicians or physicists, but rather from professional elementary texts on the subjects, such as (Alberts et al. 1994; Hille 2001; Koch 1999; Koch and Segev 2001; Sheng et al. 2012; Cowan et al. 2003; Yuste 2010; Baylog 2009). Wikipedia should be consulted for clarifying biochemical and physiological terminology.


This book is aimed at applied mathematicians, physicists, theoretical chemists, and physiologists who are interested in modeling, analysis, and simulation of microdevices of microbiology. A special topics course from this book requires good preparation in the theory of SDEs, such as can be found in Schuss (2010b). Alternatively, some of the topics discussed in this book can be interspersed between the topics of a more general course as applications and illustrations of the general theory.

The book contains exercises and worked-out examples. Hands-on training in stochastic processes, as my long teaching experience shows, consists in solving the exercises, without which understanding is only illusory.

Acknowledgments. Much of the material presented in this book is based on my collaboration with D. Holcman, A. Singer, B. Nadler, R.S. Eisenberg, and many other scientists and students, whose names are listed next to mine in the author index.


List of Figures

2.1 Reflected trajectories 61

2.2 Oblique and normal reflections 69

2.3 Marginal density of x(T ) with oblique reflection 78

2.4 Marginal density of y(T ) with oblique reflection 79

2.5 Numerical solution of the FPE with oblique reflection 79

2.6 Another numerical solution of the FPE with oblique reflection 80

2.7 The reflection law of X_t in Ω 81

2.8 Marginal density of x(T ) with normal and oblique reflections 82

2.9 Marginal density of y(T ) with normal and oblique reflections 83

3.1 Typical baths separated by membrane with channel 96

3.2 Simulation in (0, 1) with normal initial distribution 102

3.3 Simulation in (0, 1) with initial residual of the normal distribution 102

3.4 Concentration profiles with time-step-independent injection rate 103

3.5 Concentration profile with time-step-dependent injection rate 103

3.6 Concentration vs displacement of a Langevin dynamics simulation 106

4.1 The domain D and its complement in the sphere D_R 130

5.1 Variance of fluctuations in the fraction of bound sites 143

5.2 Schematic drawing of a synapse between two neurons 145

5.3 Model of a dendritic spine 146

6.1 Double-well potential surface 166

6.2 Contours and trajectories 167

6.3 A potential well with a single metastable state 167

6.4 Dumbbell-shaped domain 168

7.1 Escaping Brownian trajectory 200

7.2 Composite domains 200

7.3 Receptor movement on the neuronal membrane 201

7.4 An idealized model of the synaptic cleft 201


7.5 Leak trajectory 202

7.6 Escape through a funnel 202

7.7 Funnel formed by a partial block 203

7.8 A small opening near a corner of angle α 211

7.9 Narrow escape from an annulus 212

7.10 Escape near a cusp 212

7.11 Escape to the north pole 214

7.12 A surface of revolution with a funnel 218

7.13 Conformal image of a funnel 220

7.14 Drift of a projected Brownian motion 226

7.15 Narrow straits formed by a cone-shaped funnel 227

7.16 Rod in strip 235

7.17 Conformal image of a rod in a strip 240

7.18 Boundary layers 241

7.19 NET from a domain 244

7.20 Organization of the neuronal membrane 246

8.1 Probability to exit through a single pump on the neck membrane 278

8.2 Exit probability in a synaptic cleft 280

8.3 Exit probability in a synaptic cleft, 20 AMPAR channels in PSD 280


List of Symbols

We use interchangeably ⟨·⟩ and E(·) to denote the expectation (average) of a random variable, but E(· | ·) and Pr{· | ·} to denote conditional expectation and conditional probability, respectively.

δ(x) Dirac’s delta function (functional)

det(A) The determinant of the matrix A

E(x), ⟨x⟩ The expected value (expectation) of x

Δ, Δ_x Greek uppercase delta, the Laplace operator (with respect to x): Δ_x = Σ_i ∂²/∂x_i²

e(t) The estimation error process: ˆx(t) − x(t)

F The sample space of Brownian events

J [x( ·)] Functional of the trajectory x( ·)

L2[a, b] Square integrable functions on the interval [a, b]

Mn,m Space of n × m real matrices

m1∧ m2 The minimum min{m1, m2}


n ∼ N(μ, σ²) The random variable n is normally distributed with mean μ and variance σ²

N ∼ N(μ, Σ) The random vector N is normally distributed with mean μ and covariance matrix Σ

∇, ∇_x Greek nabla, the gradient operator (with respect to x): ∇_x = (∂/∂x_1, . . . , ∂/∂x_d)^T

Pr{event} The probability of event

p X (x) The probability density function of the vector X

Q The rational numbers

R, R d The real line, the d-dimensional Euclidean space

V x The partial derivative of V with respect to x : ∂V /∂x

tr(A) Trace of the matrix A

Var(x) The variance of x

w(t), v(t) Vectors of independent Brownian motions

x, f (x) Scalars—lowercase letters

x, f (x) Vectors—bold lowercase letters

x i The ith element of the vector x

x(·) Trajectory or function in function space


List of Acronyms

BKE Backward Kolmogorov equation

CKE Chapman–Kolmogorov equation

epdf Equilibrium probability density function

FPE Fokker–Planck equation

FPT First passage time

i.i.d. Independent identically distributed

i.o. Infinitely often

ODE Ordinary differential equation

OU Ornstein–Uhlenbeck process

PAV Pontryagin–Andronov–Vitt

pdf Probability density function

PDE Partial differential equation

PDF Probability distribution function

SDE Stochastic differential equation

TSR Transition state region

TST Transition state theory

(G)TS Generalized transition state

(G)TST Generalized transition state theory

Contents

1.1 Definition of Mathematical Brownian Motion 1

1.1.1 Mathematical Brownian Motion in R^d 3

1.1.2 Construction of Mathematical Brownian Motions 6

1.1.3 Analytical and Statistical Properties of Brownian Paths 7

1.2 Integration with Respect to MBM. The Itô Integral 9

1.2.1 Stochastic Differentials 11

1.2.2 The Chain Rule and Itô’s Formula 12

1.3 Stochastic Differential Equations 14

1.3.1 The Langevin Equation 14

1.3.2 Itô Stochastic Differential Equations 18

1.3.3 SDEs of Itô Type 18

1.3.4 Diffusion Processes 22

1.4 SDEs and PDEs 23

1.4.1 The Kolmogorov Representation 24

1.4.2 The Feynman–Kac Representation and Terminating Trajectories 25

1.4.3 The Pontryagin–Andronov–Vitt Equation for the MFPT 26

1.4.4 The Exit Distribution 27

1.4.5 The PDF of the FPT 29

1.5 The Fokker–Planck Equation 30

1.5.1 The Backward Kolmogorov Equation 32

1.5.2 The Survival Probability and the PDF of the FPT 32

2 Euler's Scheme and Wiener's Measure 35

2.1 Euler's Scheme for Itô SDEs and Its Convergence 35

2.2 The pdf of Euler's Scheme in R and the FPE 37

2.2.1 Euler's Scheme in R^d 39

2.2.2 The Convergence of the pdf in Euler's Scheme in R^d 39

2.2.3 Unidirectional and Net Probability Flux 42

2.3 Brownian Dynamics at Boundaries 45


2.4 Absorbing Boundaries 46

2.4.1 Unidirectional Flux and the Survival Probability 50

2.5 Reflecting and Partially Reflecting Boundaries 52

2.5.1 Reflection and Partial Reflection in One Dimension 53

2.6 Partially Reflected Diffusion in R^d 59

2.6.1 Partial Reflection in a Half-Space: Constant Diffusion Matrix 60

2.6.2 State-Dependent Diffusion and Partial Oblique Reflection 67

2.6.3 Curved Boundary 75

2.7 Boundary Conditions for the Backward Equation 82

2.8 Discussion and Annotations 85

3 Brownian Simulation of Langevin's 89

3.1 Diffusion Limit of Physical Brownian Motion 90

3.1.1 The Overdamped Langevin Equation 90

3.1.2 Diffusion Approximation to the Fokker–Planck Equation 92

3.1.3 The Unidirectional Current in the Smoluchowski Equation 93

3.2 Trajectories Between Fixed Concentrations 94

3.2.1 Trajectories, Fluxes, and Boundary Concentrations 96

3.3 Connecting a Simulation to the Continuum 99

3.3.1 The Interface Between Simulation and the Continuum 100

3.3.2 Brownian Dynamics Simulations 100

3.3.3 Application to Channel Simulation 104

3.4 Annotation 107

4 The First Passage Time to a Boundary 111

4.1 The FPT and Escape from a Domain 111

4.2 The PDF of the FPT and the Density of the Mean Time Spent at a Point 115

4.3 The Exit Density and Probability Flux Density 119

4.4 Conditioning 120

4.4.1 Conditioning on Trajectories that Reach A Before B 121

4.5 Application of the FPT to Diffusion Theory 125

4.5.1 Stationary Absorption Flux in One Dimension 125

4.5.2 The Probability Law of the First Arrival Time 126

4.5.3 The First Arrival Time for Steady-State Diffusion in R^3 129

4.5.4 The Next Arrival Times 132

4.5.5 The Exponential Decay of G(r, t) 133


5 Brownian Models of Chemical Reactions in Microdomains 135

5.1 A Stochastic Model of a Non-Arrhenius Reaction 137

5.2 Calcium Dynamics in Dendritic Spines 144

5.2.1 Dendritic Spines and Their Function 144

5.2.2 Modeling Dendritic Spine Dynamics 148

5.2.3 Biological Simplifications of the Model 149

5.2.4 A Simplified Physical Model of the Spine 150

5.2.5 A Schematic Model of Spine Twitching 150

5.2.6 Final Model Simplifications 151

5.2.7 The Mathematical Model 152

5.2.8 Mathematical Simplifications 152

5.2.9 The Langevin Equations 152

5.2.10 Reaction–Diffusion Model of Binding and Unbinding 154

5.2.11 Specification of the Hydrodynamic Flow 155

5.2.12 Chemical Kinetics of Binding and Unbinding Reactions 157

5.2.13 Simulation of Calcium Kinetics in Dendritic Spines 158

5.2.14 A Langevin (Brownian) Dynamics Simulation 159

5.2.15 An Estimate of a Decay Rate 159

5.2.16 Summary and Discussion 162

5.3 Annotations 163

6 Interfacing at the Stochastic Separatrix 165

6.1 Transition State Theory of Thermal Activation 168

6.1.1 The Diffusion Model of Activation 169

6.1.2 The FPE and TST 170

6.2 Reaction Rate and the Principal Eigenvalue 172

6.3 MFPT 175

6.3.1 The Rate κabs(D), the MFPT τ(D), and the Eigenvalue λ1(D) 176

6.3.2 MFPT for Domains of Types I and II in R^d 177

6.4 Recrossing, Stochastic Separatrix, Eigenfunctions 179

6.4.1 The Eigenvalue Problem 182

6.4.2 Can Recrossings Be Neglected? 186

6.5 Accounting for Recrossings and the MFPT 188

6.5.1 The Transmission Coefficient kTR 192

6.6 Summary and Discussion 193

6.6.1 Annotations 194

7 Narrow Escape in R^2 199

7.1 Introduction 199

7.1.1 The NET Problem in Neuroscience 199

7.1.2 NET, Eigenvalues, and Time-Scale Separation 203


7.2 A Neumann–Dirichlet Boundary Value Problem 204

7.2.1 The Neumann Function and an Integral Equation 205

7.3 The NET Problem in Two Dimensions 207

7.4 Brownian Motion in Dire Straits 218

7.4.1 The MFPT to a Bottleneck 218

7.4.2 Exit from Several Bottlenecks 223

7.4.3 Diffusion and NET on a Surface of Revolution 224

7.5 A Composite Domain with a Bottleneck 227

7.5.1 The NET from Domains with Bottlenecks in R^2 and R^3 230

7.6 The Principal Eigenvalue and Bottlenecks 231

7.6.1 Connecting Head and Neck 231

7.6.2 The Principal Eigenvalue in Dumbbell-Shaped Domains 232

7.7 A Brownian Needle in Dire Straits 234

7.7.1 The Diffusion Law of a Brownian Needle in a Planar Strip 235

7.7.2 The Turnaround Time τ_{L→R} 237

7.8 Applications of the NET 242

7.9 Annotations 247

7.9.1 Annotation to the NET Problem 247

8 Narrow Escape in R^3 249

8.1 The Neumann Function in Regular Domains in R^3 249

8.1.1 Elliptic Absorbing Window 253

8.1.2 Second-Order Asymptotics for a Circular Window 256

8.1.3 Leakage in a Conductor of Brownian Particles 258

8.2 Activation Through a Narrow Opening 262

8.2.1 The Neumann Function 264

8.2.2 Narrow Escape 267

8.2.3 Deep Well: A Markov Chain Model 269

8.3 The NET in a Solid Funnel-Shaped Domain 272

8.4 Selected Applications in Molecular Biophysics 277

8.4.1 Leakage from a Cylinder 277

8.4.2 Applications of the NET 279

8.5 Annotations 281


1 Mathematical Brownian Motion

1.1 Definition of Mathematical Brownian Motion

The basic concepts in the axiomatic definition of the one-dimensional Brownian motion as a mathematical object are a space of events Ω, whose elementary events are real-valued continuous functions ω = ω(·) on the positive axis R+. The construction of the set of events F in Ω and of Wiener's probability measure Pr{A} for A ∈ F is given in Schuss (2010b). A continuous stochastic process is a function w(t, ω) : R+ × Ω → R such that for all ω ∈ Ω, the function w(t, ω) is a continuous function of t, and for all x ∈ R and t ∈ R+, the set {ω ∈ Ω : w(t, ω) ≤ x} is an event in F. Mathematical Brownian motion (MBM), often referred to as the Wiener process, is defined as follows.

Definition 1.1.1 (The MBM). A real-valued stochastic process w(t, ω) defined on R+ × Ω is an MBM if (1) w(0, ω) = 0 with probability 1, (2) w(t, ω) is a continuous function of t for almost all ω ∈ Ω, and (3) for every t, s ≥ 0, the increment Δw(s, ω) = w(t + s, ω) − w(t, ω) is independent of w(τ, ω) for all τ ≤ t, and is a zero-mean Gaussian random variable with variance s.


The second part of property (3) means that the probability distribution function (PDF) of an MBM is

F_w(x, t) = Pr{ω ∈ Ω : w(t, ω) ≤ x | w(0, ω) = 0} = (1/√(2πt)) ∫_{−∞}^x e^{−y²/2t} dy,   (1.2)

and the probability density function (pdf) is f_w(x, t) = ∂F_w(x, t)/∂x = (1/√(2πt)) e^{−x²/2t}.

It is well known (and easily verified) that f_w(x, t) is the solution of the initial value problem for the diffusion equation

∂f_w(x, t)/∂t = (1/2) ∂²f_w(x, t)/∂x²,   lim_{t↓0} f_w(x, t) = δ(x).   (1.4)

It can be shown that a stochastic process satisfying these axioms actually exists (Schuss 2010b). Some of the properties of MBM follow from the axioms in a straightforward manner. For example, note that (1) and (2) are not contradictory, despite the fact that not all continuous functions vanish at time t = 0. Property (1) asserts that all trajectories of the Brownian motion that do not start at the origin are assigned probability 0. In view of the above, the Brownian paths are those continuous functions that take the value 0 at time 0. That is, the Brownian paths are conditioned on starting at time t = 0 at the point x0 = w(0, ω) = 0. To emphasize this point, we modify the notation of the Wiener probability measure (1.2) to Pr0{·} (Wiener 1923).

If (1.2) is replaced by

F_w(x, t) = Pr{ω ∈ Ω : w(t, ω) ≤ x | w(0, ω) = x0} = (1/√(2πt)) ∫_{−∞}^x e^{−(y−x0)²/2t} dy,   (1.5)

then the event that the trajectories start at x0, which has probability 0 under the measure Pr0{·}, is now assigned probability 1 under the measure Pr_{x0}{·}. Similarly, replacing the condition t0 = 0 with t0 = s and conditioning on w(s, ω) = x0 in (1.5) shifts the Wiener probability measure, now denoted by Pr_{x0,s}, so that

Pr_{x0,s}{ω ∈ Ω : w(t, ω) ∈ [a, b]} = Pr0{ω ∈ Ω : w(t − s, ω) ∈ [a − x0, b − x0]}.


This means that for all positive t, the increments of the MBM Δw(s, ω) = w(t + s, ω) − w(t, ω), as functions of s, are MBMs, so that the probabilities of any Brownian event of Δw(s, ω) are independent of t; that is, the increments of the MBM are stationary. Accordingly, the first two moments of the MBM are E w(t) = 0 and E w²(t) = t.

We recall that the autocorrelation function of a stochastic process x(t, ω) is defined as the expectation R_x(t, s) = E x(t, ω) x(s, ω).

Exercise 1.1 (Property 4). Using the notation t ∧ s = min{t, s}, prove that the autocorrelation function of the MBM w(t, ω) is R_w(t, s) = t ∧ s. □

1.1.1 Mathematical Brownian Motion in R^d

The set of events F in the product space is constructed as in the one-dimensional case. Consider the so-called "cylinder" event for times 0 ≤ t1 < t2 < · · · < t_k and sets A1, . . . , A_k.

Definition 1.1.2 (The Wiener probability measure for a d-dimensional MBM). The d-dimensional Wiener probability measure of a cylinder is defined as the product of the one-dimensional Gaussian transition densities integrated over the sets of the cylinder.

Equation (1.4) implies that f_w(x, t) satisfies the d-dimensional diffusion equation

∂f_w(x, t)/∂t = (1/2) Δ_x f_w(x, t)

and the initial condition

lim_{t↓0} f_w(x, t) = δ(x).

It can be seen from (1.9) that every rotation of the d-dimensional Brownian motion is a d-dimensional Brownian motion. Higher-dimensional stochastic processes are defined as vector-valued processes.

Definition 1.1.3 (Vector-valued processes). A vector-valued function x(t, ω) : R+ × Ω → R^d is called a stochastic process in (Ω, F) with continuous trajectories if (i) x(t, ω) is a continuous function of t for every ω ∈ Ω, and (ii) for every t ≥ 0 and x ∈ R^d, the sets {ω ∈ Ω : x(t, ω) ≤ x} (componentwise) are events in F. The PDF of the process is F_x(y, t) = Pr{x(t, ω) ≤ y}, and the pdf is defined as

f_x(y, t) = ∂^d F_x(y, t) / ∂y1 ∂y2 · · · ∂y_d.   (1.14)

The expectation of a matrix-valued function g(x) of a vector-valued process x(t, ω) is defined componentwise, and the autocovariance matrix is defined as

Cov_x(t, s) = E{[x(t) − E x(t)][x(s) − E x(s)]^T}.

The autocovariance matrix of the d-dimensional Brownian motion is found from (1.7) as

Cov_w(t, s) = (t ∧ s) I,

where I is the identity matrix.

Exercise 1.2 (Transformations preserving an MBM). Show, by verifying properties (1)–(3), that the following processes are Brownian motions: (i) w1(t) = w(t + s) − w(s); (ii) w2(t) = c w(t/c²), where c is any positive constant; (iii) ...

Exercise 1.3 (Changing scale). Give necessary and sufficient conditions on the functions f(t) and g(t) such that the process w4(t) = f(t) w(g(t)) is an MBM. □

Exercise 1.4 (The joint pdf of the increments of the MBM). Define ...

Exercise 1.5 (Radial MBM). Define radial MBM by y(t) = |w(t)|, where w(t) is an n-dimensional MBM. Find the pdf of y(t), the partial differential equation, and the initial condition that it satisfies. □


1.1.2 Construction of Mathematical Brownian Motions

There are several constructions of MBMs, such as a Fourier series with random coefficients (Schuss 2010b). A simple construction of an MBM as a limit of an infinite sequence of continuous random functions is as follows. Consider a sequence of standard Gaussian i.i.d. random variables {Y_k}, for k = 0, 1, . . ., defined in Ω̃. We denote by ω any realization of the infinite sequence {Y_k} and construct a continuous path corresponding to this realization. We consider a sequence of binary partitions T_n of the unit interval, beginning with T1 = {0, 1}, on which we set X1(0, ω) = 0 and X1(1, ω) = Y1(ω) and interpolate linearly. To pass from X1(t, ω) to X2(t, ω), we refine by keeping the "old" points, that is, by setting X2(t, ω) = X1(t, ω) for t ∈ T1, and at the "new" point, T2 \ T1 = {1/2}, setting X2(1/2, ω) = (1/2)[X1(0, ω) + X1(1, ω)] + (1/2) Y2(ω). The process X2(t, ω) is defined in the interval by linear interpolation between the points of T2. We proceed by induction, assigning at each new binary point the average of the values of the previous approximation at its two neighbors plus an independent, suitably scaled Gaussian correction, and connect linearly between consecutive points.

Thus X_{n+1}(t) is a refinement of X_n(t). Old points stay put! So far, for every realization ω, we have constructed an infinite sequence of continuous functions. It can be shown (Schuss 2010b) that for almost all (in the sense of Ω̃) realizations ω, the sequence X_n(t) converges uniformly to a continuous function, thus establishing a correspondence between ω and a continuous function. Obviously, the correspondence can be reversed in this construction.


Exercise 1.6 (MBM at binary points). Show that at binary points, t_{k,n} = k 2^{−n} for 0 ≤ k ≤ 2^n, the process X_n(t, ω) has the properties of the Brownian motion. □

Exercise 1.7 (L² convergence). Show that X_n(t, ω) → X(t, ω) in L², where X(t, ω) ... □

Exercise 1.8 (Lévy's construction gives an MBM). Show that if X1(t) and X2(t) are independent Brownian motions on the interval [0, 1], then the process equal to X1(t) for 0 ≤ t ≤ 1 and to X1(1) + t X2(1/t) − X2(1) for t > 1 is an MBM on [0, ∞). □

Exercise 1.9 (Refinements). For a given sequence 0 = t0 < t1 < · · · < t_n = T, consider the zero-mean independent Gaussian random variables Δw(t_k) such that E Δw²(t_k) = Δt_k = t_k − t_{k−1}. If a Brownian trajectory is sampled at points 0 = t0 < t1 < · · · < t_n = T according to the scheme

x(t_k) = x(t_{k−1}) + Δw(t_k),   k = 1, . . . , n,   (1.19)

or otherwise, how should the sampling be refined by introducing an additional sampling point t̃_i such that t_i < t̃_i < t_{i+1}? □

1.1.3 Analytical and Statistical Properties of Brownian Paths

The Wiener probability measure assigns probability 0 to several important classes of Brownian paths. These classes include all differentiable paths, all paths that satisfy the Lipschitz condition at some point, all continuous paths with bounded variation on some interval, and so on. The Brownian paths have many interesting properties (Itô and McKean 1996; Hida 1980; Rogers and Williams 2000); here we list only a few of the most prominent features of the Brownian paths.

Although continuous, the Brownian paths are nondifferentiable at any point with probability 1 (Paley et al. 1933; Schuss 2010b). The level-crossing property of an MBM is that for every level a, the times t such that w(t) = a form a perfect set (i.e., every point of this set is a limit of points in this set). Thus, when a Brownian path reaches a given level at time t, it recrosses it infinitely many times in every interval [t, t + Δt].

Exercise 1.10 (Level crossing). Use the scheme (1.19) with step size Δt = 0.5 to sample a Brownian path in the interval 0 ≤ t ≤ 1 and refine it several times at binary points. Count the number of crossings of a given level as the trajectory is refined. □

We suppress henceforward the variable ω ∈ Ω in the notation for a stochastic process.
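A possible numerical take on Exercise 1.10 is sketched below (illustrative only; the seed, the number of refinements, and the choice of level are arbitrary assumptions). It samples a path by the scheme (1.19) with Δt = 0.5, refines it repeatedly at binary midpoints with Brownian-bridge corrections as in Sect. 1.1.2, and counts sign changes of x(t) − a; the count keeps growing as the path is refined, in line with the level-crossing property.

```python
import numpy as np

def refine(t, x, rng):
    """One binary refinement of a sampled Brownian path (Brownian-bridge midpoints)."""
    tm = 0.5 * (t[:-1] + t[1:])
    dt = np.diff(t)
    xm = 0.5 * (x[:-1] + x[1:]) + 0.5 * np.sqrt(dt) * rng.standard_normal(dt.size)
    t_new = np.empty(t.size + tm.size)
    x_new = np.empty_like(t_new)
    t_new[0::2], t_new[1::2] = t, tm
    x_new[0::2], x_new[1::2] = x, xm
    return t_new, x_new

def count_crossings(x, level):
    """Number of sign changes of x - level along the sampled path."""
    s = np.sign(x - level)
    return int(np.sum(s[:-1] * s[1:] < 0))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Scheme (1.19) with step size dt = 0.5 on 0 <= t <= 1.
    t = np.array([0.0, 0.5, 1.0])
    x = np.concatenate(([0.0], np.cumsum(np.sqrt(0.5) * rng.standard_normal(2))))
    level = x[-1] / 2.0                 # a level the path is certain to cross
    for k in range(12):
        print(f"refinement {k:2d}: {count_crossings(x, level):6d} crossings")
        t, x = refine(t, x, rng)
```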


Definition 1.1.4 (Markov process). A stochastic process ζ(t) on [0, T] is called a Markov process if for every pair of sequences 0 ≤ t0 < · · · < t_n ≤ T and x0, x1, . . . , x_n, its transition probability distribution function (TPDF) has the property that if the process is observed at times t0, t1, . . . , t_{n−1} such that 0 ≤ t0 < · · · < t_{n−1} ≤ T, its "future" evolution (at times t > t_{n−1}) depends only on the "latest" observation (at time t_{n−1}).

Theorem 1.1.1 (The Chapman–Kolmogorov equation). The pdf of a Markov process satisfies the Chapman–Kolmogorov equation (CKE)

p(y, t | x, s) = ∫ p(y, t | z, τ) p(z, τ | x, s) dz   for s < τ < t.   (1.22)

The identities relating the conditional and joint densities of the process are consequences of the Markov property. Writing p(y, t | x, s) as a marginal density of p(y, t, z, τ | x, s) and using these identities, we obtain (1.22).

Exercise 1.11 (MBM is a Markov process). Prove that MBM is a Markov process. (HINT: Show that for any sequences 0 = t0 < t1 < · · · < t_n and x0 = 0, x1, . . . , x_n, ...) □


The CKE (1.22) implies that it suffices to know the two-point transition pdf of the Brownian motion.
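As a quick illustration (not part of the text; the grid and the parameter values below are arbitrary choices), one can check numerically that the Gaussian transition pdf of the MBM, p(y, t | x, s) = exp(−(y − x)²/2(t − s))/√(2π(t − s)), satisfies the CKE (1.22) by carrying out the integration over the intermediate point with a simple quadrature.

```python
import numpy as np

def p(y, t, x, s):
    """Transition pdf of the MBM: Gaussian with mean x and variance t - s."""
    var = t - s
    return np.exp(-(y - x) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

if __name__ == "__main__":
    s, tau, t = 0.0, 0.7, 1.5          # observation times s < tau < t
    x, y = 0.3, -0.4                   # start and end points
    z, dz = np.linspace(-20.0, 20.0, 400_001, retstep=True)
    lhs = p(y, t, x, s)
    rhs = np.sum(p(y, t, z, tau) * p(z, tau, x, s)) * dz   # quadrature over the intermediate point
    print(f"p(y,t|x,s)      = {lhs:.10f}")
    print(f"integral over z = {rhs:.10f}")
```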

Exercise 1.12. Define x(t) = ∫_0^t y(s) ds. (i) Prove that y(t) is a Markov process. (ii) Prove that x(t) is not a Markov process. (iii) Prove that the two-dimensional process z(t) = (x(t), y(t)) is a Markov process. □

1.2 Integration with Respect to MBM. The Itô Integral

A stochastic process f(t, ω) is adapted to a Brownian motion w(t, ω) if it is independent of the increments of the Brownian motion w(t, ω) "in the future," that is, if f(t, ω) is independent of w(t + s, ω) − w(t, ω) for all s > 0. For example, if f(x) is an integrable deterministic function, then the functions f(w(t, ω)) and ∫_0^t f(w(s, ω)) ds are adapted. We denote by H2[0, T] the class of adapted stochastic processes f(t, ω) on an interval [0, T] such that ∫_0^T E f²(s, ω) ds < ∞. Integration with respect to white noise is defined in this class of stochastic processes. The Itô integral of a function f(t, ω) ∈ H2[0, T] is defined by the sums over partitions 0 = t0 < t1 < · · · < t_n = t,

σ_n(t, ω) = Σ_i f(t_{i−1}, ω)[w(t_i, ω) − w(t_{i−1}, ω)].

Note that the increment Δ_i w = w(t_i, ω) − w(t_{i−1}, ω) is independent of f(t_{i−1}, ω), because f(t, ω) is adapted. It can be shown (see Schuss 2010b) that for every sequence of partitions of the interval such that max_i (t_i − t_{i−1}) → 0, the sequence {σ_n(t, ω)} converges in probability to the same limit, denoted by

(I) ∫_0^t f(s, ω) dw(s, ω) = Pr-lim_{max_i (t_i − t_{i−1}) → 0} σ_n(t, ω),   (1.25)

and called the Itô integral of f(t, ω). It can also be shown that the convergence in (1.25) in the mean square sense is also uniform in t with probability one, that is, on almost every trajectory of the Brownian motion w(t, ω). The Itô integral is also

an adapted stochastic process in Ω. It takes different values on different realizations ω of the Brownian trajectories. If f(t) is an integrable deterministic function, then the Itô integral is a zero-mean Gaussian random variable with variance ∫_0^t f²(s) ds, which is written as

∫_0^t f(s) dw(s) ∼ N(0, ∫_0^t f²(s) ds).   (1.26)

More generally, for f, g ∈ H2[0, T],

E[∫_0^T f(s) dw(s)]² = ∫_0^T E f²(s) ds   (1.27)

and

E[∫_0^T f(s) dw(s) ∫_0^T g(s) dw(s)] = ∫_0^T E[f(s)g(s)] ds.   (1.28)

Property (1.26) follows from the construction of the Itô integral and the independence of f(t) of the increments of the MBM w(t″) − w(t′) for all t ≤ t′ ≤ t″. It is easy to see that properties (1.27) and (1.28) are equivalent.

Exercise 1.13 (Integral of w(t, ω)). Show that

∫_0^t w(s, ω) dw(s, ω) = (1/2) w²(t, ω) − (1/2) t.   □

If the midpoint is chosen in the integral sum, the resulting limit is called the Stratonovich integral (see Schuss 2010b).
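The difference between the left-endpoint (Itô) evaluation and a Stratonovich-type evaluation shows up already in this example. The sketch below (illustrative; the step size and seed are arbitrary, NumPy is assumed, and it uses the average of the two endpoint values of w, one of the sums equivalent to the midpoint rule for this integrand) computes both discretized integrals of w along the same sampled path: the Itô sum approaches w²(T)/2 − T/2, while the Stratonovich-type sum telescopes to exactly w²(T)/2.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 200_000
dt = T / n
dw = np.sqrt(dt) * rng.standard_normal(n)        # Brownian increments
w = np.concatenate(([0.0], np.cumsum(dw)))       # w[k] = w(k * dt), w[0] = 0

# Left-endpoint (Ito) sum and endpoint-average (Stratonovich-type) sum for the integrand w.
ito_sum = np.sum(w[:-1] * dw)
strat_sum = np.sum(0.5 * (w[:-1] + w[1:]) * dw)  # telescopes exactly to w(T)^2 / 2

print("Ito sum              :", ito_sum)
print("  w(T)^2/2 - T/2     :", w[-1] ** 2 / 2.0 - T / 2.0)
print("Stratonovich-type sum:", strat_sum)
print("  w(T)^2/2           :", w[-1] ** 2 / 2.0)
```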

Exercise 1.15 (Another Stratonovich sum). Use instead of (1.30) the sums ... □

Exercise 1.16 (The Wong–Zakai correction). Show that if f(x, t) has a continuous derivative of second order, then ... □

1.2.1 Stochastic Differentials

A process of the form

x(t) = x(0) + ∫_0^t a(s) ds + ∫_0^t b(s) dw(s)

is said to have a stochastic differential, which we abbreviate as dx(t) = a(t) dt + b(t) dw(t). The differential (1.36) does not satisfy the usual rule dx² = 2x dx.

Example 1.2 (The differential of f(t)w(t)). If f(t) is a smooth deterministic function, then integration by parts is possible, so that

d[f(t) w(t)] = f′(t) w(t) dt + f(t) dw(t),

as in the classical calculus. In this case, a(t) = f′(t) w(t) and b(t) = f(t). □

1.2.2 The Chain Rule and Itô's Formula

The essence of the differentiation rules is captured in the chain rule for differentiating composite functions. Consider n Itô-differentiable processes dx_i = a_i dt + Σ_{j=1}^m b_{ij} dw_j for i = 1, 2, . . . , n, where a_i, b_{ij} ∈ H2[0, T] for i = 1, 2, . . . , n, j = 1, 2, . . . , m, and w_j are independent Brownian motions. We consider a function f(x1, x2, . . . , x_n, t) that has continuous partial derivatives of second order with respect to x1, x2, . . . , x_n and a continuous partial derivative with respect to t. For an n-dimensional process x(t) that is differentiable in the ordinary sense, the classical chain rule applies; for Itô processes it acquires a second-order correction, as follows.

Theorem 1.2.1 (Itô's formula).

df(x1, . . . , x_n, t) = [∂f/∂t + Σ_{i=1}^n a_i ∂f/∂x_i + (1/2) Σ_{i,k=1}^n Σ_{j=1}^m b_{ij} b_{kj} ∂²f/∂x_i ∂x_k] dt + Σ_{i=1}^n Σ_{j=1}^m b_{ij} (∂f/∂x_i) dw_j.   (1.38)

Exercise 1.17 (Itô's formula in 1-D). Specialize Itô's formula (1.38) to the one-dimensional case: for a process x(t) with differential dx = a(t) dt + b(t) dw, where a(t), b(t) ∈ H2[0, T], and a twice continuously differentiable function f(x, t), show that

df(x(t), t) = [f_t(x(t), t) + a(t) f_x(x(t), t) + (1/2) b²(t) f_xx(x(t), t)] dt + b(t) f_x(x(t), t) dw(t).   □

Exercise 1.18 (Itô's formula as the chain rule).

(i) Apply Itô's formula (1.38) to the function f(x1, x2) = x1 x2 and obtain the rule for differentiating a product.

(ii) Apply Itô's one-dimensional formula of Exercise 1.17 to the function f(x) = e^x. Obtain a differential equation for the function y(t) = e^{αw(t)}.


(iii) Use the transformation y = log x to solve the linear stochastic differential equation

dx(t) = a x(t) dt + b x(t) dw(t),   x(0) = x0.   (1.42)   □

Exercise 1.19 (Applications to moments).

(i) Use the one-dimensional Itô formula to prove

E e^{w(t)} = 1 + (1/2) ∫_0^t E e^{w(s)} ds = e^{t/2}.

(ii) Calculate the first and the second moments of e^{a w(t)}, e^{i w(t)}, sin a w(t), cos a w(t), where a is a real constant. □
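Part (i) is easy to corroborate by direct Monte Carlo sampling. The following sketch (an illustration, not from the text; the sample size and seed are arbitrary, and NumPy is assumed) estimates E e^{w(t)} from independent draws of w(t) ∼ N(0, t) and compares the estimate with e^{t/2}.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1_000_000
for t in (0.25, 1.0, 2.0):
    w_t = np.sqrt(t) * rng.standard_normal(n_samples)   # w(t) ~ N(0, t)
    estimate = np.exp(w_t).mean()
    print(f"t = {t:4.2f}:  E exp(w(t)) ~ {estimate:.4f},  exp(t/2) = {np.exp(t / 2.0):.4f}")
```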

Exercise 1.20 (Rotation of white noise). Given two independent Brownian motions w1(t), w2(t) and a process x(t) ∈ H2[0, T], define the processes u1(t), u2(t) by their differentials

du1(t) = − sin x(t) dw1(t) + cos x(t) dw2(t),
du2(t) = cos x(t) dw1(t) + sin x(t) dw2(t).

Show that u1(t) and u2(t) are independent Brownian motions. □

1.3 Stochastic Differential Equations

1.3.1 The Langevin Equation

In the wake of Einstein's 1905 theory (Einstein 1956) of the irregular motion of small particles immersed in a viscous fluid, as observed by the botanist Brown in 1827, Langevin (1908) proposed a more sophisticated model. The motion of a particle of mass m and radius a in a force field F(x, t), immersed in a fluid with viscosity coefficient η, is given by Langevin's equation (1.43), where Ξ = (Ξ1, Ξ2, Ξ3)^T is a vector of independent identically distributed δ-correlated Gaussian white noises. More specifically,

E Ξ_i(s1) Ξ_j(s2) = (2γ k_B T / m) δ_{i,j} δ(s1 − s2),   (1.44)

and Γ = 6πaη is the friction coefficient of a diffusing particle, according to Stokes's formula for the drag force on a sphere in a viscous laminar flow. The dynamical friction coefficient (per unit mass) is denoted by γ = Γ/m. If the force F(x, t) can be derived from a potential, F(x, t) = −∇U(x, t), Langevin's equation (1.43) takes the form m ẍ + Γ ẋ + ∇U(x, t) = Ξ. The Langevin equation can be interpreted as the system of integral equations

x(t) = x(0) + ∫_0^t v(s) ds,   (1.45)

v(t) = v(0) − γ ∫_0^t v(s) ds − (1/m) ∫_0^t ∇U(x(s), s) ds + √(2γ k_B T / m) w(t),   (1.46)

where k_B is Boltzmann's constant and T is absolute temperature [this is the expression of Einstein's fluctuation–dissipation principle (Schuss 2010b)]. If γ = γ(x) is displacement-dependent, then the integral equation (1.45) can be understood as the Itô system

x(t) = x(0) + ∫_0^t v(s) ds,   (1.47)

v(t) = v(0) − ∫_0^t γ(x(s)) v(s) ds − (1/m) ∫_0^t ∇U(x(s), s) ds + ∫_0^t √(2γ(x(s)) k_B T / m) dw(s).   (1.48)

Exercise 1.21 (Maxwellian distribution of velocities). Solve the Langevin equation for the case of constant friction and a free Brownian motion (i.e., F(x, t) = 0) and prove that the transition pdf p(v, t | v0) converges to the Maxwellian

lim_{t→∞} p(v, t | v0) = (m / 2π k_B T)^{3/2} exp(−m|v|² / 2k_B T).   (1.49)   □
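The convergence to the Maxwellian can also be watched in a simulation. The sketch below (a minimal illustration, not the book's scheme; the parameter values, the unit mass, and the explicit Euler discretization are assumptions) integrates the one-dimensional free Langevin equation dv = −γ v dt + √(2γ k_B T/m) dw for an ensemble of particles and compares the long-time sample moments of the velocity with those of the Maxwellian, which in one dimension has mean 0 and variance k_B T/m.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 5.0            # dynamical friction coefficient (per unit mass)
kT_over_m = 1.0        # k_B * T / m in the chosen units
dt, n_steps, n_particles = 1e-3, 10_000, 10_000

v = np.zeros(n_particles)                       # start the whole ensemble at rest
noise_amp = np.sqrt(2.0 * gamma * kT_over_m * dt)
for _ in range(n_steps):
    # Euler step of dv = -gamma * v dt + sqrt(2 * gamma * k_B * T / m) dw
    v += -gamma * v * dt + noise_amp * rng.standard_normal(n_particles)

# The Maxwellian (1.49) in one dimension has mean 0 and variance k_B * T / m.
print("sample mean of v     :", v.mean())
print("sample variance of v :", v.var(), " (Maxwellian value:", kT_over_m, ")")
```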

Exercise 1.22 (Autocorrelation of the velocity process). The velocity process of a free Brownian particle, defined in (1.46) and (1.48) for the case U(x) = 0, is called the Ornstein–Uhlenbeck process (OU) or colored noise.

(i) Calculate the autocorrelation function of the one-dimensional OU process

R(t1, t2) = ⟨v(t1) v(t2)⟩,

and for constant t = t2 − t1, find the limit

R(t) = lim_{t1→∞} R(t1, t1 + t).

(ii) Prove that for every t1 > 0,

lim_{γ→∞} ∫_0^∞ f(t2) (γ/m²) e^{−γ|t2−t1|} dt2 = (2/m²) f(t1)   (1.50)

for all test functions f(t) in R+.

(iii) Prove that under the conditions of Exercise 1.21, ...

Exercise 1.24 (The conditional moments of the displacement). Calculate the first and second conditional moments of the displacement, E[x(t) | x0, v0] and E[|x(t) − x0|² | x0, v0], given the initial conditions x(0) = x0, v(0) = v0. □

Exercise 1.25 (The unconditional variance).

(i) Use the Maxwell distribution of velocities (1.49) to prove that the unconditional second moment is ... k_B T/(3πaη) ... This result was verified experimentally by Perrin (1908). The one-dimensional diffusion coefficient is therefore given by D = k_B T/(6πaη).

(iii) Use (1.53) to show that the short-time asymptotics of E|x(t) − x0|² are ... □

Exercise 1.26 (The joint pdf of displacement and velocity). Prove that the joint pdf of displacement and velocity for a one-dimensional free Brownian motion in a constant external field V′(x) = g is as given by Chandrasekhar (1943): ... □

Exercise 1.27 (Reconcile the Einstein and Langevin approaches). Show that for two disjoint time intervals (t1, t2) and (t3, t4), in the limit γ → ∞, the increments of the displacement of the free Brownian motion, Δ1x = x(t2) − x(t1) and Δ3x = x(t4) − x(t3), are independent zero-mean Gaussian variables with variances proportional to the time increments. Show that the increments Δ1x and Δ3x are zero-mean Gaussian variables and use (1.54) to show that property (3) of MBM is satisfied (Schuss 2010b). □


1.3.2 Itô Stochastic Differential Equations

Dynamics driven by white noise, often written as

ẋ = a(x, t) + B(x, t) ẇ,   x(0) = x0,   (1.60)

are usually understood as the integral equation

x(t) = x(0) + ∫_0^t a(x(s), s) ds + ∫_0^t B(x(s), s) dw(s),   (1.61)

where a(x, t) and B(x, t) are deterministic coefficients, which can be interpreted in several different ways, depending on the interpretation of the stochastic integral in (1.61) as Itô, Stratonovich, or otherwise. Different interpretations lead to very different solutions and to qualitative differences in the behavior of the solution. For example, a noisy dynamical system of the form (1.60) may be stable if the Itô integral is used in (1.61), but unstable if the Stratonovich or the backward integral is used instead. Different interpretations lead to different numerical schemes for the computer simulation of the equation. A different approach, based on path integrals, is given in Chap. 2.

In modeling stochastic dynamics with equations of the form (1.61), a key question arises: which of the possible interpretations is the right one to use? This question is particularly relevant if the noise is state-dependent, that is, if the coefficients B(x, t) depend on x. This situation is encountered in many different applications, e.g., when the friction coefficient or the temperature in Langevin's equation is not constant. The answer to this question depends on the origin of the noise. Uncorrelated white noise (or nondifferentiable MBM) is an idealization of a physical process that may have finite, though short, correlation time (or differentiable trajectories). This is illustrated in (1.51), where the correlated velocity process of a free Brownian particle becomes white noise in the limit of large friction, or in Exercise 1.27, where the displacement process becomes an MBM in that limit. The white-noise approximation may originate in a model with discontinuous paths in the limit of small or large frequent jumps, and so on. Thus, the choice of the integral in (1.61) is not arbitrary, but rather derives from the underlying more microscopic model and from the passage to the white-noise limit. In certain situations, this procedure leads to an Itô interpretation and in others to a Stratonovich interpretation. The limiting procedures are described in Chap. 3.

1.3.3 SDEs of Itô Type

First, we consider the one-dimensional version of equation (1.60) and interpret it in the Itô sense as the output of an Euler numerical scheme of the form

x_E(t + Δt) = x_E(t) + a(x_E(t), t) Δt + b(x_E(t), t) Δw(t)   (1.62)

in the limit Δt → 0. To each realization of an MBM w(t) = w(t, ω) constructed numerically, e.g., by any of the methods mentioned in Sect. 1.1.2, equation (1.62) assigns a realization x_E(t) = x_E(t, ω) of the solution at grid points. Because Δw(t) = w(t + Δt) − w(t) is a Gaussian random variable, the right-hand side of (1.62) can assume any value in R, so that x_E(t) can assume any value at every time t. This implies that a(x, t) and b(x, t) have to be defined for all x ∈ R. If for each x ∈ R, the random coefficients a(x, t) and b(x, t) are adapted processes, say of class H2[0, T] for all T > 0, then the output process x_E(t) is also an adapted process.

The output process at grid times t_j = jΔt, given by (1.62), consists of two sums: as Δt → 0, one becomes the integral ∫_0^t a(x(s), s) ds and the other the Itô integral ∫_0^t b(x(s), s) dw(s), where every t is a limit of grid points, t = lim_{Δt→0} t_j, and

x(t) = lim_{Δt→0} x(t_j),

if the limit exists in some sense.

If the coefficients a(x, t) and b(x, t) are adapted processes (of class H2[0, T] for all T > 0), equation (1.60) is written in the Itô form

dx(t) = a(x(t), t) dt + b(x(t), t) dw(t),   x(0) = x0,   (1.64)

which is interpreted as the integral equation

x(t) = x0 + ∫_0^t a(x(s), s) ds + ∫_0^t b(x(s), s) dw(s).   (1.65)

The initial condition x0 is assumed independent of w(t).
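As a concrete, runnable instance of the Euler scheme (1.62) (an illustration only; the coefficients, step size, and seed below are arbitrary choices, and NumPy is assumed), the following sketch integrates the linear equation dx = a x dt + b x dw of (1.42) along one sampled Brownian path and compares the result with the exact solution x(t) = x0 exp((a − b²/2)t + b w(t)) obtained via the transformation y = log x of Exercise 1.18(iii).

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, x0 = 1.0, 0.5, 1.0        # drift and noise coefficients of (1.42), initial value
T, n = 1.0, 100_000
dt = T / n
dw = np.sqrt(dt) * rng.standard_normal(n)       # increments of one Brownian path

# Euler scheme (1.62): x_E(t + dt) = x_E(t) + a(x_E, t) dt + b(x_E, t) dw(t).
x = x0
for k in range(n):
    x += a * x * dt + b * x * dw[k]

w_T = dw.sum()                                            # w(T) along the same path
exact = x0 * np.exp((a - 0.5 * b ** 2) * T + b * w_T)     # solution via y = log x
print("Euler approximation :", x)
print("exact solution      :", exact)
```

Refining the time step shrinks the gap between the two values, which is one way to see the convergence of the scheme discussed above.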

There are several different definitions of a solution to the stochastic differential equation (1.64), including strong, weak, a solution to the martingale problem, path integral interpretation (see Chap. 2), and so on. Similarly, there are several different notions of uniqueness, including uniqueness in the strong sense, pathwise uniqueness, and uniqueness in probability law. For the definitions and relationship between the different definitions, see Liptser and Shiryayev (1977) and Karatzas and Shreve (1991). We consider here only strong solutions (abbreviated as solutions) of (1.64).

Definition 1.3.1 (Solution of an SDE). A stochastic process x(t) is a solution of the initial value problem (1.64) in the Itô sense if x(t) ∈ H2[0, T] for all T > 0 and equation (1.65) holds for almost all ω ∈ Ω.

We assume that the coefficients a(x, t) and b(x, t) satisfy the uniform Lipschitz condition, that is, there exists a constant K such that

|a(x, t) − a(y, t)| + |b(x, t) − b(y, t)| ≤ K|x − y|   (1.66)

for all x, y ∈ R, t ≥ 0, and ω ∈ Ω.
