#include <iostream>
#include <fstream>
#include <iomanip>
#include "lib.h"
using namespace std;
// Function to read in data from screen, note call by reference
void initialise(int&, int&, double&);
// The Mc sampling for random walks
void mc_sampling(int, int, double, int *, int *);
// prints to screen the results of the calculations
void output(int, int, int *, int *);

int main()
{
  int max_trials, number_walks;
  double move_probability;
  // Read in data
  initialise(max_trials, number_walks, move_probability);
  int *walk_cumulative = new int[number_walks + 1];
  int *walk2_cumulative = new int[number_walks + 1];
  for (int walks = 1; walks <= number_walks; walks++){
    walk_cumulative[walks] = walk2_cumulative[walks] = 0;
  } // end initialization of vectors
  // Do the mc sampling
  mc_sampling(max_trials, number_walks, move_probability,
              walk_cumulative, walk2_cumulative);
  // Print out results
  output(max_trials, number_walks, walk_cumulative,
         walk2_cumulative);
  delete [] walk_cumulative; // free memory
  delete [] walk2_cumulative;
  return 0;
} // end main function
The input and output functions are
void initialise(int& max_trials, int& number_walks,
                double& move_probability)
{
  cout << "Number of Monte Carlo trials = ";
  cin >> max_trials;
  cout << "Number of attempted walks = ";
  cin >> number_walks;
  cout << "Move probability = ";
  cin >> move_probability;
} // end of function initialise
void output(int max_trials, int number_walks,
            int *walk_cumulative, int *walk2_cumulative)
{
  ofstream ofile("testwalkers.dat");
  for (int i = 1; i <= number_walks; i++){
    double xaverage = walk_cumulative[i] / ((double) max_trials);
    double x2average = walk2_cumulative[i] / ((double) max_trials);
    double variance = x2average - xaverage*xaverage;
    ofile << setiosflags(ios::showpoint | ios::uppercase);
    ofile << setw(6) << i;
    ofile << setw(15) << setprecision(8) << xaverage;
    ofile << setw(15) << setprecision(8) << variance << endl;
  }
  ofile.close();
} // end of function output
The algorithm is in the function mc_sampling and tests the probability of moving to the left or to the right by generating a random number.
void mc_sampling(int max_trials, int number_walks,
                 double move_probability, int *walk_cumulative,
                 int *walk2_cumulative)
{
  long idum;
  idum = -1; // initialise random number generator
  for (int trial = 1; trial <= max_trials; trial++){
    int position = 0;
    for (int walks = 1; walks <= number_walks; walks++){
      if (ran0(&idum) <= move_probability) {
        position += 1;
      }
      else {
        position -= 1;
      }
      walk_cumulative[walks] += position;
      walk2_cumulative[walks] += position*position;
    } // end of loop over walks
  } // end of loop over trials
} // end mc_sampling function
Figure 10.3 shows that the variance increases linearly as a function of the number of time steps, as expected from the analytic results. Similarly, the mean displacement in Fig. 10.4 oscillates around zero.
Figure 10.3: Time development of $\langle x^2(t)\rangle$ for a random walker. 100000 Monte Carlo samples were used with the function ran1 and a seed set to $-1$.

Figure 10.4: Time development of $\langle x(t)\rangle$ for a random walker. 100000 Monte Carlo samples were used with the function ran1 and a seed set to $-1$.
Exercise 10.1
Extend the above program to a two-dimensional random walk with probability $1/4$ for a move to the right, left, up or down. Compute the variance for both the $x$ and $y$ directions and the total variance.
10.3 Microscopic derivation of the diffusion equation
When solving partial differential equations such as the diffusion equation numerically, the derivatives are always discretized. Recalling our discussions from Chapter 3, we can rewrite the time derivative as
$$\frac{\partial w(x,t)}{\partial t} \approx \frac{w(i, n+1) - w(i, n)}{\Delta t},$$
whereas the gradient is approximated as
$$D\frac{\partial^2 w(x,t)}{\partial x^2} \approx D\,\frac{w(i+1, n) + w(i-1, n) - 2w(i, n)}{(\Delta x)^2},$$
resulting in the discretized diffusion equation
$$\frac{w(i, n+1) - w(i, n)}{\Delta t} = D\,\frac{w(i+1, n) + w(i-1, n) - 2w(i, n)}{(\Delta x)^2},$$
where $n$ represents a given time step and $i$ a step in the $x$-direction. We will come back to the solution of such equations in our chapter on partial differential equations, see Chapter 16. The aim here is to show that we can derive the discretized diffusion equation from a Markov process and thereby demonstrate the close connection between the important physical process diffusion and random walks. Random walks allow for an intuitive way of picturing the process of diffusion. In addition, as demonstrated in the previous section, it is easy to simulate a random walk.
10.3.1 Discretized diffusion equation and Markov chains
A Markov process allows in principle for a microscopic description of Brownian motion. As with the random walk studied in the previous section, we consider a particle which moves along the $x$-axis in the form of a series of jumps with step length $\Delta x = l$. Time and space are discretized, and the subsequent moves are statistically independent, i.e., the new move depends only on the previous step and not on the results from earlier trials. We start at a position $x = jl = j\Delta x$ and move to a new position $x = i\Delta x$ during a step $\Delta t = \varepsilon$, where $i \ge 0$ and $j \ge 0$ are integers. The original probability distribution function (PDF) of the particles is given by $w_i(t = 0)$, where $i$ refers to a specific position on the grid in Fig. 10.2, with $i = 0$ representing $x = 0$. The function $w_i(t)$ can be represented as a vector. For the Markov process we have a transition probability from a position $x = jl$ to a position $x = il$ given by
$$W_{ij}(\varepsilon) = W(il - jl, \varepsilon) = \begin{cases} \frac{1}{2} & |i - j| = 1 \\ 0 & \text{else} \end{cases} \qquad (10.27)$$
We call $W_{ij}$ the transition probability, and we can represent it, see below, as a matrix. Our new PDF $w_i(t = \varepsilon)$ is now related to the PDF at $t = 0$ through the relation
$$w_i(t = \varepsilon) = W(j \rightarrow i)\, w_j(t = 0).$$
This equation represents the discretized time-development of an original PDF. It is a microscopic way of representing the process shown in Fig. 10.1. Since both $W$ and $w$ represent probabilities, they have to be normalized, i.e., we require that at each time step we have
$$\sum_i w_i(t) = 1,$$
and
$$\sum_j W(j \rightarrow i) = 1.$$
The further constraints are $0 \le W_{ij} \le 1$ and $0 \le w_j \le 1$. Note that the probability for remaining at the same place is in general not necessarily equal to zero. In our Markov process we allow only for jumps to the left or to the right.
The time development of our initial PDF can now be represented through the action of the transition probability matrix applied $n$ times. At a time $t_n = n\varepsilon$ our initial distribution has developed into
$$w_i(t_n) = \sum_j W_{ij}(t_n)\, w_j(0),$$
and defining
$$W(il - jl, n\varepsilon) = (W^n(\varepsilon))_{ij},$$
we obtain
$$w_i(n\varepsilon) = \sum_j (W^n(\varepsilon))_{ij}\, w_j(0), \qquad (10.34)$$
or in matrix form
$$\hat{w}(n\varepsilon) = \hat{W}^n(\varepsilon)\,\hat{w}(0).$$
The matrix $\hat{W}$ can be written in terms of two matrices,
$$\hat{W} = \frac{1}{2}\left(\hat{L} + \hat{R}\right),$$
where $\hat{L}$ and $\hat{R}$ represent the transition probabilities for a jump to the left or the right, respectively. For a $4\times 4$ case we could write these matrices as
$$\hat{R} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix},$$
and
$$\hat{L} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$
However, in principle these are infinite dimensional matrices, since the number of time steps is very large or infinite. For the infinite case we can write these matrices as $R_{ij} = \delta_{i,(j+1)}$ and $L_{ij} = \delta_{(i+1),j}$, implying that
$$\hat{L}\hat{R} = \hat{R}\hat{L} = 1, \qquad (10.38)$$
and
$$\hat{L} = \hat{R}^{-1}. \qquad (10.39)$$
To see that $\hat{L}\hat{R} = \hat{R}\hat{L} = 1$, perform e.g., the matrix multiplication
$$\hat{L}\hat{R} = \sum_k \hat{L}_{ik}\hat{R}_{kj} = \sum_k \delta_{(i+1),k}\,\delta_{k,(j+1)} = \delta_{i+1,j+1} = \delta_{i,j},$$
and only the diagonal matrix elements are different from zero.
For the first time step we have thus
$$\hat{W}(\varepsilon) = \frac{1}{2}\left(\hat{L} + \hat{R}\right),$$
and using the properties in Eqs. (10.38) and (10.39) we have after two time steps
$$\hat{W}^2(2\varepsilon) = \frac{1}{4}\left(\hat{L}^2 + \hat{R}^2 + 2\hat{R}\hat{L}\right),$$
and similarly after three time steps
$$\hat{W}^3(3\varepsilon) = \frac{1}{8}\left(\hat{L}^3 + \hat{R}^3 + 3\hat{R}\hat{L}^2 + 3\hat{R}^2\hat{L}\right).$$
Using the binomial formula
$$\sum_{k=0}^{n}\binom{n}{k}\hat{a}^k\,\hat{b}^{n-k} = (\hat{a} + \hat{b})^n,$$
we have that the transition matrix after $n$ time steps can be written as
$$\hat{W}^n(n\varepsilon) = \frac{1}{2^n}\sum_{k=0}^{n}\binom{n}{k}\hat{R}^k\hat{L}^{n-k},$$
or
$$\hat{W}^n(n\varepsilon) = \frac{1}{2^n}\sum_{k=0}^{n}\binom{n}{k}\hat{L}^{n-2k} = \frac{1}{2^n}\sum_{k=0}^{n}\binom{n}{k}\hat{R}^{2k-n},$$
and using $R^m_{ij} = \delta_{i,(j+m)}$ and $L^m_{ij} = \delta_{(i+m),j}$ we arrive at
$$W(il - jl, n\varepsilon) = \begin{cases} \frac{1}{2^n}\binom{n}{\frac{1}{2}(n + i - j)} & |i - j| \le n \\ 0 & \text{else} \end{cases} \qquad (10.47)$$
and $n + i - j$ has to be an even number. We note that the transition matrix for a Markov process has three important properties:

• It depends only on the difference in space $i - j$; it is thus homogeneous in space.
• It is also isotropic in space, since it is unchanged when we go from $(i, j)$ to $(-i, -j)$.
• It is homogeneous in time, since it depends only on the difference between the initial time and the final time.
If we place the walker at $x = 0$ at $t = 0$ we can represent the initial PDF with $w_i(0) = \delta_{i,0}$. Using Eq. (10.34) we have
$$w_i(n\varepsilon) = \sum_j (W^n(\varepsilon))_{ij}\, w_j(0) = \sum_j \frac{1}{2^n}\binom{n}{\frac{1}{2}(n + i - j)}\,\delta_{j,0},$$
resulting in
$$w_i(n\varepsilon) = \frac{1}{2^n}\binom{n}{\frac{1}{2}(n + i)}, \qquad |i| \le n.$$
Using the recursion relation for the binomials
$$\binom{n+1}{\frac{1}{2}(n+1+i)} = \binom{n}{\frac{1}{2}(n+i+1)} + \binom{n}{\frac{1}{2}(n+i-1)} \qquad (10.50)$$
we obtain, defining $x = il$, $t = n\varepsilon$, and setting
$$w(x, t) = w(il, n\varepsilon) = w_i(n\varepsilon),$$
$$w(x, t + \varepsilon) = \frac{1}{2}\,w(x + l, t) + \frac{1}{2}\,w(x - l, t),$$
and adding and subtracting $w(x, t)$ and multiplying both sides with $l^2/\varepsilon$ we have
$$\frac{w(x, t + \varepsilon) - w(x, t)}{\varepsilon} = \frac{l^2}{2\varepsilon}\,\frac{w(x + l, t) - 2w(x, t) + w(x - l, t)}{l^2},$$
and identifying $D = l^2/2\varepsilon$ and letting $l = \Delta x$ and $\varepsilon = \Delta t$ we see that this is nothing but the discretized version of the diffusion equation. Taking the limits $\Delta x \rightarrow 0$ and $\Delta t \rightarrow 0$ we recover
$$\frac{\partial w(x, t)}{\partial t} = D\frac{\partial^2 w(x, t)}{\partial x^2},$$
the diffusion equation.
10.3.2 Continuous equations
Hitherto we have considered discretized versions of all equations. Our initial probability distribution function was then given by
$$w_i(0) = \delta_{i,0},$$
and its time-development after a given time step $\Delta t = \varepsilon$ is
$$w_i(t) = \sum_j W(j \rightarrow i)\, w_j(t = 0).$$
The continuous analog to $w_i(0)$ is
$$w(\mathbf{x}) \rightarrow \delta(\mathbf{x}),$$
where we now have generalized the one-dimensional position $x$ to a generic $d$-dimensional vector $\mathbf{x}$. The Kronecker $\delta$ function is replaced by the $\delta$ distribution function $\delta(\mathbf{x})$ at $t = 0$.

The transition from a state $j$ to a state $i$ is now replaced by a transition to a state with position $\mathbf{y}$ from a state with position $\mathbf{x}$. The discrete sum of transition probabilities can then be replaced by an integral, and we obtain the new distribution at a time $t + \Delta t$ as
$$w(\mathbf{y}, t + \Delta t) = \int W(\mathbf{y}, \mathbf{x}, \Delta t)\, w(\mathbf{x}, t)\, d\mathbf{x},$$
and after $m$ time steps we have
$$w(\mathbf{y}, t + m\Delta t) = \int W(\mathbf{y}, \mathbf{x}, m\Delta t)\, w(\mathbf{x}, t)\, d\mathbf{x}.$$
When equilibrium is reached we have
$$w(\mathbf{y}) = \int W(\mathbf{y}, \mathbf{x}, t)\, w(\mathbf{x})\, d\mathbf{x}.$$
We can solve the equation for $w(\mathbf{y}, t)$ by making a Fourier transform to momentum space. The PDF $w(x, t)$ is related to its Fourier transform $\tilde{w}(k, t)$ through
$$w(x, t) = \int_{-\infty}^{\infty} dk\, \exp(ikx)\,\tilde{w}(k, t), \qquad (10.58)$$
and using the definition of the $\delta$-function
$$\delta(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk\, \exp(ikx),$$
we see that
$$\tilde{w}(k, 0) = \frac{1}{2\pi}.$$
We can then use the Fourier-transformed diffusion equation
$$\frac{\partial \tilde{w}(k, t)}{\partial t} = -Dk^2\,\tilde{w}(k, t),$$
with the obvious solution
$$\tilde{w}(k, t) = \tilde{w}(k, 0)\exp\left(-Dk^2 t\right) = \frac{1}{2\pi}\exp\left(-Dk^2 t\right).$$
Using Eq. (10.58) we obtain
$$w(x, t) = \int_{-\infty}^{\infty} dk\, \exp(ikx)\,\frac{1}{2\pi}\exp\left(-Dk^2 t\right) = \frac{1}{\sqrt{4\pi Dt}}\exp\left(-x^2/4Dt\right). \qquad (10.63)$$
It is rather easy to verify by insertion that Eq. (10.63) is a solution of the diffusion equation. The solution represents the probability of finding our random walker at position $x$ at time $t$ if the initial distribution was placed at $x = 0$ at $t = 0$.
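The verification by insertion is a short calculation. Differentiating $w(x, t) = (4\pi Dt)^{-1/2}\exp(-x^2/4Dt)$ gives
$$\frac{\partial w}{\partial t} = \left(\frac{x^2}{4Dt^2} - \frac{1}{2t}\right) w(x, t), \qquad \frac{\partial w}{\partial x} = -\frac{x}{2Dt}\, w(x, t),$$
$$\frac{\partial^2 w}{\partial x^2} = \left(\frac{x^2}{4D^2t^2} - \frac{1}{2Dt}\right) w(x, t),$$
so that $D\,\partial^2 w/\partial x^2 = \partial w/\partial t$, as required.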
There is another interesting feature worth observing. The discrete transition probability $W$ itself is given by a binomial distribution, see Eq. (10.47). The results from the central limit theorem, see Sect. ??, state that the transition probability in the limit $n \rightarrow \infty$ converges to the normal distribution. It is then possible to show that
$$W(il - jl, n\varepsilon) \rightarrow W(y, x, \Delta t) = \frac{1}{\sqrt{4\pi D\Delta t}}\exp\left(-(y - x)^2/4D\Delta t\right),$$
and that it satisfies the normalization condition and is itself a solution to the diffusion equation.
10.3.3 Numerical simulation
In the two previous subsections we have given evidence that a Markov process, in the limit of infinitely many steps, actually yields the diffusion equation. It therefore links in a physically intuitive way the fundamental process of diffusion with random walks. It could therefore be of interest to visualize this connection through a numerical experiment. We saw in the previous subsection that one possible solution to the diffusion equation is given by a normal distribution. In addition, the transition rate for a given number of steps develops from a binomial distribution into a normal distribution in the limit of infinitely many steps. To achieve this we construct in addition a histogram which contains the number of times the walker was in a particular position $x$. This is given by the variable probability, which is normalized in the output function. We have omitted the initialization function, since it is identical to program1.cpp of this chapter. The array probability extends from $-$number_walks to $+$number_walks.
programs/chap10/program2.cpp

/*
   1-dim random walk program.
   A walker makes several trials, with
   a given number of walks per trial
*/
#include <iostream>
#include <fstream>
#include <iomanip>
#include "lib.h"
using namespace std;
// Function to read in data from screen, note call by reference
void initialise(int&, int&, double&);
// The Mc sampling for random walks
void mc_sampling(int, int, double, int *, int *, int *);
// prints to screen the results of the calculations
void output(int, int, int *, int *, int *);

int main()
{
  int max_trials, number_walks;
  double move_probability;
  // Read in data
  initialise(max_trials, number_walks, move_probability);
  int *walk_cumulative = new int[number_walks + 1];
  int *walk2_cumulative = new int[number_walks + 1];
  int *probability = new int[2*(number_walks + 1)];
  for (int walks = 1; walks <= number_walks; walks++){
    walk_cumulative[walks] = walk2_cumulative[walks] = 0;
  }
  for (int walks = 0; walks <= 2*number_walks; walks++){
    probability[walks] = 0;
  } // end initialization of vectors
  // Do the mc sampling
  mc_sampling(max_trials, number_walks, move_probability,
              walk_cumulative, walk2_cumulative, probability);
  // Print out results
  output(max_trials, number_walks, walk_cumulative,
         walk2_cumulative, probability);
  delete [] walk_cumulative; // free memory
  delete [] walk2_cumulative; delete [] probability;
  return 0;
} // end main function
The output function now also contains the normalization of the probability and writes it to its own file.
void output(int max_trials, int number_walks,
            int *walk_cumulative, int *walk2_cumulative,
            int *probability) {