Computational Physics - M. Jensen, Episode 2, Part 4

12.3 Simulation of molecular systems

protons. In Fig. 12.5 we show a plot of the potential energy

$$V(\mathbf{r};\mathbf{R}) = -\frac{ke^2}{|\mathbf{r}-\mathbf{R}/2|} - \frac{ke^2}{|\mathbf{r}+\mathbf{R}/2|} + \frac{ke^2}{R}.$$

Here we have fixed |R|, the two values being 2 and 8 Bohr radii, respectively. Note that in the region between |r| = -|R|/2 and |r| = |R|/2 (units are r/a_0 in this figure, with a_0 = 0.0529 nm) the electron can tunnel through the potential barrier. Recall that -R/2 and R/2 correspond to the positions of the two protons. We note also that if R is increased, the potential becomes less attractive. This has consequences for the binding energy of the molecule: the binding energy decreases as the distance R increases. Since the potential is symmetric with

Figure 12.5: Plot of V(r;R) for |R| = 0.1 and 0.4 nm. Units along the x-axis are r/a_0. The straight line is the binding energy of the hydrogen atom, ε = -13.6 eV.

respect to the interchange R → -R and r → -r, it means that the probability for the electron to move from one proton to the other must be equal in both directions. We can say that the electron shares its time between both protons.

With this caveat, we can now construct a model for simulating this molecule. Since we have only one electron, we could assume that in the limit R → ∞, i.e., when the distance between the two protons is large, the electron is essentially bound to only one of the protons. This should correspond to a hydrogen atom. As a trial wave function, we could therefore use the electronic wave function for the ground state of hydrogen, namely

$$\psi_{100}(\mathbf{r}) = \left(\frac{1}{\pi a_0^3}\right)^{1/2} e^{-r/a_0}. \qquad (12.78)$$


Since we do not know exactly where the electron is, we have to allow for the possibility that the electron can be coupled to either of the two protons. This form includes the 'cusp' condition discussed in the previous section. We then define two hydrogen wave functions

$$\psi_1(\mathbf{r};\mathbf{R}) = \left(\frac{1}{\pi a_0^3}\right)^{1/2} e^{-|\mathbf{r}-\mathbf{R}/2|/a_0} \qquad (12.79)$$

and

$$\psi_2(\mathbf{r};\mathbf{R}) = \left(\frac{1}{\pi a_0^3}\right)^{1/2} e^{-|\mathbf{r}+\mathbf{R}/2|/a_0}. \qquad (12.80)$$

Based on these two wave functions, which represent where the electron can be, we attempt the following linear combination

$$\psi_{\pm}(\mathbf{r};\mathbf{R}) = C_{\pm}\left(\psi_1(\mathbf{r};\mathbf{R}) \pm \psi_2(\mathbf{r};\mathbf{R})\right),$$

with C_± a constant.
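As a simple illustration of how such a trial wave function might be evaluated in a variational Monte Carlo code, consider the sketch below. The structure, function names and the choice of atomic units (a_0 = 1) are ours, not taken from the text, and the constant C_± is dropped since only ratios of ψ enter a Metropolis test.

```cpp
#include <cmath>

// Illustrative sketch only: evaluate the (unnormalized) H2+ trial wave
// function of eq. (12.81) in atomic units (a0 = 1), with the protons placed
// at -R/2 and +R/2 on the x-axis. Names and layout are our own choices.
struct Vec3 { double x, y, z; };

static double norm(const Vec3 &v) {
  return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// Hydrogen 1s orbital of eq. (12.78), centred at 'center', with a0 = 1.
static double psi_1s(const Vec3 &r, const Vec3 &center) {
  const double pi = 3.14159265358979323846;
  Vec3 d = { r.x - center.x, r.y - center.y, r.z - center.z };
  return std::exp(-norm(d)) / std::sqrt(pi);
}

// Bonding (+1) or antibonding (-1) combination, without the constant C.
double psi_trial(const Vec3 &r, double R, int sign) {
  Vec3 proton1 = { -0.5 * R, 0.0, 0.0 };
  Vec3 proton2 = { +0.5 * R, 0.0, 0.0 };
  return psi_1s(r, proton1) + sign * psi_1s(r, proton2);
}
```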

in preparation

12.4 Many-body systems

Liquid ⁴He is an example of a so-called extended system, with an infinite number of particles. The density of the system varies from dilute to extremely dense. It is fairly obvious that we cannot attempt a simulation with such degrees of freedom. There are however ways to circumvent this problem. The usual way of dealing with such systems, using concepts from statistical physics, consists in representing the system in a simulation cell with e.g., periodic boundary conditions, as we did for the Ising model. If the cell has length L, the density of the system is determined by putting a given number of particles N in a simulation cell with volume L³. The density then becomes ρ = N/L³.

In general, when dealing with such systems of many interacting particles, the interaction itself is not known analytically. Rather, we have to rely on parametrizations based on e.g., scattering experiments in order to determine the potential energy. The interaction between atoms and/or molecules can be either repulsive or attractive, depending on the distance R between two atoms or molecules. One can approximate this interaction as

$$V(R) = \frac{A}{R^{m}} - \frac{B}{R^{n}},$$

where m, n are some integers and A, B constants with dimension of energy times length to the appropriate power (units of e.g., eV·nm^m and eV·nm^n). The constants and the integers are determined by the constraints

that we wish to reproduce both scattering data and the binding energy of, say, a given molecule. It is thus an example of a parametrized interaction, and does not enjoy the status of being a fundamental interaction such as the Coulomb interaction.

Figure 12.6: Plot of the Van der Waals interaction between helium atoms. The equilibrium position is r_0.

A well-known parametrization is the so-called Lennard-Jones potential

$$V_{LJ}(R) = 4\varepsilon\left[\left(\frac{\sigma}{R}\right)^{12} - \left(\frac{\sigma}{R}\right)^{6}\right],$$

where ε = 8.79 × 10⁻⁴ eV and σ = 0.256 nm for helium atoms. Fig. 12.6 displays this interaction model. The interaction is both attractive and repulsive and exhibits a minimum at r_0. The reason why we have repulsion at small distances is that the electrons in two different helium atoms start repelling each other. In addition, the Pauli exclusion principle forbids two electrons from having the same set of quantum numbers.
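Translated into code, the Lennard-Jones potential with the helium parameters quoted above could look like the following sketch (the function name and unit conventions are ours):

```cpp
#include <cmath>

// Lennard-Jones potential V(R) = 4*eps*[(sigma/R)^12 - (sigma/R)^6] for
// helium, with eps and sigma as quoted in the text. R in nm, result in eV.
double lennard_jones(double R) {
  const double eps   = 8.79e-4;  // eV
  const double sigma = 0.256;    // nm
  const double s6 = std::pow(sigma / R, 6);
  return 4.0 * eps * (s6 * s6 - s6);
}
```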

Let us now assume that we have a simple trial wave function of the form

$$\psi_T = \prod_{i<j}^{N} f(r_{ij}),$$

where we assume that the correlation function f(r_ij) can be written as

$$f(r_{ij}) = e^{-\frac{1}{2}(b/r_{ij})^{n}},$$

with b being the only variational parameter. Can we fix the value of n using the 'cusp' conditions discussed in connection with the helium atom? We see from the form of the potential that it

diverges at small interparticle distances. Since the energy is finite, the kinetic energy term has to cancel this divergence at small r. Let us assume that particles i and j are very close to each other. For the sake of convenience, we set r_ij = r. At small r we require then that the divergent part of the kinetic energy, which contains

$$\frac{1}{f(r)}\nabla^{2} f(r),$$

cancels the divergence of the potential. In the limit r → 0 the leading term of the kinetic part behaves as

$$\frac{n^{2} b^{2n}}{4 r^{2n+2}},$$

and it can only cancel the 4ε(σ/r)^12 divergence of the Lennard-Jones potential if the powers match, 2n + 2 = 12, resulting in n = 5 and thus

f(r ij ) = e 1 (b=r ij )

with

T

N Y i<j e 1 2 (b=r ij )

as trial wave function We can rewrite the above equation as

T

1 2 P N i<j (b=r ij )

1 2 P N i<j u(r ij )

(12.90) with

u(r ij ) = (b=r

ij ) 5 :
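Since a Metropolis simulation only needs ratios of the trial wave function, it is convenient to evaluate ln ψ_T = -½ Σ_{i<j} (b/r_ij)^5 directly. A minimal sketch (data layout and names are our own) could be:

```cpp
#include <cmath>
#include <vector>

struct Particle { double x, y, z; };

static double distance(const Particle &a, const Particle &b) {
  const double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// ln(psi_T) = -1/2 * sum_{i<j} (b/r_ij)^5, cf. eq. (12.90) with u(r) = (b/r)^5.
double log_psi_trial(const std::vector<Particle> &p, double b) {
  double sum = 0.0;
  for (std::size_t i = 0; i < p.size(); ++i)
    for (std::size_t j = i + 1; j < p.size(); ++j)
      sum += std::pow(b / distance(p[i], p[j]), 5);
  return -0.5 * sum;
}
```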

For this variational wave function, the analytical expression for the local energy is rather simple. The tricky part comes again from the kinetic energy, given by

$$\frac{1}{\psi_T(\mathbf{R})}\nabla^{2}\psi_T(\mathbf{R}).$$

It is possible to show, after some tedious algebra, that

$$\frac{1}{\psi_T(\mathbf{R})}\nabla^{2}\psi_T(\mathbf{R}) = \sum_{k=1}^{N}\frac{1}{\psi_T(\mathbf{R})}\nabla_{k}^{2}\psi_T(\mathbf{R}) = \sum_{k=1}^{N}\left[\frac{25\,b^{10}}{4}\left|\sum_{i\neq k}\frac{\hat{\mathbf{r}}_{ik}}{r_{ik}^{6}}\right|^{2} - 10\,b^{5}\sum_{i\neq k}\frac{1}{r_{ik}^{7}}\right]. \qquad (12.92)$$

In actual calculations employing e.g., the Metropolis algorithm, all moves are recast into the chosen simulation cell with periodic boundary conditions. To carry out the Metropolis moves consistently, it has to be assumed that the correlation function has a range shorter than L/2. Then, to decide whether a move of a single particle is accepted or not, only the set of particles contained in a sphere of radius L/2 centered at the particle in question has to be considered.
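In practice these two rules are usually implemented by wrapping all coordinates back into the cell and by using the minimum-image distance between particles, discarding pairs separated by more than L/2. The sketch below (for a cubic cell of side L; function names are our own) shows one common way of doing this:

```cpp
#include <cmath>

// Wrap a coordinate back into the simulation cell [0, L).
double pbc_wrap(double x, double L) {
  return x - L * std::floor(x / L);
}

// Minimum-image separation along one axis: distance to the closest periodic copy.
double min_image(double dx, double L) {
  return dx - L * std::round(dx / L);
}

// Pair distance with the minimum-image convention. Only pairs with
// r < L/2 should contribute to the correlation function, as required for
// consistent Metropolis moves in a periodic cell.
double pair_distance(const double a[3], const double b[3], double L) {
  double r2 = 0.0;
  for (int k = 0; k < 3; ++k) {
    const double d = min_image(a[k] - b[k], L);
    r2 += d * d;
  }
  return std::sqrt(r2);
}
```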

in preparation


in preparation

in preparation


Chapter 13

Eigensystems

13.1 Introduction

In this chapter we discuss methods which are useful in solving eigenvalue problems in physics.

13.2 Eigenvalue problems

Let us consider the matrix A of dimension n. The eigenvalues of A are defined through the matrix equation

$$\mathbf{A}\mathbf{x}^{(\nu)} = \lambda^{(\nu)}\mathbf{x}^{(\nu)}, \qquad (13.1)$$

where λ^(ν) are the eigenvalues and x^(ν) the corresponding eigenvectors. This is equivalent to a set of n equations with n unknowns x_i:

$$\begin{aligned}
a_{11}x_{1} + a_{12}x_{2} + \cdots + a_{1n}x_{n} &= \lambda x_{1},\\
a_{21}x_{1} + a_{22}x_{2} + \cdots + a_{2n}x_{n} &= \lambda x_{2},\\
&\;\;\vdots\\
a_{n1}x_{1} + a_{n2}x_{2} + \cdots + a_{nn}x_{n} &= \lambda x_{n}.
\end{aligned}$$

We can rewrite eq. (13.1) as

$$\left(\mathbf{A} - \lambda^{(\nu)}\mathbf{I}\right)\mathbf{x}^{(\nu)} = 0,$$

with I being the unity matrix. This equation provides a solution to the problem if and only if the determinant is zero, namely

$$\det\left(\mathbf{A} - \lambda^{(\nu)}\mathbf{I}\right) = 0,$$

which in turn means that the determinant is a polynomial of degree n in λ, and in general we will have n distinct zeros, viz.,

$$P_{n}(\lambda) = \prod_{i=1}^{n}\left(\lambda_{i} - \lambda\right).$$
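As a simple illustration (our own example, not from the text), take a 2 × 2 symmetric matrix:

$$\mathbf{A} = \begin{pmatrix} 2 & 1\\ 1 & 2 \end{pmatrix}, \qquad \det(\mathbf{A} - \lambda\mathbf{I}) = (2-\lambda)^{2} - 1 = (\lambda - 1)(\lambda - 3),$$

so the characteristic polynomial has the two zeros λ_1 = 1 and λ_2 = 3, which are the eigenvalues of A.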


Procedures based on these ideas can be used if only a small fraction of all eigenvalues and eigenvectors are required, but the standard approach to solve eq. (13.1) is to perform a given number of similarity transformations so as to render the original matrix A either 1) in diagonal form or 2) as a tri-diagonal matrix, which can then be diagonalized by computationally very effective procedures.

The first method leads us to e.g., Jacobi's method, whereas the second one is e.g., given by Householder's algorithm for tri-diagonal transformations. We will discuss both methods below.

In the present discussion we assume that our matrix is real and symmetric, although it is rather straightforward to extend the treatment to the case of a hermitian matrix. The matrix A has n eigenvalues λ_1, ..., λ_n (distinct or not). Let D be the diagonal matrix with the eigenvalues on the diagonal,

$$\mathbf{D} = \begin{pmatrix}
\lambda_{1} & 0 & 0 & \cdots & 0 & 0\\
0 & \lambda_{2} & 0 & \cdots & 0 & 0\\
0 & 0 & \lambda_{3} & \cdots & 0 & 0\\
\vdots & & & \ddots & & \vdots\\
0 & 0 & 0 & \cdots & \lambda_{n-1} & 0\\
0 & 0 & 0 & \cdots & 0 & \lambda_{n}
\end{pmatrix}.$$

The algorithm behind all current methods for obtaining eigenvalues is to perform a series of similarity transformations on the original matrix A to reduce it either into diagonal form as above or into tri-diagonal form.

We say that a matrix B is a similarity transform of A if

$$\mathbf{B} = \mathbf{S}^{T}\mathbf{A}\mathbf{S}, \qquad \text{where} \qquad \mathbf{S}^{T}\mathbf{S} = \mathbf{S}^{-1}\mathbf{S} = \mathbf{I}.$$

The importance of a similarity transformation lies in the fact that the resulting matrix has the same eigenvalues, but the eigenvectors are in general different. To prove this, suppose that

$$\mathbf{A}\mathbf{x} = \lambda\mathbf{x} \qquad \text{and} \qquad \mathbf{B} = \mathbf{S}^{T}\mathbf{A}\mathbf{S}.$$

Multiply the first equation on the left by S^T and insert S^T S = I between A and x. Then we get

$$\left(\mathbf{S}^{T}\mathbf{A}\mathbf{S}\right)\left(\mathbf{S}^{T}\mathbf{x}\right) = \lambda\mathbf{S}^{T}\mathbf{x},$$

which is the same as

$$\mathbf{B}\left(\mathbf{S}^{T}\mathbf{x}\right) = \lambda\left(\mathbf{S}^{T}\mathbf{x}\right).$$

Thus λ is an eigenvalue of B as well, but with eigenvector S^T x.

Now the basic philosophy is to

- either apply subsequent similarity transformations so that

$$\mathbf{S}_{N}^{T}\cdots\mathbf{S}_{1}^{T}\mathbf{A}\mathbf{S}_{1}\cdots\mathbf{S}_{N} = \mathbf{D},$$

- or apply subsequent similarity transformations so that A becomes tri-diagonal. Thereafter, techniques for obtaining eigenvalues from tri-diagonal matrices can be used.

Let us look at the first method, better known as Jacobi's method.


Consider an (n × n) orthogonal transformation matrix

$$\mathbf{Q} = \begin{pmatrix}
1 & & & & & \\
 & \ddots & & & & \\
 & & \cos\theta & \cdots & \sin\theta & \\
 & & \vdots & \ddots & \vdots & \\
 & & -\sin\theta & \cdots & \cos\theta & \\
 & & & & & 1
\end{pmatrix}, \qquad (13.8)$$

with the property Q^T = Q^{-1}. It performs a plane rotation through an angle θ in the Euclidean n-dimensional space, in the plane spanned by the coordinates k and l. Its matrix elements different from zero are given by

$$q_{kk} = q_{ll} = \cos\theta, \qquad q_{kl} = -q_{lk} = \sin\theta, \qquad q_{ii} = 1, \quad i \neq k,\; i \neq l. \qquad (13.9)$$

A similarity transformation

$$\mathbf{B} = \mathbf{Q}^{T}\mathbf{A}\mathbf{Q}$$

results in

$$\begin{aligned}
b_{ik} &= a_{ik}\cos\theta - a_{il}\sin\theta, \qquad i \neq k,\; i \neq l,\\
b_{il} &= a_{il}\cos\theta + a_{ik}\sin\theta, \qquad i \neq k,\; i \neq l,\\
b_{kk} &= a_{kk}\cos^{2}\theta - 2a_{kl}\cos\theta\sin\theta + a_{ll}\sin^{2}\theta,\\
b_{ll} &= a_{ll}\cos^{2}\theta + 2a_{kl}\cos\theta\sin\theta + a_{kk}\sin^{2}\theta,\\
b_{kl} &= \left(a_{kk} - a_{ll}\right)\cos\theta\sin\theta + a_{kl}\left(\cos^{2}\theta - \sin^{2}\theta\right).
\end{aligned}$$

The angle θ is arbitrary. The recipe is now to choose θ so that the non-diagonal matrix elements b_kl = b_lk become zero, which gives

$$\tan 2\theta = \frac{2a_{kl}}{a_{ll} - a_{kk}}.$$

If the denominator is zero, we can choose θ = π/4. Having defined θ through z = tan 2θ, we do not need to evaluate the other trigonometric functions explicitly; we can simply use relations like e.g.,

$$\cos^{2}\theta = \frac{1}{2}\left(1 + \frac{1}{\sqrt{1+z^{2}}}\right)$$

and

$$\sin^{2}\theta = \frac{1}{2}\left(1 - \frac{1}{\sqrt{1+z^{2}}}\right).$$

The algorithm is then quite simple. We perform a number of iterations until the sum of the squared non-diagonal matrix elements is less than a prefixed tolerance (ideally zero). The algorithm is more or less foolproof for all real symmetric matrices, but becomes much slower than methods based on tri-diagonalization for large matrices. We therefore do not recommend this method for large-scale problems. The philosophy, however, of performing a series of similarity transformations pertains to all current methods for matrix diagonalization.
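A bare-bones sketch of one cyclic Jacobi sweep built on the rotation formulas above is shown below (this is our simplified illustration, not the book's production routine; the matrix is stored as a vector of rows and overwritten in place):

```cpp
#include <cmath>
#include <vector>

// One cyclic Jacobi sweep over a real symmetric matrix a (n x n).
// Each rotation zeroes the pair a_kl, a_lk using the angle defined by
// tan(2*theta) = 2*a_kl / (a_ll - a_kk); repeated sweeps drive the sum of
// squared off-diagonal elements towards zero.
void jacobi_sweep(std::vector<std::vector<double>> &a) {
  const std::size_t n = a.size();
  for (std::size_t k = 0; k < n; ++k) {
    for (std::size_t l = k + 1; l < n; ++l) {
      if (a[k][l] == 0.0) continue;
      const double theta = 0.5 * std::atan2(2.0 * a[k][l], a[l][l] - a[k][k]);
      const double c = std::cos(theta), s = std::sin(theta);
      for (std::size_t i = 0; i < n; ++i) {      // update rows/columns k and l
        if (i == k || i == l) continue;
        const double aik = a[i][k], ail = a[i][l];
        a[i][k] = a[k][i] = c * aik - s * ail;
        a[i][l] = a[l][i] = c * ail + s * aik;
      }
      const double akk = a[k][k], all = a[l][l], akl = a[k][l];
      a[k][k] = c * c * akk - 2.0 * s * c * akl + s * s * all;
      a[l][l] = s * s * akk + 2.0 * s * c * akl + c * c * all;
      a[k][l] = a[l][k] = 0.0;                   // zero by the choice of theta
    }
  }
}
```

A driver would call jacobi_sweep() repeatedly until the sum of the squared off-diagonal elements falls below the chosen tolerance.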


For the second approach, based on Householder's algorithm, the diagonalization is performed in two steps: first, the matrix is transformed into tri-diagonal form by the Householder similarity transformation; second, the tri-diagonal matrix is then diagonalized. The reason for this two-step process is that diagonalizing a tri-diagonal matrix is computationally much faster than the corresponding tri-diagonalization of a general symmetric matrix. Let us discuss the two steps in more detail.

Householder's method for tri-diagonalization

The first step consists in finding an orthogonal matrix Q which is the product of (n-2) orthogonal matrices,

$$\mathbf{Q} = \mathbf{Q}_{1}\mathbf{Q}_{2}\cdots\mathbf{Q}_{n-2},$$

each of which successively transforms one row and one column of A into the required tri-diagonal form. Only n-2 transformations are required, since the last two elements are already in tri-diagonal form. In order to determine each Q_i, let us see what happens after the first multiplication, namely

$$\mathbf{Q}_{1}^{T}\mathbf{A}\mathbf{Q}_{1} = \begin{pmatrix}
a_{11} & e_{1} & 0 & \cdots & 0\\
e_{1} & a'_{22} & a'_{23} & \cdots & a'_{2n}\\
0 & a'_{32} & a'_{33} & \cdots & a'_{3n}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & a'_{n2} & a'_{n3} & \cdots & a'_{nn}
\end{pmatrix}, \qquad (13.16)$$

where the primed quantities represent a matrix A' of dimension n-1 which will subsequently be transformed by Q_2. The factor e_1 is a possibly non-vanishing element. The next transformation, produced by Q_2, has the same effect as Q_1 but acts only on the submatrix A':

$$\left(\mathbf{Q}_{1}\mathbf{Q}_{2}\right)^{T}\mathbf{A}\,\mathbf{Q}_{1}\mathbf{Q}_{2} = \begin{pmatrix}
a_{11} & e_{1} & 0 & 0 & \cdots & 0\\
e_{1} & a'_{22} & e_{2} & 0 & \cdots & 0\\
0 & e_{2} & a''_{33} & a''_{34} & \cdots & a''_{3n}\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & a''_{n3} & a''_{n4} & \cdots & a''_{nn}
\end{pmatrix}. \qquad (13.17)$$

Note that the effective size of the matrix on which we apply the transformation reduces at every new step. In the Jacobi method discussed above, each similarity transformation is instead performed on the full size of the original matrix.

After a series of such transformations, we end up with a set of diagonal matrix elements

$$a_{11},\; a'_{22},\; a''_{33},\; \dots,\; a^{(n-1)}_{nn},$$

and off-diagonal matrix elements

$$e_{1},\; e_{2},\; e_{3},\; \dots,\; e_{n-1}. \qquad (13.19)$$


The resulting matrix reads

$$\mathbf{Q}^{T}\mathbf{A}\mathbf{Q} = \begin{pmatrix}
a_{11} & e_{1} & 0 & 0 & \cdots & 0\\
e_{1} & a'_{22} & e_{2} & 0 & \cdots & 0\\
0 & e_{2} & a''_{33} & e_{3} & \cdots & 0\\
\vdots & & \ddots & \ddots & \ddots & \vdots\\
0 & \cdots & 0 & e_{n-2} & a^{(n-2)}_{n-1\,n-1} & e_{n-1}\\
0 & \cdots & 0 & 0 & e_{n-1} & a^{(n-1)}_{nn}
\end{pmatrix}.$$

It now remains to find a recipe for determining the transformations Q_1, ..., Q_{n-2}, all of which have basically the same form but operate on successively lower-dimensional matrices. We illustrate the method for Q_1, which we assume takes the form

$$\mathbf{Q}_{1} = \begin{pmatrix} 1 & \mathbf{0}^{T}\\ \mathbf{0} & \mathbf{P} \end{pmatrix},$$

with 0^T being a zero row vector, 0^T = {0, 0, ...}, of dimension (n-1). The matrix P is symmetric, of dimension ((n-1) × (n-1)), and satisfies P² = I and P^T = P. A possible choice which fulfils the latter two requirements is

$$\mathbf{P} = \mathbf{I} - 2\mathbf{u}\mathbf{u}^{T},$$

where I is the (n-1) unity matrix and u is an (n-1) column vector with unit norm u^T u = 1 (an inner product). Note that u u^T is an outer product, giving a matrix of dimension ((n-1) × (n-1)). Each matrix element of P then reads

$$P_{ij} = \delta_{ij} - 2u_{i}u_{j},$$

where i and j range from 1 to n-1. Applying the transformation Q_1 results in

$$\mathbf{Q}_{1}^{T}\mathbf{A}\mathbf{Q}_{1} = \begin{pmatrix} a_{11} & (\mathbf{P}\mathbf{v})^{T}\\ \mathbf{P}\mathbf{v} & \mathbf{A}' \end{pmatrix},$$

where v^T = {a_21, a_31, ..., a_n1} and P must satisfy (Pv)^T = {k, 0, 0, ...}. Then

$$\mathbf{P}\mathbf{v} = k\mathbf{e}, \qquad (13.25)$$

with e^T = {1, 0, 0, ..., 0}. Solving the latter equation gives us u and thus the needed transformation P. We first, however, need to compute the scalar k. Taking the scalar product of the last equation with its transpose and using the fact that P² = I, we get

$$k^{2} = \mathbf{v}^{T}\mathbf{v} = |\mathbf{v}|^{2} = \sum_{i=2}^{n} a_{i1}^{2},$$

which determines the constant k = ±|v|. Now we can rewrite eq. (13.25) as

$$\mathbf{v} - k\mathbf{e} = 2\mathbf{u}\left(\mathbf{u}^{T}\mathbf{v}\right), \qquad (13.27)$$


Taking the scalar product of this equation with itself, we obtain

$$2\left(\mathbf{u}^{T}\mathbf{v}\right)^{2} = \left(v^{2} \pm a_{21}v\right), \qquad (13.28)$$

which finally determines

$$\mathbf{u} = \frac{\mathbf{v} - k\mathbf{e}}{2\left(\mathbf{u}^{T}\mathbf{v}\right)}.$$

In solving eq. (13.28), great care has to be exercised in choosing the sign of k so as to make the right-hand side as large as possible, in order to avoid loss of numerical precision. The above steps are then repeated for every transformation until we have a tri-diagonal matrix suitable for obtaining the eigenvalues.
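A sketch of how the vector u and the scalar k of the first Householder step could be computed from the first column of the matrix, following eqs. (13.25)-(13.28), is given below (our illustration; zero-based storage and the function name are assumptions, not the book's code):

```cpp
#include <cmath>
#include <vector>

// First Householder step: from the first column of the symmetric matrix a,
// compute k and the unit vector u defining P = I - 2 u u^T, cf. eqs.
// (13.25)-(13.28). Zero-based storage; names are illustrative only.
void householder_vector(const std::vector<std::vector<double>> &a,
                        double &k, std::vector<double> &u) {
  const std::size_t n = a.size();
  if (n < 2) { k = 0.0; u.clear(); return; }
  std::vector<double> v(n - 1);
  double norm2 = 0.0;
  for (std::size_t i = 1; i < n; ++i) {      // v = (a_21, a_31, ..., a_n1)
    v[i - 1] = a[i][0];
    norm2 += v[i - 1] * v[i - 1];
  }
  // Choose the sign of k opposite to that of a_21 so that the right-hand
  // side of eq. (13.28) is as large as possible (avoids loss of precision).
  k = (v[0] >= 0.0) ? -std::sqrt(norm2) : std::sqrt(norm2);
  const double utv = std::sqrt(0.5 * (norm2 - k * v[0]));  // |u^T v|
  u.assign(n - 1, 0.0);
  if (utv == 0.0) return;                    // column already tri-diagonal
  for (std::size_t i = 0; i < n - 1; ++i)    // u = (v - k e) / (2 u^T v)
    u[i] = (v[i] - (i == 0 ? k : 0.0)) / (2.0 * utv);
}
```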

Diagonalization of a tri-diagonal matrix

The matrix has now been transformed into tri-diagonal form, and the last step is to transform it into a diagonal matrix, giving the eigenvalues on the diagonal. The programs which perform these transformations are

matrix A → tri-diagonal matrix → diagonal matrix

C:       void tred2(double **a, int n, double d[], double e[])
         void tqli(double d[], double e[], int n, double **z)
Fortran: CALL tred2(a, n, d, e)
         CALL tqli(d, e, n, z)
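A typical calling sequence could then look like the sketch below (we assume Numerical-Recipes-style implementations of tred2() and tqli() with exactly the prototypes listed above; memory allocation and indexing conventions follow that library):

```cpp
// Assumed prototypes, as listed above.
void tred2(double **a, int n, double d[], double e[]);
void tqli(double d[], double e[], int n, double **z);

// Schematic driver: a is the n x n real symmetric input matrix. After the
// two calls, d[] holds the eigenvalues and the columns of a hold the
// corresponding eigenvectors.
void diagonalize(double **a, int n, double d[], double e[]) {
  tred2(a, n, d, e);   // A -> tri-diagonal: d = diagonal, e = off-diagonal
  tqli(d, e, n, a);    // tri-diagonal -> diagonal; eigenvectors accumulate in a
}
```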

The last step, through the function tqli(), involves several technical details, but let us describe the basic idea in terms of a four-dimensional example. The current tri-diagonal matrix takes the form

$$\mathbf{A} = \begin{pmatrix}
d_{1} & e_{1} & 0 & 0\\
e_{1} & d_{2} & e_{2} & 0\\
0 & e_{2} & d_{3} & e_{3}\\
0 & 0 & e_{3} & d_{4}
\end{pmatrix}.$$

As a first observation, if any of the elements e_i are zero, the matrix can be separated into smaller pieces before diagonalization. Specifically, if e_1 = 0, then d_1 is an eigenvalue. Thus, let us introduce a transformation Q_1 in the form of a plane rotation, cf. eq. (13.8).

Then the similarity transformation

$$\mathbf{Q}_{1}^{T}\mathbf{A}\mathbf{Q}_{1} = \mathbf{A}' = \begin{pmatrix}
d'_{1} & e'_{1} & 0 & 0\\
e'_{1} & d_{2} & e_{2} & 0\\
0 & e_{2} & d_{3} & e'_{3}\\
0 & 0 & e'_{3} & d'_{4}
\end{pmatrix},$$

and successive rotations of this kind are applied until the off-diagonal elements become negligible, leaving the eigenvalues on the diagonal.
