Neumann Series, Separable (Degenerate) Kernels

Part of the document TOAN CHO VAT LY (pages 1032-1043)

Many and probably most integral equations cannot be solved by the specialized integral transform techniques of the preceding section. Here we develop three rather general techniques for solving integral equations. The first, due largely to Neumann, Liouville, and Volterra, develops the unknown function ϕ(x) as a power series in λ, where λ is a given constant. The method is applicable whenever the series converges.

The second method is somewhat restricted because it requires that the two variables appearing in the kernel K(x, t) be separable. However, there are two major rewards: (1) the relation between an integral equation and a set of simultaneous linear algebraic equations is shown explicitly, and (2) the method leads to eigenvalues and eigenfunctions, in close analogy to Section 3.5.

Third, a technique for numerical solution of Fredholm equations of both the first and second kind is outlined. The problem posed by ill-conditioned matrices is emphasized.

Neumann Series

We solve a linear integral equation of the second kind by successive approximations; our integral equation is the Fredholm equation,

$$\varphi(x) = f(x) + \lambda \int_a^b K(x,t)\,\varphi(t)\,dt, \tag{16.56}$$

in which f(x) ≠ 0. If the upper limit of the integral is a variable (Volterra equation), the following development will still hold, but with minor modifications. Let us try (there is no guarantee that it will work) to approximate our unknown function by

$$\varphi(x) \approx \varphi_0(x) = f(x). \tag{16.57}$$

This choice is not mandatory. If you can make a better guess, go ahead and guess. The choice here is equivalent to saying that the integral, or the constant λ, is small. To improve this first crude approximation, we feed ϕ₀(x) back into the integral of Eq. (16.56) and get

$$\varphi_1(x) = f(x) + \lambda \int_a^b K(x,t)\,f(t)\,dt. \tag{16.58}$$

Repeating this process of substituting the new ϕₙ(x) back into Eq. (16.56), we develop the sequence

$$\varphi_2(x) = f(x) + \lambda \int_a^b K(x,t_1)\,f(t_1)\,dt_1 + \lambda^2 \int_a^b\!\!\int_a^b K(x,t_1)\,K(t_1,t_2)\,f(t_2)\,dt_2\,dt_1 \tag{16.59}$$

and

$$\varphi_n(x) = \sum_{i=0}^{n} \lambda^i u_i(x), \tag{16.60}$$

where

$$u_0(x) = f(x), \qquad u_1(x) = \int_a^b K(x,t_1)\,f(t_1)\,dt_1,$$
$$u_2(x) = \int_a^b\!\!\int_a^b K(x,t_1)\,K(t_1,t_2)\,f(t_2)\,dt_2\,dt_1, \tag{16.61}$$
$$u_n(x) = \int_a^b \cdots \int_a^b K(x,t_1)\,K(t_1,t_2)\cdots K(t_{n-1},t_n)\,f(t_n)\,dt_n\cdots dt_1.$$

We expect that our solution ϕ(x) will be

$$\varphi(x) = \lim_{n\to\infty} \varphi_n(x) = \lim_{n\to\infty} \sum_{i=0}^{n} \lambda^i u_i(x), \tag{16.62}$$

provided that our infinite series converges. We may conveniently check the convergence by the Cauchy ratio test, Section 5.2, noting that

$$\left|\lambda^n u_n(x)\right| \le |\lambda|^n \cdot |f|_{\max} \cdot |K|_{\max}^{n} \cdot |b-a|^{n}, \tag{16.63}$$

using |f|max to represent the maximum value of |f(x)| in the interval [a, b] and |K|max to represent the maximum value of |K(x, t)| in its domain in the x, t-plane. We have convergence if

$$|\lambda| \cdot |K|_{\max} \cdot |b-a| < 1. \tag{16.64}$$

Note that |λ|ⁿ|uₙ|max is being used as a comparison series. If it converges, our actual series must converge. If this condition is not satisfied, we may or may not have convergence. A more sensitive test is required. Of course, even if the Neumann series diverges, there still may be a solution obtainable by another method.

To see what has been done with this iterative manipulation, we may find it helpful to rewrite the Neumann series solution, Eq. (16.59), in operator form. We start by rewriting Eq. (16.56) as

$$\varphi = \lambda K \varphi + f,$$

where K represents the integral operator ∫ₐᵇ K(x, t)[ ] dt. Solving for ϕ, we obtain

$$\varphi = (1 - \lambda K)^{-1} f.$$

Binomial expansion leads to Eq. (16.59). The convergence of the Neumann series is a demonstration that the inverse operator (1 − λK)⁻¹ exists.
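Written out, this binomial (geometric-series) expansion reads, formally,

$$(1 - \lambda K)^{-1} f = f + \lambda K f + \lambda^2 K^2 f + \cdots = \sum_{n=0}^{\infty} \lambda^n K^n f,$$

where Kⁿf is precisely the n-fold integral uₙ(x) of Eq. (16.61), so the operator series reproduces the Neumann series term by term.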

Example 16.3.1 NEUMANN SERIES SOLUTION

To illustrate the Neumann method, we consider the integral equation

$$\varphi(x) = x + \frac{1}{2}\int_{-1}^{1} (t - x)\,\varphi(t)\,dt. \tag{16.65}$$

To start the Neumann series, we take

$$\varphi_0(x) = x. \tag{16.66}$$

Then

$$\varphi_1(x) = x + \frac{1}{2}\int_{-1}^{1} (t - x)\,t\,dt = x + \frac{1}{2}\left[\frac{t^3}{3} - \frac{t^2 x}{2}\right]_{-1}^{1} = x + \frac{1}{3}.$$

Substituting ϕ₁(x) back into Eq. (16.65), we get

$$\varphi_2(x) = x + \frac{1}{2}\int_{-1}^{1} (t - x)\,t\,dt + \frac{1}{2}\int_{-1}^{1} (t - x)\,\frac{1}{3}\,dt = x + \frac{1}{3} - \frac{x}{3}.$$

Continuing this process of substituting back into Eq. (16.65), we obtain

$$\varphi_3(x) = x + \frac{1}{3} - \frac{x}{3} - \frac{1}{3^2},$$

and by induction

$$\varphi_{2n}(x) = x + \sum_{s=1}^{n} (-1)^{s-1} 3^{-s} - x \sum_{s=1}^{n} (-1)^{s-1} 3^{-s}. \tag{16.67}$$

Letting n → ∞, we get

$$\varphi(x) = \frac{3}{4}x + \frac{1}{4}. \tag{16.68}$$

This solution can (and should) be checked by substituting back into the original equation, Eq. (16.65).

It is interesting to note that our series converged easily even though Eq. (16.64) is not satisfied in this particular case. Actually, Eq. (16.64) is a rather crude upper bound on λ. It can be shown that a necessary and sufficient condition for the convergence of our series solution is that |λ| < |λₑ|, where λₑ is the eigenvalue of smallest magnitude of the corresponding homogeneous equation [f(x) = 0]. For this particular example |λₑ| = √3/2. Clearly, λ = 1/2 < |λₑ| = √3/2.
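As a numerical cross-check (our own sketch, not part of the text's method), the iteration of Eq. (16.65) can be carried out on a quadrature grid; with λ = 1/2 below the convergence threshold √3/2, the iterates settle rapidly onto the solution of Eq. (16.68):

```python
import numpy as np

# Gauss-Legendre nodes/weights on [-1, 1] for the integral in Eq. (16.65)
t, w = np.polynomial.legendre.leggauss(8)

# Neumann iteration: phi_{n+1}(x) = x + (1/2) * integral of (t - x) phi_n(t) dt,
# with x sampled at the same nodes as t.
phi = t.copy()                          # phi_0(x) = x at the nodes
kernel = t[None, :] - t[:, None]        # kernel[i, j] = t_j - x_i
for _ in range(60):
    phi = t + 0.5 * (kernel * w) @ phi

exact = 0.75 * t + 0.25                 # Eq. (16.68)
print(np.max(np.abs(phi - exact)))      # tiny: iterates converge to 3x/4 + 1/4
```

The error contracts by a factor |λ/λₑ| = 1/√3 per iteration, so 60 iterations reach machine precision.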

One approach to the calculation of time-dependent perturbations in quantum mechanics starts with the integral equation for the evolution operator

$$U(t, t_0) = 1 - \frac{i}{\hbar}\int_{t_0}^{t} V(t_1)\,U(t_1, t_0)\,dt_1. \tag{16.69a}$$

Iteration leads to

$$U(t, t_0) = 1 - \frac{i}{\hbar}\int_{t_0}^{t} V(t_1)\,dt_1 + \left(\frac{-i}{\hbar}\right)^{2}\int_{t_0}^{t}\!\!\int_{t_0}^{t_1} V(t_1)\,V(t_2)\,dt_2\,dt_1 + \cdots. \tag{16.69b}$$

The evolution operator is obtained as a series of multiple integrals of the perturbing potential V(t), closely analogous to the Neumann series, Eq. (16.60). For V = V₀, independent of t, the evolution operator becomes (see Exercise 3.4.13; replace t by Δt and construct U from products of T(t + Δt, t) as in Eq. (4.26))

$$U(t, t_0) = \exp\!\left[-\frac{i}{\hbar}(t - t_0)V_0\right].$$

A second and similar relationship between the Neumann series and quantum mechanics appears when the Schrödinger wave equation for scattering is reformulated as an integral equation. The first term in a Neumann series solution is the incident (unperturbed) wave. The second term is the first-order Born approximation, Eq. (9.203b) of Section 9.7.

The Neumann method may also be applied to Volterra integral equations of the second kind, Eq. (16.4), that is, Eq. (16.56) with the fixed upper limit b replaced by the variable x. In the Volterra case the Neumann series converges for all λ as long as the kernel is square integrable.
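A minimal numerical sketch of the Volterra case (our illustration): iterating ϕ(x) = 1 − 2∫₀ˣ t ϕ(t) dt, the equation of Exercise 16.3.1(a), with a running trapezoidal rule standing in for the integral, reproduces the closed form e^(−x²):

```python
import numpy as np

# Volterra equation of the second kind: phi(x) = 1 - 2 * integral_0^x t*phi(t) dt
x = np.linspace(0.0, 1.0, 2001)
phi = np.ones_like(x)                       # phi_0(x) = f(x) = 1
for _ in range(40):
    integrand = x * phi
    # cumulative trapezoid: running integral from 0 to each grid point
    running = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))))
    phi = 1.0 - 2.0 * running

err = np.max(np.abs(phi - np.exp(-x**2)))   # compare with the exact solution
print(err)                                  # small; limited by trapezoid accuracy
```

No restriction on λ is needed here, consistent with the all-λ convergence of the Volterra Neumann series.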

Separable Kernel

The technique of replacing our integral equation by simultaneous algebraic equations may also be used whenever our kernel K(x, t) is separable, in the sense that

$$K(x,t) = \sum_{j=1}^{n} M_j(x)\,N_j(t), \tag{16.70}$$

where n, the upper limit of the sum, is finite. Such kernels are sometimes called degenerate. Our class of separable kernels includes all polynomials and many of the elementary transcendental functions; for instance,

$$\cos(t - x) = \cos t \cos x + \sin t \sin x. \tag{16.70a}$$

If Eq. (16.70) is satisfied, substitution into the Fredholm equation of the second kind, Eq. (16.2), yields

$$\varphi(x) = f(x) + \lambda \sum_{j=1}^{n} M_j(x) \int_a^b N_j(t)\,\varphi(t)\,dt, \tag{16.71}$$

interchanging integration and summation. Now, the integral with respect to t is a constant,

$$\int_a^b N_j(t)\,\varphi(t)\,dt = c_j. \tag{16.72}$$

Hence Eq. (16.71) becomes

$$\varphi(x) = f(x) + \lambda \sum_{j=1}^{n} c_j M_j(x). \tag{16.73}$$

This gives us ϕ(x), our solution, once the constants c_j have been determined. Equation (16.73) further tells us the form of ϕ(x): f(x), plus a linear combination of the x-dependent factors of the separable kernel.

We may find the c_i by multiplying Eq. (16.73) by N_i(x) and integrating to eliminate the x-dependence. Use of Eq. (16.72) yields

$$c_i = b_i + \lambda \sum_{j=1}^{n} a_{ij}\,c_j, \tag{16.74}$$

where

$$b_i = \int_a^b N_i(x)\,f(x)\,dx, \qquad a_{ij} = \int_a^b N_i(x)\,M_j(x)\,dx. \tag{16.75}$$

It is perhaps helpful to write Eq. (16.74) in matrix form, with A = (a_ij):

$$\mathbf{b} = \mathbf{c} - \lambda A\mathbf{c} = (1 - \lambda A)\mathbf{c}, \tag{16.76a}$$

or³

$$\mathbf{c} = (1 - \lambda A)^{-1}\mathbf{b}. \tag{16.76b}$$

Equation (16.76a) is equivalent to the set of simultaneous linear algebraic equations

$$(1 - \lambda a_{11})c_1 - \lambda a_{12}c_2 - \lambda a_{13}c_3 - \cdots = b_1,$$
$$-\lambda a_{21}c_1 + (1 - \lambda a_{22})c_2 - \lambda a_{23}c_3 - \cdots = b_2, \tag{16.77}$$
$$-\lambda a_{31}c_1 - \lambda a_{32}c_2 + (1 - \lambda a_{33})c_3 - \cdots = b_3,$$

and so on.
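The whole procedure of Eqs. (16.73)-(16.76b) can be sketched numerically (our illustration, applied to the inhomogeneous equation of Exercise 16.3.2, ϕ(x) = x + ½∫₋₁¹ (t + x)ϕ(t) dt, where the kernel t + x splits as M₁ = 1, N₁ = t, M₂ = x, N₂ = 1):

```python
import numpy as np
from numpy.polynomial import legendre

lam = 0.5
t, w = legendre.leggauss(5)           # exact for the polynomial integrands here

M = [np.ones_like(t), t]              # M_j(x) sampled on the nodes
N = [t, np.ones_like(t)]              # N_i(t) sampled on the nodes
f = t                                 # f(x) = x on the nodes

# Eq. (16.75): b_i = integral of N_i f, a_ij = integral of N_i M_j
b = np.array([np.sum(w * Ni * f) for Ni in N])
A = np.array([[np.sum(w * Ni * Mj) for Mj in M] for Ni in N])

# Eq. (16.76b): c = (1 - lam*A)^(-1) b
c = np.linalg.solve(np.eye(2) - lam * A, b)

# Eq. (16.73): phi(x) = f(x) + lam * sum_j c_j M_j(x)
phi = lambda x: x + lam * (c[0] * 1.0 + c[1] * x)
print(c, phi(0.0))                    # c = [1, 1], so phi(x) = (3x + 1)/2
```

The two quadrature sums play exactly the role of the b_i and a_ij integrals, and the final linear solve is Eq. (16.76b) in miniature.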

If our integral equation is homogeneous [f(x) = 0], then b = 0. To get a nontrivial solution, we set the determinant of the coefficients of the c_i equal to zero,

$$|1 - \lambda A| = 0, \tag{16.78}$$

exactly as in Section 3.5. The roots of Eq. (16.78) yield our eigenvalues. Substituting into (1 − λA)c = 0, we find the c_i, and then Eq. (16.73) gives our solution.

Example 16.3.2

To illustrate this technique for determining eigenvalues and eigenfunctions of the homogeneous Fredholm equation, we consider the case

$$\varphi(x) = \lambda \int_{-1}^{1} (t + x)\,\varphi(t)\,dt. \tag{16.79}$$

Here (compare with Eqs. (16.71) and (16.77))

$$M_1(x) = 1, \quad M_2(x) = x, \quad N_1(t) = t, \quad N_2(t) = 1.$$

Equation (16.75) yields

$$a_{11} = a_{22} = 0, \quad a_{12} = \tfrac{2}{3}, \quad a_{21} = 2; \qquad b_1 = b_2 = 0.$$

³Notice the similarity to the operator form of the Neumann series.

Equation (16.78), our secular equation, becomes

$$\begin{vmatrix} 1 & -\frac{2\lambda}{3} \\ -2\lambda & 1 \end{vmatrix} = 0. \tag{16.80}$$

Expanding, we obtain

$$1 - \frac{4\lambda^2}{3} = 0, \qquad \lambda = \pm\frac{\sqrt{3}}{2}. \tag{16.81}$$

Substituting the eigenvalues λ = ±√3/2 into Eq. (16.76a), we have

$$c_1 \mp \frac{c_2}{\sqrt{3}} = 0. \tag{16.82}$$

Finally, with the choice c₁ = 1, Eq. (16.73) gives

$$\varphi_1(x) = \frac{\sqrt{3}}{2}\left(1 + \sqrt{3}\,x\right), \qquad \lambda = \frac{\sqrt{3}}{2}, \tag{16.83}$$

$$\varphi_2(x) = -\frac{\sqrt{3}}{2}\left(1 - \sqrt{3}\,x\right), \qquad \lambda = -\frac{\sqrt{3}}{2}. \tag{16.84}$$

Since our equation is homogeneous, the normalization of ϕ(x) is arbitrary.
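The eigenvalues of Example 16.3.2 can be cross-checked with a few lines of linear algebra (our sketch): the roots of |1 − λA| = 0 are the reciprocals of the eigenvalues of A.

```python
import numpy as np

# Matrix A = (a_ij) for the kernel t + x on [-1, 1] (Example 16.3.2):
# a11 = a22 = 0, a12 = 2/3, a21 = 2.
A = np.array([[0.0, 2.0 / 3.0],
              [2.0, 0.0]])

# |1 - lam*A| = 0  is equivalent to: 1/lam is an eigenvalue mu of A.
mu = np.linalg.eigvals(A)
lam = np.sort(1.0 / mu)
print(lam)            # approximately [-0.8660, 0.8660], i.e. -sqrt(3)/2, +sqrt(3)/2
```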

If the kernel is not separable in the sense of Eq. (16.70), there is still the possibility that it may be approximated by a kernel that is separable. Then we can get the exact solution of an approximate equation, that is, an equation that approximates the original one. The solution of the separable approximate-kernel problem can then be checked by substituting back into the original, nonseparable-kernel problem.

Numerical Solution

There is an extensive literature on the numerical solution of integral equations, much of it concerning special techniques for particular situations. One method of fair generality is the replacement of the single integral equation by a set of simultaneous algebraic equations; again, matrix techniques are invoked. This simultaneous-algebraic-equation, matrix approach is applied here to two different cases. For the homogeneous Fredholm equation of the second kind this method works well. For the Fredholm equation of the first kind the method is a disaster. First we deal with the disaster.

We consider the Fredholm integral equation of the first kind,

$$f(x) = \int_a^b K(x,t)\,\varphi(t)\,dt, \tag{16.84a}$$

with f(x) and K(x, t) known and ϕ(t) unknown. The integral can be evaluated (in principle) by quadrature techniques. For maximum accuracy the Gaussian method is recommended (if the kernel is continuous and has continuous derivatives). The numerical quadrature replaces the integral by a summation,

$$f(x_i) = \sum_{k=1}^{n} A_k\,K(x_i, t_k)\,\varphi(t_k), \tag{16.84b}$$

with A_k the quadrature coefficients. We abbreviate f(x_i) as f_i, ϕ(t_k) as ϕ_k, and A_k K(x_i, t_k) as B_ik. In effect we are changing from a function description to a vector-matrix description, with the n components of the vector (f_i) defined as the values of the function at the n discrete points, f(x_i). Equation (16.84b) becomes

$$f_i = \sum_{k=1}^{n} B_{ik}\,\varphi_k,$$

a matrix equation. Inverting (B_ik), we obtain

$$\varphi(t_k) = \varphi_k = \sum_{i=1}^{n} \left(B^{-1}\right)_{ki} f_i, \tag{16.84c}$$

and Eq. (16.84a) is solved, in principle. In practice, the quadrature coefficient-kernel matrix is often "ill-conditioned" (with respect to inversion). This means that in the inversion process small (numerical) errors are multiplied by large factors. All significant figures may be lost, and Eq. (16.84c) becomes numerical nonsense.

This disaster should not be entirely unexpected. Integration is essentially a smoothing operation. f(x) is relatively insensitive to local variation of ϕ(t). Conversely, ϕ(t) may be exceedingly sensitive to small changes in f(x). Small errors in f(x) or in B⁻¹ are magnified and accuracy disappears. This same behavior shows up in attempts to invert Laplace transforms numerically.
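A small numerical sketch (ours, not the text's) makes the ill-conditioning of Eqs. (16.84a)-(16.84c) concrete: for a smooth kernel such as e^(xt) on [0, 1], the quadrature-kernel matrix B has an enormous condition number, so even a tiny perturbation of f is amplified dramatically when B is inverted.

```python
import numpy as np

# Discretize the first-kind equation f = B*phi with the smooth kernel e^{xt}.
n = 10
t, w = np.polynomial.legendre.leggauss(n)
t = 0.5 * (t + 1.0)                 # map Gauss-Legendre nodes from [-1,1] to [0,1]
w = 0.5 * w                         # rescale weights accordingly

x = t                               # evaluate f at the quadrature nodes
B = w[None, :] * np.exp(x[:, None] * t[None, :])   # B_ik = A_k K(x_i, t_k)

phi_true = np.cos(t)                # a test "unknown"
f = B @ phi_true                    # consistent right-hand side

print(np.linalg.cond(B))            # huge condition number

delta = np.zeros(n); delta[0] = 1e-10
phi_noisy = np.linalg.solve(B, f + delta)
print(np.max(np.abs(phi_noisy - phi_true)))   # perturbation grossly amplified
```

The recovered ϕ is at the mercy of the smallest singular values of B, which is exactly the smoothing-versus-unsmoothing asymmetry described above.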

When the quadrature-matrix technique is applied to the integral-equation eigenvalue problem, that is, to the symmetric-kernel, homogeneous Fredholm equation of the second kind,⁴

$$\lambda\,\varphi(x) = \int_a^b K(x,t)\,\varphi(t)\,dt, \tag{16.84d}$$

the technique is far more successful. Replacing the integral by a set of simultaneous algebraic equations (numerical quadrature), we have

$$\lambda\,\varphi_i = \sum_{k=1}^{n} A_k\,K_{ik}\,\varphi_k, \tag{16.84e}$$

with ϕ_i = ϕ(x_i), as before. The points x_i, i = 1, 2, ..., n, are taken to be the same (numerically) as the t_k, k = 1, 2, ..., n, so that K_ik will be symmetric. The system is symmetrized by multiplying by A_i^(1/2), so that

$$\lambda\left(A_i^{1/2}\varphi_i\right) = \sum_{k=1}^{n} \left(A_i^{1/2} K_{ik} A_k^{1/2}\right)\left(A_k^{1/2}\varphi_k\right). \tag{16.84f}$$

Replacing A_i^(1/2)ϕ_i by ψ_i and A_i^(1/2)K_ik A_k^(1/2) by S_ik, we obtain

$$\lambda\,\psi = S\,\psi, \tag{16.84g}$$

with S symmetric (since the kernel K(x, t) was assumed symmetric). Of course, ψ has components ψ_i = ψ(x_i). Equation (16.84g) is our matrix eigenvalue equation, Eq. (3.136).

⁴The eigenvalue λ has been written on the left side, multiplying the eigenfunction, as is customary in matrix analysis (Section 3.5). In this form λ will take on a maximum value.

The eigenvalues are readily obtained by calling a canned eigenroutine.⁵ For kernels such as those of Exercise 16.3.18 and using a 10-point Gauss-Legendre quadrature, the eigenroutine determines the largest eigenvalue to within about 0.5 percent for the cases where the kernel has discontinuities in its derivatives. If the derivatives are continuous, the accuracy is much better.

Linz⁶ has described an interesting variational refinement in the determination of λmax to high accuracy. The key to his method is Exercise 17.8.7. The components of the eigenfunction vector are obtained from Eq. (16.84d), with ϕ(t_k) now known and ϕ_i = ϕ(x_i) generated as required. (The x_i are no longer tied to the t_k.)
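The symmetrization of Eqs. (16.84f)-(16.84g) takes only a few lines in practice. The sketch below (our illustration) uses the kernel e^(xt) of Exercise 16.3.18(a), for which the quoted value is λ_exact = 1.35303:

```python
import numpy as np

# Homogeneous Fredholm equation of the 2nd kind for K(x, t) = e^{xt} on [0, 1].
n = 10
t, A = np.polynomial.legendre.leggauss(n)
t = 0.5 * (t + 1.0)                 # Gauss-Legendre nodes mapped to [0, 1]
A = 0.5 * A                         # weights rescaled to [0, 1]

K = np.exp(np.outer(t, t))          # K_ik = K(x_i, t_k), symmetric
s = np.sqrt(A)
S = s[:, None] * K * s[None, :]     # S_ik = A_i^(1/2) K_ik A_k^(1/2), Eq. (16.84f)

lam = np.linalg.eigvalsh(S)         # symmetric eigenproblem, Eq. (16.84g)
print(lam[-1])                      # largest eigenvalue, close to 1.35303
```

Because this kernel is analytic, the 10-point result agrees with λ_exact to far better than the 0.5 percent quoted for kernels with derivative discontinuities.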

Exercises

16.3.1 Using the Neumann series, solve

(a) ϕ(x) = 1 − 2 ∫₀ˣ t ϕ(t) dt,

(b) ϕ(x) = x + ∫₀ˣ (t − x) ϕ(t) dt,

(c) ϕ(x) = x − ∫₀ˣ (t − x) ϕ(t) dt.

ANS. (a) ϕ(x) = e^(−x²).

16.3.2 Solve the equation

$$\varphi(x) = x + \frac{1}{2}\int_{-1}^{1} (t + x)\,\varphi(t)\,dt$$

by the separable kernel method. Compare with the Neumann method solution of Section 16.3.

ANS. ϕ(x) = ½(3x + 1).

16.3.3 Find the eigenvalues and eigenfunctions of

$$\varphi(x) = \lambda \int_{-1}^{1} (t - x)\,\varphi(t)\,dt.$$

16.3.4 Find the eigenvalues and eigenfunctions of

$$\varphi(x) = \lambda \int_{0}^{2\pi} \cos(x - t)\,\varphi(t)\,dt.$$

ANS. λ₁ = λ₂ = 1/π, ϕ(x) = A cos x + B sin x.

⁵See W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes, 2nd ed., Cambridge, UK: Cambridge University Press (1992), Chapter 11, for details, references, and computer codes. The symbolic software Mathematica and Maple also include matrix functions for computing eigenvalues and eigenvectors.

⁶P. Linz, On the numerical computation of eigenvalues and eigenvectors of symmetric integral equations. Math. Comput. 24: 905 (1970).

16.3.5 Find the eigenvalues and eigenfunctions of

$$y(x) = \lambda \int_{-1}^{1} (x - t)^2\,y(t)\,dt.$$

Hint. This problem may be treated by the separable kernel method or by a Legendre expansion.

16.3.6 If the separable kernel technique of this section is applied to a Fredholm equation of the first kind (Eq. (16.1)), show that Eq. (16.76b) is replaced by

$$\mathbf{c} = A^{-1}\mathbf{b}.$$

In general the solution for the unknown ϕ(t) is not unique.

16.3.7 Solve

$$\psi(x) = x + \int_{0}^{1} (1 + xt)\,\psi(t)\,dt$$

by each of the following methods:

(a) the Neumann series technique,
(b) the separable kernel technique,
(c) educated guessing.

16.3.8 Use the separable kernel technique to show that

$$\psi(x) = \lambda \int_{0}^{\pi} \cos x \sin t\,\psi(t)\,dt$$

has no solution (apart from the trivial ψ = 0). Explain this result in terms of separability and symmetry.

16.3.9 Solve

$$\varphi(x) = 1 + \lambda^2 \int_{0}^{x} (x - t)\,\varphi(t)\,dt$$

by each of the following methods:

(a) reduction to an ODE (find the boundary conditions),
(b) the Neumann series,
(c) the use of Laplace transforms.

ANS. ϕ(x) = cosh λx.

16.3.10 (a) In Eq. (16.69a) take V = V₀, independent of t. Without using Eq. (16.69b), show that Eq. (16.69a) leads directly to

$$U(t, t_0) = \exp\!\left[-\frac{i}{\hbar}(t - t_0)V_0\right].$$

(b) Repeat for Eq. (16.69b) without using Eq. (16.69a).

16.3.11 Given ϕ(x) = λ ∫₀¹ (1 + xt) ϕ(t) dt, solve for the eigenvalues and the eigenfunctions by the separable kernel technique.

16.3.12 Knowing the form of the solutions can be a great advantage. For the integral equation

$$\varphi(x) = \lambda \int_{0}^{1} (1 + xt)\,\varphi(t)\,dt,$$

assume ϕ(x) to have the form 1 + bx. Substitute into the integral equation. Integrate and solve for b and λ.

16.3.13 The integral equation

$$\varphi(x) = \lambda \int_{0}^{1} J_0(\alpha x t)\,\varphi(t)\,dt, \qquad J_0(\alpha) = 0,$$

is approximated by

$$\varphi(x) = \lambda \int_{0}^{1} \left(1 - x^2 t^2\right)\varphi(t)\,dt.$$

Find the minimum eigenvalue λ and the corresponding eigenfunction ϕ(t) of the approximate equation.

ANS. λmin = 1.112486, ϕ(x) = 1 − 0.303337x².

16.3.14 You are given the integral equation

$$\varphi(x) = \lambda \int_{0}^{1} \sin \pi x t\,\varphi(t)\,dt.$$

Approximate the kernel by

$$K(x,t) = 4xt(1 - xt) \approx \sin \pi x t.$$

Find the positive eigenvalue and the corresponding eigenfunction for the approximate integral equation.

Note. For K(x, t) = sin πxt, λ = 1.6334.

ANS. λ = 1.5678, ϕ(x) = x − 0.6955x² (λ₊ = √31 − 4, λ₋ = −√31 − 4).

16.3.15 The equation

$$f(x) = \int_a^b K(x,t)\,\varphi(t)\,dt$$

has a degenerate kernel K(x, t) = Σᵢ₌₁ⁿ Mᵢ(x) Nᵢ(t).

(a) Show that this integral equation has no solution unless f(x) can be written as

$$f(x) = \sum_{i=1}^{n} f_i\,M_i(x),$$

with the fᵢ constants.

(b) Show that to any solution ϕ(x) we may add ψ(x), provided ψ(x) is orthogonal to all the Nᵢ(x):

$$\int_a^b N_i(x)\,\psi(x)\,dx = 0 \quad \text{for all } i.$$

16.3.16 Using numerical quadrature, convert

$$\varphi(x) = \lambda \int_{0}^{1} J_0(\alpha x t)\,\varphi(t)\,dt, \qquad J_0(\alpha) = 0,$$

to a set of simultaneous linear equations.

(a) Find the minimum eigenvalue λ.
(b) Determine ϕ(x) at discrete values of x and plot ϕ(x) versus x. Compare with the approximate eigenfunction of Exercise 16.3.13.

ANS. (a) λmin = 1.14502.

16.3.17 Using numerical quadrature, convert

$$\varphi(x) = \lambda \int_{0}^{1} \sin \pi x t\,\varphi(t)\,dt$$

to a set of simultaneous linear equations.

(a) Find the minimum eigenvalue λ.
(b) Determine ϕ(x) at discrete values of x and plot ϕ(x) versus x. Compare with the approximate eigenfunction of Exercise 16.3.14.

ANS. (a) λmin = 1.6334.

16.3.18 Given a homogeneous Fredholm equation of the second kind,

$$\lambda\,\varphi(x) = \int_{0}^{1} K(x,t)\,\varphi(t)\,dt.$$

(a) Calculate the largest eigenvalue λ₀. Use the 10-point Gauss-Legendre quadrature technique. For comparison the eigenvalues listed by Linz are given as λexact.
(b) Tabulate ϕ(x_k), where the x_k are the 10 evaluation points in [0, 1].
(c) Tabulate the ratio

$$\frac{1}{\lambda_0\,\varphi(x)}\int_{0}^{1} K(x,t)\,\varphi(t)\,dt$$

for x = x_k. This is the test of whether or not you really have a solution.

(a) K(x, t) = e^(xt).

ANS. λexact = 1.35303.

(b) $$K(x,t) = \begin{cases} \tfrac{1}{2}x(2-t), & x < t, \\ \tfrac{1}{2}t(2-x), & x > t. \end{cases}$$

ANS. λexact = 0.24296.

(c) K(x, t) = |x − t|.

ANS. λexact = 0.34741.

(d) $$K(x,t) = \begin{cases} x, & x < t, \\ t, & x > t. \end{cases}$$

ANS. λexact = 0.40528.

Note. The evaluation points x_i of Gauss-Legendre quadrature for [−1, 1] may be linearly transformed into [0, 1],

$$x_i[0,1] = \tfrac{1}{2}\left(x_i[-1,1] + 1\right).$$

Then the weighting factors A_i are reduced in proportion to the length of the interval:

$$A_i[0,1] = \tfrac{1}{2}A_i[-1,1].$$

16.3.19 Using the matrix variational technique of Exercise 17.8.7, refine your calculation of the eigenvalue of Exercise 16.3.18(c) [K(x, t) = |x − t|]. Try a 40 × 40 matrix.

Note. Your matrix should be symmetric so that the (unknown) eigenvectors will be orthogonal.

ANS. (40-point Gauss-Legendre quadrature) 0.34727.
