Heat Flow, or Diffusion, PDE


Here we return to a specific PDE, the diffusion equation, to develop fairly general methods for adapting a special solution of a PDE to boundary conditions by introducing parameters; these methods apply to other second-order PDEs with constant coefficients as well. To some extent, they complement the earlier basic separation method for finding solutions in a systematic way.

We select the full time-dependent diffusion PDE for an isotropic medium. Assuming isotropy is actually not much of a restriction: if the (constant) rates of diffusion differ in different directions, as in wood, for example, the heat flow PDE takes the form

$$\frac{\partial\psi}{\partial t} = a^2\frac{\partial^2\psi}{\partial x^2} + b^2\frac{\partial^2\psi}{\partial y^2} + c^2\frac{\partial^2\psi}{\partial z^2}, \tag{9.219}$$

if we put the coordinate axes along the principal directions of anisotropy. Now we simply rescale the coordinates with the substitutions x = aξ, y = bη, z = cζ to recover the isotropic form,

$$\frac{\partial\Psi}{\partial t} = \frac{\partial^2\Psi}{\partial\xi^2} + \frac{\partial^2\Psi}{\partial\eta^2} + \frac{\partial^2\Psi}{\partial\zeta^2}, \tag{9.220}$$

for the temperature distribution function Ψ(ξ, η, ζ, t) = ψ(x, y, z, t).
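As a quick cross-check of this rescaling argument, the following SymPy sketch (an illustration, not part of the original text; it only assumes the chain rule) confirms that Ψ(ξ, η, ζ, t) = ψ(aξ, bη, cζ, t) obeys the isotropic equation (9.220) exactly when ψ obeys the anisotropic equation (9.219):

```python
# Illustrative SymPy check of the rescaling x = a*xi, y = b*eta, z = c*zeta.
# Psi(xi, eta, zeta, t) = psi(a*xi, b*eta, c*zeta, t); by the chain rule each second
# derivative in the new variables picks up the corresponding factor a^2, b^2, or c^2.
import sympy as sp

a, b, c, t = sp.symbols('a b c t', positive=True)
xi, eta, zeta = sp.symbols('xi eta zeta', real=True)
psi = sp.Function('psi')

Psi = psi(a*xi, b*eta, c*zeta, t)

lhs_9220 = sp.diff(Psi, t) - (sp.diff(Psi, xi, 2) + sp.diff(Psi, eta, 2) + sp.diff(Psi, zeta, 2))
# Printing shows psi_t - (a^2 psi_xx + b^2 psi_yy + c^2 psi_zz) evaluated at (a*xi, b*eta, c*zeta, t),
# i.e., exactly the left-hand side of the anisotropic PDE (9.219):
print(sp.simplify(lhs_9220))
```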

For simplicity, we first solve the time-dependent PDE for a homogeneous one-dimensional medium, say a long metal rod in the x-direction,

$$\frac{\partial\psi}{\partial t} = a^2\frac{\partial^2\psi}{\partial x^2}, \tag{9.221}$$

where the constant a measures the diffusivity, or heat conductivity, of the medium. We attempt to solve this linear PDE with constant coefficients with the exponential product Ansatz ψ = e^{αx}·e^{βt}, which, when substituted into Eq. (9.221), solves the PDE with the constraint β = a²α² for the parameters. We seek exponentially decaying solutions for large times, that is, solutions with negative β values, and therefore set α = iω, α² = −ω² for real ω and have

$$\psi(x,t) = e^{i\omega x}e^{-\omega^2 a^2 t} = (\cos\omega x + i\sin\omega x)\,e^{-\omega^2 a^2 t}.$$

Forming real linear combinations we obtain the solution

$$\psi(x,t) = (A\cos\omega x + B\sin\omega x)\,e^{-\omega^2 a^2 t},$$

for any choice of A, B, ω, which are introduced to satisfy boundary conditions. Upon summing over multiples nω of the basic frequency for periodic boundary conditions, or integrating over the parameter ω for general (nonperiodic) boundary conditions, we find a solution,

$$\psi(x,t) = \int_0^{\infty}\bigl[A(\omega)\cos\omega x + B(\omega)\sin\omega x\bigr]\,e^{-a^2\omega^2 t}\,d\omega, \tag{9.222}$$

that is general enough to be adapted to boundary conditions at t = 0, say. When the boundary condition gives a nonzero temperature ψ₀, as for our rod, then the summation method applies (Fourier expansion of the boundary condition). If the space is unrestricted (as for an infinitely extended rod), the Fourier integral applies.
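The constraint between the parameters of the exponential Ansatz can be verified directly; the short SymPy sketch below (an illustration, not part of the text) substitutes ψ = e^{αx+βt} into Eq. (9.221), recovers β = a²α², and shows that the choice α = iω produces the decaying factor e^{−a²ω²t}:

```python
# Illustrative check of the product Ansatz psi = exp(alpha*x + beta*t) in psi_t = a^2 psi_xx.
import sympy as sp

x, t, a, alpha, beta, omega = sp.symbols('x t a alpha beta omega')
psi = sp.exp(alpha*x + beta*t)

# Substituting the Ansatz into the PDE and solving for beta gives the constraint beta = a^2 alpha^2:
constraint = sp.solve(sp.Eq(sp.diff(psi, t), a**2*sp.diff(psi, x, 2)), beta)
print(constraint)                                        # [a**2*alpha**2]

# The choice alpha = I*omega (omega real) then makes beta negative:
print(sp.expand(constraint[0].subs(alpha, sp.I*omega)))  # -a**2*omega**2
```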

• This summation or integration over parameters is one of the standard methods for generalizing specific PDE solutions in order to adapt them to boundary conditions.

Example 9.8.1 A SPECIFIC BOUNDARY CONDITION

Let us solve a one-dimensional case explicitly, where the temperature at time t = 0 is ψ₀(x) = 1 = const. in the interval between x = −1 and x = +1 and zero for x > 1 and x < −1. At the ends, x = ±1, the temperature is always held at zero.

For a finite interval we choose the cos(lπx/2) spatial solutions of Eq. (9.221) for odd integer l, because they vanish at x = ±1. Thus, at t = 0 our solution is a Fourier series,

$$\psi(x,0) = \sum_{l=1}^{\infty} a_l\cos\frac{\pi l x}{2} = 1, \qquad -1 < x < 1,$$

with coefficients (see Section 14.1)

$$a_l = \int_{-1}^{1} 1\cdot\cos\frac{\pi l x}{2}\,dx = \frac{2}{l\pi}\sin\frac{\pi l x}{2}\bigg|_{x=-1}^{1} = \frac{4}{\pi l}\sin\frac{l\pi}{2} = \frac{4(-1)^m}{(2m+1)\pi}, \quad l = 2m+1; \qquad a_l = 0, \quad l = 2m.$$

Including its time dependence, the full solution is given by the series

$$\psi(x,t) = \frac{4}{\pi}\sum_{m=0}^{\infty}\frac{(-1)^m}{2m+1}\cos\frac{(2m+1)\pi x}{2}\,e^{-t\left((2m+1)\pi a/2\right)^2}, \tag{9.223}$$

which converges absolutely for t > 0 but only conditionally at t = 0, as a result of the discontinuity at x = ±1.
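As a numerical illustration (not from the text; it assumes the value a = 1 for the diffusivity), the partial sums of Eq. (9.223) can be evaluated directly: at t = 0 they reproduce the box profile in the interior and vanish at the held endpoints x = ±1, while for t > 0 the exponential factors damp the higher modes almost immediately.

```python
# Partial sums of the series solution, Eq. (9.223), for an assumed diffusivity a = 1.
import numpy as np

def psi_series(x, t, a=1.0, mmax=200):
    m = np.arange(mmax)[:, None]                       # mode index m = 0, 1, 2, ...
    k = (2*m + 1)*np.pi/2                               # spatial frequencies (2m+1)*pi/2
    terms = (-1)**m/(2*m + 1)*np.cos(k*x)*np.exp(-t*(a*k)**2)
    return 4/np.pi*terms.sum(axis=0)

x = np.linspace(-1.0, 1.0, 5)
print(psi_series(x, 0.0))    # ~1 in the interior, exactly 0 at x = +-1 (ends held at zero)
print(psi_series(x, 0.1))    # smooth decaying profile once t > 0
```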

Without the restriction to zero temperature at the endpoints of the given finite interval, the Fourier series is replaced by a Fourier integral. The general solution is then given by Eq. (9.222). At t = 0 the given temperature distribution ψ₀ = 1 gives the coefficients as (see Section 15.3)

$$A(\omega) = \frac{1}{\pi}\int_{-1}^{1}\cos\omega x\,dx = \frac{1}{\pi}\,\frac{\sin\omega x}{\omega}\bigg|_{x=-1}^{1} = \frac{2\sin\omega}{\pi\omega}, \qquad B(\omega) = 0.$$

Therefore

$$\psi(x,t) = \frac{2}{\pi}\int_0^{\infty}\frac{\sin\omega}{\omega}\cos(\omega x)\,e^{-a^2\omega^2 t}\,d\omega. \tag{9.224}$$

In three dimensions the corresponding exponential Ansatz ψ = e^{ik·r/a+βt} leads to a solution with the relation β = −k·k = −k² for its parameter, and the three-dimensional

form of Eq. (9.221) becomes

$$\frac{\partial^2\psi}{\partial x^2} + \frac{\partial^2\psi}{\partial y^2} + \frac{\partial^2\psi}{\partial z^2} + k^2\psi = 0, \tag{9.225}$$

which is the Helmholtz equation; it may be solved by the separation method just like the earlier Laplace equation, in Cartesian, cylindrical, or spherical coordinates, under appropriately generalized boundary conditions.

In Cartesian coordinates, with the product Ansatz of Eq. (9.35), the separated x- and y-ODEs from Eq. (9.225) are the same as Eqs. (9.38) and (9.41), while the z-ODE, Eq. (9.42), generalizes to

$$\frac{1}{Z}\frac{d^2Z}{dz^2} = -k^2 + l^2 + m^2 = n^2 > 0, \tag{9.226}$$

where we introduce another separation constant, n², constrained by

$$k^2 = l^2 + m^2 - n^2 \tag{9.227}$$

to produce a symmetric set of equations. Now our solution of Helmholtz's Eq. (9.225) is labeled according to the choice of all three separation constants l, m, n subject to the constraint Eq. (9.227). As before, the z-ODE, Eq. (9.226), yields exponentially decaying solutions ∼ e^{−nz}. The boundary condition at z = 0 fixes the expansion coefficients a_{lm}, as in Eq. (9.44).

In cylindrical coordinates, we now use the separation constant l² for the z-ODE, with an exponentially decaying solution in mind,

$$\frac{d^2Z}{dz^2} = l^2 Z, \qquad l^2 > 0, \tag{9.228}$$

so Z ∼ e^{−lz}, because the temperature goes to zero at large z. If we set k² + l² = n², Eqs. (9.53) to (9.54) stay the same, so we end up with the same Fourier–Bessel expansion, Eq. (9.56), as before.

In spherical coordinates with radial boundary conditions, the separation method leads to the same angular ODEs in Eqs. (9.61) and (9.64), and the radial ODE now becomes

$$\frac{1}{r^2}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right) + k^2R - \frac{QR}{r^2} = 0, \qquad Q = l(l+1), \tag{9.229}$$

that is, Eq. (9.65), whose solutions are the spherical Bessel functions of Section 11.7. They are listed in Table 9.2.

The restriction that k² be a constant is unnecessarily severe. The separation process will still work with Helmholtz's PDE for k² as general as

$$k^2 = f(r) + \frac{1}{r^2}g(\theta) + \frac{1}{r^2\sin^2\theta}h(\varphi) + k'^2. \tag{9.230}$$

In the hydrogen atom we have k² = f(r) in the Schrödinger wave equation, and this leads to a closed-form solution involving Laguerre polynomials.

Alternate Solutions

In a new approach to the heat flow PDE, suggested by experiments, we now return to the one-dimensional PDE, Eq. (9.221), seeking solutions of a new functional form ψ(x, t) = u(x/√t), which is suggested by Example 15.1.1. Substituting u(ξ), ξ = x/√t, into Eq. (9.221) using

$$\frac{\partial\psi}{\partial x} = \frac{u'}{\sqrt{t}}, \qquad \frac{\partial^2\psi}{\partial x^2} = \frac{u''}{t}, \qquad \frac{\partial\psi}{\partial t} = -\frac{x}{2\sqrt{t^3}}\,u' \tag{9.231}$$

with the notation u′(ξ) ≡ du/dξ, the PDE is reduced to the ODE

$$2a^2u''(\xi) + \xi u'(\xi) = 0. \tag{9.232}$$

Writing this ODE as

$$\frac{u''}{u'} = -\frac{\xi}{2a^2},$$

we can integrate it once to get ln u′ = −ξ²/(4a²) + ln C₁, with an integration constant C₁. Exponentiating and integrating again we find the solution

$$u(\xi) = C_1\int_0^{\xi} e^{-\xi^2/(4a^2)}\,d\xi + C_2, \tag{9.233}$$

involving two integration constants C_i. Normalizing this solution at time t = 0 to temperature +1 for x > 0 and −1 for x < 0, our boundary conditions, fixes the constants C_i, so

$$\psi = \frac{1}{a\sqrt{\pi}}\int_0^{x/\sqrt{t}} e^{-\xi^2/(4a^2)}\,d\xi = \frac{2}{\sqrt{\pi}}\int_0^{x/(2a\sqrt{t})} e^{-v^2}\,dv = \operatorname{erf}\!\left(\frac{x}{2a\sqrt{t}}\right), \tag{9.234}$$

where erf denotes Gauss' error function (see Exercise 5.10.4). See Example 15.1.1 for a derivation using a Fourier transform. We need to generalize this specific solution to adapt it to boundary conditions.

To this end we now generate new solutions of the PDE with constant coefficients by differentiating a special solution, Eq. (9.234). In other words, if ψ(x, t) solves the PDE in Eq. (9.221), so do ∂ψ/∂t and ∂ψ/∂x, because these derivatives and the differentiations of the PDE commute; that is, the order in which they are carried out does not matter. Note carefully that this method no longer works if any coefficient of the PDE depends on t or x explicitly. However, PDEs with constant coefficients dominate in physics. Examples are Newton's equations of motion (ODEs) in classical mechanics, the wave equations of electrodynamics, and Poisson's and Laplace's equations in electrostatics and gravity. Even Einstein's nonlinear field equations of general relativity take on this special form in local geodesic coordinates.

Therefore, by differentiating Eq. (9.234) with respect to x, we find the simpler, more basic solution

$$\psi_1(x,t) = \frac{1}{a\sqrt{\pi t}}\,e^{-x^2/(4a^2t)}, \tag{9.235}$$

and, repeating the process, another basic solution,

$$\psi_2(x,t) = \frac{x}{2a^3\sqrt{\pi t^3}}\,e^{-x^2/(4a^2t)}. \tag{9.236}$$
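As a minimal SymPy check (illustrative, not part of the text), one can confirm that the error-function solution, Eq. (9.234), indeed satisfies Eq. (9.221) and that its successive x-derivatives reproduce the basic Gaussian solutions (9.235) and (9.236), which therefore satisfy the PDE as well:

```python
# Illustrative SymPy verification: psi = erf(x/(2a*sqrt(t))) solves psi_t = a^2 psi_xx,
# and its x-derivatives reproduce the basic Gaussian solutions of Eqs. (9.235) and (9.236).
import sympy as sp

x, t, a = sp.symbols('x t a', positive=True)

def heat_residual(f):                      # psi_t - a^2 psi_xx; zero for a solution
    return sp.simplify(sp.diff(f, t) - a**2*sp.diff(f, x, 2))

psi  = sp.erf(x/(2*a*sp.sqrt(t)))          # Eq. (9.234)
psi1 = sp.diff(psi, x)                     # Eq. (9.235)
psi2 = -sp.diff(psi1, x)                   # Eq. (9.236), up to the overall sign of the derivative

print(heat_residual(psi), heat_residual(psi1), heat_residual(psi2))   # 0 0 0
print(sp.simplify(psi1))   # exp(-x**2/(4*a**2*t))/(sqrt(pi)*a*sqrt(t))
print(sp.simplify(psi2))   # x*exp(-x**2/(4*a**2*t))/(2*sqrt(pi)*a**3*t**(3/2))
```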

Again, these solutions have to be generalized to adapt them to boundary conditions. And there is yet another method of generating new solutions of a PDE with constant coefficients: We can translate a given solution, for example, ψ₁(x, t) → ψ₁(x − α, t), and then integrate over the translation parameter α. Therefore

$$\psi(x,t) = \frac{1}{2a\sqrt{\pi t}}\int_{-\infty}^{\infty} C(\alpha)\,e^{-(x-\alpha)^2/(4a^2t)}\,d\alpha \tag{9.237}$$

is again a solution, which we rewrite using the substitution

$$\xi = \frac{x-\alpha}{2a\sqrt{t}}, \qquad \alpha = x - 2a\xi\sqrt{t}, \qquad d\alpha = -2a\sqrt{t}\,d\xi. \tag{9.238}$$

Thus, we find that

$$\psi(x,t) = \frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty} C\bigl(x - 2a\xi\sqrt{t}\bigr)\,e^{-\xi^2}\,d\xi \tag{9.239}$$

is a solution of our PDE. In this form we recognize the significance of the weight function C(x) from the translation method because, at t = 0, ψ(x, 0) = C(x) = ψ₀(x) is determined by the boundary condition, and ∫_{−∞}^{∞} e^{−ξ²} dξ = √π. Therefore, we can also write the solution as

$$\psi(x,t) = \frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}\psi_0\bigl(x - 2a\xi\sqrt{t}\bigr)\,e^{-\xi^2}\,d\xi, \tag{9.240}$$

displaying the role of the boundary condition explicitly. From Eq. (9.240) we see that the initial temperature distribution, ψ₀(x), spreads out over time and is damped by the Gaussian weight function.
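Equation (9.240) is also convenient numerically: because the weight is exactly e^{−ξ²}, Gauss–Hermite quadrature evaluates the integral with a handful of nodes. The sketch below (illustrative, with the assumed value a = 1 and the box profile of Example 9.8.1 as ψ₀) is one way to do this:

```python
# Numerical evaluation of Eq. (9.240) by Gauss-Hermite quadrature (weight exp(-xi^2)).
# Assumed values: diffusivity a = 1, box initial profile psi0 from Example 9.8.1.
import numpy as np

def heat_evolve(psi0, x, t, a=1.0, npts=80):
    xi, w = np.polynomial.hermite.hermgauss(npts)      # nodes and weights for the e^{-xi^2} weight
    # psi(x, t) = (1/sqrt(pi)) * sum_i w_i * psi0(x - 2*a*xi_i*sqrt(t))
    return (w*psi0(x[:, None] - 2*a*xi*np.sqrt(t))).sum(axis=1)/np.sqrt(np.pi)

psi0 = lambda x: np.where(np.abs(x) < 1, 1.0, 0.0)
x = np.linspace(-3.0, 3.0, 7)
print(heat_evolve(psi0, x, t=0.25))                    # the box spreads and its edges smooth out
```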

Example 9.8.2 SPECIAL BOUNDARY CONDITION AGAIN

Let us express the solution of Example 9.8.1 in terms of the error function solution of Eq. (9.234). The boundary condition at t = 0 is ψ₀(x) = 1 for −1 < x < 1 and zero for |x| > 1. From Eq. (9.240) we find the limits on the integration variable ξ by setting x − 2aξ√t = ±1. This yields the integration endpoints ξ = (x ∓ 1)/(2a√t). Therefore our solution becomes

$$\psi(x,t) = \frac{1}{\sqrt{\pi}}\int_{(x-1)/(2a\sqrt{t})}^{(x+1)/(2a\sqrt{t})} e^{-\xi^2}\,d\xi.$$

Using the error function defined in Eq. (9.234) we can also write this solution as

$$\psi(x,t) = \frac{1}{2}\left[\operatorname{erf}\!\left(\frac{x+1}{2a\sqrt{t}}\right) - \operatorname{erf}\!\left(\frac{x-1}{2a\sqrt{t}}\right)\right]. \tag{9.241}$$

Comparing this form of our solution with that from Example 9.8.1, we see that Eq. (9.241) equals the Fourier integral of Example 9.8.1, an identity that gives the Fourier integral, Eq. (9.224), in closed form in terms of the tabulated error function.
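This identity is easy to check numerically. The following sketch (illustrative, assuming a = 1 and arbitrary sample values of x and t) evaluates both sides: the closed form (9.241) with scipy.special.erf and the Fourier integral (9.224) with scipy.integrate.quad:

```python
# Numerical comparison of the closed form, Eq. (9.241), with the Fourier integral, Eq. (9.224).
# Assumed sample values: a = 1, t = 0.3, x = 0.4.
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

a, t, x = 1.0, 0.3, 0.4

closed = 0.5*(erf((x + 1)/(2*a*np.sqrt(t))) - erf((x - 1)/(2*a*np.sqrt(t))))      # Eq. (9.241)

integrand = lambda w: np.sin(w)/w*np.cos(w*x)*np.exp(-a**2*w**2*t)
fourier = 2/np.pi*quad(integrand, 0, np.inf)[0]                                    # Eq. (9.224)

print(closed, fourier)            # the two numbers agree to quadrature accuracy
```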

Finally, we consider the heat flow case for an extended spherically symmetric medium centered at the origin, which prescribes polar coordinates r, θ, ϕ. We expect a solution of the form ψ(r, t) = u(r, t). Using Eq. (2.48) we find the PDE

$$\frac{\partial u}{\partial t} = a^2\left(\frac{\partial^2 u}{\partial r^2} + \frac{2}{r}\frac{\partial u}{\partial r}\right), \tag{9.242}$$

which we transform to the one-dimensional heat flow PDE by the substitution

$$u = \frac{v(r,t)}{r}, \qquad \frac{\partial u}{\partial r} = \frac{1}{r}\frac{\partial v}{\partial r} - \frac{v}{r^2}, \qquad \frac{\partial u}{\partial t} = \frac{1}{r}\frac{\partial v}{\partial t}, \qquad \frac{\partial^2 u}{\partial r^2} = \frac{1}{r}\frac{\partial^2 v}{\partial r^2} - \frac{2}{r^2}\frac{\partial v}{\partial r} + \frac{2v}{r^3}. \tag{9.243}$$

This yields the PDE

$$\frac{\partial v}{\partial t} = a^2\frac{\partial^2 v}{\partial r^2}. \tag{9.244}$$

Example 9.8.3 SPHERICALLY SYMMETRIC HEAT FLOW

Let us apply the one-dimensional heat flow PDE with the solution Eq. (9.234) to a spherically symmetric heat flow under fairly common boundary conditions, where x is replaced by the radial variable. Initially we have zero temperature everywhere. Then, at time t = 0, a finite amount of heat energy Q is released at the origin, spreading evenly in all directions.

What is the resulting spatial and temporal temperature distribution?

Inspecting our special solution in Eq. (9.236) we see that, for t → 0, the temperature

$$\frac{v(r,t)}{r} = \frac{C}{\sqrt{t^3}}\,e^{-r^2/(4a^2t)} \tag{9.245}$$

goes to zero for all r ≠ 0, so zero initial temperature is guaranteed. As t → ∞, the temperature v/r → 0 for all r, including the origin, which is implicit in our boundary conditions.

The constant C can be determined from energy conservation, which gives the constraint

$$Q = \sigma\rho\int\frac{v}{r}\,d^3r = \frac{4\pi\sigma\rho C}{\sqrt{t^3}}\int_0^{\infty} r^2 e^{-r^2/(4a^2t)}\,dr = 8\sqrt{\pi^3}\,\sigma\rho a^3 C, \tag{9.246}$$

where ρ is the constant density of the medium and σ is its specific heat. Here we have rescaled the integration variable and integrated by parts to get

$$\int_0^{\infty} e^{-r^2/(4a^2t)}\,r^2\,dr = \bigl(2a\sqrt{t}\,\bigr)^3\int_0^{\infty} e^{-\xi^2}\xi^2\,d\xi, \qquad \int_0^{\infty} e^{-\xi^2}\xi^2\,d\xi = -\frac{\xi}{2}\,e^{-\xi^2}\bigg|_0^{\infty} + \frac{1}{2}\int_0^{\infty} e^{-\xi^2}\,d\xi = \frac{\sqrt{\pi}}{4}.$$

The temperature, as given by Eq. (9.245) at any moment, that is, at fixed t, is a Gaussian distribution that flattens out as time increases, because its width is proportional to √t. As a function of time the temperature is proportional to t^{−3/2}e^{−T/t}, with T ≡ r²/(4a²), which rises from zero to a maximum and then falls off to zero again for large times. To find the maximum, we set

$$\frac{d}{dt}\left(t^{-3/2}e^{-T/t}\right) = t^{-5/2}e^{-T/t}\left(\frac{T}{t} - \frac{3}{2}\right) = 0, \tag{9.247}$$

from which we find t = 2T/3.
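A brief numerical check of this example (illustrative only; it assumes the values Q = σ = ρ = a = 1) confirms that with C = Q/(8√π³ σρa³), as fixed by Eq. (9.246), the total heat content stays equal to Q at every time, and that the temperature at a fixed radius indeed peaks at t = 2T/3:

```python
# Numerical check of Example 9.8.3 with assumed values Q = sigma = rho = a = 1.
import numpy as np
from scipy.integrate import quad

Q = sigma = rho = a = 1.0
C = Q/(8*np.pi**1.5*sigma*rho*a**3)           # constant fixed by Eq. (9.246)

def temperature(r, t):                        # Eq. (9.245): psi = v/r
    return C/t**1.5*np.exp(-r**2/(4*a**2*t))

for t in (0.01, 0.1, 1.0):                    # total heat sigma*rho*int psi d^3r stays Q
    heat = 4*np.pi*sigma*rho*quad(lambda r: r**2*temperature(r, t), 0, np.inf)[0]
    print(t, heat)                            # ~1.0 at every time

r = 2.0
T = r**2/(4*a**2)
t = np.linspace(0.05, 5.0, 2000)
print(t[np.argmax(temperature(r, t))], 2*T/3) # numerical peak time vs. the prediction 2T/3
```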

In the case of cylindrical symmetry (in the plane z = 0, in plane polar coordinates ρ = √(x² + y²), ϕ) we look for a temperature ψ = u(ρ, t) that then satisfies the PDE (using Eq. (2.35) in the diffusion equation)

$$\frac{\partial u}{\partial t} = a^2\left(\frac{\partial^2 u}{\partial\rho^2} + \frac{1}{\rho}\frac{\partial u}{\partial\rho}\right), \tag{9.248}$$

which is the planar analog of Eq. (9.242). This PDE also has solutions with the functional dependence on ρ/√t ≡ r. Upon substituting

$$u = v\!\left(\frac{\rho}{\sqrt{t}}\right), \qquad \frac{\partial u}{\partial t} = -\frac{\rho v'}{2t^{3/2}}, \qquad \frac{\partial u}{\partial\rho} = \frac{v'}{\sqrt{t}}, \qquad \frac{\partial^2 u}{\partial\rho^2} = \frac{v''}{t} \tag{9.249}$$

into Eq. (9.248) with the notation v′ ≡ dv/dr, we find the ODE

$$a^2v'' + \left(\frac{a^2}{r} + \frac{r}{2}\right)v' = 0. \tag{9.250}$$

This is a first-order ODE for v′, which we can integrate when we separate the variables v′ and r as

$$\frac{v''}{v'} = -\left(\frac{1}{r} + \frac{r}{2a^2}\right). \tag{9.251}$$

This yields

$$v'(r) = \frac{C}{r}\,e^{-r^2/(4a^2)} = \frac{C\sqrt{t}}{\rho}\,e^{-\rho^2/(4a^2t)}. \tag{9.252}$$

This special solution for cylindrical symmetry can be similarly generalized and adapted to boundary conditions, as for the spherical case. Finally, the z-dependence can be factored in, because z separates from the plane polar radial variable ρ.

In summary, PDEs can be solved with initial conditions, just like ODEs, or with boundary conditions prescribing the value of the solution or its derivative on boundary surfaces, curves, or points. When the solution is prescribed on the boundary, the problem is called a Dirichlet problem; if the normal derivative of the solution is prescribed on the boundary, it is called a Neumann problem.

When the initial temperature is prescribed for the one-dimensional or three-dimensional heat equation (with spherical or cylindrical symmetry), it becomes a weight function of the solution, in terms of an integral over the generic Gaussian solution. The three-dimensional heat equation, with spherical or cylindrical boundary conditions, is solved by separation of the variables, leading to eigenfunctions in each separated variable and eigenvalues as separation constants. For finite boundary intervals in each spatial coordinate, the sum over separation constants leads to a Fourier-series solution, while unbounded intervals lead to a Fourier-integral solution. The separation of variables method attempts to solve a PDE by writing the solution as a product of functions of one variable each. General conditions for the separation method to work are provided by the symmetry properties of the PDE, to which continuous group theory applies.

Additional Readings

Bateman, H., Partial Differential Equations of Mathematical Physics. New York: Dover (1944), 1st ed. (1932).

A wealth of applications of various partial differential equations in classical physics. Excellent examples of the use of different coordinate systems — ellipsoidal, paraboloidal, toroidal coordinates, and so on.

Cohen, H., Mathematics for Scientists and Engineers. Englewood Cliffs, NJ: Prentice-Hall (1992).

Courant, R., and D. Hilbert, Methods of Mathematical Physics, Vol. 1 (English edition). New York: Interscience (1953), Wiley (1989). This is one of the classic works of mathematical physics. Originally published in German in 1924, the revised English edition is an excellent reference for a rigorous treatment of Green's functions and for a wide variety of other topics on mathematical physics.

Davis, P. J., and P. Rabinowitz, Numerical Integration. Waltham, MA: Blaisdell (1967). This book covers a great deal of material in a relatively easy-to-read form. Appendix 1 (On the Practical Evaluation of Integrals by M. Abramowitz) is excellent as an overall view.

Garcia, A. L., Numerical Methods for Physics. Englewood Cliffs, NJ: Prentice-Hall (1994).

Hamming, R. W., Numerical Methods for Scientists and Engineers, 2nd ed. New York: McGraw-Hill (1973), reprinted Dover (1987). This well-written text discusses a wide variety of numerical methods from zeros of functions to the fast Fourier transform. All topics are selected and developed with a modern computer in mind.

Hubbard, J., and B. H. West, Differential Equations. Berlin: Springer (1995).

Ince, E. L., Ordinary Differential Equations. New York: Dover (1956). The classic work in the theory of ordinary differential equations.

Lapidus, L., and J. H. Seinfeld, Numerical Solutions of Ordinary Differential Equations. New York: Academic Press (1971). A detailed and comprehensive discussion of numerical techniques, with emphasis on the Runge–Kutta and predictor–corrector methods. Recent work on the improvement of characteristics such as stability is clearly presented.

Margenau, H., and G. M. Murphy, The Mathematics of Physics and Chemistry, 2nd ed. Princeton, NJ: Van Nostrand (1956). Chapter 5 covers curvilinear coordinates and 13 specific coordinate systems.

Miller, R. K., and A.N. Michel, Ordinary Differential Equations. New York: Academic Press (1982).

Morse, P. M., and H. Feshbach, Methods of Theoretical Physics. New York: McGraw-Hill (1953). Chapter 5 includes a description of several different coordinate systems. Note that Morse and Feshbach are not above using left-handed coordinate systems even for Cartesian coordinates. Elsewhere in this excellent (and difficult) book are many examples of the use of the various coordinate systems in solving physical problems. Chapter 7 is a particularly detailed, complete discussion of Green's functions from the point of view of mathematical physics. Note, however, that Morse and Feshbach frequently choose a source of 4πδ(r − r′) in place of our δ(r − r′). Considerable attention is devoted to bounded regions.

Murphy, G. M., Ordinary Differential Equations and Their Solutions. Princeton, NJ: Van Nostrand (1960). A thorough, relatively readable treatment of ordinary differential equations, both linear and nonlinear.

Press, W. H., B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes, 2nd ed. Cambridge, UK: Cambridge University Press (1992).

Ralston, A., and H. Wilf, eds., Mathematical Methods for Digital Computers. New York: Wiley (1960).

Ritger, P. D., and N. J. Rose, Differential Equations with Applications. New York: McGraw-Hill (1968).

Stakgold, I., Green’s Functions and Boundary Value Problems, 2nd ed. New York: Wiley (1997).

Stoer, J., and R. Bulirsch, Introduction to Numerical Analysis. New York: Springer-Verlag (1992).

Stroud, A. H., Numerical Quadrature and Solution of Ordinary Differential Equations, Applied Mathematics Series, Vol. 10. New York: Springer-Verlag (1974). A balanced, readable, and very helpful discussion of various methods of integrating differential equations. Stroud is familiar with the work in this field and provides numerous references.

Sturm–Liouville Theory — Orthogonal Functions

In the preceding chapter we developed two linearly independent solutions of the second-order linear homogeneous differential equation and proved that no third, linearly independent solution existed. In this chapter the emphasis shifts from solving the differential equation to developing and understanding general properties of the solutions. There is a close analogy between the concepts in this chapter and those of linear algebra in Chapter 3. Functions here play the role of vectors there, and linear operators that of matrices in Chapter 3. The diagonalization of a real symmetric matrix in Chapter 3 corresponds here to the solution of an ODE defined by a self-adjoint operator L in terms of its eigenfunctions, which are the "continuous" analog of the eigenvectors in Chapter 3. Examples for the corresponding analogy between Hermitian matrices and Hermitian operators are Hamiltonians in quantum mechanics and their energy eigenfunctions.

In Section 10.1 the concepts of self-adjoint operator, eigenfunction, eigenvalue, and Hermitian operator are presented. The concept of adjoint operator, given first in terms of differential equations, is then redefined in accordance with usage in quantum mechanics, where eigenfunctions take complex values. The vital properties of reality of eigenvalues and orthogonality of eigenfunctions are derived in Section 10.2. In Section 10.3 we discuss the Gram–Schmidt procedure for systematically constructing sets of orthogonal functions. Finally, the general property of the completeness of a set of eigenfunctions is explored in Section 10.4, and Green's functions from Chapter 9 are continued in Section 10.5.


