
Differential Equations Linear, Nonlinear, Ordinary, Partial

When mathematical modelling is used to describe physical, biological or chemical phenomena, one of the most common results of the modelling process is a system of ordinary or partial differential equations. Finding and interpreting the solutions of these differential equations is therefore a central part of applied mathematics, and a thorough understanding of differential equations is essential for any applied mathematician. The aim of this book is to develop the required skills on the part of the reader.

The authors focus on the business of constructing solutions analytically and interpreting their meaning, although they do use rigorous analysis where needed. The reader is assumed to have some basic knowledge of linear, constant coefficient ordinary differential equations, real analysis and linear algebra. The book will thus appeal to undergraduates in mathematics, but would also be of use to physicists and engineers. MATLAB is used extensively to illustrate the material. There are many worked examples based on interesting real-world problems. A large selection of exercises is provided, including several lengthier projects, some of which involve the use of MATLAB. The coverage is broad, ranging from basic second-order ODEs including the method of Frobenius, Sturm–Liouville theory, Fourier and Laplace transforms, and existence and uniqueness, through to techniques for nonlinear differential equations including phase plane methods, bifurcation theory and chaos, asymptotic methods, and control theory. This broad coverage, the authors' clear presentation and the fact that the book has been thoroughly class-tested will increase its appeal to undergraduates at each stage of their studies.


Differential Equations
Linear, Nonlinear, Ordinary, Partial

A. C. King, J. Billingham and S. R. Otto


Cambridge University Press
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo

The Edinburgh Building, Cambridge, United Kingdom

First published in print format

Information on this title: www.cambridge.org/9780521816588

This book is in copyright. Subject to statutory exception and to the provision of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

- ---

- ---

- ---

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this book, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

Published in the United States of America by Cambridge University Press, New York

www.cambridge.org


1 Variable Coefficient, Second Order, Linear, Ordinary

1.2 The Method of Variation of Parameters 7

1.3 Solution by Power Series: The Method of Frobenius 11

2.1 Definition of the Legendre Polynomials, P_n(x) 3

2.3 Differential and Recurrence Relations Between Legendre

2.5 Orthogonality of the Legendre Polynomials 41

2.6 Physical Applications of the Legendre Polynomials 44

3.1 The Gamma Function and the Pochhammer Symbol 58

3.2 Series Solutions of Bessel’s Equation 60

3.3 The Generating Function for J_n(x), n an integer 64

3.4 Differential and Recurrence Relations Between Bessel Functions 69

3.6 Orthogonality of the Bessel Functions 71

3.7 Inhomogeneous Terms in Bessel’s Equation 77

3.8 Solutions Expressible as Bessel Functions 79

3.9 Physical Applications of the Bessel Functions 80

4 Boundary Value Problems, Green’s Functions and

4.1 Inhomogeneous Linear Boundary Value Problems 96

4.2 The Solution of Boundary Value Problems by Eigenfunction


5.4 Solution of Laplace’s Equation Using Fourier Transforms 143

5.5 Generalization to Higher Dimensions 145

6.2 Properties of the Laplace Transform 154

6.3 The Solution of Ordinary Differential Equations using Laplace

6.4 The Inversion Formula for Laplace Transforms 162

7 Classification, Properties and Complex Variable Methods for

7.1 Classification and Properties of Linear, Second Order Partial Differential Equations in Two Independent Variables 175

7.2 Complex Variable Methods for Solving Laplace’s Equation 186

Part Two: Nonlinear Equations and Advanced Techniques 201

8 Existence, Uniqueness, Continuity and Comparison of

9.2 First Order Autonomous Nonlinear Ordinary Differential

10.4 Integration of a First Order Equation with a Known Group


10.5 Towards the Systematic Determination of Groups Under Which a First Order Equation is Invariant 265

10.6 Invariants for Second Order Differential Equations 266

11.2 The Asymptotic Evaluation of Integrals 280

12.1 An Instructive Analogy: Algebraic Equations 303

13.1 Zero Eigenvalues and the Centre Manifold Theorem 372

14.4 Examples of Second Order Control Problems 426

14.5 Properties of the Controllable Set 429

14.7 The Time-Optimal Maximum Principle (TOMP) 436


of any applied mathematician, and this book is aimed at building up skills in this area. For similar reasons, the book should also be of use to mathematically-inclined physicists and engineers.

Although the importance of studying differential equations is not generally in question, exactly how the theory of differential equations should be taught, and what aspects should be emphasized, is more controversial. In our experience, textbooks on differential equations usually fall into one of two categories. Firstly, there is the type of textbook that emphasizes the importance of abstract mathematical results, proving each of its theorems with full mathematical rigour. Such textbooks are usually aimed at graduate students, and are inappropriate for the average undergraduate. Secondly, there is the type of textbook that shows the student how to construct solutions of differential equations, with particular emphasis on algorithmic methods. These textbooks often tackle only linear equations, and have no pretension to mathematical rigour. However, they are usually well-stocked with interesting examples, and often include sections on numerical solution methods.

In this textbook, we steer a course between these two extremes, starting at the level of preparedness of a typical, but well-motivated, second year undergraduate at a British university. As such, the book begins in an unsophisticated style with the clear objective of obtaining quantitative results for a particular linear ordinary differential equation. The text is, however, written in a progressive manner, with the aim of developing a deeper understanding of ordinary and partial differential equations, including conditions for the existence and uniqueness of solutions, solutions by group theoretical and asymptotic methods, the basic ideas of control theory, and nonlinear systems, including bifurcation theory and chaos. The emphasis of the book is on analytical and asymptotic solution methods. However, where appropriate, we have supplemented the text by including numerical solutions and graphs produced using MATLAB†, version 6. We assume some knowledge of

† MATLAB is a registered trademark of The MathWorks, Inc.


MATLAB (summarized in Appendix 7), but explain any nontrivial aspects as they arise. Where mathematical rigour is required, we have presented the appropriate analysis, on the basis that the student has taken first courses in analysis and linear algebra. We have, however, avoided any functional analysis. Most of the material in the book has been taught by us in courses for undergraduates at the University of Birmingham. This has given us some insight into what students find difficult, and, as a consequence, what needs to be emphasized and reiterated.

The book is divided into two parts. In the first of these, we tackle linear differential equations. The first three chapters are concerned with variable coefficient, linear, second order ordinary differential equations, emphasizing the methods of reduction of order and variation of parameters, and series solution by the method of Frobenius. In particular, we discuss Legendre functions (Chapter 2) and Bessel functions (Chapter 3) in detail, and motivate this by giving examples of how they arise in real modelling problems. These examples lead to partial differential equations, and we use separation of variables to obtain Legendre's and Bessel's equations. In Chapter 4, the emphasis is on boundary value problems, and we show how these differ from initial value problems. We introduce Sturm–Liouville theory in this chapter, and prove various results on eigenvalue problems. The next two chapters of the first part of the book are concerned with Fourier series, and Fourier and Laplace transforms. We discuss in detail the convergence of Fourier series, since the analysis involved is far more straightforward than that associated with other basis functions. Our approach to Fourier transforms involves a short introduction to the theory of generalized functions. The advantage of this approach is that a discussion of what types of function possess a Fourier transform is straightforward, since all generalized functions possess a Fourier transform. We show how Fourier transforms can be used to construct the free space Green's function for both ordinary and partial differential equations. We also use Fourier transforms to derive the solutions of the Dirichlet and Neumann problems for Laplace's equation. Our discussion of the Laplace transform includes an outline proof of the inversion theorem, and several examples of physical problems, for example involving diffusion, that can be solved by this method. In Chapter 7 we discuss the classification of linear, second order partial differential equations, emphasizing the reasons why the canonical examples of elliptic, parabolic and hyperbolic equations, namely Laplace's equation, the diffusion equation and the wave equation, have the properties that they do. We also consider complex variable methods for solving Laplace's equation, emphasizing their application to problems in fluid mechanics.

The second part of the book is concerned with nonlinear problems and more advanced techniques. Although we have used a lot of the material in Chapters 9 and 14 (phase plane techniques and control theory) in a course for second year undergraduates, the bulk of the material here is aimed at third year students. We begin in Chapter 8 with a brief introduction to the rigorous analysis of ordinary differential equations. Here the emphasis is on existence, uniqueness and comparison theorems. In Chapter 9 we introduce the phase plane and its associated techniques. This is the first of three chapters (the others being Chapters 13 and 15) that form an introduction to the theory of nonlinear ordinary differential equations,


often known as dynamical systems. In Chapter 10, we show how the ideas of group theory can be used to find exact solutions of ordinary and partial differential equations. In Chapters 11 and 12 we discuss the theory and practice of asymptotic analysis. After discussing the basic ideas at the beginning of Chapter 11, we move on to study the three most important techniques for the asymptotic evaluation of integrals: Laplace's method, the method of stationary phase and the method of steepest descents. Chapter 12 is devoted to the asymptotic solution of differential equations, and we introduce the method of matched asymptotic expansions, and the associated idea of asymptotic matching, the method of multiple scales, including Kuzmak's method for analysing the slow damping of nonlinear oscillators, and the WKB expansion. We illustrate each of these methods with a wide variety of examples, for both nonlinear ordinary differential equations and partial differential equations. In Chapter 13 we cover the centre manifold theorem, Lyapunov functions and an introduction to bifurcation theory. Chapter 14 is about time-optimal control theory in the phase plane, and includes a discussion of the controllability matrix and the time-optimal maximum principle for second order linear systems of ordinary differential equations. Chapter 15 is on chaotic systems, and, after some illustrative examples, emphasizes the theory of homoclinic tangles and Mel'nikov theory.

There is a set of exercises at the end of each chapter. Harder exercises are marked with a star, and many chapters include a project, which is rather longer than the average exercise, and whose solution involves searches in the library or on the Internet, and deeper study. Bona fide teachers and instructors can obtain full worked solutions to many of the exercises by emailing solutions@cambridge.org.

In order to follow many of the ideas and calculations that we describe in this book, and to fully appreciate the more advanced material, the reader may need to acquire (or refresh) some basic skills. These are covered in the appendices, and fall into six basic areas: linear algebra, continuity and differentiability, power series, sequences and series of functions, ordinary differential equations and complex variables.

We would like to thank our friends and colleagues, Adam Burbidge (Nestlé Research Centre, Lausanne), Norrie Everitt (Birmingham), Chris Good (Birmingham), Ray Jones (Birmingham), John King (Nottingham), Dave Needham (Reading), Nigel Scott (East Anglia) and Warren Smith (Birmingham), who read and commented on one or more chapters of the book before it was published. Any nonsense remaining is, of course, our fault and not theirs.

ACK, JB and SRO, Birmingham 2002


Part One

Linear Equations


CHAPTER ONE

Variable Coefficient, Second Order, Linear, Ordinary Differential Equations

Many physical, chemical and biological systems can be described using mathematical models. Once the model is formulated, we usually need to solve a differential equation in order to predict and quantify the features of the system being modelled. As a precursor to this, we consider linear, second order ordinary differential equations of the form

P(x)\frac{d^2y}{dx^2} + Q(x)\frac{dy}{dx} + R(x)y = F(x),

with P(x), Q(x) and R(x) finite polynomials that contain no common factor. This equation is inhomogeneous and has variable coefficients. The form of these polynomials varies according to the underlying physical problem that we are studying. However, we will postpone any discussion of the physical origin of such equations until we have considered some classical mathematical models in Chapters 2 and 3.

After dividing through by P(x), we obtain the more convenient, equivalent form,

\frac{d^2y}{dx^2} + a_1(x)\frac{dy}{dx} + a_0(x)y = f(x).   (1.1)

This process is mathematically legitimate, provided that P(x) ≠ 0. If P(x_0) = 0 at some point x = x_0, it is not legitimate, and we call x_0 a singular point of the equation. If P(x_0) ≠ 0, x_0 is a regular or ordinary point of the equation. If P(x) ≠ 0 for all points x in the interval where we want to solve the equation, we say that the equation is nonsingular, or regular, on the interval.

We usually need to solve (1.1) subject to either initial conditions of the form

y(a) = α, y'(a) = β, or boundary conditions, of which y(a) = α and y(b) = β are typical examples. It is worth reminding ourselves that, given the ordinary differential equation and initial conditions (an initial value problem), the objective is to determine the solution for other values of x, typically x > a, as illustrated in Figure 1.1. As an example, consider a projectile. The initial conditions are the position of the projectile and the speed and angle to the horizontal at which it is fired. We then want to know the path of the projectile, given these initial conditions. For initial value problems of this form, it is possible to show that:

(i) If a1(x), a0(x) and f (x) are continuous on some open interval I that contains the initial point a, a unique solution of the initial value problem exists on the interval I, as we shall demonstrate in Chapter 8.


Fig. 1.1. An initial value problem: given y(a) = α and y'(a) = β, calculate the solution for x > a.

(ii) The structure of the solution of the initial value problem is of the form

y = Au_1(x) + Bu_2(x) + a particular integral,

where A and B are constants that are fixed by the initial conditions and u_1(x) and u_2(x) are linearly independent solutions of the corresponding homogeneous problem y'' + a_1(x)y' + a_0(x)y = 0.

These results can be proved rigorously, but nonconstructively, by studying the operator

L ≡ \frac{d^2}{dx^2} + a_1(x)\frac{d}{dx} + a_0(x).

The two linearly independent solutions of the homogeneous problem form a basis of the null space of L. This subspace is completely determined once its basis is known. The solution of the inhomogeneous problem, Ly = f, is then given formally as y = L^{-1}f. Unfortunately, if we actually want to construct the solution of a particular equation, there is a lot more work to do.

Before we try to construct the general solution of the inhomogeneous initial value problem, we will outline a series of subproblems that are more tractable.
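Since the book supplements its analysis with MATLAB throughout, a short numerical illustration of an initial value problem for (1.1) may be useful at this point. The sketch below is not taken from the book; the coefficients a1(x) = 1/x, a0(x) = 1, f(x) = 0, the interval [1, 10] and the initial data are illustrative assumptions only.

    % Illustrative only (not from the book): solve y'' + a1(x) y' + a0(x) y = f(x)
    % numerically as a first order system for Y = [y; y'].
    a1 = @(x) 1./x;      % assumed coefficient a1(x)
    a0 = @(x) 1 + 0*x;   % assumed coefficient a0(x)
    f  = @(x) 0*x;       % assumed forcing f(x)
    rhs = @(x, Y) [Y(2); f(x) - a1(x)*Y(2) - a0(x)*Y(1)];
    alpha = 1; beta = 0;                       % y(a) = alpha, y'(a) = beta
    [x, Y] = ode45(rhs, [1 10], [alpha; beta]);
    plot(x, Y(:,1)), xlabel('x'), ylabel('y(x)')

With these particular choices the equation happens to be Bessel's equation of order zero, but any smooth a1, a0 and f could be substituted in the same way.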


1.1 The Method of Reduction of Order

As a first simplification we discuss the solution of the homogeneous differential equation

y'' + a_1(x)y' + a_0(x)y = 0.   (1.2)


We can recognize c̃u_1(x) as the part of the complementary function that we knew to start with, and the remaining integral term as the second part of the complementary function. This result is called the reduction of order formula.
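The displayed formula itself has not survived in this copy. In the notation of (1.1) and (1.2), and assuming u_1(x) is the known solution, the standard reduction of order result, presumably the content of the book's equation (1.3), is

u_2(x) = u_1(x)\int^x \frac{1}{\{u_1(t)\}^2}\exp\left(-\int^t a_1(s)\,ds\right)dt.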

This gives the second solution of (1.2) as

u_2(x) = x\int^x \left(\frac{1}{t^2} + \frac{1}{2(1 + t)} + \frac{1}{2(1 - t)}\right)dt.
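The remaining working is missing from this copy. Assuming u_1(x) = x (the factor multiplying the integral), carrying out the integration gives, up to an arbitrary multiple of u_1,

u_2(x) = x\left\{-\frac{1}{x} + \frac{1}{2}\ln\left(\frac{1 + x}{1 - x}\right)\right\} = \frac{x}{2}\ln\left(\frac{1 + x}{1 - x}\right) - 1,

which is, up to normalization, the Legendre function usually denoted Q_1(x).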


1.2 The Method of Variation of Parameters

Let's now consider how to find the particular integral given the complementary function, comprising u_1(x) and u_2(x). As the name of this technique suggests, we take the constants in the complementary function to be variable, and assume that


is called the Wronskian. These expressions can be integrated to give the two unknown functions, and hence the particular integral. The method of variation of parameters has a close connection with the theory of continuous groups, which we will investigate in Chapter 10.
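The displayed expressions are missing here. For reference, the standard variation of parameters construction in the notation of (1.1) assumes y = c_1(x)u_1(x) + c_2(x)u_2(x) and, after imposing the usual constraint c_1'u_1 + c_2'u_2 = 0, integrates to

c_1(x) = -\int^x \frac{u_2(t)f(t)}{W(t)}\,dt, \qquad c_2(x) = \int^x \frac{u_1(t)f(t)}{W(t)}\,dt, \qquad W = u_1u_2' - u_2u_1'.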


1.2.1 The Wronskian

Before we carry on, let's pause to discuss some further properties of the Wronskian.

Recall that if V is a vector space over ℝ, then two elements v_1, v_2 ∈ V are linearly dependent if ∃ α_1, α_2 ∈ ℝ, with α_1 and α_2 not both zero, such that α_1v_1 + α_2v_2 = 0.

Now let V = C¹(a, b) be the set of once-differentiable functions over the interval a < x < b. If u_1, u_2 ∈ C¹(a, b) are linearly dependent, ∃ α_1, α_2 ∈ ℝ such that α_1u_1(x) + α_2u_2(x) = 0 ∀x ∈ (a, b). Notice that, by direct differentiation, this also

In other words, the Wronskian of two linearly dependent functions is identically zero on (a, b). The contrapositive of this result is that if W ≢ 0 on (a, b), then u_1 and u_2 are linearly independent on (a, b).

The functions u_1(x) = f(x) and u_2(x) = kf(x), with k a constant, are linearly dependent on any interval, since their Wronskian is

W = u_1u_2' - u_2u_1' = kf(x)f'(x) - kf(x)f'(x) = 0.


This first order differential equation has solution

W(x) = W(x_0)\exp\left(-\int_{x_0}^{x} a_1(t)\,dt\right),

which is known as Abel's formula. This gives us an easy way of finding the Wronskian of the solutions of any second order differential equation without having to construct the solutions themselves.

For example, when a_1(x) = 1/x this gives

W(x) = \frac{x_0W(x_0)}{x} = \frac{A}{x}

for some constant A. To find this constant, it is usually necessary to know more about the solutions u_1(x) and u_2(x). We will describe a technique for doing this in Section 1.3.

We end this section with a couple of useful theorems.

Theorem 1.1 If u_1 and u_2 are linearly independent solutions of the homogeneous, nonsingular ordinary differential equation (1.2), then the Wronskian is either strictly positive or strictly negative.

Proof From Abel's formula, and since the exponential function does not change sign, the Wronskian is identically positive, identically negative or identically zero. We just need to exclude the possibility that W is ever zero. Suppose that W(x_1) = 0 at some point x = x_1. Then the two columns of the Wronskian determinant are proportional there, and hence u_1(x_1) = ku_2(x_1) and u_1'(x_1) = ku_2'(x_1) for some constant k. The function u(x) = u_1(x) − ku_2(x) is also a solution of (1.2) by linearity, and satisfies the initial conditions u(x_1) = 0, u'(x_1) = 0. Since (1.2) has a unique solution, the obvious solution, u ≡ 0, is the only solution. This means that u_1 ≡ ku_2. Hence u_1 and u_2 are linearly dependent – a contradiction.

The nonsingularity of the differential equation is crucial here. If we consider the equation x²y'' − 2xy' + 2y = 0, which has u_1(x) = x² and u_2(x) = x as its linearly independent solutions, the Wronskian is −x², which vanishes at x = 0. This is because the coefficient of y'' also vanishes at x = 0.
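As a quick check of this claim (not in the original text), with u_1 = x² and u_2 = x,

W = u_1u_2' - u_2u_1' = x^2\cdot 1 - x\cdot 2x = -x^2,

which indeed vanishes at x = 0, even though u_1 and u_2 are linearly independent on any interval containing the origin.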

Theorem 1.2 (The Sturm separation theorem) If u_1(x) and u_2(x) are the linearly independent solutions of a nonsingular, homogeneous equation, (1.2), then the zeros of u_1(x) and u_2(x) occur alternately. In other words, successive zeros of u_1(x) are separated by successive zeros of u_2(x) and vice versa.

Proof Suppose that x_1 and x_2 are successive zeros of u_2(x), so that W(x_i) = u_1(x_i)u_2'(x_i) for i = 1 or 2. We also know that W(x) is of one sign on [x_1, x_2], since u_1(x) and u_2(x) are linearly independent. This means that u_1(x_i) and u_2'(x_i) are nonzero. Now if u_2'(x_1) is positive then u_2'(x_2) is negative (or vice versa), since u_2(x_2) is zero. Since the Wronskian cannot change sign between x_1 and x_2, u_1(x) must change sign, and hence u_1 has a zero in [x_1, x_2], as we claimed.

As an example of this, consider the equation y'' + ω²y = 0, which has solution y = A sin ωx + B cos ωx. If we consider any two of the zeros of sin ωx, it is immediately clear that cos ωx has a zero between them.
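Concretely (a detail not spelled out in the surviving text), the zeros are

\sin\omega x = 0 \text{ at } x = \frac{n\pi}{\omega}, \qquad \cos\omega x = 0 \text{ at } x = \frac{(n + \tfrac{1}{2})\pi}{\omega}, \qquad n \in \mathbb{Z},

so between any two successive zeros of one function there lies exactly one zero of the other, as the Sturm separation theorem requires.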

1.3 Solution by Power Series: The Method of Frobenius

Up to this point, we have considered ordinary differential equations for which we know at least one solution of the homogeneous problem. From this we have seen that we can easily construct the second independent solution and, in the inhomogeneous case, the particular integral. We now turn our attention to the more difficult case, in which we cannot determine a solution of the homogeneous problem by inspection. We must devise a method that is capable of solving variable coefficient ordinary differential equations in general. As we noted at the start of the chapter, we will restrict our attention to the case where the variable coefficients are simple polynomials. This suggests that we can look for a solution of the form

y = x^c\sum_{n=0}^{\infty} a_nx^n = \sum_{n=0}^{\infty} a_nx^{n+c},   (1.8)

where the constants c, a_0, a_1, ..., are as yet undetermined. This is known as the method of Frobenius. Later on, we will give some idea of why and when this method can be used. For the moment, we will just try to make it work. We proceed by example, with the simplest case first.

1.3.1 The Roots of the Indicial Equation Differ by an Integer

Consider the equation

x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} + \left(x^2 - \frac{1}{4}\right)y = 0.   (1.11)

Trang 25

12 VARIABLE COEFFICIENT, SECOND ORDER DIFFERENTIAL EQUATIONS

We substitute (1.8) to (1.10) into (1.11), which gives (1.12). The two summations in (1.12) begin at the same power of x, namely x^{2+c}. If we let m = n + 2 in the last summation (notice that if n = 0 then m = 2, and n = ∞ implies that m = ∞), (1.12) becomes


Since the last two summations involve identical powers of x, we can combine them, giving (1.13). Since (1.13) must hold for all values of x, the coefficient of each power of x must be zero. The coefficient of x^c is therefore

a_0\left(c^2 - \frac{1}{4}\right) = 0.

Up to this point, most Frobenius analysis is very similar. It is here that the different structures come into play. If we were to use the solution a_0 = 0, the series (1.8) would have a_1x^{c+1} as its first term. This is just equivalent to increasing c by 1. We therefore assume that a_0 ≠ 0, which means that c must satisfy c² − 1/4 = 0. This is called the indicial equation, and implies that c = ±1/2. Now, progressing to the next term, proportional to x^{c+1}, we find that

a_1\left\{(c + 1)^2 - \frac{1}{4}\right\} = 0.

Choosing c = 1/2 gives a_1 = 0, and, if we were to do this, we would find that we had constructed a solution with one arbitrary constant. However, if we choose c = −1/2, a_1 is left arbitrary, and the coefficient of x^{m+c} gives a_m\,m(m − 1) + a_{m−2} = 0 for m ≥ 2.

This is called a recurrence relation. We solve it by observation as follows. We start by rearranging to give

a_m = -\frac{a_{m-2}}{m(m - 1)}.


A pattern is emerging here and we propose that

a_{2n+1} = \frac{(-1)^n a_1}{(2n + 1)!} \quad \text{for } n = 0, 1, 2, \ldots,   (1.15)

and similarly for the even coefficients in terms of a_0, since we can recognize the Taylor series expansions for sine and cosine.
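Presumably, then, with the factor x^c = x^{−1/2} restored, the sine and cosine series combine into the general solution

y = x^{-1/2}\left(a_0\cos x + a_1\sin x\right),

with a_0 and a_1 playing the role of the two arbitrary constants.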

This particular differential equation is an example of the use of the method of Frobenius, formalized by

Frobenius General Rule I

If the indicial equation has two distinct roots, c = α, β (α < β), whose difference is an integer, and one of the coefficients of x^k becomes indeterminate on putting c = α, both solutions can be generated by putting c = α in the recurrence relation.

† In the usual way, we must show that (1.15) is true for n = 0 and that, when the value of a_{2n+1} is substituted into the recurrence relation, we obtain a_{2(n+1)+1}, as given by substituting n + 1 for n in (1.15).


In the above example the indicial equation was c² − 1/4 = 0, which has solutions c = ±1/2. When we choose the lower of the two values (c = −1/2) this expression does not give us any information about the constant a_1; in other words, a_1 is indeterminate.

1.3.2 The Roots of the Indicial Equation Differ by a Noninteger

As before, let's assume that the solution can be written as the power series (1.8). As in the previous example, this can be differentiated and substituted into the equation to yield two summations. We now extract the first term from the left hand summation so that both summations start with a term proportional to x^c. This gives

a_0c(2c - 1)x^{c-1} + \sum_{n=1}^{\infty} a_n(n + c)(2n + 2c - 1)x^{n+c-1} + \cdots


As in the previous example we can now consider the coefficients of successive powers of x. We start with the coefficient of x^{c−1}, which gives the indicial equation, a_0c(2c − 1) = 0. Since a_0 ≠ 0, this implies that c = 0 or c = 1/2. Notice that these roots do not differ by an integer. The general term in the summation shows that

a_n = a_{n-1}\left(\frac{2n^2 - 5n}{n(2n - 1)}\right).


where we have used the expression for a_1 in terms of a_0. Now progressing to n = 3, we have

a_3 = a_2\left(\frac{3}{15}\right).

A simple MATLAB† function that evaluates (1.18) is
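The listing itself is not reproduced in this copy. A sketch of such a function, assuming the recurrence reconstructed above (a_n = a_{n−1}(2n² − 5n)/(n(2n − 1)) with c = 0 and a_0 = 1), and with an illustrative name and argument list, might read:

    % Sketch only: evaluates a truncated Frobenius series y = sum a_n x^(n+c)
    % with c = 0, a_0 = 1 and the recurrence quoted in the text above.
    function y = frobenius_series(x, nterms)
      a = 1;                                 % a_0
      y = a*ones(size(x));                   % running sum, starting from a_0*x^0
      for n = 1:nterms
        a = a*(2*n^2 - 5*n)/(n*(2*n - 1));   % a_n from a_(n-1)
        y = y + a*x.^n;                      % add the a_n x^n term
      end
    end

A plot of the kind shown in Figure 1.2 could then be produced with, for example, x = linspace(0, 1); plot(x, frobenius_series(x, 20)).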

Although we could now use the method of reduction of order, since we have constructed a solution, this would be very complicated. It is easier to consider the second root of the indicial equation.

† See Appendix 7 for a short introduction to MATLAB.


Fig. 1.2. The solution of (1.16) given by (1.18).

For the second root, c = 1/2, the recurrence relation is

a_n = a_{n-1}\left(\frac{n - 2}{n}\right).

We again recall that this relation holds for n ≥ 1 and start with n = 1, which gives a_1 = −a_0. Substituting n = 2 gives a_2 = 0 and, since all successive a_i will be written in terms of a_2, a_i = 0 for i = 2, 3, .... The second solution of the equation is therefore y = Bx^{1/2}(1 − x). We can now use this simple solution in the reduction of order formula, (1.3), to determine an analytical formula for the first solution, (1.18). For example, for 0 ≤ x < 1, we find that (1.18) can be written as

This expression has a logarithmic singularity in its derivative at x = 1, which explains why the radius of convergence of the power series solution (1.18) is |x| < 1.

This differential equation is an example of the second major case of the method

of Frobenius, formalized by


Frobenius General Rule II

If the indicial equation has two distinct roots,

c = α, β (α < β), whose difference is not an

integer, the general solution of the equation is

found by successively substituting c = α then c =

β into the general recurrence relation.

1.3.3 The Roots of the Indicial Equation are Equal

Let's try to determine the two solutions of the differential equation

x\frac{d^2y}{dx^2} + (1 + x)\frac{dy}{dx} + 2y = 0.   (1.19)

Substituting the series (1.8) and its derivatives leads to

a_0c^2x^{c-1} + \sum_{n=1}^{\infty}\left\{a_n(n + c)^2 + a_{n-1}(n + c + 1)\right\}x^{n+c-1} = 0,

where we have combined the two summations. The indicial equation is c² = 0, which has a double root at c = 0. We know that there must be two solutions, but it appears that there is only one available to us. For the moment let's see how far we can get by setting c = 0. The recurrence relation is then

a_n = -a_{n-1}\,\frac{n + 1}{n^2} \quad \text{for } n = 1, 2, \ldots

When n = 1 we find that

a_1 = -a_0\,\frac{2}{1^2},
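The next few coefficients follow directly from this recurrence: a_2 = 3a_0/2, a_3 = −2a_0/3, and in general

a_n = \frac{(-1)^n(n + 1)\,a_0}{n!},

so that the first solution can be summed in closed form as y = a_0(1 − x)e^{−x}.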


This solution is one that we could not have readily determined simply by inspection. We could now use the method of reduction of order to find the second solution, but we will proceed with the method of Frobenius so that we can see how it works in this case.

Consider (1.19) with the series (1.8) substituted in and written out term by term. The best we can do at this stage is to set a_n(n + c)² + a_{n−1}(n + c + 1) = 0 for n ≥ 1, as this gets rid of most of the terms. This gives us a_n as a function of c for n ≥ 1, and leaves us with the single remaining term, a_0c²x^{c−1}. Let's now take a partial derivative with respect to c, where we regard y as a function of both x and c, making use of

\frac{\partial}{\partial c}\left(x^c\right) = x^c\ln x.


Notice that we have used the fact that a_0 is independent of c. We need to be careful when evaluating the right hand side of this expression. Differentiating using the product rule we have

Notice that this procedure only works because (1.20) has a repeated root at c = 0.

We conclude that

\left.\frac{\partial y}{\partial c}\right|_{c=0}

is a second solution of our ordinary differential equation.

To construct this solution, we differentiate the power series (1.8) (carefully!) to give

using a similar technique as before to deal with the differentiation of x^c with respect to c. Note that, although a_0 is not a function of c, the other coefficients are. Putting


Starting with n = 1 we find that

a_1 = \frac{-a_0(c + 2)}{(c + 1)^2},

whilst for n = 2,

a_2 = \frac{-a_1(c + 3)}{(c + 2)^2},

and substituting for a_1 in terms of a_0 gives us

which we can write as

a_n = (-1)^n a_0\,\frac{\prod_{j=1}^{n}(c + j + 1)}{\prod_{j=1}^{n}(c + j)^2}.

Differentiating a_n with respect to c, it is convenient to work with

\frac{1}{a_n}\frac{da_n}{dc} = \sum_{j=1}^{n}\left(\frac{1}{c + j + 1} - \frac{2}{c + j}\right),

and, when c = 0, the products reduce to factorials, since \prod_{j=1}^{n} j = n!.


This methodology is formalized in

Frobenius General Rule III

If the indicial equation has a double root, c = α, one solution is obtained by putting c = α into the recurrence relation; the second solution is then obtained by differentiating the resulting y(x, c) with respect to c and setting c = α.

The indicial equation has roots c = ±1, and, by choosing either of these, we find that a_1 = 0. If we now look for the general solution of


However, if c = −1, the coefficients a_{2n} for n = 1, 2, ... are singular.

In order to find the structure of the second linearly independent solution, we use the reduction of order formula, (1.3). Substituting for u_1(x) gives

where v(x) = 1 + b_2x² + b_4x⁴ + ⋯. If we assume a solution structure of this form and substitute it into (1.22), it is straightforward to pick off the coefficients b_{2n}. Finally, note that we showed in Section 1.2.1 that the Wronskian of (1.22) is W = A/x for some constant A. Now, since we know that u_1 = x + ⋯ and u_2 = −1/(2x) + ⋯, we must have W = x(1/(2x²)) + 1/(2x) + ⋯ = 1/x + ⋯, and hence A = 1.

1.3.4 Singular Points of Differential Equations

In this section, we give some definitions and a statement of a theorem that tells us when the method of Frobenius can be used, and for what values of x the infinite series will converge. We consider a second order, variable coefficient differential equation of the form

P(x)\frac{d^2y}{dx^2} + Q(x)\frac{dy}{dx} + R(x)y = 0.

Before we proceed, we need to further refine our ideas about singular points. If x_0 is a singular point and (x − x_0)Q(x)/P(x) and (x − x_0)²R(x)/P(x) have convergent Taylor series expansions about x_0, then x = x_0 is called a regular singular point; otherwise, x_0 is called an irregular singular point.

There are singular points where x² − x = 0, at x = 0 and x = 1. Let's start by looking at the singular point x = 0. Consider the expression

Upon expanding (1 − x)^{−1} using the binomial expansion we have

This power series is convergent provided |x| < 1 (by considering the binomial expansion used above). Now

Expanding in powers of (1 − x) using the binomial expansion gives


which is a power series in (x − 1) that is convergent provided |x − 1| < 1. We also conclude that x = 1 is a regular singular point.

Theorem 1.3 If x_0 is a regular singular point, then there exists a unique series solution in the neighbourhood of x_0, which converges for |x − x_0| < ρ, where ρ is the smaller of the radii of convergence of the series (x − x_0)Q(x)/P(x) and (x − x_0)²R(x)/P(x).

Proof This can be found in Kreider, Kuller, Ostberg and Perkins (1966) and is due to Fuchs. We give an outline of why the result should hold in Section 1.3.5. We are more concerned here with using the result to tell us when a series solution will converge.

Example

Consider the differential equation (1.24). We have already seen that x = 0 is a regular singular point. The radii of convergence of xQ(x)/P(x) and x²R(x)/P(x) are both unity, and hence the series solution \sum_{n=0}^{\infty} a_nx^{n+c} exists, is unique, and will converge for |x| < 1.

1.3.5 An outline proof of Theorem 1.3

We will now give a sketch of why Theorem 1.3 holds. Consider the equation

P(x)y'' + Q(x)y' + R(x)y = 0.

When x = 0 is an ordinary point, assuming that

P(x) = P_0 + xP_1 + ⋯, \quad Q(x) = Q_0 + xQ_1 + ⋯, \quad R(x) = R_0 + xR_1 + ⋯,


we can look for a solution using the method of Frobenius. When the terms are ordered, we find that the first two terms in the expansion are

P_0a_0c(c − 1)x^{c−2} + \{P_0a_1c(c + 1) + P_1a_0c(c − 1) + Q_0a_0c\}x^{c−1} + ⋯ = 0.

The indicial equation, c(c − 1) = 0, has two distinct roots that differ by an integer

and, following Frobenius General Rule I, we can choose c = 0 and find a solution

y'' term in the equation alone. Let's now try to include the y' and y terms as well, by making the assumption that

Q(x) = Q_0 + Q_1x + ⋯, \quad R(x) = \frac{R_{-1}}{x} + R_0 + ⋯.

Then, after substitution of the Frobenius series into the equation, the coefficient of x^{c−1} gives the indicial equation as

P_0c(c − 1) + Q_0c + R_{−1} = 0.

These behaviours of Q(x) and R(x) close to x = 0 are what is required to make it a regular singular point. That the series converges, as claimed by the theorem, is most easily shown using the theory of complex variables; this is done in Section A6.5.

1.3.6 The point at infinity

Our discussion of singular points can be extended to include the point at infinity by defining s = 1/x and considering the properties of the point s = 0. In particular, under this change of variable the equation P(x)y'' + Q(x)y' + R(x)y = 0 becomes

\hat{P}(s)\frac{d^2y}{ds^2} + \hat{Q}(s)\frac{dy}{ds} + \hat{R}(s)y = 0,

where

\hat{P}(s) = s^4P\left(\frac{1}{s}\right), \qquad \hat{Q}(s) = 2s^3P\left(\frac{1}{s}\right) - s^2Q\left(\frac{1}{s}\right), \qquad \hat{R}(s) = R\left(\frac{1}{s}\right).
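As a brief check of these expressions (the derivation does not survive in this copy), with s = 1/x the chain rule gives

\frac{dy}{dx} = -s^2\frac{dy}{ds}, \qquad \frac{d^2y}{dx^2} = s^4\frac{d^2y}{ds^2} + 2s^3\frac{dy}{ds},

and substituting these into P(x)y'' + Q(x)y' + R(x)y = 0 with x = 1/s reproduces \hat{P}, \hat{Q} and \hat{R} as given above.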
